
Reinstating the Explanatory Filter


In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

P.S. Congrats to Denyse O'Leary, whose Post-Darwinist blog tied for third in the science and technology category of the Canadian Blog Awards.

Comments
After reading some responses on other forums, it appears that the anti-IDists (1) do not understand the meaning of INFERENCE concerning science and (2) do not understand that the science of today does not and cannot wait for what the future may or may not reveal. Joseph
Hi Mike: I see your question. I first note that it is to some extent misdirected. For we are not interested in whether the onion's cells [including DNA, enzymes etc.] show more evidence of FSCI than the carrot's, or the converse. Instead, the material point is that BOTH are well beyond the reasonable threshold for being reached by chance forces on the gamut of the observed universe, across any reasonable estimate of its lifespan. What do I mean by that?
1 --> One can often easily enough estimate configuration spaces, and
2 --> can also reasonably identify that a function based on particular states within that space of possible configurations is prone to breakdown on perturbation of the relevant information.
3 --> These are the keys to identifying the search space and the relative size of the island of relevant function in that space.
4 --> FSCI is in the first instance based on finding a reasonable threshold of complexity [i.e. number of configs] that would exhaust the universe's search resources to get to islands of function of reasonable size.
5 --> For practical purposes, when . . .
6 --> config spaces require more than about 500 - 1,000 bits [the latter to take in generous islands of function that leave a lot of room for "climbing" up hills of performance from minimal to optimal by your favourite search algorithm . . .] and
7 --> function is vulnerable to perturbation of the information, THEN . . .
8 --> we are dealing with FSCI.
So, we have a reasonable lower bound for reliably inferring to directed rather than undirected contingency as being responsible for an observed configuration that functions in some context or other. (This is entirely similar to standard hypothesis testing techniques, which work off the principle that, since samples overwhelmingly come from predominant clusters, small target zones are sufficiently unlikely to show up in reasonably sized samples that if we see such results we are entitled to infer to intent, not happenstance, as the most reasonable cause.)

In the case of living systems, the current lower bound on an independent life-form plausible as first life is a genome of about 300,000 - 500,000 G/C/A/T elements (or possibly the RNA equivalent). That is a config space based on 4-state elements, and at the lower end, 4^300,000 ~ 9.94 * 10^180,617. Both carrots and onions would be well beyond that threshold, and it is reasonable to deduce that the basic genome is explained by intelligence, not chance. If you then want to factor in the elaborations to get to the body-plans and peculiarities of the carrot or the onion, you are simply getting into overkill. On evidence, basic body-plans will require 1's to 10's or even 100's of millions of additional DNA G/C/A/T elements. Even the difference between a carrot and an onion would be well beyond the 500 - 1,000 bit threshold. We would reasonably infer that that difference is due to directed contingency, by whatever mechanisms such a designer would use.

As to metrics of FSCI that give numerical values as opposed to threshold judgements, we note that FSCI is a subset of CSI, so the Dembski models and metrics for CSI would apply. For instance, in 2005 he modelled a metric [here using X for chi and p for phi]: X = –log2[10^120·pS(T)·P(T|H)]. Thus we have a framework for supplying the table of CSI values, but to go beyond the threshold-type estimate to that is a far harder exercise, and it would not make any material difference. For instance, post no. 100 is an apparent message that is responsive to the context of this thread, and has in it 403 ASCII characters. 128^403 ~ 1.61 * 10^849, the number of cells in the config space for that length of text. I comfortably infer that this is a message, not lucky noise, per FSCI, as 1,000 bits specifies about 10^301 states. Are you willing to challenge that design inference? On what grounds? GEM of TKI kairosfocus
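A minimal sketch in Python (standard library only) checking the configuration-space figures quoted in the comment above, and expressing the quoted 2005 chi metric in log form to avoid underflow. The genome and message lengths are the comment's own illustrative numbers; the chi inputs (log2_phi_s, log2_p) are hypothetical placeholders, not values from the thread.

```python
import math

def log10_states(states_per_element: int, length: int) -> float:
    """Base-10 log of the size of a configuration space of `length` elements."""
    return length * math.log10(states_per_element)

def sci(log10_value: float) -> str:
    """Render a huge number, given by its base-10 log, as 'm * 10^e'."""
    e = math.floor(log10_value)
    m = 10 ** (log10_value - e)
    return f"{m:.2f} * 10^{e}"

print(sci(log10_states(4, 300_000)))   # 300,000-base genome: ~9.94 * 10^180617
print(sci(log10_states(128, 403)))     # 403-character ASCII message: ~1.61 * 10^849
print(sci(log10_states(2, 1_000)))     # 1,000-bit threshold: ~1.07 * 10^301

# The 2005 metric as quoted above, chi = -log2(10^120 * phiS(T) * P(T|H)),
# computed from base-2 logs of the two factors (hypothetical inputs).
def chi(log2_phi_s: float, log2_p: float) -> float:
    return -(120 * math.log2(10) + log2_phi_s + log2_p)
```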
kairosfocus
That we exemplify intelligent agents and demonstrate on a routine basis that we leave FSCI as characteristic traces of our intelligent designs
Apologies if this has been asked before, but do you have a list of objects and the FSCI contained within them? I'd be interested to see how the figures work out. Do onions have a lot of FSCI due to their unusual genome, for example? More than carrots? MikeKratch
PS: The whole TMLO book by Thaxton et al is available here as a PDF, about 70 MB if memory serves. kairosfocus
Gentlemen: Following up on a few points: 1] Patrick at 95: Links. Excellent links! Thanks. I particularly like the remarks in Mere Creation that explored the contrast between crystals and biopolymer-based systems, with sidelights on Prigogine's work. [BTW, Thaxton et al's TMLO has a very good discussion of Prigogine's work in the online chapters 7 - 9.] 2] Ratzsch example Event & aspect: tumbleweed tumbles through small hole in fence. EF look:
Contingent? Yes. (Also, various mechanical forces are at work: wind, interaction with ground, gravity, but that is not relevant to this aspect.)
Specified? Yes.
Complex in info-storing sense? No. In very low probability sense? No.
Verdict: Chance (+ necessity).
3] 97: People build their little logical boxes based upon preconceptions and attempt to forcefit/mangle everything into it. I don’t want to “build” such a box and call it reality, I want to know what our box called reality really is. Sadly apt. Science, at its best, is an unfettered (but ethically and intellectually responsible) search for the truth about our world, in light of empirical evidence and logical/mathematical analysis. Too often, today, that is being censored in pursuit of the sort of politically correct materialistic agendas I cited from Lewontin at 86 above. In case some may be tempted to think that Lewontin is unrepresentative, I here excerpt from the US NAS's latest [2008] version of their pamphlet against "Creationism":
Definition of Science The use of evidence to construct testable explanations and predictions of natural phenomena, as well as the knowledge generated through this process. [US NAS, 2008]
That sounds fairly innocuous, until you see the immediately preceding context:
In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations . . .
Cue: red flashing lights . . . Sound effects: ERRMRR! EERMRR! ERRMRR! . . . SCREECH! Black-suited, lab-coated, jackbooted (actually, penny loafers are more likely . . . ) "Polizei": "We're the thought police and we're here to help you!" On a more serious note, did it ever occur to the NAS . . .
a --> that we do not only contrast natural/supernatural, but also natural/artificial (i.e. intelligent)?
b --> That we exemplify intelligent agents and demonstrate on a routine basis that we leave FSCI as characteristic traces of our intelligent designs?
c --> That such empirical signs of design allow us to reasonably infer that where we see further instances, we can, on the same confident grounds on which we provisionally accept explanatory laws and chance models, accept that intelligent action is being detected?
d --> That inferring from the sign to the signified, then discussing who the possible candidates are, is a legitimate and empirically anchored, testable process?
e --> That a supernatural, intelligent, cosmos-generating agent is logically possible and that such an agent might just leave behind signs of his action in the structures and operations of the cosmos? [And, in fact, many scientists of the founding era and up to today think and have done their science in the context of accepting that this is so, including classically Newton in the greatest scientific work of all time, Principia.]
f --> That when we join the fine-tuning of the observed cosmos for life as we observe it, to the evident FSCI that pervades the structures of the cell up to the major body plans of life forms, it is not unreasonable to infer that a credible candidate for the author of life is the same author of the cosmos? (Indeed, at least as reasonable as any materialist system of thought.)
g --> That many scientists, past and present (including Nobel Prize winners), have successfully practised science in such a "thinking God's thoughts after him" [Boyle, if memory serves] paradigm, and have obviously not been ill-equipped to so practise science?
h --> That re-opening the vista of scientific explanations to include and accept chance, necessity and intelligence is just that, an opening up to permit unfettered, uncensored, empirically controlled pursuit of the truth, not a closing down?
4] JT, 93: why isn’t an ordinary snowflake in nature complex and specified on the same basis. It seems clearly it is. This has already been answered, more than once. The issue is that we need specification and complexity in the same aspect of the object, event or process. That is what sets up the large config space and the narrow island of functionality. In the naturally occurring snowflake [not my suggested Langley mod for steganographic coding purposes]:
a --> the simple, tight, elegant specification of hexagonal crystalline structure is set by forces of polarisation and the geometry of the H2O molecule. [There are considerations that suggest this molecule looks like an elegant cosmological-level design in itself, as a key to life -- but that's another story.]
b --> this exhibits, by virtue of the dominant forces, low contingency, so it will not store information. [One could in principle store information in artfully placed defects, e.g. similar to a hologram, but then that would be a high contingency that may in future be directed but is, as found, undirected.]
c --> in the case of e.g. dendritic star flakes, the dendrites show high contingency based on the peculiar circumstances of their formation, giving rise to the story that no two snowflakes are exactly alike. Plainly, high contingency, and high information storage potential. But we see no informational patterns, and so infer that the dendritic growths reflect chance acting.
d --> I proposed a technique for storing a bit-string around the star's perimeter using snowflakes, or more realistically computer-manipulated images thereof. The idea was that, like the prongs on a Yale-type lock's key, the dendrites would serve as a long-string coded pattern.
e --> Such would be directed contingency, and would function as a pass-code, i.e. an electronic or software-based key. [We could do a physical form of it, a six-prong update to the Yale lock . . .]
f --> were that to be done, we would at once see that we specify function through a tight island of functionality in a very large configuration space. [BTW, I think that the typical Yale lock has about six pins, with three possible positions. The number of configs is 3^6 = 729, multiplied by the number of slot and angle arrangements on the key's body. That's enough to be pretty secure in your house or car, but it would be a lot fewer than the hypothetical snowflake key or the more relevant DNA and protein cases! [Well, a car has a two-sided Yale key, or 531,441 basic tumbler positions; though typically they just do a symmetrical key (thus tumbling from 1/2 million to an island of less than 1,000). Lock picks allow thieves or locksmiths to "feel" and trigger the pins.]]
5] . . . some completely different ontological category of causation called "Intelligent Design" which some say doesn't even exist. And those who say so thereby fall immediately into self-referential absurdity and selective hyperskepticism, for they themselves are intelligent, are designers and have conscious minds. So, to then turn around and object to the implications of such empirically established phenomena reflects very sadly indeed on the current state of intellectual life in our civilisation at the hands of the evolutionary materialists. As has already been pointed out. Details here. ______________ At this stage, the ball is plainly in JT's court. G'day GEM of TKI kairosfocus
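As a rough check of the lock arithmetic in point f of the block quoted above, a short sketch; the pin and position counts are the comment's illustrative figures, not a claim about real locks:

```python
pins, positions = 6, 3
one_sided = positions ** pins      # 3^6 = 729 basic pin combinations
two_sided = one_sided ** 2         # two-sided car key: 729^2 = 531,441
print(one_sided, two_sided)

# By contrast, a hypothetical 1,000-bit "snowflake key" has 2^1000 configurations,
# a 302-digit number -- the ~10^301 threshold figure used for FSCI above.
print(len(str(2 ** 1000)))
```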
Agreed. People build their little logical boxes based upon preconceptions and attempt to forcefit/mangle everything into it. I don't want to "build" such a box and call it reality, I want to know what our box called reality really is. Many of the recent arguments seem to be along these lines: "I do not like the results so I am going to redefine the variables to get the results I desire." Patrick
Patrick, What is occurring is an attempt to rationalize away reality. tribune7
I was curious whether Dembski had ever commented on the Snowflake argument. This is all I could find: Mere Creation Page 12 of No Free Lunch also has a reference to crystals, but it's not on Google, although snowflake examples are. "Refutations" by Darwinists seem typically to consist of mangling the concepts to be whatever they want (aka strawmen). For example:
Ratzsch, Nature, Design, and Science. The example of a false positive produced by the EF given in this book (pp. 166-167) is a case of driving on a desert road whose left side was flanked by a long fence with a single small hole in it. A tumbleweed driven by wind happened to cross the road in front of Ratzsch's car and rolled precisely through the sole tiny hole. The event had an exceedingly small probability and was "specified" in Dembski's sense (exactly as a hit of a bull's-eye by an arrow in Dembski's favorite example). Dembski's EF leads to the conclusion that the event in question (tumbleweed rolling through the hole in the fence) was designed while it obviously was due to chance; this is a false positive.
Where's my "rolleyes" button?
Of course, ID would indicate the drawing to be.
And any digital string encoding the drawing of the log, as well, presuming the encoding method can be found. Patrick
JT--Say on a sheet of paper the bit 1 corresponds to black and 0 represents white. Say someone draws an ordinary snowflake on that paper. Now take each row of the paper and lay them out end to end so you have a million-bit long string of digits. The same thing would be true of a drawing of a rotted log. Are you saying that ID would indicate the log was designed? Of course, ID would indicate the drawing to be. tribune7
OK KF, why isn't an ordinary snowflake in nature complex and specified on the same basis. It seems clearly it is. So if it's CSI, all we can conclude from that is that it's not the result of metaphysical randomness - that's all that the Design Inference can establish. The design inference cannot determine whether the snowflake was caused by A) laws or B) some completely different ontological category of causation called "Intelligent Design" which some say doesn't even exist. You have to decide that on your own. OK, I'll quit monopolizing this thread and see if I can figure out from KF's post and Jerry's what FCSI is all about. JT
And for the record, I generally put "mind" in quotes when referring to the ID concept of it and don't use the term much at all, because of the potential for confusion. JT
JT: Before locking off after doing some major downloads, I decided to come back by UD. Saw your 88.
1] Say someone draws an ordinary snowflake on that paper. Hmm, seems a bit obvious, but we can look at it as a case of known origin, per gedankenexperiment.
2] take each row of the paper and lay them out end to end so you have a million-bit long string of digits. This gives us a 1 Mbit string, bearing a code based on the algorithm: snip at every so many bits, then align. Sort of like the scytale cipher -- wound up on a stick. A 1 Mbit string is complex. Assuming we can "spot" a pattern, and thence see that there is a specifying algorithm, it will be recognisably specified. (Sort of like SETI.) Once we see that there is a functional pattern here -- a picture of a snowflake -- that will give us a basis for inferring that we have a complex string fulfilling a narrow target. Designed. If we cannot spot the pattern we will infer complex but no evident functional pattern, so default to chance. [Though with so simple a case, there will be strong correlations from row to row, so the pattern will be easy enough to spot.] BTW, this is a simplified version of what is alleged to be going on in a recent twist on codes and ciphers: steganography. If one fails to spot the pattern, the EF will default to chance, per its deliberate bias, and will make in this case a false negative. (It is designed to be reliable on a positive ruling [by using so extreme a degree of threshold for ruling complexity], but will cheerfully accept being wrong on the negative ruling.)
3] just say that a random snowflake lands on the paper Maybe this requires either a giant snowflake or a very fine CCD array, so that the flake will block light on some pixels but not others. Then we convert row by row and use the resulting string as a transmitted string. This is rather like how, in the 17th - 18th centuries, I gather, colonial authorities in what would become the US sometimes used a leaf as a design on paper money so that counterfeiting would be impossible. Again, what happens is that the correlations along the bit string will suggest that this is slices of a pic, like a raster scan. (My students in Jamaica loved to hear that term!) The ruling will, on that outcome, be: designed, and it would relate to the composition of the string, not the features of the snowflake -- i.e. is a digital or old-fashioned chemical photograph designed, or a mere product of chance and necessity? GEM of TKI kairosfocus
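A small illustrative sketch of the raster-scan reasoning above: flatten a black-and-white grid row by row into one long bit string, then use adjacent-row similarity as a crude hint that the string is slices of a picture rather than independent noise. The "drawing" here is a made-up filled diamond standing in for a snowflake sketch, purely for illustration.

```python
import random

def flatten(image):
    """Lay the rows of a 0/1 grid end to end into a single bit string."""
    return "".join(str(bit) for row in image for bit in row)

def adjacent_row_similarity(image):
    """Average fraction of matching bits between consecutive rows."""
    scores = [sum(a == b for a, b in zip(r1, r2)) / len(r1)
              for r1, r2 in zip(image, image[1:])]
    return sum(scores) / len(scores)

size = 64
drawing = [[1 if abs(r - size // 2) + abs(c - size // 2) < size // 4 else 0
            for c in range(size)] for r in range(size)]
noise = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]

print(len(flatten(drawing)))              # a 4,096-bit string from a 64 x 64 grid
print(adjacent_row_similarity(drawing))   # close to 1.0: strong row-to-row correlation
print(adjacent_row_similarity(noise))     # near 0.5: what undirected contingency gives
```

The correlation check only models the "spot a pattern" step discussed in the comment; whether spotting such a pattern warrants a design ruling is exactly the point under dispute in this thread.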
7] If an agent's actions are not predictable then his actions equate to RANDOMNESS. Onlookers, this is of course the precise problem that evo mat thinking lands in as it fruitlessly tries to account for the mind on the basis of chance + necessity acting on matter. It ends up in assigning messages to lucky noise acting on essentially mechanistic systems. Thus, JT needs to address the challenge of why he would take seriously the apparent message resulting from a pile of rocks sliding down a hillside and by astonishing luck coming up: "Welcome to Wales," all on the border of Wales. But we are not locked up to such a view. For the first fact of our existence as intelligent creatures is that we are conscious and know that we reason, think, decide and act in ways led by our minds, not random forces. Indeed, that is the assumption that underlies the exchange in this blog -- i.e. self-referential incoherence yet again lurks here for the materialist. For the record - you're entirely missing my point. I don't think the actions of a human being are random. I say that they are not random because they are potentially predictable, and they are predictable because humans operate according to laws, albeit the very, very complex laws embodied in our physical make-up - our brains and so forth. My point was to say that a mind not determined by laws equates to randomness, and that therefore such a view of mind is incoherent. I do not personally think a mind equates to randomness. I say this is what the ID view of mind equates to. Yes, it's obvious that mind is not random, so that must mean the ID view is wrong. So hopefully that clears up that point. JT
I am not sure when the term FCSI first arose on this site, but about a year and a half ago, or maybe it was 2 1/2 years ago, we were going through a bimonthly examination of just what CSI means and having little success. You have poker or bridge hands, coin tosses, voting patterns, sculptures, writing and language, computer programs and DNA. What is the commonality between each of these things? No one was able to provide a definition that would encompass all of them. It seems we could not get specific about specificity.

Then in one of the comments, bfast made the observation that specificity is relevant because the data specifies something. Bfast is a computer programmer, I believe, so the use of code to specify basic instructions to the hardware of a computer is a natural association, and Meyer had frequently made the association of language with CSI. So the distinction of FCSI from CSI became part of the thinking here, and kairosfocus was one of the people making it. However, I don't know if Dembski ever made the same distinction. If he did, then maybe someone has a reference.

FCSI is easy to understand and it makes the ID case very readily, while CSI, a broader and more vague concept, leads to the meaningless sniping against ID that we all witness. As I just said, the sniping is meaningless. Somehow they think that if they can discredit CSI as a scientific concept, they have won the day or won a major battle. This inane thinking is more of an indictment of them than they realize. They constantly need to win small battles to think their world view is correct when they are overwhelmed by the other data, which they dismiss by hand waving.

So we get the anti-ID crowd coming here and picking away at CSI while they rarely ever go after FCSI. I personally suggest we steer any assault against CSI, a vague concept, towards FCSI, a very concrete concept that describes what happens in life. Defending CSI, whatever it is, has not been fruitful and will not be till a clear definition of just what it is becomes available. No one should have to be conversant in obscure mathematics to understand it. If anyone disagrees, then I would be interested in just how they define specificity? jerry
kairosfocus: At this point, I am not in debating mode, just trying to understand your point of view. So while still going over your most recent post, let me throw a scenario at you. Say on a sheet of paper the bit 1 corresponds to black and 0 represents white. Say someone draws an ordinary snowflake on that paper. Now take each row of the paper and lay them out end to end so you have a million-bit long string of digits. It seems that string is complex and specified according to the design inference and could not have happened by chance. Right? ('Yes' or 'No' is OK.) Or just say that a random snowflake lands on the paper and answer the same question. JT
KF, good points about FSCI. I wish Dembski would make more use of the concept. With Patrick jogging my memory, patterns are specified (sorry Mark) but repetitive patterns are not complex, and crystals (snowflakes, stalagmites) are repetitive patterns, hence not complex; so I guess that is what I should have remembered about crystals being specifically addressed by Dembski. Something to consider: Would ID be able to determine if 010101010101010101 repeated to 10^whatever was designed? No, although it very well could be. OTOH, if that code was found to have a function -- something unexpected and useful occurred when we ran it -- then I think all of us would infer design. tribune7
Okay A few remarks on points:
1] JT: when people bring up the snowflake example, it's not to imply that snowflakes are pretty much the same as life and that proves that the forces that created a snowflake can create life as well. Mischaracterisation of the rebuttal. The evolutionary materialist claim, FIRST, is that the snowflake is produced by chance plus necessity, and that this pattern is also able to account for the origin and body-plan level biodiversity of life. In that context, evo mat advocates then raise the onward, even more specious objection:
2] "Here is something that everyone would agree is caused by natural laws. And yet the EF (or the design inference or whatever) would seem to imply that a snowflake is designed by what they call an 'intelligent agent'. This proves that the Design Inference is not reliable." First, the EF has always focussed on objects, situations and aspects thereof that SIMULTANEOUSLY exhibit [a] complexity, and [b] (often, functional) specification. The simultaneous side is important, as the point is that complexity implies a large information storage capacity. "Simple" and/or independent specifiability implies that the observed result is in a small target zone, very hard to reach by chance [practically impossible, per search-resource exhaustion]. But we know that intelligent agents routinely achieve such outcomes, so when we see CSI (or its easier-to-understand subset FSCI) we infer to agents. Indeed, just by inferring that this post is not lucky noise, you are making such an inference. In the case of the snowflake, as I pointed out above at 73, the specification relates to the crystalline structure, and the complexity to the dendrites. Thus they do not constitute a case that has a single aspect that exhibits BOTH storage capacity and tight specification to a target zone in the resulting large config space:
HEX STRUCTURE: Law, so specificity but no room for information-storing high contingency, so not complex.
DENDRITES: Complex but produced by effectively random circumstances -- undirected high contingency, i.e. complex but not simply specific or functional. Chance.
In short, the objection is based on a misunderstanding, or a misconstruing of what is being discussed. 3] DNA by contrast: In my appendix, I observe, for this very reason: DNA exhibits both information storage capacity AND is tightly functionally specified. Thus there is a strong contrast to the snowflake. Observe what happens just after the section you excerpted:
. . . . The tendency to wish to use the snowflake as a claimed counter-example alleged to undermine the coherence of the CSI concept thus plainly reflects a basic confusion between two associated but quite distinct features of this phenomenon: (a) external shape -- driven by random forces and yielding complexity [BTW, this is in theory possibly useful for encoding information, but it is probably impractical!]; and, (b) underlying hexagonal crystalline structure -- driven by mechanical forces and yielding simple, repetitive, predictable order. [This is not useful for encoding at all . . .] Of course, other kinds of naturally formed crystals reflect the same balance of forces and tend to have a simple basic structure with a potentially complex external shape, especially if we have an agglomeration of in effect "sub-crystals" in the overall observed structure. In short, a snowflake is fundamentally a crystal, not an aperiodic and functionally specified information-bearing structure serving as an integral component of an organised, complex information-processing system, such as DNA or protein macromolecules manifestly are.
4] you saying that a snowflake may indeed be complex and specified In fact, as the very snippet you excerpted shows, I am pointing out that in the dendritic case [the other types of snowflake have simple plate or columnar gross structures, thus my "may" . . .] there is a difference between the aspect that the complexity relates to and the one that the specification relates to. I think there is a concept gap at work, as is further brought out by your . . . 5] it seems clear to me that by FSCI you just mean anything associated with life, and are not able to get any more detailed than that. Not at all. First, the term I have used is not original to me or to Dembski or to the design movement. It is a summary description of the substance of what was highlighted by OOL researchers by the 1970's. As Thaxton et al summarised in 1984 [a decade before Dembski]:
Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as could characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
Save in the service of debate! More seriously, it should be plain that I have simply clustered the terms into a phrase: functionally specified complex information, FSCI. Plainly, this term relates to the sort of integrated multi-part function that bio-systems exemplify, and which is based on complex, specified information. Bio-systems "exemplify" -- they do not "exhaust." Indeed, FSCI is a characteristic feature of engineered systems, and even written text that functions as messages under a given code or language. And that is exactly the sort of illustrative example that was used in the 1980's, and which I excerpted in the immediate context of the cite, right after the famous 1973 Orgel quote:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . .
2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides).
3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
This example, from its polymer context, comes from Walter Bradley, author no. 2 of The Mystery of Life's Origin. An acknowledged polymer expert. And he highlights that DNA is a message-bearing, functional, digital string of monomers, by contrast with the tangled "mess" that happens when we, say, polymerise amino acids at "random," and again as opposed to the more orderly pattern shown by nylon. A crystal is of course a 3-d "polymer" structure that exhibits fantastic order. I trust that this will help clarify. 6] 83: A process determined entirely by law can have EXTREMELY complex behavior and extremely difficult to predict behavior. No "process" is "determined entirely by law." As you will note, from the above, I stated a general framework for system dynamics in no. 80 just above:
. . . if a given aspect of a situation or object is produced by law, it is inherently of low contingency: reliably, given Forces F conforming to laws L, and under boundary and intervening conditions C, then result R [up to some noise, N] will result. Cf. case of a falling heavy object.
I cited a simple case to illustrate the point, but the context for this was my background in systems modelling, whereby sets of differential equations show how change happens relative to initial and intervening conditions. Once conditions are the same, onward unfolding will be the same. [The problem with sensitive dependence on initial conditions is precisely that in these cases, through amplification of small differences, we cannot keep the conditions the same from one case to another. (In these cases, we see a higher law showing itself in a pattern: the strange attractor in phase space. That is, this underscores the point.)] And it is precisely the reliability of similarity of outcomes from case to case under similar conditions that is the signature of law. And it is precisely this point that leads to low contingency and to lack of information-storing power. In the case of proposed info systems that use chaotic systems to lead to divergent or convergent outcomes as required to create and detect messages, it is the contingency in the conditions that makes for the information storage capacity.
7] If an agent's actions are not predictable then his actions equate to RANDOMNESS. Onlookers, this is of course the precise problem that evo mat thinking lands in as it fruitlessly tries to account for the mind on the basis of chance + necessity acting on matter. It ends up in assigning messages to lucky noise acting on essentially mechanistic systems. Thus, JT needs to address the challenge of why he would take seriously the apparent message resulting from a pile of rocks sliding down a hillside and by astonishing luck coming up: "Welcome to Wales," all on the border of Wales. But we are not locked up to such a view. For the first fact of our existence as intelligent creatures is that we are conscious and know that we reason, think, decide and act in ways led by our minds, not random forces. Indeed, that is the assumption that underlies the exchange in this blog -- i.e. self-referential incoherence yet again lurks here for the materialist. On the contrary, JT: [a] we explain regularities by mechanical forces that are expressible in laws, [b] we explain UNDIRECTED CONTINGENCY by chance, and [c] we explain DIRECTED CONTINGENCY by design.
8] If you don't know what caused something you can't encode it, and thus can't gauge its probability. First, the case in question is one where we do know the sort of forces that lead to the pattern of dendrites on a snowflake, so this is tangential and maybe distractive. Second, we can observe directly that a given aspect of a situation exhibits high and evidently undirected contingency, up to some distribution. So we can characterise chance without knowing the dynamics that give rise to it, apart from inferring from the distribution to the sort of model that gives rise to it. We do that all the time, even in comparative case studies and in control-treatment experiment designs. Third, we have no commitment to needing to know the universal decoder of information in any and all situations. Once we do recognise that something exhibits high contingency and is functional in a system, we have identified FSCI. From massive experience of the source of FSCI, we can then induce that an agent is at work in this case too, with high confidence.
9] a string's probability is proportional, not to its own length, but to the length of the smallest program-input that could generate it. Yes, and that is precisely an example of a potentially simple specification.
What happens is that WD is saying that MOST long strings are resistant to that sort of reduction, i.e. they are K-incompressible. In effect, to describe and regenerate them, you have to have prior knowledge of the actual string and in effect copy it directly or indirectly. That is, active information on a specified target. Now, such an algorithm -- even at the "hello world" level -- needs to be expressed in a coding system, to be stored in a storage medium and to be physically instantiated through executing machinery. Factor these parts in, and the complexity goes right back up. And that is what we are dealing with when it comes to the origin of life or the body-plan level innovations to get to major forms of life. It is also what we are dealing with when it comes to our Collins universe-baking bread-maker.
10] in the ID conception, intelligence is not a mechanism, not something that can be deconstructed or explained, and there is no consensus that such a thing conceived like that is an explanation for anything. Now, WHO is saying that, again? Is it not a self-aware, self-conscious, intelligent creature who knows that he acts into the empirical world based on intelligence; even to type what was just cited? In short, we do not need to understand what intelligence is or how it arises or how it acts, to KNOW that: --> it is, that --> it acts, and that --> it is a key causal factor in many relevant situations. Indeed, we know THAT --> it leaves behind certain reliable signs of its passage, such as FSCI, CSI and IC! In short, this last is self-referentially incoherent and selectively hyper-skeptical to the point of absurdity, compounded by dismissive contempt. JT, you can do better than that, a lot better. As to the final point, I note simply by citing Lewontin in his 1997 review of Sagan's last book, on the role of evo mat in modern science; observing also that the NAS etc. now insist on precisely this same imposition of materialism in their attempted re-definition of science, in our day. Here is Lewontin:
. . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.
The materialist agenda could not be plainer than that, regardless of what theistic evolutionism may wish to say. GEM of TKI kairosfocus
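Point 9 in the comment above leans on K-compressibility, which is uncomputable in general; as a rough, hedged stand-in, an off-the-shelf compressor such as zlib at least shows the qualitative contrast drawn from Orgel between periodic order and a random sequence:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: a crude, computable proxy for K-compressibility."""
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"THE END " * 512                                      # crystal-like periodic order
random_seq = bytes(random.getrandbits(8) for _ in range(4096))   # undirected contingency

print(f"ordered: {compression_ratio(ordered):.3f}")    # tiny: the whole string has a short description
print(f"random : {compression_ratio(random_seq):.3f}") # about 1.0: resistant to reduction
```

A functional message such as English text or DNA falls cleanly into neither bin by this test alone, which is why the thread keeps pairing complexity with functional specification rather than relying on compressibility by itself.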
KF I understand now that appendix was intended primarily to discuss FCSI not CSI, so sorry about the mischaracterization. JT
kairosfocus [80]: Just some quick responses I'll fire off to you at this point without a lot of planning:
First, if a given aspect of a situation or object is produced by law, it is inherently of low contingency... I could not disagree with that assertion more vehemently. This idea you express stems from two common-sense type assumptions: 1) That laws must necessarily be trivial and simple, because the natural laws we happen to know about are simple (at least in comparison to the complexity of the genome). The second misguided idea is that because laws are deterministic there is no contingency. But there is contingency if you do not know what complex deterministic process is causing something. The contingency only disappears when you know what that process is. A process determined entirely by law can have EXTREMELY complex behavior and extremely difficult to predict behavior. I can't do better than that at the moment for a rebuttal, but I am telling you the idea you've expressed above is among the most misguided in all of ID thought.
As another rejoinder to you, I would submit to you that the ID concept of agency equates to randomness. If an agent's actions are determined by laws, even extremely complex laws, then his actions are potentially predictable. Reciprocally [we can turn anything into an adverb in English, right?], to the extent an agent's actions are potentially predictable, we can derive a program, a set of laws characterizing how he is known to operate. OTOH, if an agent's actions are not predictable then his actions equate to RANDOMNESS.
I am sure you will understand that the longer the required specific bit string on our model snowflake, the harder it is to accidentally duplicate [or crack by brute force]. The length of that bitstring, if not based on something trivial like just the number of atoms, would be based on our knowledge of processes that could have caused it. So any such encoding scheme requires knowledge. If you don't know what caused something you can't encode it, and thus can't gauge its probability. This could be stated much better as well - but that's the gist of it. Also, yes, I do understand, or provisionally accept (to use WmAD's reasonings), that the percentage of compressible strings is incredibly minute, so observing one of a certain length means you could definitely rule out its occurring by a series of coin flips, for example. OTOH, there's another way of looking at probability wherein a string's probability is proportional, not to its own length, but to the length of the smallest program-input that could generate it. So in that scenario a string of 100 1's would be highly probable. And it does seem you do see that type of regularity in the natural world (but maybe not in coin flips). Repeating myself here, admittedly.
If not, why then do so many evo mat advocates foam at the mouth when the obvious point is put: there is a well-known, even routine, source of such complexity: intelligence. Because in the ID conception, intelligence is not a mechanism, not something that can be deconstructed or explained, and there is no consensus that such a thing conceived like that is an explanation for anything.
I conclude (provisionally but confidently, as is characteristic of scientific investigations) that the evidence of a programmed observed cosmos points to: design of the cosmos. 1 --> The first point of our awareness of the world is that we are conscious, intelligent, designing creatures who act into the world to cause directed contingency.
I don't think even you or any other ID advocate knows specifically what you mean by "directed contingency". --> We know that intelligence produces directed contingency, and that the resulting FSCI is beyond the random search resources of our observed cosmos. In the case of life systems, VASTLY so; e.g. 300,000 DNA base pairs (lower end of estimates for credible 1st life) has a config space of about 9.94 * 10^180,617. 7 --> Now, given the raw power required to make a cosmos on the scale of the one we see, that sounds a lot like a Supreme Architect of the cosmos etc. That makes a lot of people very uncomfortable, and I detect that in the loaded terms you used above. I didn't really imagine that the ID - evolution debate really had to do with the existence of God - after all, there are theistic evolutionists. The whole point of science is to explicate the laws, the process that caused something. Your idea that there are some things that laws just cannot do is ill-founded. Admittedly, whatever processes could account for us would equate to us and be extremely complex. But also there has to be a lot of randomness in there, or why is the universe as large as it is? I'm verging into an area that I've already discussed in other threads before, so will not rehash that whole discussion at this point. JT
KF: We were out of sync there -- just saw your new post. It could be a while before I respond. JT
[Have no idea why my entire post is blockquoted.] KF wrote [73]:
You may find my discussion and the onward links here helpful. Some nice snowflake pics, too
As a general introduction, let me say that when people bring up the snowflake example, it's not to imply that snowflakes are pretty much the same as life and that proves that the forces that created a snowflake can create life as well. Rather, the implicit argument is, "Here is something that everyone would agree is caused by natural laws. And yet the EF (or the design inference or whatever) would seem to imply that a snowflake is designed by what they call an 'intelligent agent'. This proves that the Design Inference is not reliable." So the focus is on the specific arguments made by Dembski. Therefore, dwelling on the difference between a snowflake and DNA in a detailed and laborious way, which you do at times in this section, however enlightening it is in a general sense, is not pertinent to the actual debate, because everyone understands that a snowflake and life are not the same thing. The discussion is enlightening, no doubt - just of questionable immediate relevance. If you can distill all that discussion down to a formula that compactly distinguishes life from nonlife and utilize such a formula in conclusively showing that natural laws (known or unknown) cannot account for life, that's another matter. But I don't see any evidence you've done that.
A snowflake may indeed be (a) complex in external shape [reflecting random conditions along its path of formation] and (b) orderly in underlying hexagonal symmetrical structure [reflecting the close-packing molecular forces at work], but it simply does not encode functionally specific information. Its form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces].
In the above you're saying that a snowflake may indeed be complex and specified, which is what it is in Dembski's scheme as well. I know in response to me previously you said that a snowflake was not complex. (Note also that the order in a snowflake you allude to would most definitely qualify as a pattern for the purposes of specification in the Dembskian scheme.) The crucial factor for you, however (or maybe you're quoting a source here), is that the snowflake does not encode "functionally specific" information. But if your stated goal in this section is to clarify what Dembski was talking about, he has nothing to say about functional specificity. No such terminology appears in his "Specification" paper. Maybe he engages the reader in some speculative discussion to this effect in some other book of his, but not in what he's presented of late as his definitive monograph on the subject of CSI. You alluded to "functionally specified complex information" and "FSCI" at the very beginning of this appendix, and if I understood your remarks correctly, you said that although your primary purpose in this appendix was to clarify the concept of CSI, FSCI was "a more easily identified and explained subset of the CSI concept". So I went looking for a definition for FSCI in your paper and found the following:
Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to practically solve problems faced by intelligent agents.
(In fact there is a hyperlink to the above from Defining "Functionally Specific, Complex Information" [FSCI].) This seemed a little vague to me, so I went looking for more specific references to the concept in your paper:
But also, in so solving their problems, intelligent agents may leave behind empirically evident signs of their activity; and -- as say archaeologists and detectives know -- functionally specific, complex information [FSCI] that would otherwise be utterly improbable, is one of these signs.
...In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources.
That's about it. Then there was the following hyperlink:
"Definitionitis" vs. the case by case recognition of FSCI
...In short, we do not need to have a "super-definition" of functionally specified complex information and/or an associated super-algorithm in hand that can instantly recognise and decode any and all such ciphers, to act on a case by case basis once we have already identified a code.
... This is of course another common dismissive rhetorical tactic. Those who use it should consider whether we cannot properly study cases of life under the label "Biology," just because there is no such generally accepted definition of "life."
So from the above it seems clear to me that by FSCI you just mean anything associated with life, and are not able to get any more detailed than that. So in your previous quote where you say that snowflakes are complex and specified, but not functionally specified, your objection apparently is that they're not life, but we already knew that. Your closing remark in that quote is that:
[The snowflake's] form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces]
But we already knew snowflakes are the result of chance and necessity, and not "designed". The argument is that the Design Inference would conclude they are designed and is therefore invalid. Also, the fact that you would point out the obvious (that snowflakes are the result of chance and necessity) implies that it's simply a matter of definition for you that anything caused by chance and necessity could not be life.
Moreover, as has been pointed out, if one breaks a snowflake, one still has (smaller) ice crystals. But, if one breaks a bio-functional protein, one does not have smaller, similarly bio-functional proteins. That is, the complexity of the protein molecule as a whole is an integral part of its specific functionality.
Undeniable, but considerations regarding periodicity [which you discuss elsewhere - the above has to do with recursiveness] do not appear to be relevant to the design inference itself. Also, people can already see that among objects in the natural world, what we call life is the only thing that encodes information in the way that it does. It's also evident that life is not a matter of simple patterned or repeating complexity. Detailing all these differences between life and nonlife in a systematic way does not establish that a separate ontological category of "agency" is necessary to account for them. As far as the chance aspect of the discussion goes, Dembski's argument may possibly be relevant there (but I'm not sure). But actually, it would be sufficient merely to show that strict Darwinism largely equates to pure chance, because that is what they adamantly deny. They and everyone else understand that IF Darwinism largely equates to chance, then it's meaningless. You don't actually have to establish in a formal sense, I think, that an object of such and such complexity could not happen by pure chance. Or maybe you do, who knows. I'll abruptly end my discussion of your paper (or specifically that appendix you requested I read). You could probably have predicted my response. I could probably continue in a similar vein for a while, but just wanted to acknowledge I read it. Not saying the entire 120-page paper is worthless or something. Well, there is one more comment I need to make. You write:
2] By developing the concept of the universal probability bound, UPB, [Dembski] was able to stipulate a yardstick for sufficient complexity to be of interest, namely odds of 1 in 10^150. Thus, we can see that for a uniquely specified configuration, if it has in it significantly more than 500 bits of information storage capacity, it is maximally unlikely that it is a product of chance processes.
So evidently you do understand that the complexity that Dembski is talking about is simply the number of bits required to express a value. JT
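For reference, the 500-bit figure in the passage quoted above is just the universal probability bound re-expressed in bits; a one-line check of the quoted numbers, offered as an illustration only:

```python
import math

# Universal probability bound of 1 in 10^150, expressed as a bit count.
print(round(150 * math.log2(10), 1))   # ~498.3 bits, i.e. roughly the quoted 500-bit figure
```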
Okay JT: A few notes:
1] Dembski on specification First, I note that we are not dependent on WmAD or the specific document in 2005. He is providing a MODEL that gives a metric for CSI, not the origination of the concept. That is in part why I often refer back to Orgel et al in the 1970's and build on the more basic foundation of looking at the configuration space implied by information-storing capacities. Once those capacities cross 500 - 1,000 bits, and we have reasonably specific function (vulnerable to perturbations, and requiring precise coding as a result), we are looking at the sort of isolation-in-config-space issues I documented in my always-linked online note. Within that context, WmAD has provided a useful model for certain cases, both in the older and in the newer formats; e.g. "K-compressibility" of a string's description relative to its own length is a useful metric of simplicity of specifiability. But "functions as such and such a particular component in a processing or info or control system" is just as valid. "Unfair[ness]" is irrelevant.
2] Snowflakes and complexity First, if a given aspect of a situation or object is produced by law, it is inherently of low contingency: reliably, given Forces F conforming to laws L, and under boundary and intervening conditions C, then result R [up to some noise, N] will result. Cf. case of a falling heavy object. Low or no contingency means that that aspect cannot store information, i.e. its capacity is well below the threshold. For snowflakes, the forces connected to the H2O polarisation and geometry specify its crystallisation, leading to a very regular, low-contingency pattern [up to the inclusion of the usual crystal defects]. As Orgel wrote in '73 -- originally describing the concept of CSI -- and as you seem to have overlooked:
Living organisms are distinguished by their specified complexity [i.e. as expressed in aperiodic, information-rich, specifically functional macromolecules -- GEM]. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
This is a classic definition by archetypal example and family resemblance thereto. The aspect of snowflakes that IS complex is different, e.g. the dendritic growth patterns on a star-shaped flake. In effect we can view this as a bit pattern running around the perimeter, similar to how the length and position of prongs on a Yale-type lock's key specify its function. Such a pattern may reflect undirected contingency (chance) or possibly could be manipulated to store information. [I wonder if some of those smart boys over in Langley have used altered snowflake patterns to store access codes?] I trust that this gedankenexperiment example will make plain the differences highlighted by the EF: low contingency is associated with specification, of course, but not complexity. High contingency may be undirected (chance) or directed (design). I am sure you will understand that the longer the required specific bit string on our model snowflake, the harder it is to accidentally duplicate [or crack by brute force]. And the function would be very specific and simple to describe: "access granted/denied." [Cf. Dembski's example from the 2005 paper, of Langdon and Neveu's attempted access to the bank vault in the Da Vinci Code novel. In short, WmAD and I have the same basic ideas in mind.]
In the relevantly parallel case of protein codes, they must fold to a given key-lock fit 3-d shape, and must then function properly in the cell's life processes. Hundreds, indeed thousands of times over, with the average length of proteins 300 20-state elements, i.e. requiring about 900 G/C/A/T elements, or 1,800 bits. With even generous allowances for redundancies, that is well beyond the reach of random chance in any plausible prebiotic soup; not to mention that the codes and algorithms and algorithm-executing machines all have to come together in the right order within about 10^-6 m. That is why OOL research on the evo mat paradigm is in such a mess. At the body-plan evo level we have to address the genetic code and epigenetic info issues as well, to innovate body plans to get to the functions that natural selection forces can cull from. That is why the Cambrian fossil-life body-plan-level explosion is such a conundrum, and has been since Darwin's day -- 150 years of unsolved "anomaly." Hence Denton's "theory in crisis" thesis.
3] Complexity, programs and their execution Working programs, as just noted, are of course based on highly contingent codes, algorithms and implementing machines; whether in life-based info systems or in non-life-based info systems. If you want to look at the laws of the cosmos as a program, I ask a question: who or what designed the language, wrote the code, developed the algorithms and the implementing "cosmos bread-making factory" machinery, as Robin Collins put it? Do you know of a case where complex programs have ever written themselves or designed themselves [apart from preprogrammed genetic algorithms that have specified target zones and so put in active information at the outset], "blind watchmaker" style, beyond the 500 - 1,000 bit threshold? [Methinks it is like a Weasel etc. (and evidently up to Avida) are bait-and-switches on blind searches, substituted by targeted ones.] If not, why then do so many evo mat advocates foam at the mouth when the obvious point is put: there is a well-known, even routine, source of such complexity: intelligence. So, per scientific induction, we infer a "law" of information: FSCI is a reliable sign of intelligence.
From that we look at the program that wrote the cosmos, i.e. its fine-tuned, highly complex set of physical laws. [Onlookers: have a read of the online physics survey book, Motion Mountain. Google it, and if necessary follow up the ScribD source if the original still will not download.] Reckoning back on inference to best explanation, I conclude (provisionally but confidently, as is characteristic of scientific investigations) that the evidence of a programmed observed cosmos points to: design of the cosmos. [Provisionally of course implies falsifiability, or at least the ability, in the Lakatos sense, to distinguish progressive and degenerating research programmes. And, if cosmos-generating and regulating physics is algorithmic, then it can in principle be hacked; maybe tapping into that dark energy out there as a power source. If that does not warm the cockles of the heart of any adventuresome physicist, I don't know what will! Imagine the possibility of superluminal travel by being able to create/access parallel universes that bring points far apart in our spacetime to our neighbourhood. Wormholes for real!]
4] The only question is, does the fact that we don't know what caused life mean it happened by magic? Excuse me!
1 --> The first point of our awareness of the world is that we are conscious, intelligent, designing creatures who act into the world to cause directed contingency. 2 --> This is more certain than anything else! Indeed, it is the premise on which we live together, interact and communicate, etc., in our world. 3 --> So, intelligence that acts based on mind into the world is credibly real, and actual, not mythical magic. 4 --> Further, it underscores the proper -- as opposed to conveniently strawmannish -- contrast: natural/ARTificial (or, intelligent), as opposed to the ever so convenient and loaded: natural/supernatural. 5 --> We know that intelligence produces directed contingency, and that the resulting FSCI is beyond the random search resources of our observed cosmos. In the case of life systems, VASTLY so; e.g. 300,000 DNA base pairs (lower end of estimates for credible first life) has a config space of about 9.94 * 10^180,617 (see the short sketch just after this list). 6 --> So, we may confidently reason from the info-system characteristics of life to its credible origin: directed contingency. There are various possible candidates for that, but the most credible would be the same one responsible for a cosmos that is fine-tuned to set up and sustain cell-based life. 7 --> Now, given the raw power required to make a cosmos on the scale of the one we see, that sounds a lot like a Supreme Architect of the cosmos, etc. That makes a lot of people very uncomfortable, and I detect that in the loaded terms you used above. 8 --> Well, so what? The chain of reasoning is from inductively well-grounded sign to the signified, not the other way around. 9 --> For, science is supposed to be an empirically based, open-minded, unfettered (but ethically and intellectually responsible) and open-ended search for the truth about the cosmos, not a lapdog, handmaid and enforcer to Politically Correct materialism hiding in a lab coat.
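As a quick check on the 9.94 * 10^180,617 figure in point 5, here is a minimal Python sketch; the 300,000-element genome size is the assumed lower-bound figure quoted above, not a measured value:

    import math

    genome_length = 300_000    # assumed lower-bound count of 4-state G/C/A/T elements
    states = 4

    log10_configs = genome_length * math.log10(states)   # log10 of 4^300,000
    exponent = math.floor(log10_configs)
    mantissa = 10 ** (log10_configs - exponent)
    print(f"4^{genome_length} ~ {mantissa:.2f} * 10^{exponent}")
    # prints: 4^300000 ~ 9.94 * 10^180617

The sketch only counts raw configurations; how much of that space is functional is the separate question the argument turns on.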
5] there's no reason to belabor the differences But, that was the precise point: we have grounds for seeing that there is a crucial difference between the snowflake and the info systems of life. Namely, that snowflakes do not have aspects where we see CSI, apart from the possibility of the boys from Langley intervening. And that is the exact point . . . GEM of TKI kairosfocus
w/ apologies for corrections: [76] You may have your own definition of complexity, and I have mine, but it's important to keep Dembski's in mind:
(ψR) cannot plausibly be attributed to chance... It's this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) - but not R - a specification.
- "Specification: The Pattern that Signifies Intelligence" JT
correction: 76 was in reply to kairosfocus [73]. JT
KF [76]: One other thing: There's no doubt that life is very, very different from nonlife, and actually nobody denies this. Nobody thinks that a snowflake is the same as life. So to me, there's no reason to belabor the differences. The only question is, does the fact that we don't know what caused life mean it happened by magic? Maybe it might be reasonable to say that, since in our experience life only comes from life, whatever complex forces and laws out there that we're currently unaware of and that directly account for life - those forces and laws, no matter how diffuse and indirect they may be - must equate to life in some real sense. To me, this view is preferable to creating a new, separate ontological category of "agent" that essentially operates by magic. JT
You may have your own definition of complexity, and I have mine, but it's important to keep Dembski's in mind:
It's this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) - but not (R) - a specification.
-"Specification: The Pattern that Signifies Intelligence" In this scheme every bit-string of length n has the same complexity, whether the string is all 1's, completely random, or anything in between. It can be confusing because subsequent to this there are repeated references to descriptive complexity, i.e. the complexity of the actual pattern, but as its explained, descriptive complexity actually has to be kept low or the Design Inference won't work. (Note that the terms "CSI" or "complex specified information" do not appear in the above paper, but that's what its about - W.D: "For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence”) So its event complexity that is relevant - "the difficulty of reproducing the event by chance". In reference to bit strings this is taken to be its length. It seems some straightforward measure of length should be used in a natural context as well (e.g. number of atoms). But some people take into consideration known processes to arrive at complexity for natural events. But in that case there is an unfair prejudice against anything we don't know the cause for, wherein we say any such thing is really really improbable and complex because we don't know what caused it. But anyway my own measure of complexity would be the size of the smallest program-input that could generate a number so that a string of all 1's would be highly probable. This I think is the conventional measure of complexity. You wrote: "In the case of the snowflake, the typical hex symmetry is set by law-like forces. That gives specification but no complexity" I think you're intending to imply that law-like forces cannot result in your concept of complexity, or maybe you exclude by definition anything produced by law-like forces from being complex, I'm not sure. But in the conventional definition of complexity its assumed that everything can be produced by some set of laws (i.e. some program), maybe not some known set of natural laws, or a simple set of laws, but some set of laws nonetheless. (It could be that we don't have the brainpower to identify the natural laws that created us because they're too complex.) You may find my discussion and the onward links here helpful. Some nice snowflake pics, too. Thanks. I'll get back with you on this later. JT
GSV, message #11 "I am trying to explain the EF to a friend who does not have any of your books, Mr Dembski, so I was hoping there was an example of its use somewhere on the web. Anyone?" http://www.arn.org/docs/dembski/wd_explfilter.htm Ray R. Martinez
JT Saw your "The snowflake would be both complex and specified . . ." [71] while downloading and searching on a 6to4 adapter Vista headache. The key to the problem is to understand that the EF is speaking about complex specified information relating to a given aspect of a phenomenon. THAT is what puts the outcome in an isolated island of functionality in a broad config space, or into a relatively tiny target zone that is a supertask for a random search to try to find. In the case of the snowflake, the typical hex symmetry is set by law-like forces. That gives specification but no complexity: you have a periodic, repeating structure that has low contingency, and so little capacity to store information. Where there is complexity is in, e.g., dendritic flakes. This is driven by the random pattern of microcurrents and the condensation of tiny drops of water from the air as the flake falls and tumbles. So, we see a complex but not directed/controlled branching structure superposed on the hex symmetry. [With such direction, it COULD in principle be made to store working information, as it has high contingency, but of course in the generally observed case it does not store any functional information.] So, the specificity and the complexity speak to divergent aspects of certain snowflakes -- dendritics form under certain conditions, and not others. That is why the EF will rule for the two aspects:
HEX STRUCTURE: Law, so specificity, but no room for information-storing high contingency, so not complex. DENDRITES: Complex, but produced by effectively random circumstances -- undirected high contingency, i.e. complex but not simply specific or functional. Chance.
You may find my discussion and the onward links here helpful. Some nice snowflake pics, too. Update just finished, and so am I. (I hope my wireless net access will start back working . . . Vista can be a real pain.) GEM of TKI kairosfocus
I suggest people here consider what is meant by specified or specifies. DNA specifies many other things independent of itself that have a function, a very organized function - just as the letters of the alphabet, when properly arranged, specify meaning in a language, or a computer code specifies operations in the hardware of a computer. The only place this appears in nature is in life. Nowhere else does such a phenomenon happen, where one configuration is used to specify another configuration that has an organized function. Also, there do not appear to be any instances where new specificity in life has appeared that isn't just a trivial addition or subtraction to the current specificity. Something like the EF may be an attempt to explain every possible instance of a non-chance and non-law situation, but that attempt may be too ambitious. As far as the evolution debate is concerned, this universality is not needed. So argue over the universality of the EF, not whether it applies to life or not. As an aside, chance is an intrinsic part of the Modern Evolutionary Theory and always has been, no matter what the name is. It operates mainly on the variation side of the theory, i.e. how one explains the origin of new variation in a population gene pool. The answer is the so-called engines of variation that add so-called new genetic information to the gene pool. On the genetic side, which includes natural selection, chance also operates, but to a lesser extent. If sexual reproduction does not produce the right combinations of genes or genetic elements, there may not be a chance for natural selection to work in the way it is suggested it works. The environment is very chance oriented. Theoretically, selection and environment will lead to one gene pool in the future, but chance elements could modify or even thwart this from happening. So on the genetic side there are also chance elements. Then there is the discussion of whether there is anything called chance at all, or whether chance is just our inability to describe the deterministic forces at play, which forces us to use probability distributions to describe the array of outcomes even when each instance may be determined. jerry
Patrick [61]: Snowflakes are crystals. Crystals are just the same simple pattern repeated. Simple, repeated patterns are not complex. ... The problem is to explain something like the genetic code, which is both complex and specified. Patrick, I feel fairly certain your understanding of CSI is flawed here. The snowflake would be both complex and specified. [Although I could be incorrect and am still trying to clarify this for myself - ALSO see Para. 7 below] In the Dembskian scheme a binary number's complexity is determined strictly by the number of bits it contains. Considering the UPB is in reference to the number of possible particle interactions or some such, I think we would have to look at the total number of atoms in the snowflake to determine its complexity. That's what it's comprised of - atoms. So a snowflake is complex in the design inference just on that basis. Also note that the only type of patterns that can be referenced in the design inference are simple patterns. If you consider some biological functionality comprised of a great number of interworking parts, that's not something the Design Inference can handle. So anyway the pattern of the snowflake is right in line with what the design inference typically references. Consider the Bacterial Flagellum - the pattern identified has only four components. All the design inference does is rule out chance, and then it's up to you to decide whether its cause is mechanism or not (i.e. either necessity or design). The thing with the snowflake is we presumably already know about a mechanism to explain it. The way the design inference is usually employed is to assume that if a mechanism is not known, then we are justified in saying it's design. It almost seems like a pointless exercise in the context of science to rule out chance, as no one (Darwinians or whomever) would consider chance an explanation for anything (or at least they wouldn't admit it). The goal of science is to explicate - i.e. to propose a mechanism. Chance doesn't explain anything. Neither does design, for that matter. Para 7. So basically any object in nature that has an identifiable pattern could not have happened by chance. OTOH there does seem to be a way that some treat CSI wherein the complexity (probability) of an object is determined, not by the number of atoms it contains, for example, but by our knowledge of mechanisms that could have caused it. Therefore, all non-life phenomena are assumed to be in that category, that is, it is assumed that we know about mechanisms to cause them, so they would automatically be labeled probable and non-complex. That leaves life. It is obvious to everyone that life is not explained by the typical physical forces we see operating on earth (e.g. wind, erosion, and so on). Thus the hand-waving and appeals to great lengths of time in most naturalistic explanations. So life would be considered highly improbable with respect to the physical laws that we DO know about. But it's an appeal to ignorance to say that because we don't know about any other laws (i.e. mechanisms) to account for life, no such laws exist. Contingency/chance can explain complexity but not specification. ... On the other hand, laws can explain specification but not complexity. (?) A mechanism can explain anything if the mechanism is known to exist. There is certainly no law that says laws (i.e. mechanism, necessity) cannot be complex. JT
Mark, Trib, Patrick: Passed by while doing long wait downloads. Orgel, 1973:
Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
That should help. Maybe, my own comment as well, here? [Notice how different aspects make an appearance in discussing dendritic snowflakes.] Hey, the 43 Mbyter just finished; back to work. GEM of TKI kairosfocus
Patrick #61. Thanks. However, repetitive structures, such as crystals, do constitute specificity. I was responding to Tribune7 #51: Dembski addresses crystals and patterns & such. If they aren't specified they aren't designed. So with luck Tribune7 will now accept that you are right and that they are specified. Mark Frank
PS: And, law-of-averages expectations implicitly, intuitively and naturally -- I daresay inescapably -- bring in the issues underlying Fisherian Elimination. Even in a context where an officially "Bayesian" approach is being used that does not explicitly refer to targets and rejection regions, or islands of function and seas of non-function, etc. kairosfocus
Patrick, at 55: Thanks for the thought. H'mm, I thought the definition of aspects would bring out the point that one is separating out for analysis what is in fact inseparable practically? Namely:
as·pect . . . . 3. A way in which something can be viewed by the mind: looked at all aspects of the situation. [AmHDict] . . . . 1. a distinct feature or element in a problem or situation [Collins] . . . . a distinct feature or element in a problem; "he studied every facet of the question" [WordNet 3.0] . . . . Synonyms: phase, aspect, facet, angle2, side These nouns refer to a particular or possible way of viewing something, such as an object or a process: Phase refers to a stage or period of change or development: "A phase of my life was closing tonight, a new one opening tomorrow" Charlotte Brontë. Aspect is the way something appears at a specific vantage point: considered all aspects of the project. A facet is one of numerous aspects: studying the many facets of the intricate problem. Angle suggests a limitation of perspective, frequently with emphasis on the observer's own point of view: the reporter's angle on the story. Side refers to something having two or more parts or aspects: "Much might be said on both sides" Joseph Addison. [Synonyms at Phase, AmHDict]
Is there a better term out there? I am also trying not to make the diagram into a spaghetti-monster with all sorts of loops and iteration-loop counts, etc. I do believe one may take a law pass, a chance pass -- i.e. "further investigation" -- and a design pass on the same situation, looking at different aspects. E.g. consider a torsional pendulum experiment [a wire with a disc weight on the end, oscillating by twisting back and forth] where one isolates the law-like regularities, the experimental scatter that tends to hide them, and the problems of bias due to experiment design. [We used to use some fairly sophisticated graphing tricks to isolate these features, e.g. log-log and log-lin plots to linearise, and then take out scatter by plotting best straight lines etc.] Maybe what is needed is to give an illustrative instance or two in explaining the general utility of the EF, then bring up case studies that show how it relates to the cases of real interest? Thanks. __________ PO and Mark: Re law vs design . . . As you will see from my discussion at 46, law ties to reliable natural regularity, so the similar set-up will repeatedly get very similar results. So, there is low contingency. In cases where similar setups give divergent results, we look for the factors associated with high contingency: chance (undirected) and design (directed). Think about the die example, and how the House in Vegas looks at it: it wants undirected contingency and takes steps to ensure that that is what happens. ______________ PO: Re flat distributions. In a great many relevant cases of practical interest, flat or near-flat distributions are either what is supposed to have been there, or are a very reasonable assumption in light of credible models of the dynamics at work. Cf my reproduction of GP's very relevant discussion on elimination in biology here. In the case of Caputo, he was supposed to preside over a fair system, so the result should have been close enough to what we expect from a fair coin-toss exercise as makes no difference. As you know, per basic microstate and cluster considerations of thermodynamics, the predominant cluster of outcomes for such will strongly dominate, i.e. a near 50-50 pattern is heavily to be expected. C saw a strong run, on the assumption of innocence. That implies that he had an obvious duty to fix the problem, and simply calling in D and R scrutineers and flipping a coin would have done nicely. [He didn't even need to think of using an ideal "lens-shaped 2-sided die" to eliminate the coin-edge problem.] So, even the inadvertent unfair-coin idea comes down to design by willful negligence, once we factor in the statistical process control implications of a strong run. And the issue of runs emerges as a natural issue given the valid form of the layman's intuitive law of averages and related sampling theory. __________ G'day all GEM of TKI kairosfocus
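To put a number on the kind of "strong run" at issue in the Caputo case, here is a minimal Python sketch; the 40-out-of-41 tally is the figure commonly cited for that case, and the fair 50-50 draw is exactly the null model described above:

    from math import comb

    draws = 41        # ballot-position drawings in the Caputo case (as commonly cited)
    dem_first = 40    # drawings in which the Democrat got the top line

    # Tail probability of at least 40 Democrat-first results under a fair 50-50 draw
    p = sum(comb(draws, k) for k in range(dem_first, draws + 1)) / 2 ** draws
    print(p)    # about 1.9e-11

That tail probability is what Fisherian elimination works with; the further step from "not a fair draw" to "negligence or design" is the argument made above, not something the arithmetic itself settles.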
Patrick[64], As I said on Olofsson I, I bet there is an assumption of a uniform distribution in the paper. We shall see. Does anybody know where it will appear? Prof_P.Olofsson
#60
Your post is quite hard to understand, but I think that what you are saying is that it is OK to deduce design by elimination of other causes but not OK to deduce to necessity by elimination of other causes. If so, why?
Come on; my English is certainly quite bad, but I don't think it is hard to understand the basic points (provided that one wants to understand). Anyway, let us see the basic points: Your argument fails because necessity and design are asymmetric explanations and cannot be exchanged in the first step of the EF. In fact, why does it make no sense to put design detection as the first step? Because there are only two ways to detect design for a given entity: 1. I already know that the entity was designed; in this case the EF is useless. 2. I don't know anything about it; in this case the design inference has to be made by looking at the entity and finding in it overwhelming evidence that it was designed and not the mere output of necessity and chance. But this is just the output of an EF that has previously excluded the possibility that the entity could have arisen by means of natural, non-directed forces. It is for this reason that design and necessity are not exchangeable in the EF, and it makes no sense to require that they be. At the end I provided an example to show how this kind of asymmetry is typical of many different problems. Consider the task of deciding whether a given natural number is prime or not. To the best of our knowledge there isn't any direct algorithm or formula that allows us to say whether a given number N is prime or not; in fact, to solve the problem one needs to verify that the number N cannot be decomposed as a product of other prime numbers. Now, isn't this task similar to design detection, in which the exclusion of the other explanations is required? And wouldn't it be silly nonsense to ask that the decision about N be put as the first step?
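For what it is worth, here is a minimal Python sketch of the elimination-style primality check described in that analogy (plain trial division, chosen only to illustrate the decide-by-excluding-the-alternatives structure):

    def is_prime(n: int) -> bool:
        # Decide primality by trying to exclude every possible factorization.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:     # a decomposition exists, so n is not prime
                return False
            d += 1
        return True            # no decomposition found: primality by elimination

    print([n for n in range(2, 30) if is_prime(n)])
    # prints: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

The positive verdict ("prime") arrives only after every candidate divisor has been ruled out, which is the asymmetry being pointed to.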
Bill had a more interesting comment in that other thread:
There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).
If I may interpret what I think he's saying: even if an Indirect Stepwise Pathway were found to be capable, ID would not be falsified completely, as the problem would then be shifted to the active information injected at OOL. Patrick
vjtorley, Your comment actually makes my point for me, which is that the WMAP data are relevant in deciding whether the universe is infinite. Therefore Dembski's objection in The Chance of the Gaps does not apply:
Nevertheless, even though the four inflatons considered here each possesses explanatory power, none of them possesses independent evidence for its existence.
You write:
Third, even if it could be established on inferential grounds that the universe is infinite, nevertheless, when making design inferences, it might still make perfectly good sense to confine ourselves to the event horizon (i.e the observable universe), which is finite:
That would make even less sense than it would have made for Eratosthenes, having measured the diameter of the Earth, to assume that the rest of the world must resemble the Mediterranean. ribczynski
Patrick, that seems to be the phrasing :-) tribune7
The paper does not explicitly talk about crystals but it defines specificity in terms of a pattern that can be expressed in a small number of symbols. Crystals clearly fall into that category.
To save time I'll just quote myself:
Snowflakes are crystals. Crystals are just the same simple pattern repeated. Simple, repeated patterns are not complex. Repetitive structures, with all the info already in H2O, whose hexagonal structure/symmetry is determined by the directional forces - i.e. wind, gravity - are by no means complex. However, repetitive structures, such as crystals, do constitute specificity. Snowflakes, although specified, are also low in information, because their specification is in the laws, which of course means that node 1 in the Explanatory Filter (Does a law explain it?) would reject snowflakes as being designed. Contingency/chance can explain complexity but not specification. For instance, the exact time sequence of radioactive emissions from a chunk of uranium will be contingent, complex, but not specified. On the other hand, laws can explain specification but not complexity. The formation of a salt crystal follows well-defined laws, produces an independently known repetitive pattern, and is therefore specified; but like the snowflake, that pattern will also be simple, not complex. The problem is to explain something like the genetic code, which is both complex and specified.
Patrick
#58 Kairos Your post is quite hard to understand, but I think that what you are saying is that it is OK to deduce design by elimination of other causes but not OK to deduce to necessity by elimination of other causes. If so, why? Mark Frank
#53 He talks about it in his 1998 book Mere Creation: Science, Faith and Intelligent Design. The phrasing is not what I remember but the idea is the same. That is 7 years prior to the paper Specification: The Pattern That Signifies Intelligence - which he said on this site just a few days ago was definitive. The paper does not explicitly talk about crystals but it defines specificity in terms of a pattern that can be expressed in a small number of symbols. Crystals clearly fall into that category. Mark Frank
#40 Mark Frank
What interests me is the parallel between this first step and the first and second steps in ID version. Why do we need to start with necessity and chance? Why not start with eliminating design and thus concluding necessity and/or chance?
Now I've understood your point; but it seems to me that your argument fails at the beginning, where design and necessity are basically treated as symmetric explanations for the production of a given artifact/natural entity. Indeed, this doesn't seem to be the case. The fact that necessity and design are asymmetric explanations is due to the fact that, in the absence of preliminary information, design recognition is performed precisely by excluding the possibility that the entity could have arisen by means of non-directed activities. In this sense it simply makes no sense to put design detection as the first step, because there are two possible cases: 1. I already know that the entity was designed; in this case the EF is useless; OR (aut) 2. I don't know anything about it; in this case the design inference is the possible output of a filter that has previously excluded the other two possibilities. To explain the difference I would propose a mathematical example (please take it only for what it is: a convincing analogy). Consider the task of deciding whether a given natural number is prime or not. To the best of our knowledge there isn't any direct algorithm or formula that allows us to say whether a given number N is prime or not without requiring the verification that the number N cannot be decomposed as a product of other prime numbers. Now, isn't the task of recognizing whether N is prime, in a certain sense, similar to design detection, which does require the exclusion of the other possibilities? Wouldn't it be silly nonsense to ask for the reverse of the verification steps? kairos
Your attempts to use empirical evidence and logic to refute a bias are utterly illogical. Sal Gal, there is a logic to it. Our society and science are based on a philosophy that isn't reasonable -- i.e. only answers provided by methodological naturalism are acceptable -- and Dembski successfully confronts that philosophy on its own terms. We can either have a society based on the view that maybe there is a God (and behave accordingly) or one based on the view that maybe there isn't a God (and behave accordingly). Right now we are the latter, and most seem to be making legal/economic/political and cultural decisions based on that. One of the Fox cartoons -- I think it was Family Guy -- had as a recurring punchline "Laura Bush killed a guy," an apparent reference to her accident as a teenager. If those scriptwriters had just an inkling that they might one day have to explain why they did that, in circumstances that would have extreme consequences for themselves, I don't think they would have written it. tribune7
William A. Dembski, Any universe we know is finite. Let's stick to that universe. The universe is. It is not an instance of anything. There is no sample space. The universe is not an event. There is no probability that the universe is the way it is. There is no information in the universe as a whole. The universe is what it is. We cannot learn what the universe is by induction. There is no bias-free learning. Your attempts to use empirical evidence and logic to refute a bias are utterly illogical. You have indicated that you regard methodological naturalism as spiritually pernicious because it turns inexorably into philosophical naturalism (materialism). I'm actually with you there. But I will always hold that empirical science is a workhorse, not a racehorse. It's not going to get us to the Truth before we die. I believe that you haplessly align yourself with the atheists when you make too much of science. Sal Gal
kf,
I wonder if my recently updated EF flowchart and discussion here may be of further help as well.
I like your flowchart better, but it still does not explicitly make clear that Design can incorporate Necessity and Chance. I especially like how you have a "Further Inquiries" node at the end. I think you could improve that by listing conditions which would warrant a re-evaluation. Patrick
Mark, I can't remember where I read it first either. I think it was online somewhere. He talks about it in his 1998 book Mere Creation: Science, Faith and Intelligent Design. The phrasing is not what I remember but the idea is the same. tribune7
"Dembski addresses crystals and patterns & such. If they aren’t’ specified they aren’t designed." I get muddled with all the different books. Was this before or after he defined specified in terms of compressability? Mark Frank
ribczynski You write: "As Monton points out, patterns in the cosmic microwave background provide independent evidence for an infinite universe." Not so fast. I'd like to refer you to an article at http://arxiv.org/PS_cache/arxiv/pdf/0801/0801.0006v2.pdf which makes a strong case on observational grounds that we live in a finite dodecahedral universe. Monton seems to base his case for an infinite universe upon recent WMAP observations indicating that the universe is flat. As I am not a scientist, I will keep my comments brief. First, according to the NASA Website http://map.gsfc.nasa.gov/universe/uni_shape.html , "We now know that the universe is flat with only a 2% margin of error." However, according to the same Web page, the universe is flat only if the density of the universe EXACTLY equals the critical density. I respectfully submit that a 2% margin of error is, by itself, woefully insufficient evidence for the existence of an infinite universe - particularly when other, finite-universe hypotheses are compatible with the WMAP observational data. Second, the Poincare dodecahedral universe (which is finite) is perfectly consistent with the WMAP data, as far as I am aware. Third, even if it could be established on inferential grounds that the universe is infinite, nevertheless, when making design inferences, it might still make perfectly good sense to confine ourselves to the event horizon (i.e the observable universe), which is finite: "The observable universe is the space around us bounded by the event horizon - the distance to which light can have traveled since the universe originated. This space is huge but finite with a radius of 10^28 cm. There are definite total numbers of everything: about 10^11 galaxies, 10^21 stars, 10^78 atoms, 10^88 photons" ( http://universe-review.ca/F02-cosmicbg.htm ). vjtorley
No function. They are extraordinary in their detailed symmetry. Well, they would be filtered out. Dembski addresses crystals and patterns & such. If they aren't specified they aren't designed. However, I have since thought of a better example - courtesy of one of the many Daniel Bernoullis in 1734. As you probably know, all the planets (with the possible exception of Pluto) have orbits that are closely aligned. Ahhh, now that's a different subject and you're getting into Privileged Planet territory :-) tribune7
I think the last node on the EF needs to be split into 2 nodes. Lumping the specification together with the small probability is confusing. Why not separate them? Would this not make your case for CSI stronger? the wonderer
#46 Kairosfocus Law and design are not simply interchangeable, once we see the sharp contrast in degree of contingency for the two. Can you expand on this. I don't understand what "degree of contingency" means in this sentence. Thanks Mark Frank
# 43 Tribune Mark, what specification do these extraordinary shapes have? What are their functions? No function. They are extraordinary in their detailed symmetry. However, I have since thought of a better example - courtesy of one of the many Daniel Bernoullis, in 1734. As you probably know, all the planets (with the possible exception of Pluto) have orbits that are closely aligned. If the orbits were randomly oriented (uniform pdf over angle) then the chance of them being so closely aligned is less than 1 in 3 million. I don't think the concept of specification stands up to detailed scrutiny, but this is close to the idea of "specification as simplicity" in the 2005 paper. So now imagine you are Bernoulli or a contemporary. You have done the probability calculation; now you wonder - how come? So you apply my filter. Were they designed to be so closely aligned? There is no hypothesis about a designer who has the power and inclination (no pun intended) to align them. So we eliminate design. Did they end up this way by chance? We just did this calculation. Not the UPB, but pretty far-fetched. Therefore, it must have been necessity, from some as yet unknown natural cause. Of course we could use the ID filter, in which case the roles of design and necessity are reversed. No known natural cause of necessity, therefore as yet unknown design. Mark Frank
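As a rough illustration of where a figure like "less than 1 in 3 million" can come from, here is a minimal Python sketch; the six planets, the 10-degree band and the uniform-inclination model are assumed, illustrative parameters, and the sketch makes no attempt to reproduce Bernoulli's actual 1734 calculation:

    planets = 6         # assumed: roughly the planets known in Bernoulli's day
    band_deg = 10.0     # assumed alignment band, in degrees from a reference plane
    max_deg = 90.0      # each inclination modeled as uniform on [0, 90] degrees

    p = (band_deg / max_deg) ** planets
    print(p, round(1 / p))
    # about 1.9e-06, i.e. roughly 1 chance in 531,000 under these assumed parameters;
    # a tighter band or more bodies pushes the figure toward "1 in millions"

The filter question then turns on whether such a number is far-fetched enough, which is a judgment the arithmetic itself does not make.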
kf[46], I was sloppy; I meant that the order in which they are investigated ought to be interchangeable. Why not start with design? Prof_P.Olofsson
Prof PO A brief note: Cf points 5 - 8, no 41. Law and design are not simply interchangeable, once we see the sharp contrast in degree of contingency for the two. GEM of TKI kairosfocus
PPS: Trib is right to point to the issue of absence of FSCI. The Carlsbad caves can be accounted for on laws plus random chance circumstances, e.g. formation of stalactites and stalagmites, and how they occasionally meet and fuse. So, that is the best current explanation. (BTW Trib, bouncebacks . . . ) kairosfocus
Mark[40], A very good point. Chance is the odd one out because it is eliminated by computing probabilities, but the other two are interchangeable. Unless, of course, one has already decided to infer design, but that couldn't be the case, could it? Prof_P.Olofsson
apply it to the extraordinary shapes in the Carlsbad caves. Mark, what specification do these extraordinary shapes have? What are their functions? tribune7
Sparc, is that so? Larry Moran at Sandwalk voted for me? As it happens, I didn't vote - I had thought it best to wait out the horde of trolls storming by. But Larry has done so much to attract visitors to my Post-Darwinist blog that - had I realized - I would have voted for him. Noblesse oblige and all that, you know. O'Leary
H'mm: First, let me heartily join in the chorus of congratulations to Ms O'Leary on a job well done and finally recognised. Second, I am glad to see Dr Dembski clarifying his earlier remarks that are obviously being pounced on by the PT's of this world. (BTW: I have observed, on reading his usage of "dispense" and the like, that Dr Dembski uses it in the context not of dis-establishing or repudiating, but of getting to a more general, simple or intuitively clear result that allows one to not use the former step-by-step process or approach.) Third, some remarks on the EF, from my take: . . . especially for Patrick, on no 19 . . .
1 --> EVERYBODY uses it, and implicitly relies on it. (So, critics need to beware of falling into self-referential incoherence and selective hyperskepticism.) 2 --> For instance, we all take it for granted that posts on this thread originate with agents, not noise. But, strictly, noise can mimic any digital signal. So, we MUST use an inference process that leads us to see that, per best explanation, it is agent, not noise and not mechanical necessity. 3 --> Intuitively, we are inferring that the apparent messages are functionally specific and sufficiently complex [read that: have more than 500 - 1,000 functional bits . . . here at 7 - 8 bits per alphanumeric character in ASCII codes, starting with log-in] to make the odds of random processes getting to such islands of function in the available config spaces negligibly different from zero. 4 --> But also, there is a logic of categorisation, which is where the filter takes its structure. 5 --> Start with how we identify a natural law at work. Namely, we observe a reliable natural regularity, i.e. low contingency of outcomes under similar circumstances, revealing that mechanical forces of one kind or another are at work, leading to predictable outcomes. Thus, law as the description of the pattern, e.g. F = k.x, F = m.a, KE = 1/2 m.v^2, E = m.c^2, F = G.m1.m2/r^2, dN/dt = -lambda.N(t), etc. 6 --> By contrast, we can have high contingency, e.g. when we toss a die, its uppermost face on settling under generally similar circumstances varies significantly. 7 --> Such high contingencies, as gaming houses in Las Vegas know all too well, may be [a] undirected (chance) or [b] directed (design). (And indeed, at the tables in Las Vegas, intelligent agents are making advantageous use of the laws of gravity and mechanics to use dice to hopefully generate random outcomes to play a game of chance. Thus, chance, necessity and agency can interact in a situation, but for analysis we can look at key aspects and see which causal factor is the important one for that facet. BTW, too, trick die throwing is now at the level where the rules are that dice must be transparent and balanced, and must be tossed so that they hit the table and bounce off a wall studded with tiny square-based pyramids, to sufficiently randomise outcomes.) 8 --> So, so far we see [c] CONTINGENCY as hi/lo. Where: [d] low points to law-like mechanical necessity, and [e] high points to chance (when undirected) or design (when directed). The issue then is to discern the degree of contingency and whether or not high contingency is directed; taking "undirected" as the default in cases where we have no reason to start from the assumption or observation of direction. 9 --> Mix in the idea that if something happens reliably under certain circumstances, it has quite high probability of being observed under those circumstances. So, [f] fig 19's node 1: high probability --> law makes sense. 10 --> As outcomes may vary across a range, the probability of any one outcome being observed under given circumstances is proportionately lower. 11 --> Now, apply chance as the default, and the principle that reasonably random samples from a population tend to reflect its gross structure. (That's why Fisherian Elimination gets suspicious when events from unlikely clusters are seen, esp. if the observed events would fit with a candidate agent's possible purposes.)
12 --> So then, [g] if the probability is lower than for the first node, but within a cluster of possible outcomes that a random sample would likely reach, it makes sense to infer that chance is the best candidate explanation. So, node 2 in fig 19 also makes sense: intermediate probability points to chance. 13 --> But also, [h] we have highlighted a circumstance under which chance might not be the best explanation: a purpose-serving outcome of very low probability. 14 --> This is the point of node 3: [i] an outcome from a statistically relatively scarce cluster of possible outcomes that fits a reasonable specification (especially a functional and potentially purposeful one) is best explained by design, not chance. 15 --> And indeed, on massive experience and observation, [j] when we see things that exhibit complex specified information and we know the causal story directly, they are the product of agents. 16 --> That means that [k] CSI, or its more easily recognised subset FSCI [functionally specific, complex information], is an empirically reliable sign of intelligence. 17 --> Once that is seen, [l] we can therefore "dispense with" the step-by-step process that warrants the claim and use the sign directly. In short, we can then [m] simply use the identified presence of CSI/FSCI in a key aspect of a situation, event or object as a reliable sign of design. (A minimal sketch of this three-node logic follows just below.)
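For concreteness, here is a minimal Python sketch of the three-node logic just outlined; the probability thresholds and the specification flag are placeholders for illustration, not calibrated values from Dembski's work:

    def explanatory_filter(probability, specified, high_p=0.5, low_p=1e-150):
        # Toy three-node filter applied to one aspect of an observed outcome.
        # probability: estimated probability of the outcome under undirected causes
        # specified:   True if the outcome fits an independently given pattern
        # low_p:       placeholder standing in for a UPB-style bound
        if probability >= high_p:
            return "law (necessity)"     # node 1: high probability, low contingency
        if probability > low_p or not specified:
            return "chance"              # node 2: intermediate probability, or unspecified
        return "design"                  # node 3: specified and very low probability

    # Made-up inputs, purely to show the flow of the decision:
    print(explanatory_filter(0.99, True))      # hex-symmetry-style regularity -> law
    print(explanatory_filter(1e-40, False))    # dendrite-style scatter -> chance
    print(explanatory_filter(1e-200, True))    # message-style functional text -> design

The real work, of course, lies in estimating the probability and warranting the specification for a given aspect; the flowchart itself is the easy part.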
In short, infelicitous wording notwithstanding, BOTH the EF and CSI are valid and reliable in the scientific exploration of signs of intelligence. I wonder if my recently updated EF flowchart and discussion here may be of further help as well. G'day all; and . . . Season's Greetings. GEM of TKI. PS: Trib, can you kindly contact me through my email, accessible through my handle in the always-linked article in the left-hand column? kairosfocus
#39 Kairos "Have they been designed? There is no design hypothesis which can account for them. Why would anyone want to create caves of this shape and how would they do it?" This would be the final output of the EF. I understand that. This is offered as an alternative EF with a different final output. What interests me is the parallel between this first step and the first and second steps in the ID version. Why do we need to start with necessity and chance? Why not start with eliminating design and thus concluding necessity and/or chance? Mark Frank
#36 Mark Frank
apply it to the extraordinary shapes in the Carlsbad caves. Have they been designed? There is no design hypothesis which can account for them. Why would anyone want to create caves of this shape and how would they do it?
This would be the final output of the EF.
Was it chance? The chances of such extraordinary shapes forming at random must be much lower than the UPB.
No, that's not true. Each specific configuration would have a very low prob., but what is important is the specificity of the configuration.
Therefore, it was necessity. There must be an unknown natural law that creates such shapes.
No, there's a known set of natural laws which are able to yield some form of configuration. kairos
vjtorley wrote:
Dr. Dembski, Thanks for the paper, which I’ve just been reading. I think you’ve rebutted Monton’s arguments successfully.
vjtorley, I disagree. First, the four "inflatons" that Dembski specifically criticizes do not include the idea of a single, infinite universe. Second, consider Dembski's general justification for rejecting these "inflatons":
Nevertheless, even though the four inflatons considered here each possesses explanatory power, none of them possesses independent evidence for its existence.
As Monton points out, patterns in the cosmic microwave background provide independent evidence for an infinite universe. Dembski writes:
It is logically possible that the laws of physics might have been different, not only in their parameters but also in their basic form. It is logically possible that instead of turning to mathematics I might have become a rock and roll singer.
I don't know, Bill... Some things are impossible in any universe. :-) ribczynski
Congrats to Denyse O’Leary, whose Post-Darwinist blog tied for third in the science and technology category from the Canadian Blog Awards.
Actually, one of the 31 votes came from Larry of Sandwalk. sparc
I want to put an alternative version of the EF for your consideration. You might for example apply it to the extraordinary shapes in the Carlsbad caves. Have they been designed? There is no design hypothesis which can account for them. Why would anyone want to create caves of this shape, and how would they do it? Was it chance? The chances of such extraordinary shapes forming at random must be much lower than the UPB. Therefore, it was necessity. There must be an unknown natural law that creates such shapes. Reasonable? Mark Frank
Dr. Dembski, Thanks for the paper, which I've just been reading. I think you've rebutted Monton's arguments successfully. vjtorley
vjtorley: Read my article "The Chance of the Gaps" that Monton cites in his piece. Monton remarks about my article, "Dembski has given a response that's related to my line of argument above." Actually, it deals with Monton's concerns spot-on. You can also find the same basic argument at the end of chapter 2 of my book NO FREE LUNCH. William Dembski
Good Call, Bill! *applause* crandaddy
Dr. Dembski, Just a quick question. Have you read Dr. Bradley Monton's paper, "Design Inferences in an Infinite Universe" at http://philsci-archive.pitt.edu/archive/00003997/ and have you published a response? vjtorley
You say, I am so clever. My filter is the best thing since the wheel. Oh, William, sooo humble. I think you've been defeated over at Panda's Thumb recently. Did you catch the article? It was about one of your recent concessions. Which apparently took you years to admit! The post is called Vindication: http://pandasthumb.org/archive.....ation.html You should respond to it on this blog! (end quote) It's one thing to link to an article that quotes out of context, but coming on here under the guise of "noted scholar"??? F2XL
Something to consider: The Explanatory Filter (EF)- Who uses it?: The explanatory filter (EF) is a process that can be used to reach an informed inference about an object or event in question. The EF mandates a rigorous investigation be conducted in an attempt to figure out how the object/ structure/ event in question came to be (see Science Asks Three Basic Questions, question 3). So who would use such a process? Mainly anyone and everyone attempting to debunk a design inference. This would also apply to anyone checking/ verifying a design inference. As I said in another opening post, Ghost Hunters use the EF. The EF is just a standard operating procedure used when conducting an investigation in which the cause is in doubt or needs to be verified. Joseph
Mapou wrote [22]: 1. Is there a law that requires huge numbers of particles (e.g., electrons) to have the exact same properties (e.g., mass, charge, spin orientations)? Answer: No. 2. Given that the number of possible properties that particles can have is infinite, is it likely that huge numbers of particles would have the exact same properties, if one assumed that the universe is a chance occurrence? Answer: No. Mapou, I think that is an interesting perspective on CSI applied at a macro level, a subject I've been preoccupied with for the last couple of days. Here is my own take: We could consider the entire biology of planet earth and indisputably apply the label "biology" to it. So now we know from the design inference that it didn't occur by chance. (But certainly some aspects of it occurred by chance, didn't they? I'll get back to that in a minute.) The first comment would most likely be, "Sure it didn't happen by chance, it's the result of the mechanism we call random change and natural selection." OK, then let's apply the label "caused biology" to the natural laws and mutations that collectively caused biology on earth. You just said they caused biology, so the label must be accurate. So the natural laws and mutations did not happen by chance, according to the design inference. Maybe the natural laws existed forever and so we can explain them that way. But the mutations are defined as happening by chance. But does it really violate the design inference to say only part of a string happened by chance? If 1% of a string was nonrandom and 99% was random, you could not say the string was entirely random, so you couldn't apply the label "random" to the entire string. So the design inference just seems to say that if a string has an identifiable pattern it cannot have happened entirely by chance. 99% by chance? 5% by chance? Fine. But not 100%. JT
The flavor is better kept if the loaf is kept intact and only sliced as needed.
Do you have a relative in the knife business? :) Americans slice bread? Only the few and the brave... :) Joseph
"So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF." "R. Martinez: Does this mean you have not read THE DESIGN REVOLUTION? Read Part II and get back to me. For a carefully nuanced exposition of the EF and how chance, necessity, and design relate, see especially ch. 11." Would it be possible to have a definitive position without having to buy a library? Mark Frank
@Denyse O'Leary, Congratulations! This is no doubt due to your tireless efforts in the face of constant and virulent attacks. Your enemies (and I'm sure they are as the sand of the sea) must be seething with envy and rage. It's hard for me to refrain from laughing out loud. Mapou
What's so great about sliced bread? The flavor is better kept if the loaf is kept intact and only sliced as needed. Olofsson, Professor of psomitology Prof_P.Olofsson
You say,
I am so clever. My filter is the best thing since the wheel.
Oh, William, sooo humble. I think you've been defeated over at Panda's Thumb recently. Did you catch the article? It was about one of your recent concessions. Which apparently took you years to admit! The post is called Vindication: http://pandasthumb.org/archives/2008/12/vindication.html You should respond to it on this blog! NS http://sciencedefeated.wordpress.com/ notedscholar
Re Denyse O'Leary's blog, the Post-Darwinist - which some have reported as placing 3rd in the Canadian sci-tech blogger awards - it actually placed 4th. I had not publicized its placement; that was done by others, and I only found out about it while checking my mail. In fact, I had not written a post about this myself because I had not had a chance to verify it, due to unavoidable other business yesterday. When I did check early this morning, I noticed that the Post-D was 4th. Apparently, the news poster who said it was 3rd had experienced a sight error while viewing the list. Here is the listing as of December 11, 2008 2:10 pm EST. I am informed by a friend that most of the people grumbling about "pseudo-science" placed worse than the Post-Darwinist. If so, I wonder if they will try to block the Post-D's entry next year. Shrug. O'Leary
I think that, if one were to apply Dr. Dembski's explanatory filter to the universe itself, the latter would be seen as having been designed. Here's my take on it: 1. Is there a law that requires huge numbers of particles (e.g., electrons) to have the exact same properties (e.g., mass, charge, spin orientations)? Answer: No. 2. Given that the number of possible properties that particles can have is infinite, is it likely that huge numbers of particles would have the exact same properties, if one assumed that the universe is a chance occurrence? Answer: No. 3. Are the properties of the particles that comprise the universe specified? Answer: Yes. Corollary: If the universe was designed, how plausible is it that the same intelligent agency that designed it could have just as easily designed complex lifeforms? Answer: Extremely plausible. Mapou
Patrick, You may recall my point in the original thread that the rigorous math and science should come first, and the writing for a general audience should come later. A picture is merely an aid to intuition. It is not a rigorous explanation. If we are indeed talking about a "flow of logic," then the filter can be expressed formally. It is generally more natural to express "flow" with an algorithm than with a set of logical sentences, but something along the lines of what we saw in the appendix of The Design Inference, along with an interpretation (in the sense of semantics), would be fine. Sal Gal
Excuse me. May I please have your attention- Ladies and Gentlemen, Let me introduce you to the Pre-Natural Intelligent Designer Thank You. Thank you very much... Joseph
Bill, would you still say that the following image adequately represents the EF? The Explanatory Filter I realize the "in practice" usage is explained in text elsewhere, but personally I feel that the binary node flowchart does not adequately, or at least precisely, explain the flow of logic. I think that's the main brunt of the criticism, and it can easily be fixed by updating the graphic and publishing it in a new book. Patrick
Sal Gal, The elimination is of regularity ALONE and chance ALONE. We know that designing agencies take into account the physical laws. So designs are a combination of agency working within those laws or using those laws as part of the design. Electronics is an example. And as far as CSI, I see it as a verifier of the design inference reached via the EF. To jerry- great point! Signs of intelligence takes many forms. Joseph
William Dembski says,
The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision.
In an appendix of The Design Inference: Eliminating Chance Through Small Probabilities, you described the filter in a formal logical language. Where is the logically precise statement of your current version of the EF? If you no longer proceed by elimination of chance and regularity, then what sense is there in saying that you are filtering? Sal Gal
This is great news! The EF in combination with SC makes for a wonderful logical framework for design detection! Thanks for the post, Bill PaulN
I made this comment yesterday on the Olofsson thread. It seems appropriate here. We use the term intelligence loosely in the EF, but what it means is non-law and non-chance. A lot of the so-called fossils in the Precambrian are "trace fossils." These are not body forms or even parts of bodies but evidence that a life form was there - in other words, paths in the sediment made as some worm-like creature passed through. So the paleontologist rightly concluded these were due to some life form and not to any chance event or law-like process. The intelligence was minimal but instinctively the classification was made. Is this not an example of the EF being used? I think we would use the EF to conclude that life was present - for example, if the trace was due to a plant or maybe even a single-celled organism, if such an organism was capable of leaving a trace. I am not trying to make a big deal of this but just thought it was curious given the discussion. Namely, that the EF is a natural process we as human beings use. jerry
Dave, I would also add that the EF is only as good as the people (person) using it. Joseph
Bill, "it takes pre-theoretic ordinary reasoning and attempts to give it logical precision" I like this; it's what I've also been trying to do for 30 years now, with mixed success! Granville Sewell
I think the Explanatory Filter ranks among the most brilliant inventions of all time… I agree, but keep in mind that it still ranks far behind Darwin’s theory, which is, as we all know, The Best Idea Anyone Ever Had. Congrats to Denyse indeed! How did those guys come up with the courage to vote for her blog? Aren’t they risking their careers and reputations by giving credibility to creationists who are attempting to destroy science and establish a theocracy? GilDodgen
I am trying to explain the EF to a friend who does not have any of your books, Mr Dembski, so I was hoping there was an example of its use somewhere on the web. Anyone? GSV
DaveScot: Right. I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a "rational reconstruction" -- it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I'll make sure it stays in circulation. William Dembski
Good. The explanatory filter is as robust as the data that is used with it. In fact I would say that the basic structure of the explanatory filter is instinctive in the human species. All of us use it frequently to distinguish between true and false. DaveScot
William Dembski: "Does this mean you have not read THE DESIGN REVOLUTION? Read Part II and get back to me. For a carefully nuanced exposition of the EF and how chance, necessity, and design relate, see especially ch. 11." No, I haven't. But I was alluding to your comment that *implied* "chance" and "design" *were* mutually exclusive when you stated that they were not in the off-hand comment. I will, with great interest, read TDR. Also, if I recall correctly, you said in "Intelligent Design" (1999) that mutation cannot be random based on the existence of specified complexity. Of course, I agree. Buttressing this fact and its logic is the existence of Intelligence and Design, and adaptation, seen in every aspect of nature. Ray R. Martinez
R. Martinez: Does this mean you have not read THE DESIGN REVOLUTION? Read Part II and get back to me. For a carefully nuanced exposition of the EF and how chance, necessity, and design relate, see especially ch. 11. William Dembski
William Dembski: "In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter...." Does this mean rejection of mechanisms tied to a genuine element of "chance" (mechanisms that we know do not exist)? Does this mean "chance" & "design" are, in fact, mutually exclusive? If not, you should consider facts explained briefly in post #47 of the extended Professor Olofsson topic, facts that you seem to have forgotten temporarily when you posted the "off-hand comment[s]." Ray R. Martinez
By all means, keep the X filter. It seems like the best way to sum up the overall methodology of using specified complexity in the first place. F2XL
Maybe you can even refine it some: Is it necessity, i.e. is there any law that can explain it or can increase the chance of it happening, as per a Fisherian null hypothesis? Can it be chance, using a probability based on a Fisherian null hypothesis? If no to both, you have design. tribune7
Dr. Dembski, You told us previously what was wrong with the EF. What was wrong with your statement of what was wrong? Sal Gal
Brilliant! I guess that is why this guy didn't get the "thank you" he was hopin' for. Those guys will be very upset. . . shame that. Robbie
All the EF does is increase the burden of proof for design. It doesn't hurt anything. tribune7
