Uncommon Descent Serving The Intelligent Design Community

HeKS strikes gold again, or, why strong evidence of design is so often stoutly resisted or dismissed


New UD contributor HeKS notes:

The evidence of purposeful design [–> in the cosmos and world of life] is overwhelming on any objective analysis, but due to Methodological Naturalism it is claimed to be merely an appearance of purposeful design, an illusion, while naturalistic processes are claimed to be sufficient to achieve this appearance of purposeful design, though none have ever been demonstrated to be up to the task. They are claimed to be up to the task only because they are the only plausible-sounding naturalistic explanations available.

He goes on to add:

The argument for ID is an abductive argument. An abductive argument basically takes the form: “We observe an effect, x is causally adequate to explain the effect and is the most common [–> let’s adjust: per a good reason, the most plausible] cause of the effect, therefore x is currently the best explanation of the effect.” This is called an inference to the best explanation.

When it comes to ID in particular, the form of the abductive argument is even stronger. It takes the form: “We observe an effect, x is uniquely causally adequate to explain the effect as, presently, no other known explanation is causally adequate to explain the effect, therefore x is currently the best explanation of the effect.”

Abductive arguments [–> and broader inductive arguments] are always held tentatively because they cannot be as certain as deductive arguments [–> rooted in known true premises and using correct deductions step by step], but they are a perfectly valid form of argumentation and their conclusions are legitimate as long as the premises remain true, because they are a statement about the current state of our knowledge and the evidence rather than deductive statements about reality.

Abductive reasoning is, in fact, the standard form of reasoning on matters of historical science, whereas inductive reasoning is used on matters in the present and future.

And, on fair and well-warranted comment, design is the only actually observed and needle-in-haystack search-plausible cause of functionally specific complex organisation and associated information (FSCO/I), which is abundantly common in the world of life and in the physics of the cosmos. Summing up diagrammatically:

[Figure: definition of complex specified information (CSI)]
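As a quick numerical gloss on the needle-in-haystack phrase just used, here is a minimal sketch, assuming the 500-bit threshold invoked later in this thread and the commonly cited ~10^150 bound on available physical events; both figures are illustrative assumptions, not measurements from the post:

```python
import math

# Sketch of the "needle in a haystack" threshold argument: ~10^150 is the
# commonly cited upper bound on physical events in the observable universe,
# so blind search is argued to be hopeless past log2(10^150) ~ 498 bits.
bound_bits = 150 * math.log2(10)
print(f"Universal probabilistic resources: ~{bound_bits:.1f} bits")  # ~498.3

target_bits = 500  # the threshold used in Barry's challenge below
factor = 2.0 ** (target_bits - bound_bits)
print(f"A 500-bit target exceeds that bound by a factor of ~{factor:.1f}")  # ~3.3
```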

Similarly, we may document the inductive, inference to best current explanation logic of the design inference in a flow chart:

[Figure: the design inference explanatory filter flow chart]

Also, we may give an iconic case, the protein synthesis process (noting the functional significance of proper folding),

[Figure: overview of the protein synthesis process]

. . . especially the part where proteins are assembled in the ribosome based on the coded algorithmic information in the mRNA tape threaded through the ribosome:

[Figure: translation of mRNA into protein at the ribosome]

And, for those who need it, an animated video clip may be helpful:

[youtube aQgO5gGb67c]

So, instantly, we may ask: what is the only actually — and in fact routinely — observed causal source of codes, algorithms, and associated co-ordinated, organised execution machinery?

ANS: intelligently directed contingency, aka design, where there is no good reason to assume, imply or constrain such intelligence to humans.
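To make the flow-chart logic above concrete, here is a minimal pseudocode-style sketch of the explanatory filter; the predicate names and the 500-bit cutoff are placeholder assumptions of mine, not a published implementation:

```python
BITS_THRESHOLD = 500  # complexity cutoff echoed in the comments below

def explanatory_filter(is_law_like, information_bits, is_specified):
    """Classify an event as necessity, chance, or design (illustrative only)."""
    if is_law_like:
        return "necessity"  # high-probability regularity of natural law
    if information_bits < BITS_THRESHOLD or not is_specified:
        return "chance"     # not both complex and specified
    return "design"         # complex AND independently specified

# Example with made-up inputs for a functional protein fold:
print(explanatory_filter(is_law_like=False, information_bits=650, is_specified=True))
```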

Where also, FSCO/I, or even the wider Complex Specified Information, is not an incoherent mish-mash dreamed up by silly, brainwashed or Machiavellian IDiots trying to subvert science and science education by smuggling in Creationism while lurking in cheap tuxedos. Instead, the key notions and the very name itself trace to events across the 1970s and into the early 1980s, as eminent scientists tried to come to grips with the evidence of the cell and of cosmology, as was noted in reply to a comment on the UD Weak Argument Correctives:

. . . we can see across the 1970s, how OOL researchers not connected to design theory, Orgel (1973) and Wicken (1979), spoke on the record to highlight a key feature of the organisation of cell based life:

ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [ –> i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [ –> originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]

At the turn of the 1980s, the Nobel-equivalent prize-holding astrophysicist and lifelong agnostic Sir Fred Hoyle went on astonishing record:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare's plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [Evolution from Space (The Omni Lecture [–> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

Based on things I have seen, this usage of the term Intelligent Design may in fact be the historical source of the theory's name.

The same worthy is also on well-known record on cosmological design in light of evident fine tuning:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12C to the 7.12 MeV level in 16O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has "monkeyed" with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]

A talk given at Caltech (for which the above seems originally to have been the concluding remarks) adds:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
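Hoyle's combinatorial claim above (200 links, 20 options per link) is easy to check numerically; a small sketch, where the ~10^80 atoms figure is a standard order-of-magnitude estimate I am supplying, not part of the talk:

```python
import math

# 200-residue chain, 20 amino-acid options per position (Hoyle's figures)
arrangements = 20 ** 200
atoms_in_observable_universe = 10 ** 80  # standard rough estimate (assumption)

print(f"Possible sequences: ~10^{math.log10(arrangements):.0f}")  # ~10^260
print(arrangements > atoms_in_observable_universe)                # True
```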

These words in the same talk must have set his audience on their ears:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

So, then, why is the design inference so often so stoutly resisted?

LEWONTIN, 1997: . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Billions and billions of demons, NYRB Jan 1997. If you imagine that the above has been “quote mined” kindly read the fuller extract and notes here on, noting the onward link to the original article.]

NSTA BOARD, 2000: The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts [–> as in, Phil Johnson was dead on target in his retort to Lewontin, science is being radically re-defined on a foundation of a priori evolutionary materialism from hydrogen to humans] . . . .

Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations [–> the ideological loading now exerts censorship on science] supported by empirical evidence [–> but the evidence is never allowed to speak outside a materialistic circle so the questions are begged at the outset] that are, at least in principle, testable against the natural world [–> but the competition is only allowed to be among contestants passed by the Materialist Guardian Council] . . . .

Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [–> in fact this imposes a strawman caricature of the alternative to a priori materialism, as was documented since Plato in The Laws, Bk X, namely natural vs artificial causal factors, that may in principle be analysed on empirical characteristics that may be observed. Once one already labels "supernatural" and implies "irrational," huge questions are a priori begged and prejudices amounting to bigotry are excited to impose censorship, which here is being institutionalised in science education by the National Science Teachers Association board of the USA.] in the production of scientific knowledge. [NSTA, Board of Directors, July 2000. Emphases added.]

MAHNER, 2011: This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . .

Metaphysical or ontological naturalism (henceforth: ON) [“roughly” and “simply”] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON. [In his recent Science & Education article, "The Role of Metaphysical Naturalism in Science" (2011)]

In short, there is strong evidence of ideological bias and censorship in contemporary science and science education on especially matters of origins, reflecting the dominance of a priori evolutionary materialism.

To all such, Philip Johnson’s reply to Lewontin of November 1997 is a classic:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original.] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Please, bear such in mind when you continue to observe the debate exchanges here at UD and beyond. END

Comments
#109 Nightlight - Despite the really interesting stuff you can extract from CSI, I can't help thinking that you are (almost wilfully) missing the elephant in the room. If you don't think Dembski's done it, how would you propose that we can consistently and reliably detect design (I do not mean designers, who may be inaccessible to observation, but just whether something has been designed by a mind)?

Thomas2
October 10, 2014, 10:33 AM PDT
#107 HeKS
But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. But in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality.
You are perfectly correct here, but only as far as you go. What you have arrived at is the realization that the amount of "information" (which is the 'I' in 'CSI') is a relative quantity, like saying coordinate x of some object is 500 yards, which only means that object is 500 yards away from the arbitrarily chosen origin x=0 of the coordinate system (see earlier post on this point). But then for some "mysterious" reason you pulled back, stopping short of the next natural reasoning step. We can solve the "mystery" if we follow up your truncated reasoning just a few more steps, where it will become clear why you had to abruptly halt it.

Following up in your own words, is there a way to be certain that you truly "know the actual process" that "caused the occurrence of the event"? In fact, you have no way of knowing that, unless perhaps you can prove that you are an omniscient being. Consequently, what you are really saying about the large CSI of the structures in living organisms that you computed is: "if God were as smart as I am presently, he would have needed to input this amount of CSI into the construction of this structure." So what? What if God's IQ is different than your IQ? Shouldn't we allow for that possibility, perhaps? Wouldn't that make the actual CSI (from the actual process) different than the claimed figure?

In other words, CSI=500 bits to construct some object is as universally significant as saying coordinate x of the object is 500 yards. Marveling at how some protein got to have CSI=500 bits is like marveling at how some rock got to have coordinate x=500 yards -- it got it because you happened to set the origin x=0 of the coordinate system 500 yards to the left of that rock, that's how. Similarly, that protein has got 500 bits of CSI because you "HeKS" personally happened to be able to come up so far with a kind of computing system and an algorithm running on it that needs a 500 bit program (code+data) to reproduce it. Big to-do, let's trumpet that around the world.

Hence, for any quantitative CSI claim you make, you need to effectively retract it immediately with the qualifier "to the best of my (algorithmic) ingenuity". Of course, such a retraction transforms the alleged major "scientific discovery" of universal truths into merely a fanciful way to disclose the "state of your (algorithmic) ingenuity." While that disclosure may be of interest perhaps to your teacher or to your employer, it is certainly not a "scientific discovery" worth trumpeting in science courses around the world from now on. That also renders "Barry's challenge" scientifically vacuous.

But the more important (than the above vacuity) unfortunate side effect of wrapping the CSI concept into the wishful and superfluous concoctions of Discovery Institute's ID is that it buries and debases the genuinely valuable CSI findings by Dembski and others on whose research his work was built. As explained in a previous post, the real CSI finding of universal importance is that phenomena in nature are lawful or compressible, and how lawful/compressible, i.e. they are computable using less front-loaded information than the raw data of the phenomena would suggest. The CSI is then a way to quantify that difference, i.e. it is a way to quantify the lawfulness in nature. Note that, unlike the vacuity of absolute claims like 'rock has x=500 yards', this is a relative quantification, like saying the rock is 500 yards to the right of that cliff from which it broke off.
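[A small illustration of the relativity point nightlight is making here, sketched with tools of my own choosing: the same string compresses to different sizes under different algorithms, so any bit count reports the state of our compression ingenuity rather than an absolute property of the object:

```python
import bz2
import zlib

data = b"ABABABABABAB" * 50  # a highly regular (lawful, compressible) string

print(len(data), "raw bytes")
print(len(zlib.compress(data, 9)), "bytes under zlib")  # one "description length"
print(len(bz2.compress(data, 9)), "bytes under bz2")    # a different one
# Neither count is "the" information content; each is relative to its compressor.
```
]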
Of course, that finding is not only perfectly harmonious with the basic premise of natural science, comprehensibility of nature, but it also corroborates its defining mission, which is none other than discovering nature's compression algorithms (natural laws), i.e. finding the 'go of it' as James Clerk Maxwell used to put it.

nightlight
October 10, 2014, 08:47 AM PDT
R0bb:
But to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis, namely design.
Because you say so, or do you have a valid reason? What we do is to determine if CSI is present. That alone is evidence for design for the reason provided. That said, if you or anyone else ever demonstrates that nature, operating freely, can produce CSI, the presence of CSI will no longer indicate intelligent design.

Joe
October 10, 2014, 03:58 AM PDT
@R0bb #102 Hi R0bb, I'm not really sure what's happening in the discussion between you and me, but I have to assume that there is some kind of serious, fundamental misunderstanding between us, because nothing you said in your comment has anything to do with what I said or in any way follows from it. Your conclusion about how to reword Barry's challenge might as well have come right out of thin air. Operating on the assumption that there is, indeed, some kind of fundamental misunderstanding going on here that has led to all this confusion, I'm going to try this once more, from the start, to reason this through with you. If I happen to dwell on some point you're already aware of, you'll have to forgive me, because I don't want to chance further misunderstanding.

Now, let's start at the beginning. What does it mean to say that some object, pattern, event, system, etc. has "CSI", or Complex Specified Information? Well, it's primarily the first two words we need to concern ourselves with in terms of the methodology and logic of calculating CSI, so let's consider them individually.

Complex

The word "complex" is used in two primary senses. The first and most commonly used meaning is, "consisting of many well-matched parts". The second meaning of "complex" is, "improbable". When it comes to calculating a value of Complex Specified Information, "complex" refers to the second meaning, "improbable". [As a side point, a part of me thinks that some amount of confusion could be avoided if the name was changed from Complex Specified Information (CSI) to Highly-Improbable Specified Information (HISI).]

Now, recognizing that the "complexity" of CSI corresponds to improbability, there are a few things that are vitally important to understand. First, it is incoherent to discuss improbability apart from a chance hypothesis. While we can talk about the probability of a quarter landing heads-up or a rolled die coming up 3, we don't talk about the probability of a person intentionally placing a quarter heads-up on a table, or of purposefully setting down a die so that it shows the number 3. These latter types of events are determined by intentional action rather than being governed in some respect by random or unforeseeable processes.

Second, improbability values do not exist in a vacuum, nor are they inherent to a pattern, event, etc. Rather, a measure of the improbability of some event, pattern, etc., is directly connected to a specific chance hypothesis that seeks to explain the event, and it is only valid in relation to that particular chance hypothesis used to make the calculation.

Let's consider a very simple example. Imagine a case where you are considering the occurrence of an event, which we'll call EVENT-X, for which two chance hypotheses, which we'll call HYP-A and HYP-B, have been offered to explain the occurrence of EVENT-X. After doing some math that I won't attempt, suppose we determine that the chances of EVENT-X happening given HYP-A as the proposed explanation are 1 in 3, while the chances of EVENT-X happening given HYP-B as the proposed explanation are 1 in 10,000. In determining this, we cannot say that EVENT-X is inherently either probable or improbable in and of itself. What we can say is that EVENT-X is probable on HYP-A, but is improbable on HYP-B.
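[A numerical gloss on HeKS's point that improbability is hypothesis-relative, using the standard surprisal formula I = -log2(P); the formula is my addition, and note it yields different figures than the stylized 3 and 10,000 bits used later in the comment:

```python
import math

p_hyp_a = 1 / 3       # probability of EVENT-X under HYP-A
p_hyp_b = 1 / 10_000  # probability of EVENT-X under HYP-B

print(f"Bits under HYP-A: {-math.log2(p_hyp_a):.2f}")  # ~1.58
print(f"Bits under HYP-B: {-math.log2(p_hyp_b):.2f}")  # ~13.29
# The same event carries a different bit value under each chance hypothesis;
# neither number is a property of EVENT-X "in itself".
```
]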
Assuming for a moment that we don't already know what actually caused EVENT-X to occur, and assuming that HYP-A and HYP-B are the only known relevant chance hypotheses that might be able to account for the occurrence of EVENT-X, it is reasonable for us to conclude that HYP-A is the proper explanation, since the occurrence of EVENT-X is highly probable on HYP-A, while it would be highly improbable on HYP-B.

But now let's change this up and say that we actually know what caused EVENT-X to occur and it really was HYP-A that got the job done. If this is the case, the occurrence of EVENT-X was not improbable, because it was actually very probable given the process that caused it. We cannot say that the occurrence of EVENT-X was actually highly improbable because the odds of it occurring would have been 1 in 10,000 if it had occurred as a result of HYP-B. The 1 in 10,000 odds have no validity apart from a calculation that assumes HYP-B was the cause, which it wasn't.

But now let's change it up again, and suppose that we still know what actually caused EVENT-X to occur, but it was really HYP-B rather than HYP-A. In this case we can say that the occurrence of EVENT-X was highly improbable, because EVENT-X occurred as a result of HYP-B and the chances of it occurring on HYP-B were only 1 in 10,000. If HYP-B was the culprit, we cannot turn around and say that the occurrence of EVENT-X really wasn't improbable after all because the odds of it occurring would have been 1 in 3 if it had occurred as a result of HYP-A. Just as was the case before, the 1 in 3 odds have no validity apart from a calculation that assumes HYP-A was the cause, which it wasn't.

So, to recap, while still assuming that the actual chance cause is really known, the probability or improbability of the occurrence of some event depends entirely on the chance process that actually brought it about. Probabilities of its occurrence if it had been caused by some different process are irrelevant to the actual probability or improbability of its occurrence. One cannot simply port probabilities between different chance hypotheses, nor can one smuggle the probability associated with an incorrect chance hypothesis over to the event itself to avoid the probability or improbability that is calculated on the basis of the correct chance hypothesis.

Specified

What does it mean to say that some event or pattern is specified? An event, pattern, object, etc. is considered to match the requirement of specification when 1) the configuration of its make-up or structure falls within a range of possibilities that is subject to a relatively simple, generalized description (i.e. the specification), and 2) the pattern, object, event under consideration is an independent instantiation of the specification, which means that the specification itself cannot be in the causal chain of the instantiation.

CSI

We can now put this together to consider how the Design Inference is made and how something is determined to be an example of CSI. To say that something exhibits a high degree of CSI is to say that it constitutes a highly improbable match to an independent specification. But again, because this is a matter of probability/improbability, what it really means is that it constitutes a match to an independent pattern that would be highly improbable to arise through any known and relevant chance process. Before such a determination can be made, one must consider all known, relevant chance processes that might be capable of bringing about the pattern, event, etc.
in question. The only way that the pattern, event, etc. will be determined to exhibit a high degree of CSI is if it meets the necessary requirements for that designation under all relevant chance hypotheses that might be capable of explaining it. If some event exhibits very high CSI under some chance hypotheses (because of being highly improbable to occur by those processes) but exhibits little or no CSI under other chance hypotheses (because of being highly probable to occur by those processes), the event will not be considered to have passed muster and it will not be considered to exhibit a high degree of CSI. Instead, it will be assumed that the occurrence of the event is explainable by reference to one of the chance hypotheses that rendered its occurrence probable. It is very important to understand that the event will not be considered to have a high degree of CSI simply because one or some of the proposed chance hypotheses might have led to a high calculation of CSI. To repeat, in order for an event to be determined to exhibit a high degree of CSI it needs to be highly improbable under all relevant chance hypotheses, not just some. If the event does meet all requirements - including a high degree of improbability - under all chance hypotheses, then it will be deemed to exhibit a high degree of CSI, which, once again, means that it will be deemed to be a match to (or an instantiation of) an independent specification that is highly improbable to occur by means of any known naturalistic processes. On this basis, it will be inferred that the event was a product of design. That's how the situation plays out when the actual cause is not already known. But now let's flip things around and use our earlier example of EVENT-X on HYP-A and HYP-B to see how it would work to determine the CSI associated with EVENT-X when we already know the cause. Suppose now that we calculate the CSI associated with EVENT-X on the assumption of HYP-A as being 3 bits, but we calculate the CSI associated with EVENT-X on HYP-B as being 10,000 bits. Now let's picture two scenarios. In Scenario 1, we know for a fact that EVENT-X is properly explained by HYP-A. How many bits of CSI do we then conclude are associated with EVENT-X? The answer is 3 bits. Why? Because the calculation of 3 bits is exclusively associated with HYP-A and is only valid under HYP-A as it is based on a calculation of probability that is exclusively associated with and relevant to HYP-A. And if EVENT-X exhibits 3 bits of CSI and was brought about by the natural process connected to HYP-A, how many bits of CSI did a natural process produce in this instance? Again, obviously, 3 bits. We cannot appeal to the fact that HYP-B led to a calculation of 10,000 bits and so that is how many bits of CSI natural processes really produced in this instance, because HYP-B didn't cause EVENT-X in this scenario, so we cannot smuggle over a calculation of 10,000 bits that is only valid and relevant to the discarded hypothesis (HYP-B) and which has no actual connection to reality. Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance. 
And just like before, we could not appeal to the fact that EVENT-X only exhibited 3 bits of CSI under HYP-A and so that is the amount of CSI we should associate with EVENT-X, because we would know that EVENT-X was actually caused by HYP-B and was not caused by HYP-A, which means the 3 bits calculation was a purely hypothetical value strictly associated with a false hypothesis and has no connection to reality.

Revenge of the Challenge

In finally coming to Barry's challenge, we must properly understand the circumstances that are implied by it. And what are those circumstances? Well, it requires that we know the event we're measuring was actually caused by natural processes and that we know specifically what natural process brought about the event, pattern, object, etc. that we're calculating a CSI value on. It is only under those circumstances that we can get an actual measure of the CSI associated with the event, because it is the only way we can get an actual rather than purely hypothetical measure of the improbability of the event, which is a calculation that is entirely dependent upon the chance process that actually brought it about. If the CSI value of the event turns out to be over 500 bits when the calculation is made with reference to that chance process that was actually responsible for the event, then Barry's challenge will have been met. But if the CSI value turns out to be lower than 500 bits, the challenge will not have been met.

What one absolutely cannot do is come up with a high CSI calculation based on the assumption of a false hypothesis and then attempt to port that CSI value over to the event in order to claim that the challenge has been met. This simply doesn't work. Such a value would be completely, utterly, and absolutely irrelevant to the challenge. What it would be is simply a calculation of how much CSI would have been produced by a natural process if some other process that rendered the occurrence of the event highly improbable had actually been the one to produce it. One can imagine these kinds of scenarios all one likes, but such imaginings and hypotheticals are irrelevant to the challenge.

Nothing in any of this suggests that "to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis [of] design." This is simply a complete misunderstanding of the nature of the challenge, which is about demonstrating that a natural process is capable of producing a large amount of CSI. In order to demonstrate that a natural process can produce a large amount of CSI, the correct hypothesis obviously has to be a natural one, not one that appeals to design (if it was designed then it wasn't natural). But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. But in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality. You can't just choose whatever chance hypothesis you like because it happens to provide a high CSI calculation. The challenge is about what natural processes can actually do, not about what they might hypothetically be able to do.
If you want to meet a challenge asking you to demonstrate that natural processes can actually do something specific, then you need to demonstrate that they can actually do that specific thing. If the challenge is to show that natural processes can produce a large amount of CSI, then you can only point to events, objects, patterns, etc. that you know for a fact were produced by natural processes, which demands that you know what the actual process was that produced it. You must then show that the event, object, pattern, etc. in question is calculated to have a large amount of CSI when the calculation is made on the basis of that natural process that you know to have been the cause. Honestly, beyond what I've written here, I don't know what else I could possibly say to make this any more clear. Take care, HeKS

HeKS
October 10, 2014, 01:18 AM PDT
HeKS:
Here’s the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota.
Further, even if we granted that the face caused the likeness, there are many presidents all of whom had a face, and only four likenesses. I suppose we'll just have to wait to see if likenesses of the faces of the other presidents appear, and until then, just take it on faith that they will. There's no law against it, you know.

Mung
October 9, 2014, 04:01 PM PDT
#104 Nightlight - Noting that some things in nature are the result of intentional, mindful design, Dembski has attempted to formulate and develop a scientific way of reliably describing/detecting such design. His religion-free proposal says that certain observations scrutinised and analysed in a specific manner can unequivocally justify an intelligent design hypothesis; and when such hypotheses are made, it seems to me that you should normally have material for further investigation and test in the natural world. Thus you have in Dembski's proposal a genuine natural scientific law. Your observation that the particular tool of CSI (an attempt to give improved precision and quantification to "specified complexity") actually detects lawfulness or compressibility in the observed phenomena might possibly be the case, but it does not replace the legitimate, repeatable and testable inference to a design hypothesis. [PS: When in this context Dembski is wearing his science hat, he does not employ "theological verbiage"; and as regards any philosophical reasoning he might use, it should be noted that without philosophy science cannot work - it is dead, going nowhere.]

Thomas2
October 9, 2014, 03:55 PM PDT
#103 Essentially ID proposes a scientific law for detecting design. What Dembski's CSI method, stripped of the above fluff and scientifically vacuous theological/philosophical verbiage, actually detects is lawfulness or compressibility in the observed phenomena. Hence all it shows is that the observed phenomena can be computed using much smaller front loading than their raw (uncompressed) appearance would suggest.

nightlight
October 9, 2014, 01:47 PM PDT
#101 Nightlight - In a nutshell, ID as a would-be natural scientific theory states that where in nature an entity exhibits non-deterministic, appropriately statistically significant, tractable and conditionally independent specified complexity, then an unequivocal intelligent design inference/hypothesis can be made (where the resulting design hypothesis should then itself be subject to appropriate test, and CSI is a particular way of defining and quantifying specified complexity). [Note that religious views don't come into it.] Essentially ID proposes a scientific law for detecting design. How is this not a form of natural science?

Thomas2
October 9, 2014, 12:06 PM PDT
HeKS, Thanks for contacting Ewert. I have to admit that I'm surprised at his answer. I would think that the ramifications of such an interpretation would be unacceptable to ID proponents. For example, consider Joe's statement in #80:
The only evidence that we have says that CSI only comes from intelligent agencies.
Joe, and every other IDist that I know, consider it uncontroversial that CSI comes from intelligent agents. But to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis, namely design. No IDist has ever done that, and Dembski argues that the concept doesn't even make sense. (See here where he says that there is "no reason to think that such probabilities make sense", and The Design Inference p. 39 where he says that explanations that appeal to design are not "characterized by probability".) So in saying that Ewert hasn't met Barry's challenge, you're consequently denying a fundamental ID claim. You're saying that the following modified version of Barry's challenge can't be met: Show me one example – just one; that's all I need – of an intelligent agent creating 500 bits of complex specified information. That seems like an awfully high price to pay.

R0bb
October 9, 2014, 11:52 AM PDT
#98 HeKS
The fact that a method of detection seeking to identify the results of design over chance processes would be fine-tuned in a way that typically excludes, with high fidelity, the sorts of things we see happen by chance processes is unsurprising.
I see. While I was discussing whether Discovery Institute's ID is suitable as part of natural science, you were apparently talking about whether it is suitable as theological or philosophical or literary or conversational material. While there is no question that anyone can play with the semantics of CSI and weave some warm and fuzzy lines and shapes around it into passable stories in any of those other fields, there isn't a scrap's worth of natural science in any such narrative that could be legitimately taught in science class. Unfortunately, all that yarn fueled by religious zealotry (or maybe by plain fear of death in some) has completely buried under layers of muck a little gem worthy of scientific attention, which is the CSI as an intriguing mathematical abstraction yielding interesting results about the power of search algorithms and restating the older concepts of lawfulness and compressibility in the language of search algorithms.
3) If you don't see a difference between a rock making an imprint in mud that it happens to come into contact with and the creation of, say, a computer monitor, I don't think there's much I can say to help you.
I wasn't discussing what I can see or feel, but whether Discovery Institute's ID is a natural science (it's not). Namely, not everything that you or I can see or sense or feel is part of natural science. Natural science doesn't capture (as yet) the complete content of human experience. My point is that you can't just go out and peddle any odd feelings and sensations that come over you as a natural science.

nightlight
October 9, 2014, 09:24 AM PDT
nightlight:
My real point is that none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process), i.e. that they cannot be a manifestation of some purely lawful underlying processes.
If you are asking for a formal, deductive proof, then you are correct, we don't have proof of a negative. No-one has claimed to have such a proof. What we need to do is look at the overall weight of the evidence and draw an inference to the best explanation.

For example, we have multiple, observable examples in the real world of engineering and design that are at least similar to some of the kinds of systems we see in biology. And we know that those required sophisticated and carefully coordinated programming and intelligence for their existence -- essentially across the board. And yet the best you can come up with is the assertion that there might be some unidentifiable, unknown, as-yet-undiscovered "few lines of code" that could produce everything we see in the living world? That doesn't even pass the smell test, much less come close to being the "best explanation" for living systems. You want to hold out hope for some as-yet-undiscovered natural algorithm that can produce everything? Fine, you have the prerogative to repose your blind faith wherever you wish, with the hope that at some distant day your faith will be confirmed. The rest of us prefer to look at the actual evidence on the ground today to see what the best explanation is. (Actually, it is much worse than that, because there are excellent reasons to affirmatively conclude that such a natural algorithm is not possible, even in principle.)

So far the only examples you've been able to come up with are a couple of simplistic and poor analogies. You keep harping on chess, for example. Yet a chess program is written by intelligent beings, operating on an intelligently-designed operating system, on intelligently-designed hardware. There is nothing purely naturalistic about it. Furthermore, the fact that it can often beat its creator tells us nothing about whether it has somehow gone beyond its initial programming to create new things. The reason a computer can beat me at chess is because the program is set up to take advantage of what computers are stupendously good at: running myriad calculations per second and tracking possibilities. We are duly impressed by the speed and extent of its calculations and its ability to track move possibilities, but it isn't doing anything special beyond what it was programmed to do. You haven't provided any evidence that a simple algorithm could, even in principle, produce all the design we see in life.

So as we look to draw a reasonable inference about the best explanation for life, we have a stark contrast: Your proposal has no real-world examples and is based on a hoped-for future discovery of some undefined, unknown, heretofore unseen algorithm in nature. In contrast, intelligent design has billions of real-world examples and is based on what we do currently know about nature and the cause and effect relationships that exist in the world. ID wins hands down. It is not even a close call.

Eric Anderson
October 9, 2014, 09:17 AM PDT
I think nightlight's point is this: "Since everything that exists is reducible to physical matter/processes, then all CSI is always the product of the same (and has the same origin). The causal chain merely takes us back through physical mechanisms to the big bang or the multiverse." But this idea breaks down in the evolutionary model and abiogenesis models when we ask for evidence of the natural origin of the first DNA or the first multicellular life or the first body plans, etc. You can't just assume reductionism - you have to prove it. If unproven, which it is, there remains the proposal that intelligence is not a product of, or determined by, physical processes alone.

Silver Asiatic
October 9, 2014, 08:54 AM PDT
@nightlight #97 I'll try to look at this in more depth tomorrow, but a few quick comments.

1) It's not clear to me that you actually understand the No True Scotsman fallacy in spite of your propensity for invoking it. The fact that a method of detection seeking to identify the results of design over chance processes would be fine-tuned in a way that typically excludes, with high fidelity, the sorts of things we see happen by chance processes is unsurprising. It is to be expected in principle. And yet it does not, by definition, eliminate the possibility of chance processes accomplishing these things. There is nothing fallacious in this methodology.

2) The matter of causal chain length has nothing to do with anything. There's no magic or hidden number of steps in the chain at which something changes. It's as simple as this: For the creation of an object, event or pattern to be an example of the creation of CSI, the specification must be independent of the instantiation, and by independent it is meant that the specification cannot be in the causal chain leading to the arising of the instantiation at all.

3) If you don't see a difference between a rock making an imprint in mud that it happens to come into contact with and the creation of, say, a computer monitor, I don't think there's much I can say to help you.

HeKS
October 9, 2014, 02:04 AM PDT
#96 HeKS
Your comment asserts a world of absolute determinism governed by unknown, allegedly-simple algorithms
Yes, anything we observe could have been computed by perfectly deterministic algorithms. There is no way to exclude such a possibility, since any finite sequence of observational data points can be generated (computed) by a suitable algorithm.
allowing every possible outcome we might observe to be the necessary product of natural laws.
This is a very common misunderstanding of "natural laws" at UD. Perfectly deterministic natural laws (such as physical laws and any computations) do not on their own determine future events. The deterministic laws are merely one part of the input into the "physics algorithm"; the second part of the input is the data representing initial and boundary conditions (IBC). Only the combined input of Natural Laws + IBC yields, via the "physics algorithm" (or general computations), the specific events or outcomes.

For example, a ball satisfies Newton's laws of motion and gravity. But those laws don't tell you or determine what the ball will do next. You also need to input (i.e. put in by hand) the initial position and velocity of the ball (as two 3-D vectors), plus any forces (such as intercepts, winds, friction, etc.) it will encounter during the flight. The latter two data sets are the arbitrary IBC data. If you consider the entire universe as the "infinite physical system" to which the natural laws are applied (hence there are no finite boundary effects), then you still need to specify the initial conditions of the universe. I.e. even then the natural laws, despite being fully deterministic (like computation), don't on their own determine the future of that system. Only the combination of inputs, Laws + arbitrary 'Initial Conditions', determines the actual outcome.

If you look at the natural laws as compression algorithms (for observational data), then the laws are the code (instructions) of the compression algorithm, while the IBC are the compressed data that are being expanded (by the laws) into the detailed sequence of states or trajectory that the system will traverse. E.g. in the ball case, the full trajectory (thousands or millions of time-stamped coordinates) of a flying ball represents the raw, uncompressed data. The physical laws allow you to compress all this mass of thousands of trajectory numbers into just 6 numbers, the vector of initial position (x, y, z) and vector of initial velocity (Vx, Vy, Vz). If you input those 6 numbers into the 'laws algorithm' it will expand them into thousands (or millions) of numbers of the full trajectory that the ball will traverse.

In short, the fact that some process unfolds perfectly lawfully via deterministic laws does not mean there is just one way that process will unfold. There are in fact as many ways it will unfold as there are possible IBC inputs, which is generally infinitely many possible paths. Restating this in the compression perspective on natural laws, there are as many possible expanded data sequences (the full trajectories of the system) as there are possible compressed sequences (the IBC data sets), i.e. there are infinitely many. The lawfulness is in fact merely another way to restate the compressibility of the observed path/trajectory data points. But that is precisely the same feature of data sequences that the presence of CSI identifies. Namely, in CSI the combined sequence X = D + S, where D='designed pattern' and S='specification pattern for D', is necessarily compressible since symbols from S predict (specify) symbols from D, hence D has some level of redundancy (how much depends on the tightness of specification). In fact the 'no free lunch' results of Dembski and others are merely trivial restatements or translations into search language of the older, well-known incompressibility results (based on the pigeonhole principle) for 'random' or already compressed sequences (generally, max entropy sequences).
Hence, the CSI not only does not disprove lawfulness of the processes in observed phenomena, but is actually a restatement of lawfulness in the language of search algorithms. Since a more general perspective on 'lawfulness' (physical laws) is compressibility, or computability, the detection of CSI in patterns in nature points to a computational origin of such CSI sequences & their specification. It discovers in effect that the universe is rigged, i.e. that there is an underlying computational process which is much more economical than the superficial appearance of the phenomena would suggest. This observation has also been expressed (by Eugene Wigner) as "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". This is precisely why you and others here were stumped at drawing a line which could exclude the 3-D mud image of a rock from the usual examples of CSI. You can't draw such a line because there isn't any such line. You can only play the 'no true Scotsman' sleight of hand by shifting around the semantics of 'local independence' (for face-saving tapering of the "debate", it seems). But there really is no coherent way out, since all three concepts: lawfulness, compressibility and CSI describe precisely the same property of the natural phenomena -- the computability via more economical data than what is contained in the raw data of the observed phenomena. Note also the additional parallel or 'coincidence' here -- in classical philosophy and theology (and especially in mystery cults, such as those of Pythagoras, gnostics and neo-Platonists), the observed lawfulness of the universe was used to argue for the existence of God, just as ID argues the same from its restated lawfulness (the identification of CSI phenomena). The Discovery Institute's ID argument based on CSI is basically a warmed-over theological argument for the existence of God going back to the ancient Greeks (at least; more likely even farther, to ancient Egypt, Persia, China and India).
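[A small sketch of nightlight's 'laws as decompression of initial conditions' picture from the ball example above; the numbers and the simple Euler stepper are my illustrative choices, not part of the comment:

```python
G = 9.81  # gravitational acceleration, m/s^2
DT = 0.1  # time step, s

def expand_trajectory(x, y, z, vx, vy, vz, steps):
    """Expand 6 IBC numbers, via a simple law of motion, into a full trajectory."""
    points = []
    for _ in range(steps):
        points.append((round(x, 2), round(y, 2), round(z, 2)))
        x, y, z = x + vx * DT, y + vy * DT, z + vz * DT
        vz -= G * DT  # the only "law" acting here
    return points

# 6 compressed numbers expand into 100 points (300 raw coordinates):
print(len(expand_trajectory(0, 0, 0, 10, 0, 20, 100)))  # -> 100
```
]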
Here's the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota.
It caused it in the same sense that the 3-D image of the rock was imprinted in the mud. There was no necessity from physical laws for the rock to imprint its image in the mud. With any change in the initial or boundary conditions of the rock, mud or anything in between, there may have been no 3-D image of the rock in the mud (there are infinitely many scenarios that could have happened with that rock and that mud, some with the image, some without). Which is exactly the situation with the connection, via a causal chain of lawful processes, between the Mount Rushmore images and the faces of presidents. The only difference is in 'which specific interactions made up the corresponding chains' and the lengths of the chains. But neither you nor anyone else was able to draw a scientifically and logically coherent semantic line that can separate the two examples. All the defense amounts to handwaving the 'no true Scotsman' song and dance. There is not an ounce of science in any of it.
In order for the paintings of the presidents to arise it required an intelligent agent to manipulate matter into a highly complex and improbable arrangement that matched the independent specification of their faces.
There you go: after the first 'true Scotsman' got pinned down on his back with no way out, now you bring in his brother, the 'intelligent agent', as the backup, for another round of the same semantic song and dance. Namely, the 'intelligent agent' is another one of those entities, like 'locally independent' or beauty or consciousness, that is in the eye of the beholder but scientifically sterile or vapid.

For example, if you try to work out exactly, via math and physics, the interaction of that rock with the mud, you will find that the exact actions that take place are far beyond the smartest physicists and mathematicians -- you can put all of them together with all the computers they ask for, and ask them to predict that outcome precisely, atom by atom, and they will be stumped and give up. So the rock and mud were doing something so rich in content that no human intelligence and technology can fully comprehend or model it. The best we can do is provide extremely coarse-grained sketches of what's going on, but we can never reach the true richness of the phenomenon achieved by the real masters of that particular realm, the rock and the mud. So, I may choose to call what the rock and the mud were producing there the action of an 'intelligent agent', since not even the smartest people on Earth could truly figure it out in all its richness.

Or take the other example, the 'machines inheriting the Earth' scenario sketched in the last post, where the robots take over and build their own version of Mount Rushmore with images of the Apple II and the IBM PC. Are these computers 'intelligent agents'? If not, have you tried playing chess with a computer lately? Even running on a smartphone, they can beat the best human players in the world. There is no 'intelligent agent' in any scientific definition of CSI. Interjecting that (or consciousness) is pure evasion. CSI is a mathematical concept (restating the compressibility of phenomenal data), not a topic for a literary essay of free associations where you can dredge up anything, including the kitchen sink.
What points to design in the case of CSI is largely the need for a mind to be able to recognize a specification and then intentionally carry out steps to independently reproduce or in some way instantiate that specification, producing an outcome that would be incredibly improbable on any naturalistic hypothesis.
Ok, now we get yet another brother of the 'true Scotsman', the mind. Again, there is no scientifically founded relation between 'mind' and CSI. Of course, in the free-association literary genre, anything goes with anything, whatever it feels like.
It seems that you're trying to stretch the concept and logic of CSI to cover cases where complex but unspecified patterns are reproduced through simplistic processes where the outcome of replication is highly probable (if not certain) and then trying to use this to discredit the entire concept of CSI as a reasonable indicator of intelligent activity.
Whoa, Nelly, now we've got the whole 'true Scotsman' family here, wife, kids and the rest, to help weave that free-association essay about CSI (well, only the Scotsman's cousin, "functional" CSI, is missing from your salvage crew). Basically, as explained above, CSI (with the related constraints on the efficiency of search) is a purely mathematical result, equivalent to the older compressibility results in information theory, or to lawfulness in physics. Therefore, no amount of semantic squirming, and no army of 'true Scotsmen' and their large families, will let you scientifically distinguish between the usual CSI examples parroted here at UD and the example of the rock leaving its 3-D image in the mud. There is no way to rigorously or scientifically distinguish between the two, not because of some debating ineptitude of the DI's ID supporters, but because CSI is merely a restatement of the lawfulness concept from physics (or the compressibility concept from information theory) in the language of search algorithms. Behind the curtain of terminological conventions, the two (lawfulness and CSI) identify one and the same property of natural phenomena.
nightlight
October 9, 2014 at 12:23 AM PDT
@nightlight #93

I repeat, it's not about the length of the causal chain. It's about the nature of the causal chain. I'm not trying to be rude or anything, but your comment seems to be an exercise in question begging, which also happens to misrepresent the logic of CSI determination. Your comment asserts a world of absolute determinism governed by unknown, allegedly-simple algorithms allowing every possible outcome we might observe to be the necessary product of natural laws. And if you're not asserting this, then your whole point seems to immediately break down before it gets going.

You give a purely deterministic / naturalistic account of some process leading from the faces of presidents to the carvings of those faces at Mount Rushmore. Here's the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota. In order for the paintings of the presidents to arise, it required an intelligent agent to manipulate matter into a highly complex and improbable arrangement that matched the independent specification of their faces. But their faces did not, as a necessary result of natural laws, cause the paintings of their faces to arise. And the same is true of Mount Rushmore. Likewise, the specification pattern provided by the grammar of the English language does not cause any piece of English literature to arise. Rather, a piece of literature is an independent instantiation of a pattern that conforms to the specification of the English language.

Now, it goes without saying that there obviously has to be some connection between a pattern and its specification, otherwise there could be no specification in the first place. That is why you will often hear people speak of "local independence". If there were a requirement for absolute independence, then the only way Mount Rushmore could display CSI would be if the faces had been carved in the likeness of the presidents' faces without knowing about or ever having seen them or any other faces, which is silly. On the other hand, local independence is essentially used to mean that there is no simple, deterministic process that leads necessarily from the specification itself to a pattern that corresponds to the specification. A simplistic process that ineluctably results in a duplication of some pattern, where that duplication is a highly probable event given that simplistic process, cannot be said to be a case of natural forces generating CSI. We might marvel at a fabulously complex design etched into a stamp, but we do not marvel at the creative power of natural law if a wind comes along, knocks the stamp off its ink pad, and the stamp leaves an imprint of its design on the floor. What points to design in the case of CSI is largely the need for a mind to be able to recognize a specification and then intentionally carry out steps to independently reproduce or in some way instantiate that specification, producing an outcome that would be incredibly improbable on any naturalistic hypothesis.
It seems that you're trying to stretch the concept and logic of CSI to cover cases where complex but unspecified patterns are reproduced through simplistic processes, where the outcome of replication is highly probable (if not certain), and then trying to use this to discredit the entire concept of CSI as a reasonable indicator of intelligent activity. But these cases you're trying to bring under the umbrella of CSI wouldn't end up getting high CSI calculations anyway, due to the incredibly simple naturalistic hypotheses that explain them and the correspondingly high probability of the outcomes, so I don't really see what the point is. And to take issue with the fact that the method of calculating CSI ends up ruling out such simple naturalistic events as creating CSI would be to simply take issue with the fact that the method of calculation has been tuned to reliably indicate designed events and rule out natural ones in cases where we already know the cause, which should be considered a point in its favor.
HeKS
October 8, 2014 at 08:44 PM PDT
Because his conceptualization of information has no basis in reality. His model has everything to do with the argument he wants to present and nothing to do with the way information operates in the natural world. :|
Upright BiPed
October 8, 2014 at 05:22 PM PDT
Why is it that to much of what nightlight writes I am so tempted to respond with a mere... So?
Mung
October 8, 2014 at 04:50 PM PDT
#90 HeKS
I'm not sure why you think it's the length of the causal chain that matters. What matters is that the pattern under investigation arises independent of the specification used to describe it.
The problem is that "independent of the specification" doesn't exist other than as a wishful, arbitrary definition or a semantic game. Consider the Mount Rushmore statues with the images of presidents. The sculptors shaping them were not "independent" of the specifications, since they saw the paintings of those presidents. Without an unbroken chain of lawful interactions between the specification and its CSI pattern, no statues in the likeness of those presidents would have been produced.

The interaction chain started with photons scattering from the faces of those presidents into the retinas of painters; then the brains of the painters, based on those signals, computed actions for their hands, how to pick and apply the paints to the canvases. Then, years later, the retinas of sculptors captured photons scattered from those paintings, processed the signals and computed the actions of their hands that shaped the molds for the statues. Then construction teams interacted with the molds (again via photons scattering on the molds, the retinas, computations in their brains directing their hands and voices), finally continuing the chain of interactions down to the workers, their retinas, brains and hands operating the machinery that carved the faces in the rocks.

So a chain of lawful interactions of that particular length, connecting the specification and its CSI pattern, is claimed here at UD to be long enough to qualify for "independence" between the specification pattern (faces of presidents) and the CSI pattern (images in the rocks). Thus that is an example of 'true CSI'. But the other chain, of a rock striking the mud and leaving its detailed 3-D image in the mud, is apparently not a long enough causal chain of lawful interactions to qualify for "independence"; thus, according to UD wisdom, it is not an example of 'true CSI'.

So where exactly is the threshold length of the chain of lawful interactions between 'specification' and its 'CSI pattern' beyond which you call it "independent", allowing you to declare the 'candidate CSI pattern' to be 'true CSI', in contrast to an improper one, like the rock imprint in the mud? There must be some threshold value of chain length in order for you to make such a distinction. Namely, there is no question that in all CSI cases there is always a causal chain of lawful interactions between the two patterns, the specification pattern and the CSI pattern. So the only issue of contention is the semantics of the "independence" attribute -- causal chains of lawful interactions shorter than a certain secret length (in the spirit of the 'no true Scotsman' fallacy) are disqualified as not being 'true Scotsman' chains (or not 'independent' enough for 'true CSI'), while chains longer than this secret length qualify as 'true Scotsman' chains, so they result in 'true CSI'. What I am asking is: what is this secret threshold length of a causal chain of lawful interactions that allows the two patterns at the two ends of the causal chain to be called "independent", hence a 'true CSI' case (as opposed to the mud imprint by a rock, which apparently is not 'true CSI' due to the shortness of the causal chain between the two patterns)?

Note also, before anyone starts in with the "consciousness" talk evasion: imagine a future in which robots take over the Earth and carve statues to their own 'founding fathers' (say, the Apple II and the IBM PC) on their own Mount Rushmore. In that case, the interaction chains remain in substance the same kind as those that produced our Mount Rushmore (photon scattering, image-processing algorithms, motoric instructions, etc.). The hardware and software differ, but the high-level algorithms would work the same way. In this case the causal chain of lawful interactions between the 'specification' pattern and the rock images is fully explicit (consisting of programs running on deterministic hardware). Would the rock images have large CSI in this case? If they would, wouldn't that then satisfy Barry's request for an example of CSI produced by a causal chain of lawful interactions? Note that no humans ever programmed these robots to build their own Mount Rushmore; the specification pattern and CSI pattern are connected exclusively by an explicitly known chain of lawful interactions.
nightlight
October 8, 2014 at 03:30 PM PDT
@R0bb re: my comment #77 I don't know where the question marks came from in my quote of Ewert. It should have just said: "Yes. You're exactly right."
HeKS
October 8, 2014 at 03:06 PM PDT
nightlight,  
You can't derive "flight response" for that rabbit with an already operational "flight response" mechanism and that fox. It took many generations of not just rabbits and foxes but their ancestors (possibly going back to single-cell organisms, since they all have it) to program their "flight" response into their (genetically) built-in response repertoire.
This is a non-response. Darwinian evolution is not even the issue here, and doesn't even exist until the system I've described is in place. To suggest that Darwinian evolution is responsible for the organization of the system is to say that a thing that does not yet exist on a pre-biotic earth caused something to happen (which is obviously false).
That’s merely a variant of my example with fox fur color adaptation — the cold weather with upcoming snow doesn’t change genes to turn fox fur white in any simple push-button way. That takes many generations of foxes and snow, probably with epigenetic imprint happening first which eventually gets transferred and hardwired into genetic record (via Shapiro’s “natural genetic engineering”).
You seem to be missing the issue. Everything you say to support your position requires the functional organization that only comes from the translation of information. The requirements I am pointing out to you are the necessary material conditions for that translation to occur. If A requires B to exist, then A cannot be the source of B. I am interested in the source of B, not the operation of A.
Regarding your explanation, it seems your “local independence” is “no true Scotsman” fallacy — it can shift the definition to exclude lawful processes (such as computations by biochemical networks) by definition as you wish.
You are mistaken. I clearly stated that purely deterministic forces are not excluded from being the source of the system. They are simply required to explain the system as it actually is, not as someone might wish it to be. If, on the other hand, the reality of the system is physically and conceptually inconvenient, then I suggest that one might want to adopt a different approach to the problem.
That in turn renders Barry’s challenge into a vapid semantic game — you can’t show that “locally independent” lawful process can produce CSI, since the exact level of “local independence” that is “required” is apparently a secret definition invoked and tailored to exclude whatever you want.
On the contrary, I stated exactly what the issues are, and why they are the way they are. If the aaRS establishes the effect of the codon while preserving the discontinuity between the codon and the effect, then it is not me tailoring the options for a rational explanation of the system - it's reality itself. Frankly, I wouldn't have it any other way.
There is no definition stating what exactly is the minimum length of causal chain of lawful interactions between generations of foxes and rabbits, or foxes and winter snows from my example, and computations by their biochemical networks, which would qualify outcomes of such chain of lawful interactions as “locally independent”. It’s an empty verbiage.
Since you failed to actually address any specific material observation I made, yet have concluded that the observations are empty, I suppose you needn't fool with it any longer. Cheers.
Upright BiPed
October 8, 2014 at 02:29 PM PDT
@nightlight #89
There is no definition stating what exactly is the minimum length of causal chain of lawful interactions ... which would qualify outcomes of such chain of lawful interactions as “locally independent”. It’s an empty verbiage.
I'm not sure why you think it's the length of the causal chain that matters. What matters is that the pattern under investigation arises independent of the specification used to describe it. Obviously, something cannot be its own independent specification. However improbable the surface of any given rock may be, it is highly probable that that shape will be left behind when the rock makes contact with some soft surface, just as it is highly probable that some hardening material poured into the imprint will harden into the shape of part of the rock. There are straightforward chance hypotheses to explain these events, on which the outcomes are not considered to be improbable at all. Furthermore, none of these events or outcomes correspond to a specification pattern that did not directly cause or lead to their existence.

Does this mean, then, that natural processes are eliminated from being able to produce CSI by definition? No, not at all. All it means is that, in order to do so, natural processes would need to cause some pattern, object or event that corresponds to an independently recognizable specification. For example, if a windstorm whipped a bunch of leaves into an improbable pattern that corresponded to an English word or sentence, that would be a case of natural processes creating CSI. Or if an earthquake was followed by a volcano and a tsunami, and this happened to organize some scrap materials into a kind of machine-like contraption capable of fulfilling some mechanical function, that would also be an example of natural processes creating CSI. The key feature here is that the specification pattern (an English word or sentence, or a mechanical function) is not causally responsible for the particular pattern that matches it arising. The specification and the pattern that matches it are independent of each other.
HeKS
October 8, 2014 at 01:57 PM PDT
#88 Upright Biped
As an example, a rabbit sees a fox coming up from behind him. The rabbit responds with the "flight response" which includes increased breathing and heart rate, heightened sensory awareness, and motor function it its legs. What happened? The specialized organization of the rabbit's visual system physically transcribed the image of the fox into a neural representation, which then travels through the optical nerve to the visual cortex and brain. But you cannot derive the "flight response" of a rabbit from the arrangement of the neural representation in the optical nerve.
You can't derive "flight response" for that rabbit with an already operational "flight response" mechanism and that fox. It took many generations of not just rabbits and foxes but their ancestors (possibly going back to single-cell organisms, since they all have it) to program their "flight" response into their (genetically) built-in response repertoire. That's merely a variant of my example with fox fur color adaptation -- the cold weather with upcoming snow doesn't change genes to turn fox fur white in any simple push-button way. That takes many generations of foxes and snow, probably with an epigenetic imprint happening first which eventually gets transferred and hardwired into the genetic record (via Shapiro's "natural genetic engineering").

But computations (intelligence) needed to reshape the operation of DNA of similar complexity are routinely done by the cellular biochemical networks in the processes of reproduction, ontogenesis, immune response, etc. These networks are distributed self-programming computers which are far more intelligent and knowledgeable about molecular-scale bioengineering than all the human experts and their biotechnology taken together. After all, ask human molecular biologists to synthesize a live cell from simple molecules. They would have no clue how to even get started on synthesizing one live organelle of a cell from simple molecules, let alone a whole live cell, to say nothing of organizing trillions of cells into a live organism. Yet the biochemical networks of your own cells have accomplished this humanly unachievable technological feat of bioengineering (synthesizing a live cell from simple molecules) thousands of times as you read this paragraph. The human level of expertise in this realm is not even close to that of the cellular biochemical networks. But we know that humans can already genetically engineer some useful features into live organisms (GMO technology). The biochemical networks, which are the real masters of molecular engineering, light-years ahead of human molecular biologists, could surely then do thousands of times more complex transformations.

Regarding your explanation, it seems your "local independence" is a "no true Scotsman" fallacy -- it can shift the definition to exclude lawful processes (such as computations by biochemical networks) by definition, as you wish. That in turn renders Barry's challenge a vapid semantic game -- you can't show that a "locally independent" lawful process can produce CSI, since the exact level of "local independence" that is "required" is apparently a secret definition invoked and tailored to exclude whatever you want. There is no definition stating what exactly is the minimum length of the causal chain of lawful interactions between generations of foxes and rabbits, or foxes and winter snows from my example, and the computations by their biochemical networks, which would qualify the outcomes of such a chain of lawful interactions as "locally independent". It's empty verbiage.
nightlight
October 8, 2014 at 12:13 PM PDT
nightlight,
What is “local independence from physical determinism”?
Information depends on two interdependent things to exist: representation and specification. Firstly, information requires an arrangement of matter as a medium (i.e. a representation). That medium is translated to produce physical effects in nature. But the effects produced cannot be derived from the arrangement of the medium. It requires a second arrangement of matter to establish (i.e. specify) what the effects will be. Therefore, there is a natural discontinuity between the arrangement of the medium and its post-translation effects. That discontinuity must be preserved by the system, or else the system becomes locked into physical determinism and cannot produce the effects in question. In other words, if the effects of translation were derivable from the arrangement of the medium, it would be so by the forces of inexorable law, and those inexorable forces would limit the system to only those effects that can actually be derived from the arrangement of the medium -- making the production of effects not derivable from the medium impossible to obtain.

As an example, a rabbit sees a fox coming up from behind him. The rabbit responds with the "flight response", which includes increased breathing and heart rate, heightened sensory awareness, and motor function in its legs. What happened? The specialized organization of the rabbit's visual system physically transcribed the image of the fox into a neural representation, which then travels through the optical nerve to the visual cortex and brain. But you cannot derive the "flight response" of a rabbit from the arrangement of the neural representation in the optical nerve. It requires a second arrangement of matter in the visual cortex/brain to specify what the response will be. In other words, the survival of a rabbit ("run away and hide") is not something that can be derived from inexorable law, so a natural discontinuity will exist in any system that produces such an effect. This is accomplished by having two arrangements of matter: one to serve as a physical representation and another to physically establish (specify) the effect. Preserving the discontinuity between the arrangement of the medium and its post-translation effect is therefore a physical necessity of the system, and this discontinuity establishes a local independence from physical determinism. This architecture can be found in any such system.

Since DNA is generally the topic here, we can easily analyze the genetic translation system and find the exact same architecture. During protein synthesis, the arrangement of bases within a codon is used to evoke a specific amino acid to be presented for binding. But there is nothing you can do to a codon to relate it to an amino acid except translate it (which is what the cell does). The arrangement of the codon evokes the effect, but does not physically determine what the effect will be. That effect is physically determined, in spatial and temporal isolation, by a second arrangement of matter in the system (the protein aaRS) before the tRNA ever enters the ribosome -- thus preserving the discontinuity between the codon and its post-translation effect, while simultaneously specifying what that effect will be.
You can't derive via physical law an amino acid from a codon; you can't derive a survival response from a neural impulse; you can't derive the "ahh" sound from the letter "a" or the paper it's written on; you can't derive which direction a bird should fly to catch a grasshopper; you can't derive "middle C" from a pin on a music box cylinder; you can't derive "defend the mound" from the atoms of a pheromone. You can't derive any of these effects of information from the arrangements of the matter that evoke them. They all require a second arrangement of matter to establish specification upon translation, and they all require the discontinuity to be preserved. Thus, the translation of information produces lawful effects in nature which are not locally determined by inexorable law. They are only derivable from the organization of the systems that use and translate the information.
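[Editor's illustration: a minimal Python sketch of the architecture described above -- the table holds only a few standard genetic-code assignments, and the names are made up for illustration. The codon-to-residue mapping lives in a second structure (in the cell, the aaRS enzymes; here, an explicit table), not in the codon itself; edit the table and the same codon evokes a different effect:]

    # Nothing in the codon string determines the residue; the mapping is
    # fixed by a separate arrangement (this table).
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def translate(mrna):
        """Read codons three bases at a time; halt at a stop codon."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i+3], "???")
            if residue == "STOP":
                break
            peptide.append(residue)
        return peptide

    print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']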
the above shift of goalposts renders ridiculous Barry’s challenge
My comments are only related to your position regarding law.
Either lawful processes are allowed as the mechanism for the CSI or Barry’s “challenge” is a pointless word play.
The issue is not whether lawful processes are allowed as a potential source of CSI; they are. The issue is that the systems that translate information must preserve the physicochemical discontinuity between the arrangement of a medium that evokes an effect and the effect itself. The advocates of materialism will simply have to account for this within their models. It cannot be denied without sinking into absurdity.
Upright BiPed
October 8, 2014 at 11:03 AM PDT
Jerry, functionality is observable, and more so function dependent on particular organisation. When the organisation to effect function requires more than 500 - 1,000 structured Yes/No questions to answer, that gives a config space beyond the plausible reach of the atomic and temporal resources of our solar system or the observed cosmos, even going at fast chem rxn rates. Thus a blind chance and/or mechanical-necessity based search will be maximally implausible to arrive at such islands of function. That is plain, so plain that every effort is exerted to obfuscate it. Worse, it traces to Wicken, and to direct implications of the context of use of specified complexity by Orgel. As in, 1979 and 1973, nigh twenty years before Dembski and five to fifteen years before the first technical ID book, by Thaxton et al. (That timeline gives the lie to the NCSE etc. tale about ID being invented to deflect impacts of the US Supreme Court decisions of 1987.) WmAD sought to build on the general concept of specified complexity, noting in NFL that for biological systems it is tied to function, giving yardsticks. I think much of the problem with that generalisation is that it opened the way for obfuscators. I actually find it quite sensible at core, whatever quibbles one may have on points. But the nature of the real problems comes out when we see people pretending that the genetic code is not a directly recognisable case of machine language, or that chance and randomness can be emptied of meaning, or that function is meaningless, etc. At this point, I conclude that too often we are not dealing with intellectually serious or constructive people but with 'zero concessions to IDiots' ideologues who have no regard for truth, fairness or genuine learning. Such intellectual nihilism will do us no good if it is allowed to gain the upper hand and have free course to do what it will. Not all critics are like that, but too many are. KF
kairosfocus
October 8, 2014 at 10:18 AM PDT
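[Editor's illustration: for readers who want the arithmetic behind the 500-bit threshold in the comment above, here is a back-of-envelope Python sketch; the resource figures are rough, commonly cited bounds assumed purely for illustration, not numbers taken from the comment:]

    from math import log10

    bits = 500
    log10_configs = bits * log10(2)   # ~150.5, i.e. ~3.3e150 configurations

    # Generous search-resource bounds (assumptions for illustration only):
    log10_atoms = 80      # oft-cited atom count of the observable universe
    log10_rate = 45       # ~ state changes per atom per second (Planck-ish)
    log10_seconds = 18    # ~ age of the universe in seconds

    log10_trials = log10_atoms + log10_rate + log10_seconds   # 143
    print(f"config space: 10^{log10_configs:.1f}")
    print(f"total trials: 10^{log10_trials}")
    print(f"shortfall:    10^{log10_configs - log10_trials:.1f}")  # ~10^7.5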
Sorry - reformatted #74 nightlight
There is no coherent universal definition of “functional” (CSI) — anything does something, changes something, downstream, via interactions. When is that “something” deserving of label “functional” effect?
I think Jerry answers all of this, including the history of the development of CSI measurements in the ID world in post #82.
But if a rock breaks off and leaves its imprint in the mud, that is not “S.A.-functional” since it does nothing S.A. cares about.
The rock falling is not causing a complex, specified functional-operation as a result. It’s causing a determined, predictable result – which is explainable by natural law or chance. There is no need for a design inference here. You could make it simpler – every pile of rocks … is that CSI? No, because it’s not a specified result. Specification implies a future state. DNA code or any language is a classic example. The code functions for a future complex operation as a result. There’s a communication network between sender and receiver – requiring a translation/de-coding and interpretation of symbol. In the end, an operation occurs. Visual images are the weakest evidence of CSI (faces in rocks on the moon for example).
In short “functional” is an arbitrary label without intrinsic or universal meaning or definition.
Many terms in science have no intrinsic or universal meaning (species? nature? life? mind?). For the sake of understanding the natural world, we use concepts that have generally understood meanings. Functional and non-functional are terms applied in various contexts. They do have specific meaning ("Three weeks after death, the heart is non-functioning").
One can attach it or not attach it to any effects of any interactions as one wishes.
“Three weeks after death, the heart is still functioning”? That doesn’t work.
The upshot is, such semantic games are not going to prove God or convince anyone to teach any such word play at schools as natural science.
ID is more than word games. ID doesn't attempt to prove God. ID has convinced many scientists who are very accomplished in their field of study.
Silver Asiatic
October 8, 2014 at 08:53 AM PDT
#76 Upright Biped
nightlight "My real point is that the none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process) i.e. they cannot be a manifestation of some purely lawful underlying processes. The translation of information requires a local independence from physical determinism. It accomplishes this by preserving the necessary discontinuity between the arrangement of the medium and its post-translation effect. It's a coherent system, made coherent by this independence. It could not function without it.
What is "local independence from physical determinism" ? You can't coherently define CSI to exclude by definition causal interactions connecting the two harmonized patterns, then claim as ID "discovery" that causal interactions cannot yield CSI so defined. That's a trivial tautology, not a "discovery". Similarly, the above shift of goalposts renders ridiculous the Barry's challenge asking anyone to show CSI produced by lawful processes, when CSI now excludes by definition any harmonized patterns that are result of lawful processes. You can't have it both ways. Either lawful processes are allowed as the mechanism for the CSI or Barry's "challenge" is a pointless word play. Back to problem proper. There is actually no real independence from lawful physical interactions, ever, between CSI pattern (e.g. encoded in the DNA) and the properties of other systems in the environment the pattern is harmonized with. As to how the causal chain of lawful interactions actually creates such CSI (or harmonization of patterns), consider for example the fur color of polar fox which is programmed to turn white in the winter and darker in the summer. Here the CSI in the DNA of the fox controlling the fur color is harmonized with the environmental colors, and this connection is not physically independent. Namely, countless previous generations of foxes have interacted with that same environment and passed genetic information (including any environmental interaction induced changes) to their offspring. So, while DNA of the current fox has not interacted yet with the environment of the upcoming winter, yet it is still synchronized with the environmental colors of that upcoming winter, the past generations of the foxes have interacted with the winter colors and their genetic and epigenetic code has imprinted or harmonized with this pattern and passed it on to the following generations (or even altered it epigenetically in the existent generation for the rest of the current winter). Of course, neo-Darwinian "random mutation" is a non-starter as the mechanism for such change. But the James Shapiro's "natural genetic engineering" or generally the computations by the cellular biochemical networks can and does accomplish such kind of targeted changes (epigenetically and/or genetically; e.g. see adaptive mutation experiments). The cellular biochemical networks are networks with adaptable links, hence they are a distributed self-programming computer of the same kind as human or animal brain (which are also an adaptable networks, but made of neurons). These cellular biochemical network are unrivalled experts of molecular bioengineering, far ahead of human brains and technology in that field. E.g. these networks manufacture routinely new live cells from scratch (from simple molecules), the task that human molecular biologist can only dream of achieving some day. Hence, the fur color harmonization in polar animals is the result of physical interactions essentially in the same way that the colors of military uniforms are harmonized with the environmental colors in which soldiers are deployed. Namely, soldiers uniforms also haven't interacted with environment of the upcoming winter, yet the colors of the uniforms are synchronized with the colors of that upcoming winter. How can that be? Well, the military staff in charge of uniforms design knows a bit about winter colors and advantages of having uniform colors blending into the environment, hence their brains designed (i.e. 
adaptable networks of their neurons computed) uniform colors to match the expected colors at the place and time of the deployment. With polar animals, their cellular biochemical networks (adaptable networks of molecules) similarly computed the advantageous fur colors for their environment and altered the DNA to cycle the fur colors in harmony with seasons. So, the adaptation is intelligently guided, but the intelligence that achieved this doesn't require some Jewish or Sumeran tribal sky god figuring it all out up there in the heavens, then sending his angels or ancient aliens down to Earth to muck with the DNA of the foxes. That fairytale "mechanism" is absurd and unnecessary when the needed intelligence and expertise in molecular engineering is readily available in the systems at hand locally. Namely, the well known and plentifully demonstrated intelligence of cellular biochemical networks (e.g. clearly evident in processes of reproduction & ontogenesis) suffices for such design and computation. If human molecular biologist can archive some desired adaptation in GMO plants and animals, then the cellular biochemical networks which are light years ahead in mastery of molecular bioengineering can easily do it without any help from tribal sky gods. Of course, the next natural question is how did distributed self-programming computers, such as the above cellular biochemical networks, come to be at all? This is the origin of life problem. Taking also into account the related fine tuning problem, the most plausible hypothesis is that the computational processes by adaptable networks didn't start with biochemical networks, but are also the underlying processes behind the physical laws of matter-energy and space-time. Such computational 'pregeometry' models based on adaptable networks at Planck scale have been descrihbed and discussed at UD in several longer threads (see second half of this post for hyperlinked TOC of these discussions). In this kind of bottom-up computational models, the cellular biochemical networks are a large scale technology designed (computed) and built by these Planck scale networks (via the intermediate technologies of physical fields and particles), just as biological organisms are a large scale technology designed and built by the cellular biochemical networks, or as industrial societies, manufacturing, internet, etc. are large scale technologies designed and built by certain kinds of these organisms (humans). Since this question always comes up here, the bottom level Planck scale networks are a front loaded ontological foundation from which everything else follows. The key difference between this approach and theological or religious perspectives is that this front loading is far simpler and more economical than the omnipotent and omniscient front loading of the religions. The 'chief programmer of the universe' who front loaded the Planck scale networks need not have any clue what the networks will compute eventually, just as human programmer who wrote a program to compute million digits of number Pi has no clue what digits the program will spew out (beyond perhaps the first few).nightlight
October 8, 2014 at 08:31 AM PDT
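[Editor's illustration: the closing pi analogy in the comment above is easy to make literal. A minimal sketch, assuming the third-party mpmath library is installed; the programmer fixes only the rule and the precision, not the digits that come out:]

    from mpmath import mp

    mp.dps = 50    # working precision in decimal digits
                   # (set to 1_000_000 for the 'million digits' case)
    print(mp.pi)   # digits the programmer never chose in advance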
By the way, if one goes to the link I provided above and looks at the last comment, which is by KF, you will see the sentiment I am referring to. Also, the commenter just before KF's comment last year on this thread is the only truly honest Darwinist I ever met. He soon stopped commenting here, but was an evolutionary biologist.
jerry
October 8, 2014 at 07:07 AM PDT
There has always been a problem with the definition of CSI. We had a long thread about this 7 years ago, which KF knows well because it is where he first chose to comment here. https://uncommondescent.com/biology/michael-egnor-responds-to-michael-lemonick-at-time-online/ It is a long thread, but it shows that the powers that be at UD really did not have a good definition of CSI.

It led to a focus on the relationship between A and B, where A leads directly to B and B is functional. So the term Functional Complex Specified Information came into being, with a couple of variants. KF has done a lot of the building of this. Some obvious examples were language and writing, computer programming and, of course, DNA. Extremely complex information that led to something else that had function. The anti-ID people have since been led to inanity trying to undermine this most obvious of concepts. It is actually CSI on steroids, but it has the advantage of being so obvious and understandable. FCSI, or whatever the proper abbreviation is, is simple and obvious.

But CSI is a little different. It has been undermined by the lack of a good definition and the mind-numbing mathematics behind Dembski's calculations, along with those of his cohorts. The main complaint about it is that it cannot be quantified in a meaningful way, which is why we get people like R0bb ranting on from time to time about this. The math, if it could be applied in any easy way, would indicate such large numbers that 500 bits would seem child's play; but we get the usual assault, that there are no calculations and because of this that CSI is nonsense. Well, their complaints are what is really nonsense, and they all know it, but their purpose in life, it seems, is to be critical in any way they can and never be constructive.

CSI is more nebulous because the relationship is less clear than it is for FCSI, which has a direct link between one complexity and an independent complexity. For CSI there is often no direct link. Take, for example, the often-used Mt. Rushmore. There is no direct link between any of the faces on the mountain and the presidents represented other than what is in our heads. If we didn't know that the sculptor used the likenesses of these men, we would only be speculating that that was the origin. Here we have two independent patterns, one of which preceded the other, and which are closely linked in a way that could not have been chance or law. Even if we did not know of this relationship, we would know the link between the rock formation and typical human faces, even if we did not know who the faces belonged to. Try to calculate the odds of this and one will get into numbers so large that there are not enough zeros in all the printing devices on the planet to illustrate it. But we will get the usual malcontents challenging the concept for a lack of clarity in the calculations. What childish behavior. But this is all they have.
jerry
October 8, 2014 at 06:58 AM PDT
#68 Nightlight - Further to my earlier response at #71, the point made by other contributors (and consistent with Barry's original challenge) is that the stone itself has no complex specified information, only complex information; your point that it nevertheless provides a specification for an imprint or mould, so that the subsequent impression now has CSI, fails because that impression is only a duplication/transmission of the original information: it's not specified by the stone, only copied from it, so the goalposts haven't changed. And upon further consideration I would agree with that. Unless they are considered to be another way of saying the same thing, my previous points would additionally still hold: the information impressed into the cast is a necessary (deterministic) direct product of the casting process (and so discounts a design inference), and it is not in the least independent of the "specification" (again discounting a design inference). Regarding your overall approach, science is based upon what we currently know (or have reason to consider to be the case), not on what we don't: it requires positive evidence for its claims, not unsupported speculations, and the onus of proof lies with whoever positively asserts.
Thomas2
October 8, 2014 at 05:51 AM PDT