Uncommon Descent Serving The Intelligent Design Community

METHODOLOGICAL NATURALISM, REVISIONIST HISTORY, AND MORPHING DEFINITIONS


Whenever I tune in to a discussion of “methodological naturalism,” I marvel at the extent to which Darwinists will rewrite history and manipulate language in their futile attempt to defend this so-called “requirement” for science. To set the stage, we must first try to understand what methodological naturalism could possibly mean.

First, we have what one might call the “soft” definition: a preference for identifying natural causes, a position that makes no final judgment about a universal line of demarcation between science and non-science. Second, we have the “hard” definition as used by all the institutional Darwinists. In this second context, methodological naturalism is an institutional “rule” by which one group of researchers imposes on another an arbitrary, intrusive, and non-negotiable standard: scientists must study nature as if nature is all there is.

Ah, but that is where things start getting interesting. “How can you say that we are imposing arbitrary rules,” Darwinists protest, “when we are simply explaining the way that science has always been done?” Notice the deft change of cadence by which they shift from the concept of an unbending rule, which is the matter under discussion, to the notion of an often-used practice, smuggling in the soft definition in the middle of a debate about the hard definition. With respect to the latter, keep in mind that no universally binding rule for scientific methods existed prior to the 1980s, so there really isn’t much to argue about on that front. Rather than address the argument or concede the fact, however, Darwinists simply evade the point, reframe the issue, and carry on as sleek as ever, hoping that no one will notice that the terms of the debate have been rewritten on the fly.

For that matter, not even the soft definition always applied to the earlier scientists, who simply used whatever methods seemed right for the varied research projects they were pursuing. Some studied the law-like regularities of the universe, and it was in that context that they formulated their hypotheses. Others, more interested in outright design arguments, established their hypotheses on exactly that basis. Kepler’s laws of planetary motion, for example, stemmed from his perception of design in the mathematical precision of the heavens. Newton, in his classic work Opticks, argued for the intelligent design of the eye and, elsewhere, presented something like the modern “anthropic principle” in his discussion of the positioning of the planets. No one, not even those who “preferred” to study solely natural causes, would have dared to suggest that no other kind of research question should ever be asked or that no other hypothesis should ever be considered.

What they were all trying to avoid was the commonplace and irrational element of superstition and the notion that God acts capriciously, recklessly, or vindictively, without purpose or thought. What they most decidedly were not doing was arguing that design cannot be a cause. On the contrary, they wanted to know more about the design that was already manifest—or, to put it in the most shocking and offensive language possible, they wanted to know more about how God made the world so they could give him praise and glory, as is evident from the title pages of many of their works.

If the universe wasn’t designed to be comprehensible and rational, they reasoned, there is no reason to believe that it is comprehensible and rational. Thus, there would be no reason to try to comprehend it or make rational statements about it. What would be the point? One cannot comprehend the incomprehensible or unravel the reasonableness of that which is not reasonable—nor can anything other than a reasonable being do the unraveling. They believed that the Creator set it up, as it were, so that there was a correspondence between that which was to be unraveled [the object of investigation] and the capacity of the one doing the unraveling [the investigator]. It would have gone without saying that the investigator and the investigation cannot be one and the same thing, meaning that both realms of existence are a given. In order for [A] to correspond with [B], both [A] and [B] must exist. Thus, these scientists were 180 degrees removed from the idea that nature, one of those two realms, must be studied, as MN dictates, as if it is the only realm. That would be tantamount to saying that nature must be investigated as if there were no such thing as an investigator—as if nature could investigate itself.

Returning to the present, methodological naturalists do not even have a coherent formulation with which to oppress their adversaries. Notice, for example, how selective they are about enforcing their petty rule, applying it only to ID scientists and exempting all other researchers who violate the principle, such as searchers for extraterrestrial intelligence and Big Bang theorists. Of course, what they are refusing to enforce in these cases is the hard definition, since ID qualifies under the soft definition.

Once this is pointed out, they morph the argument again, holding that MN, that is, the hard rule, is the preferred method for science because “it works.” But what exactly does “it” mean? Clearly, what works is not the rule, because the rule, which presumes to dictate and make explicit what is “required” for science, is only about twenty-five years old. On the contrary, all real progress comes from the common-sense approach of asking good questions and searching for relevant answers, using whatever methods will provide the needed evidence and following that evidence wherever it leads. For most, that means looking at law-like regularities, but for others it means probing the mysteries of information and the effects of intelligence. For some, it means conducting experiments and acquiring new data, but for others it means looking at what we already know in different ways. That is exactly what Einstein and Heisenberg did. We experience the benefits of science when we sit at the feet of nature and ask it to reveal its secrets, not when we presume to tell it which secrets we would prefer not to hear.

It gets worse. In fact, methodological naturalists do not even know what they mean by the two words they use to frame their rule. On the First Things blog, I recently asked several MN advocates to define the words “natural” and “supernatural.” After a series of responses, one of the more thoughtful commentators ended the discussion by writing, “It seems that defining what is ‘natural’ is one of the tasks before us.”

Indeed. Now think about this for a moment. Entrenched bureaucrats, who do not know what they mean by the word “natural,” are telling ID scientists, who do know what they mean by the word “natural,” that science can study only natural causes. In effect, here is what they are saying: “You [ID scientists] are restricted to a study of the natural world, and, although I have no idea what I mean by that term, which means that I have no idea what I mean by my rule, you are, nevertheless, condemned if you violate it.”

There is more. The natural/supernatural dichotomy on which MN stands plunges Darwinists [and TEs, for that matter] into intellectual quicksand on yet another front, leaving them with only two options:

[A] Methodological naturalism conflates all immaterial, non-natural causes, such as Divine intelligence, superhuman intelligence, and human intelligence, placing them all in the same category. Using that formulation, the paragraph I just wrote, assuming that I have a mind, was a supernatural event, which means that I am a supernatural cause—yet if I have no mind, my brain was responsible, which would suddenly reduce me to a natural cause. This is where the Darwinists take the easy way out by simply declaring that there are no immaterial minds, while the TEs split their brains in two trying to make sense of it.

Or,

[B] Methodological naturalism defines all things that are not “supernatural” as natural, placing human cognition, human volition, earthquakes, and tornadoes in the same category. Indeed, everything is then classified as a natural cause—everything. So, whatever caused Hurricane Katrina is the same kind of cause that generated my written paragraph because, as the Darwinists instruct us, both things occurred “in nature,” whatever that means. So, if all causes are natural, then there is no way of distinguishing the cause of all the artifacts found in ancient Pompeii from the cause of the volcano that buried them. Indeed, by that standard, the archaeologist cannot even declare that the built civilization of Pompeii ever existed as a civilization, since the apparent evidence of human activity may well not have been caused by human activity at all. The two kinds of causes are either substantially different or they are not. If they are different, as ID rightly insists, then those differences can be identified. If they are not different, as the Darwinists claim, then those differences cannot be identified, which means that whatever causes a volcano to erupt is comparable to whatever caused Beethoven’s Fifth Symphony to erupt.

By contrast, ID scientists point to three causes, all of which can be observed and identified: law, chance, and agency. Once we acknowledge that point, everything falls into place. It would be so much easier to avoid all this nonsense, drop the intrusive rule of methodological naturalism, and simply concede the obvious point: since only the scientist knows which research question he is trying to answer, only the scientist can decide which method or methods are appropriate for obtaining that answer.

Comments
CJYman @513, You have a possible convert! :) Please bear with me for a short experiment. A display monitor has a picture of two ferns. One of them was generated by randomly assembling millions of pixels directly in the video card's memory; the other was generated by randomly generating a thousand bits of object code for a function. If you can determine which image is more improbable, based on the image you see on the screen, I'll switch to your side.

Toronto
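Toronto's thought experiment never names the fern-drawing function, but the classic example of a fern produced by a tiny randomly-driven program is Barnsley's iterated function system. The sketch below is that standard textbook construction, not anything from the thread; the coefficients and the "chaos game" loop are the usual published ones.

```python
import random

# Barnsley's fern: four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# picked at random with the weights below. A whole screen of fern "pixels"
# flows from roughly two dozen coefficients.
MAPS = [
    #  a      b      c      d     e     f     weight
    ( 0.00,  0.00,  0.00,  0.16, 0.00, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04,  0.85, 0.00, 1.60, 0.85),  # ever-smaller leaflets
    ( 0.20, -0.26,  0.23,  0.22, 0.00, 1.60, 0.07),  # left leaflet
    (-0.15,  0.28,  0.26,  0.24, 0.00, 0.44, 0.07),  # right leaflet
]

def barnsley_points(n, seed=0):
    """Return n (x, y) points on the fern attractor via the 'chaos game'."""
    rng = random.Random(seed)
    weights = [m[6] for m in MAPS]
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _w = rng.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

pts = barnsley_points(50_000)
# Despite the randomness, every point lands inside the fern's small bounding box.
print(len(pts), "points generated")
```

Plotting pts with any scatter routine reproduces the familiar fern; the relevance to the thread is exactly Toronto's point: the picture's millions of pixels are fixed by a few hundred bits of program.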
February 16, 2010 at 10:33 AM PDT
Toronto: "The point of the fern generator is to show that the ID argument of calculated CSI is misleading." And the fern example, as should be obvious from my last comment to you, does no such thing. Toronto: "You can no longer look at a display monitor with a picture of two ferns and calculate the CSI of each image based on what you see." ummmm, yes you can, and I've shown you how above -- based on two different understandings of how law and chance are related. Care to actually respond to my comments? Toronto: "If one image is rendered a bit at a time by the computer randomly setting that pixel to 1 or 0, and the other is generated by an equation that was arrived at by randomly setting the bits of a block of data which you would use as an equation, the probability of randomly getting the equation is less than randomly getting the image itself. That is the crux of the ID argument, that the probability of an event is based on generating the final result, instead of generating an intermediate step." But an EA is required for intermediate steps, and that EA is the equation which needs to be searched for. So again, generating the EA is just as hard as generating the final result. Furthermore, your fern example is not an example of an EA. EAs have a fitness criterion and select the most fit replicators in a population. Your fern example is merely an example of the fact that a randomly generated set of laws can produce ordered, repetitious, regular patterns. Toronto: "If I showed you two pictures of a fern on a display, you couldn't tell which was bit-mapped and which was generated. Because of that, you can't calculate the probability of the result, and thus the whole concept of CSI is not applicable to the subject of the generation of life-forms." Incorrect, since CSI is calculated against a uniform probability distribution. Were you around when I was explaining that earlier?

CJYman
February 16, 2010 at 10:09 AM PDT
Hello again ROb. Great to have you back. Sorry this reply is so long, but apparently I needed to re-explain some of my understanding of the topic. ROb: "As with much of ID, I’m hung up on the fundamental principles that underwrite your ideas. Take, for example, your description of organized CSI as “patterns not defined by law+chance”. I can’t tell if you’re stating this as part of the definition of organized CSI, or if it’s an empirical claim. If the former, it would seem that the definition of organized CSI is somewhat question-begging." I stated: An organized pattern is one which is not merely ordered — defined by law/regularity such as the patterns of planets orbiting the sun, crystals repetitiously arranged, or a vortex. Ordered patterns emerge from the physical/material/measurable properties of matter and energy, whereas organized patterns are not defined by such properties of the materials utilized to generate the organization. If we stopped here, though, any statistically random pattern would qualify as “organization.” However, there are patterns which are not merely “ordered” (defined by law) and yet are also not random in the sense of lacking correlation. These organized, non-ordered, yet also non-random patterns are usually arranged according to an independent non-random scheme or diagram (which is why they lack “randomness”). That was the definition of organized and a beginning of the explanation. Do you have a problem with any part specifically? And CSI reliably eliminates chance on two levels: correlation and improbability of finding that correlated pattern given available space and time for a random search (probabilistic resources). Chance/randomness is defined as a lack of correlation. So, combine organized patterns with CSI and voila ... the event isn't defined by law and is also not best explained by chance. ROb: "Regardless, I’ll explain a problem I have with ID’s claims regarding “law and chance”."
This makes for some good discussion, as my thoughts on the topic are not fully formed yet. Let's continue. ROb: "First of all, I tend to avoid “law and chance” discussions, as I’m never sure how the terms are being used. We might think the terms refer to non-deterministic and deterministic phenomena respectively, but that would leave no third alternative." Ahhh, yes, it's a good thing you are bringing this up, since anything can be determined (unless you look at some interpretations of QM). The ID debate is fundamentally not one of determined vs. non-determined but of teleology vs. non-teleology. It makes no difference if life and intelligence were determined to occur. I have already provided my definition of law (in the comment I linked to re: law, order, and organization), and this definition is built upon the idea used in referring to the "law of gravity." The event is defined by law if the effect is the result of specific material/physical/measurable properties of the matter and energy involved in the reaction, which are responsible for the ending pattern. This produces ordered/regular events; thus those types of events are defined by law. It's all explained in that linked comment of mine. ROb: "Furthermore, Dembski’s usage of “chance” includes deterministic processes, and “law” sometimes refers to non-deterministic processes, e.g. QM. So I have no doubt that terminological problems will attend the following, but I’ll forge ahead anyway." That's because ID Theory is interested in parsimony and in detecting differing levels of causality. IE: we have an event; out of all possible fundamental causes, which are necessary and sufficient in order to explain said event? The way that I understand and defend the fundamentals of ID has no problems with determinism or non-determinism. ROb: "ID math deals pretty much exclusively with pure chance hypotheses.
This is the case for traditional tornado-in-a-junkyard probabilities, Dembski’s CSI (in practice, not theory), Marks and Dembski’s EIL work, and Sewell’s SLoT math. The obvious problem is that everybody already knows that structured entities, such as biological organisms, do not arise from pure chance, so ID proponents are wasting their time when they address the pure chance hypothesis." Of course CSI reliably eliminates the chance hypothesis. Dembski has stated this many times. He has also been stating that EAs don't generate CSI from scratch; they merely unfold pre-existing information, and he and Marks then backed up that understanding of the flow of information with the EIL work. So, it is not true that ID proponents are wasting their time by eliminating the chance hypothesis, since Dembski and Marks' work shows (apparently via mathematical proof) that it is just as improbable to match search space to search procedure to generate an event as it is to generate that event by chance in the first place. Are you seeing the argument flow from CSI to the NFLT to active info and Dembski and Marks' work? Then adding organization to CSI merely allows us to quickly and reliably flow through the explanatory filter. Of course, when law as I have explained it is included alongside the pure chance hypothesis, then we don't need an EA to produce the event, since any random set of laws will produce ordered, regular, lawful patterns (resulting from the physical/material properties of the states), and thus the resulting pattern is not improbable in the first place. ROb: "Dembski seems to recognize this problem, as he attempts to address law as well as chance. He defines CSI in terms of all relevant chance hypotheses, with “chance hypotheses” including law-like processes. The most obvious problem is that nobody actually does this when they calculate CSI — they calculate it strictly in terms of pure chance." And I've explained above why I disagree with Dembski's inclusion of law in chance.
1. When organization is added to CSI, there is no need to include law in CSI to eliminate law. 2. Lawful patterns are different from chance patterns, i.e., asdasdasdasdasd vs aj v[w=0tuwetn respectively. However, there is a way in which CSI eliminates law, and this is the level at which I am reconsidering Dembski's inclusion of law in chance. If we look at chaos theory, we can see that practically any combination of random sets of laws will produce ordered regularities, so the specificity of lawful patterns such as asdasdasdasdasd becomes 1 or very close to 1, and thus those patterns have no CSI. ROb: "But suppose we decided to buck the trend and actually include all relevant “chance” hypotheses in our CSI calculations. How would we go about identifying our hypotheses? Presumably, we should include natural behaviors that are currently known to us, but what about aspects of nature that are unknown or not understood?" That's the thing, though. We understand chance on many different levels. Chance is a lack of correlation and is equated with statistical randomness. So, to eliminate the chance hypothesis we merely need to eliminate lack of correlation and statistical randomness and show that the correlation/specificity is improbable when compared to probabilistic resources. Then, if ordered events can indeed result from any random collection of laws as I've explored above, then we also merely need to eliminate mere order -- where organization defines the opposite of order combined with specificity -- to reliably eliminate all chance hypotheses (randomly selected sets of laws). ROb: "Dembski argues that we don’t need to worry about these, but is that a good scientific approach?" Actually, in this case, yes it is, since he has shown that the set of laws required to generate an EA is just as hard to arrive at as are the events that the EA produces.
So, if we see organized CSI (which is neither defined by law nor best explained by chance) then we know that it wasn't merely law+chance absent intelligence that generated the EA to produce those results. This is a theory of the flow of specified information and is akin to stating that, according to our understanding of the transfer of energy, perpetual-motion free-energy machines are impossible. So, yes, statements such as those, which define limitations (or no-go theorems), can definitely be a part of a good scientific approach. ROb: "When it was discovered that EM waves violate Galilean relativity, should scientists have simply attributed EM’s highly improbable (i.e. impossible) behavior to design?" How is it highly improbable? Is this the same type of probability used in CSI? My answer to "should we have attributed EM to design" is not at all, since EM is definable as regular, ordered events -- unless organized CSI is embedded in the EM radiation, such as with a radio signal that carries your favorite tunes. As such, we defer to law as a causally adequate explanation of EM (unless organized CSI is present, of course). ROb: "But let’s assume, for the sake of argument, that the currently understood laws of physics are exhaustive, and there is nothing left to discover on this front. Could we calculate CSI against this set of laws?" Yes, since organized CSI reliably eliminates the causal categories of only law+chance (absent intelligence), so stating that there are other laws out there does nothing to rescue the ID critic's argument, since that would be akin to stating that there is "other chance" out there. Chance is chance and law is law as the argument pertains to causal categories. The deal is that organized CSI reliably eliminates law+chance (absent intelligence). ROb: "Perhaps we could for some very simple cases, but certainly not for biological structures.
How would we go about calculating the probability that physics, coupled with unknown (but non-biological) initial conditions, would give rise to biological structures?" But I have never argued that the laws of physics, if properly configured, will not give rise to life. The argument is, and I thought you understood this by now, that law+chance on their own will not be able to find the matching of search space to search algorithm necessary to produce an EA that will generate life if law+chance (absent intelligence) can't produce life de novo in the first place. This understanding is merely based on an understanding of the flow of specified probabilities. ROb: "It’s an established fact that complexity can arise from simplicity, especially when that simplicity is non-linear and closed-loop." Yes, and statistical randomness is the highest amount of complexity, if you are merely discussing complexity in terms of compressibility. Of course, ID Theory deals in terms of complex specificity, so you are going to have to show how that statement of yours is applicable. ROb: "How do we go about circumscribing the capabilities of law-like physical processes? It seems that, at best, we could rule out non-computable functions, but even that seems debatable." By showing that the arrangement is not defined by mere order and regularities and by showing that the arrangement does not arise out of the physical/material properties of the states utilized. That "defeats" law as a defining and thus explanatory factor on two levels. Again, refer to https://uncommondescent.com/intelligent-design/polanyi-and-ontogenetic-emergence/#comment-337588 ROb: "Switching gears, Dembski and Marks attempt to address non-random factors by invoking the NFL principle. But their framework actually adds nothing to the equation, as we still end up with nothing more than a hypothesis of pure randomness." What else, other than mere regularities/order, does law+chance on its own (absent intelligence) produce?
ROb: "The EIL takes into account non-random factors, but assumes that those factors were chosen randomly from an exhaustive set of possibilities." If there is no intelligence involved, how else are those non-random factors -- I'm presuming you mean laws/algorithms -- selected? If you refer to a more fundamental set of laws, then the exact same question applies. That's the whole point of the regress of information that the EIL deals with. If you are stating that law+chance, absent any foresight/consideration for future results, will produce an EA that produces organized CSI, then ... 1. Please provide some evidence of a set of laws selected absent any consideration for future results that will produce an EA that generates organized CSI. 2. You must also be contending that law+chance can also produce CSI de novo. ROb: "The problem is that randomly-chosen non-randomness is mathematically equivalent to randomness. This is easily proven for all of the scenarios that the EIL presents. (Except when it’s the search algorithm that’s being randomly chosen. In that case, Dembski and Marks assume that the aforementioned principle is true, so we don’t need to prove it.) See sections 5 and 6 of Haggstrom’s “Intelligent Design and the NFL Theorems” for elaboration." 1. I'm not understanding what the problem is. What other options for the fortuitous matching shown to be required by the NFLT, other than chance, are you proposing? 2. You are going to have to be specific, since I've seen some of Haggstrom's criticisms and it appears he doesn't really understand what Dembski and Marks are getting at. It seems, from what I can remember, that he thinks Dembski is contending that EAs don't work. ROb: "In summary, ID math doesn’t show the limitations of law+chance — it only shows the limitations of chance." All we need to do is add organization to CSI and we have described an event which is neither defined by law nor by chance.
Thus, organized CSI shows the limits of law+chance (on their own, absent intelligence).

CJYman
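CJYman's contrast between a lawful pattern (asdasdasdasdasd) and a chance pattern (aj v[w=0tuwetn) can be checked with any off-the-shelf compressor: law-like repetition compresses dramatically, while statistical randomness barely compresses at all. A minimal sketch; zlib is my choice of compressor, not something the commenters specify.

```python
import random
import zlib

ordered = b"asd" * 10_000  # a lawful, repetitive pattern, 30,000 bytes
random.seed(0)
chance = bytes(random.randrange(256) for _ in range(30_000))  # a chance pattern

ratio_ordered = len(zlib.compress(ordered, 9)) / len(ordered)
ratio_chance = len(zlib.compress(chance, 9)) / len(chance)

# The repetitive string shrinks to well under 1% of its size;
# the random bytes are essentially incompressible.
print(f"ordered: {ratio_ordered:.4f}, chance: {ratio_chance:.4f}")
```

This is the standard algorithmic-compressibility distinction the thread leans on: "order" here just means the string can be regenerated by a program much shorter than itself.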
February 16, 2010 at 09:40 AM PDT
CJYman, The point of the fern generator is to show that the ID argument of calculated CSI is misleading. You can no longer look at a display monitor with a picture of two ferns and calculate the CSI of each image based on what you see. If one image is rendered a bit at a time by the computer randomly setting that pixel to 1 or 0, and the other is generated by an equation that was arrived at by randomly setting the bits of a block of data which you would use as an equation, the probability of randomly getting the equation is less than randomly getting the image itself. That is the crux of the ID argument: that the probability of an event is based on generating the final result, instead of generating an intermediate step. If I showed you two pictures of a fern on a display, you couldn't tell which was bit-mapped and which was generated. Because of that, you can't calculate the probability of the result, and thus the whole concept of CSI is not applicable to the subject of the generation of life-forms.

Toronto
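Toronto's underlying point, that stumbling on a short random generator is vastly easier than stumbling on the full bitmap it draws, reduces to simple arithmetic on improbability measured in bits. The one-megabit image size is my illustrative assumption; the 256-bit generator figure is the one he uses elsewhere in the thread.

```python
IMAGE_BITS = 1_000_000   # a million random pixels, one bit each (assumed size)
GENERATOR_BITS = 256     # the randomly assembled block of code Toronto posits

# Under uniform random bit-setting, an exact n-bit target has probability
# 2**-n, i.e. an improbability of n bits. The generator route is therefore
# 2**(IMAGE_BITS - GENERATOR_BITS) times more probable than the bitmap route.
advantage = IMAGE_BITS - GENERATOR_BITS
print(f"the generator route is 2^{advantage} times more probable")
```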
February 16, 2010 at 06:03 AM PDT
Toronto: "First of all, I don't want to make anyone angry, yet I feel that you are." Not at all. I just get frustrated when others resort to dishonest debating tactics -- obfuscation, strawmen, ignoring my arguments, etc. I've seen it all and I'm sure you have as well. If you don't understand any of my arguments, just please ask. Another thing is that I'm going over the same thing with multiple people and I just get a little tired of explaining the same basic concept over and over again. That's pretty much why I started bookmarking my discussions and explanations, so I can refer other people to them. Toronto: "We are two sides of a debate, not enemies. When I disagree with you it's because I feel you are wrong, not for any other reason." Not a problem at all, so long as everything is kept civil and you actually engage with what I am stating and explaining. Toronto: "That bit pattern that looks like a fern is 'generated' by the chaotic algorithm, not 'uncompressed' by it." CJYman: "Uhuh … and the result is a compressible event — defined by law and order. What's your point?" Toronto: "The point is that I no longer have to randomly arrive at the proper combination of millions of bits to reach the CSI represented by the fern; I only need to accidentally stumble onto 256." And I just responded to this above. That is why I'm getting a little frustrated. I am not disagreeing with this point of yours, and I've explained why it does nothing to dilute my argument. In fact, before we started this discussion I was stating pretty much the same point to others here on this thread. So, I'm really not sure who you are arguing with here. Toronto: "This greatly increases the probability of any event happening by chance, since we don't need to find the actual result we are looking for, just a generator." Exactly!! We don't have to find the end result, since it is an ordered, compressible event and as such is defined by law.
CJYman: "I have so far supported the contention that a lawful pattern such as your fern pattern can contain a large amount of CSI; however, if it is true that any chaotic set of laws will generate an algorithmically compressible pattern, then the specificity of an ordered pattern is 1 and thus there will be no CSI in the event. So, your example would fail on that account." Toronto: "That means that the DNA which results in the compressible me, has no CSI." Not at all, since compressibility doesn't automatically = no specificity. I stated that *ordered*, compressible patterns could have no specificity if I am right about that line of reasoning that I discussed with you. Here's the difference between a specifically ordered event and a specifically organized event. Since a specified event is an event which can be formulated as an independent pattern, the event of a human is formulated as an independent string of DNA as well as other levels of cellular information and epigenetics, and that is where the specificity lies. Now, to calculate specificity, we need to be able to gather some data on how rare certain arrangements of states of those independent patterns are that will produce functionality. As I've shown with the calculation for the protein Titin, at one of the lowest levels of biological information processing there is a lot of specificity, since biologically relevant, folding proteins are rare in the search space when compared with the probabilistic resources available. Conversely, with the ordered specificity that you refer to in your fern example, any random state that produces a collection of laws will produce ordered, compressible events. Thus, the specificity = 1 or very close to 1. IOW, it is really easy -- pretty much guaranteed -- for chance+law to generate ordered, compressible events such as the fern example. However, organized patterns aren't defined by law, and if they are highly specified and improbable then they are not best explained by chance.
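The Titin calculation CJYman refers to is not reproduced in the thread. A toy version, under the simplest uniform-chance assumption ID writers typically use (each of 20 amino acids equally likely at each position, compared against Dembski's roughly 10^150, i.e. 500-bit, universal probability bound), might look like this; the sequence length and the bound are my illustrative assumptions, not the commenter's actual numbers.

```python
import math

AMINO_ACIDS = 20
TITIN_LENGTH = 34_350  # roughly the residue count of the longest human titin isoform (assumed)
UPB_BITS = 500         # Dembski's ~10^150 universal probability bound, in bits

# Under a uniform chance hypothesis, each residue contributes log2(20),
# about 4.32 bits, so one exact sequence is astronomically improbable.
bits = TITIN_LENGTH * math.log2(AMINO_ACIDS)
print(f"{bits:.0f} bits of improbability vs a {UPB_BITS}-bit bound")
```

Note that this is exactly the "pure chance" number the thread's critic objects to: it quantifies a blind draw of the whole sequence and says nothing about law-like or selective processes.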
Toronto: "I am talking about how we view the evolutionary process itself. When evolution plays poker, it does so with an unlimited amount of draws. It doesn't need to draw a royal flush on the first deal." CJYman: "Unlimited? Where do you get that? That is blatantly incorrect." Toronto: "Evolution is doing its thing right now, every second of every day, on everything that is alive, and will continue to do so until the last living thing dies, whether tomorrow or in billions of centuries from now." Sure, if it is programmed well. In fact, the longer an EA runs and the higher the amount of organized CSI it produces, the stronger the argument for ID and the more evidence it provides against chance+law acting on their own absent intelligence. Toronto: "That is the claim of evolution, that it is a process which has no goal and will never end." No goal? Again, assuming your conclusions. Do you care to back up your assertion here? Furthermore, how do you know that evolution will never end? Toronto: "If an environment doesn't like what evolution produces it simply rejects it." Of course. That's the natural-selection part of evolution. It is pretty much stating the obvious. That which doesn't survive to reproduce doesn't pass on its genetic information. Toronto: "Not only does evolution play with an unlimited amount of draws," You mean an unlimited amount of possible genetic states? Of course, and when compared to the extremely limited amount of time, unguided processes have no chance of producing what we see produced by evolution. You are providing an ID argument by introducing the fact that the search space of life is virtually unlimited relative to the number of quantum state changes in the whole life of our universe. That's why Titin (only one protein) gives such a high value of CSI. So, if it is true that an EA is as hard to produce as the results it generates, then the evolution we see is not an unguided (merely law+chance absent intelligence) process. Toronto: "...
it plays with an almost unlimited amount of cards, i.e., everything currently alive." Nowhere near unlimited compared to the search space of life. In fact, the number of quantum state changes that have transpired in our whole universe pales in comparison to the size of the search space of life. Toronto, so far it has been a pleasure discussing this topic with you. I hope we can keep it that way. If I don't understand something I will ask you to clarify and I will attempt to engage all of your questions and arguments.CJYman
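CJYman's Titin-style rarity claim above can at least be put in numerical form. The figures below are illustrative assumptions for the shape of the argument (sequence length, functional fraction), not the actual Titin numbers from the thread:

```python
import math

# Illustrative assumptions only, not the thread's Titin calculation:
# a 300-residue protein region over a 20-letter amino-acid alphabet,
# with an assumed functional fraction of 1 in 10^74.
sequence_length = 300
alphabet_size = 20
functional_fraction = 1e-74   # assumed rarity of folding, functional sequences

search_space_bits = sequence_length * math.log2(alphabet_size)  # whole sequence space
rarity_bits = -math.log2(functional_fraction)                   # improbability of one hit

print(f"search space : {search_space_bits:.0f} bits")  # ~1297 bits
print(f"rarity       : {rarity_bits:.0f} bits")        # ~246 bits
```

Whether the resulting bit count exceeds a given probability bound depends entirely on the assumed functional fraction, which is the disputed quantity in this exchange.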
February 15, 2010, 07:39 PM PDT
CJYman, It is my assessment of your recent response that there is little to be gained from going further here, but I will note a couple of things.
I showed you exactly how, even without completing the calculation, we know we will arrive at a < 1 value. This is simply based on an understanding of taking -log2 of a large number multiplied by 1. If you assert that there is anything wrong with my explanation, it is up to you to go through the calculation and show me where I am wrong.
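The sign argument efren ts is making can be checked directly. Dembski's 2005 specified-complexity measure has the form chi = -log2(10^120 · phi_S(T) · P(T|H)); once the product inside the log exceeds 1, which the 10^120 replicational-resources factor alone guarantees when P(T|H) = 1, chi comes out negative, hence below 1, before any detailed calculation. A minimal sketch (phi_S here is just a placeholder argument):

```python
import math

# chi = -log2( 10^120 * phi_S(T) * P(T|H) ), per Dembski (2005).
# If a law-like process makes the outcome certain, P(T|H) = 1, and the
# 10^120 factor drives the product above 1, forcing chi negative.
def chi(phi_s, p):
    return -(math.log2(1e120) + math.log2(phi_s) + math.log2(p))

print(chi(phi_s=1, p=1.0))      # about -398.6: a law-determined (P = 1) outcome
print(chi(phi_s=1, p=2**-500))  # about 101.4: a 500-bit-improbable outcome
```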
CSI is the ID movement's signature concept. Your (and I use you as an avatar for all ID proponents) reluctance to use it in a manner that actually addresses any current scientific hypothesis tells more than you would otherwise let on. Any request to go beyond the "de novo creation of a protein X amino acids in length" and assess the probabilistic influence of known physical law on CSI is met with the ID advocate making arguments of the form "it is obvious, so the calculation is unnecessary" and/or telling the critic to do the calculation himself. Similar responses are seen when the ID advocate is asked to demonstrate CSI on a known undesigned object. I can only conclude that CSI, as a concept, is operable only in the trivial, which tells us nothing and is unusable when any level of detailed analysis is called for. I suppose I really should say unused since no one has even attempted to assess the probabilistic impact of known physical law on CSI. It is a Potemkin concept.
efren ts: "Yes and no. Yes, I am able to define nature. No, it isn’t yet clear it excludes ID Theory. We’ll figure it out once we get there. So, as the first step in that direction, I would refer you back to the final paragraph in my comment 467. With the clarification offered in that comment, do you agree that definitions should be neutral as not to bias the subsequent inquiry?" No, I still do not agree. A definition is what it is. The definition, though, will constrain its utility. In your example, the "incorrect" definition of ET merely makes it less useful.
And here is where we conclude. If you cannot see, or are just unwilling to concede, that a poor definition can impede a scientific inquiry by precluding both specific avenues of analysis and eventual conclusions, then there really isn't anywhere for us to go. By defining nature as law and chance (exclusive of intelligence) you have assumed your conclusion and have emphatically closed yourself off from any possible recognition of a natural/material explanation for human cognition. Whether you believe such an explanation exists or not (and I understand you are on the side of not), you have limited yourself to those conclusions you are predisposed to and have no way to get to a contrary conclusion. I can only conclude that the ID movement's rallying cry of "following the evidence wherever it leads" is a fine bit of political sloganeering, but not much more.efren ts
February 15, 2010, 05:08 PM PDT
CJYman, I promised a response to your "organized CSI" idea, and I'm sure you're not on the edge of your seat, but I'll take a stab at a beginning. As with much of ID, I'm hung up on the fundamental principles that underwrite your ideas. Take, for example, your description of organized CSI as "patterns not defined by law+chance". I can't tell if you're stating this as part of the definition of organized CSI, or if it's an empirical claim. If the former, it would seem that the definition of organized CSI is somewhat question-begging. Regardless, I'll explain a problem I have with ID's claims regarding "law and chance". First of all, I tend to avoid "law and chance" discussions, as I'm never sure how the terms are being used. We might think the terms refer to non-deterministic and deterministic phenomena respectively, but that would leave no third alternative. Furthermore, Dembski's usage of "chance" includes deterministic processes, and "law" sometimes refers to non-deterministic processes, e.g. QM. So I have no doubt that terminological problems will attend the following, but I'll forge ahead anyway. ID math deals pretty much exclusively with pure chance hypotheses. This is the case for traditional tornado-in-a-junkyard probabilities, Dembski's CSI (in practice, not theory), Marks and Dembski's EIL work, and Sewell's SLoT math. The obvious problem is that everybody already knows that structured entities, such as biological organisms, do not arise from pure chance, so ID proponents are wasting their time when they address the pure chance hypothesis. Dembski seems to recognize this problem, as he attempts to address law as well as chance. He defines CSI in terms of all relevant chance hypotheses, with "chance hypotheses" including law-like processes. The most obvious problem is that nobody actually does this when they calculate CSI -- they calculate it strictly in terms of pure chance. 
But suppose we decided to buck the trend and actually include all relevant "chance" hypotheses in our CSI calculations. How would we go about identifying our hypotheses? Presumably, we should include natural behaviors that are currently known to us, but what about aspects of nature that are unknown or not understood? Dembski argues that we don't need to worry about these, but is that a good scientific approach? When it was discovered that EM waves violate Galilean relativity, should scientists have simply attributed EM's highly improbable (i.e. impossible) behavior to design? But let's assume, for the sake of argument, that the currently understood laws of physics are exhaustive, and there is nothing left to discover on this front. Could we calculate CSI against this set of laws? Perhaps we could for some very simple cases, but certainly not for biological structures. How would we go about calculating the probability that physics, coupled with unknown (but non-biological) initial conditions, would give rise to biological structures? It's an established fact that complexity can arise from simplicity, especially when that simplicity is non-linear and closed-loop. How do we go about circumscribing the capabilities of law-like physical processes? It seems that, at best, we could rule out non-computable functions, but even that seems debatable. Switching gears, Dembski and Marks attempt to address non-random factors by invoking the NFL principle. But their framework actually adds nothing to the equation, as we still end up with nothing more than a hypothesis of pure randomness. The EIL takes into account non-random factors, but assumes that those factors were chosen randomly from an exhaustive set of possibilities. The problem is that randomly-chosen non-randomness is mathematically equivalent to randomness. This is easily proven for all of the scenarios that the EIL presents. (Except when it's the search algorithm that's being randomly chosen. 
In that case, Dembski and Marks assume that the aforementioned principle is true, so we don't need to prove it.) See sections 5 and 6 of Haggstrom's "Intelligent Design and the NFL Theorems" for elaboration. In summary, ID math doesn't show the limitations of law+chance -- it only shows the limitations of chance.R0b
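R0b's claim that "randomly-chosen non-randomness is mathematically equivalent to randomness" can be illustrated with a small simulation. This is my own sketch of the idea, not anything from the EIL papers: each trial picks a *biased* four-outcome distribution uniformly at random from the simplex, then samples once from it; marginally, the outcomes arrive uniformly, just as under pure chance.

```python
import random

random.seed(0)

def random_bias(k=4):
    # A uniformly random point on the probability simplex
    # (Dirichlet(1,...,1), constructed from normalized exponentials).
    w = [random.expovariate(1.0) for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

counts = [0] * 4
trials = 100_000
for _ in range(trials):
    p = random_bias()                             # a randomly chosen "non-random" rule
    counts[random.choices(range(4), weights=p)[0]] += 1

print([c / trials for c in counts])               # each frequency is close to 0.25
```

Each individual trial is heavily biased, but since the bias itself is chosen with no preference for any outcome, the net behavior is indistinguishable from a uniform draw.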
February 15, 2010, 03:44 PM PDT
CJYman, First of all, I don't want to make anyone angry, yet I feel that you are. We are two sides of a debate, not enemies. When I disagree with you it's because I feel you are wrong, not for any other reason. What I have found since coming to this site is that there are a lot of people here who know a lot more about some things than I do.
Toronto: “That bit pattern that looks like a fern is “generated” by the chaotic algorithm, not “uncompressed” by it.” Uhuh … and the result is a compressible event — defined by law and order. What’s your point?
The point is that I no longer have to randomly arrive at the proper combination of millions of bits to reach the CSI represented by the fern; I only need to accidentally stumble onto the right 256. This greatly increases the probability of any event happening by chance, since we don't need to find the actual result we are looking for, just a generator.
I have so far supported the contention that a lawful pattern such as your fern pattern can contain a large amount of CSI, however if it is true that any chaotic set of laws will generate an algorithmically compressible pattern, then the specificity of an ordered pattern is 1 and thus there will be no CSI in the event. So, your example would fail on that account.
That means that the DNA which results in the compressible me, has no CSI.
Toronto: “I am talking about how we view the evolutionary process itself. When evolution plays poker, it does so with an unlimited amount of draws. It doesn’t need to draw a royal flush on the first deal.” [CJYman] Unlimited? Where do you get that? That is blatantly incorrect.
Evolution is doing its thing right now, every second of every day, on everything that is alive, and will continue to do so until the last living thing dies, whether tomorrow or in billions of centuries from now. That is the claim of evolution, that it is a process which has no goal and will never end. If an environment doesn't like what evolution produces it simply rejects it. Not only does evolution play with an unlimited amount of draws, it plays with an almost unlimited amount of cards, i.e., everything currently alive.Toronto
February 15, 2010, 12:46 PM PDT
Toronto: “It is a dynamic process so it must be modeled as a dynamic process, not a static one.” CJYman: "I am not arguing against the operation or efficacy of an EA, as an EA can be as efficient as the programmer is able to program it to be." Toronto: "I am not talking about programming an EA here, ..." Well, if you wish to argue against my conclusion then you should be talking about programming an EA, since the topic of programming an EA that will produce organized CSI is integral to my argument. Toronto: "I am talking about how we view the evolutionary process itself. When evolution plays poker, it does so with an unlimited amount of draws. It doesn’t need to draw a royal flush on the first deal." Unlimited? Where do you get that? That is blatantly incorrect. And when did I ever state that it needs to draw a royal flush on the first draw? It seems that you are beginning to argue with someone else here, yet you are directing your comments to me. I'm sorry, but I can't afford to waste my time and if you'd like I can leave so that you can continue this discussion with whoever it is you are arguing with. Or, it could just be that you are misunderstanding me? But, then again, you aren't answering the questions I'm asking and you seem not to be reading the links to my explanations of terms I am using, so you seem to be ignoring me. What'll it be? Do you care to continue this discussion with me? If you are unsure of what I mean by something, please ask. CJYman: "To match a search space to search algorithm in order to provide a pathway that performs better than chance in finding an event is as hard (in terms of specified probabilities) as generating that event by chance without the EA in the first place. Moreover, the search space can’t be merely the result of chance — characterized by a uniform probability distribution." Toronto: "Search algorithms don’t apply to evolution since no end result is being searched for." 
First, it seems you are using the term "search" incorrectly -- as necessarily a teleological concept rather than merely stepping through a search space. Second, it finally seems that we are getting somewhere; however, you are assuming your conclusion. Evolution is the result of a search space being matched to a search procedure in a specific way -- to produce better than chance results. That is directly from the NFLT. So, in order to generate an EA, a search is being done to provide that matching. The question now becomes, will a random search or a random search combined with random laws produce that matching or is an intelligence (foresighted) search required? Do you understand this so far? Toronto: "Imagine a buffet where people have an unlimited amount of plates." Evolution does not have an unlimited amount of anything so I'm not sure what the analogy is here. Toronto: "The food that is consumed in the largest amount is the one that is constantly replenished." Is that a randomly generated rule set or an intelligently generated rule set? Will the food get replenished by law+chance or by intelligence? Toronto: "No one could predict what food that might be on any night and they don’t even try to calculate the resulting winner with any type of algorithm at all." I think I see what you are stating. Is this akin to the engineers designing an EA that will produce a more efficient antenna, without predicting what it will look like in the end? Toronto: "It’s not necessary for the restaurant to be a success. In another part of the city, it might be a different type of food. If you bring out food based on a calculated prediction instead of the selection of the customers, your business may fail completely. Evolution is a process that reacts, not predicts." I fully agree. 
That type of evolution is the survival of the fittest that is seen in business all the time and would not be possible on earth without the creative genius of intelligent, foresight-utilizing systems known as humans. Business success is relative to a person's ability to see into the future and set up their business in the present for events that will occur in the future. Sure there is some luck involved in a business, but it ain't Vegas out there in the business world, and if you think that luck will get you going and keep you going and "evolve you into something better" I would highly recommend that you don't get into business for too long. The principles of evolution merely weed out those who aren't able to succeed in a specific business venture at a specific time for a vast number of possible reasons. Now, can we get back to my argument? Actually, better yet, if evolution is so powerful absent intelligence as you seem to wish to argue, which is something none of your examples have yet shown, please just show me one example of engineers generating functionally organized results based on law+chance without constraining initial conditions for a future known (foresighted) result of form or function.CJYman
February 14, 2010, 10:09 PM PDT
Toronto: “The bits generated in the display of the fern are in the millions while the algorithm can be represented in about 256.” CJYman: "Yes, it is a fact that you can compress ordered patterns." Toronto: "But the chaotic algorithm is not a compressor." I never said that a chaotic algorithm is a "compressor." I stated that it is a fact that you can compress ordered patterns. The algorithm is a compressed version of the resulting effect. That is exactly what a law is. Have you yet read my explanation of law, order, and organization that I linked for you? Toronto: "That bit pattern that looks like a fern is “generated” by the chaotic algorithm, not “uncompressed” by it." Uhuh ... and the result is a compressible event -- defined by law and order. What's your point? Toronto: "There is no fern-like bit pattern presented to the algorithm that is in some way stored. In other words, the information does not exist at all until the algorithm is executed." That is incorrect. All the information for an ordered compressible pattern (such as the fern) is contained in the algorithm. i.e., print "12 X 10" contains all the information necessary to print, and is a compressed version of, 12121212121212121212. Are you by any chance aware of K-Complexity? Toronto: "That is the point of yours I was addressing. The CSI of the algorithm is orders of magnitude less than the CSI of the resulting output. With a small CSI input, (the algorithm only), I get a much larger CSI output, a picture of a fern." That actually brings up something I have been thinking over for some time. I have been in disagreement with Dembski that law is a subset of chance, however you have brought up something that I have been thinking about which makes me not so sure. 
I have so far supported the contention that a lawful pattern such as your fern pattern can contain a large amount of CSI, however if it is true that any chaotic set of laws will generate an algorithmically compressible pattern, then the specificity of an ordered pattern is 1 and thus there will be no CSI in the event. So, your example would fail on that account. OTOH, CSI only measures chance, and if it doesn't take law into consideration, then yes with law one can generate an ordered pattern with lots of CSI if the above para is not taken into consideration. But, that is what I have been arguing for all along. So either way you look at it, your example does nothing to negate my argument that ... 1. Law+chance won't generate organized CSI since law doesn't even define such patterns and chance is not the best explanation. 2. An EA is just as hard to produce as the event that it generates. The way to look at this is that if we are considering only chance, then your fern example contains large amounts of CSI, but if we include law as a cause there is no CSI. Of course, you have provided evidence (by referencing chaos theory) that a random set of laws will produce algorithmically compressible patterns such as the fern example, so if that is your contention, there is no CSI in the fern example. Now, let's get back to my argument referencing eliminating law+chance on their own (absent intelligence) as per the EF and organized CSI. Have you read my explanation of organization that I linked for you? CJYman: "Chaos theory pretty much shows us that random sets of laws can produce order, but no one has shown that they can produce organization." Toronto: "The Mandelbrot Set is organized, and it is a result of a chaotic algorithm." Although I highly doubt that law+chance will produce the Mandelbrot Set, it is still defined as ordered and compressible and is in no way organized as per my explanation of "organization" that I linked for you. You must not have read that explanation. 
Please don't tell me you are ignoring my arguments and trying to do that whole "strawman" thing. That becomes a tremendous waste of time for both of us. Toronto: "StephenB asked for a definition of a natural cause and this is what I provided. “A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics.- ” Can everyone agree that this is what we mean by a natural cause?" I already stated that was pretty much StephenB's definition and I agree with it as an artificial yet useful definition. It's really too bad you missed half of the discussion. But, let's continue anyway. CJYman: "That’s pretty much StephenB’s definition and as such, as I stated before, whatever can “go above and beyond, and supervenes over [nature]” is “super[natural].” IOW, nature is subservient to that which is supernature. Agreed?" Toronto: "I don’t think we can go that far without knowing the intent of the designer." What designer? I haven't even touched on a designer yet. I merely asked you if you agreed with the definition of "supernature" given your definition of "nature" and the actual definition of the prefix "super." Can you please just answer the question. Toronto: "If something created nature, the intention might be to be a peer of the designer. Do you agree that we can’t know that without investigating the designer?" If I agreed with you, I would be agreeing that archaeology, forensics, and SETI are invalid areas of research. Thus, I cannot agree with you.CJYman
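CJYman's 'print "12 X 10"' point above is the standard Kolmogorov-complexity observation: a short description (the program) fully determines a much longer ordered output, which is exactly what makes the pattern compressible. A runnable restatement of his example:

```python
# A 9-character program that expands into a 20-character ordered pattern:
# the program, not the output, is the short description.
program = '"12" * 10'        # the compressed description
output = eval(program)       # expands to the full ordered pattern

print(output)                            # 12121212121212121212
print(len(program), "->", len(output))   # 9 -> 20
```

For a truly random 20-character string, no description meaningfully shorter than the string itself exists; that gap is what separates ordered from incompressible sequences.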
February 14, 2010, 10:03 PM PDT
CJYman, kairosfocus
To match a search space to search algorithm in order to provide a pathway that performs better than chance in finding an event is as hard (in terms of specified probabilities) as generating that event by chance without the EA in the first place. Moreover, the search space can’t be merely the result of chance — characterized by a uniform probability distribution.
Search algorithms don't apply to evolution since no end result is being searched for. Imagine a buffet where people have an unlimited amount of plates. The food that is consumed in the largest amount is the one that is constantly replenished. No one could predict what food that might be on any night and they don't even try to calculate the resulting winner with any type of algorithm at all. It's not necessary for the restaurant to be a success. In another part of the city, it might be a different type of food. If you bring out food based on a calculated prediction instead of the selection of the customers, your business may fail completely. Evolution is a process that reacts, not predicts.Toronto
February 14, 2010, 07:11 AM PDT
CJYman,
Toronto: “It is a dynamic process so it must be modeled as a dynamic process, not a static one.” I am not arguing against the operation or efficacy of an EA, as an EA can be as efficient as the programmer is able to program it to be.
I am not talking about programming an EA here, I am talking about how we view the evolutionary process itself. When evolution plays poker, it does so with an unlimited amount of draws. It doesn't need to draw a royal flush on the first deal.Toronto
February 14, 2010, 06:55 AM PDT
CJYman,
That’s pretty much StephenB’s definition and as such, as I stated before, whatever can “go above and beyond, and supervenes over [nature]” is “super[natural].” IOW, nature is subservient to that which is supernature. Agreed?
I don't think we can go that far without knowing the intent of the designer. If something created nature, the intention might be to be a peer of the designer. Do you agree that we can't know that without investigating the designer?Toronto
February 13, 2010, 06:27 PM PDT
CJYman,
Toronto: “The bits generated in the display of the fern are in the millions while the algorithm can be represented in about 256.”
CJYman: "Yes, it is a fact that you can compress ordered patterns."
But the chaotic algorithm is not a compressor. That bit pattern that looks like a fern is "generated" by the chaotic algorithm, not "uncompressed" by it. There is no fern-like bit pattern presented to the algorithm that is in some way stored. In other words, the information does not exist at all until the algorithm is executed. That is the point of yours I was addressing. The CSI of the algorithm is orders of magnitude less than the CSI of the resulting output. With a small CSI input, (the algorithm only), I get a much larger CSI output, a picture of a fern.
Chaos theory pretty much shows us that random sets of laws can produce order, but no one has shown that they can produce organization.
The Mandelbrot Set is organized, and it is a result of a chaotic algorithm. StephenB asked for a definition of a natural cause and this is what I provided.
“A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics.- ”
Can everyone agree that this is what we mean by a natural cause? Let's make StephenB's first post noteworthy as providing a term that both sides of the debate can use freely without biasing any of our arguments.Toronto
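The fern generator Toronto and Gleick's book describe is Barnsley's iterated function system: four affine maps and their selection probabilities, a few dozen constants in all (roughly the 256 bits Toronto estimates), yet iterating them emits an arbitrarily long fern-shaped point cloud. A minimal sketch using Barnsley's published coefficients:

```python
import random

# Barnsley fern IFS: (a, b, c, d, e, f, p) defines the affine map
#   x' = a*x + b*y + e,  y' = c*x + d*y + f, chosen with probability p.
MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left-hand leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right-hand leaflet
]

def barnsley_fern(n, seed=0):
    rng = random.Random(seed)
    x = y = 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = barnsley_fern(100_000)
# Millions of output bits from a handful of constants: the generator,
# not the image, is the short description.
print(len(pts), "points; y spans",
      round(min(p[1] for p in pts), 2), "to", round(max(p[1] for p in pts), 2))
```

Plotting the points (e.g. with matplotlib) reproduces the familiar fern; the program itself is the compressed form of that image.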
February 13, 2010, 06:11 PM PDT
efren ts, It appears my response to you has been rescued from the spam filter at comment #478.CJYman
February 13, 2010, 04:37 PM PDT
CJYman: "The point is that, according to the most recent work by Dembski and Marks, it is just as improbable to generate an EA that will lead to a certain event as it is to generate that event from scratch." Toronto: "But evolution does not generate events from scratch. It starts with its current state and moves to the next." ... and an EA is a matching between search space and search procedure. The point is that it is just as improbable to generate an EA (create that matching) that will lead to a certain event as it is to generate that event from scratch. Toronto: "It is a dynamic process so it must be modeled as a dynamic process, not a static one." I am not arguing against the operation or efficacy of an EA, as an EA can be as efficient as the programmer is able to program it to be. Toronto: "Imagine a poker game where you hold 4 cards of a possible Royal Flush and you have a single card draw available to you. Are the odds of getting your one required card the same as drawing five on the first deal?" I would say you definitely have a better chance of drawing the one card. Now, I have a question for you. With a deck of cards, randomly shuffled, and a randomly generated search procedure will a computer be able to beat the odds of finding any of the hands in poker? Actually I should start by asking you if you are familiar with the basic concept of the No Free Lunch Theorems? Toronto: "What I would like to see is a calculation of the CSI moving from, e.g., (state 10^100) to (state 10^100+1)." That would depend on the size of the search space and the number of specified events to the next state. As KF has pointed out many times, if the search space is composed of small islands of functionality inside vast expanses of non-functionality, then EAs won't work to produce efficient results. 
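Toronto's poker question has a definite answer under standard combinatorics (these are textbook values, not numbers computed anywhere in the thread):

```python
from math import comb

p_dealt = 4 / comb(52, 5)  # any of the 4 royal flushes on a five-card deal
p_draw  = 1 / 47           # holding four royal-flush cards, pulling the fifth
                           # from the 47 unseen cards

print(f"first deal: 1 in {1 / p_dealt:,.0f}")  # 1 in 649,740
print(f"one draw  : 1 in {1 / p_draw:.0f}")    # 1 in 47
# Repeated draws help, but not unboundedly: P(at least one success in n
# independent tries) is 1 - (1 - p)^n, below 1 for every finite n.
print(round(1 - (1 - p_draw) ** 100, 3))       # about 0.884 after 100 draws
```

So the two sides' dispute reduces to whether the number of available "draws" and the per-draw success probability are large enough, which is exactly the probabilistic-resources question.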
To match a search space to search algorithm in order to provide a pathway that performs better than chance in finding an event is as hard (in terms of specified probabilities) as generating that event by chance without the EA in the first place. Moreover, the search space can't be merely the result of chance -- characterized by a uniform probability distribution. Toronto: "The Chaos book written by James Gleick, whose exact title I forget, has an interesting part relating to the generation of ferns by chaotic algorithms." That is interesting and shows that a random set of laws will generate periodic, ordered, algorithmically compressible patterns. Conversely, organized CSI is algorithmically complex, aperiodic, and neither defined by law nor best explained by chance. Chaos theory pretty much shows us that random sets of laws can produce order, but no one has shown that they can produce organization (as per my previous explanations and Trevors and Abel's paper on the "Capabilities of Chaos and Complexity"). Also, please refer to my comments distinguishing order from organization: https://uncommondescent.com/intelligent-design/polanyi-and-ontogenetic-emergence/#comment-337588 CJYman: "The point is that, according to the most recent work by Dembski and Marks, it is just as improbable to generate an EA that will lead to a certain event as it is to generate that event from scratch." Toronto: "The bits generated in the display of the fern are in the millions while the algorithm can be represented in about 256." Yes, it is a fact that you can compress ordered patterns. Toronto: "That means if we look only at the amount of bits in the two items, there is far less CSI in the algorithm than in the event." But, of course, law defines ordered, compressible patterns, so we defer to law as per the EF. 
Toronto: "A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics.- " That's pretty much StephenB's definition and as such, as I stated before, whatever can "go above and beyond, and supervenes over [nature]" is "super[natural]." IOW, nature is subservient to that which is supernature. Agreed?CJYman
February 13, 2010, 04:26 PM PDT
kairosfocus @496
[1] A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics.-
How do you feel about this strictly as a definition? Is this term something that we can agree on as defining a natural cause?Toronto
February 13, 2010, 09:15 AM PDT
CJYman, Is it possible the spam filter is now somehow reacting to your screen name? As a test, you might try posting the same comment with two different user names. Also, due to my moderation issues, you may have missed my comments at 488 and 489.Toronto
February 13, 2010, 09:08 AM PDT
Toronto: Appreciated. Re, 493 [or so once CJY gets out of mod]: [1] A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics.- a --> I take it you mean to say the fundamental forces of physics [currently, understood under four heads: strong and weak nuclear, electromagnetic, gravity], as mediated by objects, surroundings, etc. b --> this would be definitely one cluster of mechanical necessity and stochastic/ probabilistic patterns, i.e. chance + necessity. b --> So soon as we move up to living systems, though, this begins to break down, so we need to address issues of higher order behaviours that harness but go beyond the direct action of forces of physics and derived chemistry, materials and structures. Some seem to be built in programs or reflexes, others "instinct" -- e.g. nest building by birds -- yet others purposeful and to one extent or another intelligent -- e.g. homing pigeons getting home from novel locations, and of course use of language by humans. c --> We also have to factor in the role of information and associated algorithms and programming, e.g. in the self-regulating life of the cell. That info has to be accounted for, and it definitely is in nature [but not necessarily originally of nature . . . and we cannot afford some q-begging here]. d --> The definition of nature should not a priori impose the assumption that such spontaneously emerged, especially given the problems of accounting for such functionally specific configs in the sea of non-function given the resources of search of no more than 10^150 moves on the gamut of our cosmos. e --> Broadening, we see that nature brings to bear forces of chance plus necessity, acting on objects as they are, in circumstances that happen to be there, without active intelligent, purposeful intervention. 
f --> by contrast, intelligence acts with purpose, and uses skill, knowledge, language, algorithms/ procedures etc, all of which tend to manifest in more or less complex, functional organisation and associated information that would otherwise be most unlikely on the hyp of chance + necessity. (That is we infer from regularity to law, and from contingency to chance by default unless we see signs of intelligence.) g --> Thence, the significance of the EF approach to the broadly considered generic scientific investigatory strategy -- "describe, explain, predict, control or influence" -- on empirical evidence and inference to best explanation. g --> It might help to reflect too on Dembski's recent statement of purpose for the EIL:
Intelligent design is the study of patterns in nature best explained as the product of intelligence . . . Archeology, forensics, and the search for extraterrestrial intelligence (SETI) all fall under this definition. In each of these cases, however, the intelligences in question could be the result of an evolutionary process. But what if patterns best explained as the product of intelligence exist in biological systems? . . . By looking to information theory, a well-established branch of the engineering and mathematical sciences, evolutionary informatics shows that patterns we ordinarily ascribe to intelligence, when arising from an evolutionary process, must be referred to sources of information external to that process [[nb: as it is not seriously credible that complex algorithmic or linguistic, specifically functional information comes about by in effect “lucky noise”]. Such sources of information may then themselves be the result of other, deeper evolutionary processes. But what enables these evolutionary processes in turn to produce such sources of information? Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality’s ability to produce the required information. Evolutionary informatics . . . thus points to the need for an ultimate information source qua intelligent designer.
GEM of TKI
kairosfocus
February 13, 2010, 05:43 AM PDT
Moderation and Administrators, Sorry to bother you again, but I may have a comment caught in the spam filter. If someone could check into this that would be great. Thank you.
CJYman
February 12, 2010, 07:35 AM PDT
efren ts,

CJYman: "If you are implying that it is easier for law+chance to build an EA that will generate CSI, then please just show me that it is easier for law+chance to generate an EA that will produce this comment of mine absent intelligence in its causal chain."

efren ts: "Well, it is a shame that Zachriel was banned, since he has done some work in that regard. Google “Word Mutagenation” and “Phrasenation” to see what he has done. First hit on both searches."

Zachriel and I have a lengthy history, and I've been well aware of his EAs for quite some time now. The question still remains: is it easier for law+chance absent intelligence to program those EAs that Zachriel has produced than for law+chance to generate those same end points without the EA? Let's start with the NFLT. I highly suggest you read through it.

CJYman: "It’s about time you start backing up your assertions about what law+chance can do to generate those patterns not even defined by law+chance"

efren ts: "No one, but no one, can get to the proof requested in the first part of the sentence until they can get past the stacked deck in the second part."

I've shown that CSI reliably eliminates chance on two different accounts -- correlation and probability. Further, I've shown that "organized" patterns aren't defined by law. It's all explained in one of those links to my previous comments that I sent you to. So, again, it's about time you start backing up your assertions about what law+chance can do to generate those patterns not even defined by law+chance.

efren ts: "In general, I would point to the vast body of empirical science across multiple disciplines that addresses the efficacy of evolution in nature."

When have I ever questioned the efficacy of evolution in nature? It is genuinely a beautiful process.
If I haven't told you yet, which I'm pretty sure I have, I am by no means an interventionist; I agree that some type of abiogenetical process must have occurred in the past, and I am fully aware that evolution works marvelously well. So much for your strawman.

efren ts: "You are the one making the claim that 150 years of empirical science is wrong. It is your responsibility to support that claim."

150 years of showing us that evolution most likely occurred? When have I ever even attempted to negate that? I think I'm gonna have to start counting strawmen here just to see how much energy you are expending erecting and then destroying them.

efren ts: "To date, your argument boils down to “it can’t be done de novo and, obviously, EAs are less probable, therefore it is proven.”"

Not at all. I have never stated that anything is "proven." It was R0b who pointed to the mathematically "proven" results published by Dembski and Marks which show that EAs are just as improbable as the events which they produce. I have merely referenced the NFLT and the work done by Dembski and Marks. Let's start with the NFLT, since that is the foundation for their work. Do you have any specific problem with it?

efren ts: "If you tell me that you will get to supporting the “obviously” with R0b, I will be happy to let you do so, since the sentence quoted above does nicely bring us back to the questions I am more interested in exploring with you regarding neutrality of definitions."

The "obviously" is based on the fact that no one has yet been able to produce a result which falsifies the ID hypothesis, along with the fact that Dembski and Marks have already done the work which supports it mathematically. Do you have any specific problem with their work, which is merely an extension of the NFLT? If you wish to discuss something in particular, just bring your question forward.
There is no need to build up strawman after strawman while ignoring my explanation of how organized CSI is neither defined by law nor chance (absent intelligence). Which brings me back to the point that you have not yet backed up your implied assertion that law+chance will produce events not even defined by law+chance. Evolution itself is not an "obvious" creation of only law+chance, so if you assert that it is, then please back up your assertion with some evidence. So, if you assert that law+chance will generate either patterns not defined by law+chance, or the EA to produce those patterns by matching search space to search procedure, then you also need to back up your assertions -- something I have already done by explaining my argument in detail and referencing where the NFLT and the Dembski and Marks work support my conclusions.

efren ts: "But, I will need to wait until your comment clears moderation and I have time to read it. As you might imagine, my wife of 18 years has a claim on my time this weekend."

That is perfectly understandable. My wife also has that same claim. And yes, I am looking forward to my comment being cleared.
CJYman
February 12, 2010, 07:32 AM PDT
kairosfocus, A definition of natural cause: [1] A natural cause is one that solely originates from, is constrained by, and is the result of, the forces of physics. An apple falling from a tree is a "chance" event that is "directed" by the force of gravity. No outside intelligent agent is required to make that apple too ripe and heavy to cling to the tree, or to make it take the shortest path available to the ground when it finally breaks its bond. Evolution works the same way, restricted by the forces of physics and thus directed by them.
Toronto
February 12, 2010, 06:17 AM PDT
Oh yes: EAs require differential function to foster hill-climbing within an island of already established function. The issue being examined by design theory is: how do we get to the shores of islands of function in the face of a vast non-functional sea of configs? The empirically tested answer to that is: by intelligence; that is, art-ificial injection of active information. And in the case of EAs, getting them to initially work requires just such intelligent input, and -- let us remember -- their progress requires an existing degree of function. GEM of TKI
kairosfocus
February 12, 2010, 05:59 AM PDT
Note: As we come close to 500 posts, we still cannot see a non-question-begging definition of "nature" and "natural cause" by evo mat advocates that is materially different from: occurrences tracing to credibly undirected chance +/or necessity.
kairosfocus
February 12, 2010, 05:47 AM PDT
Re:
In general, I would point to the vast body of empirical science across multiple disciplines that addresses the efficacy of evolution in nature.
Elephant hurling in support of the a priori imposition that -- as Johnson pointed out [cf. cite earlier today] -- forces an evolutionary materialist explanation of origins and censors out critique. (When a summary statement tosses a body of claimed evidence into the ring and says "that settles it" without specific, substantial examination on the merits, the fallacy of elephant hurling has entered the ring. Here it is also a case of appeal to blind loyalty to the authority of the evolutionary materialist magisterium.) When worldview-level issues are in the mix, only a level-playing-field comparative difficulties analysis can solve the problem. And, ET would do well to take a look at Wells' critique of the icons that have persuaded so many about the solidity of the claimed "proofs" of macroevolution, e.g. here. GEM of TKI

PS: Even the much despised Creationists [in this case Dr Jonathan Sarfati of Australia] have some serious words on the problem here.
kairosfocus
February 12, 2010, 05:42 AM PDT
CJYman, The Chaos book written by James Gleick, whose exact title I forget, has an interesting part relating to the generation of ferns by chaotic algorithms.
The point is that, according to the most recent work by Dembski and Marks, it is just as improbable to generate an EA that will lead to a certain event as it is to generate that event from scratch.
The bits generated in the display of the fern are in the millions, while the algorithm can be represented in about 256 bits. That means if we look only at the amount of bits in the two items, there is far less CSI in the algorithm than in the event.
Toronto
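The fern-from-a-tiny-algorithm example Toronto alludes to is conventionally illustrated by Barnsley's fern, an iterated function system in which a handful of coefficients, iterated via the "chaos game", produces millions of plotted points. A minimal sketch (standard Barnsley coefficients; nothing here is computed in the thread itself):

```python
import random

# Coefficients of Barnsley's four affine maps
# (x, y) -> (a*x + b*y + e, c*x + d*y + f), each chosen per iteration
# with probability p.  The whole "program" is this handful of numbers,
# yet iterating it traces an arbitrarily detailed fern.
MAPS = [
    #  a,     b,     c,    d,    e,   f,    p
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # ever-smaller copies
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # right leaflet
]

def barnsley_fern(n_points, seed=0):
    """Run the chaos game: repeatedly pick a map at random and apply it."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        r = rng.random()
        cum = 0.0
        for a, b, c, d, e, f, p in MAPS:
            cum += p
            if r <= cum:
                break  # falls through to the last map if rounding leaves r > cum
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = barnsley_fern(100_000)
```

The 28 coefficients above fit comfortably in a few hundred bits, while a rendered image of the 100,000 points runs to megabits, which is the contrast between algorithm size and output size the comment is pointing at.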
February 12, 2010, 05:33 AM PDT
CJYman,
The point is that, according to the most recent work by Dembski and Marks, it is just as improbable to generate an EA that will lead to a certain event as it is to generate that event from scratch.
But evolution does not generate events from scratch. It starts with its current state and moves to the next. It is a dynamic process so it must be modeled as a dynamic process, not a static one. Imagine a poker game where you hold 4 cards of a possible Royal Flush and you have a single card draw available to you. Are the odds of getting your one required card the same as drawing five on the first deal? What I would like to see is a calculation of the CSI moving from, e.g., (state 10^100) to (state 10^100+1).
Toronto
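Toronto's poker analogy can be made quantitative with standard combinatorics (my own worked figures, not anything calculated in the thread): completing a royal flush from four held cards is vastly more likely than being dealt one outright.

```python
from math import comb

# Probability that five freshly dealt cards form a royal flush:
# 4 suits out of C(52, 5) equally likely five-card hands.
p_dealt_outright = 4 / comb(52, 5)

# Probability of completing the flush when four of the five cards are
# already held and one card is drawn: exactly 1 useful card among the
# 47 unseen cards.
p_one_card_draw = 1 / 47

# How much the inherited starting state improves the odds.
ratio = p_one_card_draw / p_dealt_outright
```

With these numbers the single draw is roughly four orders of magnitude more probable than the from-scratch deal, which is the "current state to next state" point being made.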
February 12, 2010, 05:13 AM PDT
If you are implying that it is easier for law+chance to build an EA that will generate CSI, then please just show me that it is easier for law+chance to generate an EA that will produce this comment of mine absent intelligence in its causal chain.
Well, it is a shame that Zachriel was banned, since he has done some work in that regard. Google "Word Mutagenation" and "Phrasenation" to see what he has done. First hit on both searches.
It’s about time you start backing up your assertions about what law+chance can do to generate those patterns not even defined by law+chance
No one, but no one, can get to the proof requested in the first part of the sentence until they can get past the stacked deck in the second part. In general, I would point to the vast body of empirical science across multiple disciplines that addresses the efficacy of evolution in nature. You are the one making the claim that 150 years of empirical science is wrong. It is your responsibility to support that claim. To date, your argument boils down to "it can't be done de novo and, obviously, EAs are less probable, therefore it is proven." Of course, it isn't obvious to critics and they ask you to justify that, which you haven't done yet. So, I am not ignoring you; I am merely stating that it is not my job to do your work for you. If you tell me that you will get to supporting the "obviously" with R0b, I will be happy to let you do so, since the sentence quoted above does nicely bring us back to the questions I am more interested in exploring with you regarding neutrality of definitions. But, I will need to wait until your comment clears moderation and I have time to read it. As you might imagine, my wife of 18 years has a claim on my time this weekend.
efren ts
February 12, 2010, 04:15 AM PDT
PPS: On [searches for searches]^n. It should be fairly clear that EAs as observed are ALGORITHMS, designed by known intelligences, and tailored by them to particular hill-climbing tasks. Wiki:
Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the environment within which the solutions "live" . . . . Usually, an initial population of randomly generated candidate solutions comprise the first generation. The fitness function is applied to the candidate solutions and any subsequent offspring. In selection, parents for the next generation are chosen with a bias towards higher fitness. The parents reproduce one or two offspring (new candidates) by copying their genes, with two possible changes: crossover recombines the parental genes and mutation alters the genotype of an individual in a random way. These new candidates compete with old candidates for their place in the next generation (survival of the fittest). This process can be repeated until a candidate with sufficient quality (a solution) is found or a previously defined computational limit is reached.
Thus, the EA strategy begs the key question being asked by design theory: how do we get to the shores of initial functionality, which is plainly the precondition for some members of a population having a higher fitness that can then be differentially selected -- especially in a context where such function is based on specific, complex -- often code-based -- organisation? All of the above in turn pivots on having a carefully set up match between algorithm and optimisation challenge. So, in a real world where the sea of non-function reduces the islands of specific organised complex function to a dust of islands -- just see what a fairly modest degree of random perturbation will do to the EA's implementing code! -- the biggest challenge is to get to such a match. In general we have horses for courses, and the space of potential EAs is another highly complex config space, leading to the next level of search. But in turn that is another algorithm . . . And slowly but surely, an infinite regress of cosmologically implausible search challenges emerges. In short, the attempt to substitute computer simulation for the challenge to get to OOL and to body plan level biodiversity by chance + necessity only, fails. And, onward, the attempt to get to a credible reasoning mind on such premises also fails, implicating a self-referential inconsistency of the first order. So the Lewontinian imposition of a priori materialism through the rule of methodological naturalism fails.
kairosfocus
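The EA loop quoted from Wiki above can be sketched in a few lines. This is a generic illustrative toy (a Dawkins-style "weasel" string search; the target, parameters, and selection scheme are my own choices, not anyone's in the thread); note in the comments that the fitness function and target are supplied by the programmer, which is exactly the "match between algorithm and optimisation challenge" both sides are arguing about.

```python
import random

def evolve(target, pop_size=100, mut_rate=0.05, max_gens=10_000, seed=0):
    """Minimal generational EA.  Fitness = number of characters matching
    `target`.  Both the target and the fitness measure are written by the
    programmer, so the search is hill-climbing inside a pre-defined
    'island of function'."""
    rng = random.Random(seed)
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    # Generation 0: random candidate strings of the right length.
    pop = ["".join(rng.choice(alphabet) for _ in target)
           for _ in range(pop_size)]

    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            return pop[0], gen
        parents = pop[:pop_size // 5]  # truncation selection: keep top 20%
        # Offspring: copy a random parent, mutating each character with
        # probability mut_rate.
        pop = ["".join(c if rng.random() > mut_rate else rng.choice(alphabet)
                       for c in rng.choice(parents))
               for _ in range(pop_size)]

    pop.sort(key=fitness, reverse=True)
    return pop[0], max_gens

best, gens = evolve("METHINKS IT IS LIKE A WEASEL")
```

The design choice under dispute is visible in the signature: `target`, `fitness`, `mut_rate`, and the selection rule are all inputs chosen by the programmer, not outputs of the search.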
February 12, 2010, 04:13 AM PDT
PS: To see the further implications of Wicken's "wiring diagram" remark, we might want to compare the functional layout of a petroleum plant with a diagram of the integrated set of metabolic reactions of the living cell, especially given Denton's observation:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . 
We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . . Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated . . . . [Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331. This work is a classic that is still well worth reading.]
j --> In short, both are functionally, "aperiodically" organised and integrated spatially and temporally on a "wiring diagram" to achieve definite outcomes; both are exceedingly complex; and neither is reasonably explicable on undirected chance + necessity on the gamut of our observed cosmos.
k --> But the first shows, indisputably, how intelligence can design and implement such a complex chemical engineering "plant".
l --> The difference: the petrochem plant takes up many acres and requires a significant crew to operate it, but the cell is completely autonomous, self-replicating [on stored blueprint information!], and takes up only a few microns of space.
m --> In short, the cell is evidently a far more advanced technology than the petrochem plant.
n --> But, thoughts like that are very unwelcome to Lewontinian materialists . . .
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [“Billions and Billions of Demons,” NYRB, January 9, 1997]
o --> Johnson's rebuke of Nov 1997 is apt:
For scientific materialists the materialism comes first; the science comes thereafter. We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose."  . . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. When the public understands this clearly, Lewontin’s Darwinism will start to move out of the science curriculum and into the department of intellectual history, where it can gather dust on the shelf next to Lewontin’s Marxism. [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
kairosfocus
February 12, 2010, 03:52 AM PDT