Uncommon Descent Serving The Intelligent Design Community

Two forthcoming peer-reviewed pro-ID articles in the math/eng literature


The publications page at EvoInfo.org has just been updated. Two forthcoming peer-reviewed articles that Robert Marks and I wrote are now online (both should be published later this year).*

——————————————————-

“Conservation of Information in Search: Measuring the Cost of Success”
William A. Dembski and Robert J. Marks II

Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.

[ pdf draft ]
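The three measures named in the abstract can be sketched directly from their definitions: endogenous information is the -log2 probability of blind-search success, exogenous information is the -log2 probability of success for the assisted search, and active information is their difference. The probabilities in the example below are hypothetical placeholders, not values from the paper:

```python
import math

def endogenous_information(p_blind):
    """Difficulty of finding the target by blind (uniform random) search."""
    return -math.log2(p_blind)

def exogenous_information(q_assisted):
    """Difficulty remaining once problem-specific information is exploited."""
    return -math.log2(q_assisted)

def active_information(p_blind, q_assisted):
    """Contribution of problem-specific information: the difference
    between endogenous and exogenous information."""
    return endogenous_information(p_blind) - exogenous_information(q_assisted)

# Hypothetical example: blind search hits the target with probability 2^-20,
# while an assisted search succeeds with probability 2^-5.
print(active_information(2**-20, 2**-5))  # 15.0 bits
```

On this accounting, the better the assisted search performs relative to blind search, the more problem-specific information it must have been given.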

——————————————————-

“The Search for a Search: Measuring the Information Cost of Higher Level Search”
William A. Dembski and Robert J. Marks II

Abstract: Many searches are needle-in-the-haystack problems, looking for small targets in large spaces. In such cases, blind search stands no hope of success. Success, instead, requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds that of unassisted or blind searches. (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially compared to the difficulty of the original search.

[ pdf draft ]

—————

*For obvious reasons I’m not sharing the names of the publications until the articles are actually in print.

Comments
CJYman: "For my more detailed explanation of 'specification' and an actual measurement, check out 'Specifications (part 1), what exactly are they?' in the top left margin on my blog."

This obviously addresses my comment #247. But where is your blog?
Mark Frank
January 30, 2009, 01:42 PM PDT
CJYman #235: I am afraid I cannot see how you have avoided the circularity. In #1 you say "we observe intelligent agents creating specifications every day", but you have not defined specification. Meanwhile, Paul (#227) and gpuccio (#226) above appear to define specification as "patterns which are produced by intelligence". So we need a definition of specification that does not rely on intelligence; otherwise all you are saying is "we see intelligent agents creating things that are created by intelligent agents". You could use Dembski's most recent definition of specification, which is, roughly speaking, "a pattern that can be produced by a very small computer programme". But in that case crystals and snowflakes are specified, and it is no longer evidence for design.
Mark Frank
January 30, 2009, 01:39 PM PDT
JayM, for my more detailed explanation of "specification" and an actual measurement, check out "Specifications (part 1), what exactly are they?" in the top left margin on my blog.
CJYman
January 30, 2009, 01:29 PM PDT
JayM: "The fitness functions used by Tierra and the group evolving antenna designs at MIT have no knowledge of the solution nor of what the optimum performance will be (in the case of Tierra, the concept of optimal performance is not even applicable)."

They don't know what the exact solution will look like; however, they program the search-space constraints and the relevant math used to determine what the constraints of the design will be. IOW, they know the parameters which need to be provided by them (the intelligent designers) and the program which will work with those parameters in order to solve a specific problem. If they had no idea they were looking for an antenna optimized according to their provided criteria, would the EA produce an optimized antenna which is exactly what they are looking for?
CJYman
January 30, 2009, 01:27 PM PDT
Hello Rob, I was actually not responding to gpuccio re: specification not being an example of circular reasoning. I was responding to JayM's comment #232.

Rob: "In fact, flat-out copying is an example of design according to Dembski."

Yes, because it would take some type of intelligence to "cheat" in that way. I.e., I have a deck of cards "randomly" shuffled and so do you. We both reveal our decks to each other at the same time and discover that they are identically ordered. Is chance the best explanation, or did someone purposefully cheat? However, that is based on a pre-specification as opposed to a specification.
CJYman
January 30, 2009, 01:22 PM PDT
djmullen, just to keep you on the same playing field as most of the rest of the ID folks, understand some basics. There is nothing in the MET, or whatever the latest version of it is, in terms of processes that ID objects to. We do not object to the findings of modern evolutionary biology. ID accepts everything that could be classified as micro-evolution, and your examples were within micro-evolution.

Notice I said findings; I did not say conclusions. ID does not believe the findings of evolutionary biology since Charles Darwin, and especially within the last 10 years, support macro-evolution in the sense of the construction of novel complex capabilities. Certainly new functions and capabilities have developed, but while important for the species, medicine, or other areas, they are trivial for evolutionary biology, which must explain how the very complicated information network in a genome arose and changed over time to specify these novel complex functional capabilities.

As an aside, no one has been able to explain it to us here or in any written medium that we are aware of. We have had at least four evolutionary biologists here who have been anti-ID, and several biologists, and none of them have been able to do it either. So what are we to think? So we repeat this short message, and some will engage you in longer exchanges, but the result is always the same: no one makes a crack in the argument.

Few of us are scientists, but many of us are well read, and most of the people you engage take positions based not on religion but on the results of scientific studies. So you will have to up the level a lot to be productive here. And if your objective is to provide the logic to convince us we are deluded, you will be the first. The evolutionary biologists have always given us irrelevant stuff, hoping we won't notice. But when we do notice, what does that say about the strength of the anti-ID argument? As I said to you in another comment: human nature is the biggest support we have for our position.
jerry
January 30, 2009, 01:19 PM PDT
CJYman @240: "I think the most fascinating implication of the NFLT is that it seems to show that any simulation of evolution requires information about characteristics of the problem to be solved to be incorporated into the behavior of the algorithm. In fact, the authors pretty much state that in their conclusion. So far, no one has shown otherwise, and no one has shown that evolutionary algorithms will generate themselves absent previous foresight of a target to be reached (actual information of the future problem being used to program and match the search procedure and search space)."

I must disagree with you here. In fact, most GAs are not of the Weasel variety. The fitness functions used by Tierra and the group evolving antenna designs at MIT have no knowledge of the solution nor of what the optimum performance will be (in the case of Tierra, the concept of optimal performance is not even applicable). MET mechanisms demonstrably channel information from the environment (simulated or natural) to the populations that are evolving. The question isn't whether this happens, but how far it can go.

JJ
JayM
January 30, 2009, 01:16 PM PDT
CJYman @240: "As to a specification being measurable, it seems we are just going to have to disagree over that."

I disagree. ;-) We don't have to agree to disagree, actually. If specification can be quantitatively measured, that can be demonstrated to anyone interested. If it can't be demonstrated, it's not reasonable to assert that it is quantifiable. To your credit, you seize the bull by the horns in your next sentences:

"I do agree that not every single act of intelligence will be measurable as a specification, but that is no problem, since many aspects are measurable. A specification merely measures the pattern against a uniform probability distribution and then multiplies this by how many patterns of same length are specified (able to be formulated as an independent event) and then multiplies this by number of trials available. Take the -log2 and you have your value."

Why a uniform probability distribution? That isn't applicable to biological systems. What exactly does "able to be formulated as an independent event" mean? How could this be done for even a simple biological system? You've hit on exactly my frustration when reading the ID literature on CSI. I don't see how to compute it, even in principle. That doesn't mean it's not computable, but I've dived (doven?) deeper into the available material than most people would be willing to, and I haven't found a clear explanation of how to apply it. Heck, I'm not even sure what the units of CSI are, from the definitions I've seen. Let's calculate some CSI!

JJ
JayM
January 30, 2009, 01:11 PM PDT
JayM: "Thanks for the detailed breakdown and clarification of the argument. Too frequently I have seen it presented as 'Humans generate CSI, biological systems contain CSI, therefore biological systems are the result of intelligence.' That form is, if not circular, certainly uncompelling without further proof that CSI cannot be created by other means."

Well, of course, some people probably state it that way as an extremely simplified version, since it does take a bit of "paper" to explain the details. But, yes, I can see to an extent how the form you mention is quite lacking.

JayM: "My first comment is that there are many ways of identifying intelligence. I'm not certain that the creation of specifications is a unique identifier, or even measurable."

Yes, there may be many ways to identify intelligence, but you gotta start somewhere, and if there is a measurable way, then that would be the best place to start. Specifications will be a unique identifier until the ID Hypothesis is falsified and someone provides evidence of chance and law absent intelligence generating specifications. As to a specification being measurable, it seems we are just going to have to disagree over that. I do agree that not every single act of intelligence will be measurable as a specification, but that is no problem, since many aspects are measurable. A specification merely measures the pattern against a uniform probability distribution and then multiplies this by how many patterns of the same length are specified (able to be formulated as an independent event) and then multiplies this by the number of trials available. Take the -log2 and you have your value. Properly understood, none of those are arbitrary measurements. They can be updated upon the presentation of new data and measured to a greater precision, but that is the same with any measurement. To date, no one has shown an example of a specification being generated absent consideration for future results.

JayM: "I'm also not convinced that using specification for design detection is meaningful in biological systems. As has been pointed out by Mark Frank, MET mechanisms aren't searching for anything in particular, they just, by their nature, explore nearby genome spaces."

Yah, so ... I recommend you read through my comments #121 and #122 above to see how this idea of measuring a search is perfectly applicable to biology.

JayM: "Considering the current result of that exploration to be a specification is too close to drawing bullseyes around bullet holes after shooting for my taste. When there are many possible outcomes, using the outcome we observe as the specification and noting how well reality matches it is unconvincing."

All you need is a specified pattern (many biological patterns are specified by functional events), an understanding of the probabilistic resources available (max. number of bit operations available to life on earth based on mutation rates), an approximation of specified patterns of the same length (research in progress), and the measurement of "Shannon information" (provided by Hubert Yockey and others). There is no assumption of anything to measure for a specification. A few assumptions do come into play to tie the measurement of a specification into an hypothesis, but those assumptions are not problematic at all, as I've already explained in my last comment. Furthermore, the hypothesis is fully falsifiable, so there is no major problem as far as I can see. Sure, there is more research to be done, but that has never posed a problem for science.

JayM: "I just re-read some of the NFL literature and, as with Dr. Dembski's two papers that started this thread, they deal with evenly distributed solution spaces (objective functions). It's hard to see how this corresponds to what we know about biological systems."

The NFLT explicitly deals with the matching of the correct search space to the correct search procedure in order to produce better-than-chance performance, where better-than-chance performance is measured against a uniform probability distribution. I have already explained the connection of "search" to biology, but I think the most fascinating implication of the NFLT is that it seems to show that any simulation of evolution requires information about characteristics of the problem to be solved to be incorporated into the behavior of the algorithm. In fact, the authors pretty much state that in their conclusion. So far, no one has shown otherwise, and no one has shown that evolutionary algorithms will generate themselves absent previous foresight of a target to be reached (actual information of the future problem being used to program and match the search procedure and search space). My question is: "if all simulations of evolution simulate the basic process of reproduction, mutation, and culling, and they require foresight of the problem to be solved (problem-specific information), then why would the natural process get by without foresight if it too consists primarily of reproduction, mutation, and culling?" Of course, the power of non-foresighted mechanisms is testable by creating a program which generates patterns or bit strings and a filter based on background noise and an arbitrary collection of laws (a set of laws chosen with no regard for future results). As long as that produces no active information (problem-specific information), evolutionary algorithm, or specification, then the ID hypothesis stands.

JayM: "In fact, David Wolpert, one of the authors of the NFL theorems paper, rejects the idea that NFL is applicable to MET mechanisms."

Then you are going to have to tell me what MET mechanisms are, since near the beginning of the paper he states, and I quote: "Given our decision to only measure distinct function evaluations even if an algorithm revisits previously searched points, our definition of an algorithm includes all common black-box optimization techniques like simulated annealing and evolutionary algorithms." ... and ... "In particular, if for some favorite algorithms a certain well behaved f results in better performance than does the random f then that well behaved f gives worse than random behavior on the set all remaining algorithms. In this sense just as there are no universally efficacious search algorithms, there are no universally benign f (optimizations problems) which can be assured of resulting in better than random performance regardless of ones algorithm." ... and ... "First, if the practitioner has knowledge of problem characteristics but does not incorporate them into the optimization algorithm, then P(f) is effectively uniform. (Recall that P(f) can be viewed as a statement concerning the practitioner's choice of optimization algorithms.) In such a case, the NFL theorems establish that there are no formal assurances that the algorithm chosen will be at all effective. Second, while most classes of problems will certainly have some structure which, if known, might be exploitable, the simple existence of that structure does not justify choice of a particular algorithm; that structure must be known and reflected directly in the choice of algorithm to serve as such a justification. In other words, the simple existence of structure per se, absent a specification of that structure, cannot provide a basis for preferring one algorithm over another ... The simple fact that the P(f) at hand is non-uniform cannot serve to determine one's choice of optimization algorithm." Furthermore, it seems that once Dembski began to formalize the concept of active information, Wolpert fell silent.

BTW: I have written a bit about this subject on my blog. Just click on my handle and you can peruse the topics in the left sidebar at the top of my blog.
CJYman
January 30, 2009, 12:59 PM PDT
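The recipe CJYman describes above (the pattern's probability under a uniform distribution, times the number of specified patterns of the same length, times the number of trials available, then -log2) can be sketched numerically. Every number in this example is a hypothetical placeholder, not a measured biological quantity:

```python
import math

def specification_value(pattern_length, alphabet_size, n_specified, n_trials):
    """-log2 of (uniform probability of the pattern
    * number of specified patterns of the same length
    * number of trials available), per the recipe quoted above."""
    p_uniform = alphabet_size ** -pattern_length  # uniform-distribution probability
    return -math.log2(p_uniform * n_specified * n_trials)

# Hypothetical inputs: a 100-residue pattern over a 20-letter alphabet,
# 10^10 specified patterns of that length, and 10^40 trials available.
print(specification_value(100, 20, 10**10, 10**40))  # roughly 266 bits
```

Note that the result is positive only when the probabilistic resources (specified patterns times trials) fall short of the improbability of the pattern itself, which is exactly the point JayM's questions about the uniform-distribution assumption press on.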
CJYman: "The hypothesis that specification signals previous intelligence is not circular at all if stated correctly and not in a straw-man variant."

I would call gpuccio's variant a misunderstanding rather than a strawman. He certainly wasn't intending that it be knocked down. gpuccio said: "Specification comes from the conscious representation of an intelligent, conscious being. That's why computers can create new complexity (see my example of pi calculation), but not new specifications. But then, why do we need complexity (that is, improbability) in the concept of CSI? It's simple. It's because if something is simple, it could 'appear' specified, even if nobody has ever thought of outputting that specification. In other words, specification could be a pseudo-specification."

So it appears that the defining characteristic of specification, as opposed to pseudo-specification, is that it originates from somebody's mind. Under this definition, to infer design from specification is certainly tautological. But obviously gpuccio's usage of the term does not match Dembski's.

BTW, gpuccio's pi example is interesting. According to his logic, we should not infer design from an extraterrestrial signal that contains the digits of pi, since there is "no new specification" in it. The "new specification" requirement makes no sense in Dembski's framework, which defines specification as pre-existing or easy-to-construct patterns. In fact, flat-out copying is an example of design according to Dembski.
R0b
January 30, 2009, 12:31 PM PDT
Paul, thank you for providing answers which, frankly, I felt no more motivated to give, given the general tone of the discussion. I agree with everything. I must say that one of the things which I most appreciate in you is that, while you are a YEC (which I am not), your religious beliefs never, and I say never, influence your serene approach to ID questions. That is really no small virtue.

About Apolipoprotein A-I Milano, I would like to add that, while its protective effect in vivo is usually assumed, but not really proved, given the very small number of carriers, its usefulness as a drug, even if still in the experimental phase, is much more likely, and I hope that it will be confirmed. What is much less certain instead, as can be seen from the abstract I pasted in my previous post, and from one or two other papers I found, is that the positive effects of Apolipoprotein A-I Milano as a drug are necessarily greater than those of wild-type Apolipoprotein A-I, since the "natural" form too has proven effective as a therapeutic tool. I paste here the conclusions from a recent review of the subject (Shah, Indian Heart J 2006; 58: 209–215): "Infusion of HDL has been shown to reduce atherosclerosis in animal models and preliminary proof from small studies in humans supports this novel therapeutic paradigm. However, a number of questions remain unanswered, such as the consistency and durability of the athero-protective effect of HDL infusion, the optimal dose and the frequency at which such infusions would be needed. Some other concerns are the relative efficacy of different forms of HDL (wild type Apo AI vs Apo A1 Milano containing HDL; plasma-derived versus recombinant HDL), long-term safety and, most importantly, whether such therapy can reduce adverse cardiovascular events in a cost-effective manner." But even if that were the case, your observation remains perfectly valid: it remains to be seen if the positive effects are not balanced by collateral effects.

And anyway, it is obvious that Apolipoprotein A-I Milano is only one of many variants of the wild-type protein, and as in many other genetic variants of proteins, some physiological functions can be slightly "tweaked" in the variant, with effects which can be negative or neutral, and sometimes can imply some protection against some pathological state (the example you gave of the resistance of the sickle cell carrier to malaria is perfect). Suggesting that the natural polymorphism of most proteins in our (and other) species is a real example of Darwinian evolution is beyond my understanding. Obviously, I am not making these points for you, who know all that very well. And finally, we will see if this new "trait" really expands in the world population by the powerful effect of natural selection, and becomes a basic step towards the new human race. After all, we already have about 40 people here in Italy.
gpuccio
January 30, 2009, 12:01 PM PDT
CJYman @235: "The hypothesis that specification signals previous intelligence is not circular at all if stated correctly and not in a straw-man variant."

Thanks for the detailed breakdown and clarification of the argument. Too frequently I have seen it presented as "Humans generate CSI, biological systems contain CSI, therefore biological systems are the result of intelligence." That form is, if not circular, certainly uncompelling without further proof that CSI cannot be created by other means.

CJYman: "1. Observation: we observe intelligent agents creating specifications every day. In fact, it is these specifications that basically define something as intelligent. And this is by no means circular reasoning. Tell me, how would you know that something is intelligent if it did not produce specifications, yet was mumbling incoherently and randomly bumping into walls? This is merely an observation of what basically defines what we determine to be intelligent (system able to utilize foresight — modeling the future and generating targets)."

My first comment is that there are many ways of identifying intelligence. I'm not certain that the creation of specifications is a unique identifier, or even measurable. One of the reasons I consider Dr. Behe's "edge of evolution" to be more fertile ground for ID research than the mathematical approach of Dr. Dembski is that, even with a background in mathematics and after reading the relevant literature, I can't quantitatively determine the "specification" in any artifact. Specification requires a standard against which an artifact is measured. ID theory hasn't yet reached the point where the selection of the specification criteria is non-arbitrary.

I'm also not convinced that using specification for design detection is meaningful in biological systems. As has been pointed out by Mark Frank, MET mechanisms aren't searching for anything in particular; they just, by their nature, explore nearby genome spaces. Considering the current result of that exploration to be a specification is too close to drawing bullseyes around bullet holes after shooting for my taste. When there are many possible outcomes, using the outcome we observe as the specification and noting how well reality matches it is unconvincing. If I'm representing the use of specification unfairly, please excuse me. I genuinely don't see how to measure it quantitatively nor how to apply it to biological systems.

CJYman: "2. The math behind measuring for a specification combined with NFLT (and furthermore, active info) seems to put a no-go on what chance and law will produce absent previous intelligence."

This is another common claim that I'm not sure is justified. I just re-read some of the NFL literature and, as with Dr. Dembski's two papers that started this thread, they deal with evenly distributed solution spaces (objective functions). It's hard to see how this corresponds to what we know about biological systems. In fact, David Wolpert, one of the authors of the NFL theorems paper, rejects the idea that NFL is applicable to MET mechanisms.

My view, therefore, is that specification needs to be more rigorously defined and applied, and that papers like Dr. Dembski's two newest need to be explicitly linked to biological systems before specification can be used for design detection. I suspect, although I'll be happy to be proven wrong, that investigation into the limits of MET mechanisms and the distribution of viable regions in genome space is more likely to generate strong evidence in favor of design in the near future.

JJ
JayM
January 30, 2009, 11:03 AM PDT
djmullen (#228), your comment is, as JayM stated in #231, "somewhat brutally stated". Unfortunately, it also misunderstands the subject badly and overstates the evidence enough that it is not likely to be of much persuasive value. If you spend most of your life in an echo chamber, you may not realize this. So perhaps I can give you a few pointers.

First, every time you accuse ID advocates of being stealth YECs, you turn everybody off. There are some YECs here; I am one of them. But I am not exactly stealth, and those who disagree with me on this (and there are many, including the person who started this blog) really resent being told that they don't know their own minds. This is strawman arguing of the worst sort: building the strawman while everyone tells you it is a strawman.

There is an anachronism in your presentation which betrays your inability to hear the ancients on their own terms. You say, "We observe some facts, such as the great numbers of species. Then a Hebrew writer borrows a Babylonian theory, modifies it a little and ... that becomes for the moment the 'best explanation'." The "great number of species" was arguably not known before Linnaeus. In fact, Linnaeus believed later in life that he had defined "species" too narrowly, and many creationists (I am now speaking of YECs, or at least YLECs, not ID advocates) agree with him. Creationists may be wrong, but this is not the way to go about proving it.

Secondly, your paragraph,

"There turns out not to have been a Garden of Eden, fossils are found of strange plants and animals that do not exist today, today's species fall naturally into a hierarchical organization, there are severe moral and intellectual problems with the existence of the Intelligent Designer named God, a lot of the details of the designs are mind-bogglingly stupid, etc.",

which is meant to be hard-hitting, misses the mark in a number of ways. First, you simply overstate your case when you say that "there turns out not to have been a Garden of Eden". It is true that we haven't found one, but to assume that the Garden of Eden, or even part of it, could survive the Flood seems to be stretching it. It would either have to have been removed by a miracle, in which case there is no particular reason for it being on the earth, or it would have gotten buried in the Flood. Now, if you had argued that "no physical evidence for the Garden of Eden has been found," you would not have been overstating the evidence, and you might have left some people with questions to mull over. But when you make statements that you clearly cannot prove, you tempt people to dismiss you as a crank.

As far as strange fossils go, who says that all life forms, especially all species, that have ever existed must exist now? That is a rather gross mischaracterization of creationist theory. Today's species falling naturally into a hierarchical organization has been recently disputed here, and while it may be convincing to your buddies, its strength as an argument is much less here. And stupid design is only stupid if someone can do better, and if you don't know how to do better, you might want to be a little more careful about accusing the original designer of incompetence.

Perhaps your best argument is that there are moral and intellectual problems with God. But be careful even here. For if your best arguments are here, and the scientific evidence for design is strong, you risk making this a science-versus-religion argument with you on the side using religious arguments. You talk about Darwinian evolution as being "both naturalistic and simple." The problem some of us see is that it is too simple, and naturalism is only an advantage if one starts with the presupposition that naturalism is good and even modified supernaturalism is bad. That sounds suspiciously like a religious commitment.

In #229 you say, referring to Apolipoprotein A-I Milano,

"No consciousness or intelligence whatsoever is required to make this complex, specified protein. A single base pair mutates and the specification is followed automatically and mechanically.",

but this misses the point. If one is given the wild-type protein, one can get to the Milano protein with one mutation. But very few people here would insist that single mutations cannot arise and be helpful in certain environments. Sickle cell trait is a case in point. Now if you can create the wild-type protein with only a series of beneficial single mutations from some other protein, we will be quite impressed. And you have not established even that Milano is in fact superior. For all we know, too low cholesterol may predispose to cancer or spontaneous abortion or otherwise be injurious to health. The wild-type protein may very well be a better protein to have if one is not sedentary and eating the standard western diet. You need to be more tentative about your conclusions.

Your comments about teosinte (#230) are addressing a problem that, AFAICT, does not exist. The vast majority of ID advocates, myself included, have no problem with single mutations, or a progressively improving series of single mutations, being a pathway from teosinte to modern corn. I know that PT is bad at stereotyping ID, but I didn't think it was that bad. Besides, from a creationist perspective, maybe corn was the original and teosinte is the degenerate form, in which case we are just recovering the original information. Use some imagination. Or are you wishing to use an argument from personal incredulity without even trying to understand the other side?

Periodically someone will burst upon the scene here spouting platitudes from Panda's Thumb as if they are Gospel, seeming to think, like some street preacher, that we just haven't been given the straight truth and that if you just hit us between the eyes, our belief systems will all crumble and we will be converted. I hate to tell you this, but it will be a whole lot more work than that. On the other hand, maybe that's the effect you want. You can claim that we "just won't listen to reason" and go back to feeling smugly that you have witnessed to the incorrigible, who can now be safely damned, or at least fired in good conscience. But if you want to actually engage, you have a lot of work to do.
Paul Giem
January 30, 2009 at 09:53 AM PDT
JayM (#232),

The hypothesis that specification signals previous intelligence is not circular at all if stated correctly and not in a straw-man variant.

1. Observation: we observe intelligent agents creating specifications every day. In fact, it is these specifications that basically define something as intelligent. And this is by no means circular reasoning. Tell me, how would you know that something is intelligent if it did not produce specifications, yet was mumbling incoherently and randomly bumping into walls? This is merely an observation of what basically defines what we determine to be intelligent (a system able to utilize foresight -- modeling the future and generating targets).

2. The math behind measuring for a specification, combined with the NFLT (and furthermore, active information), seems to put a no-go on what chance and law will produce absent previous intelligence. It seems that specifications most probably (as a best explanation) will not be produced by chance and law absent intelligence. And again, there is no circular reasoning here. This merely extends the math into a type of no-go theorem, which can be stated informally as "unless there is an infinite regress of active information, chance and law absent intelligence will not produce specifications."

3. Combine #1 (observation) with #2 (no-go theorem) and you have created a hypothesis that a specification will reliably signal previous intelligence.

4. We flow from a definition of intelligence (foresight) and an observation of what foresight produces, to a no-go theorem, to a hypothesis.

The only assumptions that need to be made are that there is most likely not an infinite regress of active information (an infinite bias toward our universe, life, and the evolution of intelligence) and that other humans do indeed possess foresight, so that we can conclude that the specifications they produce are indeed associated with their foresight, since I know as a fact that my foresight is essential in my producing specifications such as this conversation.

As an aside, the first assumption can even be tested somewhat, to see what chance (background noise) and law absent intelligence (a set of laws arbitrarily chosen without consideration for future results) will produce. If this random configuration does not produce active information, then I see no reason why chaotic and possibly random fluctuations on the quantum scale would have a bias (active information caused by the correct search procedure being matched with the correct non-uniform probability distribution) toward our universe, life, and the evolution of intelligence.

If you still believe that there is circular reasoning involved, as opposed to arguing from observation and math to hypothesis, then please point out the circular argument.

CJYman
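The three information measures invoked here are defined in the abstracts above (endogenous, exogenous, and active information). A minimal numerical sketch, with a hypothetical search-space size and assisted success probability chosen purely for illustration:

```python
import math

def info_bits(p):
    """Information (in bits) associated with a success probability p."""
    return -math.log2(p)

# Hypothetical toy search: a space of 2**20 items containing one target.
p_blind = 1 / 2**20                  # probability that blind random search succeeds
I_endogenous = info_bits(p_blind)    # difficulty of the unassisted search

# Suppose problem-specific knowledge raises the success probability to 1/2**5.
p_assisted = 1 / 2**5
I_exogenous = info_bits(p_assisted)  # difficulty that remains

# Active information: the contribution of the problem-specific knowledge.
I_active = I_endogenous - I_exogenous

print(I_endogenous, I_exogenous, I_active)   # 20.0 5.0 15.0
```

The no-regress question discussed above is then whether the 15 bits of active information can themselves come for free, or must be paid for by a higher-level search.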
January 30, 2009 at 07:26 AM PDT
"But the people who believe in the “God in six days” theory won’t let go. 150 years later, they’re still clinging to their old, superseded theory. They have no genuine new evidence to support their theory and mostly attack evolution. Indeed, they often don’t seem to even understand the new theory they are criticizing, but they insist on being taken seriously. It’s rather frustrating for all involved."

What utter nonsense. This and some of your succeeding posts indicate your ignorance of the debate. You make a lot of unsupported assertions, act as if they are true, and then reveal your naiveté by bringing up irrelevant examples. Have at it, my friend, because you are making us look good.

The specification argument is not circular. The only two places specification is found are in human intelligence and in life; it is found nowhere else. No natural process has ever been seen creating specification, while it is seen in human activity and possibly some animal activity. The whole design argument is based on probability of cause, not logical certainty. If one sort of activity can cause something, and another sort of activity has never been seen creating that result, then that strengthens one over the other as the likely answer.

Talk about utter faith. Anyone who believes that natural processes create specification is basing it on nothing, and then we get the lecture on how this faith-based epistemology now rules the day. As I said, what utter nonsense. It says more about the people who believe in it than it does about truth.

jerry
January 30, 2009 at 07:20 AM PDT
Cardiovasc Diabetol. 2007; 6: 15. "Gene transfer of wild-type apoA-I and apoA-I Milano reduce atherosclerosis to a similar extent." Corinna Lebherz, Julio Sanmiguel, James M Wilson, and Daniel J Rader.

Background: The atheroprotective effects of systemic delivery of either apolipoprotein A-I (wtApoA-I) or the naturally occurring mutant ApoA-I Milano (ApoA-IM) have been established in animal and human trials, but direct comparison studies evaluating the phenotype of ApoA-I or ApoAI-Milano knock-in mice or bone marrow transplanted animals with selectively ApoA-I or ApoAI-Milano transduced macrophages give conflicting results regarding the superior performance of either one. We therefore sought to compare the two forms of apoA-I using liver-directed somatic gene transfer in hypercholesterinemic mice – a model which is most adequately mimicking the clinical setting.

Methods and results: Vectors based on AAV serotype 8 (AAV2.8) encoding wtApoA-I, ApoA-IM or green fluorescent protein (GFP) as control were constructed. LDL receptor deficient mice were fed a Western Diet. After 8 weeks the AAV vectors were injected, and 6 weeks later atherosclerotic lesion size was determined by aortic en face analysis. Expression of wtApoA-I reduced progression of atherosclerosis by 32% compared with control (p = 0.02) and of ApoA-IM by 24% (p = 0.04). There was no significant difference between the two forms of ApoA-I in inhibiting atherosclerosis progression.

Conclusion: Liver-directed AAV2.8-mediated gene transfer of wtApoA-I and ApoA-IM each significantly reduced atherosclerosis progression to a similar extent.

gpuccio
January 30, 2009 at 06:02 AM PDT
djmullen @229
gpuccio [226]
Specification comes from the conscious representation of an intelligent, conscious being.
That puts your result, ‘There is an intelligent and therefore conscious designer,’ into your premises, creating a circular argument.
Well put. The whole notion of "specification" is one of the reasons I responded to Dr. Fuller's post in another thread. Design detection, by its very nature, must make certain assumptions about the nature of the designer and must generate additional knowledge of the designer. When ID proponents ignore this inherent connection and refuse to discuss the nature of the designer, it opens the ID movement up to charges of being disingenuous. It also results in the type of circular reasoning you identified. Specification is asserted to only originate from intelligence, but that is the very thing we're supposed to be proving.
It’s also an argument that is easily shown to be wrong. Look again at the example of Apolipoprotein A-I Milano I gave in message 205. This protein is wonderfully specified: it cleanses plaque from the arteries and prevents heart attacks. Yet it’s caused by a simple point mutation that changes an arginine to a cysteine at position 173. No consciousness or intelligence whatsoever is required to make this complex, specified protein. A single base pair mutates and the specification is followed automatically and mechanically.
Thank you for the example. This is why I think the fertile ground for ID research lies in determining where the edges of evolution actually lie. Certainly some point mutations can have remarkable consequences. The question is, how far can the MET mechanisms get? A related question is, how large is the connected region of viability in genome space? These questions have objective, reproducible, and therefore scientific answers, which have the potential to provide direct support for ID theory. That is something that, sadly, I still don't see Dr. Dembski's two papers, however interesting, providing.

JJ

JayM
January 30, 2009 at 05:17 AM PDT
djmullen @228
But the people who believe in the “God in six days” theory won’t let go. 150 years later, they’re still clinging to their old, superseded theory. They have no genuine new evidence to support their theory and mostly attack evolution. Indeed, they often don’t seem to even understand the new theory they are criticizing, but they insist on being taken seriously.
Your overall message, while somewhat brutally stated, isn't a bad summary of why modern evolutionary theory is accepted by the overwhelming majority of scientists. Unfortunately, I am forced to agree that many ID proponents have a very limited understanding of MET specifically and science in general. The amount of painstaking, detailed, and aggressively reviewed work done by scientists every day is truly phenomenal. The fact that some ID proponents dismiss it out of hand reflects badly on the ID movement. On the other hand, your insinuation in this one paragraph is as overbroad as those I've complained about by people on "my side." Not all ID proponents are young earth creationists. Most that I have interacted with personally are not (as far as I can tell). I'm rather surprised that YECs are willing to enter the "big tent" of ID at all, given that their worldview is fully faith-based and does not require science for validation.
It’s rather frustrating for all involved.
It's just as frustrating for those of us on the ID side to see ID proponents discrediting the movement because of a lack of understanding of MET. We need to be even more rigorous than our detractors if we are to be taken seriously.

JJ

JayM
January 30, 2009 at 05:06 AM PDT
Several people have claimed that mutations cannot create new traits or functions in organisms. Take a look at this site: http://www.koshland-science-museum.org/exhibitdna/crops02.jsp

It shows three of the mutations that converted teosinte into modern corn:

"For example, a gene on chromosome #1 causes the ears of corn to be big and to grow on a few short branches. In contrast, the ears of teosinte are scattered over many small branches. A gene on the second chromosome causes more rows of kernels to grow, yielding more food per corn plant. A gene on the fourth chromosome causes corn kernels to have small, soft casings. Teosinte kernels have much larger, harder kernel casings that make them hard to eat."

The article includes pictures of the small, hard kernels of teosinte and modern corn. Scroll down from here: http://www.matrifocus.com/LAM06/corn.htm and see a picture of a teosinte bush and compare it to a modern corn plant.

Random changes can very definitely create new traits and functions.

djmullen
January 30, 2009 at 01:22 AM PDT
gpuccio [226]: "Specification comes from the conscious representation of an intelligent, conscious being."

That puts your result, 'There is an intelligent and therefore conscious designer,' into your premises, creating a circular argument. It's also an argument that is easily shown to be wrong. Look again at the example of Apolipoprotein A-I Milano I gave in message 205. This protein is wonderfully specified: it cleanses plaque from the arteries and prevents heart attacks. Yet it's caused by a simple point mutation that changes an arginine to a cysteine at position 173. No consciousness or intelligence whatsoever is required to make this complex, specified protein. A single base pair mutates and the specification is followed automatically and mechanically.

djmullen
January 30, 2009 at 01:11 AM PDT
gpuccio [222]: "For once, I will make an analogy: we observe some facts, like objects falling, etc. Then one Newton comes, and he creates a mathematical theory which can well explain that. For a moment, let's forget that it is a theory based on necessity, and let's concentrate only on the epistemological aspect: it is an explanatory theory of some kind. Well, in absence of any other explanatory theory, or even of any other 'credible' explanatory theory, that becomes for the moment the 'best explanation'. But if another good explanatory theory is suggested, we have to choose between the two."

An excellent point. We observe some facts, such as the great numbers of species. Then a Hebrew writer borrows a Babylonian theory, modifies it a little, and in the absence of any other explanatory theory, or even of any other "credible" explanatory theory, it becomes for the moment the "best explanation": an Intelligent Designer named God created every species in six days about 4000 BC. This theory explains all of the evidence, and it reigns for over 2000 years.

Then new evidence is found that doesn't fit the old theory. There turns out not to have been a Garden of Eden, fossils are found of strange plants and animals that do not exist today, today's species fall naturally into a hierarchical organization, there are severe moral and intellectual problems with the existence of the Intelligent Designer named God, a lot of the details of the designs are mind-bogglingly stupid, etc.

Then, about 150 years ago, a new theory is proposed by Darwin. His theory accounts for the hierarchical organization of species, both modern and ancient; doesn't need a Garden of Eden or an Intelligent Designer (who is incredibly more unlikely than all of the proteins in all of the species combined); explains naturally the existence of diseases and hyper-nasty plants and animals, etc.; and does it all with a theory that is both naturalistic and simple.

This is a new and credible theory, and we have to choose between it and the old one. The new theory is gradually adopted by just about every scientist in the world, easily encompasses new discoveries such as how genes work, and 150 years later reigns triumphant. But the people who believe in the "God in six days" theory won't let go. 150 years later, they're still clinging to their old, superseded theory. They have no genuine new evidence to support their theory and mostly attack evolution. Indeed, they often don't seem to even understand the new theory they are criticizing, but they insist on being taken seriously. It's rather frustrating for all involved.

djmullen
January 30, 2009 at 12:56 AM PDT
Mark Frank (#224),

gpuccio has answered you, but let me give it another try. Your claim that "this information is defined in terms of the improbability of the outcome given chance" is probably true, if you want an operational definition. But it actually misses the point of the information itself. Let me give you an example of what I mean.

At a certain point in American history, the need was felt to communicate the actions of the British army in Boston. An agreed-upon signal was that if one lantern was hung in the bell tower of a church (apparently the Old North Church), the British were coming by land, and if two lanterns were hung, they were coming by sea. The number of lanterns thus gave a binary message (or perhaps ternary: if no lanterns were hung, they were not coming). In any event, it was necessary for the sender of the message to be in control of how many lanterns were hung; otherwise the message would not have been reliable. It did not matter, for purposes of communication, whether no lanterns were customarily hung, whether one lantern was customarily hung, whether two lanterns were customarily hung, whether there was some kind of regular rotation (two lanterns on Sundays, for example), or whether the number of lanterns was random or semi-random. All that was necessary was that the sender be able to control the number of (lighted) lanterns hung.

Now, to conceal the design from the British, it would help if a random or semi-random number of lanterns were usually hanging, as the hanging of one or two lanterns would then have aroused less suspicion. But even if normally no lanterns were hanging, without some kind of reason to connect the number of lanterns with British troop movements, it would have been hard to make that connection. This is a nearly perfect example of undetectable intelligent design. But notice that what makes it intelligent design is not its improbability. It is its controlled conformity to a specific code which specifies a message. Without that code, you haven't got a clue as to what the message is, or even whether there is a message. And without control, you have no way of being confident that the message received is the message that the sender wanted to give. This example illustrates that the essence of intelligent design is not complexity (how much simpler can you get than control over one bit of information?). It is specification.

Detectable design is a whole different matter. If one or two lights were semi-randomly hung in the tower, and the person hanging the lights this time was the usual person to hang them, the finding of one or two lights in the tower, with the attendant message getting out, would have given the British no reason to suspect that their intentions were being broadcast. On the other hand, if a messenger were caught with a message stating, "The British Army is coming by sea" (or by land), the army officers would probably have made the appropriate design inference, and the bearer of the message would have found himself on the end of a noose, with the message not reaching its intended recipients.

Our problem is not, by and large, that of the colonists trying to send a message. It is rather that of the British army trying to detect a message. Toward that end, we have to be careful of being too suspicious, as we can then easily become paranoid. But if we find a message written in English, saying "The British are coming by sea", we do well to make the appropriate design inference. Even if we find one we cannot understand, we do well to see if it is in English code, or in French or German. If it is in Iroquois, we might still miss the significance of the message.

Getting back to the issue of design in biology, the problem is entirely reversed. The appearance of design is obvious even to those who would wish to deny design. Dawkins comes to mind, as do George Gaylord Simpson and Francis Crick. So that criterion of design is already fulfilled. The only remaining question is: can some other process explain the facts equally well, or even remotely as well? If the answer is no, then the design hypothesis is by far the best-supported one.

Saying that design is simply the negation of law-guided and chance-produced outcomes is only true of detectable design, not of design itself. And it does not capture the essence of design. Design itself has to do with intending one event to happen and not another, coordinated with some other aspect of reality (broadly defined to include mathematics and aesthetics). That's why the "functionally" is important in functionally specified complex information. That's why, if you were able to demonstrate a pathway from other proteins to the flagellum where each step was in some real sense, in some real environment, an improvement on the last one, you might still fail to convince everyone that there really was no design. Many would abandon ID (if you could do that for the OOL, I would), but many of those would go the theistic evolution route, contending that the universe was set up so that life would evolve, and that life was planned after all.

But right now mechanistic evolutionists do not have even that. All they have is promises that such a pathway will someday be found. That's a lot of faith, brother. Forgive me if I do not have such faith.

Paul Giem
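The lantern example above can be rendered as a one-bit protocol, and the point survives the translation: the message lies entirely in the sender's control and the shared code, not in the improbability of the lantern count. (The encoding below is a hypothetical sketch, not a claim about the historical signal.)

```python
# Shared code agreed on in advance. Without it, a lantern count is just a count.
CODE = {1: "by land", 2: "by sea"}
ROUTES = {route: count for count, route in CODE.items()}

def send(route):
    """The sender must control the number of lanterns hung."""
    return ROUTES[route]

def receive(lantern_count):
    """The receiver recovers the message only via the shared code."""
    return CODE.get(lantern_count, "no movement observed")

print(receive(send("by sea")))   # by sea
```

Note that nothing here depends on how probable one or two lanterns would otherwise be; swap in a tower where two lanterns hang every night and the code still works, so long as the sender controls the count.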
January 29, 2009 at 03:05 PM PDT
Mark: I hope Jerry can change his mind!

I think there is still a basic misunderstanding about design, which I tried to suggest in one of my posts in this thread (I can't remember which). In the concept of CSI, the true mark of design is specification, not complexity. I will be more clear: designed things are always specified, but not always complex. Specification comes from the conscious representation of an intelligent, conscious being. That's why computers can create new complexity (see my example of pi calculation), but not new specifications.

But then, why do we need complexity (that is, improbability) in the concept of CSI? It's simple: because if something is simple, it could "appear" specified, even if nobody has ever thought of outputting that specification. In other words, the specification could be a pseudo-specification; seeing forms in clouds is just an example. So complexity (improbability) is added to the filter to avoid those false positives.

Complexity acts as a negative filter. But specification is a very positive connotation: it is the sign of consciousness, of intelligence, of meaning. The fact that pseudo-specifications may happen, when complexity does not act as a safeguard against them, in no way changes the fact that true specification (a true meaning inputted by a conscious agent, and recognizable, in the right context, by other conscious agents) is the positive sign of design.

gpuccio
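The two-part filter gpuccio describes (specification as the positive mark, complexity as a guard against pseudo-specifications) can be sketched as a decision procedure. This is only an illustration: the 500-bit default echoes Dembski's universal probability bound, and treating "specified" as a ready-made boolean input is of course a large simplification of the hard part of the problem.

```python
def design_filter(specified, complexity_bits, threshold_bits=500):
    """Sketch of the filter described above.

    Specification is the positive sign of design; complexity
    (improbability, in bits) acts only as a negative filter, screening
    out pseudo-specifications that chance could plausibly produce.
    """
    if not specified:
        return "no design inferred"
    if complexity_bits < threshold_bits:
        return "possible pseudo-specification; chance not excluded"
    return "design inferred"

print(design_filter(True, 10))    # a face in the clouds: specified-looking but simple
print(design_filter(True, 600))   # specified and highly complex
```

On this reading the complexity test never makes anything designed; it only withholds the inference when the pattern is simple enough that chance remains a live explanation.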
January 29, 2009 at 01:32 PM PDT
Re #221: Gpuccio, I fear Jerry does not share your opinion of me!

Mark

Mark Frank
January 29, 2009 at 11:19 AM PDT
Gpuccio: Actually my point is more empirical than you may realise. I am actually talking about observable evidence. The argument for design seems to hinge entirely on a particular definition of information, but this information is defined in terms of the improbability of the outcome given chance. So if that improbability does not hold, then there seems to be no other evidence for design. What other observation can you make to support design that does not rely on the improbability of a chance explanation?

Mark Frank
January 29, 2009 at 11:17 AM PDT
Mark: I forgot. You say: "A designer of undefined power and motives is always the perfect explanation for everything and might well have designed things to look like Darwinism."

You know that that has never been my point, and never will be. I declare officially that I will never use that argument, and that if others in ID use it, I will openly criticize them! What more can I do? :-)

gpuccio
January 29, 2009 at 10:54 AM PDT
Mark: you are never satisfied. We have just given you a way to falsify ID, and you still complain! :-)

As often, I don't agree with your last point. The absence of a consistent non-design explanation of biological information is one of the strongest points of ID, but that does not mean that ID is only a negative theory. Let's put it this way:

1) A design scenario can explain biological information well; but

2) If a non-design scenario can explain it equally well, the affirmation that design is the "best explanation" will lose much of its strength, unless it can be based on other kinds of evidence.

A constant difference between us is that you tend more to think in purely logical terms, while I prefer to think in empirical terms. Empirical science is not a branch of pure logic. For once, I will make an analogy: we observe some facts, like objects falling, etc. Then one Newton comes, and he creates a mathematical theory which can well explain that. For a moment, let's forget that it is a theory based on necessity, and let's concentrate only on the epistemological aspect: it is an explanatory theory of some kind. Well, in the absence of any other explanatory theory, or even of any other "credible" explanatory theory, that becomes for the moment the "best explanation". But if another good explanatory theory is suggested, we have to choose between the two. And there will never be complete consensus about which one is the best. Scientific consensus is only the temporary opinion of the majority (or, sometimes, of those who have more power).

But would you say that a theory of gravity, like Newton's theory, is purely negative only because it needs to be the best explanation? Any theory can meet a sudden crisis if it encounters a better theory as its competitor. The same is true for Darwinian evolution and design, which at present are practically the only two competing theories to explain the causal origin of biological information. One must choose between them. And the choice is personal, and will never be a question of majority consensus, of authority, or of final evidence. I really don't believe in final truth in science. Even our scientific approach, in the end, is a question of free choice.

gpuccio
January 29, 2009 at 10:50 AM PDT
Jerry: I think it's a problem of motivation. Some come here to discuss, others to disturb. We can accept both, but there are limits to the exchange you can have with someone who only wants to disturb. I am never unhappy with others thinking differently from me. We have witnessed many serious and creative interlocutors here (Mark Frank and Sal Gal are perhaps the most recent I would cite). They have different opinions, and are very sincere and precise in expressing them. I never want to convince anybody, but I really enjoy intellectual confrontation, even if passionate, especially if passionate. But intellectual confrontation it has to be.

gpuccio
January 29, 2009 at 10:31 AM PDT
Mark: "I don't think the evidence for ID would be changed one jot. A designer of undefined power..."

Mark, you are correct in that God can always be invoked, but ID does not invoke God, nor does it attempt to describe the nature of the designer. What you are describing is a means of falsifying a method of ascertaining design (IC), which is an aspect of ID.

tribune7
January 29, 2009 at 09:45 AM PDT
So I am interested to know what evidence for ID, if any, would remain if Darwinism was established as plausible. If you take away the evidence against Darwinism - what’s left?
First off, the major qualification is whether a particular process operates uniformly for all problems. There may be a subset of FCSI, or just plain ol' CSI, that is within reach of unguided processes. If so, ID theory would have to adjust to that. But if it's not operating uniformly, and this can be shown, then, no, ID theory in relation to MET would not be falsified in its entirety.

Second, I'd make the distinction between "Darwinism" and MET. Darwinism, in its common, improper usage, usually denotes a dedication to the concept that life in its entirety does not rely on any intelligent interaction in any form. So in that case, the grand claims of MET would be vindicated, but ID would still stand, with the remaining ID-compatible hypothesis being intelligent evolution. What justification remains? OOL and chemical evolution, which Darwinism in a general sense overlaps. In this hypothetical scenario MET only becomes probable based upon the rules and conventions of the information-based replicator. And if anything, this case for ID is easier to make.

But since this conversation has already showcased just how weak the case for Darwinism and certain claims of MET (but not micro-evolution!!!) really is, I don't see how considering this scenario would help.

Patrick
January 29, 2009 at 08:45 AM PDT