Uncommon Descent: Serving The Intelligent Design Community

Two forthcoming peer-reviewed pro-ID articles in the math/eng literature


The publications page at EvoInfo.org has just been updated. Two forthcoming peer-reviewed articles that Robert Marks and I wrote are now up online (both should be published later this year).*

——————————————————-

“Conservation of Information in Search: Measuring the Cost of Success”
William A. Dembski and Robert J. Marks II

Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.

[ pdf draft ]
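For readers who want the abstract's three measures side by side, here is a minimal sketch in Python of how they relate, as I read the definitions (the function names and toy numbers are mine, not the paper's), assuming p is the per-query success probability of blind search and q that of the assisted search:

```python
import math

def endogenous_information(p: float) -> float:
    """I_omega = -log2(p): difficulty of the search under blind random sampling."""
    return -math.log2(p)

def exogenous_information(q: float) -> float:
    """I_s = -log2(q): difficulty remaining once problem-specific info is used."""
    return -math.log2(q)

def active_information(p: float, q: float) -> float:
    """I_plus = I_omega - I_s = log2(q/p): the problem-specific contribution."""
    return endogenous_information(p) - exogenous_information(q)

# Toy example: one target string among 26**5 equally likely 5-letter strings.
p = 1 / 26**5   # blind-search success probability per query
q = 1 / 100     # assumed success probability of some assisted search
print(f"endogenous: {endogenous_information(p):.2f} bits")
print(f"exogenous:  {exogenous_information(q):.2f} bits")
print(f"active:     {active_information(p, q):.2f} bits")
```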

——————————————————-

“The Search for a Search: Measuring the Information Cost of Higher Level Search”
William A. Dembski and Robert J. Marks II

Abstract: Many searches are needle-in-the-haystack problems, looking for small targets in large spaces. In such cases, blind search can stand no hope of success. Success, instead, requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches. (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially compared to the difficulty of the original search.

[ pdf draft ]
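The flavor of both theorems can be illustrated with a small Monte Carlo experiment (my own construction, not the paper's proof): draw candidate "searches" at random, here represented as random probability distributions over the space, with no knowledge of the target. On average they do no better than blind search, and draws that beat blind search by a given factor are correspondingly rare, so picking a good search at random is itself a hard search:

```python
import random

def search_for_a_search(n_space=100, n_trials=20000, gain=3.0, seed=1):
    """Sample random 'searches' (distributions over an n_space-point space)
    and measure (a) their average success probability on a fixed target and
    (b) how often a draw beats blind search by a factor of `gain`."""
    rng = random.Random(seed)
    target = rng.randrange(n_space)
    blind = 1.0 / n_space
    total, good = 0.0, 0
    for _ in range(n_trials):
        # Uniform draw from the probability simplex via normalized exponentials.
        w = [rng.expovariate(1.0) for _ in range(n_space)]
        q = w[target] / sum(w)   # this search's success probability
        total += q
        good += q >= gain * blind
    return total / n_trials, good / n_trials

avg, frac = search_for_a_search()
print(f"blind search: 0.0100, average random search: {avg:.4f}")
print(f"fraction of random searches at least 3x better than blind: {frac:.4f}")
```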

—————

*For obvious reasons I’m not sharing the names of the publications until the articles are actually in print.

Comments
"Another stab at it: DNA seems to be viewed as software, but isn’t it really hardware?" No more than a CD-ROM is hardware; is has a physical reality, but it is the information contained within that is its purpose. DNA is a vessel of information (of the functionally specified variety) that animates and organizes inaimate matter into living tissue. It is the only thing in the universe that does so.Upright BiPed
January 22, 2009 at 12:30 PM PDT
#32:
No, the rock is not information. And your example has nothing to do with DNA. DNA (at least, the protein coding genes) is digital information which stores a sequence which, when translated into an amino acid sequence, has a specific function. We know the code, we can read it. We know, in many cases, the product (the protein) and its function. What has all that to do with your rock?
I will concede that my point may be "clear as mud," and I appreciate anyone who makes an attempt to answer it. What I'm trying to say is: notwithstanding the characterization of DNA as "digital information," isn't it really a physical configuration of molecules that interacts with other physical objects/particles, leading to a far-flung range of physical consequences? Another stab at it: DNA seems to be viewed as software, but isn't it really hardware?pubdef
January 22, 2009 at 12:23 PM PDT
gpuccio:
Natural laws do not imply any new information except, obviously, the possible information in the original setting of the laws themselves: that is an aspect of the fine tuning argument, which is very different, and separate, from the main ID argument of information in biology.
Since Dembski and Marks put no restriction on what processes can be modeled as searches, any process that isn't uniformly random involves active information. (In fact, even uniformly random processes can be modeled as active-info-rich searches by simply modeling the search space in such a way that the sampling isn't uniform. If a process generates random circles whose diameters are uniformly distributed, simply define the search space in terms of the areas rather than the diameters; see the sketch below this comment.) So natural laws are active-information-rich according to Marks and Dembski's definitions. Their claim is that this active info must be referred to a higher-level search (or, presumably, to intelligence, although they don't say that in the papers), which can in turn be referred to an even higher-level search, ad infinitum. Regularities that just are don't seem to be an option in this framework.
That equals, obviously, knowing the searched phrase. In other words, you can get the weasel phrase only if you already know it. But then why search for it?
The active information metric is defined in terms of a prespecified target T. If prespecified targets are a problem, then they're a problem for Marks and Dembski's whole framework.
That’s why we speak of a search: because darwinian evolution needs a system which acts as a search.
Who is "we"? The idea that biological evolution is searching for something is certainly not the mainstream view.
That’s why you will never obtain better digital replicators just by random noise in a digital system. The concept simply does not work. Mathematicians can argue about why it does not work, but we do know that it does not work: you just have to try to detail it or put it to the test, both in a computer environment and in a biological environment. The only space where that system seems to work is in the mind of darwinists.
Genetic algorithms certainly do work. Whether they scale well enough to produce our earthly biota is another question.R0b
January 22, 2009 at 11:56 AM PDT
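R0b's circle example above is easy to check numerically. A sketch (mine, not his), assuming diameters drawn uniformly from (0, 1]: the same physical process looks unbiased when the space is parameterized by diameter and biased when parameterized by area.

```python
import math
import random

rng = random.Random(0)
diameters = [rng.uniform(0.0, 1.0) for _ in range(100_000)]
areas = [math.pi * (d / 2) ** 2 for d in diameters]

# Fraction of samples landing in the lower half of each parameterization:
frac_d = sum(d <= 0.5 for d in diameters) / len(diameters)   # ~0.50: uniform
frac_a = sum(a <= math.pi / 8 for a in areas) / len(areas)   # ~0.71: not uniform
print(f"P(diameter in lower half of its range) = {frac_d:.2f}")
print(f"P(area in lower half of its range)     = {frac_a:.2f}")
```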
gpuccio @30 Evolution performs no search; it is, in itself, a result. Evolution is not the trip, it is the destination you arrive at. Whether implied by Darwin or anyone else, there is no search process involved. Since evolution has no goal, we can't tell where it's going or how easy it will be to get there.Toronto
January 22, 2009 at 11:21 AM PDT
WilliamDembski[26], I skimmed the "search for a search" paper. It's a nice paper. I think there is a simpler proof of "conservation of uniformity" for finite Omega, one that uses elementary methods and does not invoke weak convergence of measures. Consider the vector [p(1),...,p(n-1),p(n)] where [p(1),...,p(n-1)] is uniformly distributed and p(1)+...+p(n-1)+p(n)=1. Integrating over the simplex gives the joint pdf and the marginal of each p(i), and the expected value in the marginal is 1/n. I understand that the authors are pro-ID, but I don't see how the paper itself can be labeled "pro-ID." What is the logic behind such a claim? Because the darwinian search obviously is not uniform, it has not been chosen according to the uniform distribution induced by the Kantorovich-Wasserstein metric, hence there is support for ID? I also don't understand your comment. What do Schneider, Pennock, and Behe have to do with how your paper is relevant to biology?Prof_P.Olofsson
January 22, 2009 at 09:17 AM PDT
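Prof_P.Olofsson's expectation claim above is easy to check numerically; a sketch (mine), using the standard fact that a Dirichlet(1,...,1) draw is uniform on the simplex:

```python
import numpy as np

# For [p(1),...,p(n)] uniform on the simplex, each marginal E[p(i)] should be 1/n.
n = 5
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(n), size=200_000)  # uniform on the n-simplex
print(samples.mean(axis=0))  # each entry should be close to 1/n = 0.2
```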
Laminar: "This is an example of a random search with a binary evaluation criteria (it either is the works of Shakespeare of it isn’t) There is no descent with modification and no fitness metric, just a halting condition. It is NOT an evolutionary search." Well, NS is a very limited oracle, and it can only judge if a reproductive advantage has been achieved or not. I really think that all that talking of landscapes and fitness functions creates a lot of confusion. Let's be simple: NS can do only two things: expand a genome if a perceptible reproductive advantage has been achieved (positive selection); or eliminate it if there has been a significant loss of function (negative selection). That is the only oracle you have in the darwinian model, nothing else. And I am still expecting that a credible molecular model for macroevolution be detailed with that kind of oracle correctly utilized, and the random variation part correctly computed. Is that asking too much from such an extraordinary scientific theory as darwinian evolution is supposed to be?gpuccio
January 22, 2009 at 07:51 AM PDT
pubdef (#19): "Suppose the course of a creek is altered by a rock that falls into it. Better, suppose that the new course leads the creek into country that is more amenable to creek-dom, and the creek becomes a river. Is the rock “information?” My real point: how is this different from DNA (or RNA — I don’t have the specifics of that science)? A physical substance affecting another physical substance, with extensive consequences in the long run?" No, the rock is not information. And your example has nothing to do with DNA. DNA (at least, the protein coding genes) is digital information which stores a sequence which, when translated into an amino acid sequence, has a specific function. We know the code, we can read it. We know, in many cases, the product (the protein) and its function. What has all that to do with your rock?gpuccio
January 22, 2009 at 07:43 AM PDT
PO--I wrote "relevant" which is not the same as "related." Relevant is relevant to related, and related is related to relevant. Don't you read Merriam-Webster? It has an awful plot :-)tribune7
January 22, 2009 at 07:41 AM PDT
R0b: Just a few thoughts on what you say:

1) Natural laws do not imply any new information except, obviously, the possible information in the original setting of the laws themselves: that is an aspect of the fine tuning argument, which is very different, and separate, from the main ID argument of information in biology. Once a necessity mechanism is detailed, there is no added information there. The mechanism has to follow the laws. Functional information can be superimposed on contingent structures, not on necessary ones. The information in biology is very definite: it is digital, and it is functional. Necessity has no role in creating it, except for the possible role of the NS model. NS is a model of necessity, but it is a model which has never been detailed, except for very trivial microevolutionary events. And I am not only saying that it has not been proved: I am saying that no real molecular model has ever been detailed, for any important protein transition, for example, where the role of NS is substantiated.

2) I will not answer your weasel argument because, like you, I am not a mathematician. But I will only say that I don't agree with your concept. If you have to get the final phrase, you have to know the solution for each search, and the correct relationship between those solutions. That equals, obviously, knowing the searched phrase. In other words, you can get the weasel phrase only if you already know it. But then why search for it? It's not the same if you have to look for one protein with one function you need. Take protein engineers, for instance. They know what they want (the function), but they do not know how to achieve it (they don't know the sequence which will have that function, notwithstanding all the knowledge we have about proteins). So they utilize partial random search, and function measuring for selection. And still, they have to work a lot to get some results. And function measuring is much more sensitive than just NS (which selects a function only when it is strong enough to provide, by itself, a reproductive advantage).

3) To get back to biology: in evolution there is IMO no search model, just replicators and an environment, and random variation. NS is only a consequence of the interaction between the replicator and the environment. But the theory of darwinian evolution implies a search, because it assumes that the growing information in higher beings is derived from the information in lower beings through unguided mechanisms. In other words, darwinian theory interprets a completely neutral system (the replicator and its environment) as a system which can find new information and new intelligent patterns. That's why we speak of a search: because darwinian evolution needs a system which acts as a search. The only problem is that such a system is not really effecting any search, and that's exactly why the theory does not work. That's why you will never obtain better digital replicators just by random noise in a digital system. The concept simply does not work. Mathematicians can argue about why it does not work, but we do know that it does not work: you just have to try to detail it or put it to the test, both in a computer environment and in a biological environment. The only space where that system seems to work is in the mind of darwinists.gpuccio
January 22, 2009 at 07:38 AM PDT
tribune[27], I wrote "relevant" which is not the same as "related." Don't you read Merriam-Webster?Prof_P.Olofsson
January 22, 2009 at 06:53 AM PDT
Sal Gal (#24): "I don’t think it’s a particularly well kept secret that the genetic sequences that are prime for evolution into genes coding for new proteins are duplicate genes and pseudogenes." It is not a particularly well kept secret that darwinists do think that way. They have to use the hypothesis of duplication as a first step because that helps to keep the previous function at the level of the original gene. That seems useful if you are supposing that unguided evolution goes from protein A with function A1 to protein B with function B1. But, at the same time, operating on a duplicate gene implies that negative selection can no longer act.

"There is huge inconsistency among IDists on the matter of functionality of DNA sequences. They often push the point that most or all of the genome is designed, and that we simply have a great deal to learn about what non-coding sequences do. But when they want to portray evolution as utterly improbable, they say that a sequence of bases is categorically fit if it codes for a prespecified protein, and is categorically unfit otherwise. Now which is it? Can a non-coding sequence contribute to fitness, or not?" I understand that you are not a biologist, but please, let us go back to real examples. Protein coding genes are only 1.5% of the human genome. The non coding DNA is almost certainly functional, and its functions are almost certainly regulatory. But we still understand poorly what those functions are and how they work. There is absolutely no inconsistency in ID about that matter. We do believe that non coding DNA is functional (at least a great part of it; personally, I believe it is almost completely functional). And we do believe that its functions are regulatory. But, when we "want to portray evolution as utterly improbable", we do choose the model of protein coding genes, because that's the model we know about, and that's the model on which darwinist theory has been built. In other words, we do not deal with the information in non coding DNA for exactly the same reason why darwinists do not deal with it: because we don't know where it is, and how it is encoded. But where is the inconsistency? Let's say that we are dealing with how that 1.5% of the genome which codes for proteins was generated. That is more than enough to prove that darwinian evolution is "utterly improbable". In other words, in that 1.5% we have information for about 20,000 proteins (at least), and we have to explain how it emerged. Moreover, a protein coding gene is a protein coding gene. It encodes a protein sequence. And there is only one kind of fitness for a sequence of nucleotides which encodes a protein sequence: the protein must be functional. Or are you suggesting that a protein coding gene evolves first as a regulatory gene, and then is "coopted" as a protein coding gene? Are you sharing the darwinist folly to that point? In the end, the problem is simple: we have two proteins, A and B, completely different one from the other, and someone says that B is derived from A, or that both are derived from a common ancestor. Well, the information in those proteins is digital, and for me that means that any search based on random variation has to traverse the combinatorial space defined by protein sequences of approximately that length. That is the problem with random variation in a digital search. And if you say that NS changes the facts, that may be true; but NS is a model of necessity, and you have to build that model, to detail where and how NS can work. Simply hoping that it can work anyway will not do. Regulatory functions are another question altogether. They are still vastly unknown. But, if we knew more about them, they would certainly be a much greater problem for darwinists, because regulation is indeed a higher level of information.gpuccio
January 22, 2009 at 06:53 AM PDT
"how they are relevant to evolutionary biology." Everything is related to evolutionary biology, Professor. If it wasn't for Darwin, we'd still be walking around in animal skins and rubbing two sticks together to make a fire. Don't you read Newsweak? Oh, I misspelled it. Horrors.tribune7
January 21, 2009 at 08:51 PM PDT
As for the relevance of this work to biology, let me remind commenters that Thomas Schneider used his ev program to argue against Behe and for the power of natural selection in biological evolution and that Rob Pennock cited his work on AVIDA likewise to argue against Behe and for evolution (Pennock cited this not in his NATURE article but in his Dover expert witness report). So if you've got a problem with the applicability of the research at the Evolutionary Informatics Lab to real-life biological evolution, take it up with Schneider and Pennock.William Dembski
January 21, 2009 at 08:17 PM PDT
Glad to hear about your articles, Mr. Dembski! You seem to be successfully pulling into the area that neo-Darwinists seem to deem the most important, that is, peer-reviewed articles. Once more ID-themed papers get into the peer-reviewed sections of science, who's to say neo-Darwinism will hold up at all? Especially considering that, if I understand the idea behind the conservation of information theory correctly, it will literally be the death-knell of neo-Darwinism!Domoman
January 21, 2009 at 07:30 PM PDT
Wolpert and Macready stated outright that their work applied to combinatorial optimization. No one in his right mind would model biological evolution as combinatorial optimization. Setting up as a target, say, the set of all length-100 sequences of nucleotides coding for a particular protein is highly unrealistic. Sure, that gives you a combinatorial optimization problem, but to say that any natural process sought for an encoding of a particular length is absurd. I don't think it's a particularly well kept secret that the genetic sequences that are prime for evolution into genes coding for new proteins are duplicate genes and pseudogenes. There is huge inconsistency among IDists on the matter of functionality of DNA sequences. They often push the point that most or all of the genome is designed, and that we simply have a great deal to learn about what non-coding sequences do. But when they want to portray evolution as utterly improbable, they say that a sequence of bases is categorically fit if it codes for a prespecified protein, and is categorically unfit otherwise. Now which is it? Can a non-coding sequence contribute to fitness, or not? Geneticists believe that some pseudogenes serve functions, even though they don't code for proteins. So why should the fitness of a genetic sequence be dichotomous? Why should we not consider that a non-coding sequence may pass through a succession of functions before coding for a protein? Just as the IDists say, who knows what functions we have yet to discover?Sal Gal
January 21, 2009 at 05:26 PM PDT
Evolution is a blind, unguided process. ID is a goal-driven process. Since ID has its goal and evolution doesn't care for one, neither is searching for anything. How do you measure the efficiency of a function they're not actually performing? In other words, what do these two papers contribute to the resolution of the ID/evolution argument?Toronto
January 21, 2009 at 03:00 PM PDT
Getting papers published is always nice, so congratulations to the authors! However, I don't see how these two papers qualify as "pro-ID" until it is demonstrated how they are relevant to evolutionary biology. Remember that such claims were made for the original paper by Wolpert et al. until it was pointed out that it rests on assumptions that do not apply to evolutionary biology.Prof_P.Olofsson
January 21, 2009 at 02:02 PM PDT
For anyone who has read the second paper: To my math-challenged eyes, it appears that the only type of information that their vertical NFLT addresses is what they call "importance sampling" in the first paper. I must be missing something, so I need someone to hold my hand here. I would expect the phrase "search for a search" to refer to a search for a better-than-random search algorithm. But the meta-search space (M(omega) on page 4) is a space of probability distributions, not a space of algorithms. If we're restricting ourselves to stateless search algorithms and no fitness/cost function, then the only information we can use is a probability distribution to bias our sampling. But this leaves out common search strategies like genetic algorithms. For example, if the active information I'm given is (a) fitness function f is smooth, and (b) target T is at the maximum of f, then that information helps immensely in choosing a search algorithm. But neither of those items of information is a probability distribution, so neither of them fits into Marks and Dembski's meta-search space. Again, I'm math-challenged and I know I'm missing something.R0b
January 21, 2009 at 11:28 AM PDT
CJYman, thanks for the comments. I think we're going to have to talk far more concretely if we want to avoid talking past each other. With regards to weasel, you say:
Which metric are you using to determine when each search locks on a letter? Is this metric possible without any knowledge of the target? IOW, try simulating the search you propose without imposing problem-specific information about when each search is to stop and which search goes in which position. Without knowledge about the final target, your search procedure seems even harder for chance and law to accomplish, since now there are two steps instead of only one.
I need to describe my model better. SearchAlgorithm1 sends a guess to Oracle1, and Oracle1 responds with "yes" if the guess is "M", or "no" otherwise. Once SearchAlgorithm1 gets a "yes" back, it stops searching. This is a blind search as Dembski describes it, and it involves no more problem-specific information than any other blind search. Likewise with SearchAlgorithm2, SearchAlgorithm3, etc. Oracle2 says "yes" for the letter "E", Oracle3 for the letter "T", etc. When all searches are done, SearchAlgorithm1 has the letter "M", SearchAlgorithm2 has the letter "E", etc. You might ask: how does each SearchAlgorithm know which Oracle to query? Isn't that problem-specific information? I would respond: how does any search algorithm know what oracle to query? How does it know to query at all? How does it know what search space to sample from? How does it know to stop when the oracle tells it that it found the target? How does it know not to stop before that? All of this information is built into the search model, and Marks and Dembski do not count it as active information. The only info that counts as active info is that which allows the search to perform better than random search. My weasel model performs random searches, so it has no active info, according to Marks and Dembski's definition. I think the same principle applies to your response to the stick-in-the-river example. According to my model of that physical process, the stick finds the target far faster than it would through random sampling. You may think that a different model is more appropriate, and that's exactly my point. Marks and Dembski's framework provides no criteria for determining whether a model is appropriate or not, so the choice is arbitrary. This means that the active information metric is, to some degree, arbitrary when applied to real processes rather than pre-specified search models. Note that the two papers referenced in the OP apply the metric only to pre-specified models. Note also that neither paper makes any attempt to connect the active info concept to intelligence, design, teleology, etc. So there's a gap there that Dembski needs to fill in order to support his claim that the papers are pro-ID.R0b
January 21, 2009 at 11:06 AM PDT
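A sketch (mine, not R0b's actual code) of the parallel per-letter model he describes above: each per-letter search is blind, seeing only yes/no answers from its own oracle, yet the full phrase is assembled in a few hundred queries.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def blind_letter_search(oracle, rng):
    """Guess uniformly at random until the oracle answers yes;
    return the accepted guess and the number of queries used."""
    queries = 0
    while True:
        guess = rng.choice(ALPHABET)
        queries += 1
        if oracle(guess):
            return guess, queries

rng = random.Random(42)
letters, total_queries = [], 0
for target_letter in TARGET:
    # Each oracle answers yes only for its own letter; the search itself
    # never sees the target, only yes/no responses.
    oracle = lambda g, t=target_letter: g == t
    found, n = blind_letter_search(oracle, rng)
    letters.append(found)
    total_queries += n

print("".join(letters), f"({total_queries} queries in total)")
```

Whether this counts as 28 independent blind searches or as one information-rich search for the whole phrase is exactly the modeling ambiguity R0b is pointing to.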
Pardon me for jumping in, and for not having read any of this post or comments, but I thought this might be an opportune moment to ask a question I've had for a while, on the general subject of "material" and "information." Suppose the course of a creek is altered by a rock that falls into it. Better, suppose that the new course leads the creek into country that is more amenable to creek-dom, and the creek becomes a river. Is the rock "information?" My real point: how is this different from DNA (or RNA -- I don't have the specifics of that science)? A physical substance affecting another physical substance, with extensive consequences in the long run?pubdef
January 21, 2009 at 10:11 AM PDT
As an aside, active information is caused by the organization of the algorithm (the matching of search procedure to search space), and this is why the algorithm cannot generate active information any more than it can create itself. Thus, if it cannot create itself absent previous foresight, the active information is generated by a higher-level search ad infinitum, or by a system which can model the future and generate targets (a foresighted system).CJYman
January 21, 2009 at 08:04 AM PDT
Rob (#8): "Now try modeling the search the found our current natural laws. What is the search space? What’s the target? Is active info a property of actual processes, or of the choices we make when we model those processes as searches?" Active info is a property of the organization necessary for the actual processes to produce better than chance performance. Taking this back a step further, if chance and law absent previous foresight will not cause an organization of law and chance to produce active information, then active information is also a property of foresight. So far, no one has shown that active information can be generated absent previous foresight (applying knowledge of the problem to be solved or search space into the behavior/organization of the search algorithm as it relates to the search space). Of course, that obviously adds credibility to the ID Hypothesis. Rob: "Another example: Suppose my target is a lake. A stick dropped in a river tends to find its way to the target without searching every corner of the universe, so we’re talking massive amounts of active info. How does the stick know which way to go? Obviously, the same gravity that determines the location of the lake (at a local minimum) also guides the stick. The target location and the search algorithm are not independent. Is this dependency intelligent?" I don't think your example models the concept of active information correctly, since no one can create an evolutionary algorithm based on whichever principles you are saying are at work. I think the flaw in the example would be in actually calculating for active information -- how may sticks fall into rivers naturally and then what percentage of those sticks would actually make it to any lake. Then we would have to randomly drop sticks in random rivers and see what percentage make it to your target lake compared to any lake. This would seem to be akin to assuming that any collection of background noise and laws will eventually produce life and an evolutionary algorithm and then generate intelligence at better than chance performance. Well, sure you can make that as your hypothesis, but good luck with trying to test that or even coming up with a mathematical model to show that is theoretically possible. Rob: "As far as the weasel searches, if Search1 searches for the first letter, Search2 searches for the second letter, etc. then all letters will be found. Marks and Dembski’s weasel algorithm is nothing more than all of these searches happening in parallel. One model has lots of active info, and the other has none. So again it seems that the active info metric depends heavily on how we choose to model a process." The problem is not merely finding the letters. The problem is first locking the letters in place when they are found and then placing those locked positions in to the correct position to reach the final phrase -- ie: search 1 goes in position a, search 2 goes in position b, etc. Which metric are you using to determine when each search locks on a letter? Is this metric possible without any knowledge of the target? IOW, trying simulating the search you propose without imposing problem specific information about when each search is to stop and which search goes in which position. Without knowledge about the final target, your search procedure seems even harder for chance and law to accomplish since now there are two steps instead of only one.CJYman
January 21, 2009 at 07:04 AM PDT
Rob (#8): "If you try to model the evolution of the human brain as a search, you’re immediately faced with some choices that seem arbitrary. For instance, what is the target?" As far as I can tell, there is no need for arbitrary measurement. Out of all possible combinations of the chemical constituents to make a brain, the brain is an extremely improbable combination of chemicals. In fact, with enough calculation power, an objective number of probability could be given to the brain. I'm sure for the purposes of debate, no one would object to the that number being unfathomable small, seeing that every scientists acknowledges that it is indeed the most complex system by a far shot within our universe (that we are aware of) -- even compared to a space shuttle which is a pretty darn complex system. If we wouldn't expect as a matter of probability, in the history of our planet, for the constituent chemicals which make up a brain to randomly coalesce into a brain, then we both understand that we need some type of fine tuned laws to allow the brain in the first place and then a ratcheting mechanism based on those same laws to bring us closer to the brain. Thus we have a filtering process which is needed to discover the brain. Now, we both seem to realize that the brain itself won't come together via chance and law absent that filtering process. So the question becomes, can that filtering process and the organization of laws which account for it be discovered via merely an arbitrary collection of laws and initial conditions (chance and law)? This becomes a search for that filter (search process which raises the probability for discovering/generating the brain). SO we still haven't accounted for the ability to find the brain at better than chance performance by saying "evolution did it." We need to still search for that evolutionary algorithm -- the incredibly fine tuned laws and initial conditions of our universe -- those highly improbable physical constants and variables which drive some physicists to marvel that it seems a "superintellect has mokeyed with physics." Rob: "Have we reached it? Maybe we’re stuck in a local optimum. If natural laws prevent us from ever reaching the ultimate target, then they have negative active information." It doesn't matter if we are stuck in a local maximum, it only matters if we have indeed reached one of the targets. Even if we don't reach all of the targets (which we will never know since we are within the program) then that makes no difference for us having reached at least one of the targets. Not knowing all the targets makes no difference in calculating the positive active information for targets which have been reached. I think that a good example for the point that is being made is this: take some background noise and an arbitrary collection of laws (definitely only law and chance absent any previous foresight to organize the laws and initial conditions in any specific way for any specific target) and run a program which causes the noise and laws to interact with each other an see if any active information is generated. Are any specified or pre-specified patterns discovered at better than chance performance?CJYman
January 21, 2009 at 06:56 AM PDT
Hello Rob, I definitely enjoy discussing these things with you as well. Your style of conversation is a breath of fresh air (no obfuscation, honest responses, and no personal attacks). I apologize if I can't follow this topic for that long, as I am back to work now, but here goes for now. And I apologize for my extreme inability to be brief and to the point. Sometimes, I just think that a little in-depth rant is necessary to get the idea across. So I will be posting responses to your #8 in sections. Rob, you state: "It's interesting that simple natural laws can be more information-rich than human brains, but such is the concept of active information." Yes, the laws themselves are simple mathematical descriptions of regularities; however, the relation of the laws to each other is not simple, and it is in this complex organization of law and initial conditions that the active information "resides." In fact, if you understand the fine tuning argument from physics, you will understand that our set of laws and initial conditions (cosmological constant) is part of an extremely small set that would allow life, much less even provide enough time for evolution to occur. Life's existence is teetering on a knife's edge: a combination of laws among a vast majority of possible mathematical laws which wouldn't support any notion of life as an evolving information processing machine. That is what information is about. It is not about the laws as mere descriptions of regularity themselves, but about the *organization* of those laws, ie: which laws are being utilized, which initial conditions are being utilized, and what values are the laws set at? That is what ID critics fail to realize. There is objectively more to our universe than law and chance. There are organizations of law and chance. This is known as information, and it can be assigned a probability -- its information content in bits. In fact, the organization of natural laws may be so complex that they operate in such a way as to provide a framework for consciousness. The organization of the laws themselves may even be so complex as to cause the universe itself to be an intelligent system. If a chess program, "merely" a sufficiently organized collection of logic gates, can be intelligent (model the future and move toward a target of winning the game), then there is no theoretical reason why our universe can't be intelligent in at least that same way. To say that a chess program is only law and chance and thus has no need for previous intelligence is blatant misinformation. It's not necessarily the law and chance involved that requires previous intelligence (law and chance is just what it is); it is the highly specific and improbable organization of law and chance -- the information -- that required previous intelligence (foresight).CJYman
January 21, 2009 at 06:49 AM PDT
Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure.
The notion of "performance" in the context of NFL is nonstandard. Performance is defined as the quality of a sample of n points in the search space and their associated fitness values. The running time of a search algorithm is entirely ignored. This is reasonable only if the time to evaluate fitness does not vary much from one point in the search space to the next -- e.g., as in typical combinatorial optimization problems. It is obvious that the time required for "fitness evaluation" in biological evolution varies enormously from one type of organism to the next. Evolution is not just a matter of information, but also of time, and extant NFL analyses do not tell us about "performance" in any conventional sense of the term.
To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search.
When time is taken into account, generally superior "searches" do emerge quite naturally [paper in review].Sal Gal
January 21, 2009 at 06:49 AM PDT
From the CoS paper: "In evolutionary search, a large number of offspring is often generated, and the more fit offspring are selected for the next generation. When some offspring are correctly announced as more fit than others, external knowledge is being applied to the search giving rise to active information." Deciding that an individual has achieved the search target is to decide that it is more fit than one that hasn't; all search algorithms require some knowledge of the solution, namely what qualifies as a solution. The fitness function in this case is just a binary, 'all or nothing' choice. The problem here is that reducing the evaluation criteria of a GA or other type of hill climbing algorithm to an all-or-nothing choice converts these algorithms into random searches: if a 'hill climber' can't measure the slope, then it is just doing a random walk. Providing an evaluation criterion that is non-binary does not necessarily imply or require knowledge about the search in hand; it just requires that the search space has certain properties in order for the search to be effective. In other words, if your fitness landscape is flat with a single pinnacle of fitness, then a graduated fitness function is of no help. "A 'monkey at a typewriter' is often used to illustrate the viability of random evolutionary search." This is an example of a random search with a binary evaluation criterion (either it is the works of Shakespeare or it isn't). There is no descent with modification and no fitness metric, just a halting condition. It is NOT an evolutionary search.Laminar
January 21, 2009 at 03:12 AM PDT
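Laminar's point above, that binary evaluation collapses a hill climber into a random walk, can be shown in a few lines (a sketch; the target, alphabet, and parameters are mine):

```python
import random

TARGET = "WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def hill_climb(fitness, max_steps=100_000, seed=0):
    """Mutate one random position per step; keep the child unless fitness
    drops. Returns the number of steps to reach TARGET, or None."""
    rng = random.Random(seed)
    s = [rng.choice(ALPHABET) for _ in TARGET]
    for step in range(max_steps):
        if "".join(s) == TARGET:
            return step
        child = s[:]
        child[rng.randrange(len(s))] = rng.choice(ALPHABET)
        if fitness(child) >= fitness(s):
            s = child
    return None

graded = lambda s: sum(a == b for a, b in zip(s, TARGET))  # counts matching letters
binary = lambda s: 1 if "".join(s) == TARGET else 0        # all-or-nothing

print("graded fitness reached the target in", hill_climb(graded), "steps")
print("binary fitness reached the target in", hill_climb(binary), "steps (None = never)")
```

With the graded ("slope") measure the climber converges quickly; with the binary measure every mutation is accepted (0 >= 0), so the walk is random and the target is effectively never found in the step budget.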
Well, I wonder who the NCSE will "Richard Sternberg" (verb) after they are published.William Wallace
January 20, 2009 at 11:31 PM PDT
On one level I agree with Dr. Dembski, and it is hard to imagine how anyone possibly could not. He is stating in a more complicated way the following: the probability of a bitstring is determined by the number of bits in the smallest program that will generate it. That's why a "search for a search for a search" cannot be more probable than the original search. However, what is kind of amazing to me is that there is still a very strong commitment to Dualism in Dr. Dembski's work, which I think is complicating his whole endeavor. He demands we take away a certain part of a computer algorithm and say, "That's not really part of the search algorithm - that was added into it." And the inevitable source for this essential ingredient divorced from the algorithm proper is of course the miraculous, inscrutable and decidedly uncomputational (in Dr. Dembski's mind) marvelous Human Mind. (And please remember to capitalize, out of respect.)
"Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them."
"Such information does not magically materialize but instead results from the action of the programmer who prescribes how knowledge about the problem gets folded into the search algorithm."
"Recognition of the inability of search algorithms in themselves to create information is very useful."
"When the results of even a moderately sized search problem are too good to be true or otherwise overly impressive, we are faced with one of two inescapable alternatives: The search problem under consideration, as gauged by random search, is not as difficult as it first appears. The other inescapable alternative, for difficult problems, is that problem-specific information has been successfully incorporated into the search."
"No Free Lunch Theorems (NFLT’s) [28], [51], [52], show that without prior information about the search environment or the target sought, one search strategy is, on average, as good as any other."
"Over 50 years ago, Leon Brillouin, a pioneer in information theory, wrote 'The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.' When Brillouin's insight is applied to search algorithms that do not employ specific information about the problem being addressed, one finds that no search performs consistently better than any other."
I can't imagine Leon Brillouin thinking his observation was any more than obvious. A computer is an extremely simple device; everyone should understand that (think Turing Machine). But a computer algorithm can be as arbitrarily complicated as needed. And you can't come in and strip part of it out and say, "That's not part of the algorithm! That's the result of a Human Mind!" It is part of the algorithm. And even if it came from a human mind, you haven't shown a human mind isn't an algorithm too. Don't really want to start a huge heated debate though.JT
January 20, 2009 at 07:14 PM PDT
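For context on JT's opening claim (my addition; JT doesn't give the formula): in standard algorithmic information theory the dependence is exponential, which is what makes iterated "searches for searches" no more probable than the original. A bitstring x whose shortest generating program is K(x) bits long has universal-prior probability on the order of 2^-K(x):

```python
def algorithmic_probability_bound(k_bits: int) -> float:
    """Order-of-magnitude bound from algorithmic information theory:
    a string whose shortest generating program is k_bits long has
    probability on the order of 2**-k_bits under a universal prior."""
    return 2.0 ** -k_bits

# A string needing a 20-bit program is about a million times more probable
# than one needing a 40-bit program:
print(algorithmic_probability_bound(20) / algorithmic_probability_bound(40))  # 2**20
```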
Well, this would be interesting... except the abstracts show that the papers will consist of a slew of technobabble, idiosyncratic terminology, and layered levels of abstraction, no doubt never even mentioning theism once. Not surprising. NS http://sciencedefeated.wordpress.com/notedscholar
January 20, 2009 at 06:33 PM PDT
Is anyone interested in explaining how these search algorithms are related to ID and evolution? What are they searching? And how does this relate to biological processes? If anyone can put this in simple language, it would be appreciated. I realize that it has something to do with transforming one DNA string into another, and for the new DNA string to generate something functional and useful, but while the mathematical rigor may be necessary for these journals, what is it in simple terms? Anyone?jerry
January 20, 2009 at 05:26 PM PDT
