Uncommon Descent: Serving The Intelligent Design Community

Two forthcoming peer-reviewed pro-ID articles in the math/eng literature

The publications page at EvoInfo.org has just been updated. Two forthcoming peer-reviewed articles that Robert Marks and I did are now up online (both should be published later this year).*

——————————————————-

“Conservation of Information in Search: Measuring the Cost of Success”
William A. Dembski and Robert J. Marks II

Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.

[ pdf draft ]
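For orientation, here is a minimal sketch of how the three measures in the abstract relate, with p denoting the probability that blind (random) search hits the target and q the probability of success for the assisted search; this is only a restatement of the abstract's definitions, not material from the paper itself:

```latex
% p = probability of success for blind search, q = for the assisted search
I_{\Omega} = -\log_2 p                                  % endogenous information
I_{S}      = -\log_2 q                                  % exogenous information
I_{+}      = I_{\Omega} - I_{S} = \log_2 \frac{q}{p}    % active information
```

An assisted search that does no better than blind search has q = p and therefore zero active information; any q greater than p has to be paid for with problem-specific information.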

——————————————————-

“The Search for a Search: Measuring the Information Cost of Higher Level Search”
William A. Dembski and Robert J. Marks II

Abstract: Many searches are needle-in-the-haystack problems, looking for small targets in large spaces. In such cases, blind search stands no hope of success. Success, instead, requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that the average relative performance of searches never exceeds that of unassisted or blind searches. (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially compared to the difficulty of the original search.

[ pdf draft ]

—————

*For obvious reasons I’m not sharing the names of the publications until the articles are actually in print.

Comments
Jerry -- guilty as charged. It must have been past my bedtime when I posted that last comment. But in all seriousness -- it seems that a large part of the argument of ID relies on the concept of "information," and I'm honestly at sea about what exactly that is. And gpuccio: this is not very helpful.
You ask:
“Where along the continuum of complexity does a physical interaction attain the status of “information?”
The answer is very simple: when some particular complexity assumes a configuration which can give you a specific useful information, which can allow you to do something which otherwise you could never do.
In other words, a physical interaction attains the status of "information" when it can give you "a specific useful information." As long as it appears to me that ID relies on how complex something is, I don't think I'll be impressed. In my limited experience, evolutionary biologists have been quite aware of how complex nature can be. I didn't get very far into "The Blind Watchmaker" before it was overdue, but I did get through the chapter in which Dawkins describes echolocation in bats in greater and greater detail, all the while making analogies to radar developed for planes in WWII.pubdef
January 25, 2009 at 11:32 AM PDT
Prof_P.Olofsson, Are you ready to say the correspondence of the nucleotides in DNA to the production of functional proteins does not act like a code? If you are, then I will gladly include you with pubdef and Mark Frank. But you should also look at my comment #9 about the current thread. I do detect a small lack of constructive criticism on your part. Yours is one of pointing out the flaws in others' arguments, which is well and good, but I do not see any attempt at helping others solve the problem beyond your criticism. For example, could the problems you raised in the past about the flagellum be solved or ameliorated somewhat if there were probability estimates of the number of potentially functional proteins among the totality of possible proteins? Now I do not know enough about the technicalities of either probability theory or the behavior of random polymers of amino acids to make any intelligent assessment, but I bet that there are some who do.

By the way, I have hardly read all your comments, so my assessment could be quite wrong. I was only using the sampling of the ones I have read to make my judgment, and like any statistical analysis there is a potential error. The sampling also indicates a cordial and generally nice person. By the way, I was a mathematics major in college and went to Duke graduate school on a fellowship for mathematics before leaving for the military and a change of life. My leaving had nothing to do with my work in math, but I never really got back into it and eventually went to Stanford for an MBA, where I initially was enamored with Operations Research because of my math background. I decided to pursue marketing instead. I had a number of statistics courses in later years as part of a different Ph.D. program. So while the technicalities of statistics are a distant memory, other things I remember quite well. I can recognize constructive behavior or the lack of it when I see it. That is something I never lost, because I see it or the lack of it every day.

I look forward to you proving me wrong, because I believe you could be extremely helpful. But if you were, you would probably suffer for it with your colleagues. Actually, what could be a better way to discredit ID than to actively help it and then have those you helped admit that what they were trying to show was a dead end, and then have them thank you for all that you tried to do for them? If such a thing happened to me I would glow with satisfaction.
jerry
January 25, 2009 at 11:15 AM PDT
WilliamDembski[87], Indeed. I also pointed this out in [54]. Minor point: On page 2, there is no need to restrict yourself to finite-dimensional vectors; Tychonoff's Theorem takes care of the infinite-dimensional case, which is relevant if you don't want to fix the number of steps in advance. I still wonder about the relevance to evolutionary biology of searching for a search according to the probability measure induced by the Kantorovich-Wasserstein metric. I'm not trying to annoy you; I think it's a very relevant question. Maybe jerry [93] can give me an answer?
Prof_P.Olofsson
January 25, 2009 at 09:31 AM PDT
gpuccio @86
The problem with many fitness functions in GAs is, IMO, that while they pretend to model NS, they are really introducing active information, and therefore realizing IS.
Do you have any particular fitness functions in mind, other than those like Dawkins' Weasel that explicitly encode their desired end state? One of my favorite GA examples is the evolved antenna. This is a much better example of what GAs can do than the Weasel. In this model, the environment consists of the physical laws regarding antennas and the other antenna designs. Each genome is rated against the others in that environment. The only thing that the simulation itself does is model random mutation followed by selection, where the chance of reproducing is proportional to the fitness in the environment at that time. The only information introduced in this model is the relative fitness of the genome. Although greatly simplified, this is exactly how information from a real world environment is communicated to a population of organisms. I don't see how this requires intelligent selection.
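As a concrete illustration of the selection scheme described here (reproduction probability proportional to fitness), the following is a minimal, hypothetical GA sketch in Python; the bit-string genomes and the toy fitness function are placeholders, not the actual antenna-evolution code:

```python
import random

GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy stand-in for "how well this design performs in the environment".
    # A real antenna GA would simulate gain from the encoded geometry instead.
    return sum(genome) + 1  # +1 keeps every individual selectable

def mutate(genome):
    # Random mutation: each bit flips independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def next_generation(population):
    weights = [fitness(g) for g in population]
    # Selection: chance of reproducing is proportional to fitness.
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    return [mutate(p) for p in parents]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(100):
    population = next_generation(population)

print("best fitness after 100 generations:", max(map(fitness, population)))
```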
If a GA does not do that, we could analyze its results, and see how much it is really modeling something. But we need to know the code in the details for that.
I believe the paper linked from the page I linked to gives enough information. Would you agree, though, that if the GA behaved as I described, it really is modeling RM+NS?
I have found the link about Tierra very interesting. Unfortunately, there is probably not enough detail there to really understand, but my impression is that it could be something nearer to what I think should be done. The limit here seems to be that the original organisms were 80 bytes long, if the article is right, and the derived organisms were either not very different from them (79 bytes), or much shorter (45). The shorter ones were essentially a subset of the original ones, and parasitic on them, because they had to use part of the originals’ code to replicate.
My apologies, the page I linked to wasn't the best. Here is a much better one, that includes links to many publications on Tierra and alternative implementations. The "organisms" are actually quite different from each other.
In other words, my impression is that in general all the viruses were using approximately the same code, with minor variations. That is very interesting, but it would be essentially a simulation of microevolution, in the range of random probabilistic resources applied to an existing code. Obviously, we should know in detail the individual codes to really understand what happened, and why the modified code was sometimes more efficient than the original one. But I don’t think that any really new code has been generated here, and certainly not with the characteristics of CSI.
I'm a little confused by this. In your original post to which I replied, you said:
No. The replicator must survive or not survive for its intrinsic capacity to survive or not survive in the environment. In other words, the variation must increase the true replicating ability of the replicator: it’s not the designer that has to decide who survives according to a predetermined, and searched for, function.
This is exactly what Tierra does. I'm not sure why you're now adding additional constraints about new code and CSI. What is your objection? In any case, the code of the digital organisms is available, and you can even run Tierra on your own machine to see the behavior you are looking for.
I am sure that we can simulate microevolutionary events in a real simulation. That kind of events is possible, and well documented even in biology (see antibiotic resistance). It is the assumption that a cumulation of simple microevolutionary events can bring about new complex functions (let’s say, a completely new code of at least 500 bits, a new algorithm, and so on), and not just reshuffle or slightly modify existing ones, which should be tested in a true simulation. But if you have further details about that, please let me know.
This is where I think there is extensive potential for ID research. That being said, I believe there are GAs out there that do mathematical theorem proving and have come up with new algorithms. I'm afraid I'm getting more, rather than less, confused about your precise objections to GAs.
You ask for further elaboration on this concept “You understand that it is a very severe restraint to what functions can be selected.” What I mean is that if NS can only expand new replicators which have a detectable reproductive advantage, then not all useful functions can apply, indeed only a tiny subset of them. Many complex functions, while potentially interesting and useful in the long term, would never give an immediate reproductive advantage, and could never be selected by NS.
Ah, I see, thank you. That is definitely a restriction. Much as I might like retractable wheels built in to my feet, that is not something that can evolve from our current body plan. This ties in with my previous comment about rich areas for ID research. Where can we get to from where we are? JJ
JayM
January 25, 2009 at 09:14 AM PDT
Mark Frank, I don't want to leave you out. So thank you too for your inane arguments. The more we have people like you and pubdef, the easier it is for us. The two of you are setting standards for what can be used against ID and we appreciate your efforts.jerry
January 25, 2009 at 08:31 AM PDT
Pubdef, Your arguments are fatuous. They seem more like the attempts of a disruptive toddler than of a constructive adult. Comparing a rock in the middle of a stream to an ordered set of molecules -- one which then sets in motion a series of steps that ends up with a completely different ordered set of molecules, which operates in context with several other ordered sets of molecules, each of which has physical properties that are functionally useful for an organism -- is one of the more inane arguments I have ever heard. Keep up the good work, because it is objections to ID such as yours that make our case easier. Remember, we are not trying to convince you, because we long ago knew such attempts were useless, but we are trying to convince those who are seriously trying to understand the issues. So thank you for your efforts. People like you make our job easier.
jerry
January 25, 2009 at 08:22 AM PDT
Gpuccio Re #90. This all depends on what you mean by "code". Code, meaning and symbol are all words with a number of uses in English. An important distinction is between what Grice calls non-natural and natural meaning. They can be associated with an agreement among people to associate one thing with another, for example the letters Au with the metal gold. On the other hand they may depend on some kind of causal relationship other than an arbitrary agreement - for example, dole queues are a symbol of an economic depression. UCU causes the production of Serine. This is because of biochemistry - not some arbitrary agreement. Therefore it falls into the second category of symbol. You write: "The important point is that there is no biochemical reason (law of necessity) why, say, UCU corresponds to Serine. The connection is purely symbolic, and is guaranteed only by the fact that the translation system recognizes the UCU codon and connects it to the amino acid Serine." But that is only to say that UCU causes Serine in the context of the translation system - given the presence of the translation system, there is every necessity that UCU leads to Serine. Drinking alcohol causes road accidents - but only in the context of driving a car.
Mark Frank
January 25, 2009 at 08:12 AM PDT
pubdef: Why do you say that DNA is not a code, and that the concept of a genetic code is only an analogy? That is simply not true. DNA, in its protein coding parts (which, as you probably know, are only 1.5% of the human genome), stores information through a specific symbolic code, which works in the same way as the Morse code you cite. I am not implying here that the word "code" necessarily indicates that it is designed: as you say, that would be assuming the conclusion. I am using the word "code" in a very elementary sense (the same as geneticists have always used it): a symbolic language which bears symbolic information for something else.

The genetic code is made of codons (three consecutive nucleotides). Of all the possible 64 combinations, each has a specific meaning: it corresponds to one of the 20 amino acids, or to a stop signal. The important point is that there is no biochemical reason (law of necessity) why, say, UCU corresponds to Serine. The connection is purely symbolic, and is guaranteed only by the fact that the translation system recognizes the UCU codon and connects it to the amino acid Serine. The connection is realized by the tRNA molecules, which recognize the codon in the mRNA, and (at another site of the molecule) link to the right amino acid and transfer it to the growing protein sequence. So, the connection is purely symbolic: nobody knows why UCU, in particular, corresponds to Serine. UCU is just a word which represents a meaning, the amino acid Serine. The system works only because all of its parts use the same code or language.

And beyond the genetic code, the gene coding sequence is information. The gene coding for the protein myoglobin has a sequence of 154 x 3 = 462 nucleotides which code for the 154-amino-acid sequence of the protein myoglobin. That sequence is unique to that protein, and is the sequence which allows the function of myoglobin. That's what I mean when I say that there is no law of necessity which can output the sequence of myoglobin (or any other functional protein sequence). If you just try to synthesize a random protein sequence from a pool of random amino acids, you can get any possible sequence. Biochemical laws do not privilege any specific sequence, least of all a functional sequence like myoglobin. If you want to synthesize myoglobin, you need to know the primary sequence of myoglobin: you have to know that specific sequence of 154 amino acids. In other words, you need specific information.

You ask: "Where along the continuum of complexity does a physical interaction attain the status of 'information'?" The answer is very simple: when some particular complexity assumes a configuration which can give you a specific useful information, which can allow you to do something which otherwise you could never do.
gpuccio
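To make the codon-to-amino-acid mapping described above concrete, here is a hypothetical Python fragment that treats the standard genetic code as a plain lookup table; only a handful of the 64 entries are shown, the rest are elided:

```python
# A few entries of the standard genetic code: each RNA codon is a three-letter
# "word" naming an amino acid or a stop signal (UCU names Serine, etc.).
GENETIC_CODE = {
    "UCU": "Ser", "UCC": "Ser",   # Serine
    "AUG": "Met",                 # Methionine (also the start codon)
    "UGG": "Trp",                 # Tryptophan
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
    # ... remaining 57 codons omitted
}

def translate(mrna):
    """Read an mRNA string codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE.get(mrna[i:i + 3], "?")
        if residue == "Stop":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("AUGUCUUGGUAA"))  # Met-Ser-Trp
```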
January 25, 2009 at 05:49 AM PDT
Gpuccio No. The replicator must survive or not survive for its intrinsic capacity to survive or not survive in the environment. In other words, the variation must increase the true replicating ability of the replicator: it’s not the designer that has to decide who survives according to a predetermined, and searched for, function. I am really struggling with this. What are you asking for? This is a simulation not the real thing. No simulated life form is going to die or survive unless there is a mechanism in the software for doing that. The programmer must create that mechanism. What does an “intrinsic” capacity to survive mean in this context? What is the “true” replicating ability as opposed to any other replicating ability? It is almost as if you want the environment and the die/survive mechanism to develop through evolution as well as the individuals that live in that environment. Go back to the example of artificial selection. In this case the real world fitness function is the product of a designer. If I breed pigeons I decide which ones survive and on what basis. Suppose I breed pigeons on the basis of speed and the result is a pigeon that has a radically different breast bone structure (I don't design the breast bone structure - in fact I may not even know it exists). Would this not be an impressive demonstration of Darwinian mechanisms in action? But the selection mechanism (speed) is completely designed.Mark Frank
January 24, 2009 at 11:37 PM PDT
gpuccio:
As Upright BiPed has already said, DNA is a support for information: you cannot have pure software, you always have software written on a hardware support. The “physical configuration of molecules that interacts with other physical objects/particles” is only the biochemical structure of the DNA molecule. But the special sequence of nucleotides which, in a symbolic code, encodes the sequence of aminoacids in a specific protein is pure digital information.
I have to admit that you lost me after this point; I don't really understand what "necessity" is in this context, and don't really care at this moment. But I think there's a problem with the portion of your post that I reproduced here. The "special sequence of nucleotides" -- the nucleotides are physical objects, interacting with other physical objects. You, apparently, are asserting that their sequence is a "code." I maintain that the only difference from a rock in a stream is a matter of degree; they are both physical objects interacting with other physical objects. DNA is much more complicated, but how does that constitute "information" in a way that makes it fundamentally different from the rock? Where along the continuum of complexity does a physical interaction attain the status of "information?" I know that geneticists and others in science refer to DNA as a "code," but I see that as a nomenclature that describes its function by analogy. To argue that the genetic "code" is evidence of ID is to assume the conclusion, i.e., that DNA is a product of intelligence -- a "code" like Morse code or computer source code -- when there is no empirical evidence of teleological origin.pubdef
January 24, 2009 at 10:36 PM PDT
I should note that our approach subsumes fitness functions but is considerably more general. Fitness functions alter the probabilities in a search. Our measure of active information focuses on that change in probabilities. But there are other ways to alter probabilities than by introducing a fitness function.William Dembski
January 24, 2009 at 07:35 PM PDT
JayM: thank you for the interesting reflections. A few thoughts: The problem with many fitness functions in GAs is, IMO, that while they pretend to model NS, they are really introducing active information, and therefore realizing IS. If a GA does not do that, we could analyze its results, and see how much it is really modeling something. But we need to know the code in the details for that.

I have found the link about Tierra very interesting. Unfortunately, there is probably not enough detail there to really understand, but my impression is that it could be something nearer to what I think should be done. The limit here seems to be that the original organisms were 80 bytes long, if the article is right, and the derived organisms were either not very different from them (79 bytes), or much shorter (45). The shorter ones were essentially a subset of the original ones, and parasitic on them, because they had to use part of the originals' code to replicate. In other words, my impression is that in general all the viruses were using approximately the same code, with minor variations. That is very interesting, but it would be essentially a simulation of microevolution, in the range of random probabilistic resources applied to an existing code. Obviously, we should know in detail the individual codes to really understand what happened, and why the modified code was sometimes more efficient than the original one. But I don't think that any really new code has been generated here, and certainly not with the characteristics of CSI.

I am sure that we can simulate microevolutionary events in a real simulation. That kind of events is possible, and well documented even in biology (see antibiotic resistance). It is the assumption that a cumulation of simple microevolutionary events can bring about new complex functions (let's say, a completely new code of at least 500 bits, a new algorithm, and so on), and not just reshuffle or slightly modify existing ones, which should be tested in a true simulation. But if you have further details about that, please let me know.

You ask for further elaboration on this concept "You understand that it is a very severe restraint to what functions can be selected." What I mean is that if NS can only expand new replicators which have a detectable reproductive advantage, then not all useful functions can apply, indeed only a tiny subset of them. Many complex functions, while potentially interesting and useful in the long term, would never give an immediate reproductive advantage, and could never be selected by NS. Moreover, a new function must be well integrated into the existing system of replication, before it can translate into a true advantage. Going back to the biological world, you must understand that protein functions, even when searched by protein engineering algorithms, appear in the beginning as very low biochemical affinities, detectable by some sensitive measurement system, but completely useless in the real cell environment. They have to be intelligently selected and amplified, before a true powerful biochemical function can be reached. And still that function would have to be integrated into what already exists, and carefully regulated (the synthesis of the protein started at the right moment, and stopped when it is no longer necessary, the protein concentration regulated at the right level, and so on).

Think, for instance, of protein cascades, where all the components of the cascade must be present for the final result to be obtained, and each protein has to be present in different concentrations, from very low to very high, so that the cascade may amplify the original signal. And the signal must come from the right source, and be translated to the right effector. And still the effect must be strong enough to give a reproductive advantage, before it can be selected. So, what I mean here is that NS, as it is conceived in the real biological world, and at the molecular level, is a very, very poor oracle. It can do very little, probably almost nothing at the level of complexity which is already present even in the simplest autonomous living beings, bacteria and archaea. Because the more an organism is complex, the more difficult it is to obtain an immediate reproductive advantage by a simple step variation. And bacteria and archaea are very complex. So, to assume that NS is responsible for the emergence of all the existing proteomes, where each protein is hundreds of amino acids long, and most proteins are deeply different one from another, and have different functions, and almost none of those functions can be useful by itself, and all that scenario has to be regulated and integrated, not only within a single cell, but among myriads of different cells, in multicellular organisms, and so on, well, to believe that is real folly.

So, please remember that ID has never affirmed that RV cannot generate something useful: the ID assumption is that RV cannot generate something useful and complex. That's why the concept of CSI has two parts: the specification and the complexity. It is the complexity which avoids the false positives due to random variation. But it is the specification which connects the complexity to design.
gpuccio
January 24, 2009 at 06:31 PM PDT
gpuccio[84] Evolution is not concerned with CSI as it is defined as a process that has no specific goal. The term belongs to the ID point of view, not Darwin's. My model is what you asked for, which is a darwinian mechanism. From the ID point of view, it generates CSI, because the observer rejects any information that does not lead to his specific requirements which may be quite complex. From the point of view of the box, there is no predefined specific output, but the viewer sees what he requested, CSI. Voting need not be done by human viewers, it could be another black box looking for a mate.Toronto
January 24, 2009 at 04:09 PM PDT
gpuccio @77 I've been following this discussion with considerable interest. While my personal interests when writing software for my own amusement lean more toward cellular automata, I have implemented a couple of genetic algorithms.
No. The replicator must survive or not survive for its intrinsic capacity to survive or not survive in the environment. In other words, the variation must increase the true replicating ability of the replicator: it’s not the designer that has to decide who survives according to a predetermined, and searched for, function.
First, this has sparked a number of ideas that I probably lack the time to implement fully. Thank you for the mental kickstart. However, I think you might be focusing on the wrong level of abstraction. The fitness function in a genetic algorithm (GA) that is simulating random mutation plus natural selection (RM+NS) is a simplified model of the ability of an individual organism to survive and reproduce in the simulated environment. That is the replicator's "intrinsic capacity to survive or not survive in the environment." That being said, if you want to see a GA where ability to replicate is measured directly, see Thomas Ray's Tierra. It does exactly that.
That’s exactly what is assumed in darwinian theory: the new function must be such that it increases the reproductive ability of the new form.
Yes, and this is exactly what is modeled by many GAs. It is possible to learn about the capabilities and limitations of RM+NS without taking the simulation down to the level of individual atoms.
You understand that it is a very severe restraint to what functions can be selected.
Could you expound on this? I don't see the severe restraint (aside from the limitations of current computing hardware).
So you cannot underestimate this point. Solving the travelling salesman problem in a shorter way does not usually help a software to replicate. It has to be a function inherent to the replication in that environment. Only that kind of function can be selected by NS.
Only that kind of function is selected in the real world of competing organisms. The point of (some) GAs is to show that RM+NS can generate complex, unexpected, and varied solutions to surviving in particular environments. That's a different level of abstraction than simple replication, but the mechanisms being used are the same.
Other kinds of selection are IS.
Again, GAs measure the capabilities and limitations of random mutation plus selection (usually without a known end goal) in various environments. Those environments generally don't mirror the real world, although Tierra reflects a small subset of it. That is immaterial to the discussion, however, because GAs show that the process of random mutation followed by selection does have certain capabilities in a wide variety of environments. My personal view is that GAs could be used to flesh out Dr. Behe's ideas in The Edge of Evolution to provide a better understanding of where the edge lies and what types of problems cannot be solved by RM+NS. I'm a math and software geek, though, so I would think that. I do believe we need significantly more computing horsepower for such research. JJ
JayM
January 24, 2009 at 03:45 PM PDT
Toronto: I think I have answered you #65 in my #74. I cannot see any CSI in your example. I cannot see any function or complexity in the output you describe. And the users are not an environment there, but just conscious observers who project their representations. It has nothing to do with what we are discussing here, but if you want to think differently, you are welcome.gpuccio
January 24, 2009 at 02:51 PM PDT
Patrick[82] My proposal does not require the properties of a fractal generator. The bit stream could be generated as the result of a function applying a bit mask to n bits of PI starting from location X. The goal is still hidden from the programmer and it will change based on the whims of the environment, the end-user. I too would like to see the output, but I would use it to generate short sentences. If for every million chars of output I got 3 or 4 short runs of proper grammar, say 5 to 7 words each, I would place them in an array. After a few thousand runs, I would have thousands of arrays which I could select again via my black box by simply deciding to interpret the output in a different way. An automated author!Toronto
January 24, 2009 at 02:32 PM PDT
Toronto, Fractals have been discussed on UD by gpuccio and myself before. Still, I wouldn't mind seeing your particular proposal carried out, if only to see what it results in from an artistic perspective.Patrick
January 24, 2009 at 01:28 PM PDT
gpuccio[74]
So, as you can see, I insist that to simulate the darwinian mechanism, you have to demonstrate that the results of random variation can generate a true spontaneous advantage in some replicators, and that such spontaneous advantages can cumulate to the point of generating true new complex functions (CSI).
I think my example at [65] satisfies the above. As a "black box", it generates new CSI according to the environment. The environment supplies the fitness functions in the form of people voting. These fitness functions are constantly changing according to the moods and needs of the voters. The output bit stream need not be used strictly as video or audio but left up to the environment, e.g., a group of unknown users could use the output as indexes into a list of possible stock picks for trading purposes. The "black box" never changes, only the CSI as perceived by the user. As far as the use of a fractal generator is concerned, any process could be used to generate the bit stream. The user is as blind to the internal process as the programmer was to the external goal. Only the single bit change in the "DNA" is required after a successful survival.Toronto
January 24, 2009 at 12:46 PM PDT
I think that in general I agree with what you say, but I really am not comfortable with the whole concept of fitness function. It looks extremely artificial to me.
I'd say it IS artificial, since fitness functions in GAs are typically not dynamically morphing but are instead statically and uniformly defined at the outset. Although it's possible for the programmer to step in and tweak the function once a plateau is reached. But my point was to try and limit the usage of fitness functions to be more realistic. The problem is that it makes writing the program much more difficult since a large variety of fitness functions will need to be accounted for. Essentially, to make the project feasible you'd need the software to dynamically adjust the fitness functions realistically at runtime without constant programmer intervention. Now I have heard of this being done but only in reference to AI research. In the late 90s a friend of mine wrote a basic AI that then self-modified via a dynamically adjusted GA and other forms of input. Supposedly the resulting AI was pretty smart even on the limited hardware of that time. Unfortunately, the project got axed when the investor died in the World Trade Center. And I'd also agree that I've never heard of a simulation that models all the biological constraints you mention.Patrick
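A minimal sketch of the kind of runtime-adjusted fitness function described here (hypothetical; the drifting target pattern and the schedule of change are arbitrary choices made only to show the mechanism):

```python
import random

GENOME_LEN = 32
SHIFT_EVERY = 25  # generations between changes in the environment

def environment(generation):
    # The "environment" is a target bit pattern that drifts every SHIFT_EVERY
    # generations, so the fitness landscape is not fixed at the outset.
    rng = random.Random(generation // SHIFT_EVERY)
    return [rng.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome, env):
    # What counted as fit 25 generations ago may no longer count as fit now.
    return sum(1 for g, e in zip(genome, env) if g == e)

def step(population, generation):
    env = environment(generation)
    ranked = sorted(population, key=lambda g: fitness(g, env), reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [[b ^ 1 if random.random() < 0.02 else b for b in p]
                for p in survivors]
    return survivors + children

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(40)]
for gen in range(200):
    population = step(population, gen)
```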
January 24, 2009 at 10:50 AM PDT
Patrick: I think that in general I agree with what you say, but I really am not comfortable with the whole concept of fitness function. It looks extremely artificial to me. Moreover, I am always thinking in terms of molecular biology. Too many discourses remain generic and useless because they are not brought to the essential level of molecular biology. That's also what Behe always tries to remind us of.

What I wonder is: how can a specific change in a protein bring about a reproductive advantage in the general case? Obviously, we have very extreme (and artificial) cases where a small change can bring a great survival advantage: antibiotic resistance (of the simpler form) is a good example, and it is no accident that it is almost the only example. And, as Behe correctly points out, it is a case of lucky disease, of loss of information which becomes useful due to exceptional circumstances. But in general, how can a simple mutation in one protein generate a phenotypic change that is good enough to give a true reproductive advantage and to expand? Again, I mean beyond the few cases of microevolution, which always deal with very small changes and adaptations within the same island of functionality. The transition from one protein to a different one is a virtual impossibility in almost all cases. There are no functional intermediates when you have to change completely the primary sequence, and the folding, and the active site. And beyond that, in almost all cases, how could the appearance of a new protein with some elementary biochemical activity be useful in an integrated and complex cellular system? We know very well that you don't need just a new protein, but a correct regulation of its transcription, translation and post-translational events, and a series of finely tuned interactions of that protein with a lot of other proteins and cascades in the cell, before you get functionality. In other words, most cellular functions are IC from the beginning. That's why I have always found the focus on the flagellum very strange, just because Behe used that model in his book. Almost everything is IC in a cell!

I would like to repeat here that the focus of darwinists on duplicated genes as the basis for evolution is rather symptomatic. We should remember that a duplicated gene is the only way to work at developing a new functional protein without losing the original function. Indeed, that's what any programmer does when he wants to work at some part of the code and change it: he copies it, and works on the copy, so as not to destroy the original. But, apart from the lucky circumstances of having the right genes in copy for our evolutionary experiments, we should remember that applying variation to a non-functional copy of a gene has an unpleasant consequence which is often underestimated: negative selection can no longer control what is happening. In other words, if the duplicated gene is no longer transcribed and functional, no negative selection can eliminate the bad mutations which compromise the original function. So, all the original useful information will be quickly lost, and unless and until a new functional configuration arises, no positive selection can apply. In other words, mutations in a non-functional gene become neutral, and we are in the full ocean of non-functional possibilities, from which probably nothing has ever emerged.
gpuccio
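A crude, hypothetical simulation of the point about silenced duplicates: one copy is held under purifying selection (modeled here as a simple identity cutoff), while the other is invisible to selection, so its resemblance to the original decays. The mutation rate, cutoff, and sequence length are arbitrary:

```python
import random

BASES = "ACGT"
GENE = "".join(random.choice(BASES) for _ in range(300))
MUTATION_RATE = 0.001  # per base, per generation (arbitrary)

def mutate(seq):
    return "".join(random.choice(BASES) if random.random() < MUTATION_RATE else b
                   for b in seq)

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

functional, duplicate = GENE, GENE
for _ in range(5000):
    # Functional copy: mutants falling below 98% identity to the original are
    # treated as non-viable, so the previous sequence is kept instead.
    candidate = mutate(functional)
    if identity(candidate, GENE) >= 0.98:
        functional = candidate
    # Silenced duplicate: no selection, so changes accumulate freely.
    duplicate = mutate(duplicate)

print("functional copy identity:", round(identity(functional, GENE), 3))
print("silenced duplicate identity:", round(identity(duplicate, GENE), 3))
```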
January 24, 2009 at 10:22 AM PDT
I think I'll attempt to summarize in one sentence: The fitness function must not contain a long term goal that is not applicable to short term goals, and these goals are very general in scope applying only to competitiveness against other replicators and not "functionality for function's sake".Patrick
January 24, 2009 at 10:03 AM PDT
Mark: "Would this satisfy you?" No. The replicator must survive or not survive for its intrinsic capacity to survive or not survive in the environment. In other words, the variation must increase the true replicating ability of the replicator: it's not the designer that has to decide who survives according to a predetermined, and searched for, function. That's exactly what is assumed in darwinian theory: the new function must be such that it increases the reproductive ability of the new form. You understand that it is a very severe restraint to what functions can be selected. So you cannot underestimate this point. Solving the travelling salesman problem in a shorter way does not usually help a software to replicate. It has to be a function inherent to the replication in that environment. Only that kind of function can be selected by NS. Other kinds of selection are IS. That is the weakest point in the concept of NS, and you cannot get rid of it so simply.

Moreover, you are underestimating the importance of the metrics. A measurement is a very sensitive metric, where you can put the threshold as low as you want or can. In NS, that is completely different: the threshold of measurement of a function is necessarily very high, because that function must provide some real reproductive advantage. Only a few functions can do that, and only at really functional levels of activity. That is a big, big problem for NS.

Finally, you are underestimating the difference between the two kinds of selection: negative and positive. Negative selection has the role of eliminating failures (by far the most likely results, with RV). But positive selection must expand the mutated individual so that it comes to represent the whole population, or most of it. For a mutated individual to expand, it has to acquire a true reproductive advantage vs. the previous form, because the previous form is still perfectly functional, and the single mutated individual must compete with all the others so that they are suppressed, and it expands. That is no simple result to be obtained. It is no accident that the best examples of "positive" selection are those of antibiotic resistance, where a single and very powerful artificial aggressive agent can suppress the normal population and let only the lucky carriers of a mutation survive. But that is certainly not the general scenario for all supposed cases of NS.

So, as you can see, I insist that to simulate the darwinian mechanism, you have to demonstrate that the results of random variation can generate a true spontaneous advantage in some replicators, and that such spontaneous advantages can cumulate to the point of generating true new complex functions (CSI). The necessity of overcoming the barrier of better survival is the biggest obstacle for the darwinian mechanism. Such a barrier, in a complex replicator, can be overcome only by complex functions and adjustments, which cannot derive from simple random variations. And, if the darwinian model is true, why shouldn't it work in a digital environment? There are infinite ways in which a replicating software can profit from the digital environment where it runs: better programs can occupy better spaces of the digital environment, compete for the computing resources, or directly attack competitors. Computer viruses do that. But I suppose that they are usually designed. Why cannot "better" computer viruses, or anything like that, come out of random digital noise in a digital environment? Why cannot we simulate that?

The truth is that we all know that such a simulation would never work. Because the model it is based on has never worked, and never will work. In the same way that Plasmodium falciparum has never acquired the capacity to survive at certain temperatures, or to survive in carriers of sickle cell anemia. You just don't acquire that kind of function that way. By the way, do you have an example of a GA solving the travelling salesman problem in a shorter way (and, possibly, with a new and different algorithm, which would be similar to generating a new protein)? I would be interested in that.
gpuccio
January 24, 2009 at 09:52 AM PDT
1. The active information in a fitness function acts as a long term "funnel". This funnel can contain an explicit target, as with the weasel example. As gpuccio said, "It is obvious that GAs, being intelligently designed problems, can find solutions which the designer did not know in advance." This is because the funnel contains a specific target that is general in scope and is applied over the long term regardless of short term considerations. For example, suppose you are looking for the best shape for an antenna; these could be several different fitness functions. I'm just guessing, but I'm presuming the landscape for an antenna GA should be very smooth in the best case scenarios. This example starts with an antenna that gets 1 dBi.
a) tested against an explicit set of shapes
b) a specific test by sampling signals, where any increase, however small, in forward gain is rewarded
c) the more complex the shape the better, but everything is rejected except those shapes that have a gain 1 dBi higher than the previous generation (as in, the steps in the pathway are 1 dBi incremental jumps)
d) the more complex the shape the better, but everything is rejected except those shapes that have a gain of 12 dBi or higher
e) complexity and length are rewarded; each generation is checked for a better antenna, but is not rejected
(a) should find the target very fast, with performance getting worse and worse with each version. I'd be surprised if (d) gets anything. And as a funnel (e) is so overly generalized that it may never find anything useful, although you'd probably end up with a monster of an antenna.
2. The main issue is this:
The environment must be totally blind to the replicator, in other words in no way it must be designed to favor some specific solutions in the replicator. It can, obviously, favor whatever solution arises “naturally”, as a consequence of necessity deriving from its internal rules and structure. But no “fitness function” must be purposefully written in the environment by the designer. The only “fitness function” will be a description of how the environment works, or is structured, or changes, exactly as it should happen in darwinian theory.
If I may re-interpret you:
a) fitness functions in nature are not static, and will often not apply uniformly over many generations. Some fitness functions may not exist at all for certain problems, or they're so overly generalized that the search is not properly funneled.
b) neutral mutations are allowed, but no fitness function can target specific configurations of them or guide them toward long term goals. If a series of neutral mutations manages to hit upon a configuration without any guidance, some fitness functions will activate at that point. The weasel program is a good example, since many intermediates do not have a function (comprise no English word) in the short term yet they're selected for anyway.
c) most fitness functions must be limited to a short term target achievable in single step pathways. A new set of fitness functions will be generated based upon the new functional configuration of the replicator.
d) in order to be realistic, the fitness function must not contain a long term target that is specific in scope yet cannot currently affect the organism/creature in the short term. For example, you cannot have a fitness function targeting the long term functionality of the flagellum, but you could have hypothetical fitness functions for intermediates.
e) in nature there are examples where deleterious/destructive mutations may provide benefit in limited environments. There must be no preference for fitness functions that imply both benefit and constructiveness.
f) some overly generalized long term fitness functions can be used, but there must be a maximum ceiling balanced against other considerations. For example, for a hunter a generalized fitness function could be "an increase in speed in order to catch my prey". Problem is, this must be balanced against things like an extremely high metabolism and increasing speed at the expense of the strength required to take down the prey.
Patrick
January 24, 2009 at 09:50 AM PDT
Gpuccio You have written a lot. But I think it will be enough to concentrate on this bit: "And it is not only 'the problem' which 'stems from the simulator's brain', but especially the process of solution, even if the solution itself is not known in advance. And the process of solution is an intelligent creation, a fruit of design, and it incorporates not only the active information about the problem, but also the intelligent elaboration of the designer, his intuitions, his patient work, his purpose, his general knowledge and view of the world, and who knows how many other things."
The process of solution of a GA has three parts:
(a) The initial conditions
(b) The variation mechanism
(c) The elimination mechanism
I am going to leave aside (a) and (b). I am sure there are GAs where these are done at random with no attempt to tune them to the end result. The real issue is (c). You use grand phrases about the designers' work, intuition and intelligence, but in the end this work is going to manifest itself as an elimination mechanism – a set of criteria for deciding who will survive and who will not. It doesn't matter a toss whether the designer laboured on it for 20 years with the genius of Leonardo da Vinci or stumbled on it while doodling and decided to give it a go. We need to understand what it is about a GA elimination process that makes a GA an unacceptable simulation of NS. It isn't the amount of work that was done beforehand! You hint at one difference when you write in the context of protein design: "I recognize any approach to my function, by my capacity to define and measure it, and not by a spontaneous advantage which the function implies." In other words the designer has worked out that the fitness function = selection criteria = measurement system will lead to the required function. But note that it is the measurement that decides what survives and thus plays the role of natural selection. It looks like your concern is that the measurement leads to an end objective over and above satisfying the measurement, while NS does not. But not all GAs work that way. Some simply have a fitness function. Suppose for example you use a GA to solve the travelling salesman problem. Then the fitness function is simply "the shorter solution survives" – the end objective is the same as the fitness function.
To summarise, suppose we have a GA with the following properties:
(a) Initial states are selected randomly within the domain space
(b) Variation is not in any way related to the fitness function or any external objective
(c) Survivors are selected in a manner chosen by the programmer on a whim and with no other end than to generate survivors that do well on the measurement used
Would this satisfy you?
Mark Frank
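A minimal, hypothetical sketch of the travelling-salesman GA described above: (a) random initial tours, (b) blind swap mutations with no reference to the objective, (c) survival decided purely by the measured tour length. The city coordinates are arbitrary placeholders:

```python
import math
import random

CITIES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]

def tour_length(tour):
    # (c) The "measurement": total length of the closed tour.
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    # (b) Variation: swap two cities at random, unrelated to the objective.
    i, j = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

# (a) Initial population: random permutations of the city indices.
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(60)]

for _ in range(500):
    population += [mutate(t) for t in population]
    # Shorter tours survive; nothing else about the end objective is encoded.
    population = sorted(population, key=tour_length)[:60]

print("shortest tour found:", round(tour_length(population[0]), 1))
```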
January 24, 2009 at 08:28 AM PDT
Toronto (#65): Your example, as I understand it, is interesting, but absolutely not pertinent to our discussion. First of all, a fractal generator is a necessity algorithm with some random seed. It is an example of "organization" based on necessity, but not of "functional information". Your fractals perform no function. Second, as you say: "Lifeforms die based on voter input. If too many visitors on the website don't like the song or picture, it's removed from the environment. If an output gets a lot of thumbs up, it survives and modifies a bit." And so on. In other words, the results are selected by intelligent conscious people, according to their intelligent and conscious sensibility. That has nothing to do with NS, and has nothing to do with function. It is just a collective Rorschach test where people select what they like (or recognize) more. In a sense, it is an interesting variation of the weasel example (and I mean Shakespeare, not Dawkins). "In the US, enough generations might produce a picture of an eagle, while in Italy, you might end up with something that sounds like opera." That's exactly my point. The audience is modeling the content according to its projected representations. In a way, it's a form of (artistic) design. If I have missed something of what you mean, please let me know.
gpuccio
January 24, 2009 at 07:36 AM PDT
Mark: So, let's try to draw some conclusions. You say: "If so, surely this is a significant piece of evidence in favour of the Darwinian process? All that remains is to show that it is possible for RM to generate complex and unanticipated solutions to problems which stem from the natural world rather than the simulators' brains." Here is where your reasoning is not correct, in the light of what we have said before. The fact that highly sophisticated pieces of engineering can find solutions to specific problems, even by intelligently using random search, is in no way evidence in favour of the darwinian process. And it is not only "the problem" which "stems from the simulator's brain", but especially the process of solution, even if the solution itself is not known in advance. And the process of solution is an intelligent creation, a fruit of design, and it incorporates not only the active information about the problem, but also the intelligent elaboration of the designer, his intuitions, his patient work, his purpose, his general knowledge and view of the world, and who knows how many other things. How could all that "stem from the natural world"? The natural world is as it is. Nothing stems from it, not problems, not processes of solution, not solutions. The best we can say is that, from the interaction between the replicator (which, at least, is a functional entity) and the natural world (which is only a passive scenario), the functions inherent in the replicator can be favoured, or suppressed, according to how they fare in that scenario. And that's exactly what I had requested in my proposed simulation: a passive scenario (although an organized one, with its laws and structures), a functional replicator, and random variation. That's all that is needed. In other words, the simulator must put his intelligence in building that scenario, but not in solving it.

"While it may be difficult to create such problems in a simulation it is not clear why they differ in principle." I hope I have explained why they differ in principle.

"What are the key differences between a designed fitness function and a 'blind' one such as NS? They both eliminate some individuals in a systematic way." Indeed, the fitness function is an ambiguous abstraction. Let's say that it is the replicator itself which survives or does not survive, according to the resources it has (in NS); while it survives or does not survive according to the planned expectations of the programmer (in IS). That is a lot of difference. The "fitness function" of the natural world, whatever it may be, incorporates no design and no intelligence. The fitness function in an algorithm is a well defined product of design, and incorporates a lot.

"Imagine someone did discover (a la Douglas Adams) that the earth was in fact a giant machine designed to generate complex life forms through RM and a complex environment. Would this suddenly invalidate the Darwinian logic?" Well, maybe I was wrong and you really are a theistic evolutionist in disguise! You had fooled me with all that false reasoning about a materialistic view of the world. :-) Really, I don't want to discuss here the inconsistencies of TEs. But if you are just implying that the earth is a giant machine designed to produce life by ETs, then I can answer: no, it isn't. If it were, we would see that. We would see the active information somewhere. Now, I am not saying that the earth is not tailored to favour life: I do believe it is. If it were not, life would simply be impossible.

In a sense, the whole universe is tailored to favour life (see the fine tuning argument). But that is the most I can concede to TEs. That "favouring" does not include the generation of specific protein sequences, or of all the other information we observe in the biological world. That's why the biological world is so different from the non-living world. For it, another level of design is necessary. Finally, I can admit that biological information could be generated by pure random variation and Intelligent Selection, like in GAs. But that is exactly an ID scenario (although not my favourite one).
gpuccio
January 24, 2009 at 07:24 AM PDT
Mark: Your objections allow me to go into further detail about very important aspects of the question. First of all, I agree with you not to get tied into semantics about the word "process". I think we agree about what NS is in the darwinian theory, and that's the important thing. So, let's go on. You say: "It appears that for you it is a key issue that the fitness function in a GA is designed, while the fitness function in the real world is not. This is the big thing that invalidates GAs as simulations of the Darwinian process as far as you are concerned. Correct?" Yes, it's perfectly correct. But I would like to add immediately that it is not only a question of fromal difference: it is a question of utter substance, because in GAs the fitness function (and the algorithm which uses it) are designed in a way which incorporates a lot of "active information" (or, if we don't want to get tied into semantics, of useful knowledge) about the problem to be solved, plus a lot of intelligent planning about how to best solve it. It is not a small difference. More about that later. "Now do you accept that some GAs do generate complex solutions to designed fitness functions? So they establish that RM can generate complex and unanticipated solutions – even if the problem they address is anticipated?" Here the matter becomes more tricky. Let's try to analyze it better. GAs, like any intelligent software, can certainly generate solutions. And the solution is by definition unanticipated, while the problem is certainly anticipated. IMO, what differentiates GAs from other softwre is that GAs use a random search as part of the algorithm. That's why I have always compared GAs to other engineering processes which do the same, like protein engineering and antibody maturation. Now, to understand the question better, let's take an example which uses only necessity: a software program which can calculate succesive digits of pi. Now, let's pretend you are calculating the nth digit of pi: the solution is unanticipated (we don't know it in advance), but the problem is anticipated (we have to know the right algorithm which can correctly calculate that digit). The same algorithm can calculate many digits of pi (I suppose...). So, we have here an example of a program based on pure necessity (a mathemathical formula) which can give us a solution which becomes ever more complex with each new digit. But please, take notice of two important things: a) the algorithm is of pure necessity, and does not imply any random search. b) even if the complexity of the solution can increase, the specification remains the same, and is linked to the mathematical definition of pi. No new specification is ever created by the program. Now, let's go to intelligent algorithms which incorporate some random search as part of the process. Apart from examples which are pure propaganda, like Dawkins' weasel, where the program could as well write down the solution ibstead of looking for it by random search, that kind og algorithms, like those used in protein engineering, have a definite reason to exist: in that case, again, the designer knows the problem but not the solution, but he knows no algorithm based on necessity to reach the solution. In other words, he cannot calculate the solution. That's typically our situation with protein functions: we may know what function we are searching, but we have no idea of which aminoacid sequence can implement it. 
That's why, instead of a process of calculation based on necessity, we can adopt a process of trial and error, based on limited random search and some form of intelligent selection, usually a very sensitive measurement of the desired function after each step of limited random variation. Such a method, if correctly designed, works. It is not easy, it is not quick, but it works. We know that. But it is important to understand why it works. We have a lot of intelligent programming and active information here, at different steps:

a) First of all, there is usually a careful selection of the sequences we start from. That can be very important, and is based on what we understand of protein function.

b) The random variation step is usually tailored as much as possible. For instance, in antibody maturation, it is applied only to the part of the sequence which has to be "improved", and not to the whole protein-coding gene. In other words, it is random, but "targeted" variation.

c) The engineer never asks random variation to do what it cannot do. In other words, the process of variation has the role of achieving as much variation as statistical laws allow with the available resources, and no more. That's why each step of limited variation has to be followed by a step of necessity (measurement and selection).

d) Finally, and most important, the selection is intelligent selection, and not natural selection. That does not mean only that it is implemented by an intelligent engineer, but also, and especially, that it is completely based on our intelligent understanding of the problem: here is where most of the active information slips in. The difference is fundamental. I decide what function I am looking for: I am not expecting "any possible useful function", but a specific solution to a specific problem. I "measure" it, even at levels which could have no significant relevance in the general context of the environment. In other words, I recognize any approach to my function by my capacity to define and measure it, and not by a spontaneous advantage which the function implies. My selection is therefore artificial, intelligent, and guided by a lot of active information about the result.

More in the next post.gpuccio
January 24, 2009 at 06:45 AM PDT
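The "pure necessity" example gpuccio describes above (a program that emits successive digits of pi with no random search) can be made concrete. Below is a minimal sketch, assuming Python; it uses Gibbons' unbounded spigot algorithm, and the function name and the choice of 20 digits are arbitrary for the illustration.

    from itertools import islice

    def pi_digits():
        # Necessity only: a fixed arithmetic rule (Gibbons' unbounded spigot),
        # with no random search anywhere. Each new digit adds complexity to the
        # output, but the specification (the definition of pi) never changes.
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n  # the next decimal digit of pi is now fixed
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    print("".join(str(d) for d in islice(pi_digits(), 20)))  # 31415926535897932384

The point of the example carries over directly: the program itself is anticipated, each individual digit is not, and running it longer never creates a new specification.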
2- Whether or not “evolution” is goal-oriented is being debated.
Toronto: gpuccio @[48] seems to agree that the process of evolution has no goal.
Yes, "evolution" as it is currently accepted and applied.
ROb: Correct me if I’m wrong, but ID scenarios seem to imply the latter. That is, the designer was not using evolution to find a solution to a problem, but rather to instantiate an already-known solution.
ID doesn't say. All ID says is that there are some things in the universe (and perhaps the universe itself) which show signs of being intelligently designed. As for the implementation, that would be another question, one that can be pursued after design is determined.Joseph
January 24, 2009 at 05:33 AM PDT
Gpuccio: "The real world, in darwinian thought, does not 'include' any 'process' for selection. NS is just a consequence of how the real world is, and of how a replicator works. It is not a 'process'. It is a blind effect, due to blind laws of necessity."

First a comment on terminology: a process does not have to be designed. Stalactites grow through a process; stars are born, shine and then die through a process. These are both blind. They just happen. But let's not get tied into semantics.

It appears that for you it is a key issue that the fitness function in a GA is designed, while the fitness function in the real world is not. This is the big thing that invalidates GAs as simulations of the Darwinian process as far as you are concerned. Correct?

Now do you accept that some GAs do generate complex solutions to designed fitness functions? So they establish that RM can generate complex and unanticipated solutions – even if the problem they address is anticipated? If so, surely this is a significant piece of evidence in favour of the Darwinian process?

All that remains is to show that it is possible for RM to generate complex and unanticipated solutions to problems which stem from the natural world rather than the simulators' brains. While it may be difficult to create such problems in a simulation, it is not clear why they differ in principle. They may be more complex, but they don't present any new problems in principle.

What are the key differences between a designed fitness function and a "blind" one such as NS? They both eliminate some individuals in a systematic way. Why should a process that can solve one type of problem not be able to solve the other?

Imagine someone did discover (a la Douglas Adams) that the earth was in fact a giant machine designed to generate complex life forms through RM and a complex environment. Would this suddenly invalidate the Darwinian logic?Mark Frank
January 24, 2009 at 04:56 AM PDT
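Mark Frank's question, whether a GA with a designed fitness function can generate a solution by random mutation alone, can be illustrated with a minimal sketch, assuming Python. The population size, mutation rate, generation count and the "all ones" problem are arbitrary illustrative choices, not anything taken from the thread.

    import random

    GENOME_LEN = 64                                   # illustrative bit-string length
    POP_SIZE, MUT_RATE, GENERATIONS = 50, 0.02, 300   # illustrative settings

    def fitness(bits):
        # Designed fitness function: the score is simply the count of 1s.
        # The winning genome (all ones) is never written into the variation step.
        return sum(bits)

    def mutate(bits):
        # Random mutation: each bit flips independently of the selection criteria.
        return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the better-scoring half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print(fitness(max(population, key=fitness)), "of", GENOME_LEN)

Whether the scoring step here counts as a fair stand-in for natural selection, or as the intelligent selection gpuccio describes, is precisely the point in dispute between the two commenters.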
Mark: "Any simulation of RM+NS has to deal with both RM and NS. RM is random in the sense that the mutation is independent of the selection criteria. But NS, which is simulated by the fitness function, is far from random."

Well, that's exactly my point. Existing GAs certainly deal with RM, but they absolutely don't deal with NS. All of them deal with Intelligent Selection (IS).

Let's speak a little about the famous "fitness function". In reality, no fitness function exists. Functions are just our intelligent creations. The problem is, the fitness functions created in GAs don't represent in any way what is assumed in darwinian theory. And that's not only because biological reality is different from digital reality: that is a basic problem of all simulations, both GAs and the one I am proposing. That is certainly a limit, but it is not my point. My point is that the fitness function in GAs is an intelligent function, devised exactly to obtain, in some more or less indirect way, what the designer of the simulation wants to obtain. That's only design in a cheap tuxedo.

In darwinian theory, NS is only a blind effect which derives from the interaction of two different realities: the environment, or landscape, or whatever you want to call it, and the functional reality which we call the replicator. Two points are essential to define some process as NS:

a) The environment must be totally blind to the replicator; in other words, in no way must it be designed to favor some specific solutions in the replicator. It can, obviously, favor whatever solution arises "naturally", as a consequence of necessity deriving from its internal rules and structure. But no "fitness function" must be purposefully written into the environment by the designer. The only "fitness function" will be a description of how the environment works, or is structured, or changes, exactly as it should happen in darwinian theory.

b) The replicator can be as functional and as complex as we want: in my simulation, it represents the result of OOL (which we are not simulating here). The only requirement is that nothing must be frontloaded into the replicator to "guide" or help the future variations. In other words, any future variation must be truly random (and, as I have said, the variation mechanism can apply any statistical distribution you like, uniform or not, provided that no information is inputted about some specific functional solution).

That is the concept of NS as it is outlined in darwinian theory. That's what we have to simulate. Anything which does not have those two properties is neither NS nor a simulation of it. It is just some form, more or less in disguise, of IS.

"Nevertheless it is not random."

I have never said that NS is random. It is a process of necessity. But it has to satisfy the two criteria I have detailed, otherwise it is not NS. In other words, NS is a "blind" (not "random") process of necessity.

"Then the mechanism of RM+NS would lead to species with ever greater P/W ratios."

In that simulated world, you would only obtain (if you are lucky) the same replicators, with slightly greater P/W ratios (as far as that is possible without damaging the existing functions, and by means of simple random variations, which would never approach the level of CSI). In other words, you attain (if you are lucky) exactly what you specified in your intelligent fitness function. If you specify greater P/W ratios, you can obtain that and nothing else.
If you specify flying objects (whatever that may mean in a digital environment), you can obtain that and nothing else. If you don't specify anything, you obtain nothing. That's exactly, IMO, the intuitive meaning of the concept of active information. And specification, as you well know, is the first requirement for CSI and design.

"But, just like the real world, any simulation must include a process for selection based on something."

The real world, in darwinian thought, does not "include" any "process" for selection. NS is just a consequence of how the real world is, and of how a replicator works. It is not a "process". It is a blind effect, due to blind laws of necessity. Maybe a theistic evolutionist could argue that the environment is designed to produce life, but that's another story. I believe, to your merit, that you are not a theistic evolutionist.gpuccio
January 24, 2009 at 03:54 AM PDT
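The weasel-style scheme gpuccio objects to, in which the target is written directly into the fitness function, can be sketched as follows (assuming Python; the phrase is Dawkins' original, and the copy count and mutation rate are illustrative settings only):

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"    # the specification, supplied by the programmer
    ALPHABET = string.ascii_uppercase + " "
    COPIES, MUT_RATE = 100, 0.05               # illustrative settings

    def score(phrase):
        # Fitness = number of characters already matching the designed target.
        return sum(a == b for a, b in zip(phrase, TARGET))

    def mutate(phrase):
        return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                       for c in phrase)

    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while current != TARGET:
        generation += 1
        # Cumulative selection: keep the best of the parent and its mutated copies.
        current = max([current] + [mutate(current) for _ in range(COPIES)], key=score)
    print(generation, current)

Because the scoring compares every candidate to TARGET character by character, the search can only converge on the phrase specified in advance; as noted in the comments above, the program could just as well have printed the target outright.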