Uncommon Descent Serving The Intelligent Design Community

Dave Thomas says, “Cordova’s algorithm is remarkable”



Dave Thomas is in a bit of a tizzy over my humble offering: Tautologies and Theatrics (part 2): Dave Thomas’ Panda Food. He responds at Panda's Thumb with: Calling ID's Bluff, Calling ID's Bluff. I thought I'd alert the readers at UD to the horrible things I'm accused of, namely that I might be some sort of vile scoundrel. 🙂

[Dave writes:]

Imagine my surprise, then, when I found Salvador Cordova at Uncommon Descent spewing blatant falsehoods about this work. I was shocked – shocked, I say – to catch the UD Software Engineers in a lie. And quite a lie it is – with the help of mathematicians like Carl Gauss, I’m going to lift the veil from the obfuscations of IDers, and prove it’s a Lie, much as you would prove a mathematical theorem.
….
which the brilliant Gauss found useful

Thomas then correctly identified the formula I implemented via a Genetic Algorithm: Gauss's formula for the sum of the first n integers, which for n = 1000 gives the 500500 my program outputs. He then declares:

[Dave writes:]

The Software Engineering Team of Uncommon Descent has been caught lying – Q.E.D.

Where did I ever claim there wasn’t smoke and mirrors involved in the gimmickery here? Fer cryin’ out loud, my post was talking about mathematical theatrics, and I presented that program as an example of gimmickery! I even alerted the reader with these words before presenting my program, “The following [are] computational theatrics”. Sheesh!

[Dave writes:]

As an exercise in Smoke and Mirrors, Cordova’s algorithm is remarkable

Well dog gone, he actually says something nice about my work. It’s REMARKABLE! 🙂

I take full pride in the smoke and mirrors I used; I never pretended otherwise. In contrast, Thomas refuses to admit he's also using smoke and mirrors in his GA. He pretends his Steiner-solving program should somehow persuade us that mindless, undesigned forces can hit specified targets.

Well, did he have some chimpanzee create the fitness functions in his software for him? Without intelligent design on his part, his fitness functions will fail to guide the system to the intended target. He has thus snuck the answer into his GA, much the same way I snuck the answer into my GA. At least I alert the readers to where the trickery is, but Thomas would rather have his faithful congregation at Panda's Thumb believe that mindless evolution can truly work magic.

As Haeckel said:

Evolution is henceforth the magic word by which we shall solve all the riddles that surround us.
Ernst Haeckel

Instead of “abracadabra” the Pandas say, “evolution”. Ramen.

Comments
[…] Hoppe admits irreducible complexity can be evolved by random mutation and designed selection (RMDS). But this says nothing about random mutation and natural selection (RMNS). Examples of Designed Selection (DS) to achieve desired intentional goals are Dawkins Weasel, Avida, Ev, Steiner, Geometric, Digital Ears and Cordova’s remarkable evolutionary algorithm. […] Hoppe blows a gasket over Ewert’s paper, forgets to mention Avida predicts zombie apocalypse and solves OOL | Intelligent Design and Creation Science
[…] models the Darwinian view are genetic algorithms like Avida, Tierra, Ev, Steiner, Geometric and Cordova’s remarkable algorithm. Winston Ewert discusses these […] Neutral theory and non-Darwinian evolution for newbies, Part 1 | Uncommon Descent
[…] models the Darwinian view are genetic algorithms like Avida, Tierra, Ev, Steiner, Geometric and Cordova’s remarkable algorithm. Winston Ewert discusses these […] Neutral theory and non-Darwinian evolution for newbies, Part 1 | Intelligent Design and Creation Science
"I think the manufacturers might be surprised by your claim to have contributed to the design." Yeah, if I accepted the disk drive as presented. But if instead the manufacturer had no clue what to do to improve the disk drive other than from my selection of random variants of it, then I think I'd have every right to claim to have designed the improvements to it. (Or the entire disk drive if this process started from scratch.) "So if selection algorithms are part of the design they have the weird propery that the design takes place after the solution has been produced." No they don't. They have the property that design takes place after slight improvement has been produced. Again and again and again. The solution is produced only after much design has occurred. j
j You are right - I apologise. I forgot what I wrote. So I need to adjust the argument a bit. Suppose you go on to Google and you choose the fastest widget. Did you therefore contribute to the design of that widget? To make it concrete, suppose you wanted an external disk drive and you selected the one with the fastest access time subject to a minimum capacity. I think the manufacturers might be surprised by your claim to have contributed to the design. You may object that the disk drives were created before the selection process. But that is exactly what happens with Dave's selection algorithm (or any selection algorithm). It selects from what already exists. It is only the repeated application of the algorithm that hides this rather obvious point. So if selection algorithms are part of the design they have the weird property that the design takes place after the solution has been produced. Mark Frank
[This is why I said I plan to move on.] You: "...but the organiser is doing no more than the 'facilitator' of the maths competition..." I don't agree. Because you changed the rules midstream. Back at (70) you said: "I want to find a solution to a difficult mathematical problem. So I organise a competition and offer a prize to the first person to solve it." (boldface added) That is why I said that the organizer did not design anything. The organizer did not influence the design, s/he accepted it as presented. I make myself more likely to find a [Widget To Accomplish Some Task] if I take a trip to Sears, or use Google. That doesn't mean I designed the [WTAST]. I may not know whether the [WTAST] even exists ahead of time. But if I'm willing to expend some effort, I can increase the likelihood of finding it -- I can facilitate its finding. But then at (73), you made a crucial change to the role of the organizer: "The organiser will then subject these solutions to Dave’s selection algorithm (i.e. select the ones with shortest path length) and hand them back to the children to try to find improvements..." You added his/her selectivity to the process of achieving the end product. That is why I said that the organizer is a co-designer. And I maintain that Dave Thomas can be considered the designer (not special creator) of the Steiner solutions. It is only because of the teleology that he imparted to the system that the solutions were obtained. j
j. I am sorry you are bored with this topic. To my mind we got to an interesting point. It gives me an opportunity to have the last word - but please understand this is purely in a spirit of intellectual enquiry. In my proposed example with children you cleverly use the word "co-design" but the organiser is doing no more than the "facilitator" of the maths competition - simply selecting the best solution out of many submissions - and we earlier agreed that the facilitator was not a designer. If you accept that selecting the best of several designs is not designing the solution, then your logic seems to imply that whether the organiser designed the solution depends on what the children do and not on what the organiser does. Suppose the organiser did not know whether the proposed solutions were created through intelligence or the roll of the dice? I will keep checking for a few days in case you (or anyone else) feels moved to respond but will understand if that is the end of the discussion. Thanks for attending. Mark Frank
The principal characteristic of intelligent agency is choice. Even the etymology of the word intelligent makes this clear. Intelligent derives from two Latin words, the preposition inter, meaning between, and the verb lego, meaning to choose or select. Thus according to its etymology, intelligence consists in choosing between. For an intelligent agent to act is therefore to choose from a range of competing possibilities. (William A. Dembski, Intelligent Design, p. 144.)
You imply that the children use their intelligence in the generation of the proposed solutions. This being the case, of course they are to be credited with being co-designers with whoever designed the algorithm used to choose among the proposals. However, if they had been generating their proposals by the roll of dice (for example), then they are not to be credited with having contributed to the design. They would have been slavishly implementing a mindless process. A designer is one whose choices lead to a solution. If those choices can be automated (as in Dave Thomas's program), it makes no difference. All who make the choices (which algorithm to use, how to initialize it, what "the fittest" is defined as, possibly how to tweak the algorithm, etc.) that result in the solution are the designer(s). Your evaluation of Edison agrees with this. You say that if his solution was obtained completely by chance, then he did not design it, but if he made choices that were essential to permitting the discovery to be made, then he did design it. Dave Thomas made choices that were essential to permitting the discovery of the Steiner solutions. I could easily take your hypothetical introduction of intelligent agents into the variation process as an apparent sly ploy to make it seem more reasonable that a totally blind/dumb/purposeless process has powers that it doesn't have. (Like Darwin comparing natural selection to artificial selection in the Origin of Species.) But I'll be charitable and assume this wasn't the case. [P.S. I'm getting bored with this particular topic and plan to move on.] j
j Let's start by summarising where we agree.

1) You say "having the intention is a necessary, though not sufficient, condition for designing something". I absolutely agree. Dave had the intention to create patterns with short path lengths - but that is not sufficient to prove that he designed them.

2) In the case of the competition the organiser cannot be said to have designed the solutions (although he did design the process which led to the solutions being designed). I believe that Dave was in the position of the organiser. He created a process where solutions emerged.

I can perhaps convince you by imagining a small change to his algorithm. At the moment it creates new patterns through random mutation and crossover of "DNA". It then selects by choosing the ones with the shortest path length. It is the selection process that Salvador considers to be the point where Dave added his design. OK, let's keep the selection process but change the variation process. This will mean moving the algorithm off a computer. The organiser will start by asking many small children to come up with their best solution to Dave's problem. The organiser will then subject these solutions to Dave's selection algorithm (i.e. select the ones with shortest path length) and hand them back to the children to try to find improvements (probably we mix them up so a child is trying to improve another child's solution - not their own). The children hand in their improvements and we run them through the selection algorithm and do it all again until it looks like we aren't getting anywhere (assume these are extraordinarily indefatigable children). This looks very like the competition except it has multiple rounds. I seriously doubt that you could say the organiser was designing the solutions - the children are doing that. But the element that Salvador labelled as introducing design (the selection algorithm) is identical. What has changed is the source of variation.

With respect to 1 and 2: you need to be precise about what happens. If Edison simply kept putting things between two electrodes in the hope that one would one day light up - then I don't think you can say he designed that solution. He just discovered it. He also designed the process for finding a solution (trial and error). If he thought about the kind of properties that would be required for a light bulb, selected suitable elements, tried them out, and then worked out how to package them into a usable offering, then he did design the solution and a bit of trial and error helped out. But Dave's algorithm is not at all like that. Rgds Mark Frank
Mark Frank, As I strongly implied @65, you are using the word "design" in too strict a sense. You're using it as a synonym for "specially create." The more general definition is what ID concerns. An engineer who uses a computer program to find a solution (that corresponds to a design) is considered the designer. And it's true. He had "held a particular purpose in view" and undertook "deliberate purposive planning" and "laid down means to an end in a scheme" (see my comment #46). The organizer of a design competition cannot be said to have designed anything. The proper description of their activity would be "facilitation." Yes, I did write "If it wasn't for his intention, they wouldn't exist." But having the intention is a necessary, though not sufficient, condition for designing something. Obviously, anyone who ever intended to design an airplane before the Wright brothers actually succeeded cannot be credited with designing an airplane. But this is what one would be led to by your interpretation of what I wrote. One has to actually come through with the goods to be considered an inventor. For example, patents aren't issued to business owners, they're issued to the employees who contributed substantively to the design. I don't quite follow what you mean regarding the Newton-Raphson method.

——————————

Your (Mark Frank's) criteria @48: "(b) [to be considered a design, it must] achieve the designer's purpose in the fashion that the designer planned." You @70: "Re #68. Neither 1 or 2 [@69] are in conflict with my conditions (a) and (b). 1 and 2 are methods for coming up with a design. This could happen through serendipity, a blinding flash of inspiration or a visit from my fairy godmother. However, once the design has been produced conditions (a) and (b) must still apply for it to be a design and not just a happy accident." This makes no sense. If the design existed before the designer even intended to design anything, or if the design is found only by a process of elimination, it cannot be in the fashion that the designer planned. M-W's dictionary:
serendipity - the faculty or phenomenon of finding valuable or agreeable things not sought for. trial and error - a finding out of the best way to reach a desired result or a correct solution by trying out one or more ways or means and noting and eliminating errors or causes of failure; also : the trying of one thing or another until something succeeds.
And my modes (3) or (4) don't meet your criteria, either. j
Two more modes of design: 3) Subconscious daemon (initiated by previous conscious effort). Mathematician Henri Poincaré, after intensive periods of deliberate, conscious effort searching for what he called Fuchsian functions:
...I left Caen, where I was living, to go on a geologic excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go to some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for convenience sake, I verified the result at my leisure.
4) Conscious inspiration. Mozart:
When I am well and in good humour, or when I am taking a drive or walking after a good meal, or in the night when I cannot sleep, thoughts crowd into my mind as easily as you could wish. Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them; at least others have told me that I do so. Once I have my theme, another melody comes, linking itself with the first one, in accordance with the needs of the composition as a whole: the counterpoint, the part of each instrument and all the melodic fragments at last produce the complete work. Then my soul is on fire with inspiration. The work grows; I keep expanding it, conceiving it more and more clearly until I have the entire composition finished in my head though it may be long. Then my mind seizes it as a glance of my eye a beautiful picture... It does not come to me successively, with various parts worked out in detail, as they will later on, but in its entirety that my imagination lets me hear it.
(Both quotes from Roger Penrose, The Emperor's New Mind) j
Re #69. Thanks Salvador. I will restrict my involvement to your threads and see how it goes. Re #65. There is a difference between designing a method for finding a solution and designing the solution. Dave designed a method for producing patterns with short path lengths but he didn't design the patterns. Here's an analogy. I want to find a solution to a difficult mathematical problem. So I organise a competition and offer a prize to the first person to solve it. I designed the method of finding the solution (the competition) but the winner designed the solution. The winner would be justifiably pissed off if I claimed I had designed the solution because "without my intention the solution would not have existed". In the case of Newton-Raphson (and indeed Salvador's example) there is an additional complication - the solution is a single number. You can't design a number. You can come up with a design which you express as a single number - but that number must represent something more complex where the designer has to think through how the solution solves the problem. This is confusing because it means that in these examples the only occurrence of design is in the method of finding the solution - not the solution itself. Re #68. Neither 1 or 2 are in conflict with my conditions (a) and (b). 1 and 2 are methods for coming up with a design. This could happen through serendipity, a blinding flash of inspiration or a visit from my fairy godmother. However, once the design has been produced conditions (a) and (b) must still apply for it to be a design and not just a happy accident. Rgds Mark Frank
Mark Frank, If it makes you feel better, my posts get held up as well. Let's be patient with the volunteer work of the moderators however. If you are on my threads, and you abide by the spirit of the forum, and I remove something of yours, it will appear on the Cutting Room Floor. Thank you by the way for visiting. Salvador scordova
Re #46 for an object to be designed (as opposed to appearing to be designed) it must not only satisfy the designer’s purpose but a) should be the result of some activity by the designer (a heavy shower suits my purpose for watering the garden but I didn’t design the shower) b) achieve the designer’s purpose in the fashion that the designer planned (if I lay out a hose to water the garden and the garden actually gets watered because of a leak in the hose you can’t say I designed the solution)
This is certainly a very narrow idea of design. The method of working backwards from a known goal is clearly a useful design strategy, and the inability of evolution to use this strategy is clearly the source of many of the infelicities of the design of organisms, but it is hardly the only strategy available to a designer. Two other important sources of design are: 1) Discovery/serendipity: This is arguably the most important modality of design, in which discovery of the means precedes awareness of the goal. Typically, a designer working on some other problem entirely discovers a means of achieving an end that he did not previously even conceive as a possible goal. For example, until the discovery of the laser, nobody would have imagined using optical storage for movies. Indeed, shortly after its discovery, the laser was described as "a solution in search of a problem." Truly transforming discoveries typically arise in this way. 2) Trial and error solution to a problem: Edison reportedly tried hundreds of possible filament compositions in his search for a way of making a light bulb. Historically, much pharmaceutical drug development is derived from testing a large number of chemical compounds in animal models, picking those that seem to produce useful effects, and only later figuring out how that particular compound produces those effects. trrll
Re #65. I would like to continue this discussion but I am finding that only about 50% of my postings on UD are accepted and others are delayed by up to 24 hours (I am not aware of breaking any of the rules). This makes it almost impossible to have an intelligent debate. It also takes considerable time and effort to make a thoughtful and relevant response, and it is too frustrating to do this and then find it is rejected without explanation. There doesn't seem to be any logic as to why some posts are accepted and others are not. It makes me wonder if it is actually a technical or process problem rather than one of censorship. Let's see if this one makes the grade :-) Mark Frank
Tom observed: It seems to me that the displacement theorem does not apply. The problem here is not like finding a particular amino acid sequence, as in Dembski's "Searching Large Spaces." That is, the problem is not to find an element of a small target, where elements outside the target are of no utility. Instead, any network that spans the fixed nodes is usable. There are a great many such networks. Some are better than others. You might argue that the set of Steiner solutions is the target, but, unlike Dembski's search assistant, the fitness function does not "know" which networks are Steiner solutions. There are local maxima in the fitness landscape, and the fitness function does not do anything to assist the GA in finding global maxima (Steiner solutions). This seems to me not to fit Dembski's paradigm of assisted search.
If the Displacement Theorem does not apply, then the target lacked high specificity (high specificity means highly improbable), which means Thomas's program cannot be a counterexample to CSI conservation. This would be equivalent to launching an arrow and saying that wherever it landed, it hit a target. But even that may be too generous if the software guarantees solutions which create connectivity (albeit of unknown length). If we then calculate the ratio of all possible connected networks versus all possible unconnected "networks", then CSI is implicated, and the displacement theorem applies. Thomas describes his fitness function in his original essay Target? TARGET? We don't need no stinkin' Target!:
In this figure, the first and third “organisms” connect all the nodes, while the second has a fatal flaw: the top node is not connected to any other node. I defined the “fitness” of the organism as simply the net length of all activated segments, or 100,000 if any fixed node is unconnected. It’s important to note that the “fitness” thus defined does not depend on the exact number of active variable nodes, or the angles between connected segments, or upon anything other than the total length of active segments. While both first and third solutions at least connect the fixed nodes, they are both far different than the proper Steiner Solution for the five-node system. The Fitness Test knows nothing of this solution, however; all it tells us is that the solution on the right is a little shorter, and therefore “fitter,” than the solution on the left. Because the middle solution misses a node, it is “unfit.”
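To make that rule concrete, here is a minimal, self-contained sketch of a fitness function of this kind. This is not Thomas's code; the node coordinates, edge list, and union-find bookkeeping below are illustrative assumptions, while the "total length of active segments, or 100,000 if any fixed node is unconnected" rule follows his description.

/* Sketch of the described fitness rule: total length of active segments,
 * or a flat 100000 penalty if any fixed node is left unconnected.
 * Nodes, coordinates and edges are illustrative only. */
#include <math.h>
#include <stdio.h>

#define MAXN 64
static int parent[MAXN];
static int find(int a) { return parent[a] == a ? a : (parent[a] = find(parent[a])); }
static void unite(int a, int b) { parent[find(a)] = find(b); }

typedef struct { int a, b, active; } Edge;

double fitness(const double x[], const double y[], int n_nodes, int n_fixed,
               const Edge e[], int n_edges)
{
    for (int i = 0; i < n_nodes; i++) parent[i] = i;
    double total = 0.0;
    for (int i = 0; i < n_edges; i++) {
        if (!e[i].active) continue;
        unite(e[i].a, e[i].b);
        total += hypot(x[e[i].a] - x[e[i].b], y[e[i].a] - y[e[i].b]);
    }
    for (int i = 1; i < n_fixed; i++)          /* "unfit" if a fixed node is cut off */
        if (find(i) != find(0)) return 100000.0;
    return total;                              /* shorter is "fitter" */
}

int main(void)
{
    /* three fixed nodes plus one variable node, all joined through node 3 */
    double x[] = { 0, 100, 50, 50 }, y[] = { 0, 0, 100, 30 };
    Edge e[] = { {0, 3, 1}, {1, 3, 1}, {2, 3, 1} };
    printf("fitness = %.2f\n", fitness(x, y, 4, 3, e, 3));
    return 0;
}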
In other words, we have 2 situations:

1. If the Displacement theorem doesn't apply, then the GA cannot be used as a valid counter-example to ID claims.
2. If the Displacement theorem applies, then we may be able to find how the CSI was snuck in.

There is an extreme caveat here: discovering the how is a sufficient, but not necessary, condition for affirming the displacement theorem. It may be that the GA program is so complex that it defies analysis of how the CSI was snuck in. For example, a fitness function may have been discovered by the designer through serendipity, and he may or may not understand why it works. Meaning, random hacking of the fitness function on his part may create a local optimization, just as your paper (correct me if I'm wrong) suggested could be easy. But even such random hacking to achieve a goal is assisting the search. If the solution space is structured a certain way, then on rare occasions the quest for optimization may actually result in a genuinely informed (learned) strategy. scordova
Mark Frank (48):
...for an object to be designed (as opposed to appearing to be designed) it must not only satisfy the designer’s purpose but...achieve the designer’s purpose in the fashion that the designer planned...
This simply isn't true. Use of various (computer-based) numerical methods to help find optimal, or at least adequate, solutions is a standard part of perhaps most engineers' educations. Necessarily, when these methods are used, the engineer doesn't know what the solution will be beforehand, but he or she must know enough to choose and set up the method in a way that it may yield a solution. I've already given an example of this in another thread: the solution of a system of nonlinear equations (that models some real-world problem) using an algorithm such as the Newton-Raphson method.
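For instance, here is a minimal single-variable Newton-Raphson sketch; the cubic below is an arbitrary illustrative equation, not one from this discussion. The engineer supplies the function, its derivative, and a starting guess, but not the root itself.

/* Newton-Raphson on f(x) = x^3 - 2x - 5, starting from x = 2 */
#include <math.h>
#include <stdio.h>

static double f(double x)  { return x*x*x - 2.0*x - 5.0; }
static double fp(double x) { return 3.0*x*x - 2.0; }      /* derivative f'(x) */

int main(void)
{
    double x = 2.0;                          /* initial guess */
    for (int i = 0; i < 50 && fabs(f(x)) > 1e-12; i++)
        x = x - f(x) / fp(x);                /* Newton step */
    printf("root ~= %.10f\n", x);            /* converges to about 2.0945514815 */
    return 0;
}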
Dave Thomas had no idea how the resulting patterns were going to achieve short paths so he did not design...
He did design them. If it wasn't for his intention, they wouldn't exist. What he didn't do is specially create them. Intelligent design is not creationism. ;-) j
Formatting error: trrll's blockquote should have ended after the first paragraph in the previous comment. Sorry. P.S. Will UD have a comment "preview" function after the coming upgrade? sophophile
trrll wrote:
It seems clear that every existing species could in principle be generated by random chance, simply by exhaustively sampling the space of possible DNA sequences. While maximally inefficient, such an algorithm would eventually be successful. Therefore, if the design of living creatures embodies CSI, then random chance can create CSI.

You're right that CSI can be produced by purely random processes, given enough time. However, my reading of Dembski is that he defines CSI with a 500-bit threshold in order to preclude such a production of CSI, except by an extreme fluke, within the projected duration of the universe. The error, of course, is in assuming that instances of CSI must come together in a single step. If the specified complexity accumulates in many smaller steps, with a non-random fitness function to select promising candidates along the way, then the odds become dramatically better for producing CSI within the time available. Given that most IDers accept natural selection on a small scale (when they aren't contradicting themselves by insisting that it is a tautology), you would think the only debate would be over whether real-life fitness functions and genomes admit of a series of steps toward structures possessing CSI, where each step is realistic given the amount of time available for it to happen. Applying this to GA's, the question is which "genomes" and fitness functions allow for achievable stepwise paths to CSI, not whether any such genomes and fitness functions exist.
sophophile
The question is whether undesigned nature can create such genetic algorithms in real life that can originate novel CSI. The displacement theorem describes the likelihood as more remote than random chance on average.
Others have pointed out why the displacement theorem does not apply. But for the sake of argument, let's consider random chance: It seems clear that every existing species could in principle be generated by random chance, simply by exhaustively sampling the space of possible DNA sequences. While maximally inefficient, such an algorithm would eventually be successful. Therefore, if the design of living creatures embodies CSI, then random chance can create CSI. So it seems that the argument ultimately boils down to one of efficiency. Is a parallel heuristic sampling algorithm such as a genetic algorithm able to discover CSI (or whatever type of information is required to define an organism) sufficiently efficiently? This is not a question that could be resolved by some sort of law of information conservation, even if such a thing could be established, because it is a kinetic question—not a question of whether it can happen, but whether it can happen fast enough to be consistent with observation. trrll
Salvador, "The displacement theorem describes the likelihood as more remote than random chance on average." It seems to me that the displacement theorem does not apply. The problem here is not like finding a particular amino acid sequence, as in Dembski's "Searching Large Spaces." That is, the problem is not to find an element of a small target, where elements outside the target are of no utility. Instead, any network that spans the fixed nodes is usable. There are a great many such networks. Some are better than others. You might argue that the set of Steiner solutions is the target, but, unlike Dembksi's search assistant, the fitness function does not "know" which networks are Steiner solutions. There are local maxima in the fitness landscape, and the fitness function does not do anything to assist the GA in finding global maxima (Steiner solutions). This seems to me not to fit Dembski's paradigm of assisted search. "No one has yet been able to demonstrate that a GA can popup out of nowhere on its own. NO ONE!" That's in part because of the weirdness of your statement. No one claims that nature created a GA. Humans devised the GA as an abstract model of a natural process. The natural process of evolution arises by necessity when a population of self-replicators competes for resources in a bounded arena. Variation in the population arises from errors in replication necessitated by thermodynamics. The boundedness of the arena necessitates that some variants are culled from the population. Evolution is necessity operating on chance inputs. A better version of your remark: No one has yet been able to demonstrate that a self-replicator can pop up out of nowhere on its own. NO ONE! "... GA’s do not demonstrate that undesigned nature can evolve complex designs from scratch." You have a concept of undesigned nature? If I demonstrate to you that novel information in biota comes from random errors in reproduction and specificity comes from the environment (i.e., by way of selection), you'll have no choice but to tell me the environment was designed. Am I wrong? Tom English
52. scordova: “What Thomas did not model was adding some noise into his selection routines where the less fit individual survives almost 50% of the time.” I think that this could be an interesting variation on a GA program. On the other hand, do you really want to suggest that? The reason is that by this mechanism you are permitting the possibility that mutations can accumulate without being eliminated right away by selection. So even if either of two mutations may not have a significant effect on improving fitness, the synergistic combination of the two may do just that. A few months ago, there was a big discussion about the evolution of the mineralocorticoid receptor that involved two mutations. see http://www.sciencemag.org/cgi/content/abstract/sci;312/5770/97. ofro
You could reduce this further with a point D close to point 5. That will introduce new possibilities for tweaking B and C. I got a total length of 1595.392 using D = (400, 281), but then I only adjusted the y coordinate in increments of 1. steveh
Here are the 4 proposed networks. In Cartesian coordinates, letting Vertex 1 be the origin, i.e. Vertex 1 = (0,0), and Vertex 6 be at (x,y) = (800,300), I propose the following 4 solutions of equivalent length:
Solution 1 (Steiner points):
A = (86.6025, 150)
B = (313.3975, 150)
C = Fermat point joining vertices 5, 6, 3 = (730.42, 224.88)
A connects to Vertex 1, Vertex 4, and Steiner point B
B connects to Steiner point A, Vertex 5, and Vertex 2
C connects to Vertex 5, Vertex 6, and Vertex 3

Solution 2 (Steiner points):
A = (86.6025, 150)
B = (313.3975, 150)
C = Fermat point joining vertices 2, 6, 3 = (730.42, 75.13)
A connects to Vertex 1, Vertex 4, and Steiner point B
B connects to Steiner point A, Vertex 5, and Vertex 2
C connects to Vertex 2, Vertex 6, and Vertex 3

Solution 3 (Steiner points):
E = Fermat point joining vertices 1, 4, 5 = (69.58, 224.88)
F = (486.6025, 150)
G = (713.3975, 150)
E connects to Vertex 1, Vertex 4, and Vertex 5
F connects to Vertex 5, Vertex 2, and Steiner point G
G connects to Steiner point F, Vertex 6, and Vertex 3

Solution 4 (Steiner points):
E = Fermat point joining vertices 1, 4, 2 = (69.58, 75.13)
F = (486.6025, 150)
G = (713.3975, 150)
E connects to Vertex 1, Vertex 4, and Vertex 2
F connects to Vertex 5, Vertex 2, and Steiner point G
G connects to Steiner point F, Vertex 6, and Vertex 3
Here is the sum of all the lengths. Each solution has the same length, so I'll use Solution 1 to calculate it, and one can assume the other 3 solutions have the same length:

1 to A: 173.2050808
4 to A: 173.2050808
A to B: 226.7949192
B to 5: 173.2050808
B to 2: 173.2050808
subtotal (left): 919.6152423
5 to C: 338.8521647
3 to C: 235.4003099
6 to C: 102.3907822
subtotal (right): 676.6432568
total: 1596.258499

Each Steiner point has degree 3, with each connected edge opening 120 degrees from the adjacent one radiating from the same Steiner point. These are necessary but not yet sufficient conditions for a Steiner solution. I await Dave's correct solution, but I think these are at least MacGyver solutions. scordova
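To check the arithmetic, here is a small sketch that recomputes the Solution 1 total from the coordinates. The positions of vertices 2, 3, 4 and 5 are not stated explicitly above; they are inferred here from the quoted segment lengths as a 2-by-3 grid with 400-unit horizontal and 300-unit vertical spacing, so treat them as an assumption:

/* Recompute the Solution 1 total length from the coordinates given above. */
#include <math.h>
#include <stdio.h>

static double len(double ax, double ay, double bx, double by)
{
    return hypot(bx - ax, by - ay);
}

int main(void)
{
    /* fixed vertices: vertex 1 at the origin, vertex 6 at (800,300);
       vertices 2-5 inferred as a regular grid (assumption) */
    double v1x = 0,   v1y = 0,   v2x = 400, v2y = 0,   v3x = 800, v3y = 0;
    double v4x = 0,   v4y = 300, v5x = 400, v5y = 300, v6x = 800, v6y = 300;
    /* Steiner points for Solution 1 as given in the comment */
    double ax = 86.6025, ay = 150, bx = 313.3975, by = 150, cx = 730.42, cy = 224.88;

    double total = len(v1x, v1y, ax, ay) + len(v4x, v4y, ax, ay)
                 + len(ax, ay, bx, by)   + len(bx, by, v5x, v5y)
                 + len(bx, by, v2x, v2y) + len(v5x, v5y, cx, cy)
                 + len(v3x, v3y, cx, cy) + len(v6x, v6y, cx, cy);

    printf("total length ~= %.6f\n", total);   /* close to the 1596.258499 above */
    return 0;
}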
Here is the sum of all the lengths:

1 to A: 173.2050808
4 to A: 173.2050808
A to B: 226.7949192
B to 5: 173.2050808
B to 2: 173.2050808
subtotal (left): 919.6152423
5 to C: 338.8521647
3 to C: 235.4003099
6 to C: 102.3907822
subtotal (right): 676.6432568
total: 1596.258499

scordova
There are 4 solutions which can be constructed via symmetry from the 1st one. I don't know if they are optimal solutions, but they'll connect the 6 vertices. Letting Vertex 1 be the origin, and Vertex 6 be at (x,y) = (800,300), I propose the following Steiner points. Here is my correction:

A = (86.6025, 150)
B = (313.3975, 150)
C = Fermat point joining vertices 5, 6, 3 = (730.42, 224.88)
A connects to Vertex 1, Vertex 4, and Steiner point B
B connects to Steiner point A, Vertex 5, and Vertex 2
C connects to Vertex 5, Vertex 6, and Vertex 3

Each Steiner point has degree 3, with each connected edge separated by 120 degrees from the others. These are necessary but not yet sufficient conditions for a Steiner solution. I await Dave's correct solution, but I think this one is adequate. The other 3 can be constructed via symmetry. Salvador scordova
trrll commented: Re #50. While I understand that this is a moderated forum, and you are entitled to censor whatever you choose for whatever reasons that you choose, it hardly seems fair to choose not to post a response yet still write a rebuttal to a selected snippet from that response
trrll, Apparently while I was moving your earlier comment, Michaels was still typing a response to it. However, feel free to continue participating, I do appreciate your participation. Salvador scordova
Otherwise, life can pop up anywhere, in any form, any planet.
We don't exactly know that it can't. Whether the origin of life is a low probability or a high probability event is unclear. It seems to have happened pretty quickly here on earth once the crust stabilized, which suggests that it is a high probability event under primordial earth conditions, but how common those conditions are in the universe is unclear.
The fact is you're defining living constraints of the program, thermo being just one of the external considerations in your example. Your cost/efficiency ratios are but one of many conditions that must be optimized, not including interactions, immune systems, repair systems, catalysts (enzymes). But cost/efficiency ratios can be overcome by larger energy input/output.
Yes, organisms are more complicated than this simple system, and there are certainly more complicated forms of artificial life evolving in computer simulations. But other concerns, such as overall energy input/output do not obviate the importance of efficient design of biological networks. It is certainly true that a mutation that increases overall energy input would be strongly selected for. But once that mutation has gone to fixation, the individuals with more efficient networks will once again have a selective advantage.
I'd appreciate any links which may dispute the article's findings at Biocompare, posted in 2003, on a study by Dr. Richard Wolfenden, who claims it to be an enigma as to how enzymes arose within the currently accepted universal timeline.
What is the relevance to evolution of a reaction whose uncatalyzed rate is negligible? Do you imagine that enzymes are the only catalysts in nature? All sorts of things have been observed to catalyze chemical reactions, including inorganic clays, minerals, and ions. Moreover, studies of the evolution of novel enzymatic activities by mutation/selection in vitro have shown that catalytic activity, with rates lower than an enzyme's but much greater than the uncatalyzed rate, can readily be found in mixtures of random peptides or nucleic acids. trrll
Re #50. While I understand that this is a moderated forum, and you are entitled to censor whatever you choose for whatever reasons that you choose, it hardly seems fair to choose not to post a response yet still write a rebuttal to a selected snippet from that response. trrll
trrll wrote: For example, the Steiner network problem models a biological problem that organisms must solve, the problem of "designing" efficient networks. Networks are used heavily in biology: neural networks, vascular networks. They need to connect crucial targets within the body, but thermodynamics imposes an energetic cost per unit length. An organism that grows excessively long vascular and neural pathways will be at a disadvantage compared to an otherwise identical organism that grows its pathways more efficiently, because it will require more nutrition and be at greater risk when food is scarce.
But that presumes the existence of a functional network in the first place, and that the signal-to-noise ratios are high enough to make small differences in network topology sufficiently visible to selection forces. If this is not the case, the appropriate model for the fitness might as well be a random number generator with a teeny tiny slight bias toward a goal. What Thomas did not model was adding some noise into his selection routines, where the less fit individual survives almost 50% of the time. What Thomas used was truncation selection, and as John Sanford noted in Genetic Entropy, that is a highly inappropriate model of selection in the wild. And then there is yet one other nasty issue: what chooses the number of network points in the first place? A very good example of this problem is the excessive overkill in terms of brain cells in the human brain. It is widely acknowledged that the human brain seems far beyond what natural selection requires. Why then add all these network points, if energy is the selection criterion? The human brain consumes 20% of the body's energetic resources. From an energetic standpoint, this seems a bit excessive, don't you think? I hate to say this, but if average intelligence is declining per generation, this would not bode well for Darwinian evolution as the designer of the human mind. Some impetus greater than thermodynamic efficiency was at work in creating such a thermodynamically expensive apparatus as the human mind. scordova
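A minimal sketch of the noisy selection described above (this is not Thomas's code; the 0.52 bias toward the fitter candidate is an arbitrary illustrative value):

/* The fitter of two candidates (lower fitness value = fitter here) survives
 * only slightly more than half the time, instead of always as in truncation
 * selection. */
#include <stdio.h>
#include <stdlib.h>

static int noisy_select(double fit_a, double fit_b)
{
    double p = (double)rand() / RAND_MAX;      /* uniform draw in [0,1] */
    int fitter = (fit_a < fit_b) ? 0 : 1;
    return (p < 0.52) ? fitter : 1 - fitter;   /* less fit survives ~48% of contests */
}

int main(void)
{
    srand(1);
    int wins = 0;
    for (int i = 0; i < 100000; i++)
        if (noisy_select(1.0, 2.0) == 0) wins++;
    printf("fitter candidate survived %d of 100000 contests\n", wins);
    return 0;
}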
trrll asked: So where does that information come from? Is it in the few bits of the “shorter is better” fitness criterion? Or the program as a whole?
The information comes from the choices of the intelligent designer(s), including the computer system designers. Whether they act through a surrogate like a computer does not negate the fact that the source of the information (the reduction of uncertainty) regresses to an intelligent designer. And even if he does not know what the answer will be beforehand, it does not negate that the information in the output channel still proceeds from his choices. The output of the program is merely an alternative representation of the information he put in. The "bits" in the software are not to be confused with the "bits" in the solution space. For example, I could write the following program, which will "generate" zillions of bits of CSI in an output channel until you shut it off. It generates a long string of 1's:
#include <stdio.h>
int main(void) { while (1) printf("1"); }  /* prints 1's until interrupted */
The output (from a storage standpoint) has many more bits of information than the source program that generated it. But the information still regresses to the choices and engineering of the system. A ZIP file that takes X bytes to store may decompress to a much larger file of Y bytes. A program can in principle be minuscule compared to the output it decompresses to (the above program was such an example). CSI metrics apply a different technique for measuring information than rote storage requirements or even algorithmic complexity. This thread is not appropriate for that discussion. Maybe Part 3 of Tautologies and Theatrics, ok? The question is whether undesigned nature can create such genetic algorithms in real life that can originate novel CSI. The displacement theorem describes the likelihood as more remote than random chance on average. Thus GA's do not demonstrate that undesigned nature can evolve complex designs from scratch. No one has yet been able to demonstrate that a GA can pop up out of nowhere on its own. NO ONE! Salvador PS (A long string of 1's can be generated by a stuck key on a computer keyboard. Such a circumstance might inadvertently generate a long string of 1's. But these are the pitfalls of computer arguments, due to some of the artificialities computers induce into a simulation.) scordova
"Do you genuinely not recognize the difference between a solution that requires that the final answer be known and one that only requires identification of a constraint that the answer must satisfy? Or are you just trying to confuse the issue because you are not confident of your ability to rebut the actual points raised by Thomas’s simulation?" trrll, for my part, I understand what you're saying, I just disagree with the conclusion. The simulation is over simplified and does not represent lifes diversity in this unique biosphere. A fitness function or genetic algorithm is dependent upon the a narrow viewpoint(contraints) for optimization and leads often to deadends. This is one of the actual problems with genetic algorithms. Without conditional feedback loops for multiple external stimuli and a reactionary pre-coded feedback loop, solutions can be suboptimal without even knowing it. A simple fitness function selected to favor certain reproductive success based upon one narrow contraint across large search spaces is only one function simulated, but it does not extrapolate to the complex interactions of higher level organisms or new morphological changes. Recognition of certain laws and the catastrophic consequences if they are modified only attributes to the precarious situation of which we find ourselves in a finely tuned plan. Thermodynamics does not only apply to this planet, but the universe as we know it. The contraints as you put it here on earth is actually what allows life to exist. We are living within multiple levels of hierarchical contraints which limit exposure to full planetary and life extinction. Otherwise, life can pop up anywhere, in any form, any planet. I think a more simple reduction is simulating eye color combinations; however, popping eyes on the back of ones head that is functional is quite another. There is a breakpoint in morphology, but this should not be confused with thermodynamics on our planet which actually allows life to exist. The fact is you're defining living contraints of the program, thermo being just one of the external considerations in your example. Your cost/efficiency ratios are but one of many conditions that must be optimized, not including interactions, immune systems, repair systems, catalyst(enzymes). But cost/efficient ratios can be overcome by larger energy input/output. So the contraint of such vascular systems is only limited like fish in a fish bowl or those released into a pond, a lake or ocean. These are size and energy considerations yes, but this is not related to morphology, only survival of each species. Famine can kill an elephant or a caterpillar. Optimization of neural, vascular pathways within each does not prove one can evolve into the other. Morphology to me is programmatic - designed. Optimization and variation based upon external stimuli within bounded contraints is but one part of the overall genetic program. The Steiner Tree problem mimics but one component and does not solve Fluid Dynamics contraints. Finally, speaking of chess, re: enzymes as catalyst and time contraints: http://news.biocompare.com/newsstory.asp?id=10433 hattip: linked to by Jonathan Sarfati, former New Zealand Chess Champion; http://www.creationontheweb.com/content/view/3547 I'd appreciate any links which may dispute the articles findings at Biocompare posted in 2003 on a study by Dr. Richard Wolfenden who claims it to be an enigma as to enzymes arising within the currently accepted universal timeline. 
He states, "As to the uncatalyzed phosphate monoester reaction of 1 trillion years, "This number puts us way beyond the known universe in terms of slowness," he said. "(The enzyme reaction) is 21 orders of magnitude faster than the uncatalyzed case. And the largest we knew about previously was 18. We've approached scales than nobody can grasp."" So, not only are we looking at thermo considerations, fluid, etc., but also catalyst responsible for all life forms which speed reactionary survival mechanisms to milliseconds. Michaels7
Sal, et al., Let's face it, the PT crowd's little choo choo has gone way round the bend and off the tracks! Their language is more and more strident. Their use of the "IDers are lying" phrase is getting, well, tiresome. I keep saying this, but I'll repeat it again and again until they get it: for a group that claims to hold logic and scientific reasoning so dear, it is constantly amazing how quickly they abandon it when their own logic, reason, claims and ideas are challenged. It reminds me of the title of Solzhenitsyn's "We Never Make Mistakes". That they actually believe that's the case is laughable and sad at the same time. Keep after them, Sal. I'll take your logic and reason over their ad hominems, straw men and violations of the law of non-contradiction any day! Let us know when one of them says something reasonable! DonaldM
Re #46 for an object to be designed (as opposed to appearing to be designed) it must not only satisfy the designer's purpose but a) should be the result of some activity by the designer (a heavy shower suits my purpose for watering the garden but I didn't design the shower) b) achieve the designer's purpose in the fashion that the designer planned (if I lay out a hose to water the garden and the garden actually gets watered because of a leak in the hose you can't say I designed the solution) Dave Thomas had no idea how the resulting patterns were going to achieve short paths so he did not design (b). They do however give the appearance of design because they look like the kind of pattern someone might have thought up if told to produce a pattern with a short path. Mark Frank
Dave T.: "That’s like dismissing as nitpicking the objection that an alleged perpetual motion machine only needs a little bit of input energy to keep it working." The phrase "a little bit pregnant" comes to mind, too. j
Me (30): "Salvatore" Uggh. Sorry, Salvador -- I was pressed for time. I really do know better. I've only read your name here about, what, maybe 2000 times? ofro (31): "It seems to me that you are implying that it is not possible to write a program that behaves in the manner postulated by blind/dumb/purposeless Darwinian evolution. In other words, one cannot write a code that behaves like this hypothetical, very mechanistic process because as soon as I write it, I have put in a non-Darwinian goal?" No. Darwinian evolution programs exist, just not any that design anything. Google "Darwinbots", for example. Mark Frank (35): "The power of a programme like David Thomas’s is not to simulate all aspects of evolution. It is simply to show that small mutations plus selection applied repeatedly can generate innovative solutions that give the illusion of design." It's not an "illusion of design" when something was intended. It's actual design. See definitions 1 and 2 (the original definitions) of the noun at www.m-w.com/dictionary/design :
1 a : a particular purpose held in view by an individual or group {he has ambitious designs for his son} b : deliberate purposive planning {more by accident than design} 2 : a mental project or scheme in which means to an end are laid down
Use of the word to designate "the arrangement of elements or details in a product or work of art" is recent. Dave T.: "That's like dismissing as nitpicking the objection that an alleged perpetual motion machine only needs a little bit of input energy to keep it working." j
Re #38 (and others). I am not doing well at explaining my point. I will try a different approach.

1. Salvador and others write as though there were two things: a problem to be solved and a selection algorithm for solving it. (Then of course you can object that the algorithm has been intelligently designed to solve the problem.) But they are not two separate things. Evolution is not trying to solve a separate problem from reproducing - reproducing/getting selected *is* the problem. It would be nice to simulate the complex, subtle and ever-changing ways that organisms get selected in reality - but that is asking too much of a simulation. So we substitute a different selection algorithm. To that extent the programme is a very partial simulation of evolution.

2. It was perhaps unfortunate that Dave Thomas put his programme in the context of finding a Steiner solution. That is just a by-product of his programme. Imagine he had never mentioned the Steiner solution. The programme still works.

3. It is a trivial result that if you repeatedly mutate, select and inherit for attribute A, then you will end up with a population that has more and more of attribute A. I don't think anyone on the list would challenge that or find it interesting. What is interesting about programmes like Dave's is that the "solutions" generated are novel (i.e., not predicted by the writer of the programme) and give the appearance of being designed.

4. Of course intelligence is required to write a computer programme including a selection algorithm. This is because it is an artificial simulation of reality. In the same way you need intelligence to write a climate simulation programme - that doesn't mean the climate was intelligently designed.

5. I think this is the real issue: only some selection algorithms lead to novel solutions that appear to be designed. So a selection algorithm on the lines of "accept if closer to the sum of the first 1000 digits, reject if further away" is unlikely to lead to a novel solution. After all, the solution is only going to be a number. So some selection algorithms lead to novel solutions which appear to be designed; others don't. So the real question is: does the ineffable selection process of natural selection fall into the first category or the second? That's not a question that will be solved by computer programmes. All they can do is show that at least some selection algorithms are able to generate novelty. Mark Frank
However, such problems are rare in which a GA can "design" a solution. In contrast, a GA in and of itself would be a very poor way of designing chess strategies. Chess software does not employ GA solutions as much as it employs search heuristics, brute-force searches, and good guesses (the technical term is "static evaluation"). Thus, contrary to Haeckel's claim that evolution is the word that can solve all our problems, in the world of engineering that is not true, and it's not even a common solution!
It would almost certainly be possible to design a genetic algorithm to solve chess problems. One could start with a set of genes regulating the connectivity of a neural network, train each network over the equivalent of a decade or so of human chess competition, and then allow it to "reproduce" based upon its chess ranking. Of course, the resultant program might well run too slowly to be of much use, considering the requirement to simulate the firing of perhaps millions of neurons. Chess, after all, is a relatively simple game, amenable to brute force look-ahead strategies. Similarly, your trivial math problem is hardly worth the effort of a genetic algorithm. But when you get to really difficult problems—NP complete problems like the Traveling Salesman—then the power of an evolutionary approach becomes more evident. I am curious about your perspective regarding the information generated by the simulation. After all, one could add in a random number to generate random arrays of nodes, and the program could go on indefinitely generating designs for Steiner networks, outputting a huge amount of information. And it is hardly trivial information—even intelligent human beings have difficulty designing efficient networks. So where does that information come from? Is it in the few bits of the "shorter is better" fitness criterion? Or the program as a whole? Even that doesn't come close to the number of bits output by the program. Do you deny that the solution to a network problem (and presumably, all other NP Complete problems) constitutes real complex specified information? Of course, there is a sense in which the solution to any problem can be said to be implicit in the problem definition. But in this case the design of every viable organism can reasonably be said to be implicit in the laws of physics and chemistry, in which case a hypothetical intelligent designer of life would no more be adding information than an intelligent traveling salesman who works out an efficient route. trrll
Algorithms like Dave Thomas’s do an excellent job of optimizing for a particular physical variable. In the case of the Steiner solution, the algorithm optimizes for shortest length or smallest area. And there is no doubt about the results: A computer running the algorithm can find an optimal solution faster than a human being. But notice that the choice of what physical variable the algorithm optimizes for is not decided by the algorithm but built into it by the programmer. With respect to the algorithm, the “most fit” solution is always the one that minimizes length or area. Now “fitness” for Darwinism means “survival”. But survivability does not map to a single or even a small set of physical variables. It maps to a virtually infinite set of physical variables. For one organism, increasing length might increase survival, for another decreasing length might increase survival, for another survival might have nothing to do with length at all. In fact, it may involve an entirely novel physical characteristic, which is the point of evolution in a creative sense in the first place.
The choice of what biological variables to optimize for is not something that is decided by evolution—it is something that is imposed by nature. For example, the Steiner network problem models a biological problem that organisms must solve, the problem of "designing" efficient networks. Networks are used heavily in biology—neural networks, vascular networks. They need to connect crucial targets within the body, but thermodynamics imposes an energetic cost per unit length. An organism that grows excessively long vascular and neural pathways will be at a disadvantage compared to an otherwise identical organism that grows its pathways more efficiently, because it will require more nutrition and be at greater risk when food is scarce. There is no choice open to evolution of what physical variable to optimize for, because thermodynamics is inflexible; whatever else the organism might do to improve its resource utilization, mutations that result in shorter, more efficient networks will always enhance survival. So the programmer of the simulation is effectively playing the role of the laws of thermodynamics. Allowing the fitness criterion to mutate, as some have suggested, is clearly unrealistic, because the laws of thermodynamics are fixed. It may well be that evolution won't work in a universe in which the laws of nature mutate randomly from moment to moment, but that is not the question that the simulation is designed to test. trrll
Tom, Thank you again for sharing your expertise. If I may, I will offer a couple of anecdotes and then invite you to make some more comments for the benefit of the readers, especially those without a PhD in Computer Science like yourself. :-) I was very fascinated with Artificial Intelligence (AI) at first, and then its very ambitious hopes seemed to fall short of expectations. At first, it was quite amazing to see these "AI" programs play chess, play checkers, and so forth, but at the end of the day these were not really thinking or creative machines, at least not in the way we conceive of what a thinking being is.... In fact, the definition of AI is now somewhat nebulous.... All this to say, it seems to me that systems that somewhat mimic an intelligent designer in their activities still require a great deal of front-loaded intelligent design to give the system all its marvelous abilities. This of course has bearing on the issue of how much intelligent front loading nature would require for nature to be able to create life from scratch and confer the complexities we see today. If hypothetically life came about through a process of selection and mutation, how rare would such mutations and events have to be? If the functional designs we see in the biological world are as easy to come by as fog in London, then we should not at all be amazed. However, if such events are rare, then one would have to wonder about:
1. whether "intelligently designed selection" is a more appropriate metaphor versus "natural selection"
2. whether intelligently designed prescribed evolution is a better metaphor for some kinds of evolution
3. whether some initial amount of special creation of the first life is needed (even Darwin believed in limited special creation)
4. some combination of the above
The selection that I see in GAs fits into category #1. That is, Genetic Algorithms do their thing because an intelligent agency is designing the selection that is used. The selection in such algorithms is intelligently designed to achieve a goal. Self-extracting zip files correspond to #2, where data de-compresses (de-represses, using Davison terminology). #3 would correspond to the existence of the computer systems in the first place. A self-extracting ZIP file of a GA would be analogous to #4. The point is, it seems to me there is not a lot of room for thinking such events would be commonplace, and because of their rarity, intelligence seems at least a plausible candidate for their ultimate origin. That's kind of where all this debate falls. Along the lines of a successful Genetic Algorithm's (GA) rarity, let me offer some thoughts, as I think it speaks of the improbability of a successful evolutionary pathway for life. Here are the requirements for a human GA to succeed:
1. The problem has to be solvable
2. The problem is amenable to solution by an evolutionary algorithm
3. The problem can be tractably analyzed such that a trustworthy selection strategy appropriate to the problem can be formulated
4. An intelligent agency is available to put the evolutionary system together such that evolution can happen toward a desired goal
The first example that comes to mind of all the requirements being met is a GA that solves the Travelling Salesman Problem
Given a number of cities and the costs of travelling from any city to any other city, what is the cheapest round-trip route that visits each city exactly once and then returns to the starting city?
However, problems in which a GA can "design" a solution are rare. In contrast, a GA in and of itself would be a very poor way of designing chess strategies. Chess software does not employ GA solutions as much as it employs search heuristics, brute-force searches, and good guesses (the technical term is "static evaluation"). Thus, contrary to Haeckel's claim that evolution is the word that can solve all our problems, in the world of engineering that is not true, and it's not even a common solution! For example, I provided 5 programs that pumped out the final string of "500500", but the most obtuse-looking by far was the genetic algorithm. The problem was simply not amenable to a GA. I had to actually concoct an extremely circuitous way for a GA to compute the answer. But beyond the rarity of GAs and of problems amenable to GAs is the fact that every GA we have seen come into existence from scratch came about through intelligence or pre-existing life. Let us hypothetically assume an evolutionary route via selection was taken. Would that route require design? My initial response is, "yes". scordova
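To make the TSP example above concrete, here is a minimal sketch in C of the kind of fitness measure such a GA would use (illustrative only; it is not taken from any of the programs discussed in this thread, and the function and parameter names are made up). The point is that the programmer, not the algorithm, decides in advance that a shorter tour counts as fitter.

/* Hypothetical sketch: fitness measure for a Traveling Salesman GA.
   A candidate is a permutation of city indices; the programmer has
   decided up front that a shorter round trip means higher fitness. */
#include <math.h>

double tour_length(const double x[], const double y[],
                   const int tour[], int ncities)
{
    double total = 0.0;
    for (int i = 0; i < ncities; i++) {
        int a = tour[i];
        int b = tour[(i + 1) % ncities];   /* wrap back to the start city */
        double dx = x[a] - x[b];
        double dy = y[a] - y[b];
        total += sqrt(dx * dx + dy * dy);  /* Euclidean leg length */
    }
    return total;                          /* selection keeps the smallest */
}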
BC, I wrote my first FORTRAN program in 1981 on punch cards, fed into an IBM370. One single comma out of place, or a missing punch card, could spell disaster for the program's compilation, or produce infinite loops, seriously wrong answers, and simulation failures. So your decades of experience hold no sway over me. Along with my buddy at the time, I built a Texas Instruments computer, and I have programmed in Assembler, PL/I, COBOL, Basic and other 4th-generation languages. I ran Differential Equations on my poor little VIC20 to score extra credit in third-year Calculus. I worked for 7 years with a leading-edge software company that revolutionized the document industry, supplying both IBM and Xerox with key software components that included image compression algorithms, before starting my own consulting firm for another 7 years, working with Fortune 500 companies to secure cost savings of millions of dollars by eliminating outdated legacy document management systems, using intelligent, knowledge-based systems that built on-demand technology for real-time solutions. What took weeks, we reduced to seconds and allowed for interactive response. We did this across broad-based input and output systems, including translation with centralized or network processing. Now, all bravado aside and with all due respect to your software engineering skills, the problem is not in understanding what the program is doing, but in extrapolating from it grand illusions. One little snippet of code does not provide evidence for a materialist solution. We are only just unraveling the mystery of life. I once believed in simple evolutionary steps before I became more interested and read with intense interest about genetics, cell structure, regulatory systems, the repair mechanisms, signal processing and the host of other complex subroutines being performed in the simplest of life forms. What I find most extraordinary is the claim that code on a computer simulating the most basic of instructions can be evidence of full-blown evolution as pushed by NDEs. You are talking to one of the converted, a former evolutionary believer. And much like someone who quits smoking cigarettes, the awful smell of a once bad habit reminds one all too well why the habit was dropped. While it may "look cool" at one point in youthful experimentation among peers, the cost far outweighs any temporary benefit (peer acceptance), and as society awakens to new information, youth matures, and hindsight shines light upon the folly of past mistakes and uninformed or bad input. While I in no way place myself on the level of Dr. Sanford, or for that matter any scientist or PhD here, it was reading more information, not less, which transformed my opinion and influenced me to cross-check unanswered questions on evolution. The more I read, the more I find RM&NS cannot account for the diversity of life as we see it or its complexity on the micro levels and genetic code. The task at hand is enormous, and systems biologists all agree that much of the knowledge base and coordination must be systematically re-programmed for better understanding to unlock the code of life. I'm not the one saying this, but leaders in the fields. The simulation process is still directed, and not by simple materialist processes, but by complex design that includes conditional processing logic.
Anytime you allow variables into a system upon which that system must conditionally react to multiple inputs and levels, then pre-programmed responses are found which include the inherent capability to conserve itself. Simple replication and reproductive differentials do not merit extrapolation for all we see in cellular design and genetic code. What we see in life is that Code Conservation wins and mutations end. The simulation is too simplistic and does not account for accumulated intelligence or myriad external conditions. At the same time we see how precarious life forms are, that adaptation is limited and bounded, and that mutations do not suffice for life's morphology, but more often lead to extinction. These are plain, simple, straightforward observations. All laboratory experiments to this day tell this tale of woe for random mutation's limits and destructive force. If evolutionary lab results produced any observational, repeated outcome to the contrary, it would be trumpeted throughout the entire world. Our way forward in unlocking life's code is through the Design Paradigm. Simulations and mathematical concepts will apply, but not without overall design. Again, we are crediting random mutations with too much power, and confusing the designed purpose of bacteria's promiscuous role with that of complex multi-cellular organisms. Michaels7
taciturnus wrote:
Now “fitness” for Darwinism means “survival”. But survivability does not map to a single or even a small set of physical variables. It maps to a virtually infinite set of physical variables...So a Darwinian algorithm needs to do more than optimize for a prespecified physical variable. It needs to figure out what variable to optimize for.
Dave, Darwinian evolution doesn't need to "figure out" what variable to optimize for. Mutations occur across the genome. Whichever ones promote survival and reproduction are retained, regardless of what "physical variables" they affect. At no point is it necessary for evolution to single out a particular variable for optimization. For a genetic algorithm, the difference is that we know the precise problem we are trying to solve. The colors of the lines in a Steiner network are obviously irrelevant to its optimality, so the programmer doesn't bother to mutate color, focusing instead on the relevant variables. sophophile
Ofro, Relax, you were not my "target" for calling Salvador a liar, though certainly I see how you can make such an "assumption" since my initial aim signaled an address at first to you. My "assumption" was that all knew it was Thomas I was "targeting" in relation to the program's limits and his false accusation against Salvador. Thus I stated the following... "What was uncalled for is calling Salvador a liar. Anyone reading his initial premise understood exactly what he intended in his response and it was not misleading in the least. I think Dave's intentions, while I'm sure well intentioned at first, lead the lay person to believe design in the simulation..." The first line should read, "What was uncalled for is Dave Thomas calling Salvador a liar". Again, when we go on "assumptions", then information loss (in this case, one or two symbolic names) leads to false positives and even more "assumptions", which lead to chaos if intelligence does not intervene to provide corrective measures and address the specific misunderstanding between sender and receiver. A result of the random mutational loss of one or two keywords is corrected not through a materialistic blind mechanism, but through appropriate intelligent keywords being restored, symbols shared by both sender and receiver to resume a more directed and informed pathway of discussion away from misconstrued or uninformed "assumptions". In this case, for you and possibly others, symbols as names, though not hard-coded or hard-wired as in the original genetic datastream, do make a difference. Blind processes built upon materialistic assumptions can never lead to this obvious correction in our misunderstood communications. The mechanisms in place to ascertain your feedback, retrieve and coordinate an appropriate response, and send new symbolic data expand into enormous programmatic and conditional responses which can vary depending upon a willful recognition and anticipation of feedback from you, the original sender, or others. Our very misunderstanding, interaction, correction and recognition demonstrate that materialistic processes alone are woefully inadequate for the reactions to information found within and outside of us. We are simply at the very beginnings of the understanding of life's animated dance. And what a wonderful choreographed dance it is. Michaels7
Mark, I wouldn't expect Dave Thomas's algorithm to model all aspects of evolution. But when the selling point of Darwinian evolution is that it involves no intelligent intervention at all, it's not mere nitpicking to point out that the algorithm has been deliberately designed to optimize the one physical feature that has been specified a priori as equivalent to survival. That's like dismissing as nitpicking the objection that an alleged perpetual motion machine only needs a little bit of input energy to keep it working. Cheers, Dave T. taciturnus
steveh, I do apologize for not preserving what you wrote last night. I have opened a thread where, in the future, if I make an editorial decision, I can deposit your and others' work so you and others can have access to it. Sorry about what happened earlier, and I hope you'll take this as a sign that I hope to hear from you again. See: [off topic experiment] cutting room floor (version 1) Salvador scordova
Salvador, "What my position regarding fitness function adequacy is the following tautology: supplying a fitness function that will solve the problem is supplying a fitness function that will solve the problem, and supplying a fitness function that will not solve the problem is supplying a fitness function that will not solve the problem." Then may I conclude that the "supplying" is irrelevant? The fitness function is what it is, regardless of how it originated. If there is complex specified information in the function -- the function itself, not the implementation of the function -- then it is there to be detected and measured. If I paint a picture of the Cliffs of Dover, there is CSI in the painting, but presumably not in the cliffs. If I write code in a programming language to describe a function that sums the Euclidean lengths of line segments, the CSI in the code is not the CSI in the function. The GA gains information from the function, not the code implementing the function. There are infinitely many correct implementations of the fitness function, and the GA behaves identically with all of them. You make it sound as though the problem is extrinsic to the fitness function. The problem for the GA is to maximize the fitness function itself. Dave perhaps muddied the waters by introducing the notion of a Steiner solution too early. The problem is not to find a Steiner solution. The problem is to find a network that connects the fixed nodes with minimal sum of link lengths. The (un)fitness function could not be much more straightforward: If the network connects all fixed nodes to one another, return the sum of link lengths. Otherwise, return a high "length" value. The GA usually finds networks of high fitness, but rarely finds an optimal network (i.e., Steiner solution). "Because not all fitness functions will solve the problem, in fact the overwhelming majority of possible function will guide the algorithm away from the solution." I have proved that almost all fitness functions are algorithmically random, or very nearly so. A highly random fitness function does not guide toward or away from anything. Intuition might suggest that such a function is hard to optimize, but the opposite is actually true. See "Optimization Is Easy and Learning Is Hard in the Typical Function," http://members.cox.net/tom.english/cec2000.pdf Tom English
Re 33. Dave T. Your proposed test is interesting. But it needs to be truly analogous to evolution. The Darwinian approach is to generate objects (bit strings presumably) at random, subject them to the unknown algorithm, take those that survive, slightly mutate some of the survivors, and repeat indefinitely. What is the realistic alternative? As you say, the real conditions for survival are ever-changing and unpredictable. So it is unreasonable to suppose the designer knows what those conditions are. So the designer should design an object that will do well under an unknown algorithm. I wonder which approach has the best chance? Actually all the above is only marginally important (but kind of fun to make the comparison). The power of a programme like Dave Thomas's is not to simulate all aspects of evolution. It is simply to show that small mutations plus selection applied repeatedly can generate innovative solutions that give the illusion of design. I think it illustrates that point rather nicely. Mark Frank
"but I just wanted to make it clear I don’t think supplying a fitness function is tantamount to telling the GA how to solve the problem”. For this to be the case, it has to be the appropriate fitness function." I think the appropriate fitness function is given to us directly by the problem in this case. I apologise if the following comes across as patronising, it's not intended to be so, I'm just trying to explain the steps in my reasoning as clearly as I can: "Find the shortest network which connects all of a set of given points and any number of additional variable points" could IMO, be reworded as "Find the fittest network connecting a series of points, where "fit" is defined as follows: solution A is fitter than solution B if the total network length of A is shorter than that of B and all the points in A are connected". Have I added any new information in that formulation? ( I don't believe I have) Ok so let's say you, the human, find what you believe to be an optimum network using intelligence and I also provide a solution by some means, and we both have a list of line segments defined by endpoints - how will we decide who has won? Will we need to know the optimal solution in order to work out if your solution is better than mine? I suggest we will not. Also, we will have to agree on the meaning of some simple terms such as "length" and be agreed on how length can be calculated from end coordinates etc. but in agreeing these terms will we being be giving away the shape of the optimal solution? Is it cheating to build these basic definitions into the fitness function? Is Dave's fitness function adding information that we haven't agreed to in taking up the challenge, or different from a "function" we would use to determine if your solution is better than mine? Let's also do the same and compete to see who can find the best approximation to the sum of numbers from 1 to 1000. Could you rephrase the problem in a non-circular way? Can you a write a function which will judge if your solution is better than mine that doesn't have to implicitly or explicitly know the correct answer? Steve p.s. I'm not going to try and reconstruct my discarded post. My motivation is flagging. steveh
Ofro, I've been following this discussion without contributing, and now I think I understand what everyone says the problem is. Maybe I can explain it in a way that helps. Algorithms like Dave Thomas's do an excellent job of optimizing for a particular physical variable. In the case of the Steiner solution, the algorithm optimizes for shortest length or smallest area. And there is no doubt about the results: A computer running the algorithm can find an optimal solution faster than a human being. But notice that the choice of what physical variable the algorithm optimizes for is not decided by the algorithm but built into it by the programmer. With respect to the algorithm, the "most fit" solution is always the one that minimizes length or area. Now "fitness" for Darwinism means "survival". But survivability does not map to a single or even a small set of physical variables. It maps to a virtually infinite set of physical variables. For one organism, increasing length might increase survival, for another decreasing length might increase survival, for another survival might have nothing to do with length at all. In fact, it may involve an entirely novel physical characteristic, which is the point of evolution in a creative sense in the first place. So a Darwinian algorithm needs to do more than optimize for a prespecified physical variable. It needs to figure out what variable to optimize for. IDers think Dave Thomas is stealing a base algorithmically by designing the algorithm to optimize a specific physical feature, and then defining fitness as optimization of that feature. It should not be a surprise that a computer will outperform a human designer in these circumstances. Moreover, the same species will need to change the physical characteristics it optimizes as the environment changes. And, since Darwinian evolution is supposed to account for novel structures, the algorithm must somehow optimize for features that are not yet in existence, and only come into existence by the algorithm. If we wished to improve Dave Thomas's contest of algorithm vs human design by making it more realistic, we could frame the contest like this: Design an algorithm that will optimize for "survivability", with survivability meaning the ability to persist in an environment specified only when the algorithm is run. Examples of the specification of survivability in an environment at runtime would be:
- whatever calculates pi to the greatest number of digits survives.
- whatever adds the first 20 integers in the shortest time survives.
- whatever comes up with the optimal Steiner solution for a set of points survives.
- etc., etc.
The algorithm would be run and the human would do his best, and whoever maximizes the relevant criterion survives. The human could at least make a crack at solving the problem no matter what it is. The problem for the algorithm, of course, is that it must have prior knowledge of what the survivability criterion will be before it is even designed, let alone run. But no such a priori criterion for survivability exists for Darwinism. Therefore, all the artificial Darwinian algorithms suffer the same flaw insofar as they have designed-in optimization criteria that are not available in the real world. At least, that is what I think the problem is... Cheers, Dave T. taciturnus
Surely the fitness function corresponds to natural selection, which is an external force. The only difference here is that the fitness function doesn't change, which I guess would be the equivalent of the organism living in a stable environment. I imagine that many features of organisms could be said to have some kind of optimum value. Chris Hyland
comment by j: "What needs to be demonstated is Darwinian evolution doing what its supposed (by many) to be able to do. Darwinian evolution is blind/dumb/purposeless. For a program to demonstrate Darwinian evolution, it can’t be given goals, either explicitly or implicitly. The (right) fitness functions need to evolve, too." I am still not clear about the whole issue. It seems to me that you are implying that it is not possible to write a program that behaves in the manner postulated by blind/dumb/purposeless Darwinian evolution. In other words, one cannot write a code that behaves like this hypothetical, very mechanistic process because as soon as I write it, I have put in a non-Darwinian goal? It seems to me that the necessary conclusion would be that this falsifies a Darwinian mechanism a priori. That sounds like testing a null-hypothesis, and something tells me that this is not a valid test. ofro
Strangelove wrote (4): "How should genetic algorithms look if they are to accurately demonstrate evolution?" It's not a matter of demonstrating "evolution." Demonstrating evolution is easy -- Thomas's and Salvador's programs do so. Many implementations of algorithms for obtaining mathematical solutions can also be considered to demonstrate "evolution," too. But these are all teleological. What needs to be demonstrated is Darwinian evolution doing what it's supposed (by many) to be able to do. Darwinian evolution is blind/dumb/purposeless. For a program to demonstrate Darwinian evolution, it can't be given goals, either explicitly or implicitly. The (right) fitness functions need to evolve, too. j
Tom commented: If merely supplying a fitness function is tantamount to telling the GA how to solve the problem,
I don't believe "supplying a fitness function is tantamount to telling the GA how to solve the problem" is the case. I hope that is not the impression my writings gave, but if so, I should speedily clarify that that is not my position. What my position regarding fitness function adequacy is the following tautology: supplying a fitness function that will solve the problem is supplying a fitness function that will solve the problem, and supplying a fitness function that will not solve the problem is supplying a fitness function that will not solve the problem. Sorry for the redundant redundancy, but I just wanted to make it clear I don't think "supplying a fitness function is tantamount to telling the GA how to solve the problem". For this to be the case, it has to be the appropriate fitness function.
then why are there so many problems GA’s can’t solve well?
Because not all fitness functions will solve the problem; in fact, the overwhelming majority of possible functions will guide the algorithm away from the solution. scordova
Tom, I discovered the hard way that WordPress does not like the less-than-or-equal sign. I see you've made that discovery as well. :-) Salvador scordova
[continuing a post truncated by the blog software] Bill neglects to mention that I(B) is less than or equal to I(A). Necessity may eliminate in B some or all of the information in the antecedent A. Here A is the input string of bits, and B is the output chromosome, and B is necessary when A is input to the GA. That is, the GA is a deterministic algorithm. It follows by definition that I(A) = N bits. Under the zero-knowledge assumption that all chromosomes are equally likely to be output as the solution, I(B) = K bits. Clearly K is much less than N -- i.e., the number of bits in a chromosome is much less than the number of bits input over the entire GA run by the random number generator. The upshot is that the GA works by selectively eliminating information, not by generating information. Keep in mind that the population not only holds competitive individuals, but serves as the memory of the GA. At the time of reproduction-with-variation, new information is entered into the population (memory). At the time of selection of parents, non-parents are culled from the population (memory). To the degree that a culled individual is distinct from individuals remaining in the population, information is eliminated. The fitness function does not tell the GA how to find an optimal solution. Neither the GA nor the fitness function knows the optimal fitness value. The fitness function gives the GA information on the fitness of individuals currently in the population relative to one another, but not relative to the unknown global optimum. It is important to note that GAs do not perform well for all fitness functions. If they did, there would be a free lunch in optimization for GAs, contradicting a well-known theorem of Wolpert and Macready. Stuart Kauffman formulated the NK landscape as a tool for studying problem hardness. (Different settings of parameters N and K give fitness surfaces with different properties.) There is a large literature addressing what are known as GA-hard and GA-deceptive problems. If merely supplying a fitness function is tantamount to telling the GA how to solve the problem, then why are there so many problems GAs can't solve well? Tom English
Salvador, Irrespective of how good a model of natural evolution Dave's genetic algorithm is, it should be explainable in terms of design theory. Right? To simplify analysis, let's say that N bits are randomly generated (i.i.d. uniform) and stored in a file prior to each run of the GA. The GA's random number generator is modified to work by reading bits from the file. N is exactly the number of bits it will need in a run of the GA. Now the GA can be seen as a single-valued function mapping strings of N bits to strings of K bits, where K is the number of bits in the most fit chromosome of the final population. I hate relying on old work by Bill Dembski, but it seems more appropriate here than more recent work: "Because information presupposes contingency, necessity is by definition incapable of producing information, much less complex specified information. For there to be information there must be a multiplicity of live possibilities, one of which is actualized, and the rest of which are excluded. This is contingency. But if some outcome B is necessary given antecedent conditions A, then the probability of B given A is one, and the information in B given A is zero. If B is necessary given A, Formula (*) reduces to I(A&B) = I(A), which is to say that B contributes no new information to A. It follows that necessity is incapable of generating new information. Observe that what Eigen calls "algorithms" and "natural laws" fall under necessity." http://www.arn.org/docs/dembski/wd_idtheory.htm Bill neglects to mention that 0 Tom English
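A sketch in C of the setup Tom English describes might look like the following; the file name and interface are invented for illustration, and this is not code from the GA under discussion. Once every "random" decision is read from a fixed file of N pre-generated bits, a GA run really is a deterministic function from those N input bits to the K bits of the winning chromosome.

/* Sketch of a file-backed "random" source: the GA's randomness is fixed
   in advance, so a run is a deterministic map from input bits to output
   chromosome.  Illustrative only; names are made up. */
#include <stdio.h>
#include <stdlib.h>

static FILE *bitfile;

void open_bit_source(const char *path)      /* e.g. "random_bits.bin" */
{
    bitfile = fopen(path, "rb");
    if (!bitfile) { perror(path); exit(1); }
}

/* Replacement for the GA's random number generator: every "random"
   decision is read from the fixed file, never generated on the fly. */
int next_random_bit(void)
{
    int c = fgetc(bitfile);
    if (c == EOF) { fprintf(stderr, "out of bits\n"); exit(1); }
    return c & 1;                           /* one stored byte per bit */
}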
from the previous “Panda Food” Thread: ofro wrote: “What I don’t understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program.” scordova replied: “I’m afraid that isn’t quite correct because if you go to ga.c, and do a text search for 500500 you won’t find it. The solution was never explicitly stored anywhere.” I would have eventually figured it out myself that the Gauss solution to the problem was part of your program, but your admission makes it easier. I have to say now that your reply was anything but “truth and nothing but the truth”, and more like “I have sinned through my own fault, in my thoughts and in my words, in what I have done, and in what I have _failed to do_.” I am convinced now that at that point you wanted to “code-bluff” me (see https://uncommondesc.wpengine.com/index.php/archives/1449). And I am not pleased.
Ofro, I was not trying to be mean or demeaning. I was only trying to point out that the solution can be implicitly stored in a program (analogous to driving directions); it does not have to be explicitly stored (analogous to an explicit street address). This fact makes a lot of GA theatrics possible, where nothing is explicitly stated, but the answers are lurking, and so diffuse that short of running the program, one will not see that the program has the solution implicitly built in. The most basic example was brute.c, where one might not off the top of one's head know the answer, but one would know a proven strategy (simply adding all the numbers) that would succeed, and hence one could program an implicit solution which would eventually reveal an explicit solution. GAs will often reveal solutions which we do not know in advance, even though the answer is effectively stored in the definition of the search strategy; the explicit answer often only appears during execution. We may have a thousand random numbers whose sum we don't know in advance. A program with the appropriate solution-seeking strategy, however, can find the answer. The answer is effectively snuck in by matching the right problem-solving strategy to the right problem. scordova
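For contrast, the "proven strategy" mentioned above fits in a few lines. This is only a sketch in the spirit of brute.c as described here, not the original file; the string 500500 appears nowhere in the code, yet the strategy guarantees that it is what gets printed.

/* Sketch in the spirit of brute.c: the answer is implicit in the
   strategy (add 1 through 1000), never stored explicitly. */
#include <stdio.h>

int main(void)
{
    long sum = 0;
    for (int i = 1; i <= 1000; i++)
        sum += i;                  /* implicit route to an explicit answer */
    printf("%ld\n", sum);          /* prints 500500 */
    return 0;
}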
BC wrote: Nature guides the evolution of organisms by killing off the organisms that have bad mutations and proliferating the ones that have good ones.
That is not completely accurate. For starters, bad and good are teleological conceptions; Nature has no conception of design teleology. This again is Darwinian doublespeak, not science.
Consider the eyes of cave organisms who live in total darkness. If eyes are expensive to make, selection can wreck their exquisite engineering just as surely as it built it. An optic nerve with little or no eye is most assuredly not the sort of design one expects on an engineer’s blueprint, but we find it in Gammarus minus. Whether or not this kind of evolution is common, it betrays the fundamental error in thinking of selection as trading in the currency of Design. Allen Orr
Lewontin further points out the futility of using fitness to define the value of inherent functionality, in the SFI Bulletin, 2003:
It is easy to say that fitness of a type is its “relative probability of survival and reproduction” but turning that phrase into a coherent measure that can do work in evolutionary explanation is not so easy.... The problem is that it is not entirely clear what fitness is. Darwin took the metaphorical sense of fitness literally. The natural properties of different types resulted in their differential “fit” into the environment in which they lived. The better the fit to the environment the more likely they were to survive and the greater their rate of reproduction. This differential rate of reproduction would then result in a change of abundance of the different types. In modern evolutionary theory, however, “fitness” is no longer a characterization of the relation of the organism to the environment that leads to reproductive consequences, but is meant to be a quantitative expression of the differential reproductive schedules themselves. Darwin’s sense of fit has been completely bypassed. ... How, then, are we to assign relative fitnesses of types based solely on their properties of reproduction? But if we cannot do that, what does it mean to say that a type with one set of natural properties is more reproductively fit than another? This problem has led some theorists to equate fitness with outcome. If a type increases in a population then it is, by definition, more fit. But this suffers from two difficulties. First, it does not distinguish random changes in frequencies in finite populations from changes that are a consequence of different biological properties. Finally, it destroys any use of differential fitness as an explanation of change. It simply affirms that types change in frequency. But we already knew that. Richard Lewontin, 2003
scordova
steveh asked: Could you mail a copy back to me please, so I can post it at ATBC or PT?
I tried to find out if the system stored it anywhere. I did not see it, so I won't be able to get you a copy. I'm sorry. I do appreciate, however, your efforts at posting here. scordova
16 Michaels7 says: “What was uncalled for is calling Salvador a liar.” When did I call him a liar? What does this statement have to do with what I said in my posting #12, to which you are responding? Perhaps it was my earlier post #3 where I accused him of tiptoeing around the facts even though he knew what I meant? If anybody was not honest with the truth in this context, I'll have to pass the compliment on to Salvador Cordova. “Anyone reading his initial premise understood exactly what he intended in his response and it was not misleading in the least.” Excuse me? He knew from an earlier post that I had limited programming experience. And he triumphantly trumpeted afterwards, in the introduction of this thread, that he “fully [took] pride in the smoke and mirrors I used, I never pretended otherwise.” I would have remained quiet, but he didn't need to add insult to injury. “I think this is misleading to the average person on the street without software knowledge.” So maybe I am that average person on the street who hasn't programmed since the days of Fortran or earlier. At least I have enough biological expertise to know how an evolutionary mechanism works in Nature and to know what to expect from a program that is supposed to mimic this mechanism. Your apologies are gladly accepted. ofro
Rats! That was the first time I didn't save a copy (due to the accidental early submission). Could you mail a copy back to me please, so I can post it at ATBC or PT? Failing that, I'll try and reconstruct it from memory and you may feel free to indicate if I have made significant alterations. However, as before, this won't happen before tomorrow (actually this) evening (EU). I would be interested to hear what those alleged misrepresentations were. You may post them as an example of Evolutionists' misrepresentations if you like. (saved). steveh
steveh, I thought your last round wasn't very productive to the discussion, so I didn't publish your latest. Thank you, however, for the time you spent trying to respond, but I didn't think it added to the discussion. Sorry it didn't make the editor's cut. Your offerings were replete with misrepresentations. I feel I'm under no obligation to give them air time. Salvador scordova
steveh wrote: Dave Thomas showed exactly how your program was a disguised version of the Gauss formula.
Oh gee do you think he thought it was disguised by statements where I pretty much pointed out something was disguised? Such as when I said:
Rather than compute the midpoint via a simple calculation ... The following [are] computational theatrics
Sheesh! A solution can be disguised in software such that it's not explicit. The solution can be snuck in without being explicit. That was the whole point of my software examples.
steveh wrote: You have not done the same for his.
So what? Showing the disguise in plain English is only a sufficient, not a necessary, condition for establishing that an answer is being snuck in. An alternative method is code knockout or replacement to show the answer is snuck in. Besides, I don't see his mathematical formula explicitly stated in my code, do you??? So what if Dave's strategy is so novel there is no analogue in terms of the work of a known mathematical genius like Gauss? Does that detract from the design Dave had to perform to make the code work? Does establishing design require that I make some English-language summary of what the code does versus restating every last bit of Dave's code? Do you think his fitness function and the rest of the code were not assembled with the purpose of finding Steiner solutions? Do you think a Chimpanzee can even describe the algorithm in pseudo-code? Pull the snippet I identified, and let me know if you think it will still guide to target. How about you replace the fitness function with its reverse (choosing LARGER solutions), and tell me if it still guides toward the desired optimal solution so easily. How about you replace this:
double dx = xP[k] - xP[j]; double dy = yP[k] - yP[j];
with this:
double dx = xP[k] * xP[j]; double dy = yP[k] * yP[j];
Tell me what you think will happen. :-) Or how about replacing fitness() with a random number generator? What do you think will happen? If it no longer guides to target, that shows he's sneaking the solution in partly through that code.
steveh wrote: The FORTRAN snippet you presented doesn’t seem to include the code for determining if all points are connected to the same network.
Because I didn't feel like pointing out that his whole program is permeated with design, and all those other details help sneak the answer in. I pointed to the highlights. To be technical, almost all of his software is part of the method for sneaking in the solution. As I pointed out, he is specifying a strategy, much like giving driving directions instead of giving an explicit address. He's being highly disingenuous if he's suggesting his answer is snuck in at one spot, when in fact it is practically the whole program (minus a few areas), when executed, that constitutes sneaking the answer in. I merely tried to humor his leading question to some extent by highlighting where the selection choice most strongly reflected his design goals. In truth, most of his entire program is what sneaks the solution in, but I pointed to the highlights. Whether his method can be compactly described, as mine was, is beside the point. In fact, I chose an example that could be compactly described and has been known for a long time so that there would be transparency in what is happening. I even wrote in the comment section of my code to make it transparent where I was adding some theatrics:
Rather than compute the midpoint via a simple calculation it takes a random number as a starting point and then mutates the random number and uses a fitness function to select between the mutant and the original number to give the current best midpoint estimate.
Which means, rather than add 2 numbers and divide by 2 to get the midpoint, I went via an extremely circuitous, Rube Goldberg-type route. Despite all that, Thomas still accuses me of lying, when in fact I pointed the reader to what exactly was being done and where something was purely for show, to illustrate the shenanigans that could be committed against the unsuspecting. The first programs were very direct and transparent, and I contrasted them with some that were deliberately obtuse, and which were identified as such. The point was to encourage readers not to be too persuaded by computational theatrics. scordova
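To make the contrast concrete, here is a sketch in C of the two routes to a midpoint described above: the one-line direct calculation versus a mutate-and-select loop. This is an illustration of the idea, not the original ga.c; note how the "fitness" test quietly encodes the very answer being sought.

/* Sketch of the contrast described above (not the original ga.c).
   Direct route: one line of arithmetic.  Theatrical route: start from a
   random guess, mutate it, and keep whichever guess is "fitter". */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double midpoint_direct(double a, double b)
{
    return (a + b) / 2.0;                   /* add two numbers, divide by 2 */
}

double midpoint_theatrical(double a, double b, int generations)
{
    double best = (double)rand() / RAND_MAX * (a > b ? a : b);    /* random start */
    for (int g = 0; g < generations; g++) {
        double mutant = best + ((double)rand() / RAND_MAX - 0.5); /* mutate */
        /* "fitness": |2m - a - b| is zero exactly at the midpoint,
           so the selection criterion smuggles in the answer */
        if (fabs(2.0 * mutant - a - b) < fabs(2.0 * best - a - b))
            best = mutant;                  /* keep the fitter candidate */
    }
    return best;
}

int main(void)
{
    printf("direct:      %f\n", midpoint_direct(250.0, 750.0));
    printf("theatrical:  %f\n", midpoint_theatrical(250.0, 750.0, 100000));
    return 0;
}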
The program is told to randomly search for a minimal length under a given set of rules, so the program has no idea about the goal of the programmer.
I’m afraid that is horribly incorrect. The program reflects the goals of the programmer; it is an expression of his goals (hopefully anyway, for human programmers). It is not randomly searching; the search is assisted in the way the programmer describes.
No, ofro is right. The fitness function does "guide" the evolution of the organisms. However, it does this by proliferating the organisms that do the best job of meeting the goal. The goal, whatever it happens to be, is not as important as the mechanism: allow some subgroup of organisms to proliferate while another subgroup dies, and the whole species moves in the direction of certain mutations (those mutations might be the ones you want in order to meet your goal, or, in nature, they might be the mutations that help the species survive). If camouflaged organisms survive better than uncamouflaged ones, then the entire species will become progressively more camouflaged with each generation. The mutations introduce novelty to the species, and the survival/reproduction numbers determine whether that mutation is eliminated in the species or proliferates in the species. The GA's goal is simply used to hand out death and proliferation to the individual organisms. Nature has its own way of handing out death and proliferation to organisms (do you disagree with this?)
Salvador’s point still stands, the mechanisms are in fact seeking a solution
The mechanism is to allow a subgroup of the population to reproduce better than another subgroup. We, as GA programmers, guide the evolution of the species to a particular goal by killing off the organisms that are moving away from our particular goal. Do this iteratively, and you end up tuning the species to accomplish your goal. Nature guides the evolution of organisms by killing off the organisms that have bad mutations and proliferating the ones that have good ones. Do this iteratively, and you end up with species that have a lot of beneficial mutations (camouflage, the ability to run quickly, etc.)
I think Dave's intentions, while I'm sure well intentioned at first, lead the lay person to believe design in the simulation is not present when in fact guidance and conditional processing are done. I think this is misleading to the average person on the street without software knowledge.
Nature supplies the guidance and conditional processing for biological organisms. I think Sal's presentation is misleading to laypersons who don't understand the software. I am saying this as a software engineer with decades of experience. BC
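The mechanism BC describes fits in a short loop. The following generic sketch in C uses a made-up one-number "genome" and a programmer-chosen target of 42; it is not any of the programs discussed here, only an illustration of mutate, keep the fitter half, repeat.

/* Generic sketch of mutate / select / repeat.  The "environment" is the
   programmer-chosen fitness function; the population is tuned toward
   whatever that function rewards.  Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POP 20

static double fitness(double g)
{
    return -fabs(g - 42.0);                 /* closer to 42 is "fitter" */
}

static int by_fitness(const void *a, const void *b)
{
    double fa = fitness(*(const double *)a);
    double fb = fitness(*(const double *)b);
    return (fa < fb) - (fa > fb);           /* sort fitter individuals first */
}

int main(void)
{
    double pop[POP];
    for (int i = 0; i < POP; i++)
        pop[i] = (double)rand() / RAND_MAX * 100.0;        /* random start */

    for (int gen = 0; gen < 200; gen++) {
        qsort(pop, POP, sizeof pop[0], by_fitness);
        for (int i = POP / 2; i < POP; i++)                /* bottom half dies, */
            pop[i] = pop[i - POP / 2]                      /* top half breeds   */
                     + ((double)rand() / RAND_MAX - 0.5);  /* with mutation     */
    }
    qsort(pop, POP, sizeof pop[0], by_fitness);
    printf("fittest genome after 200 generations: %f\n", pop[0]);
    return 0;
}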
ofro states, "So there has to be a more specific goal of some form. This higher-level goal was to write the code such that it mimicked, as well as possible for simplifying computer model, a certain process, namely the randomness that is assumed to underlie mutations in nature." And this is the problem in a nutshell. The "assumptions of each side are different, therefore we talk across each other with little insight at times. In Daves case the assumption is that it would lead to evolution and survival. Yet it is guided by conditional processing - which is itself a hallmark of design. Snowflakes do not conditionally decide where to fall or when to melt and therefore cannot adapt to external forces. A snowflake cannot avoid the human boot about to step on it like a lizard, fly or any number of life forms. Living organisms can and this is the assumptive difference. One side believes material forces alone can explain adaptive technology of life forms. The other recognizes only intelligence can adapt and therefore program adaptable objects. Salvador's point still stands, the mechanisms are in fact seeking a solution, though they are not "hard-coded" which in fact he never stated. You cannot have unguided - yet guided without design not touching it. This confuses snowflakes with life. Its not that simple. Snowflakes do not repair themselves in order to conserve the original copy. "So here is where I think the “goal” stops". The goal preceded the outcome. Intelligent agents designed code which "mimicked" various solutions to reach an endpoint. However, this does not prove evoulution works the way the program does. It does show interesting solutions to "variations" within species. I think its a key point to focus on adaptation within boundaries. This is still largely a guessing game, although I appreciate the effort. What was uncalled for is calling Salvador a liar. Anyone reading his initial premise understood exactly what he intended in his reponse and it was not misleading in the least. I think Dave's intentions while I'm sure well intentioned at first lead the lay person to believe design in the simulation is not present when in fact guidance and conditional processing are done. I think this is misleading to the average person on the street without software knowledge. Michaels7
Dave Thomas showed exactly how your program was a disguised version of the Gauss formula. You have not done the same for his. The only fitness function you have identified measures the length of a candidate network. Knowing how to measure that length does not give you a shortcut to finding a near-optimum or novel solution. You've seen the challenge, you've seen the 'disguised FORTRAN solution'; if you don't understand FORTRAN, I've put a C version at http://altsteve.nfshost.com/fitness.c If that doesn't give the game away, you can also get that length by drawing the solution on paper and using a ruler, or you could build a wire model of the candidate and weigh it. Please, could you show me how that "secret insider knowledge" sneaks in a solution? The FORTRAN snippet you presented doesn't seem to include the code for determining if all points are connected to the same network. If you draw the candidate on paper you can do that quite easily by eye. If you build a model, you can move the model and see if any bits stay behind. But I'm giving the game away. Sorry Dave. steveh
ofro wrote: The program is told to randomly search for a minimal length under a given set of rules, so the program has no idea about the goal of the programmer.
I'm afraid that is horribly incorrect. The program reflects the goals of the programmer; it is an expression of his goals (hopefully anyway, for human programmers). It is not randomly searching; the search is assisted in the way the programmer describes. I appreciate the time you gave for a lengthy reply, but it is wrong on a very important point on which much of the whole ID debate hinges. Let us be exceedingly generous and hypothetically assume selection was used to create the design of life. On that assumption, would those selection pressures have to be carefully tuned, as in designed? The answer seems to be yes, and that is the ID position. That is the theme of The Displacement Theorem. Thomas tried to refute that, but he failed miserably because his selection code was permeated with intelligent design. To model a random selection force would be to have a random number generator randomly pick a solution. But even that would be too generous, because it assumes mutation can even create a selectable solution in the first place! Thank you and Strangelove, however, for trying to answer my queries. scordova
I was going to reply, but I think ofro's post is good enough. Strangelove
#7. scordova: "Ofro, Strangelove, Please answer these questions, 1. Do you think Thomas's evolutionary algorithm would have worked if he did not have the goal in mind when writing the software?" I guess I first have to know what you mean here by "goal". Obviously, the basic goal was to write some code. Strictly speaking, if Thomas didn't have that goal in mind there would not be any program at all. This would be the first (trivial) answer to your question. So there has to be a more specific goal of some form. This higher-level goal was to write the code such that it mimicked, as well as possible for a simplifying computer model, a certain process, namely the randomness that is assumed to underlie mutations in nature. So here is where I think the "goal" stops. The program is told to randomly search for a minimal length under a given set of rules, so the program has no idea about the goal of the programmer. However, these rules in no way prejudice the program, except for the prejudice not to come up with nonsensical results like turning on the printer or not doing anything at all. Let's draw the analogy between program and natural mutation/selection. According to the evolution theory, mutations occur randomly. The biochemical reactions that lead to mutations may "know" how they happened, such as after a dose of radiation or a misread nucleotide by the polymerase, but they don't know any more about a goal than the program's random generator. Instead of searching for a length minimum in a geometrical problem, in Nature the outcome is that the individual survives (or not) or has other means to have more (or less) progeny. Asking whether the program would work without a goal is similar to asking whether nature has a goal. So the answer to your question depends on how you view evolution itself. Does the evolution process have a "goal in mind"? "Goal" for me excludes or at least greatly restricts randomness; I guess it can also be referred to as "bias". So I have to ask how randomness can be excluded in nature: Is there no such thing as a random mutation that can change the phenotype of an individual? Is there no such thing as different organisms with different abilities to survive (due to mutational differences in some phenotype)? If you answer with "there is absolutely no such thing" to both questions, then the necessary conclusion from this premise of yours is that Thomas' program is not appropriate, since it did not model your picture of nature. ofro
For the record, I had one of Dave Thomas's burgers today and it was remarkable. Scott
"Another thing to point out in regards to GAs is that a lot of the specificity comes in just in the selection of variables to modify." Right, johnnyb. All these points make GAs poor models of genuinely random evolution. And we could also add the huge difference between genotype and phenotype natural selection, the infeasibility of complexity hierarchy, etc etc ... K. kairos
[Dave writes:] As an exercise in Smoke and Mirrors, Cordova’s algorithm is remarkable
Well Dave, I must credit the masters of computer smoke-and-mirror trickery who served as my role models: you, Richard Dawkins, Chris Adami, Lenski, Robert Pennock, Elsberry & Shallit, and the rest of the evolutionary community -- you all showed me how it's done, and I'm flattered that you would compliment me on my emulation of your achievements. Although, I must admit, I feel like I'm fulfilling the role of The Masked Magician in showing that your magic tricks are the products of intelligent engineering, not magic after all. scordova
Another thing to point out in regards to GAs is that a lot of the specificity comes in just in the selection of variables to modify! Most people don't think about this, but in fact this is a huge part of the teleology in any program. When you restrict the answer to only a few given quantities to vary, that is completely different than if you went the atelic route of selecting quantities to vary at random, hoping that the quantity you need to vary is even in your list of possibilities! johnnyb
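johnnyb's point shows up directly in how a mutation operator is typically written: the programmer decides in advance exactly which fields are allowed to vary. A hypothetical sketch in C (the structure and names are invented for illustration):

/* Hypothetical sketch: the mutation operator itself encodes a choice of
   which quantities may vary.  Only the variable nodes' coordinates are
   touched; the fixed nodes, the number of nodes, and the fitness
   criterion are all off limits by design. */
#include <stdlib.h>

#define NVAR 4   /* number of movable nodes, chosen by the programmer */

struct chromosome {
    double var_x[NVAR];   /* the only fields mutation may change */
    double var_y[NVAR];
};

void mutate(struct chromosome *c, double step)
{
    int k = rand() % NVAR;                               /* pick one node */
    c->var_x[k] += ((double)rand() / RAND_MAX - 0.5) * step;
    c->var_y[k] += ((double)rand() / RAND_MAX - 0.5) * step;
}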
Ofro, Strangelove, Please answer these questions: 1. Do you think Thomas's evolutionary algorithm would have worked if he did not have the goal in mind when writing the software? 2. Do you think the reason his software succeeded is attributable to his premeditated efforts? Salvador scordova
"I don’t understand this. Why does the fitness function have to be random, chimpanzee programmed, or evolve itself? Doesn’t the fitnes function represent a particular environment that the evolving creature survives in, and the pressures it faces?" The problem is that the fitness function cannot be directly related to the problem being solved. Natural selection simply says the animal must survive or be better at reproduction, but somehow guides processes such as eye formation. Therefore, in a fitting scenario the selection algorithm should not directly correspond with what you are searching for, in fact there should be a large disconnect. Without such a disconnect, what you are doing is moving the design to the environment. You aren't getting rid of design, you're just moving it around. Take Avida, for instance. Without carefully crafting the fitness landscape, nothing at all evolved. This was even in their silly-simple little environment. Also, what is missing in _all_ of these examples, is the ability for the copying function to change by the same mechanisms that everything else does, which would be required for a Darwinian scenario. johnnyb
Ofro wrote: I would have eventually figured it out myself that the Gauss solution to the problem was part of your program, but your admission makes it easier.
Well, there was a pretty blatant hint in that one of the programs was called gauss.c! I pointed out what explicit meant in my usage: it means explicitly using the string "500500" in the program. The route toward that outcome was indirect and inexplicit, or shall I say implicit. It was to help the readers see what gimmickery can be employed in attempts to use genetic algorithms to support the Blind Watchmaker thesis. Thomas invited readers to point out where he was explicitly specifying the target. Well, it depends on what one means by "explicitly specifying", doesn't it? One can specify the target implicitly by using a strategy rather than giving the exact locations. I can tell you where the White House is via:
1. travelling directions from where you are
2. giving you the exact address
3. giving Latitude and Longitude
4. visiting a website that will give you one of the above
There are many roads to Rome, so to speak. A GA is but one of many ways to reach a destination. At issue is whether the GA requires intelligence to infuse it with sufficient specificity to hit a target. Thomas has not proven mindless forces can successfully implement an evolutionary route. For evolution to succeed via mutation and selection, it still requires intelligent design. But who is to say that an evolutionary algorithm was the route which intelligence had taken? I demonstrated 5 ways the same answer could be arrived at via intelligence. explicit.c was analogous to special creation and ga.c to prescribed evolution, but all 5 programs (explicit.c, brute.c, recurs.c, gauss.c, ga.c) still required intelligent design to hit the target; all required a prescribed plan from the beginning. Salvador scordova
"Well, did he have some Chimpanzee create the fitness functions in his software for him?" I don't understand this. Why does the fitness function have to be random, chimpanzee programmed, or evolve itself? Doesn't the fitnes function represent a particular environment that the evolving creature survives in, and the pressures it faces? Perhaps you can answer the question with an example. How should genetic algorithms look if they are to accurately demonstrate evolution? Strangelove
scordova (see above): "I fully take pride in the smoke and mirrors I used, I never pretended otherwise." from the previous "Panda Food" Thread: ofro wrote: "What I don't understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program." scordova replied: "I'm afraid that isn't quite correct because if you go to ga.c, and do a text search for 500500 you won't find it. The solution was never explicitly stored anywhere." I would have eventually figured it out myself that the Gauss solution to the problem was part of your program, but your admission makes it easier. I have to say now that your reply was anything but "truth and nothing but the truth", and more like "I have sinned through my own fault, in my thoughts and in my words, in what I have done, and in what I have _failed to do_." I am convinced now that at that point you wanted to "code-bluff" me (see https://uncommondesc.wpengine.com/index.php/archives/1449). And I am not pleased. ofro
Thanks, Scott. Isn't it charming how pastor Dave Thomas feeds his flock the standard line that his program illustrates the miracle of evolution? His congregation over at yonder Pandas Thumb website swallows his magic tricks uncritically. "Wow pastor Dave, isn't it wonderful what the miracle of evolution can do in your computer program. From this day forward, I put my faith and trust in Charles Darwin, and I'll even donate to the cause by buying a Darwin Bobble head." scordova
It's easy to see their desperation in trying to demonstrate that these simulations represent what actually happens in nature. People who are terribly insecure about the notion that there might be a designing intelligence behind it all, rather than the Steamboat-era mythology they've held so dear to their hearts for so long, will go to great lengths to support their faith. Here are some other discussions from this blog which discuss the silliness of such simulations: https://uncommondesc.wpengine.com/index.php/archives/907 https://uncommondesc.wpengine.com/index.php/archives/802 https://uncommondesc.wpengine.com/index.php/archives/166 Scott
