Uncommon Descent Serving The Intelligent Design Community

Tautologies and Theatrics (part 2): Dave Thomas’s Panda Food


(This also serves as a partial response to a formal request for a response fielded by the UDers’ mortal enemies, the Pandas, specifically Dave Thomas in Take the Design Challenge!)

This is part 2 of a discussion of evolutionary algorithms. In (part 1): adventures in Avida, I exposed the fallacious, misleading, and over-inflated claims of a Darwinist research program called Avida. Avida’s promoters claim to have refuted Behe’s notion of irreducible complexity (IC) with their Avida computer simulation. I discussed why that wasn’t the case. In addition, I pointed out that Avida had some quirks that allowed high doses of radiation to spontaneously generate and resurrect life. Avida promoters like Lenski, Pennock, and Adami were too modest to report these fabulous qualities [note: sarcasm] of their make-believe Avidian creatures in the make-believe world of Avida. One could suppose they refrained from reporting these embarrassing facts about their work because doing so would have drawn the ridicule the project duly deserves from the scientific community.

In contrast to the spectacular computational theatrics of Avida, Dave Thomas of Panda’s Thumb put together a far less entertaining but no less disingenuous “proof” of the effectiveness of Darwinian evolution. Every now and then, the Panda faithful need some food to help them sustain their delusions about naturalistic evolution. This food I call Panda food, and chef Dave Thomas cooked up a pretty nice recipe to feed delusions to the faithful Pandas at our rival weblog. Perhaps if Dave Thomas refines his Panda food recipes, he should consider opening a restaurant chain, and maybe he should call it Panda’s.

To introduce what is at stake, I first present the idea of known explicit targets and unknown but desired targets. An explicit known target is a target which we can clearly see, describe, and precisely locate. An example of such a target would be the bull’s-eye an archer aims for.

We can build machines to help us hit such explicit targets. A good example of such an intelligently designed explicit-target hunter is the infrared-guided Maverick missile.

[Image: IR Maverick]

When on a mission to destroy something like a tank, the aircrew tasked to fly the mission locates the explicit target (i.e., a tank) and then describes the target to the missile through the process of designation (a process analogous to a point-and-click on the aircrew’s video screen). Upon launch, the missile employs a feedback-and-control strategy very akin to classical control theory to home in on the target.

But those are examples of hitting explicit targets. What about unknown but desired targets? Let me call such targets “targets of opportunity”. A target of opportunity is the kind of target we know only implicitly but still seek after. A good example of such a target of opportunity would be a deer in the forest during hunting season. Hunters have a general strategy for tracking and hunting deer, but they don’t know in advance exactly what their target will be (be it Bambi or Bambi’s mother, for example). We don’t know what kind of game we may or may not bag, just that we have a general idea of what we’re striving after.

Does the military have human/machine systems with “target of opportunity” capability? Ahem. Even if I did know of such things, I’d have to deny the existence of missiles like the SLAM-ER Target-of-Opportunity Missile.

[Image: SLAM-ER]

In engineering and other human endeavors, many solutions can be thought of as the product of hunting down targets of opportunity. Sometimes, when confronted with a problem, we have a strategy we know in advance will yield a solution even before we explicitly know what the solution is.

A VERY simple case in point: take the integers from 1 to 1000. The following question is posed to us: “What is the sum of the integers from 1 to 1000?” Do we have to know in advance what the answer is? Maybe, maybe not. I’ll cheat and give you the answer. It’s 500,500.

The important point is that even if you did not know the answer (the target of opportunity) in advance, you have well-proven strategies to find and hit the target. One such strategy would be to sit down with a calculator or spreadsheet and add the numbers from 1 to 1000. Another would be to write a computer program which adds them together. Yet another would be to write a genetic algorithm to find the answer. I’ll provide several such examples at the end of this essay for you computer geeks out there! But the most important thing in hitting such a target of opportunity is that by intelligently designing the right strategy, one can hit a target of opportunity without the target being explicitly described. Get the picture?

Adding numbers is a very primitive example of hunting down a target of opportunity. A far more sophisticated example is finding the optimal design of a computer chip given certain constraints. The space of possibilities is extremely large, but engineers can program genetic algorithms (much as they build sophisticated calculators) to hunt down solutions on their behalf.

Back to the Pandas’ challenge to me. To build their case, anti-IDers will often need to equivocate and obfuscate the issues. Clarity is their enemy; confusion is their friend. Such was the recent offering by Dave Thomas of the Pandas in a long, tedious essay, Target? TARGET? We don’t need no stinkin’ Target!

He shows how a genetic algorithm can hunt down a target of opportunity. But as I hope I’ve shown, such a thing is unremarkable! However, he hints that his program demonstrates mindless forces can find such targets without intelligent design.

Dave employs equivocation and Orwellian doublespeak to argue his case. He takes a designed selection strategy and tries to pass it off as an example of mindless, undesigned forces which can magically converge on a target of opportunity. How does he promote his theatrical gimmick? Read what he says, and then read the challenge he poses to IDers:

Genetic Algorithms are simplified simulations of evolution that often produce surprising and useful answers in their own right. Creationists and Intelligent Design proponents often criticize such algorithms for not generating true novelty, and claim that these mathematical recipes always sneak the “answer” into the program via the algorithm’s fitness testing functions.

There’s a little problem with this claim, however. While some Genetic Algorithms, such as Richard Dawkins’ “Weasel” simulation, or the “Hello World” genetic algorithm discussed a few days ago on the Thumb, indeed include a precise description of the intended “Target” during “fitness testing” of the numerical organisms being bred by the programmer, such precise specifications are normally only used for tutorial demonstrations rather than generation of true novelty.

I have placed the complete listing of the Genetic Algorithm that generated the numerous MacGyvers and the Steiner solution, at the NMSR site.

If you contend that this algorithm works only by sneaking in the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed.

Thomas sneaks the answer in by intelligently designing a strategy which will find the target of opportunity. This sort of gimmickry is not much beyond the following illustration:

One kid goes up to another with a paintball gun, shoots him, and says,

“Don’t get mad, I wasn’t aiming at you, I was aiming at the shirt you were wearing.”

[Image: bull’s-eye shirt]

By giving the computer the correct strategy (like a method of adding numbers), one guarantees the answer (or target) will be hit, or at least narrowly missed. There are numerous strategies which will succeed, but they still must be intelligently designed. For the less technically minded readers, I hope what I’ve written so far gives a narrative explanation of what’s really going on.

To get an idea of how easy it would be to give the wrong search strategy, consider a long sequence of driving directions. If even one occurrence of the word “left” is substituted for “right,” or vice versa, the directions will fail. Without intelligence programming the selection strategy, the target would have been missed in Dave’s program. However, Dave Thomas used intelligence to ensure a miss wouldn’t happen, or at least would be less likely. He thus snuck the answer in after all, contrary to his denials.

In the postscript, for the benefit of the technically minded readers, I’ll address the more technical details to help put all of Dave’s nonsense to rest.

Salvador Cordova

PS

TECHNICAL DETAILS

Dave’s Challenge:

If you contend that this algorithm works only by sneaking in the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed.

I’ll identify it plain and simple, and call his bluff. The major frontloading is in how selection is made. With the wrong selection description, the wrong target of opportunity, if any, will be hit. Simple!

Dave counts on a bit of obfuscation to make his work unreadable. He chooses an antiquated computer language known as FORTRAN in which to make his demands. “Let’s invite UD software engineers to read my hieroglyphics and invite them to show where I sneaked the answer in!” Sheesh.

That said, I will identify an important part of his barely readable code which, if removed, will cause the genetic algorithm to miss the target. The fact that this section is essentially irreducibly complex is testament that intelligent design was needed to enable the genetic algorithm to do its thing.

If any section is even slightly rewritten in a mindless way, the program likely misses the target at best and fails to even compile at worst. I’m sorry the following link will look like hieroglyphics to some, but of necessity I need to show it to call Dave’s bluff. Here is one of the many places where Dave sneaks the answer in:

Dave Thomas’s Code Bluff

Does Dave Thomas doubt that I’ve identified where he snuck the answer in? How about we allow 5 random changes to the code segment I pointed to? Does he think such mindless modification can be introduced and the algorithm will still function? Do we think the GA will successfully hit the target (assuming the GA can even run) in the midst of 5 measly random changes? Will Dave run away from the fact that the above selection strategy needs intelligent design? Or will he represent that the above code segment came to be of its own accord, and that the selection strategy described by the above code is the product of blind, mindless processes? Will he continue to insist that what he did is not sneaking the answer in?

The selection strategy in his program is anything but natural. Just because the terms Darwinian and selection are used in the argument does not mean intelligent agency is not permeating the entire project. Such labelings are doublespeak. If I went through and relabeled everything intelligently designed selection vs. natural selection, you’d get the real gist of what’s happening!

All right, as I promised, I’ll now present several ways to add the numbers 1 to 1000 and get the answer 500,500. With the exception of the first program, in each case the target answer will not be an explicitly stated target, but rather a target of opportunity which is hit via an intelligently designed hunting strategy.

The sample programs are written in the C language.

This program will give the explicit answer to the question, “What is the sum of the numbers from 1 to 1000?”:

explicit.c

This program will give the answer to the question, “What is the sum of the numbers from 1 to 1000?” through a brute-force computation which involves adding all the numbers from 1 to 1000:

brute.c

This program will give the answer to the question, “What is the sum of the numbers from 1 to 1000?” through Gauss’s summation formula, n(n+1)/2:

gauss.c

This program will give the answer to the question, “What is the sum of the numbers from 1 to 1000?” through recursive addition of all the numbers from 1 to 1000:

recurs.c

This program will give the answer to the question, “What is the sum of the numbers from 1 to 1000?” through a genetic algorithm. The algorithm pairs up numbers from 1 to 1000. Rather than compute the midpoint of a pair via a simple calculation, it takes a random number as a starting point, mutates it, and uses a fitness function to select between the mutant and the original number to give the current best midpoint estimate. The process is repeated with increasing refinement. Two times the sum of the midpoints then becomes the sum we are seeking. Snapshots of the algorithm’s progress are given along the way. The following computational theatrics are akin to what Dave Thomas performed:

ga.c

PPS
My co-workers and I (while I was in school in the ’90s) worked on target recognition systems and simulations of missile guidance systems. Dave can feed the biologists at Panda’s Thumb his Panda food, but half the UDers here have relevant engineering backgrounds and can see through the charade. He could not have picked a worse thing to do than challenge the UDers to disprove the flimsy claims of his intelligently designed program.

Comments
From Steiner tree
For the Euclidean Steiner problem, points added to the graph (Steiner points) must have a degree of three, and the three edges incident to such a point must form three 120 degree angles. It follows that the maximum number of Steiner points that a Steiner tree can have is N-2, where N is the initial number of given points.
There is a triangle defined by 1, 4, A, where A is the nearest Steiner point to 1 and 4. The triangle has these dimensions (if I did not botch my trig):
length 1,4 = 300
length 1,A ≈ 173.2051
length 4,A ≈ 173.2051
I did not bother running http://www.diku.dk/geosteiner/ to double-check, however... so do not hold me to my guess. [update: see below for my updated guess]
Salvador
scordova
August 15, 2006 at 04:31 PM PDT
scordova: “I never represented it to be something natural; that’s exactly the point. I invite you then to comment on the naturalness of Dave Thomas’s simulation.” What I don’t understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program. I don’t know of anything in nature where an apparent evolution-like process already contains the target in some form. In my mind, the basic premise of Thomas’s program comes significantly closer to approaching a natural evolutionary process, although I will obviously not claim that it is the same in all aspects. The Darwinian process is one in which a phenotype of an individual of a population is occasionally modulated and tested for how it fares in a certain environment (the program does it one individual at a time while not caring about the many non-mutated individuals), and when an individual is identified, it is further modified. That is what I believe I see in Thomas’s program but not in your example. So if I had to choose between the two, I would definitely give Thomas’s program the vote of being more “realistic.”
ofro
August 15, 2006 at 04:13 PM PDT
Gosh, is FORTRAN still running out there? When is guidance unguided? Evidently, when programmed by an evolutionist. Without an intelligent agent knowing the end goal in order to provide fitness selection criteria, the outcome would not have advanced at all towards a correct conclusion. A GA may show some simple form of mimicry, as in a weighted outcome based upon input, but it does not lead to the macro-evolutionary scales being so highly and vociferously vaunted without caution. If anything, it shows how Design can utilize such internal mechanisms. There is no such thing as “fitness selection” in an unguided process. I’m not fully convinced yet of frontloading, but it’s certainly more plausible than RM&NS. It appears evolutionists are trying to have it both ways: a guided process within an at-large unguided process.

Variations of birds are allowable in an algorithm, but it would be confusing to then argue the bird will change into another form by random mutations. Everything we observe shows conservation. Duplication and repair. Feedback loops. The algorithm that exists allows for changes within a boundary - and this is the real fitness criterion that meets the survivability standard. Anything outside that fitness goes to extinction. But it will forever be a bird, no matter how some evolutionists here may hate the statement of that truth as being simple-minded: a bird will be a bird.

The butterfly is a wonderful example of morphological change. But even then we are looking at the “preprogrammed outcome” of the larval stage into adulthood. The end form’s genetic capacity is built in, is it not? From the beginning... This should teach us a lesson by observation - even morphological changes are pre-programmed. Observing nature and genetics, does each butterfly suddenly gain new genes? To try and mimic life with an unguided process can only get you unguided failures. Thus the fruit-fly experiments.

The pathway of lifeforms is towards conservation and repair of the genome, not change, and certainly not openness to randomness without possible harm. This is more bluff, just like the PNAS paper quoted by Andrea. FORTRAN or not, without fitness tuning you will not reach anywhere close to your target, because the tuning guides. Without guidance, there is no target to shoot for. Otherwise the larva would turn into a tree, a bat, a fish. No, what we see are tadpoles turning into frogs, caterpillars into butterflies. We see patterns genetically programmed for multiple-stage metamorphosis. We may one day in the future be able to preprogram some simple life forms with initial stages as cute as some caterpillars. It may be that with the input of an oak leaf, or an ivy leaf, or a particular flower, the colors change in the butterfly. Maybe temperature will gauge the outcome. Whatever our scientists create, it will be designed, reactionary to external input, and choice-driven for best outcomes as measured for that particular and “measured” ecosphere. But it will still be designed. Whether 10 inputs or 10,000.
Michaels7
August 15, 2006 at 04:05 PM PDT
Was that post supposed to contradict, somehow, what I posted here?
No. I felt, however, that if I didn’t post it in its entirety I would not have done justice to you. That post was a big turning point because it was the first time someone of your stature said something Bill and I didn’t absolutely cringe at! I wanted the readers to appreciate your contributions to the field of evolutionary algorithms. Salvador
scordova
August 15, 2006 at 03:50 PM PDT
Salvador, Wow, did I really post that at ARN? Am I ever glad I got off caffeine! Seriously, I do regret the way I treated Bill Dembski and other IDists back then. Was that post supposed to contradict, somehow, what I posted here?
Tom English
August 15, 2006 at 02:53 PM PDT
"CPU OS GA engine etc." You need those things when you write any sort of computer program (not just a GA). If you want to compute the total weight of a load of rocks, all of those elements would have to be present; all of that fine tuning would be necessary. One mistake in your FORTRAN program, or the operating system, or your computer hardware, and the answer would be wrong or non-existent. Therefore you have proved beyond all doubt that a physical pile of rocks can't have a combined weight - it's just too complicated.
steveh
August 15, 2006 at 02:48 PM PDT
Salvador, I don't think you are justified in invoking the Displacement Theorem. Would you please establish that the assumptions of the analytic framework of Bill Dembski's "Searching Large Spaces" are met in the present circumstance? If you regard what Dave Thomas has implemented as assisted search (Bill's term), then you must regard the fitness function as the assistant. Bill stipulates that the assistant knows the target (the set of solutions) at the outset of the search. Show me where, in Bill's paper, having a fitness measure on candidate solutions equates to knowing the target.
Tom English
August 15, 2006 at 02:31 PM PDT
caligula claims: "And in these areas, they are *demonstrably* wrong about their CSI claims, because they can be falsified in the world of mathematics and computation."

Caligula, if I may ask, how familiar are you with Bill Dembski's works? Do you have his books handy? This is a crucial question, because if you assert such things on this weblog, I am somewhat obligated to ask you to defend your claims. And I may invite you to do so mathematically. I apologize for the brusque treatment you've encountered here, and you've earned some respect in the eyes of the readers for the way you've handled yourself today. However, now that you've made that assertion, I will have to ask a few questions. Do you have a definition of CSI for which you can make this claim, and do you have Bill Dembski's books? Salvador
scordova
August 15, 2006 at 01:58 PM PDT
mike: I wholeheartedly agree. Computers in the foreseeable future aren't going to simulate any past or present ecosystem of the Earth. Their computing power just can't match the required detail of development (genotype=>phenotype mapping) and all the various challenges to an organism's survival and reproduction. I have already admitted this here, and I have faithfully kept on saying it on the Finnish usenet group where I sometimes contribute. (This is not to say I find evolutionary simulations totally fruitless, however. On the contrary.) But the issue at hand with Dembski, Salvador and others does not only concern biological evolution, as I wrote earlier. It also concerns many of the most exciting fields in computer science. And in these areas, they are *demonstrably* wrong about their CSI claims, because those claims can be falsified in the world of mathematics and computation. You don't need millions of years to demonstrate it. As anyone literate can see, Salvador's last resort is more or less that everyone except ID promoters is *forbidden* to use math to support their claims. That would be sneaking ID into blind calculations, apparently. Really, that is all he is saying. I thank Salvador and others for their time. Regrettably, as the previous time I wrote to this blog, I've grown tired of reading how the moderator is, in my opinion, just trying to confuse both the discussion and perhaps some of the readers with totally irrelevant points. Of course, he already announced that I'm the one trying to create confusion, so you have my word against his.
caligula
August 15, 2006 at 01:48 PM PDT
Tom, I hope I don't embarrass you by introducing you a bit to the readers through something you wrote at ARN regarding Dembski's displacement theorem: Tom responds to Bill. I hope you don't mind me quoting you, as I'd like the readers to have an appreciation for your background:
Bill, I am sincerely impressed by your mathematics. I have always been impressed by your talent for propaganda, and that is also plenty evident in the paper. As you are well aware, it was I who argued in 1996, five years before you published No Free Lunch, that NFL follows from conservation of information in deterministic search. I appreciate your apparent recognition of the importance of that insight, though I would appreciate it even more if you cited my work. I have suspected that you have avoided calling attention to a secondary result in my paper, which says that a random walk of the search space almost always finds an excellent point within a modest number of steps. Now it seems that I was right, because you have used almost the same math to argue that it takes an exorbitant number of steps to reach a search target. Again, you have cited neither my paper nor the source I cited, a paper by Joe Breeden. My concern is that you bias your presentation by omitting reference to similar work that leads to a conclusion essentially the opposite of your own. The difference between your "search is slow" and my "search is fast" result is elementary and important. You give the size of the search target in absolute terms, and I give it as a fraction of the size of the search space. Both approaches are valid under certain circumstances. For readers who will not see my paper, I'll mention that if you want to be 99.99% sure of obtaining fitness better than that of 99.999% of all points in the search space, a random walk of 921 thousand points suffices. To obtain fitness in the top 1% with 99% certainty, the random walk need visit only 458 points. Because the target size is specified as a fraction of the search space size, these numbers hold (with some proviso for ordering points with identical fitness) for all large search spaces. 
Note that the approach of defining search targets in terms of fitness quantiles implicitly acknowledges that the quality of a search result is a matter of degree, not all-or-nothing. In other words, if your objective is to obtain a point with fitness better than 99% of points in the search space, but you end up with one that is merely better than 98.999%, the experience is generally not traumatic. You, of course, address problems with all-or-nothing solutions. This, in and of itself, would be fine, but you play quite a trick in reintroducing graded fitness as a means for the beneficent Bob to supply information to the benighted Alice. To my knowledge, never before has anyone given satisfactoriness primacy over fitness. When there is a fitness function, a satisfactory solution is one that is sufficiently fit. You switch things around without comment. It is a very clever tactic, but not one that I respect terribly much. And it is also worth noting that when you first anthropomorphize Bob you appear to be taking a conventional approach to giving a concrete explanation. Few reviewers will realize that the teleology salesman has just stuck his foot in the door. In engineering, the objective is usually to find satisfactory, not optimal, solutions using acceptable amounts of time and space. Biologists who say evolution is an optimization process back off from that stance when you give them the option of calling it a "satisficing" process. In practice, the quality of a solution is rarely all-or-nothing, and the number of satisfactory solutions is generally increasing in the size of the problem instance. It is very interesting that you stipulate repeatedly that Alice must find one, and only one, protein. Why, precisely, is that, Bill? Why doesn't Alice search for any protein with certain functional properties? Why is Bob in love with a particular sequence of amino acids? Why doesn't Alice base the search on knowledge of existing proteins and their functional properties?
You seem to be trumping up a case for teleology. A much more subtle and shrewd trick, which allows you to boost the case for the necessity of an external teleological assistant, is your assumption of a uniform distribution on the space of solutions. In prior work on the mathematics of search and optimization, the distribution of fitness functions has been assumed to be the average of all distributions, i.e., uniform. This sufficient condition for NFL induces a uniform distribution on the space of solutions, but the necessary and sufficient condition of a block-uniform distribution does not. The set of NFL distributions your framework does not accommodate is uncountable. Even for your all-or-nothing (binary, solution-or-not) fitness functions, the distribution of solutions may be far from uniform when the distribution of fitness functions gives NFL. Predicating a uniform distribution on the search space is particularly odd in the context of your protein example. For amino acid sequences, the universal distribution would be the natural choice. That is, there's a strong argument for exploring algorithmically compressible (simple) amino acid sequences prior to algorithmically random (complex) sequences. In practice, programs for evolutionary computation focus upon solutions with low algorithmic information, simply because their pseudorandom number sequences contain little information. In the important case of state-space search (say, by algorithm A* or iterative-deepening depth-first search), cheaper sequences of state-transforming operations are considered before more expensive sequences, and this implicitly defines a nonuniform distribution on the space of possible solutions. In short, I think your assumption of a uniform distribution on the search space is rarely useful. I should mention that I published work last year treating deterministic search algorithms as operators on probability distributions of fitness functions. 
I characterized NFL distributions as fixed points for all search algorithms, and showed that deterministic search preserves the nearest NFL distribution as well as the distance to that distribution. I also showed that randomization moves the distribution of search results closer to the fixed point, indicating rather clearly, I think, that randomization is a hedge against mismatch of the search algorithm and an unknown distribution of fitness functions, not a strategy for speeding search. I fixated on the Kullback-Leibler distance, and failed to observe that my main results generalize immediately to a large class of distance measures, including the metrics based on Lp norms. I believe this is related to your work. On a positive note, I think much of what you have done in the paper could be quite useful. It is not merely the IDists who speak vacuously of intelligence, but many of my friends in machine intelligence. You deserve credit for nailing down the term. Your formalization of information gain, a topic that has occupied me at times, is also quite good, I think. But you indicate that your framework accommodates most of prior theory and practice, and this is simply not so. And it is grossly manipulative for you to turn the search problem upside-down, without acknowledging you have done so, to beg the question of the existence of teleological processes in nature. Sure, once you have smuggled in the notion that a biological process has searched for and found a specific sequence of amino acids, you can argue that assistance must have come from outside the observable universe. So what? From a false premise, conclude anything. Best wishes, Tom English
scordova
August 15, 2006 at 01:46 PM PDT
Salvador, "A genetic algorithm is like an instruction manual that tells the computer how to go about solving a problem. Genetic algorithms are good for solving only a limited set of problems." This is misleading. The genetic algorithm is a sequence of instructions for simulating evolution. One part of the simulation is evaluation of the fitness of all members of a population. For each individual in the population, a fitness function is applied to the individual. The fitness function is assumed to be defined, but it is not a part of the genetic algorithm itself. It may be thought of as modeling the environment in which the population evolves. The evolutionary simulation essentially does not "know" anything about the fitness function. The upshot is that to solve different problems with a genetic algorithm, you change the fitness function, not the genetic algorithm. A single genetic algorithm can be used to solve many different problems. "Furthermore, if the genetic algorithm is mis-programmed it won't work." This is simply not true. Implementations of genetic algorithms are in fact hard to debug, precisely because programming errors often do not stop them from obtaining solutions to problems. For instance, if an implementation mutates alleles at twice the rate it is supposed to, it is very hard to tell by watching the behavior of the implementation that something is wrong. "Thus, it is misleading to hint that genetic algorithms negate the need for intelligent agency somewhere in the pipeline." Use of an intelligently designed simulation does not imply that what is simulated is intelligently designed.
Tom English
August 15, 2006 at 01:20 PM PDT
Caligula asked: You can mutate the hardware, the CPU, the OS or the GA engine all you like.
The issue is not mutating these things but showing how finely tuned they need to be in order to compensate for the randomness of the objects they select. The alternative is to fine-tune the objects one is selecting, but that would look too much like special creation. Rather, this circuitous route serves the anti-design case by sneaking away the fine-tuning into the things you just listed: CPU, OS, GA engine, etc. Then one can pretend those intelligently designed things aren't seriously affecting the results, when indeed they are. Thomas is piggybacking on the specified complexity of these artifacts and is not including them in his accounting equation. The displacement theorem helps put into perspective the amount of CSI anti-IDers are actually sneaking into the system.

You asked how much fine tuning, and I gave you an answer in terms of the improbability of the object in question. I even showed 5 ways to reach the same answer for adding the numbers 1 to 1000. The genetic algorithm was the most circuitous and theatrical. The most conceptually simple was the brute-force method. But each strategy needs intelligent authorship. To give you an idea of what the displacement theorem means, consider writing a GA to solve someone's 100-bit password. On average one GA is no better than the next, in fact, no better than random chance.

To presume that nature selects complexity is misguided. Orr pointed that out. John Davison will happily point out that the more complex creatures are the ones going extinct. Selection in the wild is barely a sustainer, and more the destroyer of complexity. We see this supported empirically and theoretically. Orr unwittingly said it well: "Whether or not this kind of evolution is common, it betrays the fundamental error in thinking of selection as trading in the currency of Design."

regards, Salvador
scordova
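Whatever one makes of the displacement theorem itself, the narrower password point is easy to illustrate: a login check returns only pass/fail, so selection has no gradient to climb, whereas a fitness that (unrealistically) leaked per-bit matches makes the same search trivial. The sketch below is a toy 16-bit example with invented names, not anyone's actual code:

```python
import random

random.seed(1)
PASSWORD = [random.randint(0, 1) for _ in range(16)]   # toy-sized secret

def oracle_fitness(guess):
    # All-or-nothing: a real password check reveals only success or
    # failure, so every wrong guess scores the same.
    return 1 if guess == PASSWORD else 0

def leaky_fitness(guess):
    # Hypothetical leak: counts matching bits, giving partial credit.
    return sum(g == p for g, p in zip(guess, PASSWORD))

def hill_climb(fitness, length=16, steps=2000):
    """One-bit-flip hill climber; stands in for any selection process."""
    cur = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(length)] ^= 1            # mutate one bit
        if fitness(cand) >= fitness(cur):
            cur = cand
        if cur == PASSWORD:
            return True
    return cur == PASSWORD

found_with_leak = hill_climb(leaky_fitness)   # converges in ~dozens of steps
found_blind = hill_climb(oracle_fitness)      # usually fails: a blind random walk
```

The point under dispute in the thread, then, is whether biological fitness landscapes behave more like `leaky_fitness` or `oracle_fitness`; the sketch only shows that the difference matters.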
August 15, 2006 at 01:17 PM PDT
mike1962: “The point is, the fact that there was less than “vague routes” is true, but misleading. The selection system was highly tuned and very specific about what the final result would be.” Caligula: "Perhaps, but by the same token so is natural selection. As Dawkins has said: there are *vastly* more ways of not being alive than being alive. Which applies to any creation of natural selection (organisms or their substructures): no matter how many possible evolutionary pathways are favored by selection, there are vastly more pathways rejected by selection. *That* is what selection, and especially *cumulative* selection, is all about."

I agree. The question left for me then is: is "natural selection", that is, selection by the (designed or nondesigned) environment, something capable of resulting in life as we know it in all its glory? Nobody knows, because nobody knows the initial conditions. Avida doesn't tell us anything we don't already know about process control, and it certainly cannot answer the big questions about life. Avida is a waste of time.
mike1962
August 15, 2006 at 01:06 PM PDT
Yes, and imagine if large comets hit the Earth every ten years, and natural laws mutated every morning at breakfast time! We would hardly have complex life, if any life at all. But could you explain, Salvador, how does all this have *anything* to do with the issue at hand? You can mutate the hardware, the CPU, the OS or the GA engine all you like. But please notice that by doing that, instead of mutating a population -- something which is perfectly relevant in this discussion -- you are mutating the very natural laws, all of the environment of the population, and the fitness challenge that we are *supposed* to apply to the population. And you are mutating them all at a fast pace on the evolutionary time scale. In short, you are trying to step outside cosmos and enter chaos, because your theory is incompatible with the cosmos. I'm interested to see how many followers you have in this move.

The same applies to your calculations in #28. I doubt they make "sense" to anybody but yourself. They certainly didn't have anything to do with what mike and I were discussing. It's as simple as this: how many possible solutions are there, and how big a portion of them are MacGyvers with decently short length? Also, how many possible evolutionary routes are there to the MacGyvers, as opposed to the number of all possible evolutionary routes? My claim is that MacGyvers are a vast group, but even then, all the other solutions vastly outnumber MacGyvers. This means that (a) the selection process allows *plenty* of freedom while (b) it still produces specified results. I'm interested to read mike's take, though.
caligula
August 15, 2006 at 12:48 PM PDT
Really? Tell me, do you think if I interchanged lines 8 and 12 in Thomas’s code snippet that the system will still successfully guide to target? The point of GAs is to illustrate the power of imperfect self-replicators to find novel solutions to problems. If you remove the capacity for the imperfect self-replicators to exist, then the GA will not function. What is your point?
franky172
August 15, 2006 at 12:46 PM PDT
ofro commented: I fail to see how your summation example comes remotely close to simulating a selection process in nature.
I never represented it as something natural; that's exactly the point. I invite you then to comment on the naturalness of Dave Thomas's simulation.
scordova
August 15, 2006 at 12:44 PM PDT
Zapatero wrote: To claim that Thomas “sneaks” the Steiner shape into his program via the fitness function is about as absurd as claiming that Fermat’s Last Theorem “contains” Andrew Wiles’ 150-page proof.
Really? Tell me, do you think if I interchanged lines 8 and 12 in Thomas's code snippet that the system will still successfully guide to target?
scordova
August 15, 2006 at 12:26 PM PDT
Caligula asked: BTW. Just for the benefit of all: since you seem to know that Thomas' GA is "highly tuned and very specific about what the final result would be", could you show us some calculations?
It would be on the order of random chance scanning the space of possible outcomes, i.e. if the solution space is improbable, then on average the likelihood of a selection force existing by random chance to reach it is even more remote. For example, if random chance will hit the Steiner solution 1 out of 10^1000 times, then on average the existence of a selection force to guide it to target is more remote than that. That was the conclusion of the displacement theorem. This is readily apparent with the challenge I offered. Let some mindless change, perhaps as little as 5 of the 1137 characters, be made in the code snippet I identified, and let's see how frequently it will even compile, much less guide itself to target. There are small "comment section" islands which would be immune to change, but beyond that, such untuning would destroy Dave's program.
scordova
August 15, 2006 at 12:14 PM PDT
Nick Matzke asked: Forgive me for being dense,
You are not dense. You're one of the brightest guys out there.
but where, exactly, did you “identify the precise code snippet where this frontloading is being performed”?
Go to Dave Thomas’s Code Bluff. You'll see it corresponds to a section in Thomas's code. This section of code sets up the criteria for determining how fit a solution is. In other words, this section of code induces the selection pressure to select out solutions. You will not see any explicit reference to the target in question. As I pointed out, the specification is essentially a strategy to hunt down the target. It is not as overt as Dawkins's weasel. Salvador
scordova
August 15, 2006 at 11:51 AM PDT
mike1962: "The point is, the fact that there was less than “vague routes” is true, but misleading. The selection system was highly tuned and very specific about what the final result would be." Perhaps, but by the same token so is natural selection. As Dawkins has said: there are *vastly* more ways of not being alive than being alive. Which applies to any creation of natural selection (organisms or their substructures): no matter how many possible evolutionary pathways are favored by selection, there are vastly more pathways rejected by selection. *That* is what selection, and especially *cumulative* selection, is all about.

BTW. Just for the benefit of all: since you seem to know that Thomas' GA is "highly tuned and very specific about what the final result would be", could you show us some calculations? To me it seems that there is a vast number of possible routes to any final result, and a notable variety of final results, both in quantity and quality. (Some results giving about the same "length" have little in common in detail of structure, except that a human observer might call them "MacGyvers".)
caligula
August 15, 2006 at 11:45 AM PDT
Nickm, Scordova indicated the following link for an example of the front-loading: http://smartaxes.com/docs/ud/tautologies/bluff.txt As you say, it seems to be just measuring total length using Pythagoras. As I see it the length might show up, and be selected for/against indirectly, if this were a biological system. E.g. if a creature had to eat in order to build and maintain long connections, or if it got slow, or easily damaged, if the total was large, then a long total length could be selected against by starvation, being caught by a predator, or an accident, without as much as turning on a calculator or even knowing the length. Also as you point out, a soap solution can get similar results without calculating anything. I suspect that if you were to try to mimic the soap-solution solution using a computer program it would be similarly complex.

Also, Sal, we use powerful computers to predict the weather, yet somehow nature has always managed to work out whether it was going to rain even before the invention of the computer. Even a dumb rock could 'calculate' its trajectory down a mountainside more accurately than a team of computer bods with the latest equipment.
steveh
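The length measurement under discussion is indeed tiny when written out: a candidate network's total length via Pythagoras is essentially a one-liner. This is a generic sketch, not Thomas's code; `points` and `edges` are invented names:

```python
import math

def total_length(points, edges):
    """Sum of straight-line segment lengths for a candidate network.
    `points` maps node ids to (x, y); `edges` lists (a, b) node pairs."""
    return sum(math.dist(points[a], points[b]) for a, b in edges)

# Toy example: a unit square with one central hub node.
pts = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1), 4: (0.5, 0.5)}
spokes = [(0, 4), (1, 4), (2, 4), (3, 4)]
hub_total = total_length(pts, spokes)   # 4 spokes, each of length sqrt(0.5)
```

A GA's fitness evaluation for this problem is just this measure applied to each candidate; the thread's dispute is over how much design credit that measure carries, not over how it is computed.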
August 15, 2006 at 11:29 AM PDT
sagebrush gardener: Indeed, NN is not GA. There is no population, and the changes made to the network during backpropagation are not random. Why do I consider NNs relevant in this discussion? Because your original comment, as well as e.g. Dembski's claim that only ID can produce CSI, brings NNs into the discussion. ID, or at least Dembski and his supporters, are making a claim not only concerning biological evolution but concerning *all* blind algorithms, including AIs produced by various self-learning algorithms other than GAs.

As for the human interference in backpropagation. You will see that backpropagation is a generic method for approximating *any* non-linear function. The backpropagation rule is carried out exactly the same way regardless of the function to be learned (i.e. the problem to be solved). The only difference, then, is the function to be learned. Sure, a human typically fixes the number of inputs and outputs to match those of the function to be learned. (Usually the number of all nodes in the network remains fixed during the learning process.) But this is simply practical rather than "major front-loading". As for the "desired output", as I said a human doesn't even have to *know* the desired output in cases like "reinforcement learning". Please see the link I gave earlier.

As for "technical jargon". Discussion boards are a challenge, aren't they? Too many words and you're spamming, too few and you hide behind jargon. If allowed, I'm more than happy to discuss this issue thoroughly. Preferably by explaining some of the technical terms as needed and then making use of them for brevity. Fortunately, at least sagebrush seems to be able to learn about unknown terms on his own.
caligula
August 15, 2006 at 11:21 AM PDT
Nickm: "As I understand it the genetic algorithm was simply selecting for shortest length. This is a very simple, low-specificity selection target, and yet the hits favored by this simple selection target end up being rather complex and hard to find by direct algorithms."

Right. Any given set of waves on the ocean would be in the same boat. Nobody doubts that a variety of "complexity" can be built up by applying simplish selections to stochastic inputs. But there are quantifiable limits to the nature of the output given selection criteria and the allowable temporal orders that the selections are applied, etc. It is not an anything-goes proposition, by any means. So then, I think what we need to know is: if I have a selection criterion that generates cogs, another selection criterion that generates springs, and another selector that generates pins, is it possible for the outputs to coalesce into a watch? I suppose it eventually could, if all the other selectors that may exist allow for it.

So then, what does Avida show us that we didn't already know? Nothing that I can tell. The key questions about evolution on this planet (and universe) boil down to what the initial conditions were, and why they were the way they were. It's a holistic proposition. If the universe is actually deterministic down deep, then nothing is an accident, and all life was bound to exist just the way it has. Otherwise, not, but then we're left with something in nature that is genuinely non-deterministic, which is beyond reason.

At any rate, Avida, and programs like it, if they are useful at all, will end up showing us that life on earth is necessarily the product of some very non-trivial selection criteria. Take that as an ID-friendly prediction.
mike1962
August 15, 2006 at 11:12 AM PDT
To claim that Thomas "sneaks" the Steiner shape into his program via the fitness function is about as absurd as claiming that Fermat's Last Theorem "contains" Andrew Wiles' 150-page proof.
zapatero
August 15, 2006 at 10:49 AM PDT
Forgive me for being dense, but where, exactly, did you "identify the precise code snippet where this frontloading is being performed"? As I understand it, the genetic algorithm was simply selecting for shortest length. This is a very simple, low-specificity selection target, and yet the hits favored by this simple selection target end up being rather complex and hard to find by direct algorithms. And: please identify the front-loaded target in the soap film version: http://www.pandasthumb.org/archives/2006/07/target_target_w_1.html
Nickm
August 15, 2006 at 10:38 AM PDT
scordova,
What is at play here is an abundance of technical jargon to confuse the issues.
I sometimes suspect that, but being not very bright myself I tend to give the challenger the benefit of the doubt and begin by assuming that he knows something I don't and that he is not merely blowing smoke. In the process of considering his challenge and doing my best to determine a.) whether or not it is accurate and b.) whether or not it is applicable, I often surprise and delight myself by learning something new.
sagebrush gardener
August 15, 2006 at 10:32 AM PDT
Caligula: "Has any of the targets let alone even a *vague* route to any of the “targets” (the formal solution or the MacGyvers) been intelligently designed? Not at all."

But what does all this prove, then? As I see it, it demonstrates stochastic inputs can yield certain "shaped" or selected output depending on a fitness algorithm designed to select increasingly desirable traits. This is certainly not news to anyone in the process control or AI world (such as myself.) I don't see how it benefits the Darwin camp. Nobody doubts the ability of an environment to select events that occur within it. Nobody doubts that unforeseen paths may be "trodden" on the way to ever increasing "fitness."

For example, a fairly simple example of this is an HVAC air temperature control system where an algorithm (PID in this case) takes temperature inputs and attempts to control the heating and air units to achieve a stable temperature close to the target. (Not as easy as you think for a large space. Simple thermostats do a very lousy job of it.) The PID may be manually tweaked and tuned during this process (since humans know what they want to achieve), but the "route" (actual temperature fluctuations) taken in this process is infinite and unknowable at the onset with any high degree of precision. Each fluctuation relative to the air unit states provides useful information to the strategy of the PID (and the human who may need to tweak it if things get out of hand, or were poorly estimated at the onset.)

The point is, the fact that there was less than "vague routes" is true, but misleading. The selection system was highly tuned and very specific about what the final result would be. In the end, it's not logically different than Dawkins's "methinks it is a weasel" program. In such systems, it's the selector that is all important, not the stochastic input. But is this how life came to be, and how it progresses in the formation of novel features?

Does Avida demonstrate anything other than a frontloaded system? No, despite the numerous paths that the input may take.
mike1962
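The PID loop mike1962 describes can be sketched in a few lines. The gains, the actuator limit, and the toy room model below are invented for illustration; they are not tuned for any real HVAC plant:

```python
def pid_step(setpoint, measured, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    """One update of a textbook PID controller."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Toy room model: temperature leaks toward outdoors; the heater adds warmth.
state = {"integral": 0.0, "prev_error": 0.0}
temp, outdoor, target = 15.0, 10.0, 21.0
for _ in range(300):
    heat = pid_step(target, temp, state)
    heat = max(0.0, min(heat, 15.0))           # actuator limits
    temp += 0.1 * (outdoor - temp) + 0.1 * heat
# temp settles near the 21-degree target despite the constant heat leak
```

The controller never "knows" the route the temperature will take, only the setpoint and the current error, which is the distinction mike1962 is drawing between a tuned selector and the stochastic path it shapes.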
August 15, 2006 at 10:15 AM PDT
1. How would the computer generate a selection process (laws of nature, environment, etc.) all by itself, left alone on the table? 2. As far as I can see, this question brings us all the way to abiogenesis. Yes, Thomas is definitely assuming a readily available entity roughly comparable to a cell or even a multi-cellular organism. This is beyond the scope of the question at hand, however.
caligula
August 15, 2006 at 10:10 AM PDT
sagebrush gardener, What is at play here is an abundance of technical jargon to confuse the issues. The goal of a persuasive response, if the facts are on one's side, is to clarify and enlighten, not to use jargon to try to beat down the questioner. The art of programming is not very far removed from the art of writing an instruction manual; it's just more technical and rigorous. A genetic algorithm is like an instruction manual that tells the computer how to go about solving a problem. Genetic algorithms are good for solving only a limited set of problems. Furthermore, if the genetic algorithm is mis-programmed it won't work. Thus, it is misleading to hint that genetic algorithms negate the need for intelligent agency somewhere in the pipeline. Salvador
scordova
August 15, 2006 at 10:06 AM PDT
caligula,
I would surely want to see you “tweak” by hand, say, the weights of a neural network “until it does” (give the “correct” results)! It’s a better idea to just let the NN learn the “target” using a blind and extremely generic method called “backpropagation”. How is a NN not self-learning?
Sorry to be thick, but are you saying that Avida is NN, or am I missing your point? I couldn't find a reference that indicated that Avida uses NN techniques. Also, you seem to be implying that "tweaking by hand" is not applicable to NN. Surely you don't mean that the output of NN is independent of the actions of the programmer, do you? My background is primarily in business programming and I claim no expertise in NN, but I did find this in an introduction to neural networks:
The Back Propagation NN works in two modes, a supervised training mode and a production mode. The training can be summarized as follows: Start by initializing the input weights for all neurons to some random numbers between 0 and 1, then: Apply input to the network. Calculate the output. Compare the resulting output with the desired output for the given input. This is called the error. Modify the weights and threshold for all neurons using the error. Repeat the process until error reaches an acceptable value, which means that the NN was trained successfully... [Emphasis added]
This seems to support my contention that a program's output is tweaked by the programmer to achieve a desired result -- even in Back Propagation NN.
sagebrush gardener
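The quoted recipe (random initial weights, forward pass, error against the desired output, weight update, repeat) can be sketched with a single sigmoid neuron learning AND. This is the delta rule, i.e. backpropagation without hidden layers, and every name here is invented for the sketch:

```python
import math
import random

random.seed(0)

def train(samples, epochs=5000, lr=0.5):
    """Supervised loop matching the quoted recipe, for one neuron."""
    w = [random.random() for _ in range(3)]        # two inputs + bias
    for _ in range(epochs):
        for (x1, x2), desired in samples:
            s = w[0] * x1 + w[1] * x2 + w[2]       # apply input
            out = 1 / (1 + math.exp(-s))           # calculate the output
            err = desired - out                    # compare with desired
            grad = err * out * (1 - out)           # sigmoid derivative
            w[0] += lr * grad * x1                 # modify the weights
            w[1] += lr * grad * x2
            w[2] += lr * grad
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(AND)
predict = lambda x1, x2: 1 / (1 + math.exp(-(w[0]*x1 + w[1]*x2 + w[2])))
```

The loop itself never changes when a different target function is substituted for `AND`; only the training data does, which is caligula's sense of "generic" — though, as sagebrush notes, a human still supplies the desired outputs in this supervised mode.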
August 15, 2006 at 09:50 AM PDT