Uncommon Descent Serving The Intelligent Design Community

Congratulations Dave Thomas!


Dave has proven beyond a doubt that intelligent agents can construct useful trial-and-error algorithms. As long as the way the trials are conducted and the way the results are judged are well specified, trial-and-error algorithms work! Of course, we all learn to search for solutions by trial and error as children. Or so I thought. Maybe Dave Thomas is just discovering it now and thinks he's stumbled onto something revolutionary.

The $64,000 question remains unanswered: who or what specified how trials in evolution were to be conducted? The only answer I've heard from chance worshippers is that some mystical chemical soup burped out a living cell containing a protein assembly machine called a ribosome, driven by an abstract, digitally encoded control program, and a data library (an information storage molecule called DNA) containing abstract digital specifications for the large number of proteins the cell requires to function. In point of fact, information in the DNA molecule is required to construct a ribosome, and a ribosome is required to duplicate a DNA molecule. Which came first: the protein, or the robotic protein-making machine that requires parts made of proteins? Maybe Dave can find the answer by trial and error. Let's all wish him luck.

Good luck, Dave!

Comments
Tom, spare me. Were you a programmer when "Expert Systems" were all the fad? I was. Before and after. As if rule-based decision making in software were something no one had ever done before some marketing genius decided to call it "Expert Systems" to see if it would sell better. Artificial Intelligence is the same story. 25 years ago I was working in the CAD/CAM industry with what's called auto-router software. This is software that undertakes the enormously complex task of finding a way to route traces on a circuit board in the least number of copper layers. Circuit board cost rises exponentially as the number of layers increases, and production yield goes in the opposite direction. Rules for clearances between traces, width of thru-holes, etc. are all adjustable, with the same tradeoffs. We implemented genetic algorithms, artificial intelligence, and expert systems in that software before anyone had ever heard the terms, and we weren't pompous enough to think we were inventing anything new. Imagine how we laugh when some young idiot or clueless academician picks up something we were doing when they were still crapping Gerber baby food and gives it some hoity-toity name like it's something new. Please, please spare me. So now here comes Tom with a GA working on a Steiner tree with 6 points and one connection layer. Imagine me giggling over that trivial POS when 25 years ago I was coding software that did the same thing, only with 60 thousand points to connect and anywhere from one to a dozen connection layers. Please, please, PLEASE spare me. I'm begging you. My sides are aching and I'm spitting beer all over my screen from laughing so hard.

DaveScot
August 22, 2006 at 10:03 PM PDT
Note to readers: Everything scientists say about nature is just a MODEL of reality, not reality itself. There is never any way of knowing if science has gotten at reality. This is as much a consequence of the empiricism of science as of its methodological naturalism. Genetic algorithms are highly abstract models of biological evolution. While they do not predict much about biota, they do serve to validate key aspects of evolutionary theory. I have read hundreds of papers on evolutionary computation, but no discussion of simulation of evolution has surpassed one by Wirt Atmar I first read in 1994. If you want to understand what simulation models have to do with biology and engineering, I recommend it highly. It is also quite amusing to see Wirt slam Dawkins. The paper is here: http://www.aics-research.com/research/notes.html

Tom English
August 22, 2006 at 04:17 PM PDT
Joseph, Samuel Taylor Coleridge wrote, "Until you understand a writer's ignorance, presume yourself ignorant of his understanding." I learned that while studying for my first master's degree, which was in English.

Dembski: "Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination."

Tom: "Do you have the least notion of how outrageous it is to define pure necessity as chance?"

Joseph: "Do you know how outrageous it is to even think that is what was posted? I take it 'English' is your second language."

Read Dembski closely. Necessity is chance. "Ordinary" chance is chance. Combinations of necessity and chance are chance. Now read what I wrote. Pure necessity is chance. "Pure" is what grammarians call an intensifier. Tell me if you still don't understand.

"Perhaps you can back up what you say with something of substance. I won't be holding my breath…"

I gave you substance, but you evidently did not recognize it for that. You went to dictionaries, of all places, as sources of authority that could prove me wrong. That is about as wise as going to the dictionary to find out the meaning of "Darwinism."

Tom: "One notion is that any program (sequence of instructions) for a universal computer (Turing-complete system) is an algorithm."

Wikipedia: "Thus, an algorithm can be considered to be any sequence of operations which can be performed by a Turing-complete system." http://en.wikipedia.org/wiki/Algorithm

Note that the Wiki quote comes from the "Formalization of Algorithms" section. Your dictionary definitions are informal. Unfortunately, if you want to make big claims about algorithms, you need a formal understanding. Do you want to challenge me on equating programs with sequences of instructions? Universal computers with Turing-complete systems? Please check Wiki before doing so.

Tom English
August 22, 2006 at 03:54 PM PDT
Note to readers: Just because "genetic" is in the term genetic algorithm does NOT mean it (any particular GA discussed) reflects biological reality.

Joseph
August 22, 2006 at 03:42 PM PDT
"Sorry guys, but GAs are still child’s play. Real programmers don’t give hoity-toity names like “Genetic Algorithm” to ways of finding answers that just about every child invents on his own recognizance without being taught. That’s just a really lame attempt by greenhorns to appear smart and innovative." When you are awarded your McArthur Fellowship you can explain that to former McArthur Fellow John Holland, to whom the term "genetic algorithm" is due. Perhaps my son was a dim-wit like his dad, but I am certain I never saw him 1. Maintain a population of potential solutions, each written as a binary string 2. Record the fitness of each individual in the population 3. Use a fitness-weighted roulette wheel to select parents 4. Generate a random number to decide where to cross over two parent strings 5. Repeatedly flip a biased coin to decide which bits in the offspring to mutate 6. Sort the offspring by fitness and merge them with the parents 7. Decide how to cull excess individuals from the population My son is grown now, but I'll be sure to have a close look at the kids in the park when I'm walking the dog.Tom English
August 22, 2006 at 02:45 PM PDT
Tom English: "I imagine you feel like an algorithm must have a purpose, but it just ain't so."

Yeah, right. You have shown you can't even read a quote properly. And please don't tell me what I feel.

Merriam-Webster (online): a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer

Compact Oxford (online): a process or set of rules used in calculations or other problem-solving operations.

Cambridge International Dictionary of English (online): a set of mathematical instructions that must be followed in a fixed order, and that, especially if given to a computer, will help to calculate an answer to a mathematical problem

Wiktionary (online): Any well-defined procedure describing how to carry out a particular task

Wordsmyth (online): a completely determined and finite procedure for solving a problem, esp. used in relation to mathematics and computer science.

The American Heritage Dictionary of the English Language, Fourth Edition, 2000 (online): A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.

That should be enough; however, I doubt even that will get through. So Tom, I don't feel algorithms have a purpose; it is obvious that they do. Perhaps you can back up what you say with something of substance. I won't be holding my breath...

Joseph
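The greatest-common-divisor procedure that Merriam-Webster cites is the canonical example of a finite, completely determined procedure: Euclid's algorithm. A few lines of Python:

```python
def gcd(a, b):
    # Euclid's algorithm: repeat one operation (take the remainder)
    # until it terminates, in a finite, completely determined way.
    while b:
        a, b = b, a % b
    return a
```

For example, `gcd(48, 36)` repeats the remainder step twice (48 mod 36 = 12, 36 mod 12 = 0) and returns 12.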
August 22, 2006 at 02:29 PM PDT
DaveScot: "We programmers call this the “brute force method” because all it does is takes the simple method of trial and error and multiplies its effectiveness by the computer’s speed at conducting a trial and evaluating the result. No finesse. Just brute force." We computer scientists would never call a GA a brute-force method. From Wiki, "brute-force search is a trivial but very general problem-solving technique, that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement." Brute-force search is infeasible in Dave Thomas' problem. The space of chromosomes is too large to enumerate. If a brute-force search goes through the chromosomes (bit strings) in the order 00 ... 00 00 ... 01 00 ... 10 etc., the computer will turn to dust before finding a chromosome comparable in fitness to those found by the GA. "[Trying to correct yourself] Each trial is not necessarily a totally random guess." No, brute-force search is not random guessing. In fact, a random sample of chromosomes will probably yield better results. Let N be the number of fitness evaluations done in a single run of Dave's GA. Draw a chromosome randomly (i.i.d. uniform) from the space of chromosomes N times, keeping track of which has the highest fitness. Note that random sampling and the GA use an identical fitness function. The fitness function is not part of the algorithms. It is essentially an external black box. The fitness function is no more designed for use by the GA than it is for use by random sampling. The GA is not designed to accommodate the fitness function, and in fact can be used with an infinitude of other fitness functions. The important question, then, is why does the GA do better than random search? Averaged over all fitness functions, the GA does not do better than random search (Wolpert and Macready, "No Free Lunch in Optimization"). 
Random sampling is oblivious to the topography of the fitness landscape. Any advantage of the GA over random sampling for a problem reflects a degree of "GA-friendliness" of the fitness landscapes corresponding to problem instances. There is inherent order in Dave's problem that is reflected in the topography of the fitness landscapes. The order comes from the physical system itself, not Dave's mind. Representation of the system seems a minor issue to me -- I cannot see what shuffling the genes would do but increase the disruptiveness of crossover.

Tom English
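Tom's comparison is easy to reproduce in miniature. The sketch below gives i.i.d. uniform random sampling and a mutation hill-climber (a stand-in for an evolutionary search, not Dave Thomas' actual GA) the same number of calls to the same black-box fitness function, on a deliberately smooth "count the 1-bits" landscape of my choosing:

```python
import random

N_BITS = 64

def fitness(bits):
    # The shared black box: both searches query it the same way.
    # Counting 1-bits gives a smooth, "GA-friendly" landscape.
    return sum(bits)

def random_sampling(n_evals):
    # i.i.d. uniform draws; oblivious to the landscape's topography.
    best_f = -1
    for _ in range(n_evals):
        f = fitness([random.randint(0, 1) for _ in range(N_BITS)])
        best_f = max(best_f, f)
    return best_f

def hill_climber(n_evals):
    # Stand-in for an evolutionary search: mutate, keep if no worse.
    x = [random.randint(0, 1) for _ in range(N_BITS)]
    fx = fitness(x)
    for _ in range(n_evals - 1):
        y = [b ^ (random.random() < 1 / N_BITS) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return fx
```

On this landscape the hill-climber reliably beats random sampling at equal evaluation counts, because the topography rewards local moves; average both searches over *all* possible fitness functions and, per Wolpert and Macready, the gap disappears.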
August 22, 2006 at 02:21 PM PDT
Or 400,000 in Scholar: http://scholar.google.com/scholar?hl=en&lr=&q=genetic+algorithms&btnG=Search

franky172
August 22, 2006 at 12:16 PM PDT
"Sorry guys, but GAs are still child's play. Real programmers don't give hoity-toity names like 'Genetic Algorithm' to ways of finding answers that just about every child invents on his own recognizance without being taught."

Excuse me? "Real programmers"? http://www.google.com/search?hl=en&lr=&q=scholar%3A+genetic+algorithms&btnG=Search Here are 800,000 articles about GA's. I guess these authors aren't "real programmers"?

franky172
August 22, 2006 at 12:13 PM PDT
It has been fairly pointed out at the Panda forum After The Bar Closes that Dave Thomas' technique is not pure trial and error. This is true. Each trial is not necessarily a totally random guess. After the first trial, the child's game of "warmer/colder" is employed to evaluate the trial results, and solutions that are warmer are preferred over those that are colder as the starting point for the next trial. Sorry guys, but GAs are still child's play. Real programmers don't give hoity-toity names like "Genetic Algorithm" to ways of finding answers that just about every child invents on his own recognizance without being taught. That's just a really lame attempt by greenhorns to appear smart and innovative. I'm trying really hard to avoid being mocking and contemptuous in my reincarnation here, but you fellows at ATBC are making it difficult. I can only bite my tongue so much before it gets bit clean through, if you get my drift, and I think you do.

DaveScot
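The "warmer/colder" game DaveScot describes fits in a few lines: keep a current guess, probe nearby, and accept the probe only when the oracle says warmer. The range, trial count, and step-shrink factor below are arbitrary choices of mine, not anything from Thomas' program:

```python
import random

def warmer_colder_search(is_warmer, low=0, high=1023, trials=500):
    # is_warmer(trial, guess) -> True if trial is closer to the hidden target.
    guess = random.randint(low, high)
    step = (high - low) // 2
    for _ in range(trials):
        trial = min(high, max(low, guess + random.choice([-1, 1]) * step))
        if is_warmer(trial, guess):
            guess = trial                    # warmer: move to the new spot
        else:
            step = max(1, step * 3 // 4)     # colder: probe more finely
    return guess

# Example oracle hiding the number 700; the searcher never sees it directly.
secret = 700
found = warmer_colder_search(lambda t, g: abs(secret - t) < abs(secret - g))
```

The searcher only ever receives warmer/colder verdicts, never the target itself, which is exactly what makes it a (very simple) guided search rather than pure random guessing.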
August 22, 2006 at 11:18 AM PDT
Salvador & Dave, sorry about the misquote.

Tom English
August 22, 2006 at 11:09 AM PDT
Joseph: "First just the word algorithm directly implies intelligence- look it up." "Has anyone ever observed unintelligent, blind/ undirected (non-goal oriented) process produce an algorithm? No." Look it up, indeed. There are multiple takes on the notion of an algorithm. One notion is that any program (sequence of instructions) for a universal computer (Turing-complete system) is an algorithm. One can generate algorithms randomly. It is not generally possible to determine what they are "good for." I imagine you feel like an algorithm must have a purpose, but it just ain't so.Tom English
August 22, 2006 at 11:08 AM PDT
Just so we are clear- algorithm is NOT Al Gore with rhythm.... :)

Joseph
August 22, 2006 at 10:54 AM PDT
"Read 'No Free Lunch'- page 14, last paragraph, which continues onto page 15: 'Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.'"

Tom English: "Do you have the least notion of how outrageous it is to define pure necessity as chance?"

Do you know how outrageous it is to even think that is what was posted? I take it "English" is your second language. CHANCE includes necessity. I did NOT post what necessity is defined as. This is the major problem with debating anti-IDists: IDists say one thing, but when it gets to the anti-IDists they perceive something else. Pathetic.

Joseph
August 22, 2006 at 10:52 AM PDT
Dave -- two additional problems you left out:

1) Certain areas cannot be explored, because getting there causes the organism to die on the way.
2) In order to do trials, you have to have a semantic idea of what you are trying to do. This is in addition to the hardware requirements.

johnnyb
August 22, 2006 at 10:46 AM PDT
Tom wrote: Salvador: “The only answer I’ve heard from chance worshippers”
Tom, those were DaveScot's words, not mine. Sal

scordova
August 22, 2006 at 10:28 AM PDT
Salvador: "The only answer I’ve heard from chance worshippers" Necessity worshippers. Necessity is not chance.Tom English
August 22, 2006 at 10:23 AM PDT
Joseph, "Read 'No Free Lunch'- page 14 last paragraph that continues onto page 15. 'Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.'"

Do you have the least notion of how outrageous it is to define pure necessity as chance? Why would anyone do that, unless he were engaged in obfuscation? I have not checked closely enough to be sure, but I suspect that Bill was trying to patch up a fundamental error (the one Caligula alludes to) in the explanatory filter of The Design Inference without admitting the error outright. You seem proud to dredge up the quote. I generally think well of Bill, but this is him at his worst.

Tom English
August 22, 2006 at 10:21 AM PDT
Mike1962, regarding rigorous analysis: I am working on another GA, by Elsberry and Shallit, which I will analyze. Salvador

scordova
August 22, 2006 at 09:57 AM PDT
Mike1962 asked: Does his test falsify Dembski’s CSI filter approach? I would like to see this handled in a rigorous way.
No, because Dave would have to demonstrate that a purely stochastic process could create the entire simulation. Just because portions of the simulation are stochastic does not mean the system as a whole is undesigned. A shotgun pattern is stochastically described. If someone fired his shotgun at his neighbor's pet rat, it does not mean this was an undesigned act on the part of the shooter merely because the shotgun pellets have a stochastically described pattern.

Salvador

scordova
August 22, 2006 at 09:48 AM PDT
BDelloid: "Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed? And that this algorithm is likely sufficient to explain evolution, but the algorithm is the design product in question?"

I can only speak for myself, but that's not what I'm suggesting. I'm suggesting the hardware upon which trials may be carried out is designed. Of course RM+NS is sufficient once the capacity to conduct trials and evaluate errors has been provided. However, the sufficiency is a probabilistic matter, illustrated by the proverbial million monkeys on a million typewriters who, given enough time, will reproduce all the works of Shakespeare. The remaining problem for RM+NS, while not as great as explaining where the trial-and-error hardware came from, is that there doesn't appear to be enough probabilistic resources for it to have discovered all these wonderful solutions like flagella and camera eyes and immune systems and etcetera. Mutations that are beneficial are exceedingly rare, and natural selection is largely lost in the noise of other factors affecting survival. Maybe a trillion years of RM+NS could produce some of these systems, but just hundreds of millions of years seems to border on the impossible. Or maybe hundreds of millions of years on millions of planets could collectively produce these systems, but not on just one planet in the time available. Dembski is all about trying to formally quantify the odds. Possibly an impossible task, and certainly a task that can never be completely exhaustive, as one can never prove a negative (i.e. that one knows ALL the possible probabilistic resources and has factored them in). On the other hand, it may be provable beyond a reasonable doubt, and that's really what science is all about. The goal Thomas achieved certainly WAS specified. The goal was to find the shortest series of line segments connecting all the specified points, given a finite amount of time for the trial-and-error search to run.

We programmers call this the "brute force method" because all it does is take the simple method of trial and error and multiply its effectiveness by the computer's speed at conducting a trial and evaluating the result. No finesse. Just brute force. This brute force algorithm is what is proposed as the driver of creative evolution. The problem with it is twofold. The origin of the platform upon which the trials in creative evolution are conducted and evaluated is a problem larger than the results ostensibly obtained by the search. And the search didn't have enough time to be reasonably likely to find the solutions that we see (not enough brute force).

DaveScot
August 22, 2006 at 09:48 AM PDT
bdelloid: "Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed?"

Any algorithm strongly suggests intelligence. Do anti-IDists even understand the word "algorithm"? Apparently not. Has anyone ever observed an unintelligent, blind/undirected (non-goal-oriented) process produce an algorithm? No.

Joseph
August 22, 2006 at 09:20 AM PDT
I have been pondering Dave Thomas' simulation and Salvador's response ever since Sal started a similar thread. Sal's position is very simple, I think. He suggests that any "fitness" formula is, in itself, front loading. I have come to believe that Sal is right. I consider the challenge of abiogenesis via purely natural means. For such to be so, somewhere there had to be an environment that naturally produced a stew of organic chemicals. (Of course, scientists haven't figured that one out yet.) Then one day a molecule, or small community of molecules, had to form which could -- therefore did -- reproduce itself. The day that happened, no party was thrown! It just happened. Now, the nature of that reproduction is that the reproductive product must have been similar, but not identical, to the original. If the reproduction also reproduced, no party. If the reproduction did not reproduce, again, there would be no funeral. All this to say: at the early stages of life, "survival of the fittest" had not been established; rather, "survival of the surviving" was the only filter. If it survived to reproduce, it survived. If it didn't, it didn't, and no party was thrown either way. It seems to me, therefore, that if a software simulation were to be made, it would have to have only one filter -- survival. A small piece of "reproducing" code would have to be written. Size-wise, it would have to be realistically feasible in light of the UPB. I think that because computers are so darned disciplined, a random pot-stirring program would have to interfere with the world that contains the reproducing code. Initial success would be seen as a reproducing code in an actively destructive environment which "improves itself" by making itself somehow fundamentally more able to survive. (I'd love to see it pull off an active error correction algorithm myself.)

Ultimate success would be for this e-organism to develop into multiple competing strains, and establish for itself a sense of "survival of the fittest." If NDE is true, then such a sim is possible.

bFast
August 22, 2006 at 09:13 AM PDT
Look, the solution to this is simple. Change the selection algorithm to be visual acuity and see if a full-functioning eye develops. If it doesn't do that, then he has to explain why his code only produces results for certain selection criteria but remains ateleological.

johnnyb
August 22, 2006 at 08:59 AM PDT
To those who think Intelligent Design is a new idea, consider the following from 1950. In comparing the remarkable similarity that existed between the skulls of marsupial and placental saber-toothed cats, Otto Schindewolf captioned the figures of the two forms with the following: "The skulls of carnivorous marsupials and of true carnivores show an extremely surprising similarity in over all habitus and, in particular, in the unusual overspecialization of the upper pair of canines. The similarities of form are present even in such details as the structure of the large flange on the lower jaw, DESIGNED TO GUIDE and protect the upper canines." (my emphasis). Problems in Paleontology, page 260. This provides elegant direct support for the Prescribed Evolutionary Hypothesis, which is why I included that figure and caption in my paper "A Prescribed Evolutionary Hypothesis," Rivista di Biologia: 155-166, 2005. Of course that was long before "Design" became a dirty word in the lexicon of mutation-happy, natural-selection-inebriated Darwinian mysticism.

"Is there anything whereof it may be said, See, this is new? It hath been already of old time, which was before us." -- Ecclesiastes

"A past evolution is undeniable, a present evolution undemonstrable."

John A. Davison
August 22, 2006 at 08:38 AM PDT
Why should we have a random fitness function? The only fitness function that matters in nature is survival. That fitness function has already been shown to produce important evolutionary changes. Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed? And that this algorithm is likely sufficient to explain evolution, but the algorithm is the design product in question? I think you folks misunderstand the challenge. The goal that Thomas achieved wasn't SPECIFIED -- so it wasn't front loaded. If his algorithm had a pre-specified Steiner tree that was defined by 1) the number of internal nodes, 2) the location of these internal nodes, 3) the number of branches between internal nodes, and 4) the identity of nodes connected by each branch, and his fitness function measured an index of difference between a proposed Steiner tree and the real Steiner tree, then this would have been an example of front loading. In this case, the only fitness measure is length, which is in no way pre-specified in regard to the final Steiner tree. Therefore, he demonstrated that a random process can achieve a pre-specified goal of very low probability. Which is the same thing that RM + NS has been shown to do.

bdelloid
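bdelloid's point, that the fitness measure is bare total length, can be made concrete. In a length-only fitness function like the sketch below (terminal coordinates are made up by me; Thomas' actual layout differs), nothing encodes the number, placement, or connectivity of the winning tree's internal nodes:

```python
import math

# Hypothetical fixed terminals; the real challenge used its own layout.
TERMINALS = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (4.0, 3.0), (2.0, 5.0)]

def total_length(steiner_points, edges):
    """Fitness is just the summed segment length.

    `steiner_points` is a candidate's list of extra (x, y) nodes, and
    `edges` is a list of (i, j) index pairs into TERMINALS + steiner_points.
    No target topology appears anywhere in this function.
    """
    nodes = TERMINALS + list(steiner_points)
    return sum(math.dist(nodes[i], nodes[j]) for i, j in edges)
```

A GA minimizing this value is free to discover any number of internal nodes and any wiring; only shorter-versus-longer ever gets fed back.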
August 22, 2006 at 08:34 AM PDT
Caligula: "Well, it is hardly relevant what Dembski writes if it isn't reflected in his EF."

It is very relevant, and it is reflected in THE EF. Also, before using something it is very relevant to read the instructions.

Caligula: "Are you saying that rolling 5, 4, 1, 4 is both a 'chance' event AND a 'necessity' event?"

I explained that already. The dice fall and roll due to gravity and inertia. What faces up is chance.

Caligula: "You either accept the event as mere 'necessity' or you exclude 'necessity' and come up with mere 'chance'."

Only someone out of touch with reality would do such a thing.

Caligula: "Besides, EF analyses 'events' which strongly hints at single-step processes."

Since when? I have never heard of that except from those who appear to know the least about it. Go figure.

Caligula: "When considering living organisms, it is even more obvious that you can't rule out a model which includes a multi-step process with many intermediate forms, each intermediate produced from its predecessor by the combination of chance and necessity."

When considering living organisms, you had better be prepared to show how they arose from "sheer dumb luck," i.e. the anti-ID position of unintelligent, blind/undirected (non-goal-oriented) processes, BEFORE you go making claims about their subsequent evolution.

Caligula: "How about including a LOOP evaluating each step individually before jumping into the Design hypothesis?"

There isn't any "jumping," and it is a design inference. That inference can either be refuted or confirmed with future knowledge, as can any scientific inference.

Joseph
August 22, 2006 at 08:26 AM PDT
Mike1962: "What Dave's challenge *does* show is that design detection isn't so easy, and may be impossible in a frontal sort of way."

Dave would first have to give us an example of objects that weren't designed. The whole point of my post was that he didn't do that. His result was intelligently designed. It was his solution. He designed a software tool to help him find a solution to a specific problem. The result of an intelligent agent using a tool to assist in problem solving is not an example of an object that wasn't designed. Others used calculators and spreadsheets as tools. Some just used intuition, pencil, and paper. But make no mistake: every solution was intelligently designed, including those output by Thomas' algorithmic trial-and-error tool. All his program did was leverage the number-crunching speed of a modern computer to work HIS intelligently designed search algorithm.

DaveScot
August 22, 2006 at 08:20 AM PDT
I'm finding it hard to grasp exactly what "The Design Challenge" is all about. What is the final product the genetic algorithms produce?

BenK
August 22, 2006 at 07:35 AM PDT
"But of course there is front loading in the example because there is a goal in mind. The goal is fixed and doesn’t move and the selection criteria have been choosen to move towards that goal." That is right. Which is why the test may have succeed in what Thomas wanted to demonstrate, but which fails to be very interesting to me. Which is why I'd like to see randomly generated fitness algorithms build up a complex functioning virtual machine of some kind. :) Does his test falsify Dembski's CSI filter approach? I would like to see this handled in a rigorous way.mike1962
August 22, 2006 at 07:08 AM PDT
