
Congratulations Dave Thomas!


Dave has proven beyond a doubt that intelligent agents can construct useful trial and error algorithms. As long as the way the trials are conducted and the way the results are judged are well specified, trial and error algorithms work! Of course we all learn to search for solutions using trial and error as children. Or so I thought. Maybe Dave Thomas is just discovering it now and thinks he's stumbled onto something revolutionary. The $64,000 question remains unanswered: who or what specified how trials in evolution were to be conducted? The only answer I've heard from chance worshippers is that some mystical chemical soup burped out a living cell containing a protein assembly machine, called a ribosome, driven by an abstract digitally encoded control program, together with a data library holding abstract digital specifications for the large number of proteins the cell needs to function, all stored in an information storage molecule called DNA. In point of fact, information in the DNA molecule is required to construct a ribosome, and a ribosome is required to duplicate a DNA molecule. Which came first: the protein, or the robotic protein-making machine that requires parts made of proteins? Maybe Dave can find the answer by trial and error. Let's all wish him luck.

Good luck, Dave!

Comments
Joe: "Wow, this is worse than I thought. Coin tossing will only generate an algorithm if an intelligent agency is involved." Do you have a new theory of intelligent tossing? Is it sort of like intelligent falling? Or is it just the old, tired treatment of intelligence as something infectious -- i.e., when I touch the coin, it becomes intelligent, and the outcome of the toss is biased? Here's how you started all of this: "Any algorithm strongly suggests intelligence. Do anti-IDists even understand the word 'algorithm'? Apparently not. Has anyone ever observed unintelligent, blind/ undirected (non-goal oriented) process produce an algorithm? No." If we reject intelligent tossing theory, then I have demonstrated exactly what you suggested could not exist. You placed no restriction on the language in which algorithms would be expressed. And you should not have, because your claim was that there was no way for a random process to produce an algorithm. I made a natural choice of language -- prefix-free, compact, binary descriptions of Turing machines. This type of language plays an important role in the theory of Solomonoff-Chaitin-Kolmogorov complexity. You whined and whined for proof. Now that I have proved exactly what you wanted proved, you insinuate that I cheated, but offer no counterargument. That's worthless behavior. Tom English
Wow, this is worse than I thought. Coin tossing will only generate an algorithm if an intelligent agency is involved. Also Wiki can hardly be considered an authority on anything. Thanks again for demonstrating an algorithm requires intelligence. BTW I never once thought that you were saying every algorithm can be generated without intelligence. The confusion is still all yours. Joseph
Joe: "Also we were talking about ALGORITHMS implying INTELLIGENCE." The negation of your claim is merely that SOME algorithm can arise by chance-and-necessity. I evidently have confused you by arguing that EVERY algorithm can be generated without "intelligence." So let's take the simple route and be done with this. I am going to demonstrate that "coin tossing" generates algorithms with high probability. As always, when you don't understand something, see Wiki. A universal Turing machine (UTM) is a Turing-complete system. A UTM is "programmed" with a description of a Turing machine (TM). When the TM description specifies no transitions, the UTM merely halts on all inputs. Such a TM description is the shortest algorithm for the UTM. For some UTMs the TM description must be self-delimiting, and we require such a UTM here. In other words, each description must encode its own length in some fashion. In the TM description language we use here, every sufficiently long sequence of bits begins with a TM description. A description begins with a possibly-empty string of M 0's, which indicates that M bits are required to encode TM states. Then there is a non-empty string of N 1's, which indicates that N bits are required to encode TM symbols. Then a 0 serves as a delimiter. If M is nonzero, the following M + N bits give the number of transitions in base-2 notation. Otherwise there are no additional bits in the TM description. For instance, a TM with two states, four symbols, and three transitions has a description 0110011 [plus 18 bits for transitions]. Any TM that has no states (i.e., has a description that starts with 1) has no transitions, and halts on all inputs. Thus the following are all algorithms. 10, 110, 1110, ... All TM descriptions beginning with 1 belong to the preceding sequence. Now let's generate a TM description M by tossing a fair coin, and writing down 1 for heads and 0 for tails. The procedure is well defined, because every sufficiently long sequence of bits begins with a TM description, and all TM descriptions are self-delimiting. The probability that a M begins with 1 (hence is an algorithm) is 1 / 2. The probability that M is the algorithm 01000 (note that there are no transitions) is 2 ^ 5. Thus the probability of generating an algorithm by coin tossing is greater than 1 / 2. Did I rig this result? No. The UTM is the original model of universal computation. In fact, the notion of Turing-completeness is defined in terms of the UTM. The choice of self-delimiting TM's was natural here. How else would we have guaranteed that every sequence of coin tosses gave a description? Is it really that surprising that the simplest algorithms should merely halt, and that they should occur often in random generation of "programs"? I'm sure you are not satisfied with algorithms that merely halt. Perhaps I should mention that coin tossing will also generate nontrivial, though short, algorithms with fairly high probability. Before barking at the stranger again, why not go back to No Free Lunch and observe that Bill Dembski rejects chance at 500 bits of complexity. There are many algorithms shorter than 500 bits. Coin tossing will turn up short algorithms, and this is no contradiction to what Bill has written. Why did I not go directly to this proof? Because it is not particularly interesting. I wanted to talk about design inference, not basic probability and the plain old theory of computation. Tom English
[Off topic, regarding my off-topic post addressed to no one] Joseph: “Those are design-centric hypotheses.” Joseph: "Weird because I asked for CHANCE hypotheses." From http://www.chiasmus.com/archive/msg00190.html: One night, while serving as [Lyndon Baines Johnson's] press secretary, Bill Moyers was saying grace in a soft and respectful voice before a White House dinner. LBJ startled everyone when he interrupted Moyers, saying, "Speak up, Bill! Speak up!" Not letting the president rattle him, Moyers softly replied: "I wasn't addressing you, Mr. President." Tom English
Joseph: “Those are design-centric hypotheses.” Tom English: My point, precisely. Weird because I asked for CHANCE hypotheses. Tom English: Did it really never occur to you that my sequence of bits could come from space too? I have asked you several times for a demonstration and you failed to do so. In that light it occurred to me that you were just blowing smoke. Also we were talking about ALGORITHMS implying INTELLIGENCE. And from your responses I would have to say that is as safe an inference as there could be. Joseph
Joseph: "Those are design-centric hypotheses." My point, precisely. A wizard. A giant. The Devil. I have wasted a lot of time here. You should know that Bill Dembski has inferred design in a fictitious observation -- a sequence of binary-coded prime numbers (excluding 59) coming from deep space. Did it really never occur to you that my sequence of bits could come from space too? Did you stop to think that my sequence was less complex than the prime sequence, and that a design inference was therefore not a slamdunk? Tom English
Tom English: My hypothesis that the sequence of strings can arise by chance-and-necessity stands until someone gives evidence that it should be rejected in favor of design. Your hypothesis is rejected because it has not and cannot be substantiated. Show us a sequence of strings arising from nothing via chance-and-necessity. I can demonstrate intelligent agencies putting together a sequence of strings. Gee I wonder how many chance hypotheses were rejected before Stonehenge was deemed an artifact? Tom posts: "Early interpretations: Many early historians were influenced by supernatural folktales in their explanations. Some legends held that Merlin the wizard had a giant build the structure for him or that he had magically transported it from Mount Killaraus in Ireland, while others held the Devil responsible." Those are design-centric hypotheses. Were there ever ANY chance hypotheses? Joseph
[Off-topic] From the Wiki article on Stonehenge: http://en.wikipedia.org/wiki/Stonehenge "Early interpretations: Many early historians were influenced by supernatural folktales in their explanations. Some legends held that Merlin the wizard had a giant build the structure for him or that he had magically transported it from Mount Killaraus in Ireland, while others held the Devil responsible." Tom English
DaveScot: "It is obvious in your writings you begin with the assumption that evolution happened by chance then expect others to prove you wrong." Did you see post 61 in https://uncommondesc.wpengine.com/index.php/archives/1464, especially "Evolution is necessity operating on chance inputs"? "I expect you to prove the chance hypothesis." No one can prove a statistical hypothesis. In Bill's approach to design inference, a chance hypothesis is a null hypotheses. It is accepted in the absence of strong evidence to reject it. My hypothesis that the sequence of strings can arise by chance-and-necessity stands until someone gives evidence that it should be rejected in favor of design. This isn't something self-serving I am doing. It is simply the nature of null hypotheses. A legitimate request would have been to ask me to derive a probability for my hypothesis. I have already done some work to estimate what's known as the universal probability of the sequence, and I was prepared to take an honest stab at rejecting the chance hypothesis myself. "All I can do is point to what I know is designed, compare and contrast it with what I suspect is designed, and ask which is the better explanation - design or chance." I have no objection whatsoever to your doing that to form your personal worldview. But I hope you don't expect such informal methods to lead to scientific knowledge. By the way, I have a personal worldview that could not be less scientific. Tom English
Joe: “Do you understand what is being debated? The debate is unintelligent, blind/ undirected (non-goal oriented) processes - ie the anti-ID position- vs. intelligent, directed (goal oriented) processes- ie the ID position.” Tom English: You truly do not understand what a logical and linguistic abomination that is, do you? It is reality. Therefore I understand your reaction. Reality doesn't appear to be your strong suit. Tom English: I tried to get you to reason through what you were saying, and you entirely ignored me — to your own detriment. Yup you are a legend in search of a mind. Tom English: I gave you the hypothesis that the enumeration can arise by chance. And I asked you to demonstrate that. IOW show us an enumeration that arose without the aid of an intelligent agency. Tom English: It is your responsibility, as the ID advocate, to show how to reject the hypothesis and make a design inference. Show me where this hypothetical enumeration exists. That is how it (science) is conducted- by examining the data. If the enumeration exists only in your head then I would say it is an empty hypothesis- one not worth pursuing. Gee I wonder how many chance hypotheses were rejected before Stonehenge was deemed an artifact? Joseph
Tom I expect you to prove the chance hypothesis. It is obvious in your writings you begin with the assumption that evolution happened by chance then expect others to prove you wrong. I cannot prove or disprove either chance or design. All I can do is point to what I know is designed, compare and contrast it with what I suspect is designed, and ask which is the better explanation - design or chance. There are things that science may never be able to reveal with any degree of certainty and things like the evolution of life on this planet, a process that is unpredictable, unrepeatable, unwitnessed, and happened only once is quite likely to be one of those areas that may resist any definitive explanations. DaveScot
Joe: "Do you understand what is being debated? The debate is unintelligent, blind/ undirected (non-goal oriented) processes - ie the anti-ID position- vs. intelligent, directed (goal oriented) processes)- ie the ID position." You truly do not understand what a logical and linguistic abomination that is, do you? I tried to get you to reason through what you were saying, and you entirely ignored me -- to your own detriment. Tom: "I say that enumerating binary instruction sequences in “dictionary” order (0, 1, 00, 01, 10, 11, 000, …) is mechanical, not intelligent." Joe: "It does NOT matter what you say. It matters what you can demonstrate." You argue, but you evidently know little about argumentation. "I say" means "I claim"--i.e., "here is what you must refute." You clearly do not understand the logic of design inference. It is obvious in your writings that you start with the assumption of design and purpose, and expect others to prove you wrong. But the design inference works just the opposite way. One or more chance hypotheses must be rejected in favor of design. I gave you the hypothesis that the enumeration can arise by chance. It is your responsibility, as the ID advocate, to show how to reject the hypothesis and make a design inference. "IOW if we saw that sequence etched in the wall of a cave could we safely infer intelligence was responsible or would we infer errosion? [...] All Tom has to do is to demonstrate an algorithm can originate without the help of an intelligent agency." Quite the contrary. It's time for you to do a formal design inference. The cave-wall appeal to intuition is pathetic. The etchings themselves would probably merit a design inference. Tom English
Joseph: “I am still waiting for a reference that demonstrates unintelligent, blind/ undirected (non-goal oriented) processes can produce an algorithm.” Tom English: I challenge you to find any peer-reviewed publication whatsoever that refers to “unintelligent, blind/ undirected (non-goal oriented) processes.” Wow- you sure did tell me- LoL! Do you understand what is being debated? The debate is unintelligent, blind/ undirected (non-goal oriented) processes - ie the anti-ID position- vs. intelligent, directed (goal oriented) processes- ie the ID position. Now if you are telling me there aren't any references to the former then one has to wonder why anti-IDists say their mechanism is well founded in peer-reviewed literature. So what we have here is Tom English doubting my claim that "algorithm directly implies intelligence" but offering absolutely nothing to show his doubt has any merit. Yet he continues to babble on.

Tom English: I say that enumerating binary instruction sequences in “dictionary” order (0, 1, 00, 01, 10, 11, 000, …) is mechanical, not intelligent. It does NOT matter what you say. It matters what you can demonstrate. Can you demonstrate ANY enumerating sequence arising from scratch, without the help of an intelligent agency? IOW if we saw that sequence etched in the wall of a cave could we safely infer intelligence was responsible or would we infer erosion? BTW intelligence can be defined as that which can create counterflow.

Tom English: You effectively admit that a) you cannot hold your own in the discussion and b) you do not know how to deal with it appropriately. True projection at its finest. Tom's response is typical when one cannot support the claims made. All Tom has to do is to demonstrate an algorithm can originate without the help of an intelligent agency. Until that is demonstrated it is more than safe to infer any algorithm observed owes its origins to some intelligent agency. And therefore algorithm directly implies intelligence.

Tom English: Are you aware that the phrase “blind watchmaker-type process” has no clear meaning? Are you aware that you have dumped it onto 96 pages on the Web? Go whine to Dawkins and all of evolutionism's faithful. The meanings of evolution from "Darwinism, Design and Public Education":

1. Change over time; history of nature; any sequence of events in nature
2. Changes in the frequencies of alleles in the gene pool of a population
3. Limited common descent: the idea that particular groups of organisms have descended from a common ancestor.
4. The mechanisms responsible for the change required to produce limited descent with modification, chiefly natural selection acting on random variations or mutations.
5. Universal common descent: the idea that all organisms have descended from a single common ancestor.
6. “Blind watchmaker” thesis: the idea that all organisms have descended from common ancestors solely through unguided, unintelligent, purposeless, material processes such as natural selection acting on random variations or mutations; that the mechanisms of natural selection, random variation and mutation, and perhaps other similarly naturalistic mechanisms, are completely sufficient to account for the appearance of design in living organisms.
You do know that Dawkins wrote a book called "The Blind Watchmaker"? And one more thingy: FUNCTION implies a definite end or purpose that the one in question serves or a particular kind of work it is intended to perform (the function of language is two-fold: to communicate emotion and to give information -- Aldous Huxley). Joseph
"It case Tom missed this: Perhaps “the blind watchmaker” deserves a fellowship also-> That is if said watchmaker can demonstrated to provide “any sequence of operations which can be performed by a Turing-complete system”." No, JG, I saw the original post and found it twice as sad when you quoted yourself. Your reference to the blind watchmaker is entirely gratuitious. Your post is nothing but a sneer. You effectively admit that a) you cannot hold your own in the discussion and b) you do not know how to deal with it appropriately. Are you aware that the phrase "blind watchmaker-type process" has no clear meaning? Are you aware that you have dumped it onto 96 pages on the Web? Tom English
Joseph: "I am still waiting for a reference that demonstrates unintelligent, blind/ undirected (non-goal oriented) processes can produce an algorithm." I challenge you to find any peer-reviewed publication whatsoever that refers to "unintelligent, blind/ undirected (non-goal oriented) processes." You have ensured there are no references by setting up a bizarre requirement. I have looked into your use of the phrase on the Web (57 hits), and you seem to think it is really a grand challenge when you throw it out there. But I doubt you can pin down what you mean by the phrase. 1. What is an an unintelligent process? How can we tell if a process is unintelligent? 2. What is a blind process? undirected (non-goal oriented) process? How do they differ from one another? 3. How can a blind or undirected process be intelligent? Note: The meaning of "intelligent" is far from self-evident. Ask Bill Demski. I say that enumerating binary instruction sequences in "dictionary" order (0, 1, 00, 01, 10, 11, 000, ...) is mechanical, not intelligent. The enumeration (see the Wiki article) has no purpose but to enumerate. It knows nothing about algorithms, and it never terminates. Some of the instruction sequences are algorithms for processing a null input. Perhaps we should look at relevant meanings of "purpose" from dictionary.com: 1. the reason for which something exists or is done, made, used, etc. 2. an intended or desired result; end; aim; goal. The enumeration has no purpose (2), and the algorithms it generates have no purpose (1). The algorithms do have function, however. I think you have been confusing function with purpose. Tom: "By definition, every algorithm has meaning. The meaning of an algorithm is what it instructs the Turing-complete system to do." Joseph: "From that we can infer algorithms do have a purpose- to instruct the Turing- complete system." You are equivocating on "meaning." Back to dictionary.com: 2. to intend for a particular purpose, destination, etc. 4. to have as its sense or signification; signify I am using the word in a sense close to the fourth. You are using it in the second sense. I hope you were genuinely confused, and did not equivocate purposefully. Intentional equivocation is a slimy tactic employed by people who know deep down they are űberdopes, but who will do anything to conceal the fact from themselves and the world. Tom English
I am still waiting for a reference that demonstrates unintelligent, blind/ undirected (non-goal oriented) processes can produce an algorithm. Also a reference for such processes producing enumerated sequences would be helpful. Tom English: By definition, every algorithm has meaning. The meaning of an algorithm is what it instructs the Turing-complete system to do. From that we can infer algorithms do have a purpose- to instruct the Turing- complete system. Thanks again. Tom English: P.S.–Joseph, when are you going to parse the Dembski quote you say I don’t understand? After you start substantiating your posts. Which means I will never have to... In case Tom missed this: Perhaps “the blind watchmaker” deserves a fellowship also-> That is if said watchmaker can be demonstrated to provide “any sequence of operations which can be performed by a Turing-complete system”. Joseph
P.S.--Joseph, when are you going to parse the Dembski quote you say I don't understand? Tom English
Joseph: "OK wait-> Does “without useless instructions” mean they were with useful instructions?" In a teleonomic, not teleological, sense. Tom English
Tom: "Infinitely many algorithms read no input, run for a long time, and halt without writing output. How do you get a purpose out of that?" Joseph: "They are doing something, as opposed to nothing which can be quite easily accomplished without an algorithm." But such algorithms serve no function. They do not process information. How can something that serves no function have a purpose? "Do we have direct observation of intelligent agencies producing algorithms? Yes." Does this imply that only an intelligent agent can generate an algorithm? No. "Can intelligent agencies produce meaningless algorithms? Yes" By definition, every algorithm has meaning. The meaning of an algorithm is what it instructs the Turing-complete system to do. "Can intelligent agencies produce algorithms that produce algorithms? Yes" Does this imply that simple enumeration of sequences of instructions in lexicographic order does not generate algorithms? No. "Have you ever observed unintelligent, blind/ undirected (non-goal oriented) processes producing one?" Enumeration of sequences of instructions produces both algorithms and non-algorithms. This is as blind a process as one can define. To get some algorithms from the enumerated sequences, submit each instruction sequence to the Turing-complete system for execution without input. If execution terminates without error within bounded time, the sequence is an algorithm. If execution exceeds the time bound, assume the sequence is not an algorithm. This undirected procedure will not find all algorithms, but it will find indefinitely many. "And when unintelligent, blind/ undirecetd (non-goal oriented) processes start producing random (number) generators from scratch please be sure to let us know." Why? The random numbers used in an evolutionary computation can and actually should be input. But if you insist, first you will have to define what a random number generator is. You should read Bill Dembski's "Randomness by Design," available at http://designinference.com . By the way, see if you can find his error in defining the Glish* language. It causes him to contradict a well known result in the theory of Solomonoff-Chaitin-Kolmogorov complexity. Tom English
Tom English: The set of algorithms without useless instructions is infinite, so let’s say “infinitely many.” OK wait-> Does "without useless instructions" mean they were with useful instructions? Joseph
Tom English: Infinitely many algorithms read no input, run for a long time, and halt without writing output. Yes I know. Tom English: How do you get a purpose out of that? They are doing something, as opposed to nothing which can be quite easily accomplished without an algorithm. Joseph: “Thanks Tom- Thanks for confirming my point.” Tom English: Joseph’s point? “First just the word algorithm directly implies intelligence- look it up.” Absolutely. Do we have direct observation of intelligent agencies producing algorithms? Yes. Can intelligent agencies produce meaningless algorithms? Yes Can intelligent agencies produce algorithms that produce algorithms? Yes Have you ever observed unintelligent, blind/ undirected (non-goal oriented) processes producing one? Has anyone EVER demonstrated that such processes can produce an algorithm? Reference it or admit algorithm directly implies intelligence. And when unintelligent, blind/ undirected (non-goal oriented) processes start producing random (number) generators from scratch please be sure to let us know. ----------------------------------------------------------------------------------- “That does NOT follow from the quote I posted.” Tom English: You’ve said this a couple times before. Two and through. Which is bad for a situation which called for a "one and done". Joseph
Tom: "Almost all algorithms" The set of algorithms without useless instructions is infinite, so let's say "infinitely many." Tom English
Joseph, "That does NOT follow from the quote I posted." You've said this a couple times before. But you have never parsed the quote for me. Why not? Here it is again. Dembski: “Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.” Tom English
Wikipedia: “Thus, an algorithm can be considered to be any sequence of operations which can be performed by a Turing-complete system.” Joseph: “'Operations that can be performed' still demonstrates purpose. The purpose being to perform the operations specified.” No, it demonstrates definition. The phrase says in essence that any algorithm is defined with respect to some Turing-complete system. An algorithm need not have any purpose. Infinitely many algorithms read no input, run for a long time, and halt without writing output. How do you get a purpose out of that?

Wikipedia: "Because an algorithm is a precise list of precise steps, the order of computation will almost always be critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting ‘from the top’ and going ‘down to the bottom’, an idea that is described more formally by flow of control." Joseph: "Thanks Tom- Thanks for confirming my point." Joseph's point? "First just the word algorithm directly implies intelligence- look it up." Evidently you are backing off from "directly," and are taking an indirect approach. You seem to hint at irreducible complexity in algorithms. Bad news here.

Let M be a Turing-complete system with algorithms expressed as binary strings. This is purely for convenience -- all Turing-complete systems are equivalent in computing power. Almost all algorithms for M are algorithmically random or close to it. To see this, consider that any algorithm A for M that is not random or close to it can be modified by adding random instructions that are never executed. Call the modified algorithm B. Keep in mind that B computes the same function as A, so B is algorithmic. The number of algorithms B we can generate randomly from A is countably infinite, and for almost all B the length of B is much greater than the length of A, and this implies B is close to algorithmically random. In a rigorous proof, I would count more carefully, but this is good enough for here.

To recap, in almost all algorithms a relatively small number of used instructions is lost in an ocean of random, unused instructions. Thus almost all algorithms exhibit a high degree of randomness, not a high level of complex specified information. If you do not understand the quasi-formal argument, Joseph, please have the grace to ask questions rather than attack. Tom English
Perhaps "the blind watchmaker" deserves a fellowship also-> That is if said watchmaker can demonstrated to provide "any sequence of operations which can be performed by a Turing-complete system". Joseph
Tom English presents: Wikipedia: “Thus, an algorithm can be considered to be any sequence of operations which can be performed by a Turing-complete system.” "Operations that can be performed" still demonstrates purpose. The purpose being to perform the operations specified. But I like this passage: Because an algorithm is a precise list of precise steps, the order of computation will almost always be critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting 'from the top' and going 'down to the bottom', an idea that is described more formally by flow of control. Thanks Tom- Thanks for confirming my point. And the following is just wishful speculation: Tom English: Genetic algorithms are highly abstract models of biological evolution. While they do not predict much about biota, they do serve to validate key aspects of evolutionary theory. Why is it wishful speculation? We do NOT know what makes an organism what it is so we do not know what can cause the changes required if all of life's diversity owed its collective common ancestry to some unknown population(s) of single-celled organisms that just happened to have the ability to asexually reproduce. With GAs the programmer can make those "organisms" be anything he/ she wants them to be. That programmer can also define the parameters of what allows change and what constitutes "new/ evolved" populations. Now back to Tom's backpedaling: Read Dembski closely. Necessity is chance. That does NOT follow from the quote I posted. And YOU were deriving your post from that quote. Also if you had been following ID even the laws of nature had to have arrived via chance in the anti-ID scenario. IOW it IS "sheer-dumb-luck" through and through. Joseph
DaveScot, "Like rule based decision making in software was something no one had ever done before some marketing genius decided to call it “Expert Systems” to see if it would sell better." The term "expert system" was coined in academia in the 1970's, not in industry. Not all expert systems are rule-based, and in fact the first expert system, Dendral (1966), was not. You might want to check out the Post production system, first published by the mathematician Emil Post in 1943. Another model of rule-based computation is the Markov algorithm (1960?). Markov was a mathematician. "We implimented genetic algorithms, artificial intelligence, and expert systems in that software [25 years ago] before anyone ever heard the terms and we weren’t pompous enough to think we were inventing anything new." I am confused as to why you listed both expert systems and artificial intelligence. Expert systems are a kind of artificial intelligence. The term "artificial intelligence" goes back to 1956. Holland wrote about genetic algorithms in 1973. His book came out in 1975. The term "expert system" was around at least as early as Buchanan's Mycin (developed in the early 1970's). All three terms were well known 25 years ago (1981). I am confused. "Imagine how we laugh when some young idiot or clueless academician picks up something we were doing when they were still crapping gerber baby food and gives it some hoity-toity name like it’s something new." Programmers rarely invent as much as they think they have. Even if something has been pulled together with baling wire and chewing gum in industry, it is important for someone to reduce it to engineering (not hacking) practice. "So now here comes Tom with a GA working on a Steiner Tree with 6 points and one connection layer. Imagine me giggling over that trivial POS when 25 years ago I was coding software that did the same thing only with 60 thousand points to connect and anywhere from one to a dozen connection layers." The point of the exercise is not to demonstrate how large a problem instance the GA can solve. The point is to give a small, easy to understand example of a GA generating irreducible complexity. The program is intended as a proof of principle. Less is more. Tom English
Tom English While fishing one of my own comments out of the spam filter (I used the word "pill" which is blacklisted) I noticed one of yours in there and recovered that as well. I don't believe it was intentional. You probably used a blacklisted word like I did. DaveScot
Oops - that should be "So now here comes Dave Thomas with a GA working on a Steiner Tree with 6 points" in comment 38 instead of "here comes Tom". Please pardon any confusion that may have caused. DaveScot
Tom English with an as yet undetermined appendage writes: When you are awarded your MacArthur Fellowship you can explain that to former MacArthur Fellow John Holland, to whom the term “genetic algorithm” is due. I was awarded millions of dollars in incentive compensation at Dell while we took it from $1B to $40B in revenue in the 1990's. No fellowships though. It never occurred to me to ask Michael for one. Imagine how sad I feel as I sit on my yacht writing this. Boo hoo. DaveScot
Tom Spare me. Were you a programmer when "Expert Systems" were all the fad? I was. Before and after. Like rule based decision making in software was something no one had ever done before some marketing genius decided to call it "Expert Systems" to see if it would sell better. Artificial Intelligence is the same story. 25 years ago I was working in the CAD/CAM industry with what's called auto-router software. This is software that undertakes the enormously complex task of finding a way to route traces on a circuit board in the least number of copper layers. Circuit board cost rises exponentially as number of layers increases and production yield goes in the opposite direction. Rules for clearances between traces, width of thru holes, etc. all adjustable with the same tradeoffs. We implemented genetic algorithms, artificial intelligence, and expert systems in that software before anyone ever heard the terms and we weren't pompous enough to think we were inventing anything new. Imagine how we laugh when some young idiot or clueless academician picks up something we were doing when they were still crapping Gerber baby food and gives it some hoity-toity name like it's something new. Please, please spare me. So now here comes Tom with a GA working on a Steiner Tree with 6 points and one connection layer. Imagine me giggling over that trivial POS when 25 years ago I was coding software that did the same thing only with 60 thousand points to connect and anywhere from one to a dozen connection layers. Please, please, PLEASE spare me. I'm begging you. My sides are aching and I'm spitting beer all over my screen from laughing so hard. DaveScot
Note to readers: Everything scientists say about nature is just a MODEL of reality, not reality itself. There is never any way of knowing if science has gotten at reality. This is as much a consequence of the empiricism of science as the methodological naturalism. Genetic algorithms are highly abstract models of biological evolution. While they do not predict much about biota, they do serve to validate key aspects of evolutionary theory. I have read hundreds of papers on evolutionary computation, but no discussion of simulation of evolution has surpassed one by Wirt Atmar I first read in 1994. If you want to understand what simulation models have to do with biology and engineering, I recommend it highly. It is also quite amusing to see Wirt slam Dawkins. The paper is here: http://www.aics-research.com/research/notes.html Tom English
Joseph, Samuel Taylor Coleridge wrote, “Until you understand a writer’s ignorance, presume yourself ignorant of his understanding.” I learned that while studying for my first master's degree, which was in English. Dembski: “Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.” Tom: "Do you have the least notion of how outrageous it is to define pure necessity as chance?" Joseph: "Do you know how outrageous it is to even think that is what was posted? I take it “English” is your second language." Read Dembski closely. Necessity is chance. "Ordinary" chance is chance. Combinations of necessity and chance are chance. Now read what I wrote. Pure necessity is chance. "Pure" is what grammarians call an intensifier. Tell me if you still don't understand. "Perhaps you can back up what you say with something of substance. I won’t be holding my breath…" I gave you substance, but you evidently did not recognize it for that. You went to dictionaries, of all places, as sources of authority that could prove me wrong. That is about as wise as going to the dictionary to find out the meaning of "Darwinism." Tom: "One notion is that any program (sequence of instructions) for a universal computer (Turing-complete system) is an algorithm." Wikipedia: "Thus, an algorithm can be considered to be any sequence of operations which can be performed by a Turing-complete system." http://en.wikipedia.org/wiki/Algorithm Note that the Wiki quote comes from the "Formalization of Algorithms" section. Your dictionary definitions are informal. Unfortunately, if you want to make big claims about algorithms, you need a formal understanding. Do you want to challenge me on equating programs with sequences of instructions? universal computers with Turing-complete systems? Please check Wiki before doing so. Tom English
Note to readers: Just because "genetic" is in the term genetic algorithm does NOT mean it (any particular GA discussed) reflects biological reality. Joseph
"Sorry guys, but GAs are still child’s play. Real programmers don’t give hoity-toity names like “Genetic Algorithm” to ways of finding answers that just about every child invents on his own recognizance without being taught. That’s just a really lame attempt by greenhorns to appear smart and innovative." When you are awarded your McArthur Fellowship you can explain that to former McArthur Fellow John Holland, to whom the term "genetic algorithm" is due. Perhaps my son was a dim-wit like his dad, but I am certain I never saw him 1. Maintain a population of potential solutions, each written as a binary string 2. Record the fitness of each individual in the population 3. Use a fitness-weighted roulette wheel to select parents 4. Generate a random number to decide where to cross over two parent strings 5. Repeatedly flip a biased coin to decide which bits in the offspring to mutate 6. Sort the offspring by fitness and merge them with the parents 7. Decide how to cull excess individuals from the population My son is grown now, but I'll be sure to have a close look at the kids in the park when I'm walking the dog. Tom English
Tom English: I imagine you feel like an algorithm must have a purpose, but it just ain't so. Yeah right. You have shown you can't even read a quote properly. And please don't tell me what I feel.

Merriam-Webster (online): algorithm: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer

Compact Oxford (online): a process or set of rules used in calculations or other problem-solving operations.

Cambridge International Dictionary of English (online): a set of mathematical instructions that must be followed in a fixed order, and that, especially if given to a computer, will help to calculate an answer to a mathematical problem

Wiktionary (online): Any well-defined procedure describing how to carry out a particular task

Wordsmyth (online): a completely determined and finite procedure for solving a problem, esp. used in relation to mathematics and computer science.

The American Heritage® Dictionary of the English Language, Fourth Edition, 2000 (online): A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.

That should be enough however I doubt even that will get through. So Tom, I don't feel algorithms have a purpose it is obvious that they do. Perhaps you can back up what you say with something of substance. I won't be holding my breath... Joseph
DaveScot: "We programmers call this the “brute force method” because all it does is takes the simple method of trial and error and multiplies its effectiveness by the computer’s speed at conducting a trial and evaluating the result. No finesse. Just brute force." We computer scientists would never call a GA a brute-force method. From Wiki, "brute-force search is a trivial but very general problem-solving technique, that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement." Brute-force search is infeasible in Dave Thomas' problem. The space of chromosomes is too large to enumerate. If a brute-force search goes through the chromosomes (bit strings) in the order 00 ... 00 00 ... 01 00 ... 10 etc., the computer will turn to dust before finding a chromosome comparable in fitness to those found by the GA. "[Trying to correct yourself] Each trial is not necessarily a totally random guess." No, brute-force search is not random guessing. In fact, a random sample of chromosomes will probably yield better results. Let N be the number of fitness evaluations done in a single run of Dave's GA. Draw a chromosome randomly (i.i.d. uniform) from the space of chromosomes N times, keeping track of which has the highest fitness. Note that random sampling and the GA use an identical fitness function. The fitness function is not part of the algorithms. It is essentially an external black box. The fitness function is no more designed for use by the GA than it is for use by random sampling. The GA is not designed to accommodate the fitness function, and in fact can be used with an infinitude of other fitness functions. The important question, then, is why does the GA do better than random search? Averaged over all fitness functions, the GA does not do better than random search (Wolpert and Macready, "No Free Lunch in Optimization"). Random sampling is oblivious to the topography of the fitness landscape. Any advantage of the GA over random sampling for a problem reflects a degree of "GA-friendliness" of the fitness landscapes corresponding to problem instances. There is inherent order in Dave's problem that is reflected in the topography of the fitness landscapes. The order comes from the physical system itself, not Dave's mind. Representation of the system seems a minor issue to me -- I cannot see what shuffling the genes would do but increase the disruptiveness of crossover. Tom English
Or 400,000 in scholar: http://scholar.google.com/scholar?hl=en&lr=&q=genetic+algorithms&btnG=Search franky172
Sorry guys, but GAs are still child’s play. Real programmers don’t give hoity-toity names like “Genetic Algorithm” to ways of finding answers that just about every child invents on his own recognizance without being taught. Excuse me? "Real programmers"? http://www.google.com/search?hl=en&lr=&q=scholar%3A+genetic+algorithms&btnG=Search Here are 800,000 articles about GA's. I guess these authors aren't "real programmers"? franky172
It has been fairly pointed out at the Panda forum After The Bar Closes that Dave Thomas' technique is not pure trial and error. This is true. Each trial is not necessarily a totally random guess. After the first trial the child's game of "warmer/colder" is employed to evaluate the trial results and solutions that are warmer are preferred over those that are colder as the starting point for the next trial. Sorry guys, but GAs are still child's play. Real programmers don't give hoity-toity names like "Genetic Algorithm" to ways of finding answers that just about every child invents on his own recognizance without being taught. That's just a really lame attempt by greenhorns to appear smart and innovative. I'm trying really hard to avoid being mocking and contemptuous in my reincarnation here but you church burnin' ebola boys fellows at ATBC are making it difficult. I can only bite my tongue so much before it gets bit clean through, if you get my drift, and I think you do. DaveScot
Salvador & Dave, sorry about the misquote. Tom English
Joseph: "First just the word algorithm directly implies intelligence- look it up." "Has anyone ever observed unintelligent, blind/ undirected (non-goal oriented) process produce an algorithm? No." Look it up, indeed. There are multiple takes on the notion of an algorithm. One notion is that any program (sequence of instructions) for a universal computer (Turing-complete system) is an algorithm. One can generate algorithms randomly. It is not generally possible to determine what they are "good for." I imagine you feel like an algorithm must have a purpose, but it just ain't so. Tom English
Just so we are clear- algorithm is NOT Al Gore with rhythm.... :) Joseph
“Read “No Free Lunch”- page 14 last paragraph that continues onto page 15. ‘Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.’” Tom English: Do you have the least notion of how outrageous it is to define pure necessity as chance? Do you know how outrageous it is to even think that is what was posted? I take it "English" is your second language. CHANCE includes necessity. I did NOT post what necessity is defined as. This is the major problem with debating anti-IDists- IDists say one thing but when it gets to the anti-IDists they perceive something else. Pathetic. Joseph
Dave -- two additional problems you left out: 1) Certain areas cannot be explored because it causes the organism to die on its way there 2) In order to do trials, you have to have a semantic idea of what you are trying to do. This is in addition to the hardware requirements johnnyb
Tom wrote: Salvador: “The only answer I’ve heard from chance worshippers” Tom, Those were DaveScot's words, not mine. Sal scordova
Salvador: "The only answer I’ve heard from chance worshippers" Necessity worshippers. Necessity is not chance. Tom English
Joseph, "Read “No Free Lunch”- page 14 last paragraph that continues onto page 15. 'Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.'" Do you have the least notion of how outrageous it is to define pure necessity as chance? Why would anyone do that, unless he were engaged in obfuscation? I have not checked closely enough to be sure, but I suspect that Bill was trying to patch up a fundamental error (the one Caligula alludes to) in the explanatory filter of The Design Inference without admitting the error outright. You seem proud to dredge up the quote. I generally think well of Bill, but this is him at his worst. Tom English
Mike1962, Regarding rigorous analysis, I am working on another GA by Elsberry and Shallit which I will analyze. Salvador scordova
Mike1962 asked: Does his test falsify Dembski’s CSI filter approach? I would like to see this handled in a rigorous way. No, because Dave would have to demonstrate that a purely stochastic process could create the entire simulation. Just because portions of the simulation are stochastic does not mean the system on the whole is undesigned. A shotgun pattern is stochastically described. If someone fired his shotgun at his neighbor's pet rat, it does not mean this was an undesigned act on the part of the shooter merely because the shotgun pellets have a stochastically described pattern. Salvador scordova
BDelloid: Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed? And that this algorithm is likely sufficient to explain evolution but the algorithm is the design product in question? I can only speak for myself but that's not what I'm suggesting. I'm suggesting the hardware upon which trials may be carried out is designed. Of course RM+NS is sufficient once the capacity to conduct trials and evaluate errors has been provided. However, the sufficiency is a probabilistic matter illustrated by the proverbial million monkeys on a million typewriters who, given enough time, will reproduce all the works of Shakespeare.

The remaining problem for RM+NS, while not as great as explaining where the trial and error hardware came from, is that there don't appear to be enough probabilistic resources for it to have discovered all these wonderful solutions like flagella and camera eyes and immune systems and etcetera. Mutations that are beneficial are exceedingly rare and natural selection is largely lost in the noise of other factors affecting survival. Maybe a trillion years of RM+NS could produce some of these systems but just hundreds of millions of years seems to border on the impossible. Or maybe hundreds of millions of years on millions of planets could collectively produce these systems but not on just one planet in the time available. Dembski is all about trying to formally quantify the odds. Possibly an impossible task and certainly a task that can never be completely exhaustive as one can never prove a negative (i.e. that one knows ALL the possible probabilistic resources and has factored them in). On the other hand, it may be provable beyond a reasonable doubt and that's really what science is all about.

The goal Thomas achieved certainly WAS specified. The goal was to find the shortest series of line segments connecting all the specified points given a finite amount of time for the trial and error search to run. We programmers call this the "brute force method" because all it does is take the simple method of trial and error and multiply its effectiveness by the computer's speed at conducting a trial and evaluating the result. No finesse. Just brute force. This brute force algorithm is what is proposed as the driver of creative evolution. The problem with it is twofold. The origin of the platform upon which the trials in creative evolution are conducted and evaluated is a problem larger than the results ostensibly obtained by the search. And the search didn't have enough time to be reasonably likely to find the solutions that we see (not enough brute force). DaveScot
bdelloid: Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed ? Any algorithm strongly suggests intelligence. Do anti-IDists even understand the word "algorithm"? Apparently not. Has anyone ever observed unintelligent, blind/ undirected (non-goal oriented) process produce an algorithm? No. Joseph
I have been pondering Dave Thomas' simulation and Salvador's response ever since Sal started a similar thread. Sal's position is very simple, I think. He suggests that any "fitness" formula is, in itself, front loading. I have come to believe that Sal is right. I consider the challenge of abiogenesis via purely natural means. For such to be so, somewhere there had to be an environment that naturally produced a stew of organic chemicals. (Of course, scientists haven't figured that one out yet.) Then one day a molecule, or small community of molecules, had to form which could -- therefore did -- reproduce itself. The day that happened, no party was thrown! It just happened. Now, the nature of that reproduction is that the reproductive product must have been similar, but not identical to, the original. If the reproduction also reproduced, no party. If the reproduction did not reproduce, again, there would be no funeral. All this to say, at the early stages of life, "survival of the fittest" had not been established, rather "survival of the surviving" was the only filter. If it survived to reproduce, it survived. If it didn't, it didn't and no party was thrown either way. It seems to me therefore that if a software simulation were to be made, it would have to have only one filter -- survival. A small piece of "reproducing" code would have to be written. Size-wise, it would have to realistically be feasible in light of UPB. I think that because computers are so darned disciplined, a random pot stirring program would have to interfere with the world that contains the reproducing code. Initial success would be seen as a reproducing code in an actively destructive environment which "improves itself" by making itself somehow fundamentally more able to survive. (I'd love to see it pull off an active error correction algorithm myself.) Ultimate success would be for this e-organism to develop into multiple competing strains, and establish for itself a sense of "survival of the fittest." If NDE is true, then such a sim is possible. bFast
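bFast's "survival of the surviving" world can be sketched directly. Everything below is a guess at a minimal version of what he describes -- strings that copy imperfectly into an environment that destroys at random -- and the seed string, rates, and sizes are arbitrary choices; note there is no fitness function anywhere:

    import random

    def mutate(genome, rate=0.01):
        # Imperfect copying: each bit has a small chance of being rewritten.
        return ''.join(b if random.random() > rate else random.choice('01')
                       for b in genome)

    world = ['1010110011010101'] * 10          # seed population of replicators
    for generation in range(100):
        world += [mutate(g) for g in world]    # every survivor attempts one copy
        # The "pot stirring" environment: random culling, blind to content.
        world = [g for g in world if random.random() < 0.5]
        if not world:
            print('extinct at generation', generation)
            break
    else:
        print(len(world), 'survivors;', len(set(world)), 'distinct strains')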
Look, the solution to this is simple. Change the selection algorithm to be visual acuity and see if a full-functioning eye develops. If it doesn't do that, then he has to explain why his code only produces results for certain selection criteria but remains ateleological. johnnyb
To those that think Intelligent Design is a new idea, consider the following from 1950. In comparing the remarkable similarity that existed between the skulls of marsupial and placental saber-toothed cats, Otto Schindewolf captioned the figures of the two forms with the following: "The skulls of carnivorous marsupials and of true carnivores show an extremely surprising similarity in over all habitus and, in particular, in the unusual overspecialization of the upper pair of canines. The similarities of form are present even in such details as the structure of the large flange on the lower jaw, DESIGNED TO GUIDE and protect the upper canines." (my emphasis). Problems in Paleontology, page 260. This provides elegant direct support for the Prescribed Evolutionary Hypothesis which is why I included that figure and caption in my paper "A Prescribed Evolutionary Hypothesis." Rivista di Biologia: 155-166, 2005. Of course that was long before "Design" became a dirty word in the lexicon of mutation happy, natural selection inebriated Darwinian mysticism. "Is there anything whereof it may be said, See, this is new? It hath been already of old time, which was before us." Ecclesiastes "A past evolution is undeniable, a present evolution undemonstrable." John A. Davison
Why should we have a random fitness function? The only fitness function that matters in nature is survival. That fitness function has already been shown to produce important evolutionary changes. Are you folks suggesting that the RM + NS algorithm in nature is the thing that is designed? And that this algorithm is likely sufficient to explain evolution but the algorithm is the design product in question? I think you folks misunderstand the challenge. The goal that Thomas achieved wasn't SPECIFIED - so it wasn't front loaded. If his algorithm had a pre-specified Steiner tree that was defined by 1) the number of internal nodes 2) the location of these internal nodes 3) the number of branches between each internal node and 4) the identity of nodes connected by each branch, and his fitness function measured an index of difference between a proposed Steiner tree and the real Steiner tree, then this would have been an example of front loading. In this case, the only fitness measure is length, which is in no way pre-specified in regard to the final Steiner tree. Therefore, he demonstrated that a random process can achieve a pre-specified goal of very low probability. Which is the same thing that RM + NS has been shown to do. bdelloid
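bdelloid's point that length is the only fitness measure can be shown concretely. A sketch, assuming a candidate is encoded as a list of points (the fixed terminals plus whatever Steiner points it proposes) and a list of edges; nothing in the scoring encodes a target topology:

    import math

    def total_length(points, edges):
        # Fitness of a candidate network: the summed length of its segments.
        # Shorter is fitter; no pre-specified tree appears anywhere.
        return sum(math.dist(points[i], points[j]) for i, j in edges)

    # Example: four corners of a unit square plus two proposed Steiner points.
    terminals = [(0, 0), (1, 0), (0, 1), (1, 1)]
    steiner = [(0.29, 0.5), (0.71, 0.5)]
    pts = terminals + steiner
    edges = [(0, 4), (2, 4), (4, 5), (1, 5), (3, 5)]
    print(total_length(pts, edges))   # about 2.73; the optimum is 1 + sqrt(3)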
Caligula: Well, it is hardly relevant what Dembski writes if it isn’t reflected in his EF. It is very relevant and it is reflected in THE EF. Also before using something it is very relevant to read the instructions. Caligula: Are you saying that rolling 5, 4, 1, 4 is both a “chance” event AND a “necessity” event? I explained that already. The dice fall and roll due to gravity and inertia. What faces up is chance. Caligula: You either accept the event as mere “necessity” or you exclude “necessity” and come up with mere “chance”. Only someone out of touch with reality would do such a thing. Caligula: Besides, EF analyses “events” which strongly hints at single-step processes. Since when? I have never heard of that except from those who appear to know the least about it. Go figure. Caligula: When considering living organisms, it is even more obvious that you can’t rule out a model which includes a multi-step process with many intermediate forms, each intermediate produced from its predecessor by the combination of chance and necessity. When considering living organisms you had better be prepared to show how they arose from "sheer-dumb-luck" ie the anti-ID position of unintelligent, blind/ undirected (non-goal oriented) processes, BEFORE you go making claims about their subsequent evolution. Caligula: How about including a LOOP evaluating each step individually before jumping into the Design hypothesis? There isn't any "jumping" and it is a design inference. That inference can either be refuted or confirmed with future knowledge- as can any scientific inference. Joseph
Mike1962: What Dave's challenge *does* show is that design detection isn't so easy, and may be impossible in a frontal sort of way.

Dave would first have to give us an example of objects that weren't designed. The whole point of my post was that he didn't do that. His result was intelligently designed. It was his solution. He designed a software tool to help him find a solution to a specific problem. The result of an intelligent agent using a tool to assist in problem solving is not an example of an object that wasn't designed. Others used calculators and spreadsheets as tools. Some just used intuition, pencil, and paper. But make no mistake: every solution was intelligently designed, including those output by Thomas's algorithmic trial-and-error tool. All his program did was leverage the number-crunching speed of a modern computer to work HIS intelligently designed search algorithm.

DaveScot
I'm finding it hard to grasp exactly what "The Design Challenge" is all about. What is the final product the genetic algorithms produce?

BenK
"But of course there is front loading in the example because there is a goal in mind. The goal is fixed and doesn’t move and the selection criteria have been choosen to move towards that goal." That is right. Which is why the test may have succeed in what Thomas wanted to demonstrate, but which fails to be very interesting to me. Which is why I'd like to see randomly generated fitness algorithms build up a complex functioning virtual machine of some kind. :) Does his test falsify Dembski's CSI filter approach? I would like to see this handled in a rigorous way. mike1962
What Dave's challenge *does* show is that design detection isn't so easy, and may be impossible in a frontal sort of way. I'm open to ID, not because anyone has demonstrated that things like the flagellum must be designed, but because of the failure of MET/NDE to give a sufficient account of what they claim to explain. MET is the best that methodological naturalism can do so far, and it isn't very impressive to me.

mike1962
Joseph: Well, it is hardly relevant what Dembski writes if it isn't reflected in his EF. Are you saying that rolling 5, 4, 1, 4 is both a "chance" event AND a "necessity" event? Well, how could one reach such a conclusion by using the EF? You either accept the event as mere "necessity" or you exclude "necessity" and come up with mere "chance". You can't have it both ways. It makes little difference to state, outside the model, that *ahem* although we formally got CHANCE, there was in fact a bit of NECESSITY involved, as everyone should know. We would simply no longer be speaking of these two terms as they were defined and used in the model.

Besides, the EF analyses "events", which strongly hints at single-step processes. Most objects of study, including e.g. human artefacts, writings, and even crystals, are agreed even by IDers to be products of multi-step processes. The intermediate forms of multi-step processes are typically not discussed, however, and the mere end result is studied as a single "event". When considering living organisms, it is even more obvious that you can't rule out a model which includes a multi-step process with many intermediate forms, each intermediate produced from its predecessor by the combination of chance and necessity, i.e. non-random selection acting upon random mutation. How about including a LOOP evaluating each step individually before jumping to the Design hypothesis?

caligula
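For what it's worth, the loop caligula proposes can be sketched in a few lines. This is a toy rendering in Python, under the assumption that the filter reduces to two yes/no tests; both predicates are hypothetical stand-ins, not Dembski's actual criteria:

```python
def explanatory_filter(event, is_regular, is_probable):
    # Toy version of the filter as usually summarized:
    # necessity -> chance -> design. The two predicates are placeholders.
    if is_regular(event):
        return "necessity"
    if is_probable(event):
        return "chance"
    return "design"

def filter_over_steps(steps, is_regular, is_probable):
    # caligula's suggested modification: evaluate each intermediate step
    # in a loop, instead of treating the end product as a single event.
    verdicts = [explanatory_filter(s, is_regular, is_probable) for s in steps]
    return "design" if "design" in verdicts else "chance and necessity"
```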
That's it, isn't it, Mike. I had a read through the thread over at PT, and the complaint is that the front loading of the answer cannot be shown in the algorithm. But of course there is front loading in the example because there is a goal in mind. The goal is fixed and doesn't move and the selection criteria have been chosen to move towards that goal. How can anybody have the gall to suggest that this is not a teleological process? If you have an iterative process that is working towards a pre-specified goal, then you have an example of teleology. Are these people simply idiots to miss this point?

jwrennie
To Caligula: read "No Free Lunch", page 14, last paragraph, which continues onto page 15.
Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination.
Joseph
"It seems like an obvious point doesn’t it dave. If you engineer your trial and error approach properly, then it can get results that you are looking for." Thomas's algorithm shows that a designed set of fitness criteria with random input can create novel and unforseen complexly-specified "entities." however, it is certainly not a "blind" system (unless the fitness algorithms themselves were determine by chance, which in Thomas's case, are not), and it certainly does not answer the huge question about how the initial self-replicating mechanism came to exist, and how the earth (the selection environment) came to have the properties it did. Thomas's test is interesting, and such things may turn out to falsify Dembsky's CSI approach, while showing how ID and RM+NS can work together to form interesting things. Now let's see a test where the fitness criteria themselves are randomly generated. mike1962
First, the word "algorithm" itself directly implies intelligence; look it up. Next, for DNA replication, mRNA, etc., the cell requires a nucleotide-building factory. DNA will not replicate and proteins will not form without one. Nucleotides are not just floating around in the air or water waiting to enter cells when they are required. As far as I know, nucleotides ONLY exist in organisms. Evolution may be smarter than Orgel, Dennett, and their army of followers, but that isn't really saying much. BTW, "blind algorithmic processes" is a contradiction in terms.

Caligula: Which, in turn, admits the obvious and well-known fallacy in Dembski's explanatory filter: the gradual, cumulative process of chance AND necessity (selection) needs to be added among valid natural explanations. (In the current version, "chance" and "necessity" are strictly separate, single-step processes.)

That is false. Wm. Dembski makes it clear in his writings that chance and necessity are NOT separate. Gravity (necessity) always acts on the roll of the dice (chance). It appears you are as ignorant of ID and the EF as you are about algorithms.

Joseph
It seems like an obvious point, doesn't it, Dave. If you engineer your trial-and-error approach properly, then it can get results that you are looking for. Why do so many people struggle to see the obvious teleology in such an approach?

jwrennie
GOOD LUCK!

tb
