Uncommon Descent Serving The Intelligent Design Community

The Elegance of Computational Brute Force, and its Limitations


Although for many years I was a classical concert pianist, I was raised by a wonderful father, who is the most brilliant scientist I have ever known, and he imparted to me a love of science.

My love of mathematics and science never left me, and my superb education in these disciplines has served me well, since I now earn my living as a software engineer in aerospace R&D.

The first experience I had with computational search algorithms involved AI games theory, which you can read about here.

Brute (but intelligently designed) computational force can do some interesting things (and even elegant things, as you can discover from my perfect-play endgame databases), but only in domains with restricted search horizons, and only if the search algorithms are intelligently designed with a goal in mind.
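The point about restricted search horizons can be made concrete with a toy solver. This is my own minimal illustration, not the author's endgame code: exhaustive memoized search perfectly solves 1-3-stone Nim precisely because its state space is tiny, whereas the same brute force stalls as soon as the number of states grows exponentially.

```python
# Hedged sketch: perfect play via exhaustive search, feasible only because
# the game has a restricted search horizon (heap + 1 states in total).
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(heap):
    # A position is a win for the player to move if some legal move
    # (take 1-3 stones) leaves the opponent in a losing position.
    return any(not wins(heap - take) for take in (1, 2, 3) if take <= heap)

# Perfect-play "database" for small heaps; the losing positions fall out:
print([h for h in range(1, 13) if not wins(h)])  # → [4, 8, 12]
```

The same exhaustive idea underlies real endgame databases; it works only while the state count stays enumerable.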

As a result of my interest in, experience with, and knowledge of computational search algorithms and combinatorial mathematics, it immediately became obvious to me that the Darwinian notion that a blind search -- with no goal, no design, and hopelessly inadequate probabilistic resources -- could reasonably or even rationally explain the origin of all of biology is transparently preposterous.

Design, from whatever source, is the only logical explanation, and the Darwinian hypothesis of random errors filtered by natural selection deserves its appropriate place at the apogee of the ash heap of junk-science history.

"Body Plan" has a specific definition, within which all vertebrates share just one. So yes, it is relevant:
"The term Cambrian Explosion describes the geologically sudden appearance of multi-cellular animals in the fossil record during the Cambrian period of geologic time. By the close of this event, as many as forty-one separate phyla first made their appearance on earth. Phyla constitute the highest biological categories or taxa in the animal kingdom, with each phylum exhibiting a unique architecture, blueprint or structural body plan. Familiar examples of basic animal body plans are cnidarians (corals and jellyfish), mollusks (squid and shellfish), arthropods (crustaceans, insects, and trilobites), echinoderms (sea stars and sea urchins), and the chordates, the phylum to which all vertebrates including humans belong." -- Stephen C. Meyer, P. A. Nelson, and Paul Chien, The Cambrian Explosion: Biology's Big Bang, 2001
Irrelevant distractor. The issue is to compose a viable body plan early in embryological or similar stages, FROM THE ZYGOTE OR COMPARABLE. And that is an algorithmic challenge, one where random changes are maximally likely to disrupt and kill. You have several dozen such plans to account for, at 10 - 100+ mn bits of info each. 4 - 5 orders of magnitude beyond the FSCI threshold where solar system or cosmos scale resources are exhausted as plausibly supporting success on blind chance plus mechanical necessity. kairosfocus
As a reminder, Cambrian body plans (phyla) do not necessarily reflect a layman's intuitive definition of body plan. Birds, fish, mammals, reptiles, amphibians - anything with a notochord (including all vertebrates and a few invertebrates) - share a single body plan, and thus belong to the phylum Chordata. That is, a layman might think there are many body plans instead of one. Then again spoon worms, goblet worms, and jaw worms - despite sharing a common name - each have a different body plan, and thus belong to the phyla Echiura, Entoprocta, and Gnathostomulida, respectively. That is, a layman might think there is one body plan instead of many. rhampton7
The body plan challenge kairosfocus
A video we would do well to listen to. kairosfocus
Dr Liddle: Let me start with a dictionary reference, AmHD, to underscore a point:
spon·ta·ne·ous adj. 2. Arising from a natural inclination or impulse and not from external incitement or constraint.
In short, natural, not artificial, i.e. coming from factors of blind mechanical necessity and chance under plausible initial circumstances. FSCI, as has repeatedly been noted, is informational, so it is highly contingent (as in the states of successive alphanumeric characters in the strings in this post); that means that while laws and forces of necessity may act, they cannot be decisive. Absent action by art, we are looking at chance driving the proposed rise to functionally specific, complex information, with necessity providing a backdrop. It is in that context that the issue of gaining such complex specific function by chance-driven trial and error searches of large config spaces arises. And, recall, until a vNSR [von Neumann self-replicator] arises, we are not there yet. There is simply no empirical evidence that points to that as a reasonable expectation -- whether all at once or in some proposed incremental manner -- on the gamut of the observed cosmos, and there is abundant reason to see that the sort of contingency that best explains FSCI is art.

Further to all this, if you would glance at the right-hand column of the already linked introductory page, you will see that there are onward more specifically detailed discussions of the mutually destructive metabolism-first and genes-first OOL scenarios, and of body plan macroevolutionary scenarios, with a critical evaluation of a cluster of some twenty central icons commonly used to teach the claimed macroevolutionary timeline. This of course deals with the roots, main trunk and main branches of the claimed macroevolutionary tree of life. A further module addresses the origin of man, i.e. information origin issues relevant to our particular branch. At no point do I find that, on the past several months of interaction, you can plausibly be ignorant of this. 
Let me simply snip the recent exchange between Shapiro and Orgel, as illustrating the key unanswered problems, noting that cases all the way down to the origin of our own language-using capacity are also noted on, and are tied to the same problem: that there is no empirically supported, credible scientific account of the origin of FSCI in these cases by blind chance + mechanical necessity in any reasonable initial circumstance. All seems to be instead driven by the sort of Lewontinian a priori that excludes any but these from the outset. That begs the question and rips the heart out of the inference to best explanation process. The best MATERIALISTIC explanation is not the same as the best explanation. Exchange: ____________ >> [[Shapiro:] RNA's building blocks, nucleotides contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . 
Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . >> >> [[Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help. >> _______________ I call that mutual ruin. 
The only empirically warranted explanation for OOL is design, which would at once explain the relevant features that scream out: engineering. And, once that is on the table, it is reasonable to see that there is no good reason to exclude design as the best explanation of the onward major body plans, etc. GEM of TKI kairosfocus
Read some of the literature in the link I gave you. The answer is in there. Elizabeth Liddle
What "incremental steps"? Do you realize how volatile nucleic acids are? Joseph
The counter-theory to ID is that self-replication with heritable differential reproduction can also bring about FCSI.
What self-replication? Even for RNA to replicate it takes TWO strands -- one for the template and one for the catalyst. But anyway, to date there isn't any evidence that blind, undirected chemical processes can produce CSI of any kind. Joseph
By "highly complex unicellular life forms are likely to assemble spontaneously out of non-replicating molecules in an organic soup", I mean in one step. If it happened at all, it must have happened in incremental steps, in which the ancestors of the LUCA included very much simpler self-replicating entities. Elizabeth Liddle
The problem seems to me that you think I am missing your point, and I think you are missing my point! Let me rephrase my question, to try to make it clearer what I am asking: You often claim that the probability of "spontaneous emergence of complex, functionally specific information" is very low - too low to have done so in the lifetime of our universe. What I am asking is: what do you mean by "spontaneous"? Because if you mean: those atoms and molecules just happened to assemble themselves, in their entirety, from a prebiotic soup, I would entirely agree. But that would be a straw man, because nobody is proposing that they did. Yes, of course, intentional intelligence frequently brings forth "FCSI" but that doesn't, itself mean that "intentional intelligence" is a necessary condition. Because A sometimes causes B doesn't allow us to infer that only A can cause B. The counter-theory to ID is that self-replication with heritable differential reproduction can also bring about FCSI. Clearly you disagree, but that is the point of disagreement, not whether highly complex unicellular life forms are likely to assemble spontaneously out of non-replicating molecules in an organic soup. Elizabeth Liddle
Dr Liddle: Sorry, but I do not buy your response as just above.

First, the matter is in fact straightforward: there is just one directly known, empirically routinely observed source of functionally specific, complex information, in the form especially of coded strings, algorithmic instructions, data structures, etc.: intelligence. In addition, it can be shown that as such a string grows, the complexity as measured by the number of configurational possibilities exponentiates. (For every additional bit, the space of possibilities doubles.) As a direct consequence, once we are looking at sharply constrained, independently describable clusters of strings E, we are looking at narrow and isolated defined zones of interest T in the wider space W. By the very fact of tight specificational constraint, as has been there to be read all along, the set of possibilities swamps the set of relevant, specific and functional configs. So, chance-based random walks feeding into trial and error/success algorithms within available atomic resources [solar system, cosmos we observe . . . ] are analytically maximally implausible as effective causes of the observed phenomena. For instance, without ever having met me or seen me post, you know that my posts in this thread are the product of an intelligence, not monkeys typing at random. That is all pretty plain and obvious in the present age where we can directly observe and cross-check.

Now, we are dealing with the origins science context, where we were not there to directly observe and where there are no generally accepted records written down for us. Ever since Lyell and Darwin et al, the answer to how to scientifically investigate such a deep past of origins has been to infer to best explanation, using the uniformity principle that like causes like, once we can observe similar phenomena and know the credible cause on empirical tests. All of this is easily accessible commonplace. 
Now, we happen to be dealing with known-to-be functionally specific, coded digital strings, e.g. in DNA and RNA; and this extends to proteins as produced using the above two. The mRNA is a numerical control tape for a nanofactory machine, the ribosome. All of this is commonplace.

It is similarly commonplace that for a set of n 4-state elements, the number of possibilities is strictly 4^n. So, for a 500-bit equivalent string, the Planck-time quantum-state resources of the solar system would be swamped. (I worked out the comparison of pulling a straw-sized single sample from a cubical hay bale, a light month across, where light notoriously travels at 186,000 miles per second.) A one-straw-size sample, overwhelmingly -- and for very obvious reasons -- will get a straw, even if a whole solar system is lurking in the bale. The sample will overwhelmingly tend to be typical of the bulk of the distribution, not atypical. All of this is commonplace and taught in basic statistics.

So, we have no good reason to imagine that brute force blind search methods will have any reasonable hope of being effective. By contrast, intelligence routinely produces such string-based FSCI. We are inductively entitled to see FSCI, especially digital coded strings, as a reliable sign of intelligent, choice-contingency action -- design -- as cause.

You may wish to claim that somehow some tiny, easy-to-find function will accumulate into self-replicating systems using von Neumann coded-instruction facilities. Such has to be empirically demonstrated, and it is not. In addition, the threshold of complexity for such a vNSR is very high indeed. Let's not forget, the empirical evidence is 100,000 - 1 mn bits of info. (And your long-promised simulation has yet to surface, in any way that answers to the empirical reality as constrained by the observed world of cell-based life.) 
All of this -- as has been pointed out over and over, month after month, again and again, ad nauseam, to the point where I am now losing patience in the teeth of evident passive aggressive tactics of selective hyperskepticism by pretence of "I do not understand . . . " a la Mark F -- strongly points to the most credible source of such entities: intelligence. Remember, the evidence points to 100,000 - 1 mn or more bits of DNA info or the equivalent as having to be explained. Just 500 - 1,000 bits is beyond the reasonable reach of chance plus necessity. But such a source at the origin of cell-based life does not fit well with the preferred evolutionary materialist just-so story of origins that nowadays is often promoted in the name of, and under false colours of, science.

Similarly, the same challenge occurs to account for the origin of novel body plans, but the leap in threshold size is plain: 10 - 100+ mn bits of additional information, dozens of times over, and in a fairly short window according to the usual timelines for the Cambrian life revolution. All of this has been pointed out and explained, and linked, and is otherwise accessible. Time and again, month after month.

So, sorry, at this stage I do not buy the "I don't understand" claim. What is increasingly evidently the case is, rather: I do not agree, but I have no grounds that can warrant that disagreement, apart from worldview-level preferences and biases. And that sort of inference, whether disguised under the ever so loaded assertion that "science must explain naturalistically on naturalistic causes" or otherwise, will not wash here. For that is patent question-begging and censorship on scientific inference, if science is at all to preserve its integrity as seeking the truth about our world on empirical evidence, not providing the best materialistic story about our world from hydrogen to humans. The latter is blatant ideology, not genuine science. 
Please, please, please, show me that this is not what is going on. On cogent, empirically based and reasonable grounds that warrant rejection of the above inference to best explanation for FSCI as has been -- yet again -- summarised. Good day GEM of TKI kairosfocus
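The 500-bit vs solar-system comparison running through the comment above reduces to a few lines of arithmetic. The sketch below is my own back-of-envelope version, using my own assumed round numbers (atom count, Planck time, elapsed time), not figures taken from the thread:

```python
# Rough, assumed inputs (orders of magnitude only):
ATOMS_IN_SOLAR_SYSTEM = 10**57   # assumed order of magnitude
PLANCK_TIMES_PER_SEC = 1.9e43    # ~1 / (5.39e-44 s)
SECONDS_ELAPSED = 10**17         # roughly the conventional cosmic timescale

# Generous upper bound on distinct states the whole system could sample:
max_states = ATOMS_IN_SOLAR_SYSTEM * PLANCK_TIMES_PER_SEC * SECONDS_ELAPSED

# Configuration space of a 500-bit string (= 4**250 for 250 4-state elements):
configs_500_bits = 2**500

print(f"states sampled at most: ~1e{len(str(int(max_states))) - 1}")
print(f"500-bit config space:   ~1e{len(str(configs_500_bits)) - 1}")
```

On these assumptions the configuration space exceeds the sampleable states by dozens of orders of magnitude, which is the quantitative content of the "one straw from a hay bale" picture.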
Oh, and kf, I never use "passive aggressive tactics". If I don't understand something, I ask. That's all. Elizabeth Liddle
kf, I've read that link several times, and my question remains. That's why I asked it! It seems crucial to me. Elizabeth Liddle
Dr Liddle, there is a link; kindly follow, respond substantially and avoid passive aggressive tactics. kairosfocus
"sounds paradoxical" Well, in fact it is nonsensical. It is implausible as per the definition of implausibility by Abel. Whatever is below a universal plausibility bound is not worth mentioning as a scientific hypothesis. Of course, this is not to say microevolution does not exist. A classic example of a spontaneous increase of information is as follows. You want to send a message and you make a mistake in it. Somebody else copying your message to pass it on (earlier it could have been a telegraphist at the post office) inadvertently corrects it. You get a small increase in the amount of information "for free". However, the message had already been there. And that's the whole point. You cannot get "for free", i.e. spontaneously, without intelligent agency, a long enough and meaningful message! Corrections - yes, but not the entire message. And this agrees perfectly well with common sense and our daily experience. But macroevolution just does not, and there is nothing one can do about it. It just does not work like that! Eugene S
....spontaneous emergence of complex, functionally specific information, especially digitally coded information
What does this part mean, kf? It seems to me that this is the point at issue. Elizabeth Liddle
Prezactly. kairosfocus
Dr REC: Do you realise that until you have a "gain of function" of 500 - 1,000 bits you are not talking about FSC -- as in "COMPLEX" -- I? (And mere copying does not count as a gain of FSCI; just as loading this post to your PC does not create fresh information. Similarly, printing 50,000 copies of a book does not create new information in the books.) In other words, what are you doing about the COMPLEXITY THRESHOLD -- tied directly to accessible atomic resources at solar system or observed cosmos levels -- other than apparently strawmannising its significance? Going further, and looking at the solar system threshold, to show that you have done due diligence, kindly explain the meaning of and address the issues/implications of the following log-reduced form of the CSI metric -- and in particular the constant value of 500, which is useful for bringing out the meaning of FSCI: Chi_500 = I*S - 500, bits beyond the solar system threshold (You may want to read here and here, in context, to see what is going on in the equation. If you fail or refuse to properly address this, it is a strong indicator that you are indulging in a strawman dismissal.) kairosfocus
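For concreteness, the quoted log-reduced metric Chi_500 = I*S - 500 can be evaluated numerically. This is a minimal sketch on one reading of the symbols -- I as the information measure in bits and S as a binary flag for independent functional specification -- and that reading is my interpretation, not a definition taken from the thread:

```python
import math

def chi_500(num_elements, states_per_element, specified):
    # I: information capacity of the string in bits
    I = num_elements * math.log2(states_per_element)
    # S: 1 if independently functionally specified, else 0 (assumed reading)
    S = 1 if specified else 0
    return I * S - 500  # bits beyond the solar-system threshold

# A 300-base DNA string (4 states per base = 600 bits), taken as specified:
print(chi_500(300, 4, True))   # → 100.0 (past the threshold)
# The same string with no independent specification:
print(chi_500(300, 4, False))  # → -500.0 (below the threshold)
```

On this reading, only strings that are both specified (S = 1) and longer than 500 bits of capacity score positive.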
F/N: Why 500 - 1,000 bits is a realistic (indeed, generous) upper limit to spontaneous emergence of complex, functionally specific information, especially digitally coded information. (Note the context of correcting yet another misleading anti-ID talking point now being circulated as if it were the indisputable "whole" truth in the usual fever swamps.) kairosfocus
GP: Always great to see you commenting here at UD! Of course the real issue is not increase, but increase that surpasses a critical functional threshold (especially where the relevant function is as you imply, irreducibly complex . . . typical of complex, functional systems, that is without loss of general force), in practice about 500 - 1,000 bits. What the folks on the other side need to be showing is how by incremental blind chance + mechanical necessity "easy back-slope of Mt Improbable" change, something like a Hello World program can become an operating system. This, predictably, they never do; and the GA's commonly presented as though they show this, are invariably cases of design. (The Schneider case came in for quite extensive deconstruction over the past several months here at UD.) GEM of TKI kairosfocus
DrREC: Functional information DOES increase in nature (de novo genes, novel activities). Sure. The problem is: how does it increase? Design certainly can increase functional information, for example by designing and implementing a de novo gene. Can the darwinian algorithm do the same? The simple answer is: no, if the new information is complex and not deconstructable into simple naturally selectable steps. Functional information increases in directed evolution (novel activities), due to mutation and recombination. Directed evolution, if I understand what you mean, is a form of design. Functional information increases in genetic algorithms. Genetic algorithms are obviously a form of design. The point is, in genetic algorithms, or in any form of directed evolution, the designer adds specific, purposeful and useful information to the scenario. ...is a distinction without a difference. Your opinion. I think you are simply wrong here. A genetic algorithm, where the best solution survives, reproduces, and goes onto the next round, or a directed evolution experiment, where sufficiently good bacteria or genes coding enzymes go on, or nature, where the fastest, smartest cheetah goes on to survive and reproduce are materially equivalent. The effect is the same. Completely wrong. The difference is a very big one. As I have tried to explain, in a genetic algorithm some function is measured, recognized and rewarded, or specific properties of the system are chosen to make some result much more likely. The same can be said for directed evolution. All these are forms of intelligent design. In all cases, a conscious purposeful agent sets the rules, because he wants to get some specific result, however defined. None of these is equivalent to natural selection. In natural selection there is no recognition of anything, no intervention of any conscious agent, and no purpose at all. 
Reproductive advantage is "selected" only because it is a reproductive advantage, and if we have a reproducing population, it's simply a corollary of the definition and of simple logic that a reproductor with a reproductive advantage reproduces better. Selection is indeed not the right word here. So, maybe the effect is the same (some reproductor expands), but the cause is completely different. I don't know why you seem to assume that, if an effect is the same, the cause must be the same. It is strange logic indeed. And I will not even try to comment on the even stranger concept of "materially equivalent". So now evolution is a blind search filtered by reproductive advantage. It always has been. That is your source of active information. I don't think it is active information at all. Active information is related to some useful knowledge of the search space. In the darwinian scenario, the search is blind, and all the search space is equivalent. The expansion of the best reproductor is only a logical corollary. It acts only by expanding the probabilistic resources. In a sense, the only active information present in such a scenario is that we have a reproducing population. That is active information indeed, because it's exactly the complex, already existing function of reproduction that logically implies the likely expansion of the best reproductor. So, the active information is in the reproducing beings. It is there, and darwinian theory can in no way explain how the information in reproducing beings was generated. That's why sometimes I state that natural selection could be more correctly called "natural self-selection". It is the reproductor who really selects itself, given a certain environment. But that kind of "active information" cannot anyway explain the emergence of new complex functional information (like a de novo functional protein domain). 
The fundamental limitation is that only one subset of functions is "naturally selectable": those that confer a reproductive advantage. Now, whatever Petrushka may go on saying, that subset is a very tiny subset of all possible functions. Therefore, natural selection is obviously much less powerful than intelligent selection. Intelligent selection can select any defined function, while natural selection can select only a reproductive advantage. It's as simple as that. Moreover, the subset of functions conferring a reproductive advantage is strongly limited by the existing complexity of the reproducer. Moreover, that subset is mostly made of complex functions, implemented by complex structures and a complex integration with what already exists. Therefore, most of the functions that can confer a reproductive advantage are informationally very complex, and they are in no way in the range of a blind search. gpuccio
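The distinction drawn above between intelligent selection and selection for bare reproductive advantage is easy to exhibit in code. The toy genetic algorithm below is my own Weasel-style illustration, not any specific program discussed in this thread; the point it makes concrete is that the fitness function is written by the programmer and encodes the target outright:

```python
import random

random.seed(1)
TARGET = "METHINKS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    # Designer-supplied: distance to a goal the programmer chose in advance.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Single random point mutation.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Random starting population, then select-the-best plus mutated copies.
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for gen in range(5000):
    pop = sorted(pop, key=fitness, reverse=True)[:10]       # keep the 10 best
    pop += [mutate(random.choice(pop)) for _ in range(40)]  # vary copies
    if fitness(pop[0]) == len(TARGET):
        break

print(pop[0])  # converges on TARGET, because the goal is built into fitness()
```

The algorithm "finds" the target quickly, but only because fitness() already knows it: remove that designer-written function and nothing in the loop prefers one string over another.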
a few notes as to computational brute force vs. protein folding:
In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, which was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated to take one entire year for it to analyze the mechanism by which JUST ONE "simple" protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape. "Blue Gene's final product, due in four or five years, will be able to "fold" a protein made of 300 amino acids, but that job will take an entire year of full-time computing." Paul Horn, senior vice president of IBM research, September 21, 2000 http://www.news.com/2100-1001-233954.html Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule: A Few Hundred Thousand Computers vs. A Single Protein Molecule - video http://www.metacafe.com/watch/4018233 As well, despite some very optimistic claims, it seems future 'quantum computers' will not fare much better in finding functional proteins in sequence space than even an idealized 'material' supercomputer of today can do: The Limits of Quantum Computers – March 2008 Excerpt: "Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today's computers only modestly. 
This realization may lead to a new fundamental physical principle" http://www.scientificamerican.com/article.cfm?id=the-limits-of-quantum-computers The Limits of Quantum Computers - Scott Aaronson - 2007 Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to "solve impossible problems in an instant" by trying exponentially many solutions in parallel. In this talk, I'll describe four results in quantum computing theory that directly challenge this view. . . . Second, I'll show that in the "black box" or "oracle" model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states" . . . http://www.springerlink.com/content/0662222330115207/ Here is Scott Aaronson's blog in which he refutes recent claims that P=NP (Of note: if P were found to equal NP, then a million dollar prize would be awarded to the mathematician who provided the proof that NP problems could be solved in polynomial time): Shtetl-Optimized Excerpt: Quantum computers are not known to be able to solve NP-complete problems in polynomial time. http://scottaaronson.com/blog/?p=456 Protein folding is found to be an 'intractable NP-complete problem' by several different methods. Thus protein folding will not be able to take advantage of any advances in speed that quantum computation may offer to any other problems of computation that may be solved in polynomial time: Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009 Excerpt: Protein Folding: Computational Complexity 4.1 NP-completeness: from 10^300 to 2 Amino Acid Types 4.2 NP-completeness: Protein Folding in Ad-Hoc Models 4.3 NP-completeness: Protein Folding in the HP-Model http://www.cs.brown.edu/~sorin/pdfs/pfoldingsurvey.pdf
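The compute figures quoted above imply some stark arithmetic. The short sketch below is my own, using only the numbers given in the notes (10^15 operations per second, one year per fold) plus the standard 20-amino-acid alphabet for a 300-residue chain:

```python
import math

OPS_PER_SEC = 1e15        # "one quadrillion computations per second"
SECONDS_PER_YEAR = 3.15e7  # approximate

# Operations Blue Gene would spend simulating ONE fold of ONE protein:
ops_per_fold = OPS_PER_SEC * SECONDS_PER_YEAR  # ~3e22

# Number of possible 300-residue sequences over 20 amino acids:
seq_space = 20**300

print(f"operations to simulate one fold: ~{ops_per_fold:.1e}")
print(f"300-residue sequence space:      ~1e{round(math.log10(seq_space))}")
```

Even granting the optimistic assumption that one fold costs "only" ~3e22 operations, the sequence space at ~1e390 dwarfs any conceivable number of such simulations.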
A few notes on the sheer absurdity of neo-Darwinists trying to use 'designed' computer algorithms to prove what neo-Darwinian evolution has been utterly incapable of proving in the lab (as if such a stretched post hoc proof needed any further rebuttal) :)
Darwin as the Pinball Wizard: Talking Probability with Robert Marks - podcast http://www.idthefuture.com/2010/03/darwin_as_the_pinball_wizard_t.html Here are a few quotes from Robert Marks from the preceding podcast, as well as a link to further quotes by Dr. Marks: * [Computer] programs to demonstrate Darwinian evolution are akin to a pinball machine. The steel ball bounces around differently every time but eventually falls down the little hole behind the flippers. * It's a lot easier to play pinball than it is to make a pinball machine. * Computer programs, including all of the models of Darwinian evolution of which I am aware, perform the way their programmers intended. Doing so requires that the programmer infuse information about the program's goal. You can't write a good program without [doing so]. Robert J. Marks II - Distinguished Professor of Electrical and Computer Engineering at Baylor University http://en.wikiquote.org/wiki/Robert_J._Marks_II Signature In The Cell - Review Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill -- the information input. There are no mysterious outputs. Software Engineer - quoted to Stephen Meyer http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0 Evolutionary Algorithms: Are We There Yet? - Ann Gauger Excerpt: In the recent past, several papers have been published that claim to demonstrate that biological evolution can readily produce new genetic information, using as their evidence the ability of various evolutionary algorithms to find a specific target. 
"This is a rather large claim... As perhaps should be no surprise, the authors found that ev uses sources of active information (meaning information added to the search to improve its chances of success compared to a blind search) to help it find its target. Indeed, the algorithm is predisposed toward success because information about the search is built into its very structure. These same authors have previously reported on the hidden sources of information that allowed another evolutionary algorithm, AVIDA [3-5], to find its target. Once again, active information introduced by the structure of the algorithm was what allowed it to be successful. These results confirm that there is no free lunch for evolutionary algorithms. Active information is needed to guide any search that does better than a random walk."
http://biologicinstitute.org/2010/12/17/evolutionary-algorithms-are-we-there-yet/

Evolutionary Computation: A Perpetual Motion Machine for Design Information? - Robert J. Marks II
Final Thoughts: "Search spaces require structuring for search algorithms to be viable. This includes evolutionary search for a targeted design goal. The added structure information needs to be implicitly infused into the search space and is used to guide the process to a desired result. The target can be specific, as is the case with a precisely identified phrase; or it can be general, such as meaningful phrases that will pass, say, a spelling and grammar check. In any case, there is yet no perpetual motion machine for the design of information arising from evolutionary computation."
http://www.idnet.com.au/files/pdf/Evolutionary%20Computer%20Simulations.pdf

further notes:
https://docs.google.com/document/pub?id=1h33EC4yg29Ve59XYJN_nJoipZLKIgupT6lBtsaVQsUs
And to add a crystal-clear perspective on the whole evolutionary-algorithm scam that atheistic neo-Darwinists continually try to sell to the unwary public, it is appropriate to reflect on Gödel's proof:
THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010 Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes. http://www.faqs.org/periodicals/201008/2080027241.html
Kutless - Amazed (Slideshow With Lyrics) http://www.youtube.com/watch?v=Gkl37oXuVEo bornagain77
The conversation has drifted a bit. Anyone want to defend the original post? Are evolution or genetic algorithms brute force? Is evolution a blind search? Is selection a source of active information? DrREC
I understand the scrutiny of calculations demonstrating what evolution can't accomplish. But in most of the pro-Darwinism comments I read, the burden is placed entirely on those who question the theory. That would be fine if the power of natural selection to create the information we observe in nature had ever been demonstrated. But it hasn't. Right now it's extrapolations and GAs. I read statements like this -
Natural selection can simultaneously monitor thousands of dimensions. In large populations it can sometimes "see" and fix a synonymous mutation. In short, natural selection is vastly more powerful, because it has no foresight. It is not searching for a target. Sounds paradoxical, but it's how things work.
- and it sounds as confident and familiar as someone describing how fresh concrete sets. Except that no one can apply it to the origin of any species, past or present. Yes, a gene can vary, get selected, and get fixed. But the relationship between that phenomenon and biological diversity is speculative. It seems to get repeated so often that people forget it's only a hypothesis, and one with plenty of evidence against it. ScottAndrews
"Further, even in those cases where there is arguably a gain of function, Behe shows that such 'gain' almost inevitably results from the breakage of an existing part or system"

No. He classifies those as modifications of function, where one part is swapped for another. His conclusion is that loss-of-function mutations are easier. I'd agree: breaking something is easier than building it, which is probably why, in the short timescales of our experiments and observation, loss-of-function results are what we see.

Nevertheless, the gains in functional information absolutely falsify the notion that fSCI (or your favorite acronym) cannot increase through natural processes. So we're left asking how much. I agree with your interpretation: Behe sees some threshold that evolution can't overcome. I think it is entirely arbitrary, and estimates of where that edge lies are likely to vary enormously depending on the assumptions used to calculate them.

If we take one of the de novo proteins of humans or yeast, where zero function and zero amino acids are produced prior to recombination and mutation, and a coded protein that improves fitness emerges, I think we'd come up with a pretty large number if we used some ID calculations. Nevertheless, this was inferred to occur through tractable (apparently natural) pathways. I say apparently natural to appease the theistic evolutionists who will shout that I can't prove the process was unguided. Of course I can't. DrREC
Further, even in those cases where there is arguably a gain of function, Behe shows that such “gain” almost inevitably results from the breakage of an existing part or system.
Behe's own paper shows three of 17 instances to be pure gains of function. "Almost inevitably" seems to mean somewhat less than 85 percent. Petrushka
The differences between directed and natural selection are not favorable to ID. Directed evolution typically selects along one or a few dimensions, favoring one or a few traits. Natural selection can simultaneously monitor thousands of dimensions. In large populations it can sometimes "see" and fix a synonymous mutation. In short, natural selection is vastly more powerful, because it has no foresight. It is not searching for a target. Sounds paradoxical, but it's how things work. Petrushka
even in those cases where a survival advantage has been conferred by a mutation, it is typically the result of breaking or blunting an existing functional element, not from creating some new informational element.
That might be a strong argument if it holds up, but it is unlikely to. In fact, research has already been done, and it does not confirm the conclusion. A central problem is found in the modifier "typically." If one case in a hundred is non-typical, it breaks Behe's argument.

Also, one needs to distinguish between lineages in bacteria -- in which odd and rarely needed alleles can be transferred horizontally and can be containerized in plasmids -- and lineages in vertebrates, in which the majority of phenotypic differences are found in regulatory networks, and where the breakage of protein-coding genes is rarely tolerated. Speaking of plasmids, some bacteria carry the entire set of genes required for building a flagellum in a non-functional plasmid. It seems odd. Petrushka
DrRec: "Even Behe lists multiple 'adaptive gain of functional coded elements' in his latest review*."

This is not the first time you have put forward this red herring, so I believe it is high time to weigh in and put a stop to your misreading of the situation. Behe has, for some time, and as further illustrated in the paper you cited, been looking for what he calls the "edge of evolution," or the boundary where traditional evolutionary mechanisms can actually do something. Behe goes through many examples of mutations and tries to categorize them to see what lesson can be learned. While there are a small handful of what could be viewed as "gain of function" mutations, the takeaway from Behe's careful review is most decidedly *not* that natural processes can readily come up with new informational structures.

Further, even in those cases where there is arguably a gain of function, Behe shows that such "gain" almost inevitably results from the breakage of an existing part or system. Indeed, a large part of the point of Behe's paper is to propose what he calls the "First Rule of Adaptive Evolution," namely that in particular circumstances a fitness advantage can sometimes be gained by breaking or blunting a functional coded element.

The strong takeaway from all this is that (i) naturalistic processes are terrible at producing information gain (indeed, we're still waiting for a decent example of information gain beyond the absolutely trivial), and (ii) even in those cases where a survival advantage has been conferred by a mutation, it is typically the result of breaking or blunting an existing functional element, not of creating some new informational element. Eric Anderson
Two points:

1) Functional information DOES increase in nature (de novo genes, novel activities). Even Behe lists multiple "adaptive gain of functional coded elements" in his latest review*. Functional information increases in directed evolution (novel activities), due to mutation and recombination. Functional information increases in genetic algorithms.

2) Saying "intelligent selection, where some function is actively measured or rewarded by the algorithm, and natural selection, where the only advantage is a 'natural' advantage ... the only function which has any effect in the algorithm is reproductive function" is a distinction without a difference. A genetic algorithm, where the best solution survives, reproduces, and goes on to the next round; a directed evolution experiment, where sufficiently good bacteria or genes coding enzymes go on; and nature, where the fastest, smartest cheetah goes on to survive and reproduce, are materially equivalent. The effect is the same.

So now evolution is a blind search filtered by reproductive advantage. That is your source of active information. Dembski and you seem comfortable calling the equivalent "active information" everywhere except in nature. Why is that? I still do not grasp the material difference that makes selection "active information" in Ev or Avida, but not in nature.

*http://www.lehigh.edu/bio/pdf/Behe/QRB_paper.pdf DrREC
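[For readers unfamiliar with the loop being argued over: the "best solution survives, reproduces, and goes on to the next round" scheme can be sketched in a few lines. This is a hypothetical toy, not Ev, Avida, or any program discussed in the thread; the function names (`evolve`), the "ones-max" fitness, and all parameter values are illustrative assumptions.]

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mut_rate=0.05):
    """Minimal genetic algorithm: rank the population by a programmer-supplied
    fitness function, keep the top half, and refill with mutated copies."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection step
        parents = pop[: pop_size // 2]        # best solutions survive...
        children = [[bit ^ (random.random() < mut_rate) for bit in p]
                    for p in parents]         # ...and reproduce, with mutation
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("ones-max"), a standard GA demo problem.
best = evolve(fitness=sum)
print(sum(best))  # typically at or near the 20-bit maximum
```

Note that everything contested in the thread is visible here: the `fitness` argument is supplied by the programmer, and the only "information" the loop itself adds is keep-what-scores-higher.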
DrREC: The only "active information" added by the natural principle of positive selection is that, in a reproducing population, information giving a reproductive advantage is likely to expand. The natural principle of negative selection, on the other hand, makes information which gives a reproductive disadvantage more likely to be lost. That's all.

You want to call that a fitness function, be my guest, but I have pointed many times to the fundamental difference between intelligent selection, where some function is actively measured or rewarded by the algorithm, and natural selection, where the only advantage is a "natural" advantage, descending from a very simple logical principle and not from any added information about a search space, and where therefore the only function which has any effect in the algorithm is reproductive function.

So, at best, a darwinian algorithm is a blind search for reproductive advantage. The only "active" information in that algorithm is a simple logical implication: what reproduces better usually expands; what reproduces worse is usually lost.

The simple point made by Gil here, and with which I absolutely agree, is that there is no way such an algorithm, based on blind search with only that addition of "active information" (if we want to call it that), and no more, can even begin to explain what we see in the biological world. In Gil's words: "Design, from whatever source, is the only logical explanation, and the Darwinian hypothesis of random errors filtered by natural selection deserves its appropriate place at the apogee of the ash heap of junk-science history." gpuccio
Dembski states from his abstract;
Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
Thus DrREC, do you concede the main point of Dembski's paper: that IF neo-Darwinism could generate functional information in life (no empirical demonstration of which has been forthcoming), that gain in functional information would have to be the result of the design built into nature? bornagain77
Interesting post. Two questions:

1) Are genetic algorithms brute-force algorithms? I thought they were quite different.

2) Is evolution a blind search? Dr. Dembski, for example, contrasts Darwinian processes and blind searches: "Searches that operate by Darwinian selection, for instance, often significantly outperform blind search." - Abstract of: Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information, William A. Dembski and Robert J. Marks II

I guess the question is whether the fitness function provided by the algorithm (called an active source of information) is analogous to the fitness function that emerges from natural selection. I can't think of a rationale for finding one an active source of information, and the other not. Can you? DrREC
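[The Dembski quote above ("Darwinian selection often significantly outperforms blind search") is easy to make concrete with a toy comparison. The sketch below is purely illustrative: the target phrase, alphabet, and try budget are all assumptions, and, as both sides of this thread would point out, both searches are written by a programmer who built the target into the fitness function; that is exactly the point under dispute.]

```python
import random

random.seed(0)  # fixed seed so the comparison is reproducible

TARGET = "METHINKS"                       # illustrative target phrase
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def matches(s):
    """Fitness: number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def blind_search(tries):
    """Blind search: independent random guesses, no memory of past fitness."""
    return max(matches("".join(random.choice(ALPHABET) for _ in TARGET))
               for _ in range(tries))

def selection_search(tries):
    """Cumulative selection: mutate one letter, keep the candidate if it
    scores at least as well; the fitness function guides every step."""
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(tries):
        i = random.randrange(len(TARGET))
        candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if matches(candidate) >= matches(current):
            current = candidate
    return matches(current)

b, s = blind_search(1000), selection_search(1000)
print(b, s)  # for the same budget, selection scores far higher than blind guessing
```

The gap between the two scores is what Dembski and Marks quantify as "active information": the difference the fitness feedback makes relative to blind sampling of the same space.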
