Uncommon Descent Serving The Intelligent Design Community

Jonathan Wells on Darwinism, Science, and Junk DNA

Jonathan Wells

On November 5, I posted a response to people who falsely claim that I set out to oppose Darwinism on orders from Reverend Sun Myung Moon. Since then, many comments have been posted—some of them critical of my book, The Myth of Junk DNA. Unfortunately, other commitments prevent me from responding to every detail (so many critics, so little time!). So I have selected some representative comments posted by two people using the pseudonyms “Gregory” and “paulmc.”

First, “Gregory” asked how many biologists I think are “Darwinists.” In my original post, I wrote:

By “Darwinism,” I mean the claim that all living things are descended from one or a few common ancestors, modified solely by unguided natural processes such as variation and selection. For the sake of brevity, I use the term here also to include Neo-Darwinism, which attributes new variations to genetic mutations.

By “Darwinists,” then, I mean people who subscribe to that view. Having worked in close proximity with biologists for over two decades, I can confidently say that most of them—at least in the U.S.—are Darwinists in this sense.

“Gregory” also wrote that “without ‘doing science,’ Jonathan Wells personally concluded ‘evident design’ in ‘the mountains of Mendocino county.’ Thus, the argument that ‘intelligent design is a purely scientific pursuit’ is obviously untrue.” I’m not sure what “Gregory” means here by a “purely scientific pursuit.” Intelligent design (ID) holds that we can infer from evidence in nature that some features of the world, including some features of living things, are better explained by an intelligent cause than by unguided natural processes such as mutation and selection. Unlike creationism, ID does not start with the Bible or religious doctrines.

So if “science” means making inferences from evidence in nature—as opposed to inventing naturalistic explanations for everything we see (as materialistic philosophy would have us do)—then ID is science.

Second, “paulmc” wrote that “there are a number of strong lines of evidence that suggest junk DNA comprises a majority of the human genome.” The lines of evidence cited by “paulmc” included (1) mutational (genetic) load, (2) lack of sequence conservation, and (3) a report that “putative junk” has been removed from mice “with no observable effects.” In addition, (4) “paulmc” wrote that “there is an active other side to the debate” about pervasive transcription. I’ll address these four points in order.

Before I start, however, I’d like to say that I’m not particularly interested in debates over what percentage of our genome is currently known to be functional. Whatever the current percentage might be, it is increasing every week as new discoveries are reported—and such discoveries will probably continue into the indefinite future. So people who claim that most of our DNA is junk, and that this is evidence for unguided evolution and evidence against ID, are making a “Darwin of the gaps” argument that faces the inevitable prospect of having to retreat in the face of new discoveries.

Now, to the points raised by “paulmc”:

(1) Mutational Load. In 1972, biologist Susumu Ohno (one of the first to use the term “junk DNA”) estimated that humans and mice have a 1 in 100,000 chance per generation of suffering a harmful mutation. Biologists had already discovered that only about 2% of our DNA codes for proteins; Ohno suggested that if the percentage were any higher we would accumulate an “unbearably heavy genetic load” from harmful mutations in our protein-coding DNA. His reasoning provided a theoretical justification for the claim that the vast majority of our genome is functionless junk—what Ohno called “the remains of nature’s experiments which failed”—and that this junk bears most of our mutational load.
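To see how the load reasoning scales, here is a minimal back-of-the-envelope sketch; the per-base mutation rate, genome size, and deleterious fraction below are illustrative assumptions, not Ohno's own figures:

```python
# Illustrative figures only; not Ohno's 1972 numbers.
MUTATION_RATE_PER_BASE = 1e-8   # assumed new mutations per base per generation
GENOME_SIZE = 3e9               # approximate haploid human genome size in bases
DELETERIOUS_FRACTION = 0.1      # assumed share of mutations in functional DNA that are harmful

for functional_fraction in (0.02, 0.10, 0.50, 1.00):
    new_mutations = GENOME_SIZE * functional_fraction * MUTATION_RATE_PER_BASE
    deleterious = new_mutations * DELETERIOUS_FRACTION
    print(f"functional fraction {functional_fraction:4.0%}: "
          f"{new_mutations:5.1f} new mutations in functional DNA, "
          f"~{deleterious:4.1f} deleterious per offspring")
```

Under these assumed numbers the expected deleterious input per offspring grows in direct proportion to the functional fraction, which is the intuition behind the load argument; the dispute below is over whether the assumed rates and effects are accurate.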

According to “paulmc”, this is the first of “a number of strong lines of evidence that suggest junk DNA comprises a majority of the human genome.” But Ohno’s claim was a theoretical one, based on various assumptions about how often spontaneous mutations occur and how they affect the genome.

As of last year, however, the accurate determination of mutation rates was still controversial. According to a 2010 paper:

The rate of spontaneous mutation in natural populations is a fundamental parameter for many evolutionary phenomena. Because the rate of mutation is generally low, most of what is currently known about mutation has been obtained through indirect, complex and imprecise methodological approaches.

Furthermore, genomes are more complex and integrated than Ohno realized, so the effects of mutations are not as straightforward as he thought. As another 2010 paper put it,

Recent studies in D. melanogaster have revealed unexpectedly complex genetic architectures of many quantitative traits, with large numbers of pleiotropic genes and alleles with sex-, environment- and genetic background-specific effects.

In other words, the first line of evidence cited by “paulmc” is not evidence at all, but a 40-year-old theoretical prediction based on questionable assumptions. The proper way to reason scientifically is not “Ohno predicted theoretically that the vast majority of our DNA is junk, therefore it is,” but “If much of our non-protein-coding DNA turns out to be functional, then Ohno’s theoretical prediction was wrong.”

(2) Sequence Conservation. According to evolutionary theory, if two lineages diverge from a common ancestor that possesses regions of non-protein-coding DNA, and those regions are non-functional, then they will accumulate random mutations that are not weeded out by natural selection. Many generations later, the corresponding non-protein coding regions in the two descendant lineages will be very different. On the other hand, if the original non-protein-coding DNA was functional, then natural selection will tend to weed out mutations affecting that function. Evolution of the functional regions will be “constrained,” and many generations later the sequences in the two descendant lineages will still be similar, or “conserved.”
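The logic of this argument can be illustrated with a toy simulation; the sequence length, per-site mutation probability, and the crude "reject all changes at functional sites" stand-in for purifying selection are all illustrative assumptions:

```python
import random

BASES = "ACGT"

def evolve(seq, generations, mu, constrained_sites=frozenset()):
    """Accumulate point mutations; constrained sites reject changes
    (a crude stand-in for purifying selection)."""
    seq = list(seq)
    for _ in range(generations):
        for i in range(len(seq)):
            if i in constrained_sites:
                continue  # selection "weeds out" changes at functional sites
            if random.random() < mu:
                seq[i] = random.choice(BASES.replace(seq[i], ""))
    return "".join(seq)

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

random.seed(1)
ancestor = "".join(random.choice(BASES) for _ in range(200))
functional = frozenset(range(0, 100))   # pretend the first half is functional

lineage1 = evolve(ancestor, 5000, 1e-4, functional)
lineage2 = evolve(ancestor, 5000, 1e-4, functional)

print("identity, constrained half:  ",
      identity(lineage1[:100], lineage2[:100]))   # stays at 1.0 (changes rejected)
print("identity, unconstrained half:",
      identity(lineage1[100:], lineage2[100:]))   # drops well below 1.0
```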

As “paulmc” pointed out, however, many regions of non-protein-coding DNA appear to “evolve without evidence of this constraint;” their sequences are not conserved. According to “paulmc,” this “implies that changes to these sequences do not affect fitness… we expect that for them to be functional they need some degree of evolutionary constraint,” and the absence of such constraint points to their “being putatively junk.”

Not so. Although sequence conservation in divergent organisms suggests function, the absence of sequence conservation does not indicate lack of function. Indeed, according to modern Darwinian theory, species diverge because of mutational changes in their functional DNA. Obviously, if such DNA were constrained, then evolution could not occur.

In 2006 and 2007, two teams of scientists found that certain non-protein-coding regions that are highly conserved in vertebrates (suggesting function) are dramatically unconserved between humans and chimps (suggesting… rapid evolution!). More specifically, one of the teams showed that one unconserved region contains an RNA-coding segment involved in human brain development.

Furthermore, the analysis by “paulmc” assumes that the only thing that matters in nonprotein-coding DNA is its nucleotide sequence. This assumption is unwarranted. As I pointed out in Chapter Seven of my book, non-protein-coding DNA can function in ways that are largely independent of its precise nucleotide sequence. So absence of sequence conservation does not constitute evidence against functionality.

(3) Mice without “junk” DNA. In 2004, Edward Rubin] and a team of scientists at Lawrence Berkeley Laboratory in California reported that they had engineered mice missing over a million base pairs of non-protein-coding (“junk”) DNA—about 1% of the mouse genome—and that they could “see no effect in them.”

But molecular biologist Barbara Knowles (who reported the same month that other regions of non-protein-coding mouse DNA were functional) cautioned that the Lawrence Berkeley study didn’t prove that non-protein-coding DNA has no function. “Those mice were alive, that’s what we know about them,” she said. “We don’t know if they have abnormalities that we don’t test for.” And University of California biomolecular engineer David Haussler said that the deleted non-protein-coding DNA could have effects that the study missed. “Survival in the laboratory for a generation or two is not the same as successful competition in the wild for millions of years,” he argued.

In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks. By the end of the study, a substantially higher proportion of the DNA-deficient mice had died from heart disease. Clearly, removing so-called “junk” DNA can have effects that appear only later or under other circumstances.

(4) Pervasive transcription. After 2000, the results of genome-sequencing projects suggested that much of the mammalian genome—including much of the 98% that does not code for proteins—is transcribed into RNA. Scientists working on one project reported in 2007 that preliminary data provided “convincing evidence that the genome is pervasively transcribed, such that the majority of its bases can be found in primary transcripts, including non-protein-coding transcripts.”

Since an organism struggling to survive would presumably not waste its resources producing large amounts of useless RNA, this widespread transcription suggested to many biologists that much non-protein-coding DNA is probably functional. In 2010, four University of Toronto  researchers published an article concluding that “the genome is not as pervasively transcribed as previously reported.” Yet the Toronto researchers had biased their sample by eliminating repetitive sequences with a software program called RepeatMasker, the official description of which states: “On average, almost 50% of a human genomic DNA sequence currently will be masked by the program.” In the fraction that remained, the Toronto researchers based their results “primarily on analysis of PolyA+ enriched RNA”—sequences that have a long tail containing many adenines. Yet molecular biologists had already reported in 2005 that RNA transcripts lacking the long tail are twice as abundant in humans as PolyA+ transcripts.

In other words, the Toronto researchers not only excluded half of the human genome with RepeatMasker, but they also ignored two thirds of the RNA in the remaining half. It is no wonder that they found fewer transcripts than had been found by the hundreds of other scientists studying the human genome. The Toronto group’s results were disputed in 2010 by an international team of eleven scientists, and the group’s flawed methodology was sharply criticized in 2011 by another international team of seventeen scientists.
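Taking the figures quoted above at face value (roughly half of the genome masked by RepeatMasker, and PolyA+ transcripts making up about a third of transcripts if PolyA- transcripts are twice as abundant), a rough sketch of the arithmetic suggests how small a slice of the transcriptional landscape remained in view:

```python
# Figures taken from the passage above, treated as assumptions; this is a
# rough illustration, not a re-analysis of the Toronto study.
unmasked_fraction = 0.5   # RepeatMasker leaves ~50% of the genome
polya_share = 1 / 3       # PolyA+ transcripts ~1/3 if PolyA- are twice as abundant

surveyed = unmasked_fraction * polya_share
print(f"Fraction of the transcriptional landscape effectively surveyed: ~{surveyed:.0%}")
# -> roughly 17%, which is why fewer transcripts would be expected to turn up
```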

So “paulmc” was technically but trivially correct in writing that there are two sides to the debate over pervasive transcription. There are also at least two sides to the larger debate over the functionality of non-protein-coding DNA. But I leave it to open-minded readers of The Myth of Junk DNA to decide whether “paulmc” was correct in claiming that “the science at the moment really does fall on one side of this: large amounts of putative junk exist in the human genome.”

Oh, one last thing: “paulmc” referred to an online review  of my book by University of Toronto professor Larry Moran—a review that “paulmc” called both extensive and thorough. Well, saturation bombing is extensive and thorough, too. Although “paulmc” admitted to not having read more than the Preface to The Myth of Junk DNA, I have read Mr. Moran’s review, which is so driven by confused thinking and malicious misrepresentations of my work—not to mention personal insults—that addressing it would be like trying to reason with a lynch mob.


Comments
UB, Many Thanks for this clarification. I agree. Yet again I found proof for myself that there is no such thing as pure science per se. It always requires some sort of philosophical wrapper. What I don't like about materialistic world views is their unwillingness to recognise this. IMHO, theistic thought is much more transparent (and intellectually honest).Eugene S
November 15, 2011 at 07:22 AM PDT
Hello Eugene,
Yes, I was wondering why life was doing it in the first place. But as far as I understand Eigenstate, “it” meaning nature presents an even broader context, which I cannot dispute. However, what I feel must be disputed is the point of view which in principle does not distinguish life from inanimate matter.
As I said, I am happy to retract. My real issue with Eigenstate is the anthropocentric delusion that matter "computes" itself, and the bastardization of information by suggesting that everything "contains it". These are very alluring and pervasive visions, ones which lead to great utility, but they are false and those who hold them should have the discipline to not conflate them with reality.
Upright BiPed
November 15, 2011 at 06:47 AM PDT
Eugene_S: "The concept of unknowable that believers have been familiar with since day 1, is now recognised by science, which is nice." ==== Sad thing is when making those claims of 'unkowable' or 'unsolvable', at least in their own minds it allows them an excuse to not deal with the question and move onto terms like 'directed' or 'guided' for which they do not have a right to and from the beginning they use to poke fun at. The problem for them, tho they'll never admit this, is that the data points towards direction, purpose and intent. The need then is to hijack the words/terms as their own and actually lie about the what they think those processes do when it comes to unobserved MACRO. When pinned to the carpet on their continued Faith-Based statement making, they proceed to religiously dogmatically defend their new found position for which they insist they always believed in anyway. *sigh*Eocene
November 15, 2011 at 04:45 AM PDT
3.1.1.2.9 Eocene, Either way, the question askers of both ideologies are still left in the dark with no educational thirst-satisfying answer to be obtained. It is already a huge achievement to get materialist scientists to realise that reality is bigger than our understanding of it can ever get. The concept of unknowable that believers have been familiar with since day 1, is now recognised by science, which is nice.
Eugene S
November 15, 2011 at 04:35 AM PDT
UprightBiped, Maybe it is off topic but here it goes. Yes, I was wondering why life was doing it in the first place. But as far as I understand Eigenstate, "it" meaning nature presents an even broader context, which I cannot dispute. However, what I feel must be disputed is the point of view which in principle does not distinguish life from inanimate matter. I believe that life is also an intractable problem. It must be so: it is an extra "layer" of something science cannot define over physics and chemistry. Anyway, I am happy with Eigenstate's purely materialistic response because I think that ultimately there is nothing that can force us to believe in God apart from our self. If there can be any act on our part in which we can fully exercise our free will, what is it if not the decision to believe? So the only problem I have with ID at the moment is at the heart of this. Can there be in principle anything that pushes us via scientific means (observe-measure-predict) towards religious belief? My concern is therefore more religious/philosophical rather than scientific. I have no scruples as a scientist as regards ID: in the end of the day, why can't there be a means to infer design?Eugene S
November 15, 2011 at 04:21 AM PDT
eigenstate: "I can understand the confusion, but rereading that, the only “it” I can identify there is nature-as-quantum-computer. Why it computes as it computes I understand to be an intractable question." *** Eugene_S: "“Why, is an intractable question.” "Excellent. Thanks. I can even see grounds for doubting if there is any semantic cargo in this question. This is as much as materialistic thought can get. Fair enough." ==== It does seem to be on equal footing with Shapiro and Yockey who when pressed on the OOL question, simply reply with "It's unkowable" or "It's Unsolvable". It's sort of like asking a member of Christendom to explain the "Trinity". The answer usually is - "It's a Mystery". Either way, the question askers of both ideologies are still left in the dark with no educational thirst satifying answer to be obtained. *smile*Eocene
November 15, 2011 at 04:04 AM PDT
3.1.1.2.5 "Why, is an intractable question." Excellent. Thanks. I can even see grounds for doubting if there is any semantic cargo in this question. This is as much as materialistic thought can get. Fair enough. Anyhow, we need an oracle to learn why. Who might that oracle be? I think it is the One who designed this Big Quantum Computer of nature.Eugene S
November 15, 2011 at 03:45 AM PDT
@Upright Biped, Here is what Eugene S quoted from ME in his reply to me, in which he asked "why is it doing it?":
“It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment.”
(my emphasis) The "it" is nature, and it is computing itself, and this is the resource for testing biological fitness -- physics. We were talking about evolutionary algorithms, which are indeed informed by the dynamics of biology, but my point was about the infrastructure problem, the shortage of computing power to ENABLE quality fitness testing. That's a physics problem. We cannot marshal anything like the computing power nature brings to bear on reality, the Big Quantum Computer that is constantly resolving that reality. I can understand the confusion, but rereading that, the only "it" I can identify there is nature-as-quantum-computer. Why it computes as it computes I understand to be an intractable question.
eigenstate
November 14, 2011 at 09:44 PM PDT
eigenstate, You may think that Eugene was wondering why inanimate matter was just "doing it" ... er, computing itself. But I don't. And given that your conversation was based on algorithms modeled on biological evolution, it's a fair bet to make. If I am wrong, I am happy to retract.
Upright BiPed
November 14, 2011 at 09:27 PM PDT
eigenstate, Is English not your native language?
it’s just something that stick out conspicuously I’m just no interested in checkers In any case, your claim to thoroughly familiarity but if you are thinking that’s your backgrounding, the search computation for a checkers game If you want to tell me your calculation as to the probability figures for the resources required to make a cell run But I won’t hold your breath. You’ve got the fulfillment of Lewis now driving your math, and forget that, the spirit of God. So it hardly seems fruitful for either of us, in light of that.
Lewis has nothing to do with "driving my math."
We had a guy participate who was affiliated with a program named “Chinook”
You might be interested to know that I met Jonathan Schaeffer, the primary author of Chinook, at the first computer olympiad in London. His program placed first and mine placed second. We had an ongoing friendship for many years, and I attended an AI conference at his university in Alberta. As it turned out, I recomputed his eight-piece endgame database and found errors, which he corrected based on my results. I then went on to compute perfect-play databases which have not been duplicated. You can read all about this at my website. You are right about the game of Go, which involves a more difficult factorial problem, which is not amenable to a traditional tree-search/evaluation-function solution. What this elucidates is the incredibly sophisticated pattern-recognition capabilities of the human mind, which further undermines Darwinian fantasies about the creative powers of random errors filtered by natural selection.GilDodgen
November 14, 2011 at 09:00 PM PDT
@Upright Biped, I think you've gotten confused on what the "it" is there, and what the "doing it" is. Eugene S was replying to my comments about nature -- physics -- resolving everything in realtime, constantly, over and over. This is the massive computing power that powers realtime fitness testing, but the "it" had nothing to do with life. The it was "nature", and "doing it" was "computing everything in real time". This makes Eugene's question intractable, the answer unknowable. It's questioning the outermost context in our universe. There is no enclosing context we are aware of or can speak about to provide any answer, or the basis for knowledge. Sorry that got you confused towards a different question (why "life is 'doing it'"). That is a tractable question, in principle, but it wasn't the question in view when you responded. Why does nature as quantum computer compute as it does? That's what was in view, and what I responded to.eigenstate
November 14, 2011 at 08:22 PM PDT
Why is it doing it?
Don’t know, don’t see a way I or we can know. I’m not even convinced “Why?” is a coherent question or carries any semantic cargo in this context.
Why does hot air rise? Why does an apple fall? Why do ripples on the surface change the reflection on the rocks below? All of these were legitimate questions, each ending in legitimate knowledge. If asking why life is "doing it" needs an extra dash of semantic cargo to be coherent, then I say give it all it needs. But I think this particular objection only arises when probing questions are asked of someone who doesn't want to, er, ask them with you. So you get the observed pat on the head, along with the ridiculous notion that asking 'why' needs some semantic cargo in order to rescue itself from being incoherent. (btw, the context was in the question, and you properly understood it - as evidenced by the first sentence in your response)Upright BiPed
November 14, 2011 at 06:07 PM PDT
@Eugene S
Why is it doing it? Don't know, don't see a way I or we can know. I'm not even convinced "Why?" is a coherent question or carries any semantic cargo in this context.
eigenstate
November 14, 2011 at 04:59 PM PDT
Eric Anderson, Sorry I didn't respond in the other thread; there had been no reply when last I looked. I simply think you are being premature, and perhaps swept away by Wells's and your own wishful thinking. It is not denied that function is being found for areas previously thought 'junk'. But it is nowhere near enough to light the bonfire under your evolutionist-roasting program, even allowing for extrapolation. I am prepared to be wrong (contrary to the tiresome, and hopelessly wide-of-the-mark refrain that 'evolutionists' have some kind of philosophical attachment to junk). But that awaits a fivefold increase in the current functional percentage just to pass the 50% mark.
Junk DNA is, in essence, just another example of the invalid “bad design” line of arguments. The reason I say their feet need to be held to the fire is because they have clearly gone on record saying that a large amount of non-functioning DNA is evidence for evolution and against design.
There remains a substantial amount of apparently non-functioning DNA - I'm not a huge fan of the "what-the-designer-would-have-done" argument, but those authors who point it out have not been given any reason to retract. We know of about 1-2% more functional DNA than we had a few years ago. That simply reduces the junk pile and increases the functional pile, trivially. We remain at over 90% apparently non-functional. No-one is going to don sackcloth and ashes to appease you, and certainly not with those figures. But the argument is more about whether the figure is likely to change substantially. For that to happen, we don't need cheeseparing, we need a whole different type of functional DNA.
Now that the evidence is starting to lean the other direction I am not at all interested in letting them off the hook easily with any backpedaling (although, astoundingly, some of the folks I cited haven’t even started backpedaling, as they seem to be oblivious about recent research).
It is not astounding at all. They have absolutely no reason to start backpedalling. Different issue, but equivalent reasoning: if we found that 85% of the population was gay, when we had thought it was 90%, would that mean anyone proposing substantial levels of heterosexuality needed a good roasting?
The bottom line on so-called “junk DNA” is this. 1. There is very little logical reason to assume that there is a lot of junk DNA.
No-one assumed there was junk. Indeed it, like 'selfish DNA' (which contributes a large fraction of junk), was subject to resistance until more evidence began to be gathered. Ohno: “It seems as though ‘junk DNA’ has become a legitimate jargon in a glossary of molecular biology. Considering the violent reactions this phrase provoked when it was first proposed in 1972, the aura of legitimacy it now enjoys is amusing, indeed.” Ohno suggested that there were too many base pairs to have them all functional, given defensible mutation rate and population size assumptions. He also felt that the gene-duplication model of novel function would create more duds (pseudogenes) than successes. These arguments date from the days before genome sequencing. Now we have genomes, we can characterise the various classes, and it's not looking good, contrary to the Creationist/ID spin.
2. In contrast, there are very good reasons to think that the great majority, though perhaps not all, of junk DNA in fact has function. I will give just three off the top of my head. A. First of all, the non-functioning DNA argument (just like vestigal organs) has an abysmal track record, so we should be very cautious about jumping on that bandwagon. Introns are now known to be critical to alternative splicing.
The main issue is with the length of intronic sequence. All you need to alternatively splice is an exon boundary - a splice site. There is no obvious reason why exon boundaries as represented by introns need to be 10 times longer than the exons they bound. As with transposons, the suspicion is 'selfish DNA' accumulating in the gaps.
Pseudogenes help regulate DNA.
Some pseudogenes!
LINEs and SINEs have known functions, as do certain transposons.
Some transposons (which class includes LINE and SINE) are in coding/regulatory positions. This cannot explain why there are several million of these in other positions. These are decaying at a rate consistent with lack of function. A particular arbitrary sequence, migrated to an exon, can donate up to 6 different peptide sequences (3 frame shifts, sense and antisense). Given the lack of conservation (because they are junk!) the number of possible random sequences goes up still further. But it is a stretch to suggest that >2 million transposons (whose transpositional ability is frequently broken) are there in order to donate such adventitious mutational sequence. We go to great "hi-fidelity" lengths to avoid point mutation, and then cast random peptide sequences about like so much confetti? Nah.
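As an aside, the "up to 6 different peptide sequences" point above is easy to illustrate; the snippet below is a minimal six-frame translation sketch (the short sequence is arbitrary, and the availability of Biopython is an assumption for the example):

```python
# Illustrative six-frame translation of an arbitrary stretch of DNA.
from Bio.Seq import Seq

dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")

for strand_name, strand in (("sense", dna), ("antisense", dna.reverse_complement())):
    for frame in range(3):
        sub = strand[frame:]
        sub = sub[: len(sub) - len(sub) % 3]   # trim to whole codons
        print(f"{strand_name} frame {frame}: {sub.translate()}")
# Six distinct peptide readings from one arbitrary sequence.
```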
C. From a basic engineering standpoint the idea of large amounts of useless DNA is problematic.
DNA is not engineering! No engineering project had to worry about DNA. Only organisms do. And if it proves to be no particular disadvantage, there is no pressure favouring its loss. Everyone imagines it to be costly; no-one demonstrates that (to a well-fed multicellular eukaryote) it actually is.
I don’t know whether we can put a precise number on it, but I would venture that more than 90% of DNA will eventually be shown to have function.
That is very unlikely - due to intergenic transposons and virus fragments alone (50%). If a 'functional' part of a computer program were also repeated (but not executed) in 50% of that same program, we would not be correct in our belief that the discovered function gave us a clue as to the unexplained repeats. There is an explanation for these repeats, but function is not it.Chas D
November 14, 2011 at 02:27 PM PDT
No, I don't think I would quite agree with that. Programming implies far more genetic determinism than there is evidence for. This seems to touch on that old and poor metaphor that the genome is a 'blueprint' for the organism. Even the hardline genetic determinists of old, e.g. 1980s Richard Dawkins and sociobiologists of his ilk, never took genetic determinism that far. There are certainly many phenotypes that have unexplained or only partially explained genetic components. Many are quantitative traits - the result of multiple genes interacting. While some phenotypes will undoubtedly have links to putative junk, others will likely resolve as the results of interactions between current genes and the environment. I expect they will be derived with less determinism than what, at least for me, the word 'programming' implies. Also, even if you are correct to use the word programming, I will still come back to the same point I made earlier: most of the genome does not have the degree of sequence conservation required for the specified biological functions of the type I think you are referring to. If those sequences freely accumulate change, how is it that they can perform a function that is as specified as a conserved gene's function? Also, under a programming analogy such lack of specificity makes no sense, as even small changes to the program will have an effect on how it runs. Even the lines of a computer program that aren't run by the computer need specificity to be useful (e.g. comments).paulmc
November 14, 2011 at 11:52 AM PDT
paulmc: Interesting comments, and hopefully Jonathan will have a chance to respond. Question: Do you agree that there are thousands (probably millions, but I'll stick with thousands for now) of known biological functions, the programming for which has not yet been discovered? Where does that programming reside?Eric Anderson
November 14, 2011 at 08:32 AM PDT
Dear Professor Wells, You once wrote an essay about how Darwin was wrong about anything. I've tried to find it in the archives but I can't. Can you send me a link? Thanks. Noamnoam_ghish
November 14, 2011 at 06:42 AM PDT
Eigenstate, "It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment." But why is it doing it? Just because? All this chirality, fractality, feasibility, probability stuff is just words. Why is it doing it?Eugene S
November 14, 2011 at 04:45 AM PDT
Paul, The only way to make your case is by removing the alleged junk and having the organism develop and live without any issues. Absent that, all you have is arm/hand waving.
Joseph
November 14, 2011 at 04:03 AM PDT
eigenstate, You want math? Well let's see your position's math pertaining to the FEASIBILITY that blind, undirected processes can produce a living organism from non-living matter. Your position is regularly challenged and it never responds. Why is that?Joseph
November 14, 2011 at 03:59 AM PDT
Firstly, Jonathan, thanks for responding. A few asides to begin: You seem perhaps a little derisive that my UD handle doesn't contain my full name. So, for whatever it's worth, my name is Paul McBride. I am a PhD candidate in New Zealand, studying molecular ecology. I have research interests that intersect with the junk DNA debate, and the broader selectionist--neutralist debate in molecular evolution. I'm intrigued by the interactions between ecology and the patterns and rates of sequence evolution (predominantly in vertebrates, as far as my own research goes). I use the term 'putative junk' because it reflects the potential for the functional components that may be scattered in low densities through the genome, while primarily reflecting the balance of molecular and population-genetic evidence of the last 40 years that large sections are likely functionless or close to it.
I’m not particularly interested in debates over what percentage of our genome is currently known to be functional.
You have previously stated that most of the genome is functional (in Myth: "the idea that most of our DNA is junk became the dominant view among biologists. That view has turned out to be spectacularly wrong"), and in doing so have made strong - and in my view, inaccurate - claims about the presence or otherwise of junk. So, you do seem broadly interested in proportions of genomic junk; I take it you only mean you don't care about exact percentages. Fine, but am I wrong to infer you have made the argument for >50% functional DNA?
Whatever the current percentage might be, it is increasing every week as new discoveries are reported—and such discoveries will probably continue into the indefinite future.
This is weak for the reason I have given a dozen times already: it is decidedly qualitative. The real question must be 'increasing by how much every week?' I gave the analogy in the previous thread that the discovery of function is rather like the setting of world records in the 100m sprint: just because occasionally the world record is broken, we wouldn't conclude that eventually the record is approaching zero - rather it is decreasing to a limit. This can only change if/when new functional classes of genomic elements are found. Leaving the world of analogy, discoveries of micro RNAs, which are regularly brought up in discussions of newly discovered genomic function, are a case in point of incremental increases. Imagine you find a new one every week and never slow down. In 20,000 years you'll have explained 1% of the human genome, still leaving 90% unexplained. Yet people still hold miRNA discoveries up as evidence that any talk of junk is premature! Such claims rely on the discussion remaining qualitative, rather than addressing the question of "how much?" Sure, new functions are being discovered in the human genome. Sure, the genome is complex, with many difficult-to-predict interactions. Nothing about either of these observations should lead us to conclude that there is little genomic junk. When we consider the other lines of evidence here (e.g. mutational load) in the context of a) the mutational origins of these sequences and b) the population-genetic fate of these mutations, the balance of evidence squarely falls on one side of this debate. To caricature my position as being "Darwin of the gaps" demonstrates a grave misunderstanding of my reasoning by ignoring the positive evidence for the position of genomic junk. Incidentally, it is also inaccurate given the decidedly non-selectionist origins of the arguments. Onto mutational load (e.g. Ohno's work):
The proper way to reason scientifically is not “Ohno predicted theoretically that the vast majority of our DNA is junk, therefore it is,” but “If much of our non-protein-coding DNA turns out to be functional, then Ohno’s theoretical prediction was wrong.”
You've failed to address the argument I have made. Yes, there is current debate about the precise mutation rate - but despite your quote from Kondrashov & Kondrashov, there are emerging direct estimates of the human germline mutation rate. Let's also note that predictions of the mutation rate come from several, independent lines of inquiry and converge on similar numbers. A more precise human mutation rate estimate will emerge over the next couple of years. This will help to determine an upper limit for the number of nucleotides that can be under purifying selection. Critically, though, not knowing the exact number has little bearing on the mutational load argument as a case for the existence of junk - it only bears on the question of how much. Indeed, to dismiss the mutational load argument as being purely theoretical would suggest there is nothing to support it. This is far from true. Time dependence in rates of molecular evolution supports the idea of a deleterious mutational input that is removed by selection over successive generations. The time dependence is this: between recently diverged species, the spectrum of differences contains an elevated proportion of non-neutral changes (i.e. an elevated dN:dS ratio). This ratio declines as distances between pairs increase, as those differences proportionately change from mutation/low-frequency polymorphisms to fixed substitutions. Read Ho's seminal paper or his review. This is well supported by another recent paper from a separate research group who show time dependency for non-synonymous but not synonymous change, as predicted from the mutational load argument. Do you have an alternative explanation for these phenomena? In short, you cannot condemn the mutational load argument as simply being theoretical. Not only is it the best current explanation for observations such as time dependence in molecular rates, but to dismiss it would at least require an explanation for how populations would otherwise avoid the accumulation of deleterious alleles. On sequence conservation:
Although sequence conservation in divergent organisms suggests function, the absence of sequence conservation does not indicate lack of function. Indeed, according to modern Darwinian theory, species diverge because of mutational changes in their functional DNA. Obviously, if such DNA were constrained, then evolution could not occur.
I am surprised by this line of argument. We cannot examine these data with depth and arrive at this conclusion. Putative bursts of selection occur with distinct molecular signatures, whereas general, unconstrained evolution occurs at approximately the neutral divergence rate (i.e. the mutation rate), without marked, lineage-specific differences. In any case, lineage-specific increases should not be interpreted as accelerated positive selection without evidence. Relaxation of purifying selection will produce the same pattern. Only a hardline selectionist would argue otherwise. If we were to accept that every acceleration was evidence of positive selection, we would be forced to believe that synonymous sites were under strong positive selection, and that pseudogenes have undergone accelerated positive selection relative to their functional homologues. On removing putative junk:
In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks.
The deletion was done to a locus previously linked to heart disease, so it was already known to have phenotypic effects. Undoubtedly, there will be plenty of such findings in the future. Will this explain the majority of the genome, or little fragments, though? On pervasive transcription, I think this is a much more open question than areas such as mutational load. However, if you wish to discuss methodological problems, you should at least for balance describe the issues with tiling microarrays, and the likelihood of low-level transcription being noise. Transcription does not equal function. Transcriptional noise/junk is expected from RNA polymerase binding. I would like to say much more but this is already a lengthy response for a comments thread where it will get lost amongst the rest.
paulmc
November 13, 2011 at 07:47 PM PDT
@Gil, I haven't attacked your memory. I think you truly believe you were a "Dawkins-style atheist". You just don't know any better, how that claim doesn't even begin to pass the smell test from you, given what you've written over a long period of time. It's not a big deal, it's just something that stick out conspicuously when you get your "breastplate of righteousness" on and get going. I'm just no interested in checkers, I'm a go player, have done some work on chunking algorithms and pattern recognition that I hoped might serve a group effort to build a computerized go-playing program (there's a challenge you could really brag about!) My colleague often reminded the folks who joined the project from time to time that "Go is to chess what chess is to checkers" in terms of complexity and depth". Checkers was the example we used of simple brute force search -- you can run enough plies quickly enough on modern hardware to power a world champion (vs. humans anyway) checkers program just with brute force search. We had a guy participate who was affiliated with a program named "Chinook", that I think was competitive back in the day. Maybe you are familiar with this. In any case, your claim to thoroughly familiarity with evolutionary algorithms is not grounded in your work on "WCC" as I read it, but if you are thinking that's your backgrounding, the search computation for a checkers game, I suggest we're not talking about the same subject. As for the rest... hand waving. If you want to tell me your calculation as to the probability figures for the resources required to make a cell run, I'd be like to see that. I'm regularly challenged with the Big Number Tiny Probability Game from creationists, and it's always interesting (and here, especially so, for one thoroughly familiary with EA and computing mechanics, etc.) to see the math. But I won't hold your breath. You've got the fulfillment of Lewis now driving your math, and forget that, the spirit of God. So it hardly seems fruitful for either of us, in light of that.eigenstate
November 13, 2011 at 07:34 PM PDT
eigenstate, Three comments: 1) I'm thoroughly familiar with evolutionary algorithms. 2) Your challenge is empirically unsupported speculation concerning the applicability of computational evolutionary algorithms to the real world of biology. 3) The probabilistic resources are not available in the real world to make your stuff work at even the cellular level, much less at the macro level. You might enjoy clicking on my name and downloading my brain child, WCC. It's free. It uses all the most sophisticated search algorithms gleaned from the chess world, plus a dozen more of my own invention. People like you fascinate me. You've attacked my credibility, character, motivation, integrity, and even my memory about my own past. This is evidence of some kind of pathological obsession on your part. You are a religious fanatic, and I've attacked the creation story of your religion with the very science you thought supported it. This explains your hostility and antipathy.GilDodgen
November 13, 2011 at 06:50 PM PDT
@Eric Anderson
Interesting comments. If I am understanding your description properly, you have a process that introduces particular classes of changes to a pre-existing program and then applies a filter to sift through the changes. This may look, superficially, like some kind of Darwinian process, but it really is not. We should not let the term “evolutionary algorithm” confuse us into thinking that the program actually demonstrates naturalistic evolution as it would need to occur in biology if the naturalistic story is true.
EA doesn't pretend to be analogy. Rather it harnesses what we've learned in biology, and puts it to useful ends in computing. Impersonal nature (or what in computing we'd call "brute force processing") is a powerful designer, it's just very expensive (by our human standards) in terms of the cycles and resources needed for the exploration of the search landscape to work enough to accumulate valuable adaptations and structures. Nature doesn't have anything to offer in terms of brains (so far as we can tell), but what she lacks in brains, she makes up for in a wealth of deep time and resources. Evolutionary algorithms don't and can't simulate all the attendant biology, but they do harness the basic engineering principle that nature has worked out -- persistent, scaled landscape searches with a cumulative adaptation filter can produce a wealth of novel and sophisticated structures, structures that we may find extraordinarily valuable. In network traffic analysis, EA provides a method to "let the search go", and through the vast number of "duds", effective and ingenious creative structures toward identifying target patterns and detecting anomalies arise. So EA just "virtualizes" some of the core design principles we've learned from the (putatively) impersonal design products of nature.
Therefore, I take it the process would not work well over a practical period of time with random or unplanned state changes or without some optimization of goals. These would be precisely the kinds of things that cannot be controlled in a naturalistic evolutionary scenario. Further, it is certainly fair to say that the amount, complexity and depth of programming in, say a human, dwarfs anything we are currently doing with our machines.
I think that's fair to say. It's certainly true that we've not run enough generations, across all the EA programs man has ever written and run, to reach reasonable odds of producing something on the order of the "goo-to-the-zoo-to-you" developmental pathway of humans. But even so, the objection would have to be "I believe in inches, but I can't see them adding up to miles". Even fairly modest EA implementations demonstrate the stochastic creativity toward novel structures. They are predictably and routinely produced -- it's just massive iterations over a (pseudo)random mutation algorithm. The key lever is the fitness function, the filter. In the programs I've worked with, the fitness function has nothing to do with biology -- we're looking for complex trigger patterns that can detect network intrusion and other traffic anomolies humans are really bad (as intelligent designers) at detecting. The whole of the EA computing community, though, has not produced even a small fraction of the equivalent "cycles" of physics and biology for the history of our universe. Not even close. Deep time is.... really mind-bendingly deep.
You seem to be suggesting that we should “let go of the need for a human (or an intelligence) to be driving all parts of the system” and just rely on the vague idea of stuff-happens-plus-lots-of-time, when your own work in fact demonstrates the need for that intelligent intervention.
Not saying that. It's not vague, or the least bit mysterious. It's just computing automata. It's boring as hell, but it's perfectly specific and well understood, what's happening there. But when you work with genetic algorithms in an applied way (like you are really trying to get something done, professionally), both the power and speed of "intelligent design" from humans is underscored, AND the power and slowness of mindless, undirected process to come up with solutions and structures that humans fail to develop is proven out as well. These are two alternative heuristics for design. "Human design" is highly efficient compared to "impersonal design" with respect to conserving time and resources. Human design produces some types of solutions which impersonal design doesn't, ever, or at least fantastically infrequently. But impersonal design produces solutions humans are too stupid (or rather, just too impatient) to explore. And while it's lavish in the amount of resources and time it demands, it is positively genius in a humbling way for some of its solutions it produces. It's just as unlikely man would come up with some of the "impersonal design" designs. It's a weird, even spooky sensation, when you find some creative results that work coming from brute force, impersonal design process of EA. I can totally understand the human superstitious reflex -- wow, this is like miraculous, or something. But it's just mundane computing machinery.
Further, as I understand it, you are not suggesting that the evolutionary algorithm wrote the top-down architecture in the first place.
No, the operating environment, like nature and natural law in biology, is an enclosing context.
Thus, your algorithm isn’t really a major creative force in any sense of the word.
It is. If you crawled through every line of code, you would not be able to find any code that indicates or steers the program toward any particular solution, or pathway. It uses the (pseudo-)random functions of the OS libraries to drive changes to the parameters, and the executable code itself. For any of the very cool and elegant solutions we came up with for detecting network traffic anomalies, all of the design for that was synthesized from mass iterations over a search landscape, with a fitness filter sweeping up behind to accumulate the positive adaptations. When you look at how this dumb, impersonal bit of code and computing machinery produced the solution, it does seem somewhat magical. But humans has a hair-trigger magic reflex, and this situation is really enlightening because we wrote it all ourselves and know exactly how much design we baked in for those solutions -- none. We can specify the end criteria, but the criteria is not the design. Many ways to skin a cat, etc.
It may, within the carefully planned parameters you have outlined above, be a useful diagnostic or auditing tool to assess the pre-existing program. But even if we concede a small number of potential positive changes flowing from this audit (I don’t think your description matches what would occur in biology, but I’m willing to concede the point for a moment), the evolutionary algorithm is not the primary creative force by any stretch of the imagination.
I think that's where you are mistaken. The operating environment is key to defining the solution criteria -- what succeeds and propagates, and what does not -- but the searching of the landscape, random variation with accumulation of successful adaptation is the creative source for the designs of the successful solutions (and the designs of the unsuccessful non-solutions, of course). In terms of the designs themselves -- not the solution criteria, and these are different elements -- that little bugger of a program IS the creative force. We just send it on its way searching the landscape, and sit back after (seemingly interminable) hours and days and weeks, and watch it create. A key limitation right now, and the reason EA is not more pervasive in commercial computing than it is, is the difficulty of assessing fitness in real-time, or in any way that incorporates massive numbers of iterations and some substantial depth in gauging how good the candidate solution really is. That is not, however, a problem for the physical environment -- it resolves everything, everywhere, in real time. That's just physics. It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment.eigenstate
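For readers following this exchange, here is a minimal sketch of the mutate-and-select loop being described; the toy target-matching fitness function and every parameter below are illustrative assumptions, not eigenstate's network-analysis code (and, as other comments in this thread note, a fixed explicit target is itself a simplification of how a fitness filter is specified):

```python
import random
import string

random.seed(0)
TARGET = "detect the anomaly"   # stands in for whatever the fitness filter rewards
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """Count matching positions; the 'filter' that sweeps up successful variants."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Random, undirected point changes to the candidate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from purely random strings.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if generation % 50 == 0 or fitness(best) == len(TARGET):
        print(f"gen {generation:3d}  fitness {fitness(best):2d}  {best!r}")
    if fitness(best) == len(TARGET):
        break
    survivors = population[:50]                                          # selection
    population = [mutate(random.choice(survivors)) for _ in range(200)]  # variation
```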
November 13, 2011 at 05:48 PM PDT
eigenstate: Interesting comments. If I am understanding your description properly, you have a process that introduces particular classes of changes to a pre-existing program and then applies a filter to sift through the changes. This may look, superficially, like some kind of Darwinian process, but it really is not. We should not let the term "evolutionary algorithm" confuse us into thinking that the program actually demonstrates naturalistic evolution as it would need to occur in biology if the naturalistic story is true. You state: "We put in try/catch blocks and fastidiously avoid random or unplanned state changes in our finite automata precisely because we CANNOT operate like biology does, and because we have what biology does not — a governing design agent that works to optimize goals with minimal resources and cycles." Therefore, I take it the process would not work well over a practical period of time with random or unplanned state changes or without some optimization of goals. These would be precisely the kinds of things that cannot be controlled in a naturalistic evolutionary scenario. Further, it is certainly fair to say that the amount, complexity and depth of programming in, say a human, dwarfs anything we are currently doing with our machines. You seem to be suggesting that we should "let go of the need for a human (or an intelligence) to be driving all parts of the system" and just rely on the vague idea of stuff-happens-plus-lots-of-time, when your own work in fact demonstrates the need for that intelligent intervention. Further, as I understand it, you are not suggesting that the evolutionary algorithm wrote the top-down architecture in the first place. Thus, your algorithm isn't really a major creative force in any sense of the word. It may, within the carefully planned parameters you have outlined above, be a useful diagnostic or auditing tool to assess the pre-existing program. But even if we concede a small number of potential positive changes flowing from this audit (I don't think your description matches what would occur in biology, but I'm willing to concede the point for a moment), the evolutionary algorithm is not the primary creative force by any stretch of the imagination. Chance + time (+law, of course, but that always exists and goes without saying) can produce some things. It is wonderful at breaking things, which is a large part of the whole auditing process. Very occasionally something useful might be discovered, *when an algorithm is run in a very narrow space and with carefully-crafted parameters.* But evolutionary algorithms have never been shown to produce large amounts of creative content on their own. The amount of time and resources needed to accomplish even piddling biological tasks vastly dwarfs the available resources of the known universe. I understood Gil's comment to refer to the idea of a naturalistic process being responsible for the incredible information content and creative genius we see around us. I didn't think he was quibbling with the idea that in narrow applications a trial-and-error process might be a useful tool if carefully tailored by the very intelligence the materialist would seek to eliminate from the process.Eric Anderson
November 13, 2011 at 02:48 PM PDT
corrected link: THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010 http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians
bornagain77
November 13, 2011 at 09:26 AM PDT
further note:
The Genius Behind the Ingenious - Evolutionary Computing Excerpt: The field dedicated to this undertaking is known as evolutionary computing, and the results are not altogether encouraging for evolutionary biology. http://biologicinstitute.org/2008/10/17/the-genius-behind-the-ingenious/ Signature In The Cell - Review Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill-the information input. There are no mysterious outputs. Software Engineer - quoted to Stephen Meyer http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0
Here is a brutally honest admission that neo-Darwinism has no mathematical foundation from a job description from Oxford university, seeking a mathematician to ‘fix’ the ‘mathematical problems’ of neo-Darwinism:
Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. … http://biologicinstitute.org/2011/05/05/oxford-university-seeks-mathemagician/
More notes:
In computer science we recognize the algorithmic principle described by Darwin - the linear accumulation of small changes through random variation - as hill climbing, more specifically random mutation hill climbing. However, we also recognize that hill climbing is the simplest possible form of optimization and is known to work well only on a limited class of problems.
Watson R.A. - 2006 - Compositional Evolution - MIT Press - Pg. 272

At last, a Darwinist mathematician tells the truth about evolution - November 2011
Excerpt: 7. Chaitin looks at three kinds of evolution in his toy model: exhaustive search (which stupidly performs a search of all possibilities in its search for a mutation that would make the organism fitter, without even looking at what the organism has already accomplished), Darwinian evolution (which is random but also cumulative, building on what has been accomplished to date) and Intelligent Design (where an Intelligent Being selects the best possible mutation at each step in the evolution of life). All of these - even exhaustive search - require a Turing oracle for them to work - in other words, outside direction by an Intelligent Being. In Chaitin's own words, "You're allowed to ask God or someone to give you the answer to some question where you can't compute the answer, and the oracle will immediately give you the answer, and you go on ahead."
8. Of the three kinds of evolution examined by Turing (Chaitin), Intelligent Design is the only one guaranteed to get the job done on time. Darwinian evolution is much better than performing an exhaustive search of all possibilities, but it still seems to take too long to come up with an improved mutation.
https://uncommondescent.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/

Oracle must possess infinite information for ‘unlimited evolution’
https://uncommondescent.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/comment-page-1/#comment-408176

THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
http://www.faqs.org/periodicals/201008/2080027241.html
bornagain77
November 13, 2011 at 09:05 AM PDT
eigenstate:
LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13
Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them. ... Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
http://evoinfo.org/publications/lifes-conservation-law/

Robert Marks on Avida and ev - video - 6:00 minute mark
http://www.youtube.com/watch?v=Uc6Ktq0SEBo

Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism - Dembski - Marks - Dec. 2009
Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it.
http://evoinfo.org/publications/evolutionary-synthesis-of-nand-logic-avida/

New paper using the Avida “evolution” software shows it doesn’t evolve. - May 2011
https://uncommondescent.com/evolution/new-paper-using-the-avida-evolution-software-shows/

The effects of low-impact mutations in digital organisms (Testing Avida using realistic biological parameters) - Chase W Nelson and John C Sanford
http://www.tbiomed.com/content/8/1/9

The Problem of Information for the Theory of Evolution - debunking Schneider's ev computer simulation
Excerpt: In several papers genetic binding sites were analyzed using a Shannon information theory approach. It was recently claimed that these regulatory sequences could increase information content through evolutionary processes starting from a random DNA sequence, for which a computer simulation was offered as evidence. However, incorporating neglected cellular realities and using biologically realistic parameter values invalidate this claim. The net effect over time of random mutations spread throughout genomes is an increase in randomness per gene and decreased functional optimality.
http://www.trueorigin.org/schneider.asp

The Capabilities of Chaos and Complexity - David L. Abel
Excerpt: "To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of nontrivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory."
http://www.mdpi.com/1422-0067/10/1/247/pdf

Constraints vs. Controls - Abel - 2010
Excerpt: Classic examples of the above confusion are found in the faulty-inference conclusions drawn from many so-called "directed evolution," "evolutionary algorithm," and computer-programmed "computational evolutionary" experimentation. All of this research is a form of artificial selection, not natural selection. Choice for potential function at decision nodes, prior to the realization of that function, is always artificial, never natural.
http://www.bentham.org/open/tocsj/articles/V004/14TOCSJ.pdf
bornagain77
November 13, 2011 at 08:56 AM PDT
@Gil, If you are a software developer (I am a software developer), then you will have no trouble discrediting your incredulity on this, just by familiarizing yourself with evolutionary algorithms. I no longer work with EAs professionally, but I did for many years, primarily in large-scale network traffic and intrusion detection applications, as well as some financial pattern analysis - domains where EAs can be profitably deployed to explore spaces that human engineering can't or won't reach.

One of the basic functions of an EA is deliberately introducing "random errors" in the generation of new cohorts, and seeing how they do. This is usually implemented just with calls (in C++) to srand(now) and rand(). I point that out because it demonstrates (to a developer) just how "error-driven" the process is. We take code and values and just randomly mutate them to see what happens. Over and over and over. The vast majority of 'errors' do not help, and because of the filter (a fitness function), they are also not preserved. The rare few errors that do lead to improvements are preserved, and accumulate.

You are thinking about software in a context where failures by the millions and millions are not available or allowed. But in computing contexts where they are allowed (and this is a step toward isomorphism with the dynamics of biology in the physical world), generating millions of backward steps to sift through in order to find the odd forward step, which gets preserved and accumulates upon the other forward steps, does produce interesting, useful, tangible, and sometimes highly profitable results.

The way you are thinking about software, and writing programs in the traditional way, is a BARRIER to understanding biology and evolution, not an aid. We put in try/catch blocks and fastidiously avoid random or unplanned state changes in our finite automata precisely because we CANNOT operate like biology does, and because we have what biology does not -- a governing design agent that works to optimize goals with minimal resources and cycles.

On an evolutionary model, both in the computing/EA sense and in the biological sense, there is no "pre-testing". The tests occur in the environment, and the process traverses (with stochastic inputs and fitness filters that preserve and accumulate forward steps) a search landscape. Nature also tests, and also fixes bugs, and repeats the process, driving the program (organism) to increasing robustness.

It's only absurd if you insist on an anthropomorphic paradigm for the process. As soon as you let go of the need for a human (or an intelligence) to be driving all parts of the system, and let chance + law + time + resources work in concert to harness the creative capabilities that chance and law can exhibit together, your sense of absurdity will be obviated as misplaced and unwarranted.

eigenstate
November 13, 2011 at 08:43 AM PDT
I'm currently writing a software utility for others to use at work, and trapping and handling errors is a real challenge. One tries to think of everything that might go wrong internally, as well as everything a user might do to muck things up, but the list is almost endless, as combinatorics produces an exponential explosion of the number of possible pathways through a program as its complexity increases. One tests, finds bugs (often fatal ones), and fixes the bugs until a program seems to be reasonably robust.

The notion that random errors filtered by natural selection can mimic this process seems absurd on its face.

GilDodgen
November 13, 2011 at 06:40 AM PDT