Intelligent Design

Retrovirus infection of germline confirmed in vivo

There was some discussion here in the past year or so of whether retroviruses could indeed infect a germ cell and hence leave deactivated heritable fingerprints in descendants. Mike Behe mentions these retroviral markers as convincing evidence (to him) of common descent, at least in the primate lineage including humans and chimps. This experiment pretty much settles the question.

The testis and epididymis are productively infected by SIV and SHIV
in juvenile macaques during the post-acute stage of infection

Miranda Shehu-Xhilaga, Stephen Kent, Jane Batten, Sarah Ellis,
Joel Van der Meulen, Moira O’Bryan, Paul U Cameron,
Sharon R Lewin and Mark P Hedger

Published: 31 January 2007
Retrovirology 2007, 4:7 doi:10.1186/1742-4690-4-7

Abstract

Background: Little is known about the progression and pathogenesis of HIV-1 infection within the male genital tract (MGT), particularly during the early stages of infection.

Results: To study HIV pathogenesis in the testis and epididymis, 12 juvenile monkeys (Macaca nemestrina, 4–4.5 years old) were infected with Simian Immunodeficiency Virus mac 251 (SIVmac251) (n = 6) or Simian/Human Immunodeficiency Virus (SHIVmn229) (n = 6). Testes and epididymides were collected and examined by light microscopy and electron microscopy at weeks 11–13 (SHIV) and 23 (SIV) following infection. Differences were found in the maturation status of the MGT of the monkeys, ranging from prepubertal (lacking post-meiotic germ cells) to post-pubertal (having mature sperm in the epididymal duct). Variable levels of viral RNA were identified in the lymph node, epididymis and testis following infection with both SHIVmn229 and SIVmac251. Viral protein was detected via immunofluorescence histochemistry using specific antibodies to SIV (anti-gp41) and HIV-1 (capsid/p24) protein. SIV- and SHIV-infected macrophages, potentially dendritic cells and T cells in the testicular interstitial tissue were identified by co-localisation studies using antibodies to CD68, DC-SIGN and TCR. Infection of spermatogonia, but not more mature spermatogenic cells, was also observed. Leukocytic infiltrates were observed within the epididymal stroma of the infected animals.

Conclusion: These data show that the testis and epididymis of juvenile macaques are a target for SIV and SHIV during the post-acute stage of infection and represent a potential model for studying HIV-1 pathogenesis and its effect on spermatogenesis and the MGT in general.

113 Replies to “Retrovirus infection of germline confirmed in vivo”

  1. 1
    WesleyP says:

    so… isn’t this good for evolution?

  2. 2
    ungtss says:

    not sure quite what question it settles. It certainly settles the question of whether germ lines can be infected by viruses. However, it doesn’t answer the following questions:

    1) How can we presume that shared endogenous retroviruses indicate common ancestry, when many viruses (including SIV/HIV) can infect both lines independently?

    2) How is it that all humans share SO MANY (thousands of) endogenous retroviruses — did our common ancestor or ancestor pool have all those viruses already?

    3) What are we to think of endogenous retroviruses that are not only beneficial but essential for organisms (like this one http://www.sciencedaily.com/re.....233630.htm) — is it possible that these retroviruses are actually intentional, designed mechanism for genetic engineering?

    4) Are we to believe that ALL of the hundred-some-odd thousand endogenous retroviruses in the human genome were ALL transmitted via STDs that specifically target the gonads?

    5) How did we come to be HOMOZYGOUS with respect to these endogenous retroviruses? Were both the Adam and the Eve infected?

  3. 3
    Gerry Rzeppa says:

    I was wondering along the same lines as ungtss. Perhaps there are some presumptions making their way into the conclusions here?

  4. 4
    gpuccio says:

    DaveScot:

    I have not yet read the whole article (lack of time), but from the abstract I would say that the subject here is infection of the reproductive system, not integration of the retrovirus in the transmissible genome. So, I can’t see the relevance to retroviruses in the genomes as evidence of uncommon descent.

    Please, correct me if I am wrong.

  5. 5
    gpuccio says:

    Ehmm… It should have been “common descent”, I suppose…

  6. 6
    DaveScot says:

    ungtss

    The question it settles is whether or not retroviruses can infect mammalian germ cells. It was argued that they cannot as there exist unique barriers to entry in germ cells that somatic cells generally don’t have. There’s still no experimental evidence that I’m aware of that mammalian egg cells are subject to retroviral insertions as there are additional barriers to entry there that sperm cells don’t have. However, it needn’t be able to infect egg cells in order to be integrated into the germ line so long as it can infect male gametes.

    This doesn’t have any bearing on other arguments against retrovirus markers as evidence of common descent. That said I’ll answer your unrelated questions to the best of my knowledge:

    1) How can we presume that shared endogenous retroviruses indicate common ancestry, when many viruses (including SIV/HIV) can infect both lines independently?

    Identical integration points. Identical ERVs are found scattered all over the genome. While there may be preferred insertion points, there’s no evidence of any such preference. Identical integration points in the same species are easily explained by inheritance. Common ancestry in the same species isn’t really disputed. The argument goes that identical integration points in different species are also due to inheritance via common ancestry, only in this case the ancestor was common to both species. (A rough back-of-envelope sketch follows at the end of this comment.)

    2) How is it that all humans share SO MANY (thousands of) endogenous retroviruses — did our common ancestor or ancestor pool have all those viruses already?

    That appears to me to be the best explanation – integration into the germ line then spread through the gene pool in the same way that other alleles spread around and may become fixed. Again, changing allele frequencies and fixation of alleles in the gene pool aren’t really disputed.

    3) What are we to think of endogenous retroviruses that are not only beneficial but essential for organisms (like this one http://www.sciencedaily.com/re…..233630.htm) — is it possible that these retroviruses are actually intentional, designed mechanism for genetic engineering?

    Quite right. When Darwinists ask me what possible mechanism might be used by the unknown designer to effect population-wide changes in species I’ve answered that a highly infectious retrovirus would be a really effective way. In just one or several generations a designed genetic load carried by a retrovirus could spread the load through the entire population. It would happen so fast that in the fossil record it would appear as saltation. Indeed, we use retroviruses as vectors in genetic engineering ourselves.

    4) Are we to believe that ALL of the hundred-some-odd thousand endogenous retroviruses in the human genome were ALL transmitted via STDs that specifically target the gonads?

    HIV/SIV doesn’t specifically target the gonads and we’re lucky that it doesn’t survive outside the body long enough to have much chance of spreading through the air like a common cold virus. Oral herpes (which causes canker sores) is a retrovirus and it spreads without direct physical contact. HIV/SIV isn’t nearly as robust and generally requires direct blood-to-blood or semen-to-blood transfer. We’re also lucky it isn’t able to spread via mosquitos like the West Nile virus.

    5) How did we come to be HOMOZYGOUS with respect to these endogenous retroviruses? Were both the Adam and the Eve infected?

    If you need this explained your knowledge of genetics leaves an awful lot to be desired.
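
    As a rough back-of-envelope illustration of the integration-point argument in answer 1 above, the short Python sketch below estimates how many identical insertion sites two lineages would be expected to share if each acquired its ERVs independently and insertion were roughly uniform across the genome. The genome size and ERV counts are illustrative placeholders, not figures from the paper or from this thread, and the uniform-insertion assumption is itself questioned later in the discussion.

    ```python
    # Back-of-envelope estimate: if two lineages each picked up their ERVs
    # independently, and insertion sites were roughly uniform over the genome,
    # how many identical insertion sites would coincidence alone produce?
    # All numbers below are illustrative placeholders, not measured values.
    from math import exp

    GENOME_SITES = 3_000_000_000   # assumed number of candidate insertion positions
    ERVS_LINEAGE_A = 1_000         # assumed independent insertions in lineage A
    ERVS_LINEAGE_B = 1_000         # assumed independent insertions in lineage B

    # Expected number of coincidental matches is roughly nA * nB / sites.
    expected_matches = ERVS_LINEAGE_A * ERVS_LINEAGE_B / GENOME_SITES
    # Poisson approximation for the chance of even one coincidental match.
    p_at_least_one = 1.0 - exp(-expected_matches)

    print(f"expected shared sites by coincidence: {expected_matches:.2e}")
    print(f"probability of at least one match:    {p_at_least_one:.2e}")
    ```

    Under these (debatable) assumptions even a single coincidental shared site is a long shot, which is why identical sites are usually read as inheritance; a strong insertion-site preference would change the arithmetic, which is exactly the possibility pressed further down the thread.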

  7. 7
    RichardFry says:

    DaveScot

    When Darwinists ask me what possible mechanism might be used by the unknown designer to effect population-wide changes in species I’ve answered that a highly infectious retrovirus would be a really effective way.

    Personally I can’t see why we could not equally say that the designer created the molecules directly, in situ, that made up the new changes. So a direct intervention rather than using things already created.

    To me, I don’t see why the designer could create from nothing originally and then, for some unknown reason, start to require things like retroviruses to do the same thing it did earlier.

    I mean, creating a living form from nothing must be harder than slightly modifying one that already exists? Direct manipulation of matter must have happened at least once, no?

    DaveScot, if we take it as read that the designer can directly manipulate matter at the sub-atomic level then what is your opinion as to why retroviruses etc are needed at all?

  8. 8
    DaveScot says:

    gpuccio

    The abstract mentions that spermatogonia were observed to be infected. Spermatogonia mature into sperm cells. It isn’t really surprising that mature sperm cells themselves might not have active retroviral insertion sites. The maturation process might serve to deactivate the viral DNA so it is no longer able to produce new virus particles. It may be as simple as endogenous retroviruses being incapable of producing new virus particles in a haploid cell. Expression of many genes not absolutely required for metabolism is repressed in gametes.

  9. 9
    DaveScot says:

    Richard

    The Darwinists I referred to are asking for an identifiable physical method by which purposeful genomic change can be made quickly in a large population. A highly infectious retrovirus is one demonstrated way it could be accomplished. What they’re hoping to hear from us is the answer you gave – an omnipotent supernatural entity can do *anything*. The answer I give doesn’t require supernatural powers but rather nothing more than the same means that human genetic engineers use to insert foreign DNA into living organisms. By so doing I deny them the chance to say that explanations invoking supernatural agency are outside the domain of science. The retrovirus vector is a perfectly workable material way for a designer to modify the natural course of evolution.

    Acknowledgement of endogenous retroviruses in common descent is a double-edged sword. At the same time that it lends strong support to common ancestry, it also lends strong support to the intelligent design hypothesis by giving a designer a demonstrated physical means of altering the course of evolution. ERVs are generally deactivated quickly by random mutation. There’s nothing preventing any one or any group of them from being reactivated in the future. Greg Bear did a lot of research into the possible reactivation of ancient ERV networks as a mechanism for saltation of new species. He used it as a core plot element in his hard science fiction book “Darwin’s Radio”. Darwin’s Radio was very favorably reviewed in Nature by prominent geneticist Michael Gold. He complimented Bear for thinking outside the box of usual genetic assumptions. I highly recommend the book. The sequel “Darwin’s Children” I didn’t think was anywhere near as good.

  10. 10
    gpuccio says:

    Davescot:

    I agree with you that it is perfectly possible that, after spermatogonia infection, the retroviral genome could be permanently inserted in the genome of gametes; I just think that this paper does not observe that, so it remains a theoretical possibility.

    Anyway, I agree that retroviruses could certainly be an instrument of intelligent implementation of information, playing a role similar to that of plasmids in bacteria. Indeed, all transposable elements in non-coding DNA could act that way. But really, all that is at present speculative, and I think we need more facts.

  11. 11
    RichardFry says:

    DaveScot:

    I see.

    The Darwinists I referred to are asking for an identifiable physical method by which purposeful genomic change can be made quickly in a large population.

    In that context then you are quite right to avoid supernatural entities as a mechanism. However, by giving that answer, in a way you are pandering to their interpretations. What I mean by that is: who says the changes had to be implemented quickly? Is that not placing constraints upon what the designer can and cannot do?

    In any case, my main point in response to your post is that I understand great progress has been made manipulating single atoms, and striking progress has been made in that field in general in the last decade or so (for instance, atoms can be held in a criss-cross laser beam array and tuning the laser moves the atoms; I believe the interference patterns generated are the key).

    So, given that humanity is presumably nowhere near the abilities of the designer in any way, and if we assume that great strides will continue to be made in the future, I believe that shortly, as well as

    The retrovirus vector is a perfectly workable material way for a designer to modify the natural course of evolution.

    you can say that direct manipulation of matter can be (and has been) proven to take place (by humanity for one!) with no supernatural intervention required at all. After all, if we take humanity’s achievements as the absolute minimum that can be achieved by intelligent design(ers), then direct manipulation of matter is within all intelligent entities’ power (or will shortly be).

    After all, what was used to create irreducibly complex items except direct manipulation of matter? That right there is the key for me on this issue.

    To be clear, I’m not saying that “the designer thought it and it was so”. If we can do it, so can the designer.

  12. 12
    DaveScot says:

    gpuccio

    It observed the active infection of cells that mature into sperm cells. The active infection means that the viral DNA load was inserted into the cell. It’s possible that the infection prevents the spermatogonia from maturing into a viable sperm cell. They’re detecting the infection by the presence of viral RNA in the cell which means it’s an active insertion. Two possibilities exist from that point: either the active viral DNA prevents maturation of the spermatogonia into sperm cells, in which case there’s still no demonstrated way for the insertion to be heritable, or the maturation process deactivates the viral DNA load, in which case it is still heritable. I’ll agree that in order to be conclusive an infected spermatocyte needs to be observed maturing into viable haploid sperm cells, and/or the DNA in a sperm cell needs to be sequenced and the specific, identical ERV sequence found in it.

  13. 13
    DaveScot says:

    Richard

    you are pandering to their interpretations

    Not really. “Their” interpretations are largely my interpretations as well. I’m a materialist to the point where my credulity is stretched beyond the breaking point. This basically occurs at two points. One is the entirely speculative self-assembly of complex machinery and abstract codes that specify their construction and operation. The genetic code, DNA, and ribosomes are the best example of this. Intelligent design is the current best explanation as that is the only demonstrated way that codes and machines even remotely approaching that level of complexity can be created ex nihilo. Another breaking point is the absurd untestable concept of an infinite multiverse. An infinite multiverse is self-defeating when it comes to biological ID in any case. In an infinite multiverse there must by definition be an infinite number of universes where an intelligent agency with vast material powers, including the ability to design organic forms of life, existed and was responsible for the design of life on earth.

    What I mean by that is: who says the changes had to be implemented quickly?

    The fossil record. The fossil record is a record of saltation of new species. New species fully characteristic of their kind appear abruptly in the fossil record, exist largely unchanged for an average span of 10 million years, then just as abruptly disappear from the record. Making matters worse is that almost all modern phyla and all extinct phyla appeared abruptly over a span of 5-10 million years in the Cambrian period some 500 mya. There are few if any predecessors to these in any earlier fossil records. Another troubling obstacle, this time for the origin of life instead of its diversification, is that life as we know it first appeared, arguably, some 4 billion years ago. That far back in time the earth barely had time to cool off enough for liquid water to exist, which means that abiogenesis had very little time to occur. A perfectly plausible explanation for this is that the earth, as soon as it was able to support organic life in any form, was purposely seeded with it (Francis Crick and Leslie Orgel’s directed panspermia). The purpose of that was to terraform the planet so that it could eventually support oxygen-breathing land animals. The first seeding culminated in the Ediacaran biota. A second seeding of modern forms of life took place in the Cambrian. If we suppose that rational man and an industrial civilization capable of repeating the cycle of directed panspermia was the ultimate goal, more terraforming was required, at least inasmuch as laying down large stores of easily accessible fossil fuels to power an industrial civilization. In order to repeat the cycle and ensure that life continues beyond the point where the earth is able to support it (the sun will eventually fry the earth into a cinder in another few billion years) there must be some means of identifying young planets able to support life (astronomy), the ability to transport life to them (space exploration), and the ability to customize forms of life suitable for the new environment (genetic engineering). We seem to be proceeding along all of those lines.

    Eric Pianka, the notorious lizard expert at UT who caused all the stir by suggesting that the best thing for the planet is for something like the ebola virus to come along and kill 9 out of 10 humans on the planet, posed a philosophical question: “What makes humans more important than lizards?” I answered that if life is to continue beyond the time when our sun becomes a red giant and turns the earth into a cinder then a spacefaring species is required to do it. Lizards aren’t building telescopes and spacecraft but humans are. If not for humans there is no way for life to relocate to a new planet. That’s what makes humans more important than lizards.

    My general feeling is that life on the earth is just one link in a chain that extends indefinitely into the past and may extend indefinitely into the future. Young planets get seeded with life which by design matures into an intelligent form which is able to repeat the cycle by seeding other planets. This fits very nicely into the scheme of things here – a common attribute of all forms of life is to persist into the future. If earth-life is to persist it eventually has to find its way off this planet and there’s really no other way to accomplish that other than through a high technology industrial species. This view is pretty hard-core materialism, about as far removed from personal gods, special creation, and bible stories as one can get. I consider myself to be more of a materialist than the most ardent atheistic Darwin worshippers who for some inexplicable reason seem unable to accept the possibility of continuity in intelligent life preceding ourselves. There is nothing in science which warrants that belief.

  14. 14
    RichardFry says:

    DaveScot:

    One is the entirely speculative self-assembly of complex machinery and abstract codes that specify their construction and operation.

    What about if you replace “complex” with “simple”? In fact, do you consider there to be a difference at this level between “complex machinery” and “machinery”?

    Intelligent design is the current best explanation as that is the only demonstrated way that codes and machines even remotely approaching that level of complexity can be created ex nihilo.

    Of course it is; however, I think this is where ID has a serious failing for me. Of course it is unlikely to the point of impossibility that complex structures like the bac-flag can self assemble from a bunch of proteins, but nobody is claiming that, are they? So in my opinion that’s an argument that should be put into the “do not use” section! Your mileage may vary.

    The fossil record is a record of saltation of new species. New species fully characteristic of their kind appear abruptly in the fossil record, exist largely unchanged for an average span of 10 million years, then just as abruptly disappear from the record.

    Even so, those timescales are ones that I would hesitate to apply the label “quickly” to. To help me understand your POV on this problematic issue, what in your opinion was the duration of “abruptly”? A day? A year? 10,000 years? At these timescales decimal points matter!

    A perfectly plausible explanation for this is that the earth, as soon as it was able to support organic life in any form, was purposely seeded with it (Francis Crick and Leslie Orgel’s directed panspermia).

    I take it you disagree with the drubbing Dawkins has received for suggesting the same in “Expelled”?

  15. 15
    ungtss says:

    DaveScot:

    Thank you for your kind and patient response.

    Identical integration points [are why ERVs imply common descent]. Identical ERVs are found scattered all over the genome. While there may be preferred insertion points, there’s no evidence of any such preference. Identical integration points in the same species are easily explained by inheritance. Common ancestry in the same species isn’t really disputed. The argument goes that identical integration points in different species are also due to inheritance via common ancestry, only in this case the ancestor was common to both species.

    The way I see it, there are three scenarios for how ERVs could have spread to an entire population:

    a) The virus provided some survival advantage, such that it spread throughout the population via natural selection;

    b) The virus provided no survival advantage, but its appearance on a single chromosome in a single common ancestor managed to spread it throughout an entire population.

    c) The virus provided no survival advantage, but different individuals within the population were infected at identical insertion points;

    Scenario A (advantageous ERVs) screams ID to me, as it seems to for you, also.

    Scenario B (no survival value, spread from a single ancestor) seems highly improbable to me, for two reasons.

    FIRST: if the virus initially infected only a sperm cell, it would be passed on only on ONE chromosome of the relevant pair. In order to spread to an entire population, first two of that individual’s descendants carrying the ERV would have to breed, and then the 1/4 of their descendants that are homozygous with respect to the ERV would have to breed, and their descendants would have to be the common ancestors of the entire population we see today (meaning everybody else died off).

    SECOND: Without any survival advantage to this new addition, genetic drift is going to wipe the new ERV off the map in time, unless you’re looking at a population bottleneck of some sort where only the carriers survived an epidemic of the virus. In any event, the whole scenario looks rather implausible to me.

    c) That leaves us with c — multiple initial infections at identical insertion points. While there’s certainly no proof that it occurred, it seems (to my untutored mind) to be the most reasonable of the three scenarios. Viruses can have preferred insertion points.

    CONCLUSION: If we grant that the most plausible scenario is multiple infections at identical insertion points within the human population, I don’t see why it’s particularly Earth-shattering to say that multiple infections at identical insertion points is plausible across species as well.

    What do you think?

    If you need this explained your knowledge of genetics leaves an awful lot to be desired.

    I would hope we’d all be humble enough to realize that this statement is true for all of us in the face of the enormous complexity of the topic. Those of us with relatively more to be desired thank those of you with relatively less to be desired for your help.

  16. 16

    DaveScot wrote (in #6):

    “Oral herpes (which causes canker sores) is a retrovirus and it spreads without direct physical contact.”

    Oral herpes is not caused by a retrovirus. It is caused by a DNA virus called Herpes Simplex I, a member of the Herpesviridae (see: http://en.wikipedia.org/wiki/Herpesviridae). Herpes Simplex I reproduces completely differently than the RNA retroviridae. Specifically, it does not insert a copy of its genome into the genome of its host cell. Instead, the large DNA genome of the Herpes Simplex I virus remains within the cytosol of the host cell, directing the assembly of multiple copies of itself using the DNA replication machinery of the host cell.

    Furthermore, Herpes Simplex I virus does not cause canker sores. Also known as aphthous ulcers, canker sores are not caused by viruses at all, but rather are most likely a form of autoimmune disease (see http://en.wikipedia.org/wiki/Canker_sore).

  17. 17
    DaveScot says:

    Richard

    Of course it is unlikely to the point of impossibility that complex structures like the bac-flag can self assemble from a bunch of proteins, but nobody is claiming that, are they?

    More or less, that’s exactly what they’re claiming, but I avoid the bac-flag as an example because there’s a better example in DNA and ribosomes, which are both universal in all forms of life and strictly required to be in place before Darwinian evolution can even get started. If a plausible pathway absent intelligent agency can be shown capable of creating DNA and ribosomes ex nihilo then I’ll concede that everything that follows needs no intelligent agency either.

  18. 18
    ungtss says:

    Correction to my post:

    to be the most reasonable of the three scenarios.

    changed to “to be the most reasonable of the three scenarios with respect to viruses that don’t bestow any survival advantage”

  19. 19

    Furthermore, Herpes Simplex I, like all of the Herpesviridae, does require physical contact to spread. The HSV particle is enclosed within a pseudomembrane, which it “steals” from its host cell during lysis. This pseudomembrane is relatively fragile, and can be destroyed by desiccation and contact with disinfectants such as bleach and alcohol-based sanitizers.

    The Herpesviridae are a very interesting group of DNA viruses. All of them are infectious in humans, causing such nasty diseases as chickenpox (caused by Herpes varicella), shingles (an adult form of chickenpox usually seen in immune-compromised people), mononucleosis (caused by Epstein-Barr virus), cytomegalovirus infection (caused by a close relative of Epstein-Barr virus), and Herpes Simplex II, which mainly affects the genitalia. Epstein-Barr virus has also been implicated in chronic fatigue syndrome and a rare form of cancer called Burkitt’s lymphoma.

    Nearly everyone reading this post has had one or more of the herpes viruses at some point in their lives. I have had all of them except Herpes Simplex II. Most people are exposed to the viruses during childhood, and therefore test positive for the antibodies against them. However, I contracted CMV at the age of 33, and had a very severe case of atypical mononucleosis, followed by a four-month bout with Guillain-Barré syndrome, a form of autoimmune paralysis that results from one’s immune system attacking the myelin sheaths insulating one’s motor neurons. This left me paralyzed from the chest down for almost two months; thankfully, like most cases of GBS, I eventually fully recovered.

    It is always best to check what one writes about before committing it to public view.

  20. 20
    RichardFry says:

    DaveScot,
    Good to hear, then, that you don’t disagree with my assessment that direct molecular manipulation was/is required to create CSI structures.

    I did not realise that there were claims that DNA and ribosomes self assembled.

    I presume that they exhibit IC and so could not have evolved? If so, it strikes me that a “simpler” form of DNA than the DNA we currently have could have, for example, 3 “letters” instead of 4. Is this not a simple example of a possible precursor to extant DNA?

  21. 21
    DaveScot says:

    Allen

    I mistakenly said canker sores when I meant to say cold sores. Cold sores are caused by a strain of herpes virus.

    My confusing herpes, a dsDNA virus, with an RNA-RT virus is, however, inexcusable. It’s a good thing an organic coin collector like you was around to correct the misclassification, which is akin to putting a Liberty Head dime into a Roosevelt dime collection.

    P.S. This does nothing to detract from the point about RNA-RT viruses being a perfectly suited, material vector for an intelligent designer to employ to cause quick, widespread, appreciable genotype change. Maybe that’s the way we as budding intelligent designers finally defeat malaria. Not wipe out the mosquito or battle the parasite in the human host but rather make a retrovirus that infects mosquito gametes and inserts a gene that ruins the ability to host the malaria parasite but doesn’t otherwise harm the mosquito. The mosquito will thank us for it too.

  22. 22

    A more interesting question is how the RNA retroviridae evolved in the first place. All of the RNA retroviruses have an enzyme called reverse transcriptase. This enzyme allows the RNA viridae to “violate” the “central dogma of molecular genetics”: it allows them to make a DNA copy of their RNA genome. This cDNA copy is then integrated into the genome of the host cell by another enzyme called integrase. This insertion, as you have pointed out, is essentially random.

    The RNA retroviridae are very pernicious disease-causing agents because they mutate very rapidly, thereby escaping surveillance by the host’s immune system. This is because reverse transcriptase completely lacks the proof-reading and error-correction mechanisms normally found in DNA replication. Virtually every reverse transcription by these viruses produces several point mutations, meaning that a host that has harbored such viruses for many generations (i.e. a few weeks) has multiple independently evolving lines of these viruses attacking their cells.
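
    To make the error-prone-copying point concrete, here is a minimal Python sketch that copies a random RNA genome through repeated replication cycles with a fixed per-base substitution rate and counts how far it drifts from the founder sequence. The genome length and error rate are illustrative ballpark assumptions, not measured values for any particular virus.

    ```python
    # Minimal model of error-prone replication: each copy introduces substitutions
    # at a fixed per-base rate, so descendant genomes steadily diverge from the
    # founder. Genome length and error rate are illustrative assumptions only.
    import random

    rng = random.Random(0)
    BASES = "ACGU"
    GENOME_LENGTH = 9_700   # roughly retrovirus-sized, for illustration only
    ERROR_RATE = 3e-5       # assumed substitutions per base per copy

    def copy_with_errors(genome: str) -> str:
        """Copy a genome, substituting a random different base at the error rate."""
        out = []
        for base in genome:
            if rng.random() < ERROR_RATE:
                out.append(rng.choice([b for b in BASES if b != base]))
            else:
                out.append(base)
        return "".join(out)

    founder = "".join(rng.choice(BASES) for _ in range(GENOME_LENGTH))
    genome = founder
    for cycle in range(1, 51):
        genome = copy_with_errors(genome)
        if cycle in (1, 10, 25, 50):
            diffs = sum(a != b for a, b in zip(founder, genome))
            print(f"after {cycle:2d} copy cycles: {diffs} positions differ from the founder")
    ```

    With a proofreading polymerase (a far lower error rate) the same loop would barely move, which is the contrast being drawn with ordinary DNA replication.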

    The random insertion of the cDNA copy of the RNA genome of these viruses can play hell with normal cell function. This is why most RNA retroviridae attack only cells that are either immediately replaceable (such as epithelial cells) or cells whose loss does not cause immediate impairment or death. Viruses that attack essential and non-replaceable cells (such as brain cells or muscle cells) impair or kill their hosts and therefore are generally eliminated by natural selection.

    Originally thought to be unique to these viruses, reverse transcriptase is now known to be a normal product of genes in eukaryotes, where it participates in the regeneration of telomeres at the ends of chromosomes. Perhaps the best hypothesis for the origin of RNA retroviruses is that they began as “rogue” reverse transcriptase genes. The gene for reverse transcriptase virtually is a retrovirus all by itself; all it needs to do is become detached from its normal position in the genome of a cell that contains it. This can happen as the result of several molecular genetic processes such as transposon “jumping” and chromosome fissioning. Even more likely would be the mutation of an mRNA transcript of the normal reverse transcriptase gene, allowing it to continue to exist outside the genome, but inserting a cDNA transcript into the host genome in random locations.

    This hypothesis is supported by the observation that the genomes of most RNA retroviruses are extremely small. HIV, for example, has a genome that consists of just nine genes: the genes for reverse transcriptase and integrase, a few proteases (which trim the coat proteins), a few coat protein genes, and a gene for gp120, a binding protein that allows the virus to bind to the CD4 receptor on the plasma membrane of T4 helper lymphocytes (and a few other cells with similar receptors). The combination of the genes for reverse transcriptase and integrase would immediately function as a “coatless” virus, and given the extraordinarily high mutation rate that is a feature of reverse transcription, the combination could easily pick up a few other genes via repeated insertions and lysis cycles.

    In other words, viruses like these are not “external” agents; they are products of our own molecular genetics, gone “rogue”. If they do not kill their hosts before they can be transmitted, they will increase in frequency over time; that is, natural selection will favor their further reproduction and modification. If they (as some recent research suggests) occasionally benefit their hosts, they will be even more likely to survive and reproduce in future hosts.

    Evolution by natural selection in action.

  23. 23

    RichardFry wrote (in #20):

    “…a “simpler” form of DNA than the DNA we currently have could have, for example, 3 “letters” instead of 4. Is this not a simple example of a possible precursor to extant DNA?”

    Actually, there is a strong possibility that the original genetic code may have consisted of as few as two nucleotide bases, rather than the current three that constitute an mRNA codon. This would mean that the original codon “lexicon” could only have coded for a maximum of sixteen amino acids, rather than the current twenty.

    That this was likely the case is supported by an examination of the current three-letter codon lexicon and the amino acids it codes for. This lexicon is highly redundant, with many of the amino acids being coded for by only the first two bases in the codon (the so-called “wobble hypothesis” originally proposed by Francis Crick). A few amino acids can even be coded for by six different codons (i.e. even the second base in the triplet can “wobble” a little).

    Only two of the amino acids are coded for by only one codon: methionine and tryptophan. It is likely that methionine has its “privileged” status by virtue of the fact that it is the “start” amino acid for all proteins (for stereochemical reasons that are too complex to describe here). Therefore, the codon for methionine (AUG) was probably among the original set of two-base codons, and only later was modified to its current three-base form.

    Tryptophan, by contrast, was almost certainly the last amino acid to be coded for. It is relatively rare in many proteins, and was therefore probably a kind of “afterthought” in the codon/amino acid lexicon.

    All of the foregoing can be tested by simply examining the codon/amino acid “lexicon” and comparing it with the known abundances of the different amino acids in proteins. In other words, the current codon/amino acid lexicon, rather than being “irreducibly complex”, shows all the signs of having been modified from a much simpler system that consisted of a two-letter code specifying no more than sixteen amino acids. It may even be the case that the “ur” code consisted of only one-letter codons (and therefore could only specify four different amino acids), and that the two-letter and three-letter codes are later elaborations of this simplest of all codes.
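
    The redundancy pattern described above can be checked mechanically against the published standard genetic code. The Python sketch below hard-codes the standard table in the conventional codon order and reports (a) how many codons encode each amino acid and (b) which first-two-base families already fix the amino acid regardless of the third base, which is the pattern the wobble argument leans on. It examines only the standard code itself, not any particular genome.

    ```python
    # Tabulate degeneracy in the standard genetic code.
    # Codons are generated in the conventional order (first base varying slowest,
    # bases ordered T, C, A, G) and paired with one-letter amino acid codes;
    # '*' marks the stop codons.
    from collections import defaultdict
    from itertools import product

    BASES = "TCAG"
    AMINO_ACIDS = ("FFLLSSSSYY**CC*W"
                   "LLLLPPPPHHQQRRRR"
                   "IIIMTTTTNNKKSSRR"
                   "VVVVAAAADDEEGGGG")

    codon_table = {"".join(codon): aa
                   for codon, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

    # (a) How many codons encode each amino acid (or stop)?
    degeneracy = defaultdict(list)
    for codon, aa in codon_table.items():
        degeneracy[aa].append(codon)
    for aa, codons in sorted(degeneracy.items(), key=lambda kv: (len(kv[1]), kv[0])):
        print(f"{aa}: {len(codons)} codon(s): {' '.join(sorted(codons))}")

    # (b) Which two-base prefixes fix the amino acid no matter what the third base is?
    fourfold = {}
    for prefix in sorted({codon[:2] for codon in codon_table}):
        outcomes = {codon_table[prefix + b] for b in BASES}
        if len(outcomes) == 1:
            fourfold[prefix] = outcomes.pop()
    print("third base irrelevant for:", fourfold)
    ```

    Running it confirms the two specific claims above: methionine (M) and tryptophan (W) are the only amino acids with a single codon, and eight amino acids sit in fully fourfold-degenerate families where the third base never matters.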

  24. 24

    Actually, like Charles Darwin, I collect stamps. I am closing in on a complete collection of all of the US commemorative issues, including those from the 19th century. And yes, I’m aware that this passion of mine plays right into the hands of Ann Coulter and other like genii.

    My wife is the numismatist in the family; indeed, she controls all of the money, currency, coin, and other forms of fiat money.

    And my kids collect their .99 fine silver “walking liberty”, “peace”, and “Morgan” dollars/thallers from the tooth faerie…

  25. 25
    DaveScot says:

    Allen

    Trust me, no one is ever going to compare you to Charles Darwin at this point in time, but I won’t say that will remain true, as it’s within the realm of possibility you’ll come up with a general theory of biology as important as Darwin’s. Just hope you aren’t like Gregor Mendel and have to be long dead before anyone notices you were onto something important.

  26. 26
    ungtss says:

    Allen:

    A more interesting question is how the RNA retroviridae evolved in the first place … Perhaps the best hypothesis for the origin of RNA retroviruses is that they began as “rogue” reverse transcriptase genes.

    That just begs the question.

    “How did they evolve in the first place?”

    “Well, they broke off from eukaryotes significantly more complex than themselves.”

    “Well, how did the significantly more complex eukaryotes get the gene in the first place?”

    That question remains unanswered.

    To meaningfully answer the question, “how RNA retroviridae evolved in the first place,” you need to explain how they evolved in the much more complex and impressive eukaryote …

  27. 27

    Identical integration points in the same species are easily explained by inheritance. Common ancestry in the same species isn’t really disputed. The argument goes that identical integration points in different species are also due to inheritance via common ancestry, only in this case the ancestor was common to both species.

    I understand the argument you restated, DaveScot, but note (and this is not directed against you) that it is rarely presented with mathematical rigor.

    I do not claim to be an expert on retroviral markers as evidence of common descent, but, God willing, hope to become one.

    I suspect the argument is a house of cards.

    Until or unless I become an expert, though, it is interesting to consider others discussing it. I hope the discussions become more mathematically rigorous.

  28. 28
    ungtss says:

    It’s like saying:

    “How did that computer virus come into being?”

    “Well, there’s a very similar section of code in Microsoft Word 95 — so it probably originated there and was subsequently modified.”

    “Well that doesn’t mean anything. How did it come to be in Microsoft Office in the first place!?”

  29. 29

    Just hope you aren’t like Gregor Mendel and have to be long dead before anyone notices you were onto something important.

    Things are even worse now, in that somebody who is onto something important might be shunned by “peer reviewed” journals.

  30. 30
    Ekstasis says:

    DaveScot, you say “If we suppose that rational man and an industrial civilization capable of repeating the cycle of directed panspermia was the ultimate goal, more terraforming was required, at least inasmuch as laying down large stores of easily accessible fossil fuels to power an industrial civilization. In order to repeat the cycle and ensure that life continues beyond the point where the earth is able to support it (the sun will eventually fry the earth into a cinder in another few billion years) there must be some means of identifying young planets able to support life (astronomy), the ability to transport life to them (space exploration), and the ability to customize forms of life suitable for the new environment (genetic engineering). We seem to be proceeding along all of those lines.”

    Interesting hypothesis, but it seems that a major inconsistency exists. If the goal of the panspermia is to seed life that then develops, survives, and thrives, then why use individual organisms that possess consciousness, and die, fading to permanent black? In other words, the micro level is totally out of sync with the macro. The only consciousness that presumably exists is a series of concurrent and sequential fragments. Forget the overall fire, we simply have a collection of sparks, each dying out.

    Seems rather cruel and pointless, does it not?

    Besides, it does not fit with empirical evidence. Near Death Experiences, with vast and mounting evidence (too much to discuss here) that they are not simply a psychological phenomenon, point to the permanent consciousness of individuals. So, we are back to the permanency of the individual and the temporal nature of the cosmos, the very point at which human belief started!!!

  31. 31
    Russell says:

    DaveScot:
    “It observed the active infection of cells that mature into sperm cells. The active infection means that the viral DNA load was inserted into the cell.”

    1) Correct me if I’m wrong, but doesn’t “active infection” mean an infection that is producing new packaged virions, something that wasn’t measured in this paper?

    2) The “load” (genome) of the virus is RNA, not DNA.

    “It’s possible that the infection prevents the spermatogonia from maturing into a viable sperm cell.”

    Wouldn’t it be more accurate to conclude that it’s likely, not merely possible? What other conclusion is more likely since they observed no infection of anything more mature than spermatogonia?

    “They’re detecting the infection by the presence of viral RNA in the cell which means it’s an active insertion.”

    Doesn’t that merely mean (assuming that they lack the sensitivity to detect input genomes) that they are measuring transcription of the provirus, and do not know if the newly-synthesized viral genomes are being packaged and excreted or not?

  32. 32
    Borne says:

    Gerry: “Perhaps there are some presumptions making their way into the conclusions here?”
    In Darwinism presumption is the ubiquitous, fundamental flaw. It’s always surprising to see those underlying presumptions ignored, nicely swept under the carpet of double talk.

    It is easy to make an assumption and build a complex argument on it with logic that, while correct in itself, leads to erroneous conclusions.

    If no one notices the base assumption that itself lacks proof, the argument will seem right all the way from point B to Z (A being the assumption that is usually not even presented as the starting point).

    Something like building an equation that assumes that x = 3 when in fact x = 5. The equation may give a logically coherent answer but not a correct one.

  33. 33
    ungtss says:

    Interesting hypothesis, but it seems that a major inconsistency exists. If the goal of the panspermia is to seed life that then develops, survives, and thrives, then why use individual organisms that possess consciousness, and die, fading to permanent black? In other words, the micro level is totally out of sync with the macro. The only consciousness that presumably exists is a series of concurrent and sequential fragments. Forget the overall fire, we simply have a collection of sparks, each dying out.

    Seems rather cruel and pointless, does it not?

    Depends on the designer’s purpose and values. Is it cruel and pointless to breed dogs, knowing that they will “live and ultimately fade to black,” probably after experiencing some discomfort and probably fear? Maybe. But maybe the designer isn’t as concerned with our feelings as we are. Maybe he/she/it is more concerned with something else, we know not what. Maybe we were created as servants. Maybe we were created as an experiment. Maybe we were created for aesthetic purposes (the same reason a man plants a garden). I wouldn’t say the designer was cruel to give me 70 good years that end in nothing. Those were 70 good years, after all.

    Besides, it does not fit with empirical evidence. Near Death Experiences, with vast and mounting evidence (too much to discuss here) that they are not simply a psychological phenomenon, point to the permanent consciousness of individuals. So, we are back to the permanency of the individual and the temporal nature of the cosmos, the very point at which human belief started!!!

    references to this empirical evidence?

  34. 34
    Peter Pike says:

    RichardFry (#14) said:

    I take it you disagree with the drubbing Dawkins has received for suggesting the same in “Expelled”?

    Actually, there is a difference between Dawkins’ view and DaveScot’s view. Namely, before stating he could accept panspermia, Dawkins first said that Intelligent Design was complete nonsense. In other words, he says “There can’t be an Intelligent Designer at all. But maybe aliens could have done it after all.”

    When I saw “Expelled” in an advanced screening, everyone laughed at that comment because it showed Dawkins’ hypocrisy perfectly.

    While I don’t agree with DaveScot here, he’s not being inconsistent. He’s said that if aliens did it, that WOULD be Intelligent Design. Thus, he would not deserve a “drubbing” like Dawkins does for his [Dawkins] comments.

  35. 35
    DaveScot says:

    William

    Nothing in prehistoric biology is mathematically rigorous. In any case modern biology, the study of living tissue, is the important branch of biology from which all practical benefit flows. Chemistry and physics are mathematically rigorous natural sciences – historical biology is not.

  36. 36
    ungtss says:

    Chemistry and physics are mathematically rigorous natural sciences – historical biology is not.

    One way to make this discussion more mathematically rigorous would be to discuss exactly how many shared ERVs there are, the locations in which they are found, the degree of alteration of the ERVs, and the probability of the ERVs spreading throughout an entire population without any identifiable survival value, under the pressures of genetic drift.
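
    One of the numbers being asked for here is standard population genetics: a new, selectively neutral insertion present as a single copy fixes with probability about 1/(2N) and is otherwise lost to drift. The Python sketch below checks that with a toy Wright-Fisher simulation; the population size and replicate count are arbitrary illustrative choices, not estimates for any real primate population.

    ```python
    # Toy Wright-Fisher model: fate of a single new, selectively neutral insertion
    # (one copy among 2N chromosomes) under genetic drift alone. Theory says it
    # fixes with probability 1/(2N). All parameters are illustrative assumptions.
    import random

    def new_neutral_allele_fixes(n_diploid: int, rng: random.Random) -> bool:
        """Simulate one allele from a single starting copy until fixation or loss."""
        two_n = 2 * n_diploid
        copies = 1
        while 0 < copies < two_n:
            freq = copies / two_n
            # Each of the next generation's 2N chromosomes carries the allele
            # with probability equal to its current frequency.
            copies = sum(rng.random() < freq for _ in range(two_n))
        return copies == two_n

    def estimate_fixation_probability(n_diploid: int, replicates: int, seed: int = 1) -> float:
        rng = random.Random(seed)
        fixed = sum(new_neutral_allele_fixes(n_diploid, rng) for _ in range(replicates))
        return fixed / replicates

    if __name__ == "__main__":
        N = 50
        estimate = estimate_fixation_probability(N, replicates=20_000)
        print(f"simulated fixation probability: {estimate:.4f}")
        print(f"theoretical 1/(2N):             {1 / (2 * N):.4f}")
    ```

    The point for the thread is only that neutral fixation is individually a long shot and slow (on the order of 4N generations when it does happen), but not impossible; how many of the thousands of human ERVs could plausibly have fixed this way is exactly the kind of question the requested rigour would have to answer.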

  37. 37
    DaveScot says:

    Ekstasis

    If the goal of the panspermia is to seed life that then develops, survives, and thrives, then why use individual organisms that possess consciousness, and die, fading to permanent black?

    Do we know for a fact that individual consciousness fades to black, never to return? How did *your* consciousness arrive in the first place, and since it arrived once, is it unreasonable to think that it might happen again? I think this is a question for philosophy, not science.

  38. 38
    ungtss says:

    You could also make it mathematically rigorous by comparing the number of endogenous retroviruses in the genome (6% for humans as I understand it) to the number we share with other species. Say we have 30k retroviruses in our genome and only share 7 with chimps. The natural conclusion is that we acquired 29,993 of them AFTER our split with the chimps, and only 7 before. But given the fact that most of our evolutionary history is alleged to have been SHARED with the apes, we would expect this to be the reverse — most of our ERVs should be shared, and only a few different.

    This appears to be the case. 6% of our genome is ERV, but only a very few are shared with the apes. More specific numbers would certainly make this more mathematically rigorous. Anybody have better sources for numbers on this stuff?

  39. 39
    ungtss says:

    Look at HERV-k:

    “Ten full-length HERV-K proviruses were cloned from the human genome. Using provirus-specific probes, eight of the ten were found to be present in a genetically diverse set of humans but not in other extant hominoids. Intact preintegration sites for each of these eight proviruses were present in the apes. A ninth provirus was detected in the human, chimpanzee, bonobo and gorilla genomes, but not in the orang-utan genome. The tenth was found only in humans, chimpanzees and bonobos.” (http://www.ncbi.nlm.nih.gov/pubmed/10469592)

    8 out of 10 of the HERVs found were HUMAN ONLY. Only 2 were shared. This indicates either that:

    1) We have been distinct from the apes 5 times as long as we were related; or
    2) There is a 20% chance that HERV-k can also infect apes at the same location

    But one thing this DOESN’T do is support the evolutionary conclusion that most of our evolutionary history is shared with apes.

  40. 40
    DaveScot says:

    ungtss

    Until we have more fully sequenced accurate genomes from many individuals within the same species it’s impossible to say how many ERVs are fixed in the entire population and how many are not. A sample size of one individual from each primate species isn’t a lot to work with. The race for the $1000 human genome sequence is on and is probably 5-10 years away. As it gets cheaper we’ll have larger sample sizes to work with. AFAIK there are thousands of ERVs in both humans and chimps and only a very small fraction are at identical insertion points. Other classes of markers similar to ERVs are pseudogenes and transposons (thousands of those too) with at least some small fraction sharing identical cross-species insertion points.

  41. 41
    ungtss says:

    DaveScot:

    Thanks for the response. I am fascinated with this stuff and greatly appreciate the opportunity to bat ideas around.

    My thought is that evolution would predict that the overwhelming majority of ERVs would be shared by all humans and apes, commensurate with the alleged “evolutionary distance” between them. The facts (at least as we have them today) appear to lean strongly against this.

    What do you think?

    I also found this interesting: only alpha, beta, gamma, epsilon- and spumaretroviruses have been found as ERVs. Deltaviruses and lentiviruses have not (http://www.retrovirology.com/content/2/1/50). HIV is a lentivirus (http://en.wikipedia.org/wiki/HIV). Therefore, it would be unprecedented for a lentivirus like HIV to actually make it into the genome. Of course, this study didn’t actually show HIV making it into the genome.

  42. 42
    DaveScot says:

    ungtss

    My thought is that evolution would predict that the overwhelming majority of ERVs would be shared by all humans and apes, commensurate with the alleged “evolutionary distance” between them. The facts (at least as we have them today) appear to lean strongly against this.

    We really need to know how many are fixed in the human gene pool and how many are fixed in the gene pools of other primate species. Fixation for things with no selection value is a low odds event. For that matter fixation of stuff with a positive selection value is a low odds event too. So you’d really expect most of the ERVs are not fixed and are dwindling in frequency in their respective gene pools. Even just a few of them fixed at identical insertion points in multiple species is strong evidence of a shared ancestor. Similar observations with pseudogenes and transposable elements make the case even stronger. The vast similarity in function and sequence of active coding genes adds considerably more weight to the evidence for common descent. And that’s just the molecular evidence. The fossil record generally agrees with the molecular evidence and so too do anatomical similarities in both living and extinct species.

    The so-called “overwhelming evidence” of evolution is in fact restricted to evidence of common ancestry. If we use a very broad definition of “evolution” – common descent with modification – there’s compelling evidence that common descent is true and modifications did indeed occur. What’s missing is the root cause of the modifications. Ascribing all the modifications to chance & necessity is hardly more supportable than claiming it was one of the Gods in various and sundry revealed religions. Compare “God of the Gaps” to “Darwin of the Gaps”. There’s not a dime’s worth of difference between the two of them.

    But rather than endlessly argue about whose speculative dogma, if any, is right about the diversity of life I prefer to focus on just one bit of molecular machinery common to all life – DNA and ribosomes. If someone can demonstrate how that code driven machinery came about without intelligent agency I’ll concede that all subsequent evolution is possible and plausible without intelligent agency. But that’s just me. I’ll follow the evidence whichever way it leads.

  43. 43

    DaveScot wrote (in #42):

    “The so-called “overwhelming evidence” of evolution is in fact restricted to evidence of common ancestry. If we use a very broad definition of “evolution” – common descent with modification – there’s compelling evidence that common descent is true and modifications did indeed occur.”

    Hear, hear! This is precisely the position that the majority of the naturalists of Darwin’s generation adopted within about a decade of the publication of the Origin of Species, and remains so today.

    And you are correct that most of the controversy over evolution within the scientific community has centered on the mechanism(s) by which “descent with modification” has taken place. This controversy continues today, with partisans for neutral molecular evolution squaring off against “pan-adaptationists”, against “neo-Lamarckians”, etc., etc.

    And so, I agree once again: the real question is not “has evolution (defined as descent with modification from common ancestors) occurred?”, but rather how has it occurred, and how do we know? And once again, I would like to stress that the answer to this question is not natural selection, if one considers it to be a mechanism for generating new phenotypes. It isn’t; it only preserves a small fraction of all of the possible phenotypes that the “engines of variation” can produce.

    It is these “engines of variation” that are the most likely place to find the source of the variations that are so amply recorded in the empirical evidence, some of which is chronicled in this thread. That will be the job of the next cohort of the fomenters of the next episode of the “evolution revolutions”.

  44. 44
    bFast says:

    Allan_MacNeill:

    And so, I agree once again: the real question is not “has evolution (defined as descent with modification from common ancestors) occurred?”, but rather how has it occurred

    Recently on ID the Future, Casey Luskin argues that the human chromosomal fusion does not necessarily support the theory of common descent. His case seems flimsy to me. He seems to suggest that, though it is evidence consistent with common descent, it could possibly be otherwise.

    I have seen no evidence that causes me to question common descent. It appears to me that the rejection of common descent by some is wishful thinking, or religious thinking, not evidence based. I wish that the ID community would truly take the position of being ID evolutionists, adopting common descent as a tenet. However, I know that such a position would alienate even more of the religious community than is already alienated. Sometimes philosophy just seems to get in the way.

  45. 45
    Bob O'H says:

    You could also make it mathematically rigorous by comparing the number of endogenous retroviruses in the genome (6% for humans as I understand it) to the number we share with other species. Say we have 30k retroviruses in our genome and only share 7 with chimps. The natural conclusion is that we acquired 29,993 of them AFTER our split with the chimps, and only 7 before. But given the fact that most of our evolutionary history is alleged to have been SHARED with the apes, we would expect this to be the reverse — most of our ERVs should be shared, and only a few different.

    And here’s where some knowledge of the area you’re pontificating about might be useful.

    Your argument assumes that the evolution of ERVs is sufficiently slow. If they evolve quickly enough, then any phylogenetic signal will be drowned out by the noise. If you look at any textbook on phylogenetics, you’ll see they discuss this w.r.t. sequence evolution. Contrary to what Dave asserts, this is fully rigorous. Here’s the rigour with full jargon:

    You have a stochastic process where all states intercommunicate. Therefore the process has a stationary distribution.

    The implication is that the historical signal will eventually be degraded. ERVs are a bit more complex, because they aren’t single bases, but as long as we condition on their extinction not having occurred, the same result can be found: it’s just a consequence of having a stochastic process.

    The questions then mainly become empirical – for example, how fast is stationarity achieved? What are the particulars of the stochastic process (and then we can return to mathematical rigour by trying to model them)? Amongst the many books I really should read is Mike Lynch’s on genome evolution, which tackles these sorts of problem. It’s not an area I’ve had to deal with much, so I’m not up to date on the literature.
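
    As a bare-bones illustration of the stationary-distribution point, the Python sketch below propagates a two-state Markov chain (arbitrary illustrative transition probabilities) forward from two different starting states. Both runs converge to the same stationary distribution, so the signature of the starting state, the "historical signal", decays geometrically.

    ```python
    # A minimal two-state Markov chain in which both states intercommunicate.
    # Starting from state 0 or state 1 makes less and less difference as steps
    # accumulate: both runs approach the same stationary distribution, so the
    # information about where the chain started decays. Transition probabilities
    # are arbitrary illustrative choices.
    P = [[0.9, 0.1],   # P[i][j] = probability of moving from state i to state j
         [0.2, 0.8]]

    def step(dist):
        """Propagate a probability distribution over the two states one step."""
        return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

    from_state_0 = [1.0, 0.0]
    from_state_1 = [0.0, 1.0]
    for t in range(1, 51):
        from_state_0 = step(from_state_0)
        from_state_1 = step(from_state_1)
        if t in (1, 5, 10, 25, 50):
            gap = abs(from_state_0[0] - from_state_1[0])
            print(f"step {t:2d}: P(state 0) = {from_state_0[0]:.4f} vs {from_state_1[0]:.4f}"
                  f"  gap {gap:.2e}")

    # The stationary distribution solves pi = pi P; for this chain it is (2/3, 1/3).
    print("stationary probability of state 0:", 2 / 3)
    ```

    ERVs are more complicated, as noted above (insertions are not single bases, and we condition on their not having been deleted), but the qualitative conclusion is the same: run the process fast enough and the phylogenetic signal is eventually lost.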

  46. 46
    RichardFry says:

    DaveScot:

    Making matters worse is that almost all modern phyla and all extinct phyla appeared abruptly over a span of 5-10 million years in the Cambrian period some 500 mya.

    Again, 5-10 million years is not abrupt. Are you saying that one of the things that can be inferred about the designer is that it takes 5 to 10 million years per design? If so, it seems it works on similar timescales to standard evolution!

  47. 47
    nullasalus says:

    RichardFry,

    “Again, 5-10 million years is not abrupt.”

    That’s contextual, isn’t it? If development was proceeding over a period of time measured in the hundreds of millions of years, and then in 5-10m (or even longer) we see tremendous development, that seems abrupt to say the least.

    Besides, ‘standard evolution’? The whole point of the Cambrian, even among people who happily accept that evolution can accomplish a tremendous number of things (like myself), is that the established trend was apparently bucked, in a big way.

    Unless ‘standard timescales’ are ‘whatever timescale it takes for evolution to work, be it hundreds of millions of years, or tens, or less.’

  48. 48
    gpuccio says:

    bFast (#44):

    Believe me, I have absolutely no religious reason to reject common descent. My religious faith is perfectly compatible with it. Indeed, I do provisionally believe in common descent. And yet, I believe that the issue should remain open to discussion because the evidence, though certainly substantial, is not really conclusive. And above all, admitting common descent does not necessarily mean admitting “universal” common descent.

    There is at least one major step which can in no way be explained by “descent with modifications”, and that’s OOL. Allen MacNeill may choose not to consider it, but I think that all of us in ID are, justly, very interested in it.

    There is no reason, logical or empirical, to believe that OOL is not a fundamental issue for understanding subsequent evolution. After all, the principles, forces or whatever which designed the first forms of life may well have been active in their subsequent evolution, and some of the modalities, though not necessarily all of them, could be in common. And OOL has nothing to do with common descent.

    Moreover, the information issue is similar in both OOL and successive evolution, even if I believe it is a little bit tougher in OOL.

    Therefore, common descent is certainly the most reasonable hypothesis, at present, about one aspect of the modalities of biological information implementation: in other words, we can say that probably the new information necessary for macroevolution may well have been implemented “over” existing forms of life. There is certainly evidence for that.

    But the issue, and its unknown details, should remain open.

    Regarding Allen MacNeill’s “engines of variation”, I can only restate what I have already said many times: they are either “variations of random variations”, adding nothing to RM + NS, or obscure references to unknown organizing principles, much more metaphysical than the hypothesis of a designer. It is significant, however, that Allen MacNeill, with all his openmindedness (which I certainly am more than willing to recognize), still does not feel like including, even hypothetically, the action of a designer among his “engines of variation”. In the end, the real ideological problem is always there: the most likely explanation is not even considered.

  49. 49
    Ekstasis says:

    Allen MacNeil says “And once again, I would like to stress that the answer to this question is not natural selection, if one considers it to be a mechanism for generating new phenotypes. It isn’t; it only preserves a small fraction of all of the possible phenotypes that the “engines of variation” can produce. It is these “engines of variation” ….”

    The engines of variation are out there and incredibly powerful, got it. However, it seems the engines of variation were never the weak link, and we still have a problem.

    Picture a corporation. OK, the R&D Department (engines of variation) is extremely prolific, let’s take that as a given. But also a given is that they are extremely independent, and work undirected, so they create all sorts of conceivable new product concepts. Now, every new concept, good or bad, must be deployed and tested in the marketplace, where natural selection takes place (customer purchases).

    So, what is the choke point (the critical path that takes the greatest amount of time)? It is not the R&D process that you seem to focus on with such great zeal! It is the assembly lines. An assembly line must be established for every new concept/variation. And then, depending on customer purchases (survival and reproduction), every assembly line must either be expanded or closed down. This is a logistical nightmare, to say the least!! Re: Haldane and others.

    Further complicating things, new concepts coming out of R&D cannot explain the products currently on the market. Rather, large groupings of assembly lines must somehow be integrated together, in perfect order, to explain the immense complexity of today’s products. It’s the irreducible complexity thing all over again.

    All this, and we are still focused on the brilliant but misguided minds/brains in R&D. Hmmm. Is this any way to run a business, or your local biosphere for that matter?

  50. 50
    jerry says:

    RichardFry,

    5-10 million years is the max. The actual time span could have been much less but we have no way of measuring it to that detail. If there are several more Cambrian finds then the numbers could change. I am not sure how much you know about the Cambrian Explosion but there was a time period for which very little was found and then a relatively short time later all the modern phyla of the world appeared. And there was zero evidence for gradual transitions or any predecessors.

    What has happened in any 5-10 million year period? Usually not much. What has happened in the last 10 million years? Humans but what else? You may be able to point out a few interesting changes but most of the present species have not changed a lot. For example birds are still birds after 140 million years. There are lots of varieties but they are still birds.

  51. 51
    peter borger says:

    bFast,

    why is everybody verifying the evolutionary hypotheses? It’s not interesting, and there is plenty of opportunity for falsification nowadays. That’s how the scientific method is supposed to work. What counts is the details.

    I looked into the details of the human chromosome 2 fusion; It is only superficially in accord with evolution. You can find the details here:

    http://www.volkskrantblog.nl/bericht/150215

    There is too much talking and too little science.

  52. 52
    DeepDesign says:

    What does this finding do to ID?

    http://www.hindu.com/thehindu/.....101021.htm

  53. 53
    DeepDesign says:

    Perhaps the designer introduced new information so that the lungless frog could breathe through its skin.

  54. 54
    DaveScot says:

    Allen

    This controversy continues today, with partisans for neutral molecular evolution squaring off against “pan-adaptationists” against “neo-lamarkians”, etc. etc. etc.

    Of course it continues. That’s because all the partisans are wrong. There’s an elephant in the room.

  55. 55
    DaveScot says:

    Bob OH

    re; rigor

    The problem with ToE is that it is ultimately based on an unpredictable mechanism. If you can’t predict you can’t be rigorous.

    What does ToE predict is the next step in human evolution? It predicts nothing. It can’t. Anything might happen then again maybe nothing will happen. ToE covers all possible contingencies after the fact but never before the fact.

    Compare this to a rigorous science like astronomy. It not only tells you precisely where all the planets were in the past but precisely where they will be in the future. That’s rigor.

  56. 56
    ungtss says:

    Dave Scot:

    We really need to know how many are fixed in the human gene pool and how many are fixed in the gene pool of other primate species. Fixation for things with no selection value is a low odds event. For that matter fixation of stuff with a positive selection value is a low odds event too. So you’d really expect most of the ERVs are not fixed and are dwindling in frequency in their respective gene pools.

    I agree — the key is to identify fixed ERVs, data we don’t yet have.

    Even just a few of them that are fixed at identical insertion points in multiple species is strong evidence of a shared ancestor.

    You don’t think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn’t imply anything to you?

    Similar observations with pseudogenes and transposable elements make the case even stronger. The vast similarity in function and sequence of active coding genes adds considerably more weight to the evidence for common descent.

    Personally I don’t buy that inference. If you know any computer programmers, ask them if there are any “redundant, disabled, inefficient subroutines” analogous to pseudogenes in today’s highly complex software. The fact is, when programmers write programs, they don’t start from scratch every time — they modify previous designs, and oftentimes don’t bother to delete sections of code that no longer have any function. When they grab an “object” to plug into their program, the “object” often contains functionality they don’t need — but they stick it in anyway and just take quick steps to disable the unnecessary code. You’ll often find these redundant segments in identical locations in the code in wildly diverse software packages, because the same subroutines are pulled out of libraries. It’s called “code bloat.”

    In fact, I’d be surprised if the designer of ours DIDN’T use this approach to genetic engineering. Reinventing the wheel is a waste of time.

    Consequently, since the same redundant, disabled code in the same locations can easily be explained by both systems, I don’t think shared pseudogenes provide meaningful evidence of common ancestry. Shared ERVs COULD provide such evidence … but it appears without a study of the FIXED ERVs, we’re kinda out of luck. At least for now.

    And that’s just the molecular evidence. The fossil record generally agrees with the molecular evidence and so too do anatomical similarities in both living and extinct species.

    In a very similar way to the above, I don’t think that anatomical similarities imply common descent. All cars have certain anatomical similarities — tires, steering wheels, seats, engines, etc. They have these anatomical similarities not because they are related, but because the designs work, and the designers use what works.

    If you were going to populate a planet with life, wouldn’t some of your lifeforms share anatomical similarities?

    Compare “God of the Gaps” to “Darwin of the Gaps”. There’s not a dime’s worth of difference between the two of them.

    I agree with that wholeheartedly.

    If someone can demonstrate how that code driven machinery came about without intelligent agency I’ll concede that all subsequent evolution is possible and plausible without intelligent agency. But that’s just me. I’ll follow the evidence whichever way it leads.

    I’m with you, man.

  57. 57
    ck1 says:

    ~98,000 ERVs have been identified in the sequenced human genome. These are grouped into ~50 different families.

    This study might be of some interest:

    http://www.pubmedcentral.nih.g.....d=17581995

    This study looked at the HERV-K(HML-2) family and did an in-depth analysis of the proviruses integrated at 99 ERV sites:

    25/99 of the ERVs were shared by humans and chimps and were thus the oldest age class;
    66/99 were found in humans but not chimps (intermediate age);
    8/99 were found in some but not all humans and in no chimps (newest ERVs).

    So in this small set of ERVs, ~25% of these integrations were also found in chimps.

    Also, remember that retroviruses have RNA genomes. Following infection, a DNA copy is generated and this copy must integrate into the host genome as part of the virus replicative cycle. That integrated copy becomes a permanent part of the host cell genome.

  58. 58
    ungtss says:

    Your argument assumes that the evolution of ERVs is sufficiently slow. If they evolve quickly enough, then any phylogenetic signal will be drowned out by the noise. If you look at any textbook on phylogenetics, you’ll see they discuss this w.r.t. sequence evolution. Contrary to what Dave asserts, this is fully rigourous. Here’s the rigour with full jargon:

    You have a stochastic process where all states intercommunicate. Therefore the process has a stationary distribution.

    The implication is that the historical signal will eventually be degraded. ERVs are a bit more complex, because they aren’t single bases, but as long as we condition on their extinction not having occurred, the same result can be found: it’s just a consequence of having a stochastic process.

    The questions then mainly become empirical – for example, how fast is stationarity achieved? What are the particulars of the stochastic process (and then we can return to mathematical rigour by trying to model them)? Amongst the many books I really should read is Mike Lynch’s on genome evolution, which tackles these sorts of problem. It’s not an area I’ve had to deal with much, so I’m not up to date on the literature.

    If I’m understanding you correctly (and let me know if I’m not), you’re arguing that we’d expect fewer older (and thus shared) ERVs, because those sequences mutate (and thus become unrecognizable) over time. (In my experience, the use of “full jargon” serves only to obscure meaning).

    That is another way this could be made mathematically rigorous.

    1) Compare the degree to which ERVs have differentiated between humans and chimps to the degree to which the code as a whole has differentiated — if the ERVs have changed significantly less, they either provide some survival advantage in their current configuration, or they are younger than the branching of humans + apes (a toy sketch of this comparison appears below).

    2) Compare the proportion of fixed shared ERVs to fixed unshared ERVs, compare that to the background rate of mutation, and ask if the difference can be explained by background mutation in the allotted time, or if there are far fewer fixed shared ERVs than would be expected.
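
    A toy version of comparison (1), using invented divergence figures rather than real genomic data:

    # Hypothetical per-site divergence figures (invented, not real data).
    background_divergence = 0.012   # assumed genome-wide human-chimp divergence
    shared_erv_divergence = 0.013   # assumed divergence measured within shared ERVs

    ratio = shared_erv_divergence / background_divergence
    print(f"ERV / background divergence ratio = {ratio:.2f}")

    # A ratio near 1 means the shared ERVs have drifted at roughly the neutral
    # background rate, as expected if they were inserted before the split and have
    # simply been carried along; a ratio well below 1 would suggest conservation
    # (function) or a more recent origin, as described in point 1 above.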

  59. 59
    Allen MacNeill says:

    gpuccio wrote (in #48):

    “Allen MacNeill, with all his openmindedness (which I certainly am more than willing to recognize), still does not feel like including, even hypothetically, the action of a designer among his “engines of variation”.

    All of the mechanisms listed among the “engines of variation” have been discovered as the result of a century and a half of painstaking empirical research, conducted in the field and in laboratories all over the world. The published literature comprising this research enterprise encompasses something on the order of 2 million volumes of various research journals, quarterly reviews, edited anthologies, and original books, all of them devoted to reporting the materials and methods, results, and implications of those results for empirically testable hypotheses.

    By contrast, there is currently not one journal publishing empirical results, obtained from field and laboratory research, that support the hypothesis that the mechanisms producing variation in nature include something that could reasonably be considered to be foresight. Until such research starts being done, and until it starts getting published, and until it is subjected to the same rigorous and highly skeptical scrutiny that the other mechanisms of variation have been subjected to, it simply does not merit inclusion in the list.

    Believe me, if such research results start getting published, not only will I include them in the list, but the researchers involved will almost instantly become as famous as Watson and Crick. But not until then.

  60. 60
    DaveScot says:

    DeepDesign

    re; lungless frog

    Random variation and natural selection is really good at culling things that aren’t needed. It really sucks at creating things that are needed.

    All frogs can breathe through the skin. Given enough skin they don’t need a lung. If they don’t need a lung for a long enough period of time RV+NS will get rid of it.

    What else is evolution good at that helps explain this? Plasticity in scale. The most familiar example even has the latin for “familiar” in the name – canine familiarus. In an evolutionary eyeblink of time plasticity in scaling produced dogs with various combinations of different scale in body parts – snout width and length, head shape, ear size, length of tail, short legs on long trunks, long legs on short trunks, and normal adult weights ranging from 2 pounds to 200 pounds. The key in all this is that there’s nothing new, just bigger or smaller versions of things that were already there.

    The breathable skin of the frog was already there. Making more of it isn’t much of a challenge for evolution. Just modify the size of body parts to optimize skin surface area to body mass and lungs become less and less necessary as that ratio grows. Lungs can shrink or grow in size like any other scaleable body part. If they’re not needed at all they can keep on shrinking to nothing.

    So the most likely explanation for this is RV+NS changing the relative scale of body parts that we know are plastic.

    It almost seems like evolution was thinking ahead of itself when it invented plasticity. But thinking ahead is something that RV+NS can’t do. RV+NS is reactive. Planning ahead is proactive.

  61. 61
    Allen MacNeill says:

    Even chunks of old code in a rewritten computer program once had some function in the older versions of that program. In other words, they were once “adaptive”, to use terminology that links such information to analogous information in genomes.

    However, the kinds of evidence discussed in this thread involve chunks of code that are not adaptive, at least not to the organisms into whose genomes they have been inserted. And yes, this could be interpreted as evidence that an “intelligent coder” could have inserted such code into otherwise adaptive genomes, for His own nefarious purposes.

    But such codes are, of course, exactly what we are discussing here: they are viruses (and/or “worms”, in that some code for their own reproduction, independently of the reproduction of their hosts). That is, they do not benefit their hosts (except by extremely rare accident), and are therefore not evidence of any intent on the part of the “intelligent coder” to promote anything except the survival and reproduction of His parasitic (and usually disruptive, and sometimes fatal) viral codes.

    The only conclusion that one can draw from this hypothesis would be that the “intelligent coder” is a malicious entity that cares not a whit for the organisms whose genomes it meddles with, but who has an overweening desire for His handiwork to be virtually indistinguishable from the operation of purely non-directed “natural” processes.

  62. 62
    ungtss says:

    Allen:

    However, the kinds of evidence discussed in this thread involve chunks of code that are not adaptive, at least not to the organisms into whose genomes they have been inserted. And yes, this could be interpreted as evidence that an “intelligent coder” could have inserted such code into otherwise adaptive genomes, for His own nefarious purposes.

    That was the point I was trying to make. I think that pseudogenes and ERVs need to be analyzed differently in the context of common descent. That’s why I wrote:

    “Consequently, since the same redundant, disabled code in the same locations can easily be explained by both systems, I don’t think shared pseudogenes provide meaningful evidence of common ancestry. Shared ERVs COULD provide such evidence … but it appears without a study of the FIXED ERVs, we’re kinda out of luck. At least for now.”

    Pseudogenes could be understood and analyzed as code bloat. ERVs cannot. However, ERVs can be seen as a mechanism of genetic engineering themselves (as they appear to be useful), and (with enough study) could provide substantial evidence related to the common descent controversy.

    The only conclusion that one can draw from this hypothesis would be that the “intelligent coder” is a malicious entity that cares not a whit for the organisms whose genomes it meddles with, but who has an overweening desire for His handiwork to be virtually indistinguishable from the operation of purely non-directed “natural” processes.

    Not if you differentiate between ERVs and pseudogenes.

  63. 63
    Allen MacNeill says:

    DaveScot wrote (in #60):

    “It almost seems like evolution was thinking ahead of itself when it invented plasticity.”

    This is the point of Mary Jane West-Eberhard’s recent book, Developmental Plasticity and Evolution, in which she discusses precisely this point. That is, developmental plasticity (for which we now have mountains of new evidence) makes possible exactly the kinds of rapid changes in allometry you describe for domestic dogs (BTW, the correct scientific name for domestic dogs is Canis familiaris L., not “canine familiarus”).

    Developmental plasticity is a hallmark of the development of virtually all multicellular eukaryotes. As West-Eberhard points out, such plasticity is indeed an adaptive mechanism in eukaryotes, allowing for relatively rapid changes in phenotype that would be impossible via the simple genetic mechanisms upon which the “modern evolutionary synthesis” (i.e. “neo-darwinism”) was based.

    Most of this plasticity is based on the homeotic gene regulatory mechanisms that are a hallmark of eukaryotes (especially animals), and the same kinds of indirect historical evidence that DaveScot cites for accepting common descent also apply to the conclusion that homeotic gene regulation began at the very beginning of the evolution of multicellular eukaryotes. The conclusion, then, is exactly what DaveScot suggests:

    Developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record.

  64. 64
    Charlie says:

    Allen MacNeill says:

    By contrast, there is currently not one journal in which empirical results obtained from field and laboratory research supporting the hypothesis that the mechanisms that produce variation in nature include something that could reasonably be considered to be foresight are being published.

    as well as:

    DS:“It almost seems like evolution was thinking ahead of itself when it invented plasticity.”

    AM: This is the point of Mary Jane West-Eberhard’s recent book, Developmental Plasticity and Evolution, in which she discusses precisely this point.

    Developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record.

    Sounds like foresight isn’t so unreasonable.

  65. 65
    Charlie says:

    So the section between “AM:” and the final line is supposed to be quotes…

  66. 66
    DaveScot says:

    ungtss

    the key is to identify fixed ERVs, data we don’t yet have.

    Yes. So here’s a prediction based on common descent: ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species, barring any wholesale deletions of the useless ERV code, which should still leave evidence behind that something happened at the integration loci, as it’s unlikely the foreign code would be deleted without taking some original code with it at either or both ends.

  67. 67
    ungtss says:

    Developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record.

    I’m not following your reasoning. Developmental plasticity is the ability of genetically identical individuals to develop different physical features as a function of their environment, or of genetic switches and regulators. How does that inborn capacity to vary with environment preadapt animals for future GENETIC changes that are the real substance of the common descent controversy?

  68. 68
    ungtss says:

    DaveScot:

    ungtss

    the key is to identify fixed ERVs, data we don’t yet have.

    Yes. So here’s a prediction based on common descent: ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species, barring any wholesale deletions of the useless ERV code, which should still leave evidence behind that something happened at the integration loci, as it’s unlikely the foreign code would be deleted without taking some original code with it at either or both ends.

    Agreed. But that prediction is not exclusive to common descent. Those same facts are consistent with a discontinuous scenario as well. Especially under the following circumstances:

    1) It turns out that some of the ERVs we are currently using to support this argument are not fixed in the entire human population

    2) It turns out that a disproportionally large number of the fixed ERVs are not shared, or

    3) It turns out that some of the code-segments we think are ERVs are not actually ERVs, but only share some ERV characteristics (particularly in the case of ERVs indispensable to organismal function).

  69. 69
    ungtss says:

    DaveScot:

    If they don’t need a lung for a long enough period of time RV+NS will get rid of it.

    Only if there is some survival advantage to NOT having the lung. Seems to me that given the obvious advantages of lungs (like being able to live in more diverse and less oxygen-rich rivers), and the rarity of this lungless frog, the most reasonable scenario is not that RV+NS REMOVED the lung, but that the lung was removed by an information DESTROYING mutation, and the damaged, mutant, inferior frog was able to limp along in a small ecological niche due to the oxygen content of the water. Change the oxygen content of the water, and the frog goes extinct.

    This isn’t evolution; it’s not an increase in functionality or flexibility; this is a line of unfortunate mutants that were only able to survive one place.

  70. 70
    gpuccio says:

    Allen MacNeill (#59):

    I won’t analyze again the epistemological ambiguities (IMO) of your “list of engines of variation”, because I have already done that in detail in a previous post, and you have already given your answers. So, no need to do everything a second time.

    I will just mention that, again IMO, the essence of this new answer of yours derives from the same epistemological confusion between mechanisms of causation of variation and modalities of variation. I see that problem in many of the things you say.

    But I am afraid we have to agree to disagree on that. I respect your position, as I hope you will mine.

  71. 71
    ck1 says:

    Yes. So here’s a prediction based on common descent: ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species, barring any wholesale deletions of the useless ERV code, which should still leave evidence behind that something happened at the integration loci, as it’s unlikely the foreign code would be deleted without taking some original code with it at either or both ends.

    Estimates are that 85% of ERVs are deleted, and deletion is most commonly accomplished through a process of homologous recombination involving the viral LTR sequences at either end of each ERV. This deletion leaves behind a solitary LTR at the integration site, so the site is still identifiable.

    The age of a specific ERV can be determined from the accumulation of mutations. What is done is to compare the sequence divergence of the two flanking LTRs in a single ERV provirus. The viral RNA has only a single LTR sequence; the two copies in the DNA are generated at the time of integration, so examining the independent mutations that have accumulated in the two LTRs gives a measure of the age of the integration.
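
    A back-of-the-envelope version of that dating calculation (the divergence and neutral rate below are assumed round numbers, not values from any particular study):

    # The two LTRs are identical at integration and then accumulate mutations
    # independently, so their divergence grows at roughly twice the neutral rate.
    ltr_divergence = 0.03    # assumed fraction of sites differing between the two LTRs
    neutral_rate = 2.5e-9    # assumed substitutions per site per year

    age_years = ltr_divergence / (2 * neutral_rate)
    print(f"Estimated integration age: ~{age_years / 1e6:.0f} million years")
    # ~6 million years with these made-up numbers.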

    I work on viruses.

  72. 72
    DaveScot says:

    Allen

    re; the hypothetical genetic code using 2 bases per codon

    You said that this is a much simpler code. Mathematically it isn’t. The number system is still base 4 with “digits” A C T G. Biotic messages are still encoded in various lengths of digits. The only thing that changed is the coding gene translation table. Adding extra bits to the reference number so you can reference into a larger 1-dimensional array is trivial from an information-processing POV.

    But let’s run with it and see where it leads. I’m interested in how you transition from a 2-digit genetic code to a 3-digit code without killing the intermediary.

    We start out with a ribosome translating 2-digit codes. Encoded in the DNA are the specifications for the parts that make up the ribosome largely (if not entirely) in strings that never get translated to protein but rather code for rRNA components.

    So we have a two digit ribosome happily building proteins encoded in DNA at 2 digits per monomer in linear sequence.

    Let’s presume for the sake of argument a single simple random variation in the DNA coding for ribosomal RNA can cause the ribosome to use 3 digits per monomer.

    Holy frameshifts, Batman! All our protein coding genes are turned into instant nonsense. The rRNA mutation alone is fatal.

    To make this scenario work requires coordinated change. Simultaneous with the rRNA mutation we’d have to reorganize all our linear coding genes simultaneously so that they are using 3 digits per monomer instead of 2. What’re the odds?
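
    A toy illustration of that frameshift problem, using an invented doublet-to-amino-acid table (the assignments are made up; only the reading-frame arithmetic matters):

    # Invented doublet code: 2 DNA "digits" per monomer.
    doublet_code = {"AT": "M", "GC": "A", "TT": "F", "GA": "E", "CC": "P", "TA": "*"}

    message = "ATGCTTGACCTA"  # reads cleanly as doublets: AT GC TT GA CC TA

    doublets = [message[i:i + 2] for i in range(0, len(message), 2)]
    print("2-digit ribosome:", [doublet_code.get(c, "?") for c in doublets])
    # -> ['M', 'A', 'F', 'E', 'P', '*']  (a sensible product)

    triplets = [message[i:i + 3] for i in range(0, len(message), 3)]
    print("3-digit ribosome:", triplets)
    # -> ['ATG', 'CTT', 'GAC', 'CTA'] -- none of these exist in the old table, so
    # every gene written for the 2-digit code becomes instant nonsense.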

    My challenge for you, Allen, is to come up with a plausible way to make the transition from pairs to triplets in the genetic code.

    There is a very simple explanation that explains the observations which hint at an original 16 codons. The genetic code never used codon pairs. It always used triplets. The codon translation table always had 64 entries in it, codons were always triplets, but the redundancy was slightly higher. Each of the 16 original monomers was quadruply redundant in the code. They’re not much less than quadruply redundant today. A simple workable explanation. Let’s see how complicated any alternative you offer gets. I can hear Sir Occam sharpening his razor again.

  73. 73
    Allen MacNeill says:

    First of all, rRNA sequences are not translated. On the contrary, rRNA molecules form part of the three-dimensional structure of the ribosomal subunits, in the same way that amino acids form the three-dimensional structure of polypeptides and proteins. Ergo, a shift from a two-base codon to a three-base codon would have no effect on either the base sequence or the three-dimensional structure of the rRNA-containing ribosome subunits.

    Second, the number of bases that constitute a codon is absolutely crucial, rather than trivial, as you suggest. I have not proposed that there has ever been a different number of nucleotide bases than the five that we currently know (i.e. adenine, guanine, cytosine, thymine, and uracil). However, what I am proposing is that there has been a shift in the number of nucleotide bases that constitute a codon. Since there are four bases in DNA (adenine, guanine, cytosine, and thymine), then the number of possible codons in which bases are translated two-at-a-time is 4² = 16 (i.e. four squared, or sixteen). Since at least one “stop” codon is also necessary, this means that the maximum number of amino acids that can be specified by a two-base code is 15.

    Currently there are 20 amino acids that are specified by the genetic code. Ergo, a minimum codon length of three bases is necessary to specify all 20. However, as I pointed out in my earlier post, this means that there are 64 different three-base codons, since 4³ = 64 (i.e. four cubed equals 64); this was all figured out by Francis Crick, George Gamow, and the members of the RNA Tie Club nearly fifty years ago.

    But, as I also pointed out, since the current code is highly redundant, most of the twenty amino acids are coded for by either two codons, four codons, or even six codons. Only two amino acids – methionine and tryptophan – are coded for by one codon each.
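
    The arithmetic and the redundancy counts can be checked directly against the standard genetic code (reconstructed here from the standard table-1 layout, bases in TCAG order):

    from itertools import product
    from collections import Counter

    # Standard genetic code; amino acids listed in TCAG order (NCBI translation table 1).
    AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    codons = ["".join(b) for b in product("TCAG", repeat=3)]
    table = dict(zip(codons, AAS))

    print("two-base codons:", 4 ** 2)    # 16
    print("three-base codons:", 4 ** 3)  # 64

    counts = Counter(aa for aa in table.values() if aa != "*")
    print(sorted(counts.items(), key=lambda kv: kv[1]))
    # Only M (methionine, ATG) and W (tryptophan, TGG) have a single codon;
    # every other amino acid is covered by 2, 3, 4, or 6 codons.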

    Ergo, the third base in the current code is almost completely unnecessary to code for all but a very small number of the twenty amino acids. (to be continued)

  74. 74
    DaveScot says:

    You don’t think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn’t imply anything to you?

    Sure it would matter. Again, I’d want a much larger sample size of sequenced individuals within each species. With a whole lot of samples, dating via molecular clock provides more reliable data.

    Imagine we have a thousand fully sequenced human genomes from populations all over the world. At $1000 each that’s only a million bucks – which is like nothing for Craig Venter. Anthropology gives us guidance on when those populations separated from others, with less and less certainty going back in time. We should see some interesting things by making a dated phylogenetic tree of ERV distributions from that large, diverse sample size. It should agree more or less with the anthropologic data; otherwise the bone hunters are at odds with the molecular biologists, so you can’t trust either of them.
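
    For what it’s worth, a toy version of that tree-building exercise, with an invented presence/absence matrix for a handful of ERV loci (labels and data are made up purely to show the mechanics):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows = populations/species, columns = ERV loci (1 = present, 0 = absent).
    labels = ["human_africa", "human_europe", "human_asia", "chimp"]
    ervs = np.array([
        [1, 1, 1, 0, 1, 0],
        [1, 1, 1, 0, 0, 0],
        [1, 1, 1, 1, 0, 0],
        [1, 1, 0, 0, 0, 1],
    ])

    # Hamming distance on presence/absence, then average-linkage clustering.
    dist = pdist(ervs, metric="hamming")
    tree = linkage(dist, method="average")
    print(tree)
    print(dict(zip(labels, fcluster(tree, t=2, criterion="maxclust"))))
    # With real fixed-ERV data (and a molecular clock for the dates) the same
    # procedure would give the kind of dated tree described above.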

    To answer your question. I’d be mildly surprised if the fixed ERVs in humans vastly outnumbered those we have in common with chimps. One easy explanation is that chimps and humans aren’t as closely related on the phylogenetic tree as we thought. I have no problem with that. Let the paleoanthropologists duke it out with the molecular biologists. ID has no dog in that hunt.

    Comparative genomics is pretty young and is limited largely by the cost of DNA sequencing and the number crunching required to work with an exponentially growing database. A large sample of human genomes is the first order of business and that’s valuable in many and huge ways for medical research. I’m not sure what practical benefit there’d be in having a million other primate genomes instead of just a few individuals from each species. If it were me I’d carve that out of the budget and put the resources somewhere more productive.

  75. 75
    Allen MacNeill says:

    Furthermore, if one examines the table of codons, an overall pattern becomes immediately apparent: all of the hydrophobic amino acids (phenylalanine, leucine, isoleucine, valine, proline, and alanine) have either uracil or cytosine as their second base. Structurally and functionally, these amino acids can also often be substituted for each other without changing the function of the proteins of which they are a part. Ergo, it is quite possible that in the early stages in the evolution of the genetic code, it was only necessary for there to be a two-base code to specify the hydrophobic amino acids.
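
    That second-base pattern can be read straight off the standard table (same TCAG layout as in the sketch above):

    from itertools import product

    AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    table = dict(zip(("".join(b) for b in product("TCAG", repeat=3)), AAS))

    second_bases = {}
    for codon, aa in table.items():
        second_bases.setdefault(aa, set()).add(codon[1])

    for aa in "FLIVPA":  # the six hydrophobic amino acids named above
        print(aa, sorted(second_bases[aa]))
    # F, L, I and V all have T (uracil in the mRNA) in the second position,
    # and P and A have C -- i.e. the hydrophobic set is picked out by a
    # pyrimidine second base, as claimed.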

    A similar situation exists for most of the hydrophilic but uncharged amino acids (serine, threonine, tyrosine, glutamine, asparagine, cysteine, and glycine). It would be possible to code for all seven of these amino acids using a two-base code, especially if several of the amino acids can be interchanged.

    This leaves only the charged (i.e. ionizing) amino acids: aspartate and glutamate (both anions), and lysine, arginine, and histidine (all cations). These have either two or six redundant codons, which would tend to indicate that they were among the original smaller set of amino acids, back when the code consisted of only two bases (an observation that also squares with their more restricted chemistry, due to their more complex structure and charged nature).

    Next, the stop codons also can be “clumped”. Two of them – UAA and UAG – differ only in the third (i.e. “wobbly”) base. This again means that in a primitive two-base code, either one or two “stop” codons would have sufficed.

    Finally, the two unique codons – AUG for methionine and UGG for tryptophan – are on opposite sides of the lexicon. The universal “start” codon – AUG, which codes for methionine – is in the “cell” in the lexicon that otherwise includes isoleucine, probably the most “dispensable” of the twenty amino acids, given its similar structure and function to leucine.

    Ergo, it would be relatively easy to construct a genetic code in which there were only two bases per codon, which would specify all of the truly necessary amino acids. (next: “wobbly” ribosomes)

  76. 76
    Allen MacNeill says:

    Most people assume that ribosomes are nearly perfect “machines”, an impression fostered by the typical textbook diagram showing the perfectly regular little P and A binding sites in the small subunit of the ribosome.

    In reality, ribosomes are pretty “wobbly”, and binding of tRNAs at the P and A sites is often a hit-or-miss proposition. The reason this somewhat messy system still works is that there are so many ribosomes, working so incredibly fast, producing so many polypeptides and proteins that functional products generally outnumber non-functional ones.

    This situation would have been much less critical early in the evolution of cells, as they would have needed many fewer proteins and had many fewer essential structures and functions. We know this because that’s the case with the prokaryotes today; both their genomes and their ribosomes are much simpler than those of eukaryotes.

    Under such conditions, if a two-base translating ribosome encountered tRNAs in which the anticodons could be either two or three bases long, the ribosomes could translate mRNAs with two-base codons correctly often enough to produce sufficient functional proteins for the cell.

    With increasing variation in the available amino acids (which can not only form spontaneously, but interconvert relatively easily within broad structural categories, such as the hydrophobic amino acid group), the number of variant tRNAs binding “new” amino acids would also increase.

    Any cell in which the tRNAs could line up along their corresponding amino acids in polypeptides would immediately have a way to specify either two-base or three-base codons in mRNAs that could be assembled complementary to them.

    This means that such cells could simultaneously be translating two-base and three-base mRNAs using “wobbly” ribosomes, and still be making enough functional polypeptides to survive. And among these cells, the ones that could more easily use three-base codons would have access to more variable amino acids, plus the added benefit of a highly redundant code (thus minimizing the negative effects of about one third of all point mutations).

    Central to this hypothesis for the evolutionary transition from a simple two-base code to a more complex (and variable) three-base code is the idea that the original sequence of genetic specification was the reverse of what it is today. That is, the order of the amino acids in functional polypeptides specified (through stereochemistry or some other chemical means) the order of the appropriate tRNAs, which then specified (via their anticodons) the order of the complementary bases in the corresponding mRNAs (DNA, of course, wouldn’t even enter the picture until much later).

    This hypothesis immediately suggests several testable predictions:

    1) that the association of specific amino acids with specific tRNAs (and their specific anticodons) is not arbitrary (i.e. not a “frozen accident”, as some have suggested), but rather a “natural” consequence of the chemistry of the amino acids and their corresponding tRNAs;

    2) that it can be demonstrated that, under certain “natural” conditions, the sequence of anticodons in tRNA can specify the assembly of corresponding codons in mRNA; and

    3) that it can be demonstrated that, under certain “natural” conditions, ribosomes (especially those of prokaryotes) can translate either two-base or three-base codes into functional proteins (albeit with different amino acid sequences).

    Again, demonstrating all of these would not prove that this was, in fact, the way the genetic code originally evolved. However, it would constitute a “proof of concept” test for such a hypothesis. At that point, the onus would be on anyone who supported an alternative hypothesis to produce an equally convincing “proof-of-concept” demonstration of their hypothesis.

    Notice that this would not include an empirically vacuous “if it’s not the naturalistic mechanism, it has to be magic” argument.

  77. 77
    Russell says:

    DaveScot: “To answer your question. I’d be mildly surprised if the fixed ERVs in humans vastly outnumbered those we have in common with chimps. One easy explanation is that chimps and humans aren’t as closely related on the phylogenetic tree as we thought.”

    I don’t see how that is an explanation.

    Species and the populations within them vary dramatically in the number and species specificity of the retroviruses they carry or are infected with, with massive variability in the ability of those viruses to infect and be transmitted through the germline.

    For example, retroviruses (originally called “RNA tumor viruses”) were studied by so many labs simply because they were so prevalent in mice and chickens. We apes don’t have nearly as many as they do IIRC, and that was somewhat of a surprise.

    That doesn’t nullify the utility of studying chicken and rodent viruses, however, as virtually all of the transduced (or insertionally-activated) oncogenes that were discovered using viruses have been shown to be mutated in human tumors, just by different genetic mechanisms.

  78. 78
    ungtss says:

    DaveScot:

    You don’t think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn’t imply anything to you?

    Sure it would matter. Again, I’d want a much larger sample size of sequenced individuals within each species. With a whole lot of samples, dating via molecular clock provides more reliable data.

    Thanks for your well-reasoned response — I appreciate your help.

  79. 79
    Paul Giem says:

    Allen, (76)

    To quote one of your predictions regarding the original code possibly being based on two-base codons,

    1) that the association of specific amino acids with specific tRNAs (and their specific anticodons) is not arbitrary (i.e. not a “frozen accident”, as some have suggested), but rather a “natural” consequence of the chemistry of the amino acids and their corresponding tRNAs;

    Note the complete absence of the possibility that the association of specific amino acids with specific tRNA’s is neither arbitrary nor a “natural” consequence of the chemistry, but rather a designed correspondence. Correct me if I am wrong, but it does seem that this absence is characteristic of your way of thinking about the molecular basis of life.

    Furthermore, this absence appears to be deliberate. Your final statement is,

    Notice that this would not include an empirically vacuous “if it’s not the naturalistic mechanism, it has to be magic” argument.

    From this I gather that (a) you regard anything other than naturalistic mechanisms as being magic and empirically vacuous, and (b) you regard ID arguments as being in this category.

    With this frame of mind, it is difficult to see how you could in principle recognize intelligent design if you saw it. Let us suppose that the majority of the scientific community saw things your way. Let us further suppose that they put their presuppositions to use when they approved grants, edited papers, and peer-reviewed papers. Then, except for the occasional editor and collection of peer reviewers who saw things differently from you, no papers advocating ID could possibly be published. Those occasional editors could always be Sternberged and the rest of them would fall in line if they knew what was good for their careers.

    So your offer in (59) that

    Believe me, if such research results start getting published, not only will I include them in the list, but the researchers involved will almost instantly become as famous as Watson and Crick. But not until then.

    is not quite as generous and open-minded as it sounds. The hurdles that ID has to clear are really quite high, and higher than just about any other theory regarding the scientific data.

    And yet, there is a need to deal with the problem of intelligent design. On another thread I wrote (comment #64),

    As you probably know, extensive attempts to take natural variations and artificially select them for the effect of a blue rose have uniformly resulted in failure. However, recently because of some careful gene insertions and manipulations, scientists have been able to produce a rose that can reasonably be described as blue, and that has no known counterpart in nature.

    Now supposing that we are alien scientists exploring the earth after a disastrous epidemic has wiped out humankind, and enough time has passed so that the products of civilization (including records of what happened to make the roses blue) have disappeared, but the varieties of roses live on. Could we apply those 47+ kinds of phenotypic variation to the roses now (then) existing and explain how the rose became blue? Would we not be tempted to call it “lateral gene transfer” (meaning undirected lateral gene transfer)? And would we not be dead wrong? How could we possibly arrive at the correct answer to the origin of blue roses without allowing for the possibility of intelligent design?

    Furthermore, imagine an island where blue roses were planted and survived because there were few natural enemies. We might observe the roses exhibiting many variants, and identify experimentally confirmable sources of variation until we were blue in the face, and still not be able to identify the correct reason why these roses differed from the vast majority of those on the mainland.

    Part of the problem would seem to stem (ahem) from the assumption that all causes that have ever operated are now operating roughly equally to how they operated in the past. For intelligent agents, that may not be a valid assumption. Intelligent agents may very well create episodically, and they are not required to create when we want them to so that we can see how it is done.

    Would you not agree that intelligent design has already happened, and is therefore theoretically possible? If so, does it not belong with the other 47+ causes of phenotypic variation? And precisely how do you rule it out? Because you “know” that no designer (or Designer) existed back then? That’s an interesting position for one who talks about his association with Friends. Or is there some other, more empirical reason?

  80. 80
    Paul Giem says:

    Allen, (76)

    Regarding your hypothesis, you state,

    Central to this hypothesis for the evolutionary transition from a simple two-base code to a more complex (and variable) three-base code is the idea that the original sequence of genetic specification was the reverse of what it is today. That is, the order of the amino acids in functional polypeptides specified (through stereochemistry or some other chemical means) the order of the appropriate tRNAs, which then specified (via their anticodons) the order of the complementary bases in the corresponding mRNAs (DNA, of course, wouldn’t even enter the picture until much later).

    Do you have any idea how (other than with an elaborate set of decoding enzymes that would put the ribosome and tRNA’s to shame) you would do this? Is this not the same pathway that, or at least a similar pathway to that which, Dean Kenyon traveled before he finally gave up on Biochemical Predestination?

  81. 81
    jpark320 says:

    Am I missing something here, or is the selective “advantage” from being infected by SIV or HIV far outweighed by the cost?

    I have read all the comments, but has anyone mentioned that getting AIDS for some nucleotides is not a good trade-off?

  82. 82
    jpark320 says:

    Edit: I meant “haven’t” read all the comments…

    Also, there is no way this kind of method can account for Haldane’s dilemma.

  83. 83
    DeepDesign says:

    It seems like Mike Behe’s ideas on ID are probably correct.

  84. 84
    gpuccio says:

    I will not comment further on Allen MacNeill’s story about the two- versus three-nucleotide “issue”. Paul Giem has already said the essential things.

    I just would like to remark that, if we abandon any connection with the need for a causally credible explanation, imagination and creativity can bring us anywhere. And that is not intended, in any way, to be unkind to Allen, but only as a due methodological defense of a minimally scientific approach.

  85. 85
    Bob O'H says:

    The problem with ToE is that it is ultimately based on an unpredictable mechanism. If you can’t predict you can’t be rigorous.

    The claim was that ToE wasn’t mathematically rigourous. There’s several areas of mathematics devoted to unpredictable (i.e. stochastic) problems. Probability theory, stochastic processes, mathematical statistics etc. These are all as rigourous as any mathematics.

    As for mechanisms not being predictable, and hence not possible to be treated rigourously, that means that statistical mechanics isn’t rigourous either.

  86. 86
    Paul Giem says:

    Bob, (85)

    What’s this about statistical mechanics? Statistical mechanics isn’t rigorous because it’s not predictable? What are the first and second laws of thermodynamics all about? Can we not engineer various heat pumps and engines precisely because statistical mechanics is predictable?

    On the other hand, the theory of evolution predicts the presence of large amounts of “Junk DNA”, except when it doesn’t. It predicts the existence of multiple intermediate forms leading up to the Cambrian fauna (and birds, and Ediacaran fauna, and so on) except when it doesn’t. It predicts the congruence of molecular and anatomic cladograms, except when it doesn’t. It is totally plastic, nowhere near like statistical mechanics.

    If one does apply statistics to genomic processes, one gets predictions that are somewhat similar to those of Behe in The Edge of Evolution. But I doubt that this is the mechanical rigor you have in mind.
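
    For illustration only, the sort of back-of-the-envelope arithmetic at stake here (the mutation rate and population size are assumed round numbers, not figures from Behe or from anyone in this thread):

    # Rough waiting-time arithmetic for requiring two specific point mutations
    # before either one confers any benefit. All numbers are illustrative.
    mu = 1e-8         # assumed mutation rate per site per replication
    pop_size = 1e9    # assumed population size per generation

    p_double = mu * mu                           # both specific changes in one replication
    expected_generations = 1 / (p_double * pop_size)
    print(f"P(double mutation per individual) ~ {p_double:.0e}")
    print(f"Expected wait ~ {expected_generations:.0e} generations at N = {pop_size:.0e}")
    # ~1e7 generations with these numbers; how sensitive such estimates are to the
    # assumptions is exactly what the two sides dispute.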

  87. 87
    DaveScot says:

    Bob OH

    It’s rigorous not rigourous.

    Paul Giem’s response to you is essentially how I would have responded so I won’t repeat it.

  88. 88
    Bob O'H says:

    Dave – whilst you may wish to do things with rigor, those of us on this side of the Atlantic prefer our own rigour. And we invented the language. 🙂

    Paul’s response about statistical mechanics really makes my point – it is a stochastic theory, but yet it is used to make predictions (albeit not at the level of an individual atom). Hence, the argument that a stochastic process isn’t predictable doesn’t hold water.

    The second half of Paul’s response is not surprising. Look at how statistical mechanics predicts the location and velocity of an individual atom. It says the atom is over here, except when it isn’t.

    Of course, Paul is also ignorant of the literature like this:
    Hovmøller, M.S.; Munk, L.; Østergård, H., Observed and predicted changes in virulence gene frequencies at 11 loci in a local barley powdery mildew population. Phytopathology (1993) 83 , 253-260.

    Wherein they actually do predict evolutionary changes (albeit over a short timescale). The animal breeding literature provides further cases where evolutionary predictions have to be made (i.e. of the quality of individual animals). The models for this spring from Fisher’s model in 1918, and are also pretty rigourous: indeed fitting the models was the motivation for the development of REML, which is a really hoopy technique for dealing with complex statistical models.
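
    (For a concrete flavour of the breeding-prediction point: the simplest such prediction is the breeder’s equation, R = h²S; the heritability and selection differential below are made-up values.)

    # Breeder's equation: predicted response to selection R = h^2 * S, where h^2 is
    # the narrow-sense heritability and S the selection differential.
    # Both values below are invented for illustration.
    h2 = 0.35   # assumed heritability of the trait
    S = 4.0     # assumed selection differential (in units of the trait)

    R = h2 * S
    print(f"Predicted change in the population mean next generation: {R:.2f} units")
    # REML-based mixed models are what is used to estimate h^2 (and individual
    # breeding values) from messy pedigree data in the first place.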

    Some of us do apply statistics to genomic processes, and we get nothing like Behe’s results. Some of the people who do this actually have degrees in the subject, and are professors in mathematics departments. Feel free to argue that they aren’t rigourous, or even rigorous. But back up your assertions by showing the faults in their analyses please.

  89. 89
    jerry says:

    Bob O’H,

    you said

    “Some of us do apply statistics to genomic processes, and we get nothing like Behe’s results.”

    What Behe results are you referring to? I assume you mean that researchers are getting complex changes that Behe said he has never seen. Do you have any references? It would be interesting to see what they get if it is understandable to us lay people.

    The real proof of Behe’s ideas and these researchers that are using models would be in the actual genomes. Are there any analyses of genomes that back up those who are using statistical models to predict changes in genomes? Every time I see something it is always micro evolution or trivial changes.

    You mention breeding. Does anything ever get beyond micro evolution with breeding experiments?

  90. 90
    DaveScot says:

    BobOH

    I hate to have to educate you on British spelling rules but I feel compelled. In this case the spelling rule is not unique to British spelling.

    Any word in the english language follows this rule:

    If the word ends in “-our” (like honour and rigour) you must drop the u out of “our” when adding either the suffix “-ous” or “-ary”. Honour becomes honorary and rigour becomes rigorous.

    I don’t make the rules so don’t shoot the messenger.

  91. 91
    sparc says:

    Any word in the english language

    I am not a native speaker but shouldn’t it be English?

  92. 92
    Bob O'H says:

    Ooh, Dave. I decided to reach for the ultimate sanction – the OED. So I did. It says this:

    Also 5 ryger-, rygour-, rygor-; 5 regor-, rigur-, 5-6 riger-, 6 rygur-, 6-7 rigourous; 5 -is, 5-6 -us; 5 -use, 5-6 -ouse.

    1. Characterized by rigour; rigidly severe or unbending; austere, harsh, stern; extremely strict: a. Of laws, procedure, etc.

    I agree that it’s not the usual form (and less usual than I had thought!), but it’s there.

    Jerry – on Behe’s results, I was thinking of his big result in Edge of Evolution, that malaria couldn’t evolve. That’s gone around the houses enough times that it’s clear he blundered in understanding some of the figures he used.

  93. 93
    DaveScot says:

    BobOH

    I suggest you go correct the wiki page on misspellings.

    http://en.wikipedia.org/wiki/W.....pellings/R

    The talk section history is that “rigourous” was in there in the beginning, a Brit complained so it was removed from the list, and then someone quoted the spelling rule that applies, and it was added back to the list of misspelled words. Be a hero for your country and correct this outrage!

    Here’s the editor who last reverted it due to the rule:

    http://en.wikipedia.org/wiki/U.....4#spelling

    Oh hold on. Check the date on your OED. Oxford’s current position is that it’s a misspelling.

    http://www.askoxford.com/betterwriting/spelling/

    Rule: When adding certain endings, such as -ous and -ist, to words that end in -our (in this case, humour), change -our to -or before adding the ending: humorous; humorist.

  94. 94
    Paul Giem says:

    Bob, (88)

    You write,

    The second half of Paul’s response is not surprising. Look at how statistical mechanics predicts the location and velocity of an individual atom. It says the atom is over here, except when it isn’t.

    I expected better from you, Bob. The examples I used in (86) for evolution (meaning materialistic evolution) were the behaviors of entire groups, large statistical ensembles, if you will; the ancestors of various Cambrian assemblages of organisms, the presence or absence of large quantities of “Junk DNA” in the genomes of large groups of species, the convergence or lack thereof of different ways of organizing entire phyla into a tree of common ancestry. And you compare the plastic predictions of evolution on a grand scale with the plastic predictions of single atoms by statistical mechanics?

    I recognize my finiteness. You can always pull articles off of the shelf, and the probability that I have read them will be near zero, especially if they come from journals like Phytopathology which are not in any of my specialties. But if I were to cite a source to you with which I had reason to believe you were unfamiliar, I would give a brief summary of the source and why it was relevant to the question at hand. Perhaps you could do the same.

    As I recall, the discussion centered on whether the (stochastic) math more closely supported an edge of evolution close to Behe’s, or one loose enough to allow a complete materialistic evolution of all life from an original living cell. If a cited source spoke to that question and looked interesting enough, I could run the reference down and we could discuss it. In the absence of such a summary, I would consider this literature bluffing, and not worth my time to chase down the details.

    (Everyone has an edge to evolution. I have yet to run into even a strict creationist who would say that randomness has absolutely no place in the variation of living organisms. On the other hand, I have yet to run into a mechanistic evolutionist who would say that, within one generation, cockroaches can be randomly obtained from an algal culture. As the old joke goes, we’re now haggling about the price. That means that quantity is important.)

  95. 95
    jerry says:

    Bob O’H,

    “on Behe’s results, I was thinking of his big result in Edge of Evolution, that malaria couldn’t evolve. That’s gone around the houses enough times that it’s clear he blundered in understanding some of the figures he used.”

    What did he blunder on? We are under the impression that none of the reviewers ever touched him or his conclusions, except for a trivial thing here or there. Maybe you could set both Behe and us straight.

  96. 96
    Bob O'H says:

    Oh hold on. Check the date on your OED. Oxford’s current position is that it’s a misspelling.

    That was yesterday – I checked the online edition. It’s not in my shorter OED (which is older, but I don’t think the language has changed that quickly).

  97. 97
    Bob O'H says:

    And you compare the plastic predictions of evolution on a grand scale with the plastic predictions of single atoms by statistical mechanics?

    Yep. Here’s another article you might like to look at:
    Kuparinen, A., Schurr, F., Tackenberg, O., O’Hara, R.B. (2007). Air-mediated pollen flow from genetically modified to conventional crops - risk assessment with a mechanistic model. Ecological Applications 17: 431-440.
    In which my student showed that we can’t even predict the movement of a single particle (in this case a pollen grain) with any sort of precision. We can’t even predict the average movement of an ensemble of grains. She used methods from statistical mechanics to do this, so yes I can make this comparison.
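    To make the point concrete, here is a toy Monte Carlo sketch of the same idea. It is not the model from the paper; every number and functional form below is an assumption invented purely for illustration. Single grains scatter over a huge range, and the ensemble average is only as good as the assumed wind climate.

    # Toy sketch of wind-borne pollen dispersal (illustration only -- NOT the
    # Kuparinen et al. model; all parameters are invented).
    import random
    import statistics

    def dispersal_distance(mean_log_wind, release_height=1.5, settling_velocity=0.03):
        """Crude ballistic estimate: distance = wind speed x time to fall to ground.
        Wind speed is drawn from a lognormal to mimic gusty, right-skewed wind data."""
        wind_speed = random.lognormvariate(mean_log_wind, 0.9)  # m/s, assumed
        fall_time = release_height / settling_velocity          # seconds
        return wind_speed * fall_time                           # metres

    random.seed(1)

    # Individual grains: single trajectories are effectively unpredictable.
    print("five single grains (m):",
          [round(dispersal_distance(0.5), 1) for _ in range(5)])

    # Ensemble means are steadier, but they shift with the assumed wind climate,
    # so the average is only as reliable as the meteorological inputs.
    for mean_log_wind in (0.3, 0.5, 0.7):
        mean_dist = statistics.mean(
            dispersal_distance(mean_log_wind) for _ in range(10_000))
        print(f"mean of 10,000 grains, mean_log_wind={mean_log_wind}: {mean_dist:.1f} m")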

    Paul – I gave the Phytopathology reference to make the point, that you now acknowledge, that you don’t know the literature. I wanted to make that point so that it was clear that your claims about evolutionary theory not being predictive were based on ignorance.

  98. 98
    DaveScot says:

    Bob

    So Oxford is saying one thing and doing another (i.e., stating the spelling rules of the English language and then not following them in its dictionary).

    I guess that’s par for the course for things with Oxford’s name on it. Richard Dawkins is another fine example.

  99. 99
    DaveScot says:

    Bob

    Ok, give me some predictions that evolution makes about the future.

    Let’s take a specific, important example. Where, when, and how will HIV evolve? Will it become more transmissible, less transmissible, or stay the same?

  100. 100
    tribune7 says:

    When he fails to answer the question to your satisfaction, will you be offering the ID predictions?

    Leo — if he should fail to offer, to your satisfaction, an ID prediction will you at least concede that ID and Darwinism are scientific equals?

    But how about this for an ID prediction w/regard to HIV — it will become better understood, means will be developed to curtail its transmission, hence it will be less transmissible.

  101. 101
    RichardFry says:

    Here’s my go. Over the course of thousands of years, the fraction of the human population resistant to HIV/AIDS will increase. It is unlikely that the virulence of HIV will increase; more likely it will decrease, in line with similar situations.

    However, because HIV can evolve much faster than the humans it infects, it’s unlikely that we will ever evolve resistance in 100% of the population, and so intervention in the form of technology will be required.

  102. 102
    RichardFry says:

    But how about this for an ID prediction w/regard to HIV — it will become better understood, means will be developed to curtail its transmission, hence it will be less transmissible.

    What makes that an ID prediction? It seems to me that it’s all equally applicable to the “evilution” interpretation.

    To my mind an ID prediction could be something like “Every time a cure is almost found some mysterious force steps in and moves the virus one step ahead. A cure is never found, and one cannot be evolved either as the virus changes and adapts somehow, almost as if there was an intelligence controlling it that wanted it to persist”

  103. 103
    DaveScot says:

    When he fails to answer the question to your satisfaction, will you be offering the ID predictions?

    I’ll recommend that you read Behe’s “Edge of Evolution” to know what ID predicts in this case. What it predicts is that this level of variation from what’s already there is well within the domain of random variation and natural selection.

    We don’t deny RM+NS is able to cause change and adaptation. We just disagree on the constraints. We both agree RM+NS turning a fish into Fermi in 500 years is so close to impossible it’s not worth further consideration. Where we disagree is whether or not RM+NS can turn a fish into Fermi in 500 million years.

  104. 104
    RichardFry says:

    Where we disagree is whether or not RM+NS can turn a fish into Fermi in 500 million years.

    Is it just a matter of degree then?

    What about 500 billion years?

    500 trillion billion?

    Or can this “barrier” never be breached by RM+NS no matter how much time is available, in your opinion, DaveScot?

  105. 105
    DaveScot says:

    It is unlikely that the virulence of HIV will increase; more likely it will decrease

    So basically you’re saying that virulence will decrease unless it increases.

    Perfect answer. Thanks.

    Below is my answer from a year ago, when an anonymous biologist asked ME that question. He was amazed I got it correct and accused me of somehow cheating, because no knuckle-dragging IDiot could possibly understand evolution that well:

    You asked about virulence and I answered. There is no answer. Extreme virulence usually becomes milder, because if a strain kills its host before it can be transmitted, the more virulent form gets selected out. But if transmission is really easy, then you get things like the Spanish Flu in WWI, which became more virulent, or bird flu on crowded poultry farms. Or take staph infections in hospitals. Those are becoming more virulent as antibiotic usage selects for stronger strains and crowding and open wounds aid transmission. There is no right answer for your question, at least not the way you put it.
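    That verbal trade-off can be put into a standard toy calculation. This is a sketch only; the functional forms and every number below are assumptions, not data from anything above. The idea: a strain’s “fitness” is roughly contacts per unit time, times the per-contact chance of infecting, times how long the host stays infectious. The infection gains saturate with virulence while the infectious period shrinks with it, and when the host is going to be removed quickly anyway (crowding, culling, easy reinfection), killing the host costs the pathogen less, so the favoured virulence is higher.

    # Toy virulence/transmission trade-off (illustration only; assumed forms and
    # made-up numbers).
    def fitness(virulence, contact_rate=10.0, half_sat=0.5, clearance=0.2,
                external_removal=0.0):
        """Expected secondary infections per case: contacts x per-contact
        infection probability x infectious period.  Infection probability
        saturates with virulence; the infectious period shrinks as virulence,
        clearance, or external removal rise."""
        infect_prob = virulence / (virulence + half_sat)
        infectious_period = 1.0 / (virulence + clearance + external_removal)
        return contact_rate * infect_prob * infectious_period

    def optimal_virulence(**kwargs):
        grid = [i / 1000 for i in range(1, 5001)]   # candidate virulence values
        return max(grid, key=lambda v: fitness(v, **kwargs))

    # Hosts rarely removed for other reasons: moderate virulence is favoured.
    print("slow external removal:", optimal_virulence(external_removal=0.05))
    # Hosts removed quickly anyway (crowding, culling): higher virulence pays.
    print("fast external removal:", optimal_virulence(external_removal=2.0))

    In this toy model the optimum sits near the square root of half_sat times the total removal rate, which is just a formal restatement of the verbal argument above: the cheaper it is to burn through a host, the nastier the winning strain.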

  106. 106
    RichardFry says:

    I’ll recommend that you read Behe’s “Edge of Evolution” to know what ID predicts in this case.

    There was some excitement earlier here when it was noted that a cure for malaria (a drug that requires evolution past the protein binding site limit) was potentially around the corner due to the work in that book.

    Has anything come of that of practical benefit yet, and is there some site tracking progress in that regard?

  107. 107
    RichardFry says:

    So basically you’re saying that virulence will decrease unless it increases.

    I’m saying that if I were going to bet my own money, that’s where I’d put it.
    However, due to the somewhat special circumstances surrounding HIV (multiple strains merging in a single person and then going back out again), a side bet might not be a bad idea.

    And no, I’m not saying virulence will decrease unless it increases. I’m saying that our best predictions come from other similar situations, and those situations usually show a decrease in virulence. And perhaps we even co-exist with viruses over time. Otherwise, how do you get things like this?

    Buried within the genetic blueprint of every human is a snippet of DNA that resembles a gene sequence from the human immunodeficiency virus (HIV). Humans have been carrying this unwanted genetic baggage around for more than 30 million years, according to researchers from the Howard Hughes Medical Institute (HHMI) at Duke University.

    http://www.hhmi.org/news/cullen.html
    If a virus on average got worse the more it spread, why are we all still here?

  108. 108
    DaveScot says:

    Richard

    There was some excitement earlier here when it was noted that a cure for malaria (a drug that requires evolution past the protein binding site limit) was potentially around the corner due to the work in that book.

    Who exactly got excited about it?

    Not me, that’s for sure.

  109. 109
    tribune7 says:

    What makes that an ID prediction?

    Design will curtail the transmission of HIV.

    To my mind an ID prediction could be something like “Every time a cure is almost found some mysterious force steps in and moves the virus one step ahead.

    Then you don’t understand ID. The actions of a “mysterious force” can never, almost by definition, be predicted.

    But if some force is preventing an AIDS vaccine via unknown means then HIV will not be made less transmissible the way I predict.

    OTOH, if this force somehow communicates to us that if you don’t do this, this, or this, you will not be likely to transmit or acquire HIV, and if all follow the directives of this force and the transmission of HIV is greatly curtailed, then I guess you can say that’s an ID prediction too.

    Of course, conversely, if the directives are rejected with greater frequency then you would expect the transmissibility to increase in accordance with the prediction set by your “mysterious force” theorem.

  110. 110
    DaveScot says:

    Richard

    re: how much time to evolve fish into Fermi

    Somewhere beyond the age of the universe yet still short of an infinite amount of time.

    How much time does the pure chance theory of evolution predict it would take? What are YOUR boundaries?

  111. 111
    DaveScot says:

    Richard

    Davescot, if we take it as read that the designer can directly manipulate matter at the sub-atomic level, then what is your opinion as to why retroviruses etc. are needed at all?

    In that case they aren’t needed. I don’t make presumptions that are unnecessary. There’s no need for a designer who can wave a magic wand to manipulate matter when I’ve just given you an adequate material mechanism that requires nothing more than a custom-designed virus and the ability to broadcast it. Humans already have the capacity to do this, which is why we take bio-warfare as a deadly serious subject.

  112. 112
    Bob O'H says:

    Let’s take a specific, important example. Where, when, and how will HIV evolve? Will it become more transmissible, less transmissible, or stay the same?

    It’s always evolving, even within the host. So the “where” and “when” is “everywhere the virus is replicating within its host”.

    The how is trickier. That depends on how its environment changes – i.e. how human behaviour changes, and what sort of drugs are developed.

    One thing I would predict is that any drug we find that cures AIDS will be overcome by the virus. Now, I’ll hedge my bets and say this isn’t certain, not least because we don’t have a drug yet, so we don’t know how it will act.

    I am pleased with Dave’s ID prediction: evolution operates.

  113. 113
    Paul Giem says:

    Bob,

    At (97) you just doubled down:

    [Quoting me] And you compare the plastic predictions of evolution on a grand scale with the plastic predictions of single atoms by statistical mechanics? [End quote]

    Yep.

    Now, once you take into account the Einsteinian modification E = mc^2, the First Law of Thermodynamics has no known exceptions. Granted, we can’t test it all the time, but it has been tested many times with the same result, and its incorporation into multiple other theories yields consistent results.

    The same can be said for almost all instances of the Second Law, certainly all laboratory instances. While the behavior of any given atom cannot be predicted with accuracy, the activity of large ensembles of atoms can be predicted with amazing accuracy. Gases will mix irreversibly without energy input to separate them, a cold object will cool a warmer object while a warm object will warm a colder object, et cetera. Those are strong predictions, not to be violated.
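    For concreteness, the textbook reason those ensemble predictions are so strong is that, for N roughly independent molecular contributions, the relative fluctuation of an ensemble average falls off as one over the square root of N. Assuming the single-particle spread is of the same order as its mean,

    \frac{\sigma_{\bar{X}}}{\langle X \rangle} \;=\; \frac{1}{\sqrt{N}} \, \frac{\sigma_{X}}{\langle X \rangle} \;\approx\; \frac{1}{\sqrt{10^{23}}} \;\approx\; 3 \times 10^{-12}

    for a mole-sized sample, which is why “gases will mix” is a prediction you can take to the bank even though no single molecule is predictable.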

    The examples I gave,

    the ancestors of various Cambrian assemblages of organisms, the presence or absence of large quantities of “Junk DNA” in the genomes of large groups of species, the convergence or lack thereof of different ways of organizing entire phyla into a tree of common ancestry.

    are where materialistic evolution didn’t just hedge its bets; it got the facts grossly wrong.

    Now, I could have understood if you had challenged the idea that ME got these facts grossly wrong; disagreed, but understood. But the apparently cheerful acceptance of a theory that makes gross mistakes, after equating it to statistical mechanics, is breathtaking.

    You claim that a stochastic theory can make predictions. Fair enough; I agree with you. But then, what do you do with a stochastic theory that makes grossly wrong predictions? At this point the comparison between it and thermodynamic theory is distinctly unflattering to ME.

    I think I now see why you didn’t regard what you were doing as literature dumping. You cheerfully did it again, right after your “Yep.” I think (correct me if I am wrong) that you felt these articles really did answer the question being raised (or at least, a question being raised). It looks to me like you followed both article citations with a short synopsis of what was observed, and what you perceived as its relevance.

    In that case, you did not understand the question being raised. Nobody (I think) is disputing whether gene frequencies can vary with time, or whether those gene frequency variations can be partially predicted on the basis of perceived fitness functions. Your example of changes in virulence gene frequencies in powdery mildew is beating a dead horse. The article on cross-pollination of conventional crops by genetically modified crops has nothing to do with the naturalistic origin of genetic information. In fact, for that particular article, the origin of the genetic information, or at least that particular combination, is known to be intelligent design. That’s what genetically modified means.

    The problem that needs solving, from a ME point of view, is to get enough information, fast enough, and sorted from all the genetic misinformation that would occur from random processes, to get from a “simple” cell to the variety of life we find today, in less than 4 billion years. You say that it can be done; we say (very highly) probably not. We say that evolution, meaning the grand biological materialist theory, is not on good mathematical ground. We offer some preliminary calculations, specifically in The Edge of Evolution. You come back with assertions, not calculations, and then challenge us with

    Some of us do apply statistics to genomic processes, and we get nothing like Behe’s results. Some of the people who do this actually have degrees in the subject, and are professors in mathematics departments. Feel free to argue that they aren’t rigourous, or even rigorous. But back up your assertions by showing the faults in their analyses please.

    But there is no outline of the statistics you use, or the results that you get, or even a link or reference, for the production of new information by stochastic processes, or how it allows apes to give rise to humans, or reptiles to turtles, or algae to spiders, or club moss to maples, or any major shifts. It is very hard to show the faults of a non-existent analysis.

    For large changes, what has been termed Megaevolution, the math is either absent, or in favor of Behe. Demonstrating microevolution, or even species formation, will not get you the grand theory unless you can show how the extra information can be expected to be obtained without intelligent input.

    What you need to show, mathematically, is, first, how much information you need (e.g., how much new information do humans, or whatever other group you wish to consider, have compared to the presumed ancestor), and second, what the probabilities are of crossing that gap with known stochastic processes combined with natural selection. Powdery mildew will not help with this question unless you can show that it can create new enzymes and/or structures on a routine basis. I haven’t yet read your reference, but I doubt that it gives that kind of data, in which case it would be completely irrelevant to the question being raised.
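    As a back-of-envelope sketch of the second half of that demand (my own toy numbers, not Behe’s calculation and not anyone’s published result): if a feature requires several specific point mutations to be present together before selection can see any benefit, the expected number of births you have to wait through grows roughly as the reciprocal of the product of the per-site mutation rates.

    # Toy waiting-size estimate (illustration only; 1e-8 per site per birth is an
    # assumed round number, and the model ignores population structure,
    # recombination, and stepwise selectable paths).
    def expected_births_needed(required_mutations, per_site_rate=1e-8):
        """Chance that one newborn carries all the required mutations at once is
        roughly the product of the per-site rates; on average you need about the
        reciprocal of that many births."""
        return 1.0 / (per_site_rate ** required_mutations)

    for k in (1, 2, 3):
        print(f"{k} specific mutation(s) needed together: "
              f"about {expected_births_needed(k):.0e} births")

    The dispute in this thread is over which row real adaptive paths resemble: the one-mutation row, selectable a step at a time, or the multi-mutation rows. The sketch only shows how fast the required numbers blow up under the multi-mutation assumption; it does not settle which assumption is right.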

    You now admit that the real reason you cited that article is to bolster an ad hominem argument:

    Paul – I gave the Phytopathology reference to make the point, that you now acknowledge, that you don’t know the literature. I wanted to make that point so that it was clear that your claims about evolutionary theory not being predictive were based on ignorance.

    In my major field, medicine, the stack of unread journals on a doctor’s desk is proverbial. Most of us are lucky to read JAMA and the New England Journal of Medicine, plus perhaps a prominent specialty journal. That leaves a lot out, and we rely on various journal summarizers to alert us to the really important stuff (yes, there is stuff out there that is not really important). It is only on the really important stuff that we might do a full-court press on the literature. I have read enough biology and geology literature to observe the same phenomenon there.

    However, one can understand the major outlines of a given question without having exhaustively read the literature. Your claim that my “claims about evolutionary theory not being predictive were based on ignorance” is unfair, and smacks of “He’s smarter than you, he studied biology.” The reason it’s unfair is not that I know everything, but that the specific points I made were accurate. Rather than challenge the points, you went after the person presenting them. That’s classic ad hominem.

    And if you are not careful, your ad hominem argument will boomerang back on you. Your claim is that “Some of us do apply statistics to genomic processes, and we get nothing like Behe’s results.” Well, then, what results do you get, and as the result of what statistics, and how do they differ from those of Behe? Or do you not really know, in which case your “claims” are “based on ignorance”?
