Uncommon Descent | Serving The Intelligent Design Community

Retrovirus infection of germline confirmed in vivo


There was some discussion here in the past year or so of whether retroviruses could indeed infect a germ cell and hence leave deactivated heritable fingerprints in descendants. Mike Behe mentions these retroviral markers as convincing evidence (to him) of common descent, at least in the primate lineage including humans and chimps. This experiment pretty much settles the question.

The testis and epididymis are productively infected by SIV and SHIV in juvenile macaques during the post-acute stage of infection

Miranda Shehu-Xhilaga, Stephen Kent, Jane Batten, Sarah Ellis, Joel Van der Meulen, Moira O'Bryan, Paul U Cameron, Sharon R Lewin and Mark P Hedger

Published: 31 January 2007
Retrovirology 2007, 4:7 doi:10.1186/1742-4690-4-7

Abstract

Background: Little is known about the progression and pathogenesis of HIV-1 infection within the male genital tract (MGT), particularly during the early stages of infection.

Results: To study HIV pathogenesis in the testis and epididymis, 12 juvenile monkeys (Macaca nemestrina, 4–4.5 years old) were infected with Simian Immunodeficiency Virus mac251 (SIVmac251) (n = 6) or Simian/Human Immunodeficiency Virus (SHIVmn229) (n = 6). Testes and epididymides were collected and examined by light and electron microscopy at weeks 11–13 (SHIV) and 23 (SIV) following infection. Differences were found in the maturation status of the MGT of the monkeys, ranging from prepubertal (lacking post-meiotic germ cells) to post-pubertal (having mature sperm in the epididymal duct). Variable levels of viral RNA were identified in the lymph node, epididymis and testis following infection with both SHIVmn229 and SIVmac251. Viral protein was detected via immunofluorescence histochemistry using specific antibodies to SIV (anti-gp41) and HIV-1 (capsid/p24) protein. SIV- and SHIV-infected macrophages, potential dendritic cells, and T cells in the testicular interstitial tissue were identified by co-localisation studies using antibodies to CD68, DC-SIGN and γδTCR. Infection of spermatogonia, but not of more mature spermatogenic cells, was also observed. Leukocytic infiltrates were observed within the epididymal stroma of the infected animals.

Conclusion: These data show that the testis and epididymis of juvenile macaques are a target for SIV and SHIV during the post-acute stage of infection and represent a potential model for studying HIV-1 pathogenesis and its effect on spermatogenesis and the MGT in general.

Comments
It seems like Mike Behe's ideas on ID are probably correct.
DeepDesign
April 10, 2008 at 06:44 PM PDT
Edit: I meant "haven't" read all the comments. Also, there is no way this kind of method can account for Haldane's dilemma.
jpark320
April 10, 2008 at 05:19 PM PDT
Am I missing something here, or is the selective "advantage" from being infected by SIV or HIV far outweighed by the cost? I have read all the comments, but has anyone mentioned that getting AIDS in exchange for some nucleotides is not a good trade-off?
jpark320
April 10, 2008 at 05:18 PM PDT
Allen, (76) Regarding your hypothesis, you state,
Central to this hypothesis for the evolutionary transition from a simple two-base code to a more complex (and variable) three-base code is the idea that the original sequence of genetic specification was the reverse of what it is today. That is, the order of the amino acids in functional polypeptides specified (through stereochemistry or some other chemical means) the order of the appropriate tRNAs, which then specified (via their anticodons) the order of the complementary bases in the corresponding mRNAs (DNA, of course, wouldn’t even enter the picture until much later).
Do you have any idea how you would do this (other than with an elaborate set of decoding enzymes that would put the ribosome and tRNAs to shame)? Is this not the same pathway, or at least a similar one, that Dean Kenyon traveled before he finally gave up on Biochemical Predestination?
Paul Giem
April 10, 2008 at 05:05 PM PDT
Allen, (76) To quote one of your predictions regarding the original code possibly being based on two-base codons,
1) that the association of specific amino acids with specific tRNAs (and their specific anticodons) is not arbitrary (i.e. not a “frozen accident”, as some have suggested), but rather a “natural” consequence of the chemistry of the amino acids and their corresponding tRNAs;
Note the complete absence of the possibility that the association of specific amino acids with specific tRNAs is neither arbitrary nor a "natural" consequence of the chemistry, but rather a designed correspondence. Correct me if I am wrong, but it does seem that this absence is characteristic of your way of thinking about the molecular basis of life. Furthermore, this absence appears to be deliberate. Your final statement is,
Notice that this would not include an empirically vacuous “if it’s not the naturalistic mechanism, it has to be magic” argument.
From this I gather that (a) you regard anything other than naturalistic mechanisms as being magic and empirically vacuous, and (b) you regard ID arguments as being in this category. With this frame of mind, it is difficult to see how you could in principle recognize intelligent design if you saw it. Let us suppose that the majority of the scientific community saw things your way. Let us further suppose that they put their presuppositions to use when they approved grants, edited papers, and peer-reviewed papers. Then, except for the occasional editor and collection of peer reviewers who saw things differently from you, no papers advocating ID could possibly be published. Those occasional editors could always be Sternberged and the rest of them would fall in line if they knew what was good for their careers. So your offer in (59) that
Believe me, if such research results start getting published, not only will I include them in the list, but the researchers involved will almost instantly become as famous as Watson and Crick. But not until then.
is not quite as generous and open-minded as it sounds. The hurdles that ID has to clear are really quite high, higher than those for just about any other theory regarding the scientific data. And yet, there is a need to deal with the problem of intelligent design. On another thread I wrote (comment #64),
As you probably know, extensive attempts to take natural variations and artificially select them for the effect of a blue rose have uniformly resulted in failure. However, recently because of some careful gene insertions and manipulations, scientists have been able to produce a rose that can reasonably be described as blue, and that has no known counterpart in nature.

Now supposing that we are alien scientists exploring the earth after a disastrous epidemic has wiped out humankind, and enough time has passed so that the products of civilization (including records of what happened to make the roses blue) have disappeared, but the varieties of roses live on. Could we apply those 47+ kinds of phenotypic variation to the roses now (then) existing and explain how the rose became blue? Would we not be tempted to call it "lateral gene transfer" (meaning undirected lateral gene transfer)? And would we not be dead wrong? How could we possibly arrive at the correct answer to the origin of blue roses without allowing for the possibility of intelligent design?

Furthermore, imagine an island where blue roses were planted and survived because there were few natural enemies. We might observe the roses exhibiting many variants, and identify experimentally confirmable sources of variation until we were blue in the face, and still not be able to identify the correct reason why these roses differed from the vast majority of those on the mainland.

Part of the problem would seem to stem (ahem) from the assumption that all causes that have ever operated are now operating roughly equally to how they operated in the past. For intelligent agents, that may not be a valid assumption. Intelligent agents may very well create episodically, and they are not required to create when we want them to so that we can see how it is done.
Would you not agree that intelligent design has already happened, and is therefore theoretically possible? If so, does it not belong with the other 47+ causes of phenotypic variation? And precisely how do you rule it out? Because you "know" that no designer (or Designer) existed back then? That's an interesting position for one who talks about his association with Friends. Or is there some other, more empirical reason?
Paul Giem
April 10, 2008 at 04:57 PM PDT
allen:

"You don't think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn't imply anything to you?"

"Sure it would matter. Again, I'd want a much larger sample size of sequenced individuals within each species. With a whole lot of samples, dating via molecular clock provides more reliable data."

Thanks for your well-reasoned response -- I appreciate your help.
ungtss
April 10, 2008 at 01:53 PM PDT
DaveScot: "To answer your question. I'd be mildly surprised if the fixed ERVs in humans vastly outnumbered those we have in common with chimps. One easy explanation is that chimps and humans aren't as closely related on the phylogenetic tree as we thought."

I don't see how that is an explanation. Species and the populations within them vary dramatically in the number and species specificity of the retroviruses they carry or are infected with, with massive variability in the ability of those viruses to infect and be transmitted through the germline.

For example, retroviruses (originally called "RNA tumor viruses") were studied by so many labs simply because they were so prevalent in mice and chickens. We apes don't have nearly as many as they do, IIRC, and that was somewhat of a surprise. That doesn't nullify the utility of studying chicken and rodent viruses, however, as virtually all of the transduced (or insertionally activated) oncogenes that were discovered using viruses have been shown to be mutated in human tumors, just by different genetic mechanisms.
Russell
April 10, 2008 at 12:36 PM PDT
Most people assume that ribosomes are nearly perfect "machines", an impression fostered by the typical textbook diagram showing the perfectly regular little P and A binding sites in the small subunit of the ribosome. In reality, ribosomes are pretty "wobbly", and binding of tRNAs at the P and A sites is often a hit-or-miss proposition. The reason this somewhat messy system still works is that there are so many ribosomes, working so incredibly fast, producing so many polypeptides and proteins, that functional products generally outnumber non-functional ones.

This situation would have been much less critical early in the evolution of cells, as they would have needed many fewer proteins and had many fewer essential structures and functions. We know this because that's the case with the prokaryotes today; both their genomes and their ribosomes are much simpler than those of eukaryotes. Under such conditions, if a two-base-translating ribosome encountered tRNAs in which the anticodons could be either two or three bases long, the ribosomes could translate mRNAs with two-base codons correctly often enough to produce sufficient functional proteins for the cell.

With increasing variation in the available amino acids (which can not only form spontaneously, but interconvert relatively easily within broad structural categories, such as the hydrophobic amino acid group), the number of variant tRNAs binding "new" amino acids would also increase. Any cell in which the tRNAs could line up along their corresponding amino acids in polypeptides would immediately have a way to specify either two-base or three-base codons in mRNAs that could be assembled complementary to them. This means that such cells could simultaneously be translating two-base and three-base mRNAs using "wobbly" ribosomes, and still be making enough functional polypeptides to survive. And among these cells, the ones that could more easily use three-base codons would have access to more variable amino acids, plus the added benefit of a highly redundant code (thus minimizing the negative effects of about one third of all point mutations).

Central to this hypothesis for the evolutionary transition from a simple two-base code to a more complex (and variable) three-base code is the idea that the original sequence of genetic specification was the reverse of what it is today. That is, the order of the amino acids in functional polypeptides specified (through stereochemistry or some other chemical means) the order of the appropriate tRNAs, which then specified (via their anticodons) the order of the complementary bases in the corresponding mRNAs (DNA, of course, wouldn't even enter the picture until much later).

This hypothesis immediately suggests several testable predictions:
1) that the association of specific amino acids with specific tRNAs (and their specific anticodons) is not arbitrary (i.e. not a "frozen accident", as some have suggested), but rather a "natural" consequence of the chemistry of the amino acids and their corresponding tRNAs;
2) that it can be demonstrated that, under certain "natural" conditions, the sequence of anticodons in tRNA can specify the assembly of corresponding codons in mRNA; and
3) that it can be demonstrated that, under certain "natural" conditions, ribosomes (especially those of prokaryotes) can translate either two-base or three-base codes into functional proteins (albeit with different amino acid sequences).

Again, demonstrating all of these would not prove that this was, in fact, the way the genetic code originally evolved. However, it would constitute a "proof of concept" test for such a hypothesis. At that point, the onus would be on anyone who supported an alternative hypothesis to produce an equally convincing "proof-of-concept" demonstration of their hypothesis. Notice that this would not include an empirically vacuous "if it's not the naturalistic mechanism, it has to be magic" argument.
Allen_MacNeill
April 10, 2008 at 12:21 PM PDT
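A rough way to see the numbers behind the "many wobbly ribosomes" point above is to treat each codon read as an independent trial; a minimal Python sketch, where the per-codon accuracy, peptide length, and ribosome count are all illustrative assumptions rather than measured values:

    # Sketch: error-prone ("wobbly") translation can still yield plenty of
    # functional product if enough ribosomes work in parallel.
    # All three numbers below are illustrative assumptions.

    per_codon_accuracy = 0.95   # assumed chance of a correct tRNA at each codon
    peptide_length = 50         # assumed length of a short, early polypeptide
    ribosomes = 10_000          # assumed number of ribosomes translating at once

    p_error_free = per_codon_accuracy ** peptide_length
    expected_functional = ribosomes * p_error_free

    print(f"P(error-free peptide) = {p_error_free:.3f}")             # ~0.077
    print(f"Expected functional copies = {expected_functional:.0f}")  # ~769

Even with only about 8% of individual peptides coming out error-free, thousands of parallel ribosomes still deliver hundreds of functional copies, which is the gist of the argument.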
Furthermore, if one examines the table of codons, an overall pattern becomes immediately apparent: all of the hydrophobic amino acids (phenylalanine, leucine, isoleucine, valine, proline, and alanine) have either uracil or cytosine as their second base. Structurally and functionally, these amino acids can also often be substituted for each other without changing the function of the proteins of which they are a part. Ergo, it is quite possible that in the early stages of the evolution of the genetic code, it was only necessary for there to be a two-base code to specify the hydrophobic amino acids.

A similar situation exists for most of the hydrophilic but uncharged amino acids (serine, threonine, tyrosine, glutamine, asparagine, cysteine, and glycine). It would be possible to code for all seven of these amino acids using a two-base code, especially if several of the amino acids can be interchanged.

This leaves only the charged (i.e. ionizing) amino acids: aspartate and glutamate (both anions), and lysine, arginine, and histidine (all cations). These either have two or four redundant codons, which would tend to indicate that they were among the original smaller set of amino acids, back when the code consisted of only two bases (an observation that also squares with their more restricted chemistry, due to their more complex structure and charged nature).

Next, the stop codons also can be "clumped". Two of them – UAA and UAG – differ only in the third (i.e. "wobbly") base. This again means that in a primitive two-base code, either one or two "stop" codons would have sufficed.

Finally, the two unique codons – AUG for methionine and UGG for tryptophan – are on opposite sides of the lexicon. The universal "start" codon – AUG, which codes for methionine – is in the "cell" in the lexicon that otherwise includes isoleucine, probably the most "dispensable" of the twenty amino acids, given its similar structure and function to leucine. Ergo, it would be relatively easy to construct a genetic code in which there were only two bases per codon, which would specify all of the truly necessary amino acids. (next: "wobbly" ribosomes)
Allen_MacNeill
April 10, 2008 at 11:35 AM PDT
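The second-base pattern described above can be checked mechanically against the standard codon table; a minimal Python sketch (the codon lists below are the standard RNA assignments):

    # Sketch: verify that the hydrophobic amino acids all carry U or C
    # in the second codon position (standard genetic code).

    hydrophobic_codons = {
        "Phe": ["UUU", "UUC"],
        "Leu": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
        "Ile": ["AUU", "AUC", "AUA"],
        "Val": ["GUU", "GUC", "GUA", "GUG"],
        "Pro": ["CCU", "CCC", "CCA", "CCG"],
        "Ala": ["GCU", "GCC", "GCA", "GCG"],
    }

    for aa, codons in hydrophobic_codons.items():
        second_bases = {c[1] for c in codons}
        assert second_bases <= {"U", "C"}, aa   # the claimed pattern holds
        print(aa, "second base(s):", sorted(second_bases))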
"You don't think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn't imply anything to you?"

Sure it would matter. Again, I'd want a much larger sample size of sequenced individuals within each species. With a whole lot of samples, dating via molecular clock provides more reliable data.

Imagine we have a thousand fully sequenced human genomes from populations all over the world. At $1000 each that's only a million bucks, which is like nothing for Craig Venter. Anthropology gives us guidance on when those populations separated from others, with less and less certainty going back in time. We should see some interesting things by making a dated phylogenetic tree of ERV distributions from that large, diverse sample size. It should agree more or less with the anthropological data; otherwise the bone hunters are at odds with the molecular biologists, and you can't trust either of them.

To answer your question: I'd be mildly surprised if the fixed ERVs in humans vastly outnumbered those we have in common with chimps. One easy explanation is that chimps and humans aren't as closely related on the phylogenetic tree as we thought. I have no problem with that. Let the paleoanthropologists duke it out with the molecular biologists. ID has no dog in that hunt.

Comparative genomics is pretty young and is bound a lot by the cost of DNA sequencing and the number crunching required to work with an exponentially growing database. A large sample of human genomes is the first order of business, and that's valuable in many and huge ways for medical research. I'm not sure what practical benefit there'd be in having a million other primate genomes instead of just a few individuals from each species. If it were me, I'd carve that out of the budget and put the resources somewhere more productive.
DaveScot
April 10, 2008 at 11:16 AM PDT
First of all, rRNA sequences are not translated. On the contrary, rRNA molecules form part of the three-dimensional structure of the ribosomal subunits, in the same way that amino acids form the three-dimensional structure of polypeptides and proteins. Ergo, a shift from a two-base codon to a three-base codon would have no effect on either the base sequence or the three-dimensional structure of the rRNA-containing ribosomal subunits.

Second, the number of bases that constitute a codon is absolutely crucial, rather than trivial, as you suggest. I have not proposed that there has ever been a different number of nucleotide bases than the five that we currently know (i.e. adenine, guanine, cytosine, thymine, and uracil). However, what I am proposing is that there has been a shift in the number of nucleotide bases that constitute a codon.

Since there are four bases in DNA (adenine, guanine, cytosine, and thymine), the number of possible codons in which bases are translated two at a time is 4^2 = 16 (i.e. four squared, or sixteen). Since at least one "stop" codon is also necessary, this means that the maximum number of amino acids that can be specified by a two-base code is 15. Currently there are 20 amino acids that are specified by the genetic code. Ergo, a minimum codon length of three bases is necessary to specify all 20.

However, as I pointed out in my earlier post, this means that there are 64 different three-base codons, since 4^3 = 64 (i.e. four cubed; this was all figured out by Francis Crick, George Gamow, and the members of the "RNA Tie Club" nearly fifty years ago). But, as I also pointed out, since the current code is highly redundant, most of the twenty amino acids are coded for by either two codons, four codons, or even six codons. Only two amino acids – methionine and tryptophan – are coded for by one codon each. Ergo, the third base in the current code is almost completely unnecessary to code for all but a very small number of the twenty amino acids. (to be continued)
Allen_MacNeill
April 10, 2008 at 10:51 AM PDT
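The codon-counting arithmetic above is simple to enumerate directly; a minimal Python sketch:

    # Sketch: enumerate all possible two-base and three-base codons.
    from itertools import product

    bases = "ACGU"
    two_base = ["".join(c) for c in product(bases, repeat=2)]
    three_base = ["".join(c) for c in product(bases, repeat=3)]

    print(len(two_base))    # 16: at most 15 amino acids plus one stop codon
    print(len(three_base))  # 64: room for 20 amino acids plus stops, with
                            # heavy redundancy, mostly in the third base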
Allen, re: the hypothetical genetic code using 2 bases per codon.

You said that this is a much simpler code. Mathematically it isn't. The number system is still base 4, with "digits" A C T G. Biotic messages are still encoded in various lengths of digits. The only thing that changed is the codon translation table. Adding extra bits to the reference number so you can index into a larger one-dimensional array is trivial from an information-processing point of view.

But let's run with it and see where it leads. I'm interested in how you transition from a 2-digit genetic code to a 3-digit code without killing the intermediary. We start out with a ribosome translating 2-digit codes. Encoded in the DNA are the specifications for the parts that make up the ribosome, largely (if not entirely) in strings that never get translated to protein but rather code for rRNA components. So we have a two-digit ribosome happily building proteins encoded in DNA at 2 digits per monomer in linear sequence.

Let's presume for the sake of argument that a single simple random variation in the DNA coding for ribosomal RNA can cause the ribosome to use 3 digits per monomer. Holy frameshifts, Batman! All our protein-coding genes are turned into instant nonsense. The rRNA mutation alone is fatal. To make this scenario work requires coordinated change: simultaneous with the rRNA mutation, we'd have to reorganize all our linear coding genes so that they are using 3 digits per monomer instead of 2. What're the odds?

My challenge for you, Allen, is to come up with a plausible way to make the transition from pairs to triplets in the genetic code. There is a very simple explanation for the observations which hint at an original 16 codons: the genetic code never used codon pairs. It always used triplets. The codon translation table always had 64 entries in it, codons were always triplets, but the redundancy was slightly higher. Each of the 16 original monomers was quadruply redundant in the code. They're not much less than quadruply redundant today. A simple, workable explanation. Let's see how complicated any alternative you offer gets. I can hear Sir Occam sharpening his razor again.
DaveScot
April 10, 2008 at 09:21 AM PDT
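The frameshift objection above can be made concrete with a toy reader; a minimal Python sketch (the sequence and reading widths are made-up illustrations, not real genes):

    # Sketch: re-reading a message written in 2-base codons with a 3-base
    # reader scrambles every downstream unit, not just one.

    def read_codons(seq, width):
        usable = len(seq) - len(seq) % width   # drop any trailing partial codon
        return [seq[i:i + width] for i in range(0, usable, width)]

    gene = "AUGCGAUUACGG"        # hypothetical message laid out as 2-base codons
    print(read_codons(gene, 2))  # ['AU', 'GC', 'GA', 'UU', 'AC', 'GG'] - intended
    print(read_codons(gene, 3))  # ['AUG', 'CGA', 'UUA', 'CGG'] - every unit differs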
"Yes. So here's a prediction based on common descent. ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species barring any wholesale deletions of the useless ERV code which should still leave evidence behind that something happened at the integration loci as it's unlikely the foreign code would be deleted without taking some original code with it at either or both ends."

Estimates are that 85% of ERVs are deleted, and deletion is most commonly accomplished through a process of homologous recombination involving the viral LTR sequences at either end of each ERV. This deletion leaves behind a solitary LTR at the integration site, so the site is still identifiable.

The age of a specific ERV can be determined by the accumulation of mutations. What is done is to compare the sequence divergence of the two flanking LTRs in a single ERV provirus. The viral RNA has only a single LTR sequence; the two copies in the DNA are generated at the time of integration, so examining the independent mutations that have accumulated in the two LTRs gives a measure of the age of the integration. I work on viruses.
ck1
April 10, 2008 at 08:59 AM PDT
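The LTR dating described above reduces to a one-line estimate, since the two LTRs start out identical at integration and then diverge independently; a minimal Python sketch with placeholder numbers (the rate and divergence values are illustrative, not taken from any study):

    # Sketch: date an ERV integration from the divergence of its two LTRs.
    # age ~= divergence / (2 * neutral substitution rate), because both
    # LTRs accumulate mutations independently after integration.

    neutral_rate = 2.3e-9    # assumed substitutions/site/year (placeholder)
    ltr_divergence = 0.046   # assumed fraction of differing sites (placeholder)

    age_years = ltr_divergence / (2 * neutral_rate)
    print(f"Estimated integration age: {age_years / 1e6:.0f} million years")  # ~10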
Allen MacNeill (#58): I won't analyze again the epistemological ambiguities (IMO) of your "list of engines of variation", because I have already done that in detail in a previous post, and you have already given your answers. So, no need to do everything a second time. I will just mention that, again IMO, the essence of this new answer of yours derives from the same epistemological confusion between mechanisms of causation of variation and modalities of variation. I see that problem in many of the things you say. But I am afraid we have to agree to disagree on that. I respect your position, as I hope you will mine.
gpuccio
April 10, 2008 at 08:55 AM PDT
DaveScot: "If they don't need a lung for a long enough period of time RV+NS will get rid of it."

Only if there is some survival advantage to NOT having the lung. Seems to me that given the obvious advantages of lungs (like being able to live in more diverse and less oxygen-rich rivers), and the rarity of this lungless frog, the most reasonable scenario is not that RV+NS REMOVED the lung, but that the lung was removed by an information-DESTROYING mutation, and the damaged, mutant, inferior frog was able to limp along in a small ecological niche due to the oxygen content of the water. Change the oxygen content of the water, and the frog goes extinct. This isn't evolution; it's not an increase in functionality or flexibility; this is a line of unfortunate mutants that were only able to survive in one place.
ungtss
April 10, 2008 at 08:39 AM PDT
DaveScot: "untss the key is to identify fixed ERVs, data we don't yet have. Yes. So here's a prediction based on common descent. ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species barring any wholesale deletions of the useless ERV code which should still leave evidence behind that something happened at the integration loci as it's unlikely the foreign code would be deleted without taking some original code with it at either or both ends."

Agreed. But that prediction is not exclusive to common descent. Those same facts are consistent with a discontinuous scenario as well. Especially under the following circumstances:

1) It turns out that some of the ERVs we are currently using to support this argument are not fixed in the entire human population;
2) It turns out that a disproportionately large number of the fixed ERVs are not shared; or
3) It turns out that some of the code segments we think are ERVs are not actually ERVs, but only share some ERV characteristics (particularly in the case of ERVs indispensable to organismal function).
ungtss
April 10, 2008 at 08:17 AM PDT
Allen: "Developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record."

I'm not following your reasoning. Developmental plasticity is the ability of genetically identical individuals to develop different physical features as a function of their environment, through genetic switches and regulators. How does that inborn capacity to vary with environment preadapt animals for the future GENETIC changes that are the real substance of the common descent controversy?
ungtss
April 10, 2008 at 08:10 AM PDT
ungtss: "the key is to identify fixed ERVs, data we don't yet have."

Yes. So here's a prediction based on common descent: ERV remnants with identical integration points between man and chimp will be fixed within the populations of both species, barring any wholesale deletions of the useless ERV code, which should still leave evidence behind that something happened at the integration loci, as it's unlikely the foreign code would be deleted without taking some original code with it at either or both ends.
DaveScot
April 10, 2008 at 08:09 AM PDT
So the section between "AM:" and the final line is supposed to be quotes...
Charlie
April 10, 2008 at 08:09 AM PDT
Allen MacNeill says:
By contrast, there is currently not one journal in which empirical results obtained from field and laboratory research supporting the hypothesis that the mechanisms that produce variation in nature include something that could reasonably be considered to be foresight are being published.
as well as:
DS: "It almost seems like evolution was thinking ahead of itself when it invented plasticity."

AM: "This is the point of Mary Jane West-Eberhard's recent book, Developmental Plasticity and Evolution, in which she discusses precisely this point. ... Developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record."

Sounds like foresight isn't so unreasonable.
Charlie
April 10, 2008 at 08:08 AM PDT
DaveScot wrote (in #59):
"It almost seems like evolution was thinking ahead of itself when it invented plasticity."
This is the point of Mary Jane West-Eberhard's recent book, Developmental Plasticity and Evolution, in which she discusses precisely this point. That is, developmental plasticity (for which we now have mountains of new evidence) makes possible exactly the kinds of rapid changes in allometry you describe for domestic dogs (BTW, the correct scientific name for the domestic dog is Canis familiaris L., not "canine familiarus"). Developmental plasticity is a hallmark of the development of virtually all multicellular eukaryotes. As West-Eberhard points out, such plasticity is indeed an adaptive mechanism in eukaryotes, allowing for relatively rapid changes in phenotype that would be impossible via the simple genetic mechanisms upon which the "modern evolutionary synthesis" (i.e. "neo-darwinism") was based.

As most of this plasticity is based on the homeotic gene regulatory mechanisms that are a hallmark of eukaryotes (especially animals), and as the same kinds of indirect historical evidence that DaveScot cites for accepting common descent also apply to the conclusion that homeotic gene regulation began at the very beginning of the evolution of multicellular eukaryotes, the conclusion is exactly what DaveScot suggests: developmental plasticity, produced by homeotic gene regulation mechanisms and related processes, pre-adapted the eukaryotes for precisely the kinds of patterns of macroevolutionary phenotypic change that is so amply demonstrated by both the fossil and comparative genomic record.
Allen_MacNeill
April 10, 2008 at 07:59 AM PDT
Allen: "However, the kinds of evidence discussed in this thread involves chunks of code that are not adaptive, at least not to the organisms into whose genomes they have been inserted. And yes, this could be interpreted as evidence that an 'intelligent coder' could have inserted such code into otherwise adaptive genomes, for His own nefarious purposes."

That was the point I was trying to make. I think that pseudogenes and ERVs need to be analyzed differently in the context of common descent. That's why I wrote: "Consequently, since the same redundant, disabled code in the same locations can easily be explained by both systems, I don't think shared pseudogenes provide meaningful evidence of common ancestry. Shared ERVs COULD provide such evidence ... but it appears without a study of the FIXED ERVs, we're kinda out of luck. At least for now."

Pseudogenes could be understood and analyzed as code bloat. ERVs cannot. However, ERVs can be seen as a mechanism of genetic engineering themselves (as they appear to be useful), and (with enough study) could provide substantial evidence related to the common descent controversy.

Allen: "The only conclusion that one can draw from this hypothesis would be that the 'intelligent coder' is a malicious entity that cares not a whit for the organisms whose genomes it meddles with, but who has an overweening desire for His handiwork to be virtually indistinguishable from the operation of purely non-directed 'natural' processes."

Not if you differentiate between ERVs and pseudogenes.
ungtss
April 10, 2008 at 07:57 AM PDT
Even chunks of old code in a rewritten computer program once had some function in the older versions of that program. In other words, they were once "adaptive", to use terminology that links such information to analogous information in genomes. However, the kinds of evidence discussed in this thread involves chunks of code that are not adaptive, at least not to the organisms into whose genomes they have been inserted. And yes, this could be interpreted as evidence that an "intelligent coder" could have inserted such code into otherwise adaptive genomes, for His own nefarious purposes.

But such codes are, of course, exactly what we are discussing here: they are viruses (and/or "worms", in that some code for their own reproduction, independently of the reproduction of their hosts). That is, they do not benefit their hosts (except by extremely rare accident), and are therefore not evidence of any intent on the part of the "intelligent coder" to promote anything except the survival and reproduction of His parasitic (and usually disruptive, and sometimes fatal) viral codes.

The only conclusion that one can draw from this hypothesis would be that the "intelligent coder" is a malicious entity that cares not a whit for the organisms whose genomes it meddles with, but who has an overweening desire for His handiwork to be virtually indistinguishable from the operation of purely non-directed "natural" processes.
Allen_MacNeill
April 10, 2008 at 07:45 AM PDT
DeepDesign, re: the lungless frog.

Random variation and natural selection is really good at culling things that aren't needed. It really sucks at creating things that are needed. All frogs can breathe through the skin. Given enough skin they don't need a lung. If they don't need a lung for a long enough period of time RV+NS will get rid of it.

What else is evolution good at that helps explain this? Plasticity in scale. The most familiar example even has the Latin for "familiar" in the name - canine familiarus. In an evolutionary eyeblink of time, plasticity in scaling produced dogs with various combinations of different scale in body parts - snout width and length, head shape, ear size, length of tail, short legs on long trunks, long legs on short trunks, and normal adult weights ranging from 2 pounds to 200 pounds. The key in all this is that there's nothing new, just bigger or smaller versions of things that were already there.

The breathable skin of the frog was already there. Making more of it isn't much of a challenge for evolution. Just modify the size of body parts to optimize the ratio of skin surface area to body mass, and lungs become less and less necessary as that ratio grows. Lungs can shrink or grow in size like any other scalable body part. If they're not needed at all they can keep on shrinking to nothing.

So the most likely explanation for this is RV+NS changing the relative scale of body parts that we know are plastic. It almost seems like evolution was thinking ahead of itself when it invented plasticity. But thinking ahead is something that RV+NS can't do. RV+NS is reactive. Planning ahead is proactive.
DaveScot
April 10, 2008 at 07:42 AM PDT
gpuccio wrote (in #48):
"Allen MacNeill, with all his openmindedness (which I certainly am more than willing to recognize), still does not feel like including, even hypothetically, the action of a designer among his “engines of variation”.
All of the mechanisms listed among the "engines of variation" have been discovered as the result of a century and a half of painstaking empirical research, conducted in the field and in laboratories all over the world. The published literature comprising this research enterprise encompasses something on the order of 2 million volumes of various research journals, quarterly reviews, edited anthologies, and original books, all of them devoted to reporting the materials and methods, results, and implications of those results for empirically testable hypotheses.

By contrast, there is currently not one journal in which empirical results obtained from field and laboratory research supporting the hypothesis that the mechanisms that produce variation in nature include something that could reasonably be considered to be foresight are being published. Until such research starts being done, and until it starts getting published, and until it is subjected to the same rigorous and highly skeptical scrutiny that the other mechanisms of variation have been subjected to, it simply does not merit inclusion in the list.

Believe me, if such research results start getting published, not only will I include them in the list, but the researchers involved will almost instantly become as famous as Watson and Crick. But not until then.
Allen_MacNeill
April 10, 2008 at 07:34 AM PDT
Bob OH: "Your argument assumes that the evolution of ERVs is sufficiently slow. If they evolve quickly enough, then any phylogenetic signal will be drowned out by the noise. If you look at any textbook on phylogenetics, you'll see they discuss this w.r.t. sequence evolution. Contrary to what Dave asserts, this is fully rigourous. Here's the rigour with full jargon: You have a stochastic process where all states intercommunicate. Therefore the process has a stationary distribution. The implication is that the historical signal will eventually be degraded. ERVs are a bit more complex, because they aren't single bases, but as long as we condition on their extinction not having occurred, the same result can be found: it's just a consequence of having a stochastic process. The questions then mainly become empirical - for example, how fast is stationarity achieved? What are the particulars of the stochastic process (and then we can return to mathematical rigour by trying to model them)? Amongst the many books I really should read is Mike Lynch's on genome evolution, which tackles these sorts of problem. It's not an area I've had to deal with much, so I'm not up to date on the literature."

If I'm understanding you correctly (and let me know if I'm not), you're arguing that we'd expect fewer older (and thus shared) ERVs, because those sequences mutate (and thus become unrecognizable) over time. (In my experience, the use of "full jargon" serves only to obscure meaning.) That is another way this could be made mathematically rigorous:

1) Compare the degree to which ERVs have differentiated between humans and chimps to the degree to which the code as a whole has differentiated -- if the ERVs have changed significantly less, they either provide some survival advantage in that configuration, or they are younger than the branching of humans + apes.

2) Compare the proportion of fixed shared ERVs to fixed unshared ERVs, compare that to the background rate of mutation, and ask if the difference can be explained by background mutation in the allotted time, or if there are far fewer fixed shared ERVs than would be expected.
ungtss
April 10, 2008 at 07:34 AM PDT
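The stationarity point quoted above has a standard closed form under the simplest substitution model (Jukes-Cantor); a minimal Python sketch showing the historical signal decaying to chance (the rate here is an arbitrary scale, purely illustrative):

    # Sketch: under the Jukes-Cantor model, expected identity between a
    # sequence and its ancestor decays toward 1/4 (chance) over time:
    #   identity(t) = 1/4 + (3/4) * exp(-4*mu*t/3)
    import math

    mu = 1.0  # substitutions per site per unit time (arbitrary scale)
    for t in [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]:
        identity = 0.25 + 0.75 * math.exp(-4.0 * mu * t / 3.0)
        print(f"t = {t:4.1f}  expected identity = {identity:.3f}")
    # As t grows, identity -> 0.250, no better than chance: the
    # phylogenetic signal is eventually drowned out, as argued above.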
~98,000 ERVs have been identified in the sequenced human genome. These are grouped into ~50 different families. This study might be of some interest: http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=17581995

This study looked at the HERV-K(HML2) family and did an in-depth analysis of the proviruses integrated at 99 ERV sites: 25/99 of the ERVs were shared by humans and chimps and were thus the oldest age class; 66/99 were found in humans but not chimps (intermediate age); 8/99 were found in some but not all humans and in no chimps (newest ERVs). So in this small set of ERVs, ~25% of these integrations were also found in chimps.

Also, remember that retroviruses have RNA genomes. Following infection, a DNA copy is generated, and this copy must integrate into the host genome as part of the virus replicative cycle. That integrated copy becomes a permanent part of the host cell genome.
ck1
April 10, 2008 at 07:23 AM PDT
Dave Scot: "We really need to know how many are fixed in the human gene pool and how many are fixed in the gene pool of other primate species. Fixation for things with no selection value is a low odds event. For that matter fixation of stuff with a positive selection value is a low odds event too. So you'd really expect most of the ERVs are not fixed and are dwindling in frequency in their respective gene pools."

I agree -- the key is to identify fixed ERVs, data we don't yet have.

"Even just a few of them that are fixed at identical insertion points in multiple species is strong evidence of a shared ancestor."

You don't think the proportion would matter? For instance, if there are 5,000 fixed ERVs and 10 are at identical insertion points with chimps, that wouldn't imply anything to you?

"Similar observations with pseudogenes and transposable elements make the case even stronger. The vast similarity in function and sequence of active coding genes adds considerably more weight to the evidence for common descent."

Personally I don't buy that inference. If you know any computer programmers, ask them if there are any "redundant, disabled, inefficient subroutines" analogous to pseudogenes in today's highly complex software. The fact is, when programmers write programs, they don't start from scratch every time -- they modify previous designs, and oftentimes don't bother to delete sections of code that no longer have any function. When they grab an "object" to plug into their program, the "object" often contains functionality they don't need -- but they stick it in anyway and just take quick steps to disable the unnecessary code. You'll often find these redundant segments in identical locations in the code in wildly diverse software packages, because the same subroutines are pulled out of libraries. It's called "code bloat." In fact, I'd be surprised if our designer DIDN'T use this approach to genetic engineering. Reinventing the wheel is a waste of time. Consequently, since the same redundant, disabled code in the same locations can easily be explained by both systems, I don't think shared pseudogenes provide meaningful evidence of common ancestry. Shared ERVs COULD provide such evidence ... but it appears without a study of the FIXED ERVs, we're kinda out of luck. At least for now.

"And that's just the molecular evidence. The fossil record generally agrees with the molecular evidence and so too do anatomical similarities in both living and extinct species."

In a very similar way to the above, I don't think that anatomical similarities imply common descent. All cars have certain anatomical similarities -- tires, steering wheels, seats, engines, etc. They have these anatomical similarities not because they are related, but because the designs work, and the designers use what works. If you were going to populate a planet with life, wouldn't some of your lifeforms share anatomical similarities?

"Compare 'God of the Gaps' to 'Darwin of the Gaps'. There's not a dime's worth of difference between the two of them."

I agree with that wholeheartedly.

"If someone can demonstrate how that code driven machinery came about without intelligent agency I'll concede that all subsequent evolution is possible and plausible without intelligent agency. But that's just me. I'll follow the evidence whichever way it leads."

I'm with you, man.
ungtss
April 10, 2008 at 07:15 AM PDT
Bob OH, re: rigor.

The problem with ToE is that it is ultimately based on an unpredictable mechanism. If you can't predict, you can't be rigorous. What does ToE predict is the next step in human evolution? It predicts nothing. It can't. Anything might happen; then again, maybe nothing will happen. ToE covers all possible contingencies after the fact but never before the fact. Compare this to a rigorous science like astronomy. It not only tells you precisely where all the planets were in the past but precisely where they will be in the future. That's rigor.
DaveScot
April 10, 2008 at 06:58 AM PDT
Allen: "This controversy continues today, with partisans for neutral molecular evolution squaring off against 'pan-adaptationists' against 'neo-lamarkians', etc. etc. etc."

Of course it continues. That's because all the partisans are wrong. There's an elephant in the room.
DaveScot
April 10, 2008 at 06:46 AM PDT