
So, why are the human and chimpanzee/bonobo genomes so similar? A reply to Professor Larry Moran


Professor Larry Moran has kindly responded to my recent post questioning whether he, or anyone else, understands macroevolution. In the course of his response, titled, What do Intelligent Design Creationists really think about macroevolution?, Professor Moran posed a rhetorical question:

I recently wrote up a little description of the differences between the human and chimpanzee/bonobo genomes showing that those differences are perfectly consistent with everything we know about mutation rates and the fixation of alleles in populations [Why are the human and chimpanzee/bonobo genomes so similar?]. In other words, I answered Vincent Torley’s question [about whether there was enough time for macroevolution to have occurred – VJT].

That post was met with deafening silence from the IDiots. I wonder why?

I’ve taken the trouble to read Professor Moran’s post on the genetic similarity between humans, chimpanzees and bonobos, and I’d like to make the following points in response.

1. Personally, I accept the common ancestry of humans, chimpanzees and bonobos. Of course, I am well aware that many Intelligent Design theorists don’t accept common ancestry, but some prominent ID advocates do. Why do I accept common descent? Because I think it’s the best explanation for the pattern of similarities we find between humans, chimpanzees and bonobos. Young-earth creationist Todd Wood (who is also a geneticist) has freely acknowledged that it is difficult to explain these similarities without assuming common ancestry, in his 2006 article, The Chimpanzee Genome and the Problem of Biological Similarity (Occasional Papers of the BSG, No. 7, 20 February 2006, pp. 1-18). Referring to studies which highlight these similarities, he writes:

Creationists have responded to these studies in a variety of ways. A very popular argument is that similarity does not necessarily indicate common ancestry but could also imply common design (e.g. Batten 1996; Thompson and Harrub 2005; DeWitt 2005). While this is true, the mere fact of similarity is only a small part of the evolutionary argument. Far more important than the mere occurrence of similarity is the kind of similarity observed. Similarity is not random. Rather, it forms a detectable pattern with some groups of species more similar than others. As an example consider a 200,000 nucleotide region from human chromosome 1 (Figure 2). When compared to the chimpanzee, the two species differ by as little as 1-2%, but when compared to the mouse, the differences are much greater. Comparison to chicken reveals even greater differences. This is exactly the expected pattern of similarity that would result if humans and chimpanzees shared a recent common ancestor and mice and chickens were more distantly related. The question is not how similarity arose but why this particular pattern of similarity arose. To say that God could have created the pattern is merely ad hoc. The specific similarity we observe between humans and chimpanzees is not therefore evidence merely of their common ancestry but of their close relationship.

Evolutionary biologists also appeal to specific similarities that would be predicted by evolutionary descent. Max’s (1986) argument for shared errors in the human and chimpanzee genomes is an example of a specific similarity expected if evolution were true. This argument could be significantly amplified from recent findings of genomic studies. For example, Gilad et al. (2003) surveyed 50 olfactory receptor genes in humans and apes. They found that the open reading frames of 33 of the human genes were interrupted by nonsense codons or deletions, rendering them pseudogenes. Sixteen of these human pseudogenes were also pseudogenes in chimpanzee, and they all shared the exact same substitution or deletion as the human sequence. Eleven of the human pseudogenes were shared by chimpanzee, gorilla, and human and had the exact same substitution or deletion. While common design could be a reasonable first step to explain similarity of functional genes, it is difficult to explain why pseudogenes with the exact same substitutions or deletions would be shared between species that did not share a common ancestor.

Nevertheless, Wood feels compelled to reject common ancestry, since he believes the Bible clearly teaches the special creation of human beings (Genesis 1:26-27; 2:7, 21-22). Personally, I’d say that depends on how you define “special creation.” Does the intelligent engineering of a pre-existing life-form into a human being count as “creation”? In my book it certainly does.

2. In his post, Professor Moran (acting as devil’s advocate) proposes the intelligent design hypothesis that “the intelligent designer created a model primate and then tweaked it a little bit to give chimps, humans, orangutans, etc.” However, he argues that this hypothesis fails to explain “the fact that humans are more similar to chimps/bonobos than to gorillas and all three are about the same genetic distance from orangutans.” On the contrary, I think it’s very easy to explain that fact: all one needs to posit is three successive acts of tweaking, over the course of geological time: a first act, which led to the divergence of African great apes from orangutans; a second act, which caused the African great apes to split into two lineages (the line leading to gorillas and the line leading to humans, chimps and bonobos); and finally, a third act, which led humans to split off from the ancestors of chimps and bonobos.
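To make the logic of that reply concrete, here is a minimal sketch (an illustration of my own, not something Professor Moran proposed) showing that three successive tweaks along a branching lineage automatically reproduce the pattern he cites: humans closest to chimps and bonobos, gorillas next, and all three African apes roughly equidistant from orangutans. The tweak labels t1–t3 are hypothetical placeholders, not real genomic features.

```python
# Sketch of the "three successive tweaks" scenario described above. Each lineage
# carries whatever tweaks were made on the branches leading to it; counting the
# tweaks shared by two lineages reproduces the observed ordering of similarities.
# The labels t1-t3 are hypothetical placeholders, not real genomic features.

TWEAKS = {
    "orangutan":    set(),               # diverged before any of the tweaks
    "gorilla":      {"t1"},              # t1: African great-ape tweak
    "chimp/bonobo": {"t1", "t2"},        # t2: human/chimp/bonobo tweak
    "human":        {"t1", "t2", "t3"},  # t3: human-specific tweak
}

def shared_tweaks(a, b):
    """Number of tweaks two lineages inherit in common."""
    return len(TWEAKS[a] & TWEAKS[b])

for other in ("chimp/bonobo", "gorilla", "orangutan"):
    print(f"human vs {other}: {shared_tweaks('human', other)} shared tweak(s)")

# And all three African apes are equally distant from the orangutan:
for ape in ("human", "chimp/bonobo", "gorilla"):
    print(f"{ape} vs orangutan: {shared_tweaks(ape, 'orangutan')} shared tweak(s)")
```

Of course, ordinary common descent with accumulating mutations predicts exactly the same nested pattern; the point of the sketch is only that successive tweaking is not ruled out by it.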

“Why would a Designer do it that way?” you ask. “Why not just make a human being in a single step?” The short answer is that the Designer wasn’t just making human beings, but the entire panoply of life-forms on Earth, including all of the great apes. Successive tweakings would have meant less work on the Designer’s part, whereas a single tweaking causing a simultaneous radiation of orangutans, gorillas, chimps, bonobos and humans from a common ancestor would have necessitated considerable duplication of effort (e.g. inducing identical mutations in different lineages of African great apes), which would have been uneconomical. If we suppose that the Designer operates according to a “minimum effort” principle, then successive tweakings would have been the way to go.

3. But Professor Moran has another ace up his sleeve, for he argues that the number of mutations fixed since humans and chimps diverged is just what the observed mutation rate would predict over the last few million years. In other words, time is all that is required to generate the differences we observe between human beings and chimpanzees, without any need for an Intelligent Designer:

The average generation time of chimps and humans is 27.5 years. Thus, there have been 185,200 generations since they last shared a common ancestor if the time of divergence is accurate. (It’s based on the fossil record.) This corresponds to a substitution rate (fixation) of 121 mutations per generation and that’s very close to the mutation rate as predicted by evolutionary theory.

Now, I suppose that this could be just an amazing coincidence. Maybe it’s a fluke that the intelligent designer introduced just the right number of changes to make it look like evolution was responsible. Or maybe the IDiots have a good explanation that they haven’t revealed?
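Before setting out my objections, let me lay out the arithmetic behind that quotation as I understand it, so that readers can check it for themselves. This is only a back-of-the-envelope sketch using figures that appear in this post: a divergence of roughly five million years, a 27.5-year generation time, and Professor Moran’s estimate of about 22.4 million mutations fixed along the human line, which is discussed below.

```python
# Back-of-the-envelope check of the figures quoted above. All inputs are taken
# from this post; none of them is an independent measurement of mine.

divergence_years = 5.1e6   # approximate human/chimp divergence assumed by Moran
generation_time = 27.5     # average generation time in years (from the quote)
fixed_mutations = 22.4e6   # mutations said to be fixed along the human line

generations = divergence_years / generation_time
print(f"Generations since divergence: {generations:,.0f}")                       # ~185,000
print(f"Implied fixations per generation: {fixed_mutations / generations:.0f}")  # ~121

# If the divergence were closer to ten million years, as John Hawks argues
# (see below), the implied per-generation figure would be roughly halved:
print(f"At 10 Myr: {fixed_mutations / (10e6 / generation_time):.0f} per generation")
```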

Some mathematical objections to Professor Moran’s argument

Professor Moran makes the remarkable claim that 130 mutations are fixed in the human population, in each generation. Here are a few reasons why I’m doubtful, even after reading his posts on the subject (see here, here and here):

(a) most mutations will be lost due to drift, so a mutation will have to appear many times before it gets fixed in the population;
(b) necessarily, the mutation rate will always be much greater than the fixation rate;
(c) nearly neutral mutations cannot be fixed except by a bottleneck.

I owe the above points to a skeptical biologist who kindly offered me some advice about fixation. As I’m not a scientist, I shall pursue the matter no further. Instead, I’d like to invite other readers to weigh in. Is Professor Moran’s figure credible?
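For readers who do want to weigh in, it may help to state the textbook result that is at stake. Under the neutral theory, a new neutral mutation in a diploid population of size N is eventually fixed with probability 1/(2N); and since roughly 2Nμ new neutral mutations arise in the population each generation (where μ is the mutation rate per individual per generation), the expected number that eventually reach fixation works out to 2Nμ × 1/(2N) = μ per generation, independent of population size. The little simulation below is a sketch of my own, not anything taken from Professor Moran’s posts; it simply checks the 1/(2N) figure numerically. Whether that textbook identity disposes of objections (a)–(c) above is precisely what I am inviting readers to discuss.

```python
import random

# Toy Wright-Fisher simulation for a single new neutral mutation in a diploid
# population of size N (2N gene copies). Standard theory says the mutation is
# eventually fixed with probability 1/(2N). This is an illustration of the
# textbook result only, not a model of human evolution.

def goes_to_fixation(N, rng):
    """Follow one new neutral mutation until it is either lost or fixed."""
    total = 2 * N
    copies = 1                          # the mutation starts as a single copy
    while 0 < copies < total:
        p = copies / total              # current allele frequency
        # the next generation is a binomial sample of 2N gene copies
        copies = sum(1 for _ in range(total) if rng.random() < p)
    return copies == total

def estimate_fixation_probability(N=100, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(goes_to_fixation(N, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    N = 100
    print(f"Simulated fixation probability: {estimate_fixation_probability(N):.4f}")
    print(f"Theoretical 1/(2N):             {1 / (2 * N):.4f}")
```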

Professor Moran is also assuming that chimps and humans diverged a little over five million years ago. He might like to read the online articles, What is the human mutation rate? (November 4, 2010) and A longer timescale for human evolution (August 10, 2012), by paleoanthropologist John Hawks, who places the human-chimp divergence at about ten million years ago, but I’ll let that pass for now.

I shall also overlook the fact that Professor Moran severely underestimates the genetic differences between humans and chimps. As Jon Cohen explains in an article in Science (Vol. 316, 29 June 2007) titled, Relative Differences: The Myth of 1%, these differences include “35 million base-pair changes, 5 million indels in each species, and 689 extra genes in humans,” although he adds that many of these may have no functional meaning, and he points out that many of the extra genes in human beings are probably the result of duplication. Cohen comments: “Researchers are finding that on top of the 1% distinction, chunks of missing DNA, extra genes, altered connections in gene networks, and the very structure of chromosomes confound any quantification of ‘humanness’ versus ‘chimpness.’” Indeed, Professor Moran himself acknowledges in another post that “[t]here are about 90 million base pair differences as insertion and deletions (Margues-Bonet et al., 2009),” but he goes on to add that the indels (insertions and deletions) “may only represent 90,000 mutational events if the average length of an insertion/deletion is 1kb (1000 bp).” Still, 90,000 is a pretty small number, compared to his estimate of 22.4 million mutations that have occurred in the human line.
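To put that last comparison into numbers, here is a minimal sketch using only the figures quoted in the paragraph above; the 1 kb average indel length is Professor Moran’s assumption, not a measured value.

```python
# Counting mutational *events* rather than base pairs affected, using only the
# figures quoted above. The 1 kb average indel length is Moran's assumption.

point_mutations_human_line = 22.4e6   # Moran's estimate for the human lineage
indel_base_pairs = 90e6               # base pairs tied up in insertions and deletions
assumed_mean_indel_length = 1000      # bp per indel event (assumption)

indel_events = indel_base_pairs / assumed_mean_indel_length
print(f"Implied indel events: {indel_events:,.0f}")        # 90,000
print(f"Relative to 22.4 million point mutations: "
      f"{indel_events / point_mutations_human_line:.2%}")  # ~0.40%
```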

I could also point out that the claim made by Professor Moran that the DNA of humans and chimps is 98.6% identical in areas where it can be aligned is misleading, taken on its own: what it overlooks is the fact that, as creationist geneticist Jeffrey Tomkins (who obtained his Ph.D. from Clemson University) has recently demonstrated, the chromosomes of chimpanzees display “an overall genome average of only 70 percent similarity to human chromosomes” (Human and Chimp DNA–Nearly Identical, Acts & Facts 43 (2)).

I might add (h/t StephenB) that Professor Moran has overlooked the fact that humans have 23 pairs of chromosomes, whereas chimpanzees (and other great apes) have 24. However, Dr. Jeffrey Tomkins has published an article titled, Alleged Human Chromosome 2 “Fusion Site” Encodes an Active DNA Binding Domain Inside a Complex and Highly Expressed Gene—Negating Fusion (Answers Research Journal 6 (2013):367–375). Allow me to quote from the abstract:

A major argument supposedly supporting human evolution from a common ancestor with chimpanzees is the “chromosome 2 fusion model” in which ape chromosomes 2A and 2B purportedly fused end-to-end, forming human chromosome 2. This idea is postulated despite the fact that all known fusions in extant mammals involve satellite DNA and breaks at or near centromeres. In addition, researchers have noted that the hypothetical telomeric end-to-end signature of the fusion is very small (~800 bases) and highly degenerate (ambiguous) given the supposed 3 to 6 million years of divergence from a common ancestor. In this report, it is also shown that the purported fusion site (read in the minus strand orientation) is a functional DNA binding domain inside the first intron of the DDX11L2 regulatory RNA helicase gene, which encodes several transcript variants expressed in at least 255 different cell and/or tissue types. Specifically, the purported fusion site encodes the second active transcription factor binding domain in the DDX11L2 gene that coincides with transcriptionally active histone marks and open active chromatin. Annotated DDX11L2 gene transcripts suggest complex post-transcriptional regulation through a variety of microRNA binding sites. Chromosome fusions would not be expected to form complex multi-exon, alternatively spliced functional genes. This clear genetic evidence, combined with the fact that a previously documented 614 Kb genomic region surrounding the purported fusion site lacks synteny (gene correspondence) with chimpanzee on chromosomes 2A and 2B (supposed fusion sites of origin), thoroughly refutes the claim that human chromosome 2 is the result of an ancestral telomeric end-to-end fusion.

If Professor Moran believes that Dr. Tomkins’ article on chromosome fusion is flawed, then he owes his readers an explanation as to why he thinks so.

The vital flaw in Moran’s reasoning

Leaving aside these points, the real flaw in Professor Moran’s analysis is that he assumes that the essential differences between humans and chimpanzees reside in the 22.4 million-plus mutations – for the most part, neutral or near-neutral – that have occurred in the human line since our ancestors split off from chimpanzees. This is where I must respectfully disagree with him.

In my recent post, Does Professor Larry Moran (or anyone else) understand macroevolution? (March 19, 2014), I wrote:

No scientist can credibly claim to have a proper understanding of macroevolution unless they can produce at least a back-of-the-envelope calculation showing that it is capable of generating new species, new organs and new body plans, within the time available. So we need to ask: is there enough time for macroevolution?

I didn’t ask for a demonstration that macroevolution is capable of generating the neutral or near-neutral mutations that distinguish one lineage from another. Rather, what I wanted was something more specific.

In the post cited above, I endorsed the claim made by Dr. Branko Kozulic, in his 2011 viXra paper, Proteins and Genes, Singletons and Species, that the essential differences between species reside not in the neutral mutations they may have accumulated over the course of time, but in the hundreds of chemically unique genes and proteins they possess, which have no analogue in other species. What Professor Moran really needs to show, then, is that a process of random genetic drift acting on neutral mutations is capable of generating these chemically unique genes and proteins.

In an article titled, All alone (New Scientist, 19 January 2013), Helen Pilcher (whose hypothesis for the origin of orphan genes I critiqued in my last post) writes:

Curiously, orphan genes are often expressed in the testes – and in the brain. Lately, some have even dared speculate that orphan genes have contributed to the evolution of the biggest innovation of all, the human brain. In 2011, Long and his colleagues identified 198 orphan genes in humans, chimpanzees and orang-utans that are expressed in the prefrontal cortex, the region of the brain associated with advanced cognitive abilities. Of these, 54 were specific to humans. In evolutionary terms, the genes are young, less than 25 million years old, and their arrival seems to coincide with the expansion of this brain area in primates. “It suggests that these new genes are correlated with the evolution of the brain,” says Long.

These are the genes that I’m really interested in. Can a neutral theory of evolution, such as the one espoused by Professor Moran, account for their origin? Creationist geneticist Jeffrey Tomkins thinks not. In a recent blog article titled, Newly Discovered Human Brain Genes Are Bad News for Evolution, he writes:

Did the human brain evolve from an ape-like brain? Two new reports describe four human genes named SRGAP2A, SRGAP2B, SRGAP2C, and SRGAP2D, which are located in three completely separate regions on chromosome number 1.(1) They appear to play an important role in brain development.(2) Perhaps the most striking discovery is that three of the four genes (SRGAP2B, SRGAP2C, and SRGAP2D) are completely unique to humans and found in no other mammal species, not even apes.

Dr. Tomkins then summarizes the evolutionary hypothesis regarding the origin of these genes:

While each of the genes share some regions of similarity, they are all clearly unique in their overall structure and function when compared to each other. Evolutionists claim that an original version of the SRGAP2 gene inherited from an ape-like ancestor was somehow duplicated, moved to completely different areas of chromosome 1, and then altered for new functions. This supposedly occurred several times in the distant past after humans diverged from an imaginary ancestor in common with chimps.

However, this hypothesis faces two objections, which Dr. Tomkins considers fatal:

But this story now wields major problems. First, when compared to each other, the SRGAP2 gene locations on chromosome 1 are each very unique in their protein coding arrangement and structure. The genes do not look duplicated at all. The burden of proof is on the evolutionary paradigm, which must explain how a supposed ancestral gene was duplicated, spliced into different locations on the chromosome, then precisely rearranged and altered with new functions—all without disrupting the then-existing ape brain and all by accidental mutations.

The second problem has to do with the exact location of the B, C, and D versions of SRGAP2. They flank the chromosome’s centromere, which is a specialized portion of the chromosome, often near the center, that is important for many cell nucleus processes, including cell division and chromatin architecture.(3) As such, these two regions near the centromere are incredibly stable and mutation-free due to an extreme lack of recombination. There is no precedent for duplicated genes even being able to jump into these super-stable sequences, much less reorganizing themselves afterwards.

Professor Moran asks some more questions about species

In his latest post, What do Intelligent Design Creationists really think about macroevolution? (March 20, 2014), Professor Moran writes:

I’m not very clear on the "Theory" of Intelligent Design Creationism. Maybe it also predicts that it will be difficult to decide whether Neanderthals and Denisovans are separate species or part of Homo sapiens. Does anyone know how Intelligent Design Creationism deals with these problems? Can it tell us whether lions and tigers are different species or whether brown bears and polar bears are different species?

That’s a fair question, and I’ll do my best to answer it.

(a) Why Modern humans, Neandertals and Denisovans are all one species

Modern humans, Neandertals and Denisovans (the last of which broke off from the lineages leading to Neandertal man and modern man at least 800,000 years ago) are all known to have had 23 pairs of chromosomes in their body cells (or 46 chromosomes altogether), as opposed to the other great apes, which have 24 pairs (or 48 altogether).

What’s more, the genetic differences between modern man, Neandertal man and Denisovan man are now known to have been slight – so slight that it has been suggested that they be grouped in one species, Homo sapiens (see here, here, here, here, but see also here).

Finally, Dr. Jeffrey Tomkins addressed the genome of Neandertal man in a 2012 blog post titled, Neanderthal Myth and Orwellian Double-Think (16 August 2012):

Modern humans and Neanderthals are essentially genetically identical. Neanderthals are unequivocally fully human based on a number of actual genetic studies using ancient DNA extracted from Neanderthal remains.

An excursus regarding fruit flies and the identification of species

We noted above that Neandertals and Denisovans (both of which are thought to belong to the same species as modern humans) diverged around 800,000 years ago. However, a recent article by Nicola Palmieri et al., titled, The life cycle of Drosophila orphan genes (eLife 2014;3:e01311, 19 February 2014), indicates that orphan genes have been gained and lost in different species of the fruit-fly genus Drosophila. According to Timetree, Drosophila persimilis and Drosophila pseudoobscura diverged 0.9 million years ago. Drosophila pseudoobscura possesses no fewer than 228 orphan genes.

It seems prudent to conclude, then, that lineages which are known to have diverged more than 1 million years ago can safely be regarded as distinct, bona fide species.

N.B. A Science Daily press release at the time of publication of the article makes the following extravagant claim: “Recent work in another group has shown how orphan genes can arise: Palmieri and Schlötterer’s work now completes the picture by showing how and when they disappear.” It appears that this “other group” is actually a group of researchers at the University of California, Davis, who have shown in a recent study that new genes are being continually created from non-coding DNA, more rapidly than expected. Here’s the reference: Li Zhao, Perot Saelao, Corbin D. Jones, and David J. Begun. Origin and Spread of de Novo Genes in Drosophila melanogaster Populations. Science, 2013; DOI: 10.1126/science.1248286. I haven’t read the article, but judging from the press release, it seems that the authors haven’t identified a mechanism for the creation of these genes, as yet: “Zhao said that it’s possible that these new genes form when a random mutation in the regulatory machinery causes a piece of non-coding DNA to be transcribed to RNA.”

Dr. Jeffrey Tomkins provides a hilarious send-up of this logic in his article, Orphan Genes and the Myth of De Novo Gene Synthesis:

The circular form of illogical reasoning for the evolutionary paradigm of orphan genes and its counterpart ‘de novo gene synthesis’, goes like this. Orphan genes have no ancestral sequences that they evolved from. Therefore, they must have evolved suddenly and rapidly from non-coding DNA via de novo gene synthesis. And, are you ready? De novo gene synthesis must be true because orphan genes exist – orphan genes exist because of de novo gene synthesis. As you can see, one aspect of this supports the other in a circular fashion of total illogic – called a circular tautology.

At this stage, I think that press claims that scientists have solved the origin of orphan genes look decidedly premature, to say the least.

(b) Lions, tigers and leopards

What about lions and tigers? According to Timetree, lions and leopards diverged only 2.9 million years ago, while lions and tigers diverged 3.7 million years ago. All of these “big cats” represent different species of the genus Panthera. By comparison, humans and chimps (which are unquestionably different species) are said to have diverged 6.3 million years ago.

A recent article by Yun Sung Cho et al. in Nature Communications (4, Article number: 2433, doi:10.1038/ncomms3433, published 17 September 2013), titled, The tiger genome and comparative analysis with lion and snow leopard genomes, makes the following observations:

The Amur tiger genome is the first reference genome sequenced from the Panthera lineage and the second from the Felidae species. For comparative genomic analyses of big cats, we additionally sequenced four other Panthera genomes and tried to predict possible big cats’ molecular adaptations consistent with the obligatory meat eating and muscle strength of the predatory Panthera lineage. The tiger and cat genomes showed unexpectedly similar repeat compositions and high genomic synteny, and these indicated strong genomic conservation in Felidae. These results could be supported by the recency of the 37 species-Felidae radiation (<11 MYA)(15) and well-known hybridizations in captivity among subspecies in Felidae lineage such as liger and tigon. By contrast, the ratio of repeat components for the great apes was considerably different among species, especially between human and orang-utan(28), which diverged about the same time as felines. The breaks in synteny that we observed are likely occasional rare sporadic exchanges that accumulated over this short period (<11 MYA) of evolutionary time. The paucity of exchanges across the mammalian radiations (by contrast to more reshuffled species such as Canidae, Gibbons, Ursidae and New World monkeys) is a hallmark of evolutionary constraints.

Figure 1b in the article reveals that tigers have certain genes which cats lack. However, I was unable to ascertain whether tigers had any chemically unique orphan genes that lions or leopards lacked.

A Science Daily report titled, Tiger genome sequenced: Tiger, lion and leopard genomes compared (September 20, 2013) which discussed the findings in the above-cited article, added the following information:

Researchers also sequenced the genomes of other Panthera – a white Bengal tiger, an African lion, a white African lion, and a snow leopard – using next-gen sequencing technology, and aligned them using the genome sequences of tiger and domestic cat. They discovered a number of Panthera lineage-specific and felid-specific amino acid changes that may affect the metabolism pathways. These signals of amino-acid metabolism have been associated with an obligatory carnivorous diet.

Furthermore, the team revealed the evidence that the genes related to muscle strength as well as energy metabolism and sensory nerves, including olfactory receptor activity and visual perception, appeared to be undergoing rapid evolution in the tiger.

I should add that although lions and tigers can interbreed, the hybrid offspring (ligers and tigons) are nearly always sterile – the males invariably so.

From the above evidence, it appears likely that lions, tigers and leopards are genuinely different species, and that each species was intelligently engineered.

(c) Brown bears and polar bears

The case of brown bears and polar bears is much more difficult to decide, as there appear to be no online articles on orphan genes in these animals. However, Timetree indicates that they diverged 1.2 million years ago, which is a little earlier than the time when Drosophila persimilis and Drosophila pseudoobscura diverged (0.9 million years ago). It therefore seems likely that these two bears belong to different species.
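Pulling sections (a), (b) and (c) together, the rough rule of thumb I have been applying can be stated in a few lines. The sketch below is only a heuristic summary of the divergence dates quoted in this post (mostly from Timetree); orphan-gene data, where available, would of course be the better test.

```python
# Heuristic used in sections (a)-(c): lineages whose divergence comfortably
# exceeds about 1 million years are treated as separate species; below that,
# divergence time alone does not settle the question. Dates (millions of
# years) are those quoted in this post, mostly from Timetree.

DIVERGENCE_MYA = [
    ("modern humans", "Neandertals/Denisovans", 0.8),
    ("Drosophila persimilis", "Drosophila pseudoobscura", 0.9),
    ("brown bear", "polar bear", 1.2),
    ("lion", "leopard", 2.9),
    ("lion", "tiger", 3.7),
    ("human", "chimpanzee", 6.3),
]

THRESHOLD_MYA = 1.0

for a, b, mya in DIVERGENCE_MYA:
    verdict = ("safely separate species" if mya > THRESHOLD_MYA
               else "divergence time alone does not settle it")
    print(f"{a} vs {b}: {mya} Mya -> {verdict}")
```

On this reckoning, the Drosophila pair shows that full speciation can occur in under a million years, while the human/Neandertal/Denisovan case shows that it need not; that is why I treat the one-million-year mark as a sufficient rather than a necessary condition for recognizing separate species.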

Conclusion

I shall stop there for today. In conclusion, I’d like to point out that Professor Moran nowhere addressed the problem of the origin of orphan genes in his reply, so he didn’t really answer the first argument in my previous post, which was that we cannot claim to understand macroevolution until we ascertain the origin of the hundreds of chemically unique proteins and orphan genes that characterize each species.

To Professor Moran’s credit, he did attempt to answer my second argument (why is there so much stasis in the fossil record?), by suggesting that even large populations will still change slowly in their diversity, as new alleles increase in frequency and old ones are lost, but that morphological change is “more likely to occur during speciation events when the new daughter population (species) is quite small and rapid fixation of rare alleles is more likely.” But as I argued previously, why, during the times of environmental upheaval described by Professor Prothero, don’t we see a diversification of niches? Why don’t species branch off? Why do we instead see morphological stasis persisting for millions of years? That remains an unsolved mystery.

Finally, it seems to me that Professor Moran has solved the “time” question (my third argument) only in a trivial sense: he has calculated that the requisite number of mutations separating humans and chimps could have gotten fixed in the human line. I have to say I found his claim that in the last five million years, 22.4 million mutations have become fixed in the lineage leading to human beings, utterly astonishing. But even supposing that this figure is correct, what it overlooks is that the mutations accounting for the essential differences between humans and chimps aren’t your ordinary, run-of-the-mill mutations. Many of them seem to have involved orphan genes, which means that until we can explain how these genes arise, we lack an adequate account of macroevolution.

Comments
Evolve, I know you think you got evolution all 'scientifically proven' with all your genetic similarity evidence, (all the while neglecting to mention very similar sequences in highly divergent species, Bats and Whales for instance), but, call me a skeptic if you will, but could you be so kind as to actually show us a demonstration of this awesome Darwinian mechanism in action? It would go a long way towards helping you make your case that what you believe to be true, that material processes can generate highly integrated information in genomes, actually is true!
The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html
bornagain77
March 24, 2014, 1:43 PM PDT
gpuccio @ 79 ///So, it is true that many new genes derive from non coding sequences, as I have argued here myself, but they cannot do that by RV alone (including drift), and NS cannot act in that scenario. Therefore, only Intelligent Design can explain that kind of result./// I addressed this already. Your creation model utterly fails to explain why the Poldi gene (I cited above) has more functional mutations in closely-related mice species, but less functional mutations in distantly-related rats and still less functional mutations in even more distantly related humans. The gene is not expressed in both rats & humans, so what was the designer doing with making the rat version look more similar to mice than the human version?! He probably wanted to make it look as if evolution happened! I hope you realize the fallacy of your ID argument.
Evolve
March 24, 2014, 1:26 PM PDT
Genetic similarity can be accounted for via a common design or convergence. However it still remains that the alleged fusion occurred in the human lineage and therefore had nothing to do with any alleged common ancestor with chimps.
Human Chromosome 2 From a Design Perspective
Joe
March 24, 2014, 1:10 PM PDT
Evolve: I don’t get you. The well-understood mechanisms of mutation, natural selection, random genetic drift and neutral theory explain the evolution of all life that we know of, including humans.
Care to provide a gap-free account of the emergence of any cell type, tissue type, organ or body plan using these "mechanisms of mutation, natural selection, random genetic drift and neutral theory" that isn't actually a degradation (e.g. cancer cells)? Thanks
CentralScrutinizer
March 24, 2014, 1:02 PM PDT
VJTorley @58 ///ENCODE is right. Tomkins pulls the data from ENCODE and there is clearly a gene transcribed across the fusion event. The gene is rather larger than the 1500 bp that Miller quotes. The gene is a helicase (if I remember correctly) expressed in various tissues throughout development. If you read Tomkins’ paper, he demonstrates where the supposed telomere ends are located, where additional centromeres are supposedly located, and how the region is transcriptionally active (a hallmark associated with non-telomeric DNA)./// I’ll reserve judgement on the exact location of the DDX11L2 gene in relation to the fusion site, since that needs to be cross-checked in detail. But there are other points Tomkins ignores and misrepresents: The DDX11L2 gene present on human chromosome 2 is part of the DDX11L gene family, members of which are present in humans as well as other apes. These are homologous genes sharing evolutionary ancestry and they arose by duplication events. The following paper by Costa et. al reports that the human DDX11L genes share 98% sequence homology to that of chimps and 91% to that of rhesus monkey. Fig. 3 in the same paper shows that the chimp and gorilla DDX11L genes are detectable using probes derived from a human version of the gene, further underlining their homology. Moreover, DDX11L genes exclusively localize to the ends of chromosomes in all the apes examined: http://www.biomedcentral.com/content/pdf/1471-2164-10-250.pdf Thus, the presence of a DDX11L gene in the middle of human chromosome 2 makes perfect sense, if these genes are exclusively located at chromosome ends and human chromosome 2 was produced by the end-to-end fusion of two ancestral ape chromosomes. Tomkins ignores these observations because they directly contradict his claims, although he quotes the paper to cherry-pick what he wants! Tomkins tries to salvage the situation by saying chimps don’t have DDX11L2 in their corresponding chromosomal end portions. But chromosomal ends are well known to be highly unstable & dynamic areas that undergo deletions, duplications & inversions. So, chimps could have simply lost their DDX11L2 versions or these genes may have jumped on to other chromosomes. Tomkins further explains in length how human DDX11L2 is transcribed, has transcription factor binding sites and microRNA binding sites. But all these still do not indicate there’s a functional RNA or protein product! The transcript produced can simply be noise. Such spurious transcription is widespread as even Tomkin’s pet data, ENCODE, shows. Tomkins doesn’t attempt any experiments to identify or characterize the RNA/protein product under question at, despite repeatedly assuming & claiming that the gene is functional! Now, even if a functional protein product is identified for DDX11L2, it can be due to neo-functionalization of the pseudogene. Some pseudogenes may acquire novel functions as they accumulate mutations. This is hardly unknown or surprising. Tomkins mostly focuses on the fusion site. But he also tries to dismiss the massive similarity found along the length of human chromosome 2 and its counterparts in apes, by invoking a kind of common design argument. But, there too, he fails to account for the presence of increasing disparity in banding pattern with increasing distance between species, exactly as predicted by evolution. See Fig. 
2 in the following paper: http://genome.cshlp.org/content/12/11/1651.long Human chromosome 2 banding pattern is most similar to that of chimps, slightly less similar to that of gorillas and still less to that of orangutans. There’s a reason why Tomkins published his piece in a creation journal and not in a proper science journal, where his dubious claims would have prevented it from getting past peer-review.
Evolve
March 24, 2014, 12:41 PM PDT
Update on the possibility of chromosome 2 fusion in the human line: I've gotten some more feedback from relevantly qualified academics who were willing to "weigh in" on the case, and here's a short summary of their findings: (1) Not all of them agree with Tomkins. Some do, some don't. Of those who disagree with him, nobody is saying that chromosome 2 fusion would have been a routine affair; on the contrary, it might have been quite difficult. But some of them say we shouldn't rule it out. (2) Tomkins obtained his data from ENCODE, and he claims there is clearly a functional gene transcribed across the site of the alleged fusion event. But some of the experts disputed this. One wrote: "I looked into Tomkins' story of a gene straddling the chromosome 2 fusion site. There is indeed a transcript that straddles the site. However, it is a pseudogene, DDX11L2, which is one of a large family of 17 DDX11 pseudogenes. In this case, the first exon is distal to the fusion site." Since DDX11L2 is one of a large family of pseudogenes, it could be argued that one more or less is not going to make a difference. Moreover, the proportion of pseudogenes that have been shown to be functional is miniscule, and the chances that all 17 pseudogenes were functional appears very remote. In short: a new transcript seems to have formed by the fusion event, but we already know that transcription occurs across the genome, and in any case, this would have been a non-functional transcript. (3) Some experts had claimed that there were no clear-cut cases of telomere to telomere fusion. However, there seems to be one: Ventura, M.. (2012). The evolution of African great ape subtelomeric heterochromatin and the fusion of human chromosome . Genome Research, 22(6). See also this paper: Giannuzzi, G.. (2013). Hominoid fission of chromosome 14/15 and the role of segmental duplication. Genome Research, 23(11). (4) The argument that if chromosomes did fuse spontaneously, we should see it all the time, does not apply here, as this is not a simple joining of ends. The postulated event was not a real case of telomere-to-telomere fusion. Instead, it was more of a recombination event that lopped off most telomeric sequences, rather than a fusion event. See http://genome.cshlp.org/content/22/6/1036.long . (5) What's more, there's evidence of a degenerate centromere in humans at the location where the other chimp chromosome 2 has a centromere. (6) However, the fusion of the 2 chromosomes would have had to have been accompanied by the simultaneous loss of the other centromere, which is unlikely, but by no means impossible. (7) The argument that this can't be a fusion event because it's degenerate, is a circular one. On the other hand: (1) Perhaps the real reason that only a miniscule proportion of pseudogenes have been identified as functional is that no-one even bothers to look for a function, because they are already convinced by evolutionary dogma that there won't be any function. (2) Non-essential doesn't mean non-functional. (3) A problem with degeneracy still remains: when are these alleged alterations from the normal telomere (i.e. mutations) occurring, and why do they occur at a high frequency only before they get propagated to the entire human population? (4) Why isn't the occurrece of fusion hypervariable, across the human population? The fact that it isn't, argues against fusion. They're the key points. Please bear in mind that I'm not a geneticist, so I can't offer any opinions of my own here. 
However, it seems to me that Tomkins has not yet made a knock-down case for the impossibility of the alleged fusion event leading to chromosome 2 in the human line. So far, it's a verdict of "Case not proven," as the Scots say. Cheers.
vjtorley
March 24, 2014, 12:41 PM PDT
gpuccio #69: I'm not necessarily advocating or rejecting a particular copying mechanism--although admittedly it would be more convenient (from a religious perspective) if there wasn't any common descent. What I'm mainly interested in is a method for determining whether some arbitrary similarity 1) can or 2) should be attributed to common descent, like the design inference chart that's been promoted in other posts. So in no way am I disputing that common descent is a reasonable (let alone possible) explanation for some evidence. But before it becomes the "best possible explanation" (or best current explanation), it's necessary to address and exclude other explanations. So back to what you wrote. I agree, it's fair to say that the designer would need some kind of software repository if common descent is not involved. Although perhaps he just has an expansive memory. Who knows. a) No reason, necessarily. However...what you do mean by "reasonable access" to the existing copies? If you or I were going to update or modify some DNA, that would require a laboratory with a lot of specialized equipment. Which of course you would have needed in the first place, to synthesize life. In the scheme of things, throwing in some kind of data storage unit seems like a minor detail. I would sure want one, to keep track of all my original work. But who knows. Either way it does lead to some interesting questions about what resources the designer has at his disposal. "Why did God do it that way?" is an important question, when taken seriously and not as a childish dismissal. Unless the designer doesn't have access to the original code, it's not immediately apparent why he would prefer to create a new organism by using an older one (that's been living for a while) as the starting template. It's not like the code has improved (giggle!) or anything. However, maybe that is just what he did. And maybe we will never know what the reason was. Shrug. b) That's fair. And if those particular modifications are truly random (and presumably within the limits of evolution) and there truly is no functional difference, then I agree it would be a strong case in favor of common descent. I don't mean "truly" in a cheeky way, I'm just leaving the door open (as you mentioned at the end of your first response) for future discover and analysis, since it seems like we still have a long way to go. In contrast, I was impressed by the endogenous retrovirus argument for a long time--until I read (here, of course) how the choice of integration sites isn't plainly random. Honestly I'm not sure what to make of varying levels in sequence homology, since I don't know what exactly is being compared. For example I find it surprising that you could have such high levels of code variation without any difference in function. I don't doubt the devil is in the details. As I said, I am a layman as far as genetics (and biology) goes; my degree is in ME and I have some familiarity with programming. I also have to confess I have never investigated an ID perspective on common descent (that's what I'm doing now!), and up until now I've viewed it with great skepticism because the people usually promoting it are Darwinists.Timmy
March 24, 2014, 12:39 PM PDT
VJ: About Helen Pilcher’s "suggestions", a few brief comments: a) Gene duplication is only a way to transform a functional gene into non coding DNA. Functional genes cannot traverse the search space because negative NS prevents them from doing that. Non functional genes (be them non coding intergenic sequences or duplicated inactivated genes) can traverse the search space as they like, but get no help from NS, therefore they can arrive nowhere useful. b) I can't see how a new protein with a new function could benefit from the "switches" of a different protein. When that happens by chance, in somatic cells, what we get is sometimes a tumor. Or just a non functional cell. And anyway, we have to have a new functional protein to use a switch. What is the use of a switch associated to a non functional sequence? c) I am not aware that non coding DNA is being constantly translated in cells, in the hope to get something functional from it. I suppose we should have noted that, because the proteome can be explored much more easily than the transcriptome. ENCODE has taught us that the majority of the genome is transcribed, not that it is translated. And the transcription of non coding DNA has specific functional purposes, which are mostly related to transcription regulation, and take place in the nucleus. d) There is no continuum of "proto genes". We know very well that the 2000 superfamilies listed in SCOP are completely unrelated at the sequence level. What else do we need to understand that they are isolated functional islands? Small peptides are important, but they have mainly regulatory roles. The true effectors (enzymes and similar) are big molecules. Just look at the 20 aminoacyl tRNA transerases, which are extremely old and are the true effectors of transcription, IOWs the repository of the key to translate the genetic code: they are huge proteins, some of them of more than 1000 AAs, and all of them, if I remember well, of more than 500 AAs. That's what neo darwinists have to explain by random variation, or by some mysterious natural selection which should have taken place before the genetic code even existed. And they accuse us of invoking magic!gpuccio
March 24, 2014, 12:31 PM PDT
Evolve: My compliments! You understand nothing of ID and its arguments, and yet you boldly come here to give us poor fools your supreme knowledge. Unfortunately, I have some bad news for you. a) ID is about the informational problem: how does functional information in biological beings arise? You seem to believe that it arises by RV alone. That is a strange position, that even the most die hard neo darwinists usually try to avoid. Well, as VJT has already mentioned in his last answer to you, the simple problem is that there are practically zero probabilities to get the functional sequences that we observe only by RV. Try to read something about ID, and you will understand why. b) It's not ID that invokes magic. It's you that are invoking magic. How else could random variation events generate all the functional sequences of proteins (and much more)? c) Random genetic drift is just part of random variation. It adds nothing. Any sequence can be fixed by drift, so the probabilities of any particular sequence to be found in a random walk do not change at all because of the existence of drift. What counts is only the ratio between the target space (the functional sequences) and the search spaces (all possible sequences), IOWs, the number of states that should be tested versus the probabilistic resources of the system. And believe me, there is no game for neo darwinism without NS (well, indeed there is no game even with NS, but without it, you cannot even start to discuss!). So, take the time to read and understand what ID says, before coming here with your statements. So, it is true that many new genes derive from non coding sequences, as I have argued here myself, but they cannot do that by RV alone (including drift), and NS cannot act in that scenario. Therefore, only Intelligent Design can explain that kind of result.gpuccio
March 24, 2014, 12:13 PM PDT
Hi Evolve, Thank you for your comment. I'd like to draw your attention to gpuccio's remark, "You realize that an unexpressed segment of DNA cannot be selected for function, don’t you?" Pardon my ignorance of genetics, but I think the underlying idea that gpuccio is driving at is this. Very roughly, a gene can be defined as a piece of DNA coding for a protein (yes, I know that's horribly simplistic). It does this by creating an RNA copy of the DNA, and the RNA copy has to get into a protein-making factory in the cell. As you're aware, the research of Douglas Axe suggests that the probability of a given sequence of 100 amino acids actually being able to fold up into a protein that can carry out a biologically useful function is very low: about 1 in 10^77. (You would probably contest that figure, but humor me.) What's more, as far as we can tell, all of these sequences are equiprobable: Nature has no built-in bias in favor of functionality. "Biochemical predestination" is out. An unexpressed segment of DNA has no way of knowing in advance: (a) whether the sequence of 100 amino acids that it codes for represents a biologically useful molecule (yes or no); or (b) how far "off base" it is, in terms of being able to code for a biologically useful protein (e.g. "just fix amino acids 17, 43, 51 and 94, and you're done!"). Even if we suppose that it could eventually hit on a solution by trial and error, the problem is that there aren't anywhere near 10^77 trials, in the history of life on Earth. Hence it seems that hitting on a DNA that could code for a viable sequence of amino acids would be tantamount to a miracle. Or as Francois Jacob famously put it, "the probability that a functional protein would appear de novo by random association of amino acids is practically zero." Now let's look at Helen Pilcher's specific suggestions in the article she wrote for New Scientist (see http://evolutionarygenomics.imim.es/AllalonebyHelenPilcher_NewScientist19Jan2013.pdf ), which I cited in my post: (a) orphan genes arise by a process of gene duplication. Except that in the vast majority of cases they don't, as Pilcher herself admits; (b) the orphan genes sit next to and slightly overlap existing, older genes, so the orphans might be able to “borrow” their switches. That doesn't sound promising, as these are genes that code for chemically unrelated proteins - singleton, as Kozulic calls them in his 2011 Vixra article; (c) the protein-making factories in complex organisms are constantly churning out new proteins, allowing them to be “tested” all the time. Even if the non-coding sections of DNA are doing this, the number of trials is still way below the requisite number (10^77) for success to become likely; (d) there is a whole continuum of "proto-genes" which gradually gather useful mutations over time. As I understand it, there's a certain threshold number n of amino acids, below which none of the combinations is capable of performing a useful biological function, and what's more, n is fairly high (around 100), so the possibility of building up step by step is ruled out. Moreover, the "Methinks it is like a weasel" argument doesn't apply here; nothing would inform the proto-gene which parts of its structure need fixing. Now I know you will say that a chain of argumentation is only as strong as its weakest link, but do you see what I'm getting at here? There's a real problem, and it's not going to go away by saying that we already know that new functional genes arise. Sure they do. 
But the question is: do we know that they arise by a viable natural mechanism? And that's precisely what we don't know. Cheers.
vjtorley
March 24, 2014, 11:51 AM PDT
Evolve sez:
The well-understood mechanisms of mutation, natural selection, random genetic drift and neutral theory explain the evolution of all life that we know of, including humans.
That is the propaganda, anyway.
The predictions made by these theories have been tested and validated numerous times.
What predictions? Natural selection doesn't make any predictions, drift doesn't make any predictions and neutral theory doesn't make any predictions. You are just very gullible.
Joe
March 24, 2014, 11:34 AM PDT
Evolve is confused. Evolutionism doesn't explain "molecular, anatomical and fossil data". Evolutionism can't even explain eukaryotes, Evolve.
Joe
March 24, 2014, 11:31 AM PDT
Hi Sal, Thanks very much for your clear exposition of the logic behind the statement that the probability of a mutation going to fixation is mathematically the same as its frequency in the population. I instantly realized what was wrong when I read this sentence:
Now there are only 4 possible scenarios after infinite time (aka Generation infinity) since we're almost guaranteed that one will go to fixation, we just don't know which mutant prevails.
1. This is a mathematical idealization. The phrase "after infinite time" has no real meaning, as there is no "after" infinity. Moreover, (n+infinity) is the same as infinity. 2. In the real world, where the environment is in a state of continual upheaval and the assumption of constancy is never true, even for a short period, we are not guaranteed that one allele will eventually go to fixation. 3. In a population where there are n individuals, each with its own distinct version of a given gene, we can reasonably assume that the ultimate triumph of individual A's version is just as likely as the ultimate triumph of individual B's version (especially if all versions are neutral). But it is far from clear to me that if individual A has version X and all other individuals have version Y, the ultimate triumph of individual A's version is (1/n) as likely as the ultimate triumph of some other individual's version (Y). For any number of a variety of reasons, A's version might have a far greater disadvantage than that: it could be (1/n^2), for all we know. To assume that A has just as much chance as "any other individual" is begging the question, for individual A is not in a race with "any other individual," but with every other individual. A's odds of winning could be a lot longer than n:1. Maybe I'm just terribly thick. But even as an idealization (ignoring #2), it is far from clear to me that the outcome will be as described in the model case. And in the real world, where equilibrium is always being disturbed, that assertion seems doubly doubtful. Am I missing something? Thanks very much for your time and trouble by the way, Sal.vjtorley
March 24, 2014, 11:09 AM PDT
Timmy @ 63 ///What I want is a mechanism actually capable of engineering the new and improved systems found in humans. Is that too much to ask? /// I don’t get you. The well-understood mechanisms of mutation, natural selection, random genetic drift and neutral theory explain the evolution of all life that we know of, including humans. The predictions made by these theories have been tested and validated numerous times. If you think a magical designer is involved, then you’re more than welcome to topple evolutionary theory by elaborating how it better explains all the observed molecular, anatomical and fossil data. No one is stopping you from publishing your world-changing discovery in Science & Nature. Go ahead!
Evolve
March 24, 2014, 10:03 AM PDT
VjTorley @ 59 & 60 ///No. They are ancestral genes that have been mutated and lost in a proportion of the population that includes where they got the reference sequence from. It is the complete opposite of what they are claiming./// Wrong! Ancestral genes that have been mutated and lost in extant populations can be detected by phylogenetic methods. In the Science paper they find that the new genes are derived from ancestral intergenic unexpressed sequences. Here’s another paper reporting the origin of a new gene called Poldi from an intergenic region in mouse: http://www.sciencedirect.com/science/article/pii/S0960982209014754 The corresponding region is present in humans and rats, but do not produce a transcript. ///Evolve fails to recognize the human error and computer error associated with annotating genomes./// Lol, now you can start blaming the methodology! These are all standard procedures molecular biologists use. gpuccio @ 31 & 70 ///So, you realize that the paper tells us nothing about the mechanism of mutation, don’t you? You realize that according to this scenario the new gene should have evolved by RV alone, without any help from NS, at least until the new gene is ready and functional, don’t you? You realize that an unexpressed segment of DNA cannot be selected for function, don’t you? You realize that the probabilities of finding new functional genes that way is practically nil, don’t you? Is that evidence for a neo darwinian mechanism? Absolutely not. Indeed, it is evidence against it, and for design. As I have tried to explain in my answer to Evolve in my post #31./// How is this evidence for design? New genes arise by mutations in intergenic sequences which allow it to be transcribed, spliced and expressed - nothing that we don’t understand already! Natural Selection is not the only process by which new mutations get fixed, you can get that by random genetic drift too. Another mechanism that we understand! In Fig. 3 of the Current Biology paper I posted above, they show changes in and around the new gene in a phylogenetic context. Guess what? The region is more similar in related mouse species and becomes more divergent in rats & humans as distance between species increases, exactly as evolution predicts. No magic is required here. Even if you explicitly want to invoke it, magic becomes superfluous since we can explain the data through already known natural mechanisms.Evolve
March 24, 2014, 9:53 AM PDT
VJ: You have perfectly understood my point. Regarding the examples of new genes coming out of non coding genes, especially by transposon activity, I have found many examples in the literature, many of them discussed here at UD in the past. Now I have not the references, so I quote just a couple. One is the RAG1 protein, which appears in fish and is one of the main components of the adaptive immune system. It is considered transposon derived. Another example is a human specific protein, probably important for nervous development, which had clearly recognizable homologies in primates, where it was never transcribed and translated, and was therefore pure non coding DNA. In humans, a very simple final mutation transformed it into an ORF, and therefore it was translated and active (unfortunately, I don't remember the reference). The drosophila paper is perfectly compatible with this scenario. Moreover, the new genes were absent not only from the melanogaster reference sequence (however accurate it may be), but also from other drosophila parent species, if I remember well. I do believe that the designer, or designers, is constantly and intelligently working. That would also explain why we are finding that many genes implied in complex regulations in higher species are often already present in earlier simpler forms of life, where their role is still an enigma. The emergence of a new species is certainly a special creative moment, but I think it could rely on previous preparatory work. Let's call it "Punctuated Intelligent Design". :) Fascinating? Yes, fascinating.
gpuccio
gpuccio
March 24, 2014 at 09:34 AM PDT
Hi gpuccio, In answer to your question, the source whom I consulted for #60 above adds:
Here is the clue: " newly-transcribed genes which were absent from the D. melanogaster reference sequence" [a quote from an article cited by Evolve in #24 - VJT]. What do you suppose the D. melanogaster reference sequence is? If you thought it was a consensus sequence you would be wrong. If you thought it was the definitive sequence you would be wrong. If you thought it was a representative sequence you would still be wrong. The reference sequence is simply the first sequence that was obtained.
I hope that makes sense to you. For my part, I have no idea what a reference sequence is. I can see your point in #70 that the explanation in #60 appears rather ad hoc. I was, however, interested in this remark of yours: "there is now rather ample evidence that new functional genes can arise, and have arisen, from non coding DNA segments in the course of natural history, and often through some transposon activity." Can you recommend an article written for the layperson, summarizing this evidence? Looking at your comment in #31 above, I find much to agree with. I especially liked this paragraph, addressed to Evolve:
So, you realize that the paper tells us nothing about the mechanism of mutation, don't you? You realize that according to this scenario the new gene should have evolved by RV alone, without any help from NS, at least until the new gene is ready and functional, don't you? You realize that an unexpressed segment of DNA cannot be selected for function, don't you? You realize that the probabilities of finding new functional genes that way is practically nil, don't you?
Hear, hear!

If I understand you rightly, you seem to believe that the Designer (God) periodically creates new functional genes and pops them into non-coding sections of our DNA, knowing (and presumably intending) that they'll be expressed sooner or later. What's more, you seem to think this creative process goes on continually, throughout the lifetime of a species (usually reckoned at 5 million years or so): it happens "in the course of natural history," as you put it, and you maintain that it's been verified to occur.

Personally, I had been inclined to favor a model in which the Designer creates the 1,000 (or is it 100?) unique orphan genes that characterize a species all up-front, at the moment when that species appears in the fossil record. That would make the creation of orphan genes a rare event. It would of course be even rarer if different species of organisms tend to appear (and die out) in waves, every few million years or so. I found that an appealing hypothesis; it is also compatible with a form of essentialism, as I argued in an earlier post in response to Professor Moran.

But now look at what happens if we use your model:

• Average lifetime of a species: 5,000,000 years.
• Average number of orphan genes that characterize that species and no other: 100 to 1,000.
• Frequency with which orphan genes appear in that species, assuming they're created continually over time: once every 5,000 to 50,000 years.
• Number of species on the planet: around 10,000,000.

If new species appear and disappear fairly independently of one another, then we should be seeing 200 to 2,000 creative acts per year by the Designer, continually. Now, I'm not saying that can't happen. But it does take a bit of getting used to. If you're right, we should be able to observe God in the very act of creating, if we have enough scientists and enough cell microscopes. Fascinating!
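A minimal Python sketch of that back-of-the-envelope arithmetic, using the same round figures quoted above (assumptions, not measurements):

```python
# Back-of-the-envelope rate of orphan-gene "creative acts" under the
# continuous-creation model discussed above. All inputs are the round
# numbers quoted in the comment, not measured values.

species_lifetime_years = 5_000_000        # assumed average lifespan of a species
orphan_gene_counts = (100, 1_000)         # assumed orphan genes unique to a species
living_species = 10_000_000               # assumed number of species on the planet

for n_genes in orphan_gene_counts:
    # One orphan gene per species every (lifetime / gene count) years...
    years_per_gene = species_lifetime_years / n_genes
    # ...so, summed over all living species, the expected number of events per year:
    events_per_year = living_species / years_per_gene
    print(f"{n_genes} orphan genes per species -> one every {years_per_gene:,.0f} years "
          f"-> about {events_per_year:,.0f} creative acts per year worldwide")
```

Run as written, this reproduces the figures in the comment: one new gene every 5,000 to 50,000 years per species, or roughly 200 to 2,000 events per year across all species.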
vjtorley
March 24, 2014 at 07:42 AM PDT
VJ: If I understand your answer in post #60 (from your informed source) to Evolve's post #24 correctly, I must say that I don't agree with your source, and I agree with Evolve (on the facts, not on the interpretation of them). To explain the observations in the cited paper as genes lost in the majority of strains and of related species seems a really ad hoc explanation. I cannot accept it unless your source has some objective facts in its support.

Moreover, there is now rather ample evidence that new functional genes can arise, and have arisen, from non-coding DNA segments in the course of natural history, and often through some transposon activity. Is that evidence for a neo-Darwinian mechanism? Absolutely not. Indeed, it is evidence against it, and for design, as I have tried to explain in my answer to Evolve in post #31.

So why do you think there is any problem with new functional genes arising from non-coding segments? That's exactly what a designer would do to implement new information, while a neo-Darwinian mechanism could never do that, because it would have to use only RV, without any help from NS.
gpuccio
March 24, 2014 at 01:44 AM PDT
Timmy: I appreciate your comments very much. I will try to answer, and the answer is simple.

As you say, a copying mechanism is the best explanation for the similarities and differences we observe in the same functional protein across time. But, if I understand you correctly, you are suggesting that the copying does not happen in the "hardware" (the existing living beings), but elsewhere (the "software" is copied). Have I understood you correctly? So, let's say that a designer has a repository of the software he used to build species A; then after some time he decides to build a new species, B, and he starts from his stored software for species A, modifies it with the new implementations, and then builds species B. Is that the idea? That would be common design, but not from scratch: the solutions already developed would be reused. OK, that's fine with me, but I would ask two things:

a) Why should we believe that the designer has some other, non-physical repository (possible, but we obviously have no direct evidence of that), when he can reasonably access the copies that are already around in order to implement the new functions? That would instead be common descent (but, obviously, with engineered modifications).

b) What would be the observed difference between the two scenarios? It's simple. In the second scenario (common descent), neutral modifications which happened during the time species A was "on the market", and which did not modify the function, would be retained in species B, while that would not happen in the common-design scenario, even with reuse of the software from the original repository.

Now, that's exactly what we apparently observe, and in great abundance. We know that protein function is redundant: many different sequences can generate the same structure and function. And in homologous proteins, let's say in bacteria and humans, we often observe only 30-40% sequence homology (while the function is the same), whereas we usually observe 80-90% between mouse and human, for example, and perhaps 60% between C. elegans and human. That's the general pattern, and frankly I don't believe that it can be explained by saying that each observed difference is due to functional differences, and that none of them is neutral. There are even synonymous mutations (DNA mutations which code for the same amino acid and therefore do not alter the protein sequence), which are neutral by definition (OK, OK, I know that it is not always true, but in general it probably is), and they follow the same pattern.

I would like to know your opinion on that.
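Point (b) can be made concrete with a toy simulation. The sketch below is purely illustrative (the sequence length and substitution counts are arbitrary assumptions): it derives species B either from the current copy of species A or from a stored original, and counts how many of A's accumulated neutral changes B shares in each case:

```python
# A toy illustration (not a real evolutionary model) of point (b) above:
# under "common descent" species B inherits the neutral changes species A had
# already accumulated, whereas under "common design from a stored repository"
# it does not. All names and numbers here are made up for illustration.
import random

random.seed(1)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # the 20 amino acids
L = 300                                     # length of the toy protein

def mutate(seq, n_subs):
    """Return a copy of seq with n_subs random (assumed neutral) substitutions."""
    seq = list(seq)
    for pos in random.sample(range(len(seq)), n_subs):
        seq[pos] = random.choice(ALPHABET.replace(seq[pos], ""))
    return "".join(seq)

def shared_changes(ancestor, a, b):
    """Count positions where A and B both differ from the ancestor in the same way."""
    return sum(1 for x, p, q in zip(ancestor, a, b) if p == q != x)

ancestor = "".join(random.choice(ALPHABET) for _ in range(L))

# Species A drifts for a while before species B appears.
species_a = mutate(ancestor, 60)

# Scenario 1: common descent -- B is derived from the *current* species A.
b_descent = mutate(species_a, 20)

# Scenario 2: common design -- B is rebuilt from the *original* stored ancestor.
b_design = mutate(ancestor, 20)

print("shared neutral changes (descent):", shared_changes(ancestor, species_a, b_descent))
print("shared neutral changes (design): ", shared_changes(ancestor, species_a, b_design))
```

Under the descent scenario, B inherits nearly all of the changes A had already accumulated; under the repository scenario, it shares essentially none of them. That is the observable difference at issue.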
gpuccio
March 24, 2014 at 01:22 AM PDT
VJ,
Why is this so? (It might sound obvious to you, but I’m afraid it isn’t to me.)
For a simple illustration, I'll have to bend the rules just a bit so you can see. Let's take a population of just 4 individuals, and look at a particular nucleotide position. Each individual has a different "mutant" at that position. Here is the initial generation:
Generation 0:
Individual #1: A
Individual #2: C
Individual #3: T
Individual #4: G
The mutants "compete" for primacy, and at the end one of them has a monopoly on that nucleotide position. Now there are only 4 possible scenarios after infinite time (a.k.a. Generation Infinity), since we're all but guaranteed that one of them will go to fixation; we just don't know which mutant prevails. Here are the 4 possible outcomes:
Generation Infinity:
Individual #1: A, Individual #2: A, Individual #3: A, Individual #4: A
or
Individual #1: C, Individual #2: C, Individual #3: C, Individual #4: C
or
Individual #1: T, Individual #2: T, Individual #3: T, Individual #4: T
or
Individual #1: G, Individual #2: G, Individual #3: G, Individual #4: G
Those cover all the possible outcomes. We can see that the probability of fixation of A is 1/4, of C is 1/4, of T is 1/4, and of G is 1/4. Thus the fixation probability is equal to the initial frequency of a particular mutant in the population. With a little imagination you can see how this extrapolates to 10,000 individuals and whatever proportion of the mutant there is; you just have to be clever in relabeling things.
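The same relationship can be checked numerically for larger populations. Below is a minimal Wright-Fisher-style drift sketch; the population size and trial counts are arbitrary choices, and the printed values are Monte Carlo estimates rather than exact results:

```python
# Neutral Wright-Fisher drift: the fraction of runs in which an allele fixes
# should come out close to its starting frequency. Numbers are arbitrary.
import random

def estimate_fixation(pop_size, start_copies, trials=1000):
    """Estimate the probability that a neutral allele starting at
    start_copies/pop_size eventually reaches fixation."""
    fixed = 0
    for _ in range(trials):
        copies = start_copies
        while 0 < copies < pop_size:
            freq = copies / pop_size
            # Each generation is a fresh binomial sample of the previous one.
            copies = sum(1 for _ in range(pop_size) if random.random() < freq)
        fixed += (copies == pop_size)
    return fixed / trials

random.seed(0)
for start in (1, 5, 25):
    est = estimate_fixation(pop_size=50, start_copies=start)
    print(f"initial frequency {start/50:.2f} -> estimated fixation probability {est:.3f}")
```

With enough trials, the estimates settle near the initial frequencies (0.02, 0.10, 0.50), which is the relationship asserted in the thread.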
scordova
March 24, 2014 at 12:54 AM PDT
Here's a question for the biologists. Can anyone find a single example of authentic telomere-to-telomere fusion?
vjtorley
March 24, 2014 at 12:20 AM PDT
Hi Allen MacNeill, Thanks very much for that interesting and illuminating exposition. Just one question: you and other commenters on this thread have asserted that "the probability of a mutation going to fixation is mathematically the same as its frequency in the population." Why is this so? (It might sound obvious to you, but I'm afraid it isn't to me.)
vjtorley
March 23, 2014 at 11:39 PM PDT
I'd like to follow up Rob and Sal's mathematical analysis of fixation of mutations (this is also relevant to our previous discussion of orphan sequences). R. A. Fisher did pioneering work in the theoretical understanding of the effects of various kinds of mutations under various types of selection pressures and under conditions that would favor drift (the latter were refined by Sewall Wright). According to their analyses, a mutation (defined as any change in the genetics of an organism) can have one of three mutually exclusive phenotypic effects and one of three mutually exclusive outcomes as the result of differential survival and reproduction.

A mutation can be expressed as:
• dominant (masking a recessive alternative),
• recessive (masked by a dominant alternative), or
• intermediate (expressed as a blend with the expression of its alternative).

The phenotypic expression of a mutation can be:
• beneficial (increasing in frequency as a result of increased relative survival and reproduction of its carrier),
• deleterious (decreasing in frequency as a result of decreased relative survival and reproduction of its carrier), or
• neutral (fluctuating randomly in frequency as a result of random relative survival and reproduction of its carrier).

It is essential to realize here that there is no necessary relationship between the inheritance pattern of a mutation and its effect on survival and reproduction: a dominant mutation can be beneficial, deleterious, or neutral, as can a recessive or intermediate mutation.

To analyze the effects of the interactions between these conditions, let us assume that mutations arise singly, randomly, and at low frequency in a population (i.e. not simultaneously among many individuals, not in any particular "direction" vis-à-vis differential survival and reproduction, and only very rarely). Finally, for the purposes of comparative analysis, a population can be relatively large or small. Under the various combinations of these conditions (all of the following are verbal descriptions of what are actually mathematical relationships):

• A mutation that is dominant and beneficial may increase in frequency in a population, as its carrier will express a genetic tendency to have a phenotype that results in increased survival and reproduction. However, since its frequency is very low (i.e. mathematically, one over the number of individuals in the population), it is likely that it will disappear as the result of random chance (i.e. it will be eliminated as the result of drift), since the probability of a mutation going to fixation is mathematically the same as its frequency in the population. Ergo, beneficial dominant mutations are likely to flicker in and out of existence in all except relatively small populations.
• In relatively small populations, individuals with dominant beneficial mutations are more likely to increase in frequency, as they constitute a higher relative frequency of a small population than they would in a larger one.
• A mutation that is dominant and deleterious will almost immediately disappear, as its carrier will express a genetic tendency to have a phenotype that results in decreased survival and reproduction. Again, since its frequency is very low, it is also likely that it will disappear as the result of drift.
• In relatively small populations, individuals with dominant deleterious mutations are more likely to decrease in frequency, as they constitute a higher relative frequency of a small population than they would in a larger one. Ergo, deleterious dominant mutations are likely to disappear in virtually all populations, regardless of size.
• A mutation that is recessive and beneficial will not initially increase in frequency in a population, as its carrier will not express the genetic tendency to have a phenotype that results in increased survival and reproduction (it will be masked by the dominant alternative). Again, since its frequency is very low (i.e. one over the number of individuals in the population), it is likely that it will disappear as the result of drift. Ergo, beneficial recessive mutations are just as likely to flicker in and out of existence as dominant ones in all except relatively small populations.
• However, in relatively small populations, individuals with recessive beneficial mutations will show a "threshold effect", as they are relatively more likely to combine in homozygotes and therefore rapidly increase in frequency, again because they constitute a higher relative frequency of a small population than they would in a larger one.
• A mutation that is recessive and deleterious will not initially decrease in frequency in a population, as its carrier will not express the genetic tendency to have a phenotype with a decreased probability of survival and reproduction, as it will be masked by the dominant beneficial alternative. Again, since its frequency is very low, it is very likely that it will disappear as the result of drift.
• However, if by chance its frequency increases (for example, in a relatively small population), it will eventually reach a threshold value at which homozygous recessive individuals will phenotypically express the mutation. The result would be a concomitant decrease in the frequency of the mutation as the result of decreased survival and reproduction. Ergo, deleterious recessive mutations are likely to disappear in virtually all populations, but especially in relatively small populations, in which they will more often be expressed and eliminated in homozygotes.
• A mutation that is intermediate (i.e. expressed equally with its genetic alternatives) will essentially perform the same way a dominant mutation will, as it will be expressed in those individuals that carry it. Ergo, intermediate mutations will also tend to flicker in and out of existence in all except relatively small populations.
• In small populations, intermediate mutations have generally the same effects as dominant ones: if beneficial they will rapidly increase, if deleterious they will rapidly disappear.

Note the asymmetries in outcomes, especially with respect to population size:
• Beneficial and intermediate mutations may either increase or decrease in frequency, depending on population size.
• Beneficial and intermediate mutations are more likely to increase in frequency if the population size is relatively small, as the probability of accidental removal as the result of drift decreases in smaller populations.
• Deleterious mutations are more likely to decrease in frequency in any size population, but especially if they are dominant or if the population is relatively small.

Fisher then analyzed what would happen if the environment changed in such a way as to make a formerly beneficial trait deleterious:
• If it were dominant or intermediate, it would decrease in frequency, especially if the population were relatively small.
• If it were recessive, it would also decrease in frequency, especially if the population were relatively small. This would mean that the alternative form (i.e. the newly beneficial alternative) would increase in frequency, either monotonically or after reaching a "threshold of expression", especially in relatively small populations.
• The result would be the elimination of the formerly beneficial (and now deleterious) form and the fixation of the alternative form.
• If a new mutation were then to occur, it would follow the same patterns analyzed above.

One more factor needs to be included here: random mutations are more likely to be either deleterious or neutral than beneficial. This is because of what could be called the "Swiss watch effect":
• If you randomly alter one of the parts of a Swiss watch (say, by poking an ice pick into its innards), the more likely outcome is a decline in its function.
• By the same logic, the simpler and more "fine-tuned" (i.e. tightly functionally interconnected) the watch, the more likely a random change will have a deleterious effect on its function.
• Alternatively, the more redundant the mechanisms in the watch and the less "fine-tuned" they are, the less deleterious a random change will be, compared with a simpler, more "fine-tuned" mechanism.

The net effect of combining all of these outcomes is:
• Beneficial mutations, while rare, will increase in frequency in populations, especially dominant mutations in relatively small populations.
• Deleterious mutations, regardless of rarity, will decrease in frequency in populations, especially relatively small ones.

Since these relationships were first proposed in the 1930s, they have been exhaustively tested, in both the field and the laboratory. The outcome of such tests has generally upheld the predictions, with some interesting variations:
• Some single-gene traits that are deleterious when homozygous are apparently beneficial when heterozygous (sickle-cell anemia is the best-known example).
• More traits have turned out to be neutral than predicted by the original version of the theory.
• Population size has a greater effect on outcomes than predicted by the original version of the theory; specifically, drift has turned out to be more important than the original theory predicted.

One final note: "beneficial" can be translated as "functional" in all of the foregoing. Indeed, "functional" should be translated as "resulting in increased relative survival and reproduction" in general.
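These verbal relationships can be sketched numerically with a simple Wright-Fisher model that combines selection, dominance, and drift. The selection coefficients, dominance values, population size, and trial counts below are arbitrary assumptions chosen only to make the qualitative pattern visible, and the outputs are Monte Carlo estimates:

```python
# A rough Wright-Fisher sketch with selection and dominance (an illustration of
# the verbal relationships above, not Fisher's actual derivations).
import random

def fixation_rate(pop_size, s, h, trials=500):
    """Fraction of trials in which a single new mutant allele reaches fixation.

    pop_size -- number of diploid individuals
    s        -- selection coefficient of the mutant homozygote (+ beneficial, - deleterious)
    h        -- dominance of the mutant (1.0 dominant, 0.0 recessive, 0.5 intermediate)
    """
    two_n = 2 * pop_size
    fixed = 0
    for _ in range(trials):
        copies = 1                              # a single new mutation
        while 0 < copies < two_n:
            p = copies / two_n
            # Genotype fitnesses: aa = 1, Aa = 1 + h*s, AA = 1 + s
            w_bar = (1 - p) ** 2 + 2 * p * (1 - p) * (1 + h * s) + p ** 2 * (1 + s)
            # Expected allele frequency after selection (standard one-locus formula)
            p_sel = (p ** 2 * (1 + s) + p * (1 - p) * (1 + h * s)) / w_bar
            # Drift: binomially resample 2N gametes
            copies = sum(1 for _ in range(two_n) if random.random() < p_sel)
        fixed += (copies == two_n)
    return fixed / trials

random.seed(42)
for label, s, h in [("beneficial, dominant", 0.05, 1.0),
                    ("beneficial, recessive", 0.05, 0.0),
                    ("deleterious, dominant", -0.05, 1.0),
                    ("neutral", 0.0, 0.5)]:
    print(f"{label:22s} N=100 -> fixation rate ~ {fixation_rate(100, s, h):.3f}")
```

With these settings the runs should reproduce the qualitative pattern described above: a single new deleterious dominant mutation essentially never fixes, a neutral one fixes at roughly its initial frequency of 1/(2N), and a beneficial mutation fixes noticeably more often when dominant than when recessive.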
Allen_MacNeill
March 23, 2014 at 11:16 PM PDT
Dr. Torley, It's deep in the weeds, but one potentially very devastating argument that hasn't made the forefront, but should eventually, is polyconstrained DNA. It was featured at the Cornell conference. Personally, I think it casts serious doubt on neutral evolution and Darwinian evolution alike. Here's an easy intro: http://bevets.com/equotess.htm
scordova
March 23, 2014 at 09:34 PM PDT
I appreciate the replies.

vjtorley #44: As far as I can tell, all that similarity (similarity in information, of course) necessarily implies is copying. But it does not necessarily tell us anything about the copying mechanism. Common ancestry is one such mechanism; so is plagiarism; so is the use of a code library. So my question is: what details of the similarity could lead us to infer a particular mechanism? Obviously certain conditions allow us to infer all three of the above mechanisms. However, common ancestry between different species is a bit harder. What is it about substitutions/deletions in pseudogenes that lets us infer common ancestry over and above the designer simply reusing his code? (Sorry if this is obvious, but I can't say it's my field!) Second point: once everyone is on the same page regarding the limits of evolution and the fact of design, the theology ("why did God do it that way?") becomes very interesting. From a human perspective, for example, it would make a lot of sense to engineer an organism through trial and error (common descent): let it live for a while, observe what happens, make changes based on mission objectives, and so on. However, presumably God hasn't got these limits, so what would common descent tell us about God? Difficult to say.

gpuccio #45: Thanks for the clear answer. I can't say I'm especially bothered by common descent; I just want to understand the circumstances under which it can be inferred as a copying mechanism vs. some other copying mechanism (since I do not have enough background knowledge of genetics one way or the other). So if I read you right (and I am going to jump to a conclusion here), you are suggesting some sort of punctuated saltation, where species A lives for some length of time, accumulating evolutionary changes/errors to its code, and then, kaboom!, the designer injects new code into some sub-population of species A, creating species B, which then lives for a while until species C is created, and so on and so forth. So when we compare the code between species A and Z, and species C and Z, etc., we observe accumulated deltas that reflect the inherited relationship? Or, if I am totally off base, what is it about the type of patterns you refer to that might preclude common design? That's really what I want to get at. How are we to judge between copying mechanisms? (This is pretty much a repeat of what I wrote to Dr. Torley.)

Evolve #51 writes:
What more do you want? How is your proposed mechanism of a totally imaginary designer better
What I want is a mechanism actually capable of engineering the new and improved systems found in humans. Is that too much to ask?
Timmy
March 23, 2014 at 08:11 PM PDT
Follow-Up: Refuting Ken Miller on Chromosome 2 - 2012 - video: http://www.youtube.com/watch?v=YJikA1gH7CY
bornagain77
March 23, 2014 at 07:48 PM PDT
Thanks, Dr. Torley, for taking the time to clear that up.
bornagain77
March 23, 2014 at 07:41 PM PDT
Hi Evolve, Re the reports you referenced on orphan genes, here's a comment from an academic source:
When you read this it sounds so compelling. "Oh my gosh there are transcribed genes in some flies that are absent from the reference sequence?" No. They are ancestral genes that have been mutated and lost in a proportion of the population that includes where they got the reference sequence from. It is the complete opposite of what they are claiming.
vjtorley
March 23, 2014 at 07:35 PM PDT
Hi Evolve, Re orphan genes, you wrote:
..there’s no escaping the evidence. The Science paper did RNA-seq on the testis transcriptome of few D. melanogaster strains and identified several newly-transcribed genes which were absent from the D. melanogaster reference sequence. A sequence alignment showed that these new genes corresponded to intergenic regions in the D. melanogaster reference sequence and orthologous regions of closely related species such as D. simulans & D. yakuba. This clearly shows that new genes can arise by mutations in previously non-expressed DNA and it contributes to speciation.
Here's a reply from an informed source:
From what I can read, Evolve fails to recognize the human error and computer error associated with annotating genomes. Computer programs are written to identify certain features that the molecular biologists look for when identifying where to find genes. Any molecular biologist with sufficient training recognizes that those algorithms are not 100% perfect. If RNAseq data shows genes were transcribed, there is no denying the transcript--only denying the bioinformatics. Unfortunately, we often only find what we're looking for.
Hope that helps. Cheers, and thanks very much for your participation in this discussion.
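For readers unfamiliar with how such calls are made, the underlying bookkeeping is simple interval arithmetic: a transcript is labelled "intergenic" only if it overlaps none of the gene intervals in the current annotation, so an incomplete or imperfect annotation can make an ordinary transcript look like a "new" gene. A minimal sketch with invented coordinates (not data from the Science paper):

```python
# A toy illustration of how transcripts get classified against a genome annotation.
# Coordinates are invented for illustration; real pipelines work from GFF/GTF files.

annotated_genes = [            # (start, end) of annotated genes on one toy contig
    (1_000, 4_000),
    (9_000, 12_500),
]

rnaseq_transcripts = {         # assembled transcripts from an RNA-seq experiment
    "tx1": (1_200, 3_800),     # falls inside an annotated gene
    "tx2": (5_000, 6_200),     # overlaps nothing in the annotation
    "tx3": (12_400, 13_000),   # partially overlaps an annotated gene
}

def overlaps(a, b):
    """True if two half-open intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

for name, tx in rnaseq_transcripts.items():
    genic = any(overlaps(tx, gene) for gene in annotated_genes)
    label = "annotated/genic" if genic else "apparently intergenic ('new')"
    print(f"{name}: {label}")

# If the annotation were missing the (9_000, 12_500) gene, tx3 would also look
# "new" -- which is the kind of annotation-dependence the comment above describes.
```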
vjtorley
March 23, 2014 at 07:32 PM PDT
Hi Evolve: Back again. So who's right: Tomkins or Miller? Short answer: Tomkins. Here's a short reply from an academic who teaches microbiology:
ENCODE is right. Tomkins pulls the data from ENCODE and there is clearly a gene transcribed across the fusion event. The gene is rather larger than the 1500 bp that Miller quotes. The gene is a helicase (if I remember correctly) expressed in various tissues throughout development. If you read Tomkins' paper, he demonstrates where the supposed telomere ends are located, where additional centromeres are supposedly located, and how the region is transcriptionally active (a hallmark associated with non-telomeric DNA).
vjtorley
March 23, 2014 at 07:24 PM PDT