
Darwinism is taking a beating in the Anthropocene


Anthropocene Magazine, that is. (For all you under-a-rock dwellers out there, the “Anthropocene” is the era we are said to be living in, dominated by humans, the way the Cretaceous was dominated by tyrannosaurs, and so on.)

The Tangled Tree

But now, get this:

Darwinian theory is based on the idea that heredity flows vertically, parent to offspring, and that life’s history has branched like a tree. Now we know otherwise: that the ‘tree’ of life isn’t that simple.

David Quammen, “Blurring Life’s Boundaries” at Anthropocene

It gets worse. He goes on to say:

One of the most disorienting results of these developments is a new challenge to the concept of “species.” Biologists have long recognized that the boundaries of one species may blur into another—by the process of hybridism, for instance. And the notion of species is especially insecure in the realm of bacteria and archaea. But the discovery that horizontal gene transfer (HGT) has occurred naturally, many times, even in the lineages of animals and plants, has brought the categorical reality of a species into greater question than ever. That’s even true for us humans—we are composite individuals, mosaics.

It’s not just that—as you may have read in magazine articles—your human body contains at least as many bacterial cells as it does human cells. (This doesn’t even count all the nonbacterial microbes—the virus particles, fungal cells, archaea, and other teeny passengers inhabiting our guts, mouths, nostrils, and other bodily surfaces.) That’s the microbiome. Each of us is an ecosystem.

I’m talking about something else, a bigger and more shocking discovery that has come from the revolution in a field called molecular phylogenetics. (That phrase sounds fancy and technical, but it means merely the use of molecular information, such as DNA or RNA sequences, in discerning how one creature is related to another.) The discovery was that sizeable chunks of the genomes of all kinds of animals, including us, have been acquired by horizontal transfer from bacteria or other alien species.

David Quammen, “Blurring Life’s Boundaries” at Anthropocene

and much more. Where’s Darwin’s caregiver? Should we ring the bell?

Now, David Quammen … That name rings a bell. Oh yes, the author of The Tangled Tree: A Radical New History of Life, about the role of horizontal gene transfer.

It gets really interesting when the anti-Darwinists are not creationists. Will they be more vicious?

Follow UD News at Twitter!

See also: Jerry Coyne Continues To Be Unhappy Over David Quammen’s Book On Carl Woese

The real issue, of course, is the way horizontal gene transfer turns Darwin’s fabled Tree of Life into confetti.

See also: Jerry Coyne minimizes the significance of horizontal gene transfer

and

At New York Times: Darwin skeptic Carl Woese “effectively founded a new branch of science”

2 Replies to “Darwinism is taking a beating in the Anthropocene”

  1. AaronS1978 says:

    I’ve always considered this a possibility, for the sheer fact that viruses do exactly that; that’s how viruses hop between species.

    Does this article explain anything about our ability to tell the difference between acquired genes and related genes?

    Because that’s kind of a big deal, especially when it comes to relatedness: we might not be as related to certain species as we claim to be, because of that type of genetic meddling.

    A lot of junk DNA might have been created this way as well.

    But if everything shares DNA in some way, shape, or form, it does make disentangling relations very difficult, as we might get a false positive or a false negative when trying to determine what is related to what.

  2. bornagain77 says:

    Their assumptions are questionable, to put it mildly.

    A critical analysis suggests that something is deeply amiss with eukaryote LGT (lateral gene transfer) theories.

    Too Much Eukaryote LGT – William F. Martin – 25 October 2017
    Abstract
    The realization that prokaryotes naturally and frequently disperse genes across steep taxonomic boundaries via lateral gene transfer (LGT) gave wings to the idea that eukaryotes might do the same. Eukaryotes do acquire genes from mitochondria and plastids and they do transfer genes during the process of secondary endosymbiosis, the spread of plastids via eukaryotic algal endosymbionts. From those observations it, however, does not follow that eukaryotes transfer genes either in the same ways as prokaryotes do, or to a quantitatively similar degree. An important illustration of the difference is that eukaryotes do not exhibit pangenomes, though prokaryotes do. Eukaryotes reveal no detectable cumulative effects of LGT, though prokaryotes do. A critical analysis suggests that something is deeply amiss with eukaryote LGT theories.
    https://onlinelibrary.wiley.com/doi/full/10.1002/bies.201700115

    Microbial Genes in the Human Genome: Lateral Transfer or Gene Loss? – 2001
    Abstract
    The human genome was analyzed for evidence that genes had been laterally transferred into the genome from prokaryotic organisms. Protein sequence comparisons of the proteomes of human, fruit fly, nematode worm, yeast, mustard weed, eukaryotic parasites, and all completed prokaryote genomes were performed, and all genes shared between human and each of the other groups of organisms were collected. About 40 genes were found to be exclusively shared by humans and bacteria and are candidate examples of horizontal transfer from bacteria to vertebrates. Gene loss combined with sample size effects and evolutionary rate variation provide an alternative, more biologically plausible explanation.
    https://science.sciencemag.org/content/292/5523/1903

    Further notes from a previous post: “Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.”

    Response to a Critic: But What About Undirected Graphs? – Andrew Jones – July 24, 2018
    Excerpt: The thing is, Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.” Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of “reticulation” at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree.
    https://evolutionnews.org/2018/07/response-to-a-critic-but-what-about-undirected-graphs/

    This Could Be One of the Most Important Scientific Papers of the Decade – July 23, 2018
    Excerpt: Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (Uni-Ref-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree.
    This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets.
    http://blog.drwile.com/this-co.....he-decade/

    New Paper by Winston Ewert Demonstrates Superiority of Design Model – Cornelius Hunter – July 20, 2018
    Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data.
    Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model.
    Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.
    Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.
    Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
    Where It Counts
    Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous.
    Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.
    Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other.
    We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.
    Ten thousand is a big number. But it gets worse, much worse.
    Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.
    The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.
    Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how 2 really means 100, 3 means 1,000, and so forth?
    Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That’s the ratio of how probable the data are on these two models!
    By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent.
    10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.
    This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model case is compared to the common descent case, we get 10,064 bits.
    But It Gets Worse
    The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.
    In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.
    We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.
    https://evolutionnews.org/2018/07/new-paper-by-winston-ewert-demonstrates-superiority-of-design-model/
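    The logarithm arithmetic in the excerpt above is easy to sanity-check: a Bayes factor expressed in bits is just the difference of the two models’ base-2 log marginal likelihoods, and a 10,064-bit advantage really does correspond to a plain number with over 3,000 digits. A minimal sketch (the 10,064 and 6.6 figures come from the excerpt; the function and variable names are illustrative, not Ewert’s code):

```python
import math

# A Bayes factor in bits is the difference of the two models'
# log2 marginal likelihoods (i.e., log2 of their probability ratio).
def bayes_factor_bits(log2_evidence_a: float, log2_evidence_b: float) -> float:
    return log2_evidence_a - log2_evidence_b

# The quoted HomoloGene result: the dependency graph model beats
# common descent by 10,064 bits, a probability ratio of 2**10064.
bits = 10064

# Written out in decimal, 2**10064 has this many digits:
decimal_digits = math.floor(bits * math.log10(2)) + 1
print(decimal_digits)  # 3030 -- "a 1 followed by more than 3,000 zeros"

# The rule of thumb quoted above: anything over 6.6 bits is "decisive".
print(bits > 6.6)  # True
```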

    Here is a more detailed defense of the ‘unbiased’ nature of Ewert’s paper. As the article notes, “Bayesian reasoning is important in that it gives us some hope of escaping from the tyranny of fundamentalist presuppositions, whether evolutionist/naturalistic fundamentalism or creationist/biblical fundamentalism (though interestingly Thomas Bayes was a clergyman).”

    The Dependency Graph Hypothesis — How It Is Inferred – Andrew Jones – July 23, 2018
    Excerpt: In Bayesian Model Selection, the best model is the one that makes the data most probable. There is no point in having a simple model if it does not explain the data (if the probability of the data is zero). Likewise there is no point in having a model that is more complex (and thus even less probable) than the data it needs to explain. That would be overfitting. The overall complexity is the probability of the model combined with the probability of the data given the model. In Ewert’s paper, there are two overarching models that we want to distinguish: the ancestry tree, and the dependency graph, but there are myriad possible sub-models, each contributing to the overall probability of the overarching model.
    Average Values and Bayesian Priors
    Unfortunately, both models (the tree of life and the dependency graph) are extremely complex, with a very large number of adjustable parameters. This might seem to make the question undecidable: We often argue that the tree of life is a terrible fit to the data, requiring numerous ad-hoc “epicycles” to make the data fit. We might further argue that a particular dependency graph is a better fit to the data. But a believer in common descent might reasonably respond that our theory is also not parsimonious; if you add enough modules you could explain literally anything, even random data. It seems that deciding between the two models can never be a rational decision; it seems it will always involve a good deal of intuition or even faith. Fortunately, however, there are ways to tame the complexity enough to get objective and meaningful answers.
    The main strategy for coping with the complexity is summation (or mathematical integration) over all possibilities. Ewert handles many of the parameters by integration: these include the edge probability b (the expected connectivity) of the nodes and the different propensities to add or lose genes on each of the n nodes. This may seem strange, but it is standard probabilistic reasoning. If the probability distribution of Y (for example, the actual number of gene-losses) depends on X (for example, the gene-loss propensity), but you don’t know X, you can still calculate the probability of Y if you have the probability distribution of X. In many cases we don’t even know what the true distribution of X would be. In such cases, Ewert assumes that every possibility has an equal probability (a flat distribution) because this should introduce the least bias. This may also seem strange, but it is quite common in Bayesian reasoning, where it is called a flat prior. Although the prior distribution of X is technically a choice, and yes that choice has some influence on the result, the way Bayesian reasoning works is that the more data you add, the less the particular choice of prior matters. The important thing is to choose a prior that is not biased; a prior that allows the data to speak, if you like. Bayesian reasoning is important in that it gives us some hope of escaping from the tyranny of fundamentalist presuppositions, whether evolutionist/naturalistic fundamentalism or creationist/biblical fundamentalism (though interestingly Thomas Bayes was a clergyman).
    The idea is that we want to make sure that the many things we don’t know don’t stop us from making reasonable inferences using what we do know. …
    https://evolutionnews.org/2018/07/the-dependency-graph-hypothesis-how-it-is-inferred/
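    The “integrate over what you don’t know” strategy described in the excerpt can be sketched numerically. The toy model below (a coin with unknown bias, nothing to do with Ewert’s actual gene-family models) marginalizes the unknown parameter under a flat prior and then compares the result to a rival fixed-parameter model by a Bayes factor:

```python
# Toy Bayesian model selection with a flat prior, illustrating the
# marginalization strategy described above (not Ewert's actual models).

def marginal_likelihood(heads: int, tails: int, n_grid: int = 10_000) -> float:
    """P(data | model) for a coin with unknown bias p, under a flat
    prior on p: average the likelihood over a fine grid of p values."""
    total = 0.0
    for i in range(1, n_grid):
        p = i / n_grid
        total += p ** heads * (1 - p) ** tails
    return total / n_grid  # Riemann sum approximating the integral over p

# Data: 7 heads, 3 tails in 10 flips.
# Model A: unknown bias, flat prior (p is integrated out).
ev_a = marginal_likelihood(7, 3)

# Model B: fair coin, p fixed at 0.5 (no free parameter to integrate).
ev_b = 0.5 ** 10

# The Bayes factor compares the two marginal likelihoods directly;
# the flat prior lets the data, not a tuned parameter, do the work.
print(ev_a / ev_b)  # roughly 0.78: this small data set mildly favors the fair coin
```

    With more data, the particular choice of prior matters less and less, which is the point the excerpt makes about a flat prior being the least biased default.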
