Uncommon Descent Serving The Intelligent Design Community

Ann Gauger on watching Ayala’s no Adam or Eve analysis crumble …


… in light of later research.

In “On Population Genetics Estimates” (Biologic Institute, August 3, 2012), Ann Gauger explains,

In his review of our book Science and Human Origins, Paul McBride wonders why I have not engaged the broader population genetics literature on human origins, but instead chose to focus on a single paper from 1995 by Francisco Ayala.

As I stated in the book, I chose that paper because in my opinion it presented the most difficult challenge to a very small bottleneck in our history as a species. If Ayala was right, and we shared thirty-two allelic lineages with chimps, then there was no way for a bottleneck as small as two individuals to have occurred. That kind of evidence, if substantiated, would have been conclusive. That’s why I found it so fascinating as I watched his analysis crumble in the light of later research.

I was very aware that others beside Ayala have investigated human origins, using other methods and data. I chose not to address those studies directly in the book because I wanted to focus on the intriguing problem of HLA-DRB1’s patchwork phylogenetic history. I did allude to them in discussing problems with retrospective analyses, however. The fact that I had not addressed those alternate estimates is one reason why I never claimed to have proved the existence of a two-person bottleneck, but rather questioned the rush to judgment against such a bottleneck on the part of others.

So now, let’s consider how much these other methods add to the discussion.

See also: Ann Gauger sets record straight on Wistar II

New York Times report on human evolution controversy vindicates book Science and Human Origins

Comments
"No. I asked a very simply question, and you please answer it without the spam?" - WD400
Hilarious! I was just going to say, 'Steady, there, bornagain. You know that when you blind them with REAL science from that encyclopaedic brain of yours, they bawl out that you're just spamming!!!' 'What's he saying, Mummy? I can't understand 'im. It's just nonsense!'
Axel
August 19, 2012, 06:53 AM PST
You must actually produce repeatable, observational, evidence in order to support your claim that you have observed body-plan morphogenesis.
I never made that claim anyway. It's not even a straw man.
A Gene
August 9, 2012, 01:11 AM PST
A Gene: observe is not the same word as infer. After all, I can infer the moon is made out of green cheese all day long from cherry picking select evidence that supports my position and ignoring all disconfirming evidence that points to a contrary conclusion. But that is the beauty of empirical science. You must actually produce repeatable, observational, evidence in order to support your claim that you have observed body-plan morphogenesis. And that, sir, you simply do not have:
“Whatever we may try to do within a given species, we soon reach limits which we cannot break through. A wall exists on every side of each species. That wall is the DNA coding, which permits wide variety within it (within the gene pool, or the genotype of a species) - but no exit through that wall. Darwin's gradualism is bounded by internal constraints, beyond which selection is useless.” R. Milner, Encyclopedia of Evolution (1990)

"Despite a close watch, we have witnessed no new species emerge in the wild in recorded history. Also, most remarkably, we have seen no new animal species emerge in domestic breeding. That includes no new species of fruitflies in hundreds of millions of generations in fruitfly studies, where both soft and harsh pressures have been deliberately applied to the fly populations to induce speciation. And in computer life, where the term “species” does not yet have meaning, we see no cascading emergence of entirely new kinds of variety beyond an initial burst. In the wild, in breeding, and in artificial life, we see the emergence of variation. But by the absence of greater change, we also clearly see that the limits of variation appear to be narrowly bounded, and often bounded within species." Kevin Kelly, "Out of Control"

https://uncommondescent.com/intelligent-design/the-evolutionary-tree-continues-to-fall-falsified-predictions-backpedaling-hgts-and-serendipity-squared/#comment-392638 etc.. etc..
In fact Dr. Stephen Meyer's next book is going to be on the sheer impossibility of neo-Darwinian processes to explain the origination of 'Body-Plan information'. Here is a sneak peek at his forthcoming book:

Dr. Stephen Meyer: Why Are We Still Debating Darwin? pt. 2 - podcast http://intelligentdesign.podomatic.com/entry/2012-05-23T13_26_22-07_00

Of related interest, there is a whole other level of information on a cell's surface that is scarcely even beginning to be understood (which would seem to be important if one were to claim that he understood body-plan morphogenesis):

Glycan Carbohydrate Molecules - A Whole New Level Of Scarcely Understood Information on The Surface of Cells
Glycan carbohydrate molecules are very complex molecules found primarily on a cell's surface and are found to be very important for cell surface functions, such as immunity responses, and are found to show “remarkably discontinuous distribution across evolutionary lineages”;
Glycans: Where Are They and What Do They Do? - short video http://www.youtube.com/watch?v=BgZ61TxnxKo New tools developed to unveil mystery of the 'glycome' - June 10, 2012 Excerpt: One of the Least Understood Domains of Biology: The "glycome"—the full set of sugar molecules in living things and even viruses—has been one of the least understood domains of biology. While the glycome encodes key information that regulates things such as cell trafficking events and cell signaling, this information has been relatively difficult to "decode." Unlike proteins, which are relatively straightforward translations of genetic information, functional sugars have no clear counterparts or "templates" in the genome. Their building blocks are simple, diet-derived sugar molecules, and their builders are a set of about 250 enzymes known broadly as glycosyltransferases.,,, http://phys.org/news/2012-06-tools-unveil-mystery-glycome.html
Glycans rival DNA and proteins in terms of complexity;
Glycans: What Makes Them So Special? - The Complexity Of Glycans - short video http://www.youtube.com/watch?v=WXez_OyNBQA
Yet Glycans, despite their complexity and importance to cell function, are found, like DNA and Proteins, to be 'rather uncooperative' with neo-Darwinian evolution;
This Non Scientific Claim Regularly Appears in Evolutionary Peer Reviewed Papers - Cornelius Hunter - April 2012
Excerpt: Indeed these polysaccharides, or glycans, would become rather uncooperative with evolution. As one recent paper explained, glycans show “remarkably discontinuous distribution across evolutionary lineages,” for they “occur in a discontinuous and puzzling distribution across evolutionary lineages.” This dizzying array of glycans can be (i) specific to a particular lineage, (ii) similar in very distant lineages, (iii) and conspicuously absent from very restricted taxa only. In other words, the evidence is not what evolution expected. Here is how another paper described early glycan findings: There is also no clear explanation for the extreme complexity and diversity of glycans that can be found on a given glycoconjugate or cell type. Based on the limited information available about the scope and distribution of this diversity among taxonomic groups, it is difficult to see clear trends or patterns consistent with different evolutionary lineages. It appears that closely related species may not necessarily share close similarities in their glycan diversity, and that more derived species may have simpler as well as more complex structures. Intraspecies diversity can also be quite extensive, often without obvious functional relevance.
http://darwins-god.blogspot.com/2012/04/this-non-scientific-claim-regularly.html
As well, it seems clear that Glycans, being on the cell's surface, would, besides immunity responses, be very important for explaining the exact positioning of cells in a multicellular organism (body-plan morphogenesis). In fact, experiments have been done rearranging parts of a cell's surface in which the 'rearrangement' on the cell's surface carried forward even though the DNA sequence had remained exactly the same:
Cortical Inheritance: The Crushing Critique Against Genetic Reductionism - Arthur Jones - video http://www.metacafe.com/watch/4187488
So it seems clear that this 'cell surface information' represents another whole new level of information that is not reducible to DNA (central dogma (modern synthesis) of neo-Darwinism) and yet this is clearly very important information to understand if one were to try to explain body-plan morphogenesis coherently.
bornagain77
August 8, 2012, 11:36 AM PST
A Gene- no one has observed any amount of polymorphs via random mutations. No one knows if they have observed any random mutations.
Joe
August 8, 2012, 10:37 AM PST
can you please point to an example of ‘observed amount of polymorphism’ from random mutations that needs to be explained?
-> HapMap
-> 1000 genomes
Ann Gauger mentions both in the blogpost that this post is about.
A Gene
August 8, 2012, 09:31 AM PST
wd400- why don't YOU run a simulation that would support your position? I know why, because everyone would then see what a fraud your position is. What sort of mutation rate do you think is necessary to get the observed polymorphs starting with a founding population of 2? Please show your work.
Joe
August 8, 2012, 08:32 AM PST
i.e. what you imagine to have happened in the past does not count as 'observed' within empirical science. Of note:
“We have all seen the canonical parade of apes, each one becoming more human. We know that, as a depiction of evolution, this line-up is tosh (i.e. nonsense). Yet we cling to it. Ideas of what human evolution ought to have been like still colour our debates.” Henry Gee, editor of Nature (478, 6 October 2011, page 34, doi:10.1038/478034a),
bornagain77
August 8, 2012, 07:31 AM PST
"observed amount of polymorphism" - can you please point to an example of 'observed amount of polymorphism' from random mutations that needs to be explained?
Response to John Wise – October 2010 Excerpt: But there are solid empirical grounds for arguing that changes in DNA alone cannot produce new organs or body plans. A technique called “saturation mutagenesis”1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans–because none of the observed developmental mutations benefit the organism.
bornagain77
August 8, 2012, 07:14 AM PST
Yeah, so none of that has anything to do with what I'm talking about. Why do you think Gauger hasn't just run the sims to see what sort of mutation rate you'd need to explain the observed amount of polymorphism?
wd400
August 8, 2012, 06:48 AM PST
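For what it's worth, the kind of simulation wd400 alludes to is straightforward to sketch. The following toy haploid Wright-Fisher model is not Gauger's analysis or anyone's published method; the parameters (50 genomes, 2,000 sites, a per-site mutation rate far above the real human value of roughly 1e-8) are chosen purely so it runs in seconds:

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small means."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_bottleneck(n_hap=50, n_sites=2_000, mu=1e-5, gens=100):
    """Toy haploid Wright-Fisher model: a population founded by identical
    genomes (an extreme bottleneck) gains polymorphism by mutation and
    loses it by drift. Returns the number of segregating sites."""
    pop = [[0] * n_sites for _ in range(n_hap)]  # founders all identical
    for _ in range(gens):
        # reproduction: each genome in the next generation copies a random parent (drift)
        pop = [random.choice(pop)[:] for _ in range(n_hap)]
        # new mutations this generation, Poisson with mean n_hap * n_sites * mu
        for _ in range(poisson(n_hap * n_sites * mu)):
            pop[random.randrange(n_hap)][random.randrange(n_sites)] ^= 1
    # a site is segregating if both alleles are present in the population
    return sum(1 for site in zip(*pop) if 0 < sum(site) < n_hap)

seg = simulate_bottleneck()
print(seg)  # typically a handful to a few dozen sites at these toy settings
```

The same machinery scales to realistic parameters; at equilibrium the expected number of segregating sites follows Watterson's estimator, which is how a measured polymorphism level constrains mutation rate and population history.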
A mutational target would be the genes in which the polymorphs are found- IOW mutations are not random, they would have had specific targets. And the entire theory of evolution invokes magical mystery mutations- Anyone involved in a debate about evolution has come to realize that the theory of evolution and universal common descent rely heavily on magical mystery mutations. I say that because those mutations can change an invertebrate to a vertebrate and no one knows how or why. Those mutations can change a fish into a land animal and then a land animal into an aquatic one- again without anyone knowing how or why. These magical mystery mutations operate when/ where no one can observe them. They cannot be studied which means no testing and no verification. We are told we just have to accept the "fact" that universal common descent occurred even though the same data for UCD can be used for alternative scenarios, such as a common design or convergence. By relying on these magical mystery mutations evolutionitwits are admitting their scenario is a fairy tale and doesn't belong in a science classroom.
Joe
August 8, 2012, 06:35 AM PST
No, really, what are you talking about? What's a "mutational target"? What does it have to do with mutations per generation? Where have I invoked "mystery" mutations?
wd400
August 8, 2012, 06:32 AM PST
wd400- I was responding to your post. Apparently you can't follow along.
Joe
August 8, 2012, 06:27 AM PST
Here is an excellent article, just up on ENV, that is related to the overarching topic at hand: "Complexity Brake" Defies Evolution - August 2012 Excerpt: In a recent Perspective piece called "Modular Biological Complexity" in Science, Christof Koch (Allen Institute for Brain Science, Seattle; Division of Biology, Caltech) explained why we won't be simulating brains on computers any time soon:
"Although such predictions excite the imagination, they are not based on a sound assessment of the complexity of living systems. Such systems are characterized by large numbers of highly heterogeneous components, be they genes, proteins, or cells. These components interact causally in myriad ways across a very large spectrum of space-time, from nanometers to meters and from microseconds to years. A complete understanding of these systems demands that a large fraction of these interactions be experimentally or computationally probed. This is very difficult."
Physicists can use statistics to describe a homogeneous system like an ideal gas, because one can assume all the member particles interact the same. Not so with life. When describing heterogeneous systems each with a myriad of possible interactions, the number of discrete interactions grows faster than exponentially. Koch showed how Bell's number (the number of ways a system can be partitioned) requires a comparable number of measurements to exhaustively describe a system. Even if human computational ability were to rise exponentially into the future (somewhat like Moore's law for computers), there is no hope for describing the human "interactome" -- the set of all interactions in life.
This is bad news. Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology speeds up by an order of magnitude each year.
Even with shortcuts like averaging, "any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system," Koch said. "It is not feasible to understand evolved organisms by exhaustively cataloging all interactions in a comprehensive, bottom-up manner." He described the concept of the Complexity Brake:
Allen and Greaves recently introduced the metaphor of a "complexity brake" for the observation that fields as diverse as neuroscience and cancer biology have proven resistant to facile predictions about imminent practical applications. Improved technologies for observing and probing biological systems has only led to discoveries of further levels of complexity that need to be dealt with. This process has not yet run its course. We are far away from understanding cell biology, genomes, or brains, and turning this understanding into practical knowledge.
Why can't we use the same principles that describe technological systems? Koch explained that in an airplane or computer, the parts are "purposefully built in such a manner to limit the interactions among the parts to a small number." The limited interactome of human-designed systems avoids the complexity brake. "None of this is true for nervous systems.",,, to read more click here: http://www.evolutionnews.org/2012/08/complexity_brak062961.html
bornagain77
August 8, 2012, 06:16 AM PST
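Koch's appeal to Bell's number (the count of ways a set can be partitioned) is easy to check directly. A short computation using the standard Bell triangle, offered here only to illustrate the faster-than-exponential growth the excerpt describes:

```python
def bell_numbers(n):
    """Return [B(0), ..., B(n-1)] via the Bell triangle:
    each new row starts with the last entry of the previous row,
    and every entry adds its left neighbour to the entry above it."""
    bells, row = [1], [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
        bells.append(row[0])
    return bells

print(bell_numbers(6))                    # [1, 1, 2, 5, 15, 52]
print(len(str(bell_numbers(1001)[-1])))   # B(1000) runs to over 1900 digits
```

For the presynaptic terminal's estimated 1000 distinct proteins, B(1000) possible partitions is the scale behind the "complexity brake" quote above.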
Joe - what on earth are you talking about?
wd400
August 8, 2012, 05:31 AM PST
wd400:
Joe, if Gauger wants to, for no reason other than protecting her pet hypothesis, invoke large changes in mutation rate over time she’s welcome to.
Well heck your position invokes magical mystery mutations, for no other reason than to protect your worthless position. And perhaps she doesn't need to adjust the mutation rates, just the mutational target, as we appear to get more than enough mutations per birth.
Joe
August 8, 2012, 04:53 AM PST
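The "mutations per birth" figure Joe mentions can be put in rough numbers. The values below are commonly cited textbook estimates assumed for illustration; they do not come from the thread or from Gauger's post:

```python
# Back-of-envelope count of de novo mutations per human birth.
MU_PER_SITE = 1.2e-8     # approx. per-site, per-generation human mutation rate
DIPLOID_SITES = 6.4e9    # two copies of a ~3.2-gigabase genome

de_novo = MU_PER_SITE * DIPLOID_SITES
print(round(de_novo))    # ~77 new mutations per child

# Across a population, new variants enter quickly:
births_per_generation = 10_000
print(round(de_novo * births_per_generation))  # hundreds of thousands per generation
```

Estimates in the literature cluster around 60-100 de novo mutations per birth, which is why the "enough mutations" question turns on rates and population sizes rather than on any single genome.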
wd400, well regardless of what you think of the 'spam' I linked, the 'spam' I linked is precisely the reason why I think 'random', bottom up, unguided, mutations are wholly inadequate for creating new complex functional information in the genome and why I believe they, truly random mutations, will ALWAYS tend to degrade the complex functional information that is already in the genome. Of course if you disagree, and hold that chance and necessity processes of nature are up to the task, which is your right since we do still live in America where that right is guaranteed, then you can easily prove your point by generating functional information above that which is already present in the genome:
Michael Behe on Falsifying Intelligent Design - video http://www.youtube.com/watch?v=N8jXXJN4o_A Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. 
We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29 Kirk Durston - Functional Information In Biopolymers - video http://www.youtube.com/watch?v=QMEjF9ZH0x8
bornagain77
August 8, 2012, 04:30 AM PST
No. I asked a very simply question, and you please answer it without the spam?
wd400
August 8, 2012, 04:15 AM PST
Because, as pointed out yesterday, completely random changes to the genetic text written in the DNA are, for one thing, random changes to the lowest level of information in the information hierarchy of the cell:
Multidimensional Genome – Dr. Robert Carter – video http://www.metacafe.com/watch/8905048/

The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin http://www.metacafe.com/watch/8593991/
On top of that amazing fact, as Dr. Bohlin pointed out in his talk, unlike the 'one dimensional' computer code written in our computers, in which we certainly wouldn't expect unguided random changes to confer any benefit, we are dealing with 'one dimensional' coding that is 'overlapping' which makes the problem much more severe for the 'bottom up' neo-Darwinists who are dogmatically committed to their materialistic worldview:
Astonishing DNA complexity update
Excerpt: (ENCODE revealed) The untranslated regions (now called UTRs, rather than ‘junk’) are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs than in genic regions, that makes the ‘junk’ about 50 times more active than the genes.
http://creation.com/astonishing-dna-complexity-update

'It's becoming extremely problematic to explain how the genome could arise and how these multiple levels of overlapping information could arise, since our best computer programmers can't even conceive of overlapping codes. The genome dwarfs all of the computer information technology that man has developed. So I think that it is very problematic to imagine how you can achieve that through random changes in a code.,,, More and more it looks like top down design and not just bottom up chance discovery of making complex systems.' - Dr. John Sanford - quote taken from this talk: http://www.youtube.com/watch?v=YemLbrCdM_s&list=UUYiki-ERLi69PwxQqz4YvlQ&index=16&feature=plcp

Dual-Coding Genes in Mammalian Genomes - 2007
Abstract: Coding of multiple proteins by overlapping reading frames is not a feature one would associate with eukaryotic genes. Indeed, codependency between codons of overlapping protein-coding regions imposes a unique set of evolutionary constraints, making it a costly arrangement. Yet in cases of tightly coexpressed interacting proteins, dual coding may be advantageous. Here we show that although dual coding is nearly impossible by chance, a number of human transcripts contain overlapping coding regions. Using newly developed statistical techniques, we identified 40 candidate genes with evolutionarily conserved overlapping coding regions. Because our approach is conservative, we expect mammals to possess more dual-coding genes. Our results emphasize that the skepticism surrounding eukaryotic dual coding is unwarranted: rather than being artifacts, overlapping reading frames are often hallmarks of fascinating biology.
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.0030091 etc.. etc..
John Sanford, a leading expert in Genetics, co-inventor of 'the gene-gun', comments on some of the stunning poly-functional complexity found in the genome here:
"There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions i.e. Both DNA strands are transcribed ( Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), “word” content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. All DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns - which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture - which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given that extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). 
Trifonov (1989), has shown that probably all DNA sequences in the genome encrypt multiple “codes” (up to 12 codes). Dr. John Sanford; Genetic Entropy 2005
The reason why this introduces 'polyconstraint' is that if we were to actually get a beneficial effect from a 'random mutation' in an overlapping 'polyfunctional' genome then we would actually be encountering something akin to this illustration found on page 141 of the book Genetic Entropy by Dr. Sanford.
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S
https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit
Which is translated: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to at least 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other 3 readings of the message with the new mutation (save for the center spot). This is what is meant when Dr. Sanford says a poly-functional genome is poly-constrained to any random mutations. The evidence clearly indicates 'top-down' design. And it, severe polyfunctionality of the genome, is certainly not a situation in which we should expect random changes/mutations to confer benefit, which is certainly what is found after exhaustive search:
Response to John Wise - October 2010 Excerpt: But there are solid empirical grounds for arguing that changes in DNA alone cannot produce new organs or body plans. A technique called "saturation mutagenesis"1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans--because none of the observed developmental mutations benefit the organism. http://www.evolutionnews.org/2010/10/response_to_john_wise038811.html
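The Sator-square illustration above can be made concrete with a few lines of code: the square does read identically in four directions, and of the 25 single-letter substitutions, all but the one at the centre destroy at least one of the other readings.

```python
SQUARE = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(sq):
    """The four ways of reading the square: rows and columns,
    forwards and backwards."""
    rows = list(sq)
    cols = ["".join(c) for c in zip(*sq)]
    return [rows,
            [r[::-1] for r in rows[::-1]],   # right-to-left, bottom-to-top
            cols,
            [c[::-1] for c in cols[::-1]]]   # columns read upwards, last first

def four_way(sq):
    """True when all four readings give the same sequence of words."""
    first, *rest = readings(sq)
    return all(r == first for r in rest)

assert four_way(SQUARE)

# Substitute one letter at a time and count the squares that lose the property.
broken = 0
for i in range(5):
    for j in range(5):
        rows = [list(r) for r in SQUARE]
        rows[i][j] = "X"                     # one representative substitution
        if not four_way(["".join(r) for r in rows]):
            broken += 1
print(broken)  # 24 -- every position except the centre is constrained
```

The centre letter survives because it sits on both axes of symmetry at once, which is exactly the "save for the center spot" caveat in the text.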
Neo-Darwinists, with the requirement of 'bottom up' random mutations, are simply not even in the right conceptual field to begin with in order to try to understand this astonishing level of interweaved complexity:
How we could create life: The key to existence will be found not in primordial sludge, but in the nanotechnology of the living cell - Paul Davies - 2002 Excerpt: Instead, the living cell is best thought of as a supercomputer – an information processing and replicating system of astonishing complexity. DNA is not a special life-giving molecule, but a genetic databank that transmits its information using a mathematical code. Most of the workings of the cell are best described, not in terms of material stuff – hardware – but as information, or software. Trying to make life by mixing chemicals in a test tube is like soldering switches and wires in an attempt to produce Windows 98. It won’t work because it addresses the problem at the wrong conceptual level. - Paul Davies http://www.guardian.co.uk/education/2002/dec/11/highereducation.uk
Also of interest, besides overlapping coding, is that the integrated coding between the DNA, RNA and Proteins of the cell apparently seems to be ingeniously programmed along the very stringent guidelines laid out by Landauer's principle for 'reversible computation' in order to achieve such amazing energy efficiency. The amazing energy efficiency possible with 'reversible computation' has been known about since Rolf Landauer laid out the principles for such programming decades ago, but as far as I know, due to the extreme level of complexity involved in achieving such ingenious 'reversible coding', it has yet to be accomplished in any meaningful way in our computer programs even to this day:
Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon - Charles H. Bennett Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,, http://www.hep.princeton.edu/~mcdonald/examples/QM/bennett_shpmp_34_501_03.pdf
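Landauer's principle itself is a one-line calculation: erasing one bit dissipates at least kT·ln 2. Taking body temperature as the assumed operating point for the biological comparison, the floor works out to a few zeptojoules, and a single ATP hydrolysis releases only about an order of magnitude more:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact, 2019 SI)
T = 310.0            # roughly body temperature, K

landauer_j = K_B * T * math.log(2)
print(landauer_j)    # about 3e-21 J minimum per irreversible bit erasure

# Free energy of ATP hydrolysis, ~30.5 kJ/mol, expressed per molecule:
atp_j = 30_500 / 6.02214076e23
print(atp_j / landauer_j)  # an ATP releases only ~17x the Landauer floor
```

That small ratio is why the Bennett excerpt above can plausibly describe some biomolecular steps, such as transcription, as operating near thermodynamic reversibility.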
Perhaps computer programmers should study this more closely so as to learn how to 'design' better programs? Oh wait, that is already being done, at least by one group funded by Bill Gates. Bill Gates, in recognizing this 'far more superior' coding, has now funded research into this area:
Welcome to CoSBi - (Computational and Systems Biology) Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.
bornagain77
August 8, 2012, 03:25 AM PST
...that should read "without blockquotes" - just an explanation for why you believe this to be true, thanks.
wd400
August 7, 2012, 11:50 PM PST
BA, Are you actually saying all random mutations destroy information? And that, say, once an A -> G mutation has occurred the back mutation G -> A either (a) couldn't happen by random mutation or (b) would also decrease information? In either case, I should like an explanation (ideally with blockquotes...) as to why you think this.
wd400
August 7, 2012 11:17 PM PST
bornagain77, how do you stay focused on the different statements of others while responding to one person? I get too distracted; I have run out of pen ink twice taking notes! sergio
sergiomendes
August 7, 2012 08:09 PM PST
Joe, "genetic algorithm" = ??. something obtained in computer science, yes? sergiosergiomendes
August 7, 2012 07:17 PM PST
random mutations A -> G (or whatever 'random' change) ALWAYS deteriorate information, whereas DNA repair mechanisms that correct those 'random' changes, compensatory (calculated) changes to alleles, and/or epigenetic regulatory actions on DNA sequences (Shapiro) etc., etc., are NOT truly random mutations as is required per your theoretical basis in materialistic neo-Darwinism! Notes on the fruitless search for the ever-elusive random mutation that would actually be beneficial for building complex functional information:
Mutations: when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations) Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually. http://www2.cnrs.fr/en/1867.htm?theme1=7

“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (that is a net 'fitness gain' within a 'stressed' environment, i.e. remove the stress from the environment and the parent strain is always more 'fit') http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

The GS (genetic selection) Principle - David L. Abel - 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/2009/v14/af/3426/fulltext.htm

Experimental Evolution in Fruit Flies - October 2010 Excerpt: "This research really upends the dominant paradigm about how species evolve",,, as stated in regards to the 35-year experimental failure to fixate a single beneficial mutation within fruit flies. http://www.arn.org/blogs/index.php/literature/2010/10/07/experimental_evolution_in_fruit_flies

"I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot ...accurately represent how rare such beneficial mutations really are." (J.C. Sanford; Genetic Entropy page 24) - 2005

Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? (Thomas Bataillon) Abstract: It is argued that, although most if not all mutations detected in mutation accumulation experiments are deleterious, the question of the rate of favourable mutations (and their effects) is still a matter for debate. http://www.nature.com/hdy/journal/v84/n5/full/6887270a.html

“But in all the reading I’ve done in the life-sciences literature, I’ve never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it.” Lee Spetner - Ph.D. Physics - MIT - Not By Chance

John Sanford writes in “Genetic Entropy & the Mystery of the Genome”: “Bergman (2004) has studied the topic of beneficial mutations. Among other things, he did a simple literature search via Biological Abstracts and Medline. He found 453,732 ‘mutation’ hits, but among these only 186 mentioned the word ‘beneficial’ (about 4 in 10,000). When those 186 references were reviewed, almost all the presumed ‘beneficial mutations’ were only beneficial in a very narrow sense–but each mutation consistently involved loss of function changes–hence loss of information. While it is almost universally accepted that beneficial (information creating) mutations must occur, this belief seems to be based upon uncritical acceptance of RM/NS, rather than upon any actual evidence.” (pp. 26-27) http://www.trueorigin.org/evomyth01.asp

"The neo-Darwinians would like us to believe that large evolutionary changes can result from a series of small events if there are enough of them. But if these events all lose information they can’t be the steps in the kind of evolution the neo-Darwin theory is supposed to explain, no matter how many mutations there are. Whoever thinks macroevolution can be made by mutations that lose information is like the merchant who lost a little money on every sale but thought he could make it up on volume." Lee Spetner (Ph.D. Physics - MIT - Not By Chance)
bornagain77
August 7, 2012 06:23 PM PST
Joe, if Gauger wants to, for no reason other than protecting her pet hypothesis, invoke large changes in mutation rate over time she's welcome to. Run the simulation with different values of θ - see how big it has to be to get to observed levels of polymorphism. She won't, of course... BA, Some mutations must fall in, say, broken transposons. How are they likely to "deteriorate genetic information"? In fact, if a mutation A -> G deteriorates genetic information then, obviously, the mutation G -> A restores it, right? So some mutations create genetic information?
wd400
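The back-mutation point above is easy to check for any sequence-based measure of information: a point mutation followed by its reverse returns the sequence, and hence any function of it (including Shannon entropy), exactly to its starting value. A small sketch (the sequence and position are made up purely for illustration):

```python
from collections import Counter
import math

def shannon_entropy(seq: str) -> float:
    """Shannon entropy (bits per symbol) of the base composition of seq."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def point_mutate(seq: str, pos: int, base: str) -> str:
    """Return seq with the base at `pos` replaced by `base`."""
    return seq[:pos] + base + seq[pos + 1:]

original = "ATGGCTAACCGT"                  # hypothetical sequence
mutated  = point_mutate(original, 2, "A")  # G -> A at position 2
reverted = point_mutate(mutated, 2, "G")   # back mutation A -> G

# The back mutation restores the sequence, so any sequence-derived measure
# is restored with it.
assert reverted == original
assert shannon_entropy(reverted) == shannon_entropy(original)
```

This only addresses symmetric, sequence-based measures; the quotes elsewhere in the thread appeal to other definitions ("prescriptive information"), which is where the disagreement actually lies.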
August 7, 2012 05:01 PM PST
No Matter What Type Of Selection, Mutations Deteriorate Genetic Information - article and video https://uncommondescent.com/evolution/nachmans-paradox-defeats-darwinism-and-dawkins-weasel/
bornagain77
August 7, 2012 04:41 PM PST
A Gene:
Yes, of course population genetic models are simplifications, so yes they might be wrong. But if Gauger’s attempts at criticism are to carry any weight, she should do the work to show that what she has identified as problems actually make a difference.
Yet no one has ever done any work to demonstrate that mutations can accumulate in such a way as to transform a knuckle-walker/quadruped into an upright biped. As a matter of fact, the "theory" of evolution is void of work.
Joe
August 7, 2012 05:11 AM PST
Why doesn’t Gauger run some simulations of a population expanding from Ne = 2 to Ne =10 000 in ~10 000 years and see what level of polymorphism you’d expect to see in the resulting population given the observed mutation rate?
God that is retarded. If Gauger is right then the observed mutation rates do not apply, as what we observe now is not what was originally designed, duh. But yes, I am sure someone could write a genetic algorithm that could easily produce the diversity observed in a few thousand generations.
Joe
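For reference, the kind of calculation wd400 proposes can be sketched with the textbook identity-by-descent recursion F' = (1-μ)² · [1/(2N) + (1 - 1/(2N)) · F], where heterozygosity H = 1 - F. The growth curve, generation count and per-site mutation rate below are illustrative assumptions, not values from either side of this thread:

```python
def expected_heterozygosity(n_start, n_final, generations, mu, h0=0.0):
    """Expected per-site heterozygosity H after iterating the recursion
    F' = (1-mu)^2 * (1/(2N) + (1 - 1/(2N)) * F),
    with N growing exponentially from n_start to n_final."""
    growth = (n_final / n_start) ** (1.0 / generations)
    F = 1.0 - h0          # start fully identical: a two-person bottleneck with no variation
    N = float(n_start)
    for _ in range(generations):
        F = (1.0 - mu) ** 2 * (1.0 / (2.0 * N) + (1.0 - 1.0 / (2.0 * N)) * F)
        N = min(N * growth, float(n_final))
    return 1.0 - F

# Illustrative numbers: ~400 generations (~10,000 years at 25 yr/generation),
# per-site mutation rate 1.2e-8 per generation.
h = expected_heterozygosity(2, 10_000, 400, 1.2e-8)
print(f"expected heterozygosity: {h:.2e}")
```

With these assumed numbers the expected per-site heterozygosity comes out orders of magnitude below the roughly 10⁻³ observed in humans, which is the comparison wd400 is pointing at; changing the assumptions (mutation rate, timescale, starting variation) changes the number, which is precisely what the two sides here dispute.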
August 7, 2012 05:05 AM PST
paulmc:
Nope, there is no evidence that the majority of mutations are slightly deleterious. A point mutation to most intron and intergenic sequences will not have fitness effects. That you’re trying to link this to gene interactions shows you don’t understand the evidence.
Well, paul, there's no evidence that any amount of genetic change can transform a knuckle-walker/quadruped into an upright biped. So the question would be: why do you ignore that? THAT says that YOU do NOT understand the evidence. Where are all the transforming mutations, paul? The safe money says they do not exist and never have.
Joe
August 7, 2012 05:02 AM PST
The reason why it is very interesting for me to learn that energy itself, instead of just molecules, is communicating massive amounts of information in the cell is that energy, per Einstein, has shown itself to be of a 'higher dimensional' nature than mass is: i.e. time, as we understand it, would come to a complete stop at the speed of light. To grasp the whole 'time coming to a complete stop at the speed of light' concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same 'thought experiment' that gave Einstein his breakthrough insight into e=mc2.
Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video http://www.metacafe.com/w/6545941/
As well, please note the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as a 'hypothetical' observer moves towards the ‘higher dimension’ of the speed of light: (Of note: This following video was made by two Australian University Physics Professors with a supercomputer.)
Approaching The Speed Of Light - Optical Effects - video http://www.metacafe.com/watch/5733303/
Here is the interactive website, with link to the relativistic math at the bottom of the page, related to the preceding video;
Seeing Relativity http://www.anu.edu.au/Physics/Searle/
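The 'clock appears to stop' intuition can be made quantitative: special relativity's time-dilation factor γ = 1/√(1 - v²/c²) grows without bound as v approaches c. A minimal sketch (the sample speeds are arbitrary):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c (0 <= beta < 1)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Gamma diverges as v -> c: at v = c the factor would be infinite,
# which is the formal sense in which "time stops" at light speed.
for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c -> gamma = {lorentz_gamma(beta):.2f}")
```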
Moreover, there is an even higher quality of information found in the cell than even the 'higher dimensionality' of energy:
Quantum Information/Entanglement In DNA - Elisabeth Rieper - short video http://www.metacafe.com/watch/5936605/

Light and Quantum Entanglement Both Reflect Some Characteristics Of God - video http://www.metacafe.com/watch/4102182
As to how this relates to the hierarchy of information in the cell: materialism had postulated for centuries that everything reduced to, or emerged from, material atoms, yet the correct structure of reality is now found by science to be as follows:
1. material particles (mass) normally reduce to energy (e=mc^2)
2. energy and mass both reduce to information (quantum teleportation)
3. information reduces to consciousness (geometric centrality of conscious observation in universe dictates that consciousness must precede quantum wave collapse to its single bit state)
bornagain77
August 7, 2012 04:02 AM PST
paulmc, you mention several mutation scenarios for DNA, for 'presupposed' completely functionless sequences, yet it is interesting to note that 'simple' sequences of DNA are in many respects now found to be the 'bottom rung of the ladder' in the information hierarchy of the cell. A 'scratch the surface' overview of the information hierarchy is here:
Multidimensional Genome - Dr. Robert Carter - video http://www.metacafe.com/watch/8905048/

The Extreme Complexity Of Genes - Dr. Raymond G. Bohlin http://www.metacafe.com/watch/8593991/
But perhaps of more interest, at least for me, is that energy itself is now found to be communicating information in the cell:
Cellular Communication through Light Excerpt: Information transfer is a life principle. On a cellular level we generally assume that molecules are carriers of information, yet there is evidence for non-molecular information transfer due to endogenous coherent light. This light is ultra-weak, is emitted by many organisms, including humans and is conventionally described as biophoton emission. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005086

An Electric Face: A Rendering Worth a Thousand Falsifications - September 2011 Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peek at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.” http://darwins-god.blogspot.com/2011/09/electric-face-rendering-worth-thousand.html

The (Electric) Face of a Frog - video http://www.youtube.com/watch?v=ndFe5CaDTlI

Not in the Genes: Embryonic Electric Fields - Jonathan Wells - December 2011 Excerpt: although the molecular components of individual sodium-potassium channels may be encoded in DNA sequences, the three-dimensional arrangement of those channels -- which determines the form of the endogenous electric field -- constitutes an independent source of information in the developing embryo. http://www.evolutionnews.org/2011/12/not_in_the_gene054071.html

Biophotons - The Light In Our Cells - Marco Bischof - March 2005 Excerpt page 2: The Coherence of Biophotons: ,,, Biophotons consist of light with a high degree of order, in other words, biological laser light. Such light is very quiet and shows an extremely stable intensity, without the fluctuations normally observed in light. Because of their stable field strength, its waves can superimpose, and by virtue of this, interference effects become possible that do not occur in ordinary light. Because of the high degree of order, the biological laser light is able to generate and keep order and to transmit information in the organism. http://www.international-light-association.eu/PDF/Biophotons.pdf
bornagain77
August 7, 2012 04:02 AM PST