
Oldies but baddies — AF repeats NCSE’s eight challenges to ID (from ten years ago)


In a recent thread by Dr Sewell, AF raised again the Shallit-Elsberry list of eight challenges to design theory from a decade ago:

14 Alan Fox, April 15, 2013 at 12:56 am: Unlike Professor Hunt, Barry and Eric think design detection is well established. How about having a go at this list then. It’s been published for quite a while now.

I responded a few hours later:

______________

>>* 16 kairosfocus April 15, 2013 at 2:13 am

AF:

I note on points re your list of eight challenges.

This gets tiresomely repetitive, in a pattern of refusal to be answerable to adequate evidence, on the part of too many objectors to design theory:

>>1 Publish a mathematically rigorous definition of CSI>>

It has long since been shown, objections and censorship games notwithstanding, that reasonable quantitative metrics for FSCO/I and so for CSI, can be built and have been built. Indeed Durston et al have used such to provide a published list of values for 15 protein families.
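By way of illustration, here is a minimal sketch of such a metric in the spirit of Durston et al.’s functional sequence complexity measure (functional bits, or “fits”): per aligned site, subtract the observed Shannon uncertainty of the functional family from the null-state uncertainty of an unconstrained sequence, and sum over sites. The function name and the tiny alignment are illustrative assumptions, not Durston’s data or code.

```python
import math
from collections import Counter

def functional_bits(aligned_seqs, alphabet_size=20):
    """Rough Durston-style functional sequence complexity estimate, in 'fits':
    the per-site drop in Shannon uncertainty from the null (uniform) state to
    the observed functional family. Toy illustration only, not Durston's code."""
    h_null = math.log2(alphabet_size)        # uncertainty with no functional constraint
    total = 0.0
    for i in range(len(aligned_seqs[0])):
        column = [s[i] for s in aligned_seqs if s[i] != '-']
        counts = Counter(column)
        n = sum(counts.values())
        h_site = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += h_null - h_site             # functional information credited to this site
    return total

# Hypothetical three-sequence, four-site mini-alignment:
print(functional_bits(["ACDE", "ACDQ", "ACDE"]))
```

Durston et al.’s published values for the 15 protein families were of course computed over large curated alignments; the sketch only shows the shape of the calculation.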

>> 2 Provide real evidence for CSI claims >>

Blatant, all around you. But, a man convinced against his will is of the same opinion still.

Just to pick an example {–> from the list}, a phone number is obviously functionally specific (ever had a wrong number call?) and — within a reasonable context [though not beyond the 500 bit threshold] complex.
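For a rough sense of scale, treat each digit of a ten-digit number as equiprobable (an assumption made purely for this estimate):

$$ I \approx 10 \times \log_2 10 \approx 33.2 \ \text{bits} \ll 500 \ \text{bits}, $$

so the number is functionally specific yet sits far below the 500-bit threshold used for a design inference.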

>> 3 Apply CSI to identify human agency where it is currently not known >>

FSCO/I is routinely intuitively used to identify artifacts of unknown cause, as IIRC, WmAD has pointed out regarding a room in the Smithsonian full of artifacts of unknown purpose but identified to be credibly human.

>> 4 Distinguish between chance and design in archaeoastronomy >>

The pattern of Nazca lines or the like, fit within the nodes-arcs pattern and collectively exhibit FSCO/I similar to other complex drawings. The 500 bit threshold is easily passed. If you want to contrast odds of a marker wandering randomly in a random walk, the difference will be trivial.
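A back-of-envelope estimate shows how easily that threshold is passed. Suppose, purely for illustration, that a modest geoglyph is described as 100 line segments with endpoints located on a 1,000 by 1,000 grid; the description then needs roughly

$$ 100 \times 2 \times 2\log_2 1000 \approx 4{,}000 \ \text{bits}, $$

well beyond 500 bits, before any of the larger or more intricate figures are considered.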

In short this is a refusal to use simple common sense and good will.

>> 5 Apply CSI to archaeology >>

Just shown, this is a case of repeating much the same objection in much the same context as though drumbeat repetition is capable of establishing a claim by erasing the underlying fallacies. Being wrong over and over and over again, even in the usual anti-design echo chambers, does not convert a long since corrected fallacy into cogent reasoning.

>> 6 Provide a more detailed account of CSI in biology
Produce a workbook of examples using the explanatory filter, applied to a progressive series of biological phenomena, including allelic substitution of a point mutation. >>

There are book-length cogent treatments of CSI as applied to biology [try Meyer’s SITC for starts {{ –> . . . I know, I know, this was published 2009, six years after the “challenge,” but AF is raising it in 2013, TEN years after the challenge}}], and that is not enough for the objectors, there will never be enough details.

Similarly, the objection starts within an island of existing function and demands a CSI based explanation of a phenomenon known to be well within the threshold of complexity. This is a strawman tactic.

>> 7 Use CSI to classify the complexity of animal communication As mentioned in Elsberry and Shallit (2003: 9), many birds exhibit complex songs. >>

What?

Is there any doubt that bird or whale songs or bee dances for that matter are long enough and complex enough to be FSCI? That they function in communication? That we did not directly observe the origin of the capacities for such but have reason to see that they are grounded in CSI in the genome and related regulatory information expressed in embryological development that wires the relevant nerve pathways?

So, are you demanding a direct observation of the origin of such, which we do not have access to and cannot reasonably expect, when we do have access to the fact that we have indications of FSCO/I and so raise the question as to what FSCO/I is a known reliable, strongly tested sign of as best causal explanation?

>> 8 Animal cognition
Apply CSI to resolve issues in animal cognition and language use by non-human animals. >>

Capacity for language, of course, is biologically rooted, genetically stamped and embryologically expressed. So it fits into the same set of issues addressed under 7 just now.

Repetitive use of fallacies does not suddenly convert them into sound arguments.

Nor, can one reasonably demand solutions to any number of known unresolved scientific problems as a condition of accepting something that is already well enough warranted on reasonable application of inductive principles. That is, it is well established on billions of test cases without significant exception, that FSCO/I is a reliable sign of design as cause.
____________

To suddenly demand that design thinkers must solve any number of unsolved scientific questions, or else the evidence already in hand will be rejected, is a sign of selective hyperskepticism and a red herring tactic led away to a strawman misrepresentation, not a case of serious and cogent reasoning. >>

=========

(*And yes, AF, I am modifying French-style quote marks to account for the effect of the Less Than sign in an HTML-sensitive context. No need to go down that little convenient side-track again twice within a few days. Especially, as someone by your own testimony apparently living in a Francophone area.)

NB: BA77’s comment at 17 is worth a look also. Let’s clip in modified French style, that he may clip and run that readeth:

>> Mr. Fox, it seems the gist of your eight ‘questions’ from ten years ago is that you doubt whether or not information, as a distinct entity, is even in the cell? In fact I remember many arguments with neo-Darwinists on UD, not so many years back, who denied information, as a distinct entity, was even in the cell. Is this still your position? If so, may I enlighten you to this recent development???,,,

Harvard cracks DNA storage, crams 700 terabytes of data into a single gram – Sebastian Anthony – August 17, 2012
Excerpt: A bioengineer and geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.,,, Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored.
http://www.extremetech.com/ext…..ingle-gram
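The arithmetic in that excerpt checks out under ordinary decimal units; a quick sketch:

```python
# Quick check of the figures quoted above (decimal units assumed throughout).
petabits = 5.5
terabytes = petabits * 1e15 / 8 / 1e12   # ~687.5 TB, reported as "around 700 terabytes"
blu_rays = 700e12 / 50e9                 # ~14,000 discs at 50 GB each
hdd_3tb = 700e12 / 3e12                  # ~233 drives at 3 TB each
print(round(terabytes, 1), round(blu_rays), round(hdd_3tb))
```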

That DNA stores information is pretty much the mainstream position now Mr. Fox,,,

Venter: Life Is Robotic Software – July 15, 2012
Excerpt: “All living cells that we know of on this planet are ‘DNA software’-driven biological machines comprised of hundreds of thousands of protein robots, coded for by the DNA, that carry out precise functions,” said (Craig) Venter.
http://crev.info/2012/07/life-is-robotic-software/

That information is a distinct entity in the cell is pretty uncontroversial Mr. Fox, so why the list of eight questions? The only question that really matters is can purely material processes generate these extreme levels of functional information? Perhaps you would like to be the first Darwinist on UD to produce evidence that material processes can produce enough functional information for say the self assembly of a novel molecular machine?>>

The much underestimated and too often derided BA77 continues at 18:

>> Mr. Fox, as to the fact that a cell contains functional information, I would like to, since Dr. Sewell approaches this from the thermodynamic perspective, point out something that gets missed in the definition of functional information in the specific sequences of DNA, RNAs, and proteins. There is a deep connection between entropy and information,,

“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]
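The formal parallel being alluded to is easy to state: Shannon’s information entropy and the Gibbs entropy of statistical mechanics share the same functional form, differing only by a constant factor,

$$ H = -\sum_i p_i \log_2 p_i, \qquad S = -k_B \sum_i p_i \ln p_i = (k_B \ln 2)\, H . $$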

“Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.”
Charles J. Smith – Biosystems, Vol.1, p259.

Demonic device converts information to energy – 2010
Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
http://www.scientificamerican……rts-inform
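The benchmark behind such demonstrations is the Szilard/Landauer relation: at temperature T, one bit of information corresponds to at most

$$ W = k_B T \ln 2 \approx 3 \times 10^{-21} \ \text{J at } T \approx 300\ \text{K} $$

of extractable work, equivalently the minimum heat that must be dissipated to erase one bit. The experiment described above probes this relation at the microscopic scale.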

And what is particularly interesting about this deep connection between information and entropy is that,,,

“Gain in entropy always means loss of information, and nothing more.”
Gilbert Newton Lewis – preeminent Chemist of the first half of last century

And yet despite the fact that entropic processes tend to degrade information, it is found that the thermodynamic disequilibrium of a ‘simple’ bacteria and the environment is,,,

“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
http://books.google.com/books?…..;lpg=PA112

Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
http://www.astroscu.unam.mx/~a…..ecular.htm
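As a check on the quoted relation H = S/(k ln 2): the stated ~4 x 10^12 bits only follows if the Linschitz entropy figure is read as 9.3 x 10^-12 cal/deg, so the minus sign in the exponent appears to have been lost somewhere in transmission. A short sketch, assuming that reading:

```python
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
S = 9.3e-12 * 4.2              # assumed cell entropy in J/K (reading the figure as 9.3e-12 cal/deg)
H = S / (k_B * math.log(2))    # H = S / (k ln 2), in bits
print(f"{H:.1e} bits")         # ~4.1e12 bits, in line with the quoted figure
```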

Moreover we now have good empirics to believe that information itself is what is constraining the cell to be so far out of thermodynamic equilibrium:

Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
http://journals.witpress.com/paperinfo.asp?pid=420

Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
http://www.dailygalaxy.com/my_…..ave-t.html

In fact, Encoded ‘classical’ information such as what Dembski and Marks demonstrated the conservation of, and such as what we find encoded in computer programs, and yes, as we find encoded in DNA, is found to be a subset of ‘transcendent’ (beyond space and time) quantum information/entanglement by the following method:,,,

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
http://www.sciencedaily.com/re…..134300.htm

And yet, despite all this, we have ZERO evidence that material processes can generate even trivial amounts of classical information, much less massive amounts of transcendent ‘non-local’ quantum information/entanglement,,,

Stephen Meyer – The Scientific Basis Of Intelligent Design
https://vimeo.com/32148403

Stephen Meyer – “The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question.”

Verse and Music:

John 1:1-4
In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind.

The Afters – Every Good Thing – Lyric Video
http://www.youtube.com/watch?v=FY2ycrpbOlw >>

Joe puts in a good knock at 25:

>>Earth to Alan Fox,

Neither you, Shallit, Elsberry nor the NCSE need concern yourselves with CSI. That is because all of you can render CSI moot just by stepping up and demonstrating that blind and undirected processes can account for what we call CSI.

It is that simple: demonstrate that blind and undirected processes can produce CSI and our argument wrt CSI falls.

However seeing that you all are nothing but cowards, you won’t do that because that means actually having to make a positive case. And everyone in the world knows that you cannot do such a thing.

The point is that your misguided attacks on ID are NOT going to provide positive evidence for your position. And only positive evidence for blind and undirected processes producing CSI is going to refute our arguments. >>

I picked back up from BA77 at 26:

>> BA77: The connexion between entropy and information is indeed important. I like the expression of it that runs like this: the entropy of a body is the average missing information needed to specify the exact microstate of its constituent particles, given that what one knows about the system is the thermodynamic macrostate defined by its macro-level thermodynamic properties. This of course reflects the degree of freedom, or lack of constraint, on the particles, and links to the way a rise in entropy is often associated with a rise in disorder and a degradation of the availability of energy. >>
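That reading can be written down in one standard line: for W microstates consistent with, and equally probable under, the given macrostate,

$$ S = k_B \ln W = (k_B \ln 2)\,\log_2 W , $$

so S/(k_B ln 2) = log_2 W is the average number of yes/no questions, i.e. bits, needed to pin down the exact microstate.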

_______________
And, dear Reader, what do you think AF’s answer is, several days later on this the 19th of April in this, The Year of Our Risen Lord, “dos mil trece” [= 2013]?

Dead silence, and heading off to other threads where he thought he could score debate points.

(In short, he raised dismissive talking points and stayed not for an answer. Sad.)

Let us hope that headlining the above will at least allow others who need and want such, to find a reasonable summary answer to the NCSE talking points. END

PS: Dembski and Luskin have responded at one time or another to the S-E team, try here and here (part II here; complete with AF popping up here at no 3).

Comments
I mean <<like this>>

(Alan Fox, April 19, 2013 at 7:22 am PDT)
1 Publish a mathematically rigorous definition of CSI
(Side-note, the quotation marks you are using would be correct in Finnish but not in French. Unfortunately I can't demonstrate as the characters are interpreted as HTML. The opening Guillemet "points" out from the quote, not in.) It would seem to be a simple matter, if a mathematically rigorous definition of CSI existed, to reproduce it.

(Alan Fox, April 19, 2013 at 7:14 am PDT)
But perhaps the best way to understand the reason why this is so devastating to neo-Darwinian evolution is by taking a look at what Richard Dawkins himself said about what would happen if one were to ‘randomly’ change the genetic code once it is in place:
Venter vs. Dawkins on the Tree of Life – and Another Dawkins Whopper – March 2011 Excerpt:,,, But first, let’s look at the reason Dawkins gives for why the code must be universal: “The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation…this would spell disaster.” (2009, p. 409-10) OK. Keep Dawkins’ claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code). Simple counting question: does “one or two” equal 23? That’s the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two. http://www.evolutionnews.org/2011/03/venter_vs_dawkins_on_the_tree_044681.html
The bottom line is that if any regulatory code, such as the alternative splicing code, is ‘randomly changed’ in part, then it throws the entire code out of whack and will be ‘instantly catastrophic’ to the organism, to use Richard Dawkins’ most appropriate words, thus rendering the ‘bottom up’ gradual change of neo-Darwinism impossible. i.e. The entire code must be implemented ‘top down’ when the species was originally created! A code is an all-or-nothing, take-it-or-leave-it, ‘top down’ deal! It is also interesting to remember just how hard it was to crack the alternative splicing code for humans:
Breakthrough: Second Genetic Code Revealed – May 2010 Excerpt: The paper is a triumph of information science that sounds reminiscent of the days of the World War II codebreakers. Their methods included algebra, geometry, probability theory, vector calculus, information theory, code optimization, and other advanced methods. One thing they had no need of was evolutionary theory,,, http://crev.info/content/breakthrough_second_genetic_code_revealed
Also of note, the genetic code and the 'species specific' alternative splicing codes are not the only codes to be discovered in life thus far:
“In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10]. Donald E. Johnson – Programming of Life – pg.51 – 2010
Moreover,,,
“Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source — from a mind or personal agent.” (Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, 117(2):213-239 (2004).) “A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107.” (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)
Verse and music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind. Eric Church - Like Jesus Does (Acoustic) http://www.youtube.com/watch?v=wuG1rLPjVkk
(bornagain77, April 19, 2013 at 6:26 am PDT)
But one thing is crystal clear, the trend in evidence is definitely against neo-Darwinism ever being scientifically established as the true explanation for how we got here! As is evident thus far, Darwinists have been getting squeezed from both ends over the last few years, from the protein level, the plasticity is just not there for them as it was assumed to be, and on the whole genome level, the gaps are growing wider and wider, to the point of making genetic comparisons completely embarrassing as a Darwinian talking point. Moreover with the gaps between humans and apes/chimps growing wider and wider, it is also interesting to point out that it is now known that there is also far more similarity between genomes of widely divergent species than was at first expected by Darwinists:
Kangaroo genes close to humans Excerpt: Australia’s kangaroos are genetically similar to humans,,, “There are a few differences, we have a few more of this, a few less of that, but they are the same genes and a lot of them are in the same order,” ,,,”We thought they’d be completely scrambled, but they’re not. There is great chunks of the human genome which is sitting right there in the kangaroo genome,” http://www.reuters.com/article/science%20News/idUSTRE4AH1P020081118 First Decoded Marsupial Genome Reveals “Junk DNA” Surprise – 2007 Excerpt: In particular, the study highlights the genetic differences between marsupials such as opossums and kangaroos and placental mammals like humans, mice, and dogs. ,,, The researchers were surprised to find that placental and marsupial mammals have largely the same set of genes for making proteins. Instead, much of the difference lies in the controls that turn genes on and off. http://news.nationalgeographic.com/news/2007/05/070510-opossum-dna.html
Moreover,,
Family Ties: Completion of Zebrafish Reference Genome Yields Strong Comparisons With Human Genome – Apr. 17, 2013 Excerpt: Researchers demonstrate today that 70 per cent of protein-coding human genes are related to genes found in the zebrafish,,, http://www.sciencedaily.com/releases/2013/04/130417131725.htm
Now, at least for me, the percent genetic similarity between apes and humans falling precipitously from the original 99% mark that Darwinists had originally, from a point of forcing evidence into a desired conclusion, misled the general public to believe, as well as the percent genetic similarity being far higher for species that were thought to be widely divergent from humans, should have, by all rights, stopped the molecular reductionism model of neo-Darwinism (DNA makes RNA makes proteins) dead in its tracks, or at least it should have, at least, put a severe pause in the step of many of the neo-Darwinists who come on UD, such as wd400, who continually push the modern synthesis of neo-Darwinism as if none of this crushing evidence has even come to light.,, But aside from dealing the mendacity of the neo-Darwinists who come on UD,,, Joe pointed something out to me that this 'problem' has actually been dealt with to some degree by people who hold a evolutionary viewpoint (although apparently they hold it not as dogmatically as the UD Darwinists typically do),,, Joe stated,,
That has all changed thanks to evo-devo. Now it isn’t so much changing genotypes but the way the genotypes are used, especially during development. Same genes used in different ways is now the mechanism for macroevolutionary change – see Shubin “Your Inner Fish”.
Now evo-devo, as far as I know, tries to explain body plan morphogenesis (macro-evolution) by appealing to mutations in the regulatory sequences of DNA which regulate gene expression. (i.e. as far as I know it is molecular reductionism with a twist!), but, as I pointed out to Joe, Dr. Nelson has addressed this issue,,
Darwin or Design? – Paul Nelson at Saddleback Church – Nov. 2012 – ontogenetic depth – No Evidence For Body Plan Morphogenesis From Embryonic Mutations (excellent update) – video Text from one of the Saddleback slides: 1. Animal body plans are built in each generation by a stepwise process, from the fertilized egg to the many cells of the adult. The earliest stages in this process determine what follows. 2. Thus, to change — that is, to evolve — any body plan, mutations expressed early in development must occur, be viable, and be stably transmitted to offspring. 3. But such early-acting mutations of global effect are those least likely to be tolerated by the embryo. Losses of structures are the only exception to this otherwise universal generalization about animal development and evolution. Many species will tolerate phenotypic losses if their local (environmental) circumstances are favorable. Hence island or cave fauna often lose (for instance) wings or eyes. http://www.saddleback.com/mc/m/7ece8/ Understanding Ontogenetic Depth, Part II: Natural Selection Is a Harsh Mistress – Paul Nelson – April 7, 2011 http://www.evolutionnews.org/2011/04/understanding_ontogenetic_dept_1045581.html
Moreover,,,
Evolution by Splicing – Comparing gene transcripts from different species reveals surprising splicing diversity. – Ruth Williams – December 20, 2012 Excerpt: A major question in vertebrate evolutionary biology is “how do physical and behavioral differences arise if we have a very similar set of genes to that of the mouse, chicken, or frog?”,,, A commonly discussed mechanism was variable levels of gene expression, but both Blencowe and Chris Burge,,, found that gene expression is relatively conserved among species. On the other hand, the papers show that most alternative splicing events differ widely between even closely related species. “The alternative splicing patterns are very different even between humans and chimpanzees,” said Blencowe.,,, http://www.the-scientist.com/?articles.view%2FarticleNo%2F33782%2Ftitle%2FEvolution-by-Splicing%2F The mouse is not enough – February 2011 Excerpt: Richard Behringer, who studies mammalian embryogenesis at the MD Anderson Cancer Center in Texas said, “There is no ‘correct’ system. Each species is unique and uses its own tailored mechanisms to achieve development. By only studying one species (eg, the mouse), naive scientists believe that it represents all mammals.” http://www.the-scientist.com/news/display/57986/ ,,,Alternative splicing,,, may contribute to species differences – December 21, 2012 Excerpt: After analyzing vast amounts of genetic data, the researchers found that the same genes are expressed in the same tissue types, such as liver or heart, across mammalian species. However, alternative splicing patterns—which determine the segments of those genes included or excluded—vary from species to species.,,, The results from the alternative splicing pattern comparison were very different. Instead of clustering by tissue, the patterns clustered mostly by species. “Different tissues from the cow look more like the other cow tissues, in terms of splicing, than they do like the corresponding tissue in mouse or rat or rhesus,” Burge says. Because splicing patterns are more specific to each species, it appears that splicing may contribute preferentially to differences between those species, Burge says,,, Excerpt of Abstract: To assess tissue-specific transcriptome variation across mammals, we sequenced complementary DNA from nine tissues from four mammals and one bird in biological triplicate, at unprecedented depth. We find that while tissue-specific gene expression programs are largely conserved, alternative splicing is well conserved in only a subset of tissues and is frequently lineage-specific. Thousands of previously unknown, lineage-specific, and conserved alternative exons were identified; http://phys.org/news/2012-12-evolution-alternative-splicing-rna-rewires.html
Finding widely different ‘alternative splicing codes’ in different species is devastating to neo-Darwinism because of neo-Darwinism’s inability to account for any changes of any fundamental code once it is in place. The reason why drastically different alternative splicing codes between different species is devastating to neo-Darwinian evolution is partly seen by understanding ‘Shannon Channel Capacity’:
“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible” Donald E. Johnson – Bioinformatics: The Information in Life Shannon Information – Channel Capacity – Perry Marshall – video http://www.metacafe.com/watch/5457552/
(bornagain77, April 19, 2013 at 6:25 am PDT)
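On the channel-capacity point in the comment just above: for a discrete noiseless channel over an alphabet of q symbols the capacity is

$$ C = \log_2 q \ \text{bits per symbol}, $$

and information cannot be reliably conveyed at a rate above capacity. That is the constraint the quoted Johnson passage appeals to in arguing that a predecessor codon alphabet could not have been informationally simpler while carrying the same messages in the same number of symbols.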
Before Mr. Fox steps in with his usual obfuscation and denial of the facts already in evidence, I would like to point out that the 'problem of information' is far deeper than Mr. Fox, and perhaps many others, realize.,,, Usually ID argues that strings of functional information, such as what we find in DNA, RNAs and proteins, can't be arrived at by Darwinian, chance and necessity, processes. And this is indeed a VERY powerful argument against neo-Darwinism that has nothing less than a null hypothesis backing up its assertion (Dembski, Marks, Abel). Moreover, as was pointed out recently, the trend in evidence in recent years, has decidedly been against Darwinian processes ever falsifying the null. To recap this trend in evidence (that I went over the other day). It was at first assumed by the ID community, at least as far as I am aware, that the plasticity of proteins was far greater than it is now found to be. And as such far more leeway was granted to Darwinian explanations at that time. With ID, at that time, focused mainly on the origin of proteins and IC molecular machines and systems. But now, in the past few years, the limits for what Darwinian processes can actually accomplish has been found, empirically, to be much greater than what was originally granted to them. Dr. Behe relates that sentiment here:
Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009 Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed. http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html
In 2010 Ann K. Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke came along and found:
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness – May 2010 Excerpt: Despite the theoretical existence of this short adaptive path to high fitness, multiple independent lines grown in tryptophan-limiting liquid culture failed to take it. Instead, cells consistently acquired mutations that reduced expression of the double-mutant trpA gene. Our results show that competition between reductive and constructive paths may significantly decrease the likelihood that a particular constructive path will be taken. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.2
Then in 2011 Douglas Axe and Ann Gauger came along and drove the nail home:
The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway – Ann K. Gauger and Douglas D. Axe – April 2011 Excerpt: We infer from the mutants examined that successful functional conversion would in this case require seven or more nucleotide substitutions. But evolutionary innovations requiring that many changes would be extraordinarily rare, becoming probable only on timescales much longer than the age of life on earth. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2011.1/BIO-C.2011.1
But what does all this mean as to the constraints for neo-Darwinian processes? Well Dr. Gauger lays out the implications here:
More from Ann Gauger on why humans didn’t happen the way Darwin said – July 2012 Excerpt: Each of these new features probably required multiple mutations. Getting a feature that requires six neutral mutations is the limit of what bacteria can produce. For primates (e.g., monkeys, apes and humans) the limit is much more severe. Because of much smaller effective population sizes (an estimated ten thousand for humans instead of a billion for bacteria) and longer generation times (fifteen to twenty years per generation for humans vs. a thousand generations per year for bacteria), it would take a very long time for even a single beneficial mutation to appear and become fixed in a human population. You don’t have to take my word for it. In 2007, Durrett and Schmidt estimated in the journal Genetics that for a single mutation to occur in a nucleotide-binding site and be fixed in a primate lineage would require a waiting time of six million years. The same authors later estimated it would take 216 million years for the binding site to acquire two mutations, if the first mutation was neutral in its effect. Facing Facts But six million years is the entire time allotted for the transition from our last common ancestor with chimps to us according to the standard evolutionary timescale. Two hundred and sixteen million years takes us back to the Triassic, when the very first mammals appeared. One or two mutations simply aren’t sufficient to produce the necessary changes— sixteen anatomical features—in the time available. At most, a new binding site might affect the regulation of one or two genes. https://uncommondescent.com/intelligent-design/more-from-ann-gauger-on-why-humans-didnt-happen-the-way-darwin-said/
Moreover, while ID was doing the dirty work that Darwinists refused to do, i.e. finding out what the limits to Darwinian processes actually were/are, from the other end of the spectrum it was/is becoming more and more evident that the supposed genetic similarity between species, particularly between man and apes, was far greater than Darwinists had originally misled the general public to believe. Falling from approximately 99% similarity to now around 85% to 70% or even lower genetic similarity:
Comprehensive Analysis of Chimpanzee and Human Chromosomes Reveals Average DNA Similarity of 70% – by Jeffrey P. Tomkins – February 20, 2013 Excerpt: For the chimp autosomes, the amount of optimally aligned DNA sequence provided similarities between 66 and 76%, depending on the chromosome. In general, the smaller and more gene-dense the chromosomes, the higher the DNA similarity—although there were several notable exceptions defying this trend. Only 69% of the chimpanzee X chromosome was similar to human and only 43% of the Y chromosome. Genome-wide, only 70% of the chimpanzee DNA was similar to human under the most optimal sequence-slice conditions. While, chimpanzees and humans share many localized protein-coding regions of high similarity, the overall extreme discontinuity between the two genomes defies evolutionary timescales and dogmatic presuppositions about a common ancestor. http://www.answersingenesis.org/articles/arj/v6/n1/human-chimp-chromosome
As well please note the conservative nature of the preceding study in this excerpt from materials and methods section of the preceding paper:
"The definition of similarity for each chimp chromosome was the amount (percent) of optimally aligned chimp DNA (minus ‘N’s). This definition was considered to be quite conservative because it did not include the amount of human DNA absent in the chimp genome nor does it include chimp DNA that could not be aligned to the human genome assembly—a category of chimp DNA termed “unanchored contigs”."
But if a person were to ask, "how much DNA is absent between the two species?", the answer is far larger than most people would have expected. It is now found that a significant percentage of all genomes sequenced, including humans, have completely unique ORFan genes with no traceable evolutionary lineage:
Genes from nowhere: Orphans with a surprising story – 16 January 2013 – Helen Pilcher Excerpt: When biologists began sequencing genomes they discovered up to a third of genes in each species seemed to have no parents or family of any kind. Nevertheless, some of these “orphan genes” are high achievers (are just as essential as ‘old’ genes),,, Orphan genes have since been found in every genome sequenced to date, from mosquito to man, roundworm to rat, and their numbers are still growing. http://ccsb.dfci.harvard.edu/web/export/sites/default/ccsb/publications/papers/2013/All_alone_-_Helen_Pilcher_New_Scientist_Jan_2013.pdf Orphan Genes (And the peer reviewed ‘non-answer’ from Darwinists) – video http://www.youtube.com/watch?v=1Zz6vio_LhY
This early study found,,
Human Gene Count Tumbles Again – 2008 Excerpt:,, the analysis revealed 1,177 “orphan” DNA sequences.,,, the researchers compared the orphan sequences to the DNA of two primate cousins, chimpanzees and macaques. After careful genomic comparisons, the orphan genes were found to be true to their name — they were absent from both primate genomes. http://www.sciencedaily.com/releases/2008/01/080113161406.htm
Even Jerry Coyne, who is certainly not friendly to ID, nor to Christians in general, admitted that,,
From Jerry Coyne, More Table-Pounding, Hand-Waving – May 2012 Excerpt: “More than 6 percent of genes found in humans simply aren’t found in any form in chimpanzees. There are over fourteen hundred novel genes expressed in humans but not in chimps.” Jerry Coyne – ardent and ‘angry’ neo-Darwinist – professor at the University of Chicago in the department of ecology and evolution for twenty years. He specializes in evolutionary genetics.
Where will percentage difference between chimps and humans finally end up? Nobody really knows,,,
Ten years on, still much to be learned from human genome map – April 12, 2013 Excerpt:,,,”What’s more, about 10 percent of the human genome still hasn’t been sequenced and can’t be sequenced by existing technology, Green added. “There are parts of the genome we didn’t know existed back when the genome was completed,” he said.,,, http://medicalxpress.com/news/2013-04-ten-years-human-genome.html
(bornagain77, April 19, 2013 at 6:24 am PDT)