Uncommon Descent Serving The Intelligent Design Community

Biological Neg-Entropy


Some of you might have heard that Jonathan Schaeffer and his team at the U. of Alberta recently solved the game of checkers. It made big news in the computer science world.

I first met Jon at the First Computer Olympiad in London (organized by the famous David Levy of chess and computer-chess fame) at which Jon’s program won the gold medal and mine won the silver.

Jon and his team eventually computed the eight-piece endgame database for checkers, and later my colleague Ed Trice and I computed it as well. Jon and I compared results, and it turned out that his database had errors that had evaded his error-detection scheme. This scheme produced internally consistent results, despite the errors. Later, Jon detected errors in my database, which were traced back to a scratch on a CD that evaded my error-detection scheme.

All the errors were eventually traced to data transfer anomalies and not the generative computational algorithms, so CRC (cyclic redundancy check) methods were used to solve the problem.
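The CRC fix described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the actual database format; the payload is a stand-in.

```python
import zlib

def crc32_of(data: bytes) -> int:
    """CRC-32 checksum of a byte string (zlib's polynomial)."""
    return zlib.crc32(data) & 0xFFFFFFFF

# The sender computes a checksum before the transfer...
payload = b"eight-piece endgame database block"   # stand-in data
checksum = crc32_of(payload)

# ...and the receiver recomputes it afterward.
received = payload   # imagine this arrived over a network or from a CD
assert crc32_of(received) == checksum   # clean transfer

# A single flipped bit -- the kind of damage a scratched CD causes --
# changes the checksum and is caught:
corrupted = bytes([received[0] ^ 0x01]) + received[1:]
assert crc32_of(corrupted) != checksum   # corruption detected
```

The point is that a short checksum computed before and after a transfer catches corruption with very high probability, without recomputing the database itself.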

Check out
http://www.cs.ualberta.ca/~chinook/thankyou/
Jon thanks three of us (Ed Gilbert, Gilbert Dodgen, and Ed Trice — how’s that for a strange combination of names?) for database verification.

The bottom line is this: Errors creep in easily, are difficult to detect, and are even more difficult to correct.

Biology apparently does much more than detect and correct errors. It is not only anti-entropic, it is neg-entropic; that is, it mysteriously produces new information despite all the forces of nature that attempt to drive it in the opposite direction.

This is something that materialistic evolutionary theory is completely impotent to explain. How one cannot arrive at a design inference from this obvious evidence is a complete mystery to me.

Comments
Patrick, thanks for the post! This one deserves its own thread, but instead it is languishing in an inactive thread. I have heard of this data, but have been seeking hard validation. It would seem correct that when an RNA-world replicator gets replicating, it devolves rather than evolves. On the flip side, if we can find naturally occurring RNA replicase, we can produce a devolving replicator with more chance than the UPB.

bFast
July 4, 2008 at 04:22 PM PDT
"You can look at chemical replicators like the Spiegelman monster to get an estimate of what happens in the absence of elaborate error correction."

Interesting. http://en.wikipedia.org/wiki/Spiegelman_Monster

"After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. Such a short RNA had been able to replicate very quickly in these unnatural circumstances. In 1997, Eigen and Oehlenschlager showed that the Spiegelman monster eventually becomes even shorter, containing only 48 or 54 nucleotides, which are simply the binding sites for the reproducing enzyme RNA replicase."

No word on whether this replicating core, which is not even a fully self-replicating RNA molecule, could continue avoiding an error catastrophe indefinitely due to the high number of replications per generation. Bacteria and viruses can pull this off, but their error rate is also significantly lower.

"But here we hit a paradox. If a genome is too short, it can't store enough information to build the copying machinery itself. Eigen believes that the simplest replication equipment requires much more information than could ever have been accommodated in a primitive nucleic-acid sequence. To reach the sort of length needed for the necessary copying enzymes, the genome risks falling foul of the very error catastrophe it is trying to combat. To put it simply: complex genomes demand reliable copying, and reliable copying demands complex genomes."

And besides a simple chemical replicator, there is still the problem of producing more information in the form of a symbolic language. But all evidence points to such additional complexity being pared down out of necessity before anything could become functional.

Anyway, let me reiterate the main point by kvwells: BIOLOGICAL SYSTEMS require error correction mechanisms to persist for long time periods. Not simple chemical replicators... biological systems, with all their uncomfortable complexity and symbolic languages.

The error rate of human DNA polymerase is approximately 10^-9 per base (or about 3 mutations per genome replication). Viral RNA and DNA polymerases are much more error prone, with error rates ranging from 10^-4 to 10^-7. Even simple viruses, which some don't even count as life, need error correction in order to maintain their functional IC core.

Patrick
July 3, 2008 at 10:58 AM PDT
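Patrick's polymerase figures above can be sanity-checked with a quick back-of-the-envelope calculation. The genome sizes here are rough round numbers, assumed for illustration.

```python
# Sanity check of the polymerase error rates quoted above.
# Genome sizes are rough round numbers, assumed for illustration.
human_error_rate = 1e-9      # errors per base pair copied, after proofreading
human_genome_size = 3e9      # base pairs in the human genome (approx.)

mutations_per_replication = human_error_rate * human_genome_size
print(mutations_per_replication)   # ~3, matching "3 mutations per genome replication"

# A viral RNA polymerase at the high end (1e-4 errors/base) copying
# a 10,000-base genome makes roughly one error every replication:
viral_error_rate = 1e-4
viral_genome_size = 1e4
print(viral_error_rate * viral_genome_size)   # ~1
```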
kvwells: "2. biological systems require error correction mechanisms to persist for long time periods."

I'm not clear on what this means. If a replicating system is producing a Malthusian increase in population, you don't need an elaborate error-correcting system. Defective individuals simply don't show up in subsequent generations. You can look at chemical replicators like the Spiegelman monster to get an estimate of what happens in the absence of elaborate error correction.

In electronic amplifiers, you can get quite good error correction with the addition of a single, passive component, a resistor. Passive error correction would not inhibit more sophisticated kinds of error correction -- whether developed through chance or through design.

Petrushka
July 3, 2008 at 09:07 AM PDT
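Petrushka's point about Malthusian increase substituting for error correction can be sketched as a toy simulation. Everything here (genome encoding, rates, population cap) is an illustrative assumption, not a biological model: faulty copies are simply filtered out each generation.

```python
import random

def replicate(genome: str, error_rate: float) -> str:
    """Copy a genome, flipping each symbol with probability error_rate."""
    return "".join(
        ("1" if c == "0" else "0") if random.random() < error_rate else c
        for c in genome
    )

def generation(pop, target, error_rate, offspring=4):
    """Each individual leaves several copies; only functional copies persist."""
    children = [replicate(g, error_rate) for g in pop for _ in range(offspring)]
    # The filter: defective individuals simply don't show up
    # in the next generation.
    return [g for g in children if g == target][:100]  # cap the population

random.seed(0)
target = "0" * 20
pop = [target] * 10
for _ in range(50):
    pop = generation(pop, target, error_rate=0.01)

# The functional form persists without any repair mechanism, purely
# because faulty copies are discarded.
print(len(pop))
```

Because each survivor leaves several copies and most copies are faithful, the functional genome persists indefinitely even though nothing is ever repaired.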
KV: As the Spartans used to say: IF. (In short, since 1 is not an established and unchallengeable fact, the utter improbability of 4 within the gamut of the observed cosmos, for search-space reasons, leads to the inference -- yet again -- that 1 is highly suspect.)

GEM of TKI

kairosfocus
July 3, 2008 at 12:28 AM PDT
To take a shot at framing a philosophical framework:

1. Life develops from inanimate matter, unguided.
2. Biological systems require error correction mechanisms to persist for long time periods.
3. Life has persisted for a long period of time.
4. Therefore, 3 and 1 require that error correction mechanisms can develop through blind errors accumulated over time, filtered through changing environmental parametric limits.

If one takes 1 and 2 to be true, the argument is over, regardless of how incredible a statement 4 may seem to some.

kvwells
July 2, 2008 at 03:50 PM PDT
"Apparently? Do you have any evidence for this? There are error correction mechanisms in biology but while these mechanisms are interesting, they would hardly seem sufficient to overwhelm the second law." What Second Law would that be? Every time a zygote develops into an adult there is an island of "negative entropy," but such islands do not violate thermodynamics. Nor do populations that change over time. Over astronomical units of time, the piper must be paid, but no laws of thermodynamics are violated by living things, or living populations.Petrushka
July 2, 2008 at 11:49 AM PDT
"Biology apparently does much more than detect and correct errors. It is not only anti-entropic, it is neg-entropic; that is, it mysteriously produces new information despite all the forces of nature that attempt to drive it in the opposite direction." Apparently? Do you have any evidence for this? There are error correction mechanisms in biology but while these mechanisms are interesting, they would hardly seem sufficient to overwhelm the second law. This is a God-of-the-gaps argument. It's a very interesting one though because it means that non-natural error-correcting in the genetic code should be observable. You should think carefully about what you are suggesting here. If biology is constantly in the "mysterious" process of correcting itself through non-natural means, as you seem to suggest, than that means these corrections should be observable within a population over time. You are making a prediction that I doubt will be substantiated. More likely is that biological life is not really billions of years old and the current amount of errors in the code are consistent with a young biological life. You implicitly acknowledge that there aren't enough errors in the current code to believe both that life is billions of years old and no corrections have taken place. Therefore, if you never observe corrections taking place, you will be forced to discard this hypothesis. However I commend you for making a testable hypothesis, but I predict it will be falsified.tragicmishap
July 2, 2008 at 08:32 AM PDT
bornagain77, (8) "only creationists would ask a question like that". Welcome to the creationist camp. ;) I wonder if he had any idea of your belief regarding the age of life on earth.

Why is it that creationism, let alone ID, is regarded as a "science-stopper," when we can ask perfectly empirically answerable questions like this -- questions that Darwinists would never ask, and for which the available evidence seems to favor a non-Darwinian answer? However, I can understand Vreeland's sensitivity, especially given the obvious questions that can be asked regarding time, given his experimental results.

Paul Giem
July 1, 2008 at 05:05 PM PDT
Thanks BA. The 250-million-year-old bacteria do fit very badly into the square hole of neo-Darwinian evolution. It also strongly supports Denton's suggestion that the state of the cytochrome c gene is the product of an intentional pattern, not drift. (I would love to see a specific study of the cytochrome c in this bacteria.) That said, many of us IDers presume that Darwinian processes, including drift, are pieces of the evolutionary puzzle. This finding challenges this perspective as well.

bFast
June 30, 2008 at 08:18 PM PDT
bfast, Vreeland is the scientist behind the 250-million-year-old bacteria that are extremely similar to modern bacteria; that is why his refusal to give me a direct answer on fitness-level tests seemed odd. Indeed, he had to go through all sorts of hoops with his peers to firmly establish that it was indeed an ancient bacterium and not a modern contaminant. But then again, with as much hostility as he generated among his evolutionary peers with the revelation of extreme similarity in the first place, I can see why he may be a bit "controversy" shy right now.

bornagain77
June 30, 2008 at 11:38 AM PDT
Gil, great post. It seems to me that error correction mechanisms in bio systems ought, all by themselves, to prove ID. It is impossible for error correction to exist in a complex system without intelligence. How do you know there is an error? How do you know how to correct it? Programmers know this intuitively. You must know the states a system can be in at any given time during its execution. Exception trapping in computer applications is a necessity. Try/catch constructs exist for this and are used abundantly in any well-designed system of any size.

So we have a self-reference or "self-awareness" -- akin to that of information systems -- in DNA for trapping errors. Where did it come from? Error correction is the same. It requires intelligence by its very nature. Pre-knowledge of correct states is absolutely a must to get anything "back on the right track." Mutations are mistakes -- exceptions.

Analogy: You drive your car and notice a gauge that says the temperature is above normal -- but only because you know what the right temperature range is. Conclusion? Too much heat is being generated somewhere in the engine. Friction = entropy. Cause? You have to stop and check the oil level, the fans, the cooling fluid, a stuck valve, or whatever. When you find the fault you can fix it by restoring fluids to the correct levels, replacing a faulty part, etc. All this because you "know" something about the proper state of the system.

DNA has error trapping and correction mechanisms. DNA must therefore be designed by some intelligence that knows what an acceptable state ought to be, knows how to detect erroneous states, and knows how to restore state to normal -- within certain limits, of course. Some exceptions will crash your application; others will cause it to produce erroneous data, etc. Error detection and correction intrinsically imply knowledge and thus cannot come to exist without it.

Because Darwinists have utterly failed at both grasping this and explaining it, they must be immune to logic or something worse.

Borne
June 30, 2008 at 10:03 AM PDT
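Borne's point about exception trapping can be illustrated with a small Python sketch. The function, its input format, and the valid temperature range are hypothetical examples, chosen only to show the trap/reject pattern.

```python
def read_temperature(raw: str) -> float:
    """Parse a sensor reading, trapping errors instead of crashing."""
    try:
        value = float(raw)
    except ValueError:
        # Error *trapping*: anything outside the known-valid set of
        # states is caught here rather than propagating upward.
        return float("nan")
    # Error *rejection* needs pre-knowledge of the acceptable range:
    if not (-40.0 <= value <= 150.0):
        return float("nan")
    return value

print(read_temperature("98.6"))   # 98.6
print(read_temperature("oops"))   # nan
```

Both branches depend on knowing in advance what a correct state looks like: the parser defines valid syntax, and the range check defines valid values.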
Hasn't there been some 250-million-year-old bacteria that have been revived? Hasn't this bacteria also shown a shocking lack of genetic variety with respect to modern bacteria?

bFast
June 30, 2008 at 09:48 AM PDT
"I also inquired of Dr. Vreeland for such a fitness test and was rather bruskly told something to the effect that “only creationists would ask a question like that”." Of course. And this is the attitude that makes Darwinians look so blind. They can't and won't even ask the right questions. They don't want to look in the "wrong" directions that might lead them to see the intrinsic flaws in their religiously believed theory.Borne
June 30, 2008 at 09:35 AM PDT
Slightly off topic: Protein Translation (please note the "factory" background noises they added for sound effects):
http://www.youtube.com/watch?v=nl8pSlonmA0&feature=related

bornagain77
June 30, 2008 at 07:19 AM PDT
Gil, the best evidence I have seen against a "long term" front-loading scenario is the ancient "revived" bacteria that remain exceedingly stable across their entire genome for millions of years (Cano, Vreeland). In the case of Dr. Cano's 25-to-40-million-year-old bacteria, I was able to get some feedback from him that not only is the genome staying surprisingly stable, but the minor change that is found in the genome sequence appears to be obeying Genetic Entropy. In his response to an inquiry about a "fitness" test, he states:

"We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative 'ancient' B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate. No antimicrobial panel was used."

Thus the "narrowing" of the panel of substrates indicates a loss of complexity for the descendant strains, staying within the principle of Genetic Entropy. It should be noted that Dr. Cano is undecided on whether this firmly indicates a loss of complexity, but the implications seem pretty obvious to me. I also inquired of Dr. Vreeland for such a fitness test and was rather brusquely told something to the effect that "only creationists would ask a question like that". Yet Dr. Vreeland's bacteria match the same overall pattern as Dr. Cano's, so I am fairly certain that the appropriate tests on panels of substrates would also stay within Genetic Entropy, and in fact I see no reason in Dembski's, Behe's, Gitt's, or Sanford's work that I should expect otherwise.

In my opinion, the stability of the genome, and its adherence to entropy, over so many millions of years argues with much weight against any unused "front loaded" genome scenario, and argues forcefully that we are dealing with optimal genomes that are 100% functional, though the suggestive "knockout" experiments seem to indicate otherwise. In regard to the suggestive "knockout" experiments, I would like to point out that the ENCODE findings highlighted how little we actually do understand of genome functionality.

bornagain77
June 30, 2008 at 06:34 AM PDT
The genetic entropy problem raises questions for me about the front-loading hypothesis. If, even with error detection and correction, which can never be perfect, species eventually become extinct as a result of genetic informational decay, how would the presumably long-preserved information, waiting to be expressed at a much later date (like a billion years), evade the informational decay that causes extinctions? I'm perfectly open to following the evidence wherever it leads, unlike our Darwinist detractors.

GilDodgen
June 29, 2008 at 07:08 PM PDT
"Negative feedback control is used extensively in industrial chemical process control and is used to keep a certain process (flow, level, temp., pressure, etc.) on a preselected value."

Governors are an example of negative feedback, but not the only, or most common, one. Closer to what I have in mind is the negative feedback employed in amplifiers to reduce distortion. One side effect is a reduction in amplification. Living things also amplify, producing copies of their genomes and cellular machinery in the well-known Malthusian exponential population rise. Many things keep the population in check and, as a side effect, guarantee that copy errors, analogous to distortion, do not result in the inability to reproduce.

But selection, viewed as a kind of feedback, differs from feedback in an electronic circuit. In an amplifier, feedback tracks and tends to eliminate variations between input and output. Biological selection only ensures that changes do not degrade the ability to compete and reproduce. Change, in and of itself, is not prevented.

I don't think information and entropy are useful metaphors in such a system. Change, in and of itself, is neither improvement nor degradation. There really isn't any way to quantify an increase or decrease of information except by reference to reproductive success.

Petrushka
June 29, 2008 at 04:38 PM PDT
Off-topic video you may like, Gil: Aerial ballet
http://www.godtube.com/view_video.php?viewkey=0315c9b2379ec0b96ed4

bornagain77
June 29, 2008 at 04:34 PM PDT
Negative feedback control is used extensively in industrial chemical process control, where it is used to keep a certain process variable (flow, level, temperature, pressure, etc.) at a preselected value. Thus negative feedback, as we have it employed in industry, never creates new information, but only acts on "preprogrammed" information.
http://en.wikipedia.org/wiki/Control_theory

bornagain77
June 29, 2008 at 02:37 PM PDT
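The industrial negative feedback bornagain77 describes is, in its simplest form, a proportional controller. Here is a minimal sketch; the gain, setpoint, and process model are arbitrary illustrative values, not any particular plant.

```python
def proportional_control(setpoint: float, measured: float, gain: float = 0.5) -> float:
    """Negative feedback: the correction opposes the error signal."""
    error = setpoint - measured
    return gain * error

# Drive a process variable toward a preselected value of 100.0.
# Note that the controller creates no new information: the setpoint
# is "preprogrammed", and the loop only acts on deviations from it.
value = 20.0
for _ in range(50):
    value += proportional_control(100.0, value)

print(round(value, 2))   # converges to the preselected value, 100.0
```

Each iteration halves the remaining error, so the variable settles at the setpoint; the loop can hold a value but cannot invent one.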
"Biology apparently does much more than detect and correct errors. It is not only anti-entropic, it is neg-entropic; that is, it mysteriously produces new information despite all the forces of nature that attempt to drive it in the opposite direction." It's called negative feedback, isn't it?Petrushka
June 29, 2008 at 07:33 AM PDT
Does this mean I can never win in checkers? Congratulations Gil, and I am with you on the "complete mystery".

idnet.com.au
June 29, 2008 at 12:30 AM PDT
Nice way of putting it, Gil. It's one of the best examples of an entropy argument I've heard in a long time.

jpark320
June 28, 2008 at 08:52 PM PDT
