Uncommon Descent Serving The Intelligent Design Community

Why isn’t ALL life extinct?


In another thread discussing engineers' perspectives on the machinery of life, the topic of entropy came up. Engineers have to deal with entropy in all their designs, and the very best efforts at dealing with it only slow it down, never stop it.

So one of my big questions isn't why most cell lines sooner or later go extinct, as that's easily explained by entropy. Rather, my big question is how a rare few of them have managed to persist for hundreds of millions or even billions of years.

In computer design engineering we have to deal with users changing the software load in unpredictable ways. We ship the computer with a software load we know works. It then "evolves" unpredictably as the computer is used and customized, with or without the informed consent of the owner. Often that evolution of the software load results in a system that no longer functions. Without some recovery method that particular computer, which is analogous to a single cell line, would become extinct. The ways we address this problem are many, convoluted, and complex, but I'd like to focus on a few in particular.

One method is called a "factory restore". In this method a protected image of the known working software load from the factory is used to replace the evolved load. It is typically implemented with a user-accessible trigger and an image stored on a protected segment of the hard disk, or separately on a CD-ROM.

A somewhat less effective but largely successful method of recovery from disastrous software load "evolution" is employed by more recent versions of Microsoft Windows. It uses automatic triggers to take snapshots of the current state of the software, so that if disaster strikes, the system can be restored to a previously known working state. This allows successful modifications wrought by evolution to "survive" without backtracking farther than absolutely required. It doesn't always work, because the images aren't complete snapshots (which would quickly overflow the storage capacity of the disk); rather, carefully selected critical bits are saved that usually suffice to restore a known working state.
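The restore-point scheme described above can be sketched in a few lines of Python. This is a toy illustration of my own, not code from any actual Windows component; the state keys and the health check are invented purely for the example:

```python
# Toy sketch of a restore-point scheme: keep periodic snapshots of
# selected "critical bits" of state, and roll back to the most recent
# snapshot that still passes a health check.

class SystemState:
    def __init__(self):
        self.config = {"drivers": "v1", "registry": "clean"}
        self.snapshots = []  # saved partial copies of the state

    def take_snapshot(self):
        # Save only the selected critical keys, not a full disk image,
        # to keep the storage cost low.
        self.snapshots.append(dict(self.config))

    def is_healthy(self):
        return self.config.get("registry") == "clean"

    def restore_latest_working(self):
        # Walk backward through snapshots until one passes the check,
        # so successful changes survive with minimal backtracking.
        while self.snapshots:
            self.config = dict(self.snapshots.pop())
            if self.is_healthy():
                return True
        return False

state = SystemState()
state.take_snapshot()                 # known-good checkpoint
state.config["registry"] = "corrupt"  # harmful "evolution" of the load
assert not state.is_healthy()
state.restore_latest_working()        # roll back to the checkpoint
assert state.is_healthy()
```

The design choice mirrored here is the one the post describes: snapshots are partial (only "critical bits"), and restoration backtracks only as far as necessary.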

In systems where cost is less prohibitive than in inexpensive personal computers (such as servers), both automatic and manually triggered backups of the software state are made to external, removable media. In the most critical applications, multiple backups are made and stored in physically separated locations, so that a catastrophe in one location, like a fire, won't destroy all the copies.

Now, since in my experience many (if not most or all) designs we humans invent to solve engineering challenges turn out to have analogs in the design of life, I propose that a plausible answer to how life deals with the devastating effect of entropy is along the same lines used in computer systems: periodic backups are made, such that what random evolution hath wrought on the software load (DNA) can be undone back to a previously known working state. Evolution starts over, but doesn't start over from scratch. The details and triggers employed can only be imagined at this point in time, but it doesn't take much imagination for a computer design engineer. Experiments to tease out the methods and triggers, if they exist, seem like a reasonable line of inquiry.
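To make the proposal concrete, here is a toy simulation of my own (the error probability and the restore interval are arbitrary invented numbers, not measured biology): a lineage that accumulates random copying errors degrades without bound, while one that is periodically rolled back to a stored reference copy stays near baseline.

```python
import random

def accumulated_errors(generations, p_error=0.3, restore_every=None, seed=42):
    """Count copying errors in a lineage; optionally roll back to a
    pristine stored backup every `restore_every` generations."""
    rng = random.Random(seed)
    errors = 0
    for gen in range(1, generations + 1):
        if rng.random() < p_error:
            errors += 1  # this generation's copy picked up a new error
        if restore_every and gen % restore_every == 0:
            errors = 0   # restore the known-good reference copy
    return errors

no_restore = accumulated_errors(10_000)
with_restore = accumulated_errors(10_000, restore_every=100)
assert with_restore < 100 < no_restore
```

In this model, errors grow roughly linearly with generations when there is no restore, while the restored lineage never carries more errors than it can accumulate in one restore interval, which is the qualitative point of the backup hypothesis.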

It's difficult to imagine how mutation and selection could invent a disaster recovery method such as this, and that likely explains why there's little if any research aimed at figuring out why most, but not all, cell lines eventually go extinct. This is where the ID paradigm becomes very valuable. Rather than limit the possibilities in the design of life to what is reasonable for a reactive process like mutation and selection to invent, we extend our thinking to what is reasonable for a proactive process like intelligent design to invent.

Comments
Is Prof. Sanford still at Cornell, or has he been "expelled" by the Darwinian storm troopers? I would guess that it's not that easy to ostracize someone with such an impeccable and brilliant history of achievements.

Mapou
February 26, 2008, 09:57 AM PDT
bFast

Before dismissing Prof. John C. Sanford, consider that his life's work has dealt with mutations and genetic change. (He invented the "gene gun".) He knows more about mutations and genetic change than most biochemists discussing these issues. May I strongly recommend that you review his scientific publications on the issue, NOT tar him with his popular book. See especially his recent publications. (But don't publicize them until Sanford does.) I also strongly urge you to read up on recent developments in junk DNA. I expect Sanford et al. to be discovering treasure that others have dismissed as "junk", and to have a few patents filed before his discoveries are publicized. I expect you will find it instructive to review Sanford's position.

DLH
February 26, 2008, 09:30 AM PDT
DLH, that said, Sanford takes a huge bite when he dejunkifies all DNA without scientific proof. As such, his argument is ignored. If Sanford used the modern understanding of what constitutes active DNA, even if that understanding is in error, he would make a far more convincing case. The case is still there to be easily made; it does not need the level of science-rejection that Sanford chooses.

bFast
February 26, 2008, 08:35 AM PDT
bFast:

"1 - I contend that, because almost all mutations in the active DNA are deleterious, if there is any more than one mutation in active DNA, any beneficial mutation will be counterbalanced by multiple deleterious mutations. I believe that the fastest pace of mutation in active DNA that natural selection can filter is 1 per generation."

Affirming your points: Sanford, in Genetic Entropy, explains why harmful mutations accumulate faster than "natural selection" can filter them, and why they swamp "beneficial" mutations. (Technically, if several harmful mutations occurred on the same DNA strand, "selection" would delete all of them if it could delete any one.)

DLH
February 26, 2008, 07:38 AM PDT
Daniel

Shortening telomere length is the proposed evolved mechanism for avoiding cancer that I mentioned in connection with shorter Hayflick limits. The Hayflick Limit is nothing more or less than the observation that some cell lines aren't immortal. Digging a little deeper, you'll find that the mortal cell lines all have large genomes, and further that their mortality is explained by genetic entropy. This leads us back to the original question of how some rare large genomes have managed to avoid genetic entropy for geologic spans of time. Something appears to be working to wipe the genome clean of the nearly neutral deleterious mutations that natural selection can't effectively deal with in large genomes. Keep in mind that the average survival time of species in the fossil record is about 10 million years. That's millions of generations. The HeLa cell line (a human cancer cell line) has been alive since 1951. HeLa cells divide roughly once per day, so in the 57 years since then they are roughly 21,000 generations removed from the ancestral cells. That doesn't even approach the number of generations at which the effects of genetic entropy become overwhelming. As far as I know, HeLa cells are the longest-observed large-genomed cells. Since they're grown on culture plates as single cells and have a modified human genome, I think we can reasonably presume that most of their genome, which normally produces whole humans, is unused. In that regard they're probably more like a small-genomed organism that just happens to carry a lot of truly junk DNA. It won't matter to the cell if the junk DNA decays by genetic entropy, so they now enjoy the same benefits of large population size and a small active genome in defeating, or at least drastically slowing down, genetic entropy. Consider the cells in your body. Ostensibly every one of them is the most recent incarnation of an unbroken cell line going back at least to some Cambrian chordate individual 500 million years ago - a hundred million generations.

Most of the lines starting from that ancestor are extinct. Was it just luck that yours survived and you're here today to wonder about it? Maybe. The thing of it is that "luck" isn't a very satisfying explanation. You've heard of "God of the Gaps," I'm sure. Is "God of the Gaps" any more or less scientific than "Luck of the Gaps"? I don't believe there's any difference at all. Luck, or chance, or randomness, or whatever you care to call it, is just a replacement for "God" to bridge gaps in knowledge, and is just as much a "science stopper". Whether you choose to credit things to "God dunnit" or "Chance dunnit", both are equally vacuous in their ability to make predictions. To coin a phrase, chance evolution is creationism in a cheap tuxedo.

DaveScot
February 25, 2008, 03:30 AM PDT
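A quick back-of-the-envelope check of the generation arithmetic in DaveScot's comment above (my own calculation, not part of the original comment; it takes the "one division per day" and "hundred million generations" figures at face value):

```python
# HeLa cells in culture from 1951 to the date of the comment (2008),
# dividing roughly once per day, versus the ~10^8 generations invoked
# for the Cambrian-to-present span.
hela_years = 2008 - 1951             # 57 years
hela_generations = hela_years * 365  # ≈ 20,805 generations
cambrian_generations = 100_000_000   # "a hundred million generations"

ratio = cambrian_generations // hela_generations  # ≈ 4,806
```

So the longest directly observed large-genome lineage covers well under a thousandth of the span the comment is asking about, which is the comparison being made.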
Somatic cells in our bodies don't replicate by sexual means, so slightly deleterious mutations pile up as they replicate asexually. It causes us to eventually die of old age. This explains the so-called "Hayflick Limit". If nothing else kills a cell line with a big genome, an empirically determined number of generations (which varies by species), the "Hayflick Limit", will cause that line to go extinct.

Good old Lenny Hayflick! Believe it or not, I'm so old that I can remember the excitement when his 1965 Exp Cell Res paper came out. Current thinking is that telomere shortening sets the limit.

Daniel King
February 25, 2008, 02:31 AM PDT
Daniel

"Statistical mechanics. It's not clear to me how this relates to natural selection. Does subatomic uncertainty set bounds on our ability to continue productive work on atoms, molecules, macromolecules, and so on up the ladder of complexity?"

That's quantum mechanics you're referring to. Statistical mechanics applies to large populations of particles, where the unpredictability of individual particles at the quantum scale disappears. Bulk collections of particles behave predictably.

"You said that extinctions need to be explained, and you think that 'genetic entropy' is the explanation. Maybe in some cases, but the prevailing view among scientists is that a race of organisms can't survive to reproduce if it's unable to cope with environmental changes."

That's still explained by entropy. As slightly deleterious mutations accumulate, they reduce the organism's ability to adapt to environmental change by the normal method of recombination bringing recessive alleles into dominance. Entropy weakens the gene pool, and it's a diverse gene pool that makes organisms robust in the face of environmental stress. This is why inbreeding is generally bad and cross-breeding is generally good (hybrid vigor).

"Note that those iron-eating bacteria and the photosynthetic machinery mentioned above appear to have survived and prospered for eons, presumably because neither encountered an insurmountable environmental obstacle in its history on the planet."

Excellent point. I addressed it here before, as it occurred to me as well. The advantages enjoyed by bacteria are 1) very small genomes, which far more often than not are perfect copies of the parent, and 2) very large populations. Moreover, since they don't have sexual recombination in the equation, any deleterious mutation is instantly fixed in all the cell's descendants, and beneficial mutations are instantly fixed as well. This allows selection to act on individual mutations, and to act rapidly.

P. falciparum (a eukaryote with a 23 Mb genome) didn't go extinct in billions of trillions of replications over the last 50 years because 49 out of 50 replications are perfect genetic clones of the parent. But it replicates in such large numbers that there are always enough trial balloons for selection to choose from in case there's a need to adapt or die. In organisms with large genomes and small populations, like birds and mammals, there is little chance of any germ cell being free of random DNA copy errors. In other words, perfect DNA replication is the rule for bacteria and small-genomed single-celled eukaryotes, while perfect copies are the exception for larger-genomed organisms. In organisms with large genomes where asexual reproduction is possible and practiced (such as cloning of a commercially valuable variant of a food crop species), there's something called "senescence" that becomes a problem after a number of generations. That's because the clones have big genomes and each generation isn't quite a perfect copy of the parent. Slightly deleterious mutations accumulate and the variant strain loses its vigor, most often manifesting as slow growth and susceptibility to disease. At that point the clonal variant must be abandoned, and a new variant with the valuable commercial properties is sought by artificial selection from a sexually reproduced population. Expensive measures such as cryogenic preservation of an early generation of the clonal strain are employed to get around the senescence problem, but even cryogenic preservation doesn't last forever, so at best it's a temporary solution. The same senescence mechanism in clonal copies is what eventually kills us too, if nothing else gets us first. Somatic cells in our bodies don't replicate by sexual means, so slightly deleterious mutations pile up as they replicate asexually. It causes us to eventually die of old age. This explains the so-called "Hayflick Limit".

If nothing else kills a cell line with a big genome, an empirically determined number of generations (which varies by species), the "Hayflick Limit", will cause that line to go extinct. Some large-genomed species have either no Hayflick limit or a very large one. Long-lived conifers are a good example, the redwood being a well-known one; some of these live to be 5,000 years old if they are lucky enough not to succumb to disease or other causes of premature death. Shorter Hayflick limits (52 clonal generations in humans) are hypothesized in organisms with faster metabolisms as an evolved means of avoiding cancer. Obviously it's not a perfect solution.

DaveScot
February 24, 2008, 02:52 PM PDT
DaveScot: An eloquent and challenging two-part essay. I doubt that I can say anything in response that you haven't already heard many times, but I feel that it's incumbent on me to respond since you addressed me personally. I wish I were up to speed on some of the topics you covered, but I'm not, so I can only remark on a few points.

1. As you noted, the apple example of nutrition that I gave does not apply directly to other metabolic transformations, as in iron-eating bacteria or photosynthesis-powered plant growth. You said, "To get highly complex order from simple energy and chemical gradients there needs to be something else involved." Certainly. The biologist would answer that the "something else" is the metabolic pathways that each organism possesses - ultimately encoded in each organism's genome.

2. Regarding the origin of life, that is certainly a daunting problem. But it's early days in our understanding of life, so it's a reasonable bet that progress will be made in good time. Much too early to give up and say it's not a soluble problem.

3. Statistical mechanics. It's not clear to me how this relates to natural selection. Does subatomic uncertainty set bounds on our ability to continue productive work on atoms, molecules, macromolecules, and so on up the ladder of complexity?

4. You said that extinctions need to be explained, and you think that "genetic entropy" is the explanation. Maybe in some cases, but the prevailing view among scientists is that a race of organisms can't survive to reproduce if it's unable to cope with environmental changes. Note that those iron-eating bacteria and the photosynthetic machinery mentioned above appear to have survived and prospered for eons, presumably because neither encountered an insurmountable environmental obstacle in its history on the planet.

5. "A paradigm shift is needed in the way life is viewed." All fertile paradigm shifts are welcome! Our ability to solve problems depends on how we frame the questions.

Daniel King
February 24, 2008, 02:13 PM PDT
DaveScot:
Natural selection operates on the entire watch. In the rare case where a particle lands somewhere good in order to keep that particle it must also keep the other particles that fell along with it.
Two points of enhancement:

1 - I contend that, because almost all mutations in the active DNA are deleterious, if there is any more than one mutation in active DNA, any beneficial mutation will be counterbalanced by multiple deleterious mutations. I believe that the fastest pace of mutation in active DNA that natural selection can filter is 1 per generation. The number of deleterious mutations experienced per person is quite high, as pointed out by a recent article on LiveScience about the genetic differences between identical twins. I suspect, in honesty, that this article would suggest that the mutation rate is much nearer to Sanford's than it is to the current scientific expectation.

2 - Not only does the "beneficial mutation" need to overcome any other mutations in the system, it must stand out even though the organism that received it received an "average" allele mix. Each allele is better or worse than the other alleles; as such, they add noise to the system of detecting the beneficial mutation. I do not believe that in a 20,000-gene mix a "slight improvement" from a single point mutation can in any way be detected. This is the signal-to-noise problem.

bFast
February 24, 2008, 09:02 AM PDT
Daniel (cont'd)

Getting back to the focus of the article. From an engineering perspective, the molecular machinery in a living cell is highly ordered and finely tuned. Many small machines and information-processing systems, all interdependently entwined, cooperate to accomplish a task - metabolism and replication. Random mutation in this system is equivalent to throwing gold dust into a gold watch. There's a tiny chance that a dust particle will land somewhere it improves the mechanism, a good chance that most particles land somewhere they make very little difference, and a good chance they'll land somewhere that makes the watch inoperable. Natural selection operates on the entire watch. In the rare case where a particle lands somewhere good, in order to keep that particle it must also keep the other particles that fell along with it. It's the accumulation of the other particles over time that makes things go extinct. The genome eventually becomes overwhelmed by the accumulation of slightly detrimental changes. This is more or less Haldane's Dilemma. An excellent book on the topic of genetic entropy is Cornell geneticist John Sanford's Genetic Entropy & The Mystery of the Genome. I personally think he cherry-picks the literature to establish a background random mutation rate orders of magnitude greater than commonly accepted rates, in order to justify such a high rate of entropic decay that life couldn't have been around more than 6,000 years. That aside, given more commonly accepted background mutation rates, his thesis remains intact. Instead of 6,000 years with a high mutation rate, the genetic entropy problem still remains given a lower mutation rate over millions of years. Extinctions are quite handily explained by this. What isn't explained is how extinction is avoided by a lucky few.

If we accept at face value that organic evolution occurred over hundreds of millions or billions of years, and take at face value that descent with modification from one or a few common ancestors is true, then it appears there must be some mechanism at work that can sweep genomes clear of accumulated detrimental modifications. A problem of exactly the same nature happens in computer systems, and mechanisms to sweep detrimental changes clear have been implemented to address it. Unless we view the molecular machinery of life as an engineered system and look for mechanisms in it that seem implausibly remote for chance & necessity to have invented, we're crippled. A paradigm shift is needed in the way life is viewed. ID is that paradigm shift. Mike Gene, in the book The Design Matrix: A Consilience of Clues, delves into why we need to stop pretending that cellular machinery is the unquestionable result of reactive chance & necessity and start looking at it as the result of proactive design. I'm less than halfway through the book and will write a review of it when I'm finished, but so far it almost exactly matches my thoughts on the matter.

DaveScot
February 24, 2008, 07:33 AM PDT
Daniel, re: Maxwell

Yes, it's a closed system. The caveat, as I recall, is that the demon needs intelligence to know when to open and close the door, and that intelligence needs energy to operate, which must come from outside the system. The earth as an open system doesn't really seem relevant in any case. There's enough energy in the earth to power life processes. One example is demonstrated by extremophiles that live deep underground and get their energy from oxidizing iron. The apple/fecal matter example is a good one that I've used myself to illustrate order decreasing in one place (the apple) while increasing elsewhere (new tissue in the apple eater), but does that apply to iron-eating bacteria? If we let the sun back in, then we can look at plants, which also take simple matter/energy gradients and construct much more highly ordered systems from them. Are the gradients of H2O, CO2, and minerals in the plant's environment equivalent to the order in the plant's tissue? I don't think a good case can be made for that. Just as with Maxwell's demon, there has to be some kind of intelligent mechanism directing how the energy is applied. Maxwell's demon comes up short in this case, as the order it generates isn't complex; it just creates a simple energy gradient. To get highly complex order from simple energy and chemical gradients, there needs to be something else involved. Sewell's paper Can Anything Happen in an Open System? discusses this in greater detail. I think it clearly shows that law and chance aren't sufficient to concentrate the order represented in simple gradients into highly complex ordered forms. In other words, you can't just willy-nilly exchange thermal order for any other kind of order, and this is exactly what is proposed by the open-system rebuttal to entropic objections to life forming by law and chance alone. The order represented by the information coded into the DNA molecule is not equivalent to the thermal order in the earth/sun system.

Adding energy to a closed system is not, in and of itself, sufficient. Energy is a requirement, but it's not the only requirement; it must be directed. We can say that natural selection is equivalent to Maxwell's demon in that it opens to allow beneficial change through the boundary and closes to prevent detrimental change from crossing, and in that way increases order. Indeed, this is exactly what MET is based upon. Natural selection provides the direction required to translate one kind of order into another. This raises two further questions: 1) Natural selection requires a mechanism to operate, and that mechanism is itself very highly ordered. What directed its formation? In other words, this leads directly to the origin-of-life question. 2) Are there no bounds on natural selection? Trial and error, which is essentially what chance & necessity is, is bounded by statistical mechanics. While anything is theoretically possible with chance & necessity, merely being possible isn't enough. There is a continuum of probability that needs to be addressed, ranging from physically possible, to physically plausible, to physically probable, to physically certain. Historic reconstructions of organic evolution ignore these: if something is possible, even if the odds appear to be impossibly remote, that's taken as good enough to say it is plausible, probable, or even certain. The public is being gulled into believing the merely possible is a certainty. That's what happens when a scientific theory becomes scientific dogma. Chance and necessity is a dogmatic truth, so probabilities are ignored. No matter how much the probabilities are stacked against it, chance and time and luck combined to overcome those odds. They must have overcome those odds, because chance & necessity as the sole mechanism underlying organic evolution is a dogmatic truth. We ID proponents object to the dogmatism underlying this and believe that upon close inspection the emperor is wearing no clothes.

Just being physically possible, while ignoring what statistical mechanics tells us about the bounds of law and chance in any given circumstance, is not good science. It's ideological science.

DaveScot
February 24, 2008, 07:02 AM PDT
John Sanford, in Genetic Entropy & The Mystery of the Genome, cites numerous examples of evolutionist population geneticists asking the question, "Why are we not already extinct?" See especially the appendix.

DLH
February 23, 2008, 07:00 PM PDT
Maxwell’s Demon is the classic thought experiment showing that no contraption can get a free lunch when it comes to energy being employed to defeat the increase of entropy. Are you familiar with it?
If I remember correctly, it deals with a closed system. However, life on this planet does not exist in a closed system. As I was taught in Biology 101, living things contend, for a while, against their internal, local entropy by inflicting increased entropy on their environment. As when one eats an apple, which is a highly organized set of chemicals, extracts what is needed (including energy) through the processes of digestion, and excretes the much less ordered remainder in the form of fecal material.
What is man, when you come to think upon him, but a minutely set, ingenious machine for turning with infinite artfulness, the red wine of Shiraz into urine?
Daniel King
February 23, 2008, 02:32 PM PDT
Daniel: "not even apples and oranges"

I agree. There is no demonstrated example of highly ordered macroscopic systems emerging by law and chance. It's not my fault that the only demonstrated way for highly ordered systems such as computers and cellular machinery to beat the statistical improbability is by intelligent agents pushing matter and energy around to bring abstract thought into physical reality. Maxwell's Demon is the classic thought experiment showing that no contraption can get a free lunch when it comes to energy being employed to defeat the increase of entropy. Are you familiar with it?

DaveScot
February 23, 2008, 11:12 AM PDT
CDs have a limited lifespan even when stored properly. So I think a better analogy would be to take a disc and duplicate it, then take the copy and duplicate that, and so on. Modern error correction keeps the error rate to around 1 in 10^9 to 1 in 10^12, I believe. Check the information content after so many duplications.

Patrick
February 23, 2008, 11:08 AM PDT
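A rough expected-value calculation for the serial-duplication experiment Patrick proposes (my own sketch, not part of his comment; the 700 MB capacity is an assumption, and his quoted 1-in-10^9 to 1-in-10^12 residual error rates are taken at face value):

```python
# Expected number of uncorrected bit errors after N serial duplications,
# assuming errors accumulate linearly: each copy inherits its parent's
# errors and, in expectation, adds its own.
BITS_PER_DISC = 700 * 10**6 * 8   # ~700 MB disc ≈ 5.6e9 bits (decimal MB)

def expected_errors(per_bit_error_rate, n_copies):
    return BITS_PER_DISC * per_bit_error_rate * n_copies

best = expected_errors(1e-12, 1000)   # ≈ 5.6 expected errors
worst = expected_errors(1e-9, 1000)   # ≈ 5,600 expected errors
```

Even at the optimistic end of the quoted range, errors accumulate steadily across generations of copies; the difference between the two rates is only how many duplications it takes, which is the point of the analogy.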
"Doesn't that last bit kind of assume the consequent (that is, that 'biological systems' are 'pre-planned intelligence')?"

Well, yes it does, and that's my own viewpoint, of course. I was expecting the reply that biological systems themselves decreasing entropy over time are in fact what RM+NS predicts. It is interesting, though, to consider how biological agents decrease entropy, or at least do not increase it. They reproduce themselves, they build structures usable by other creatures, and they may produce waste products or leave their own remains that may be useful for creatures that follow. One thing none of them does, though, is consciously modify its own programming. The only way that can happen is through random mutation, and intelligence is NOT involved there.

SCheesman
February 23, 2008, 11:03 AM PDT
Further to DaveScot's "Try this experiment. Put a CD-ROM disk out in the sun. Check it every day to see if the information on it becomes more organized, less organized, or stays the same.":

Applying Kenneth Miller's "evolutionary design" principle, put the CD in a rock-polishing jar and "polish". Then see how the information content improves when energy is added to the system. By contrast, intelligently designed CD polishing provides surfaces where the "bumps" rising above the polished surface are < 1 nm. Another method of "evolutionary design" is to place the CD in an oven set to about 232 C (450 F) and add heat for several hours, then test the increase in evolutionary "information" content. By contrast, "intelligently designed" controlled crystallization can provide surfaces with variations of < 1 atom in height.

DLH
February 23, 2008, 10:50 AM PDT
"The only way to decrease entropy in an open system is to add energy in an intelligent manner. This can be done by an 'active' intelligence, or by pre-programmed intelligence (i.e. what you find in biological systems)."

Amen to that.

PannenbergOmega
February 23, 2008, 09:58 AM PDT
LarryNormanFan: Check your facts. I'm sure you are toeing the current Darwinian line, though. So you are probably right.

PannenbergOmega
February 23, 2008, 09:43 AM PDT
SCheesman,
This can be done by an “active” intelligence, or by pre-programmed intelligence (i.e. what you find in biological systems).
Doesn't that last bit kind of assume the consequent (that is, that "biological systems" are "pre-planned intelligence")? Now, there's a (theological) sense in which I believe that, but in terms of science...?

larrynormanfan
February 23, 2008, 09:17 AM PDT
"The 'canned' answer is the Earth is not a closed system. Meaning entropy need not apply."

"Take that answer boldly out of its can. You will not be shot."

OK, try this one. Add salt (maybe from that can you're opening) to a pot of lukewarm water. How fast does entropy increase? Now add salt to a pot of water being boiled on a stove. Yes, it's different: entropy increases much faster. Adding energy to a system does one thing - it increases the rate at which entropy increases. The only way to decrease entropy in an open system is to add energy in an intelligent manner. This can be done by an "active" intelligence, or by pre-programmed intelligence (i.e. what you find in biological systems).

SCheesman
February 23, 2008, 08:15 AM PDT
...or disable its ability to reproduce.

mike1962
February 23, 2008, 07:24 AM PDT
"Rather my big question is how a rare few of them have managed to persist for hundreds of millions or billions of years."

They would persist if they were designed such that mutation of any essential element would cause the mutant to die.

mike1962
February 23, 2008, 07:24 AM PDT
Not even apples and oranges, DaveScot.

Daniel King
February 23, 2008, 05:25 AM PDT
Daniel,

Try this experiment. Put a CD-ROM disk out in the sun. Check it every day to see if the information on it becomes more organized, less organized, or stays the same. Keep it up until something changes, and let me know which way it eventually changes. Since I'm an engineer who understands entropy, I could tell you what's going to happen, but I think you need some personal experience in "open systems" with regard to entropy.

DaveScot
February 23, 2008, 12:27 AM PDT
Joseph, thank you for sharing this with me. http://www.worldnetdaily.com/index.php?fa=PAGE.view&pageId=56626

Dembskian
February 22, 2008, 04:20 PM PDT
Take that answer boldly out of its can. You will not be shot.

Daniel King
February 22, 2008, 02:10 PM PDT
So one of my big questions isn’t why most cell lines sooner or later go extinct as that’s easily explained by entropy.
The "canned" answer is that the Earth is not a closed system, meaning entropy need not apply. (Don't shoot the messenger!) ;)

Joseph
February 22, 2008, 09:57 AM PDT
...my big question is how a rare few of them have managed to persist for hundreds of millions or billions of years.
Some would say that the obvious answer is that they have not persisted for "hundreds of millions or billions of years" but rather orders of magnitude less. From this point of view, it seems that an interesting research program would be to calculate the rate of degradation of the human genome and attempt, by extrapolation, to determine at what point in the past it would have been perfect.

sagebrush gardener
February 22, 2008, 08:30 AM PDT
Yeah, Dave, that is an excellent question. I have argued on other threads that the so-called "deep time" necessary for TTOE and abiogenesis poses as much a problem as a solution. How likely is it that in "deep time" some asteroid, or mutated evolved killer virus, or global warming or global cooling wouldn't have obliterated life? Further, according to the Wildlife Conservation Society, 50K species of life go extinct every year. Wouldn't we pretty soon need some new ones, or else suffer ultimate extinction and annihilation? Also, I don't understand why TTOE proponents are so concerned about global warming; wouldn't this pressure generate some new opportunity for mutating species? The other angle to this is that for every positive mutation (lions now run half again as fast) a corresponding negative impact is felt by the zebras as they are driven out of existence and toward extinction. And when the zebras die out, this would have an adverse effect on the lions. Either way, mutations positive or negative do not represent a net gain for an ecosystem, because of the interdependence of the system. Those who are critical of human relations with nature seem to realize this.

joshuabgood
February 22, 2008, 08:24 AM PDT