
From the files: Why intelligent design is going to win, revisited


Douglas Kern at Tech Central Station warned in 2005 that intelligent design is going to win.

And why was that?

He starts with the claim that ID types are likely to be more fertile than others.

I will not hash that out here except to say this: If it means YOU, you might want to include a budget item for receiving blankets, gripe water, and soothers – and if you do not know what those terms mean, ask your nearest and dearest … 

Update note: Your nearest and dearest may even have some amazing news for you that will change your, um, “expectations.”  Like remember that night when you and she got along so well?  Okay, well, life goes on. No, really, it does, and this is how it does.

He then argues that “the pro-Darwin crowd is acting like a bunch of losers”:

“Ewww…intelligent design people! They’re just buck-toothed Bible-pushing nincompoops with community-college degrees who’re trying to sell a gussied-up creationism to a cretinous public! No need to address their concerns or respond to their arguments. They are Not Science. They are poopy-heads.”

There. I just saved you the trouble of reading 90% of the responses to the ID position.

Well, that certainly hasn’t changed! In fact, it was never any different. The Darwinists are always willing to believe any nonsense that underwrites materialism. And they always find supporters too.

He follows up with Darwinism’s critical problem:

ID has already made its peace with natural selection and the irrefutable aspects of Darwinism. By contrast, Darwinism cannot accept even the slightest possibility that it has failed to explain any significant dimension of evolution. It must dogmatically insist that it will resolve all of its ambiguities and shortcomings – even the ones that have lingered since the beginning of Darwinism.

Interesting. Strict intelligent design theory has never had – so far as I can determine – a problem in principle with natural selection (NS) as a conservative force that routinely eliminates non-functional life forms. Anyone can see that NS must function that way; otherwise, the planet would be overloaded with kludges.

The PROBLEM has always been with the idea that natural selection functions as a mechanism for creating information, as opposed to editing information. ID theorists have not been able to find any evidence that natural selection creates information at anything like the levels that Darwinists claim, and there is much evidence against it.

Which is, like, curtains, for Darwin’s theory.

Kern also thinks that ID will win because it will attract the best minds, who are attracted to information theory. Could that be why the Darwoids are stepping up the persecution of smart guys who know that Darwinism is the Enron of biology?

Lastly, Kern thinks that the human mind tends to find design whether it exists or not. This is a somewhat cynical view, as it raises the question of WHY the human mind finds design. For example, if I think that four and four make eight, did my selfish gene robot prompt that idea in the pile of mush in my head in order to help spread my selfish genes? Or … is Darwinism simply failing as an explanation of the history of life?

Comments
GAW: You have put your finger on a major facet of the core of the debate:
isn’t it the case that lower-probability objects can also exhibit CSI? . . . In my example above, a genetic mutation might increase total information (and could increase improbability) but decrease specification. So the measure of CSI can’t be simply equated to improbability above a certain measure.
Several points jump out:

1] First, let's unpack that abbreviation, CSI: complex, specified information. In short, there is not at all any "equat[ing] to improbability above a certain measure." That is why we talk about functional specification [FSCI], hard-to-find-by-chance islands of functionality in a large configuration space [747 assembly by tornado . . . ], Kolmogorov compressibility, macroscopically describable situations, etc.

2] In your example, genetic information in the sense of complexity [how big the config space is, across all possible combinations of values of elements] can indeed increase without functionality [specification] increasing: in short, noise is very -- overwhelmingly -- likely to introduce corruption of the FSCI. Thus, functionality falls, and beyond a certain point life functions fail -- e.g. radiation damage and cancer. Or, beyond a certain [too often, frighteningly small] limit, outright collapse of life function.

3] It is worth pointing out the scale of the DNA molecule as a storage medium: 500,000 - 3,000,000,000 digital storage units, each capable of 2 bits of storage capacity -- what Shannon information more or less measures. At the lower end, that corresponds to a config space of 4^500k ~ 9.9 * 10^301,029 cells.

4] A storage capacity of just 500 bits [250 4-state elements] corresponds to ~ 10^150 cells, and 1,000 bits moves that to ~ 10^301. As the Dembski UPB type calculation shows, anything beyond this sort of range is so vanishingly improbable to be found by random-walk based searches [even with functional filtering] that it is not credible that, on the gamut of our observed universe, such a chance + necessity only mechanism is likely to access such islands or archipelagos of functionality.

5] So, WD chose this sort of range as the threshold for identifying where CSI is present. Now, obviously, designers can make things simpler than that, so the filter will "miss" such cases. But what is more material is that in no observed case where the filter rules "design," and we independently know the causal story, has it been wrong.

6] For instance, look at the text of this post, which is beyond that limit and which the explanatory filter correctly infers as "designed." In short, on the empirical evidence, the filter is reliable in the cases where it rules: designed. [And in fact the use of rejection regions that are imperfect is a well-known, frequently used and generally accepted standard statistical praxis, all the complaints that Bayesian inference is "superior" notwithstanding.]

7] Now, apply this to certain key cases: the origin of life, and the origin of body-plan innovation-level genetic information [e.g. the Cambrian life revolution]. In both cases, we are dealing with config spaces that are orders of magnitude beyond the relevant limit. The EF rules: designed.

8] As noted, we know that filter to be reliable when it rules positively. So, the issue is not the cases it may miss but the cases it catches, and the implications that flow therefrom -- implications that are fatal for the reigning evolutionary materialist paradigm and worldview. THAT is the challenge. GEM of TKI

kairosfocus, December 2, 2007, 10:23 PM PDT
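
For anyone who wants to check the orders of magnitude quoted in the comment above, here is a minimal back-of-the-envelope sketch (Python; the figures are the commenter's, and the log-scale check is mine, not part of any published ID calculation):

import math

def log10_configs(elements, states_per_element):
    # log10 of the number of possible configurations of `elements` positions,
    # each able to take `states_per_element` values
    return elements * math.log10(states_per_element)

print(log10_configs(500_000, 4))  # ~301,030: so 4^500,000 is about 9.9 * 10^301,029
print(log10_configs(250, 4))      # ~150.5:   500 bits of capacity, roughly 10^150 cells
print(log10_configs(500, 4))      # ~301:     1,000 bits of capacity, roughly 10^301 cells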
One of the best discussions of CSI I have seen is in "Intelligent Design: The Scientific Alternative to Evolution" (http://www.intelligentdesignnetwork.org/NCBQ3_3HarrisCalvert.pdf), which is by John Calvert and William Harris, who led the ID discussion of the Kansas science curriculum in 2005. It is not just on CSI, though they spend a few pages on it.

jerry, December 2, 2007, 07:10 PM PDT
Also, CSI is obviously not the same thing as ID. What the universal probability bound does is signal design. CSI in relation to a design inference, however, would need to fall outside the UPB in theory. The term CSI is a way of talking about SC in the form of its informational exponent. This is the transfer from SC to information theory, which looks at the bits of information present and required (conservatively) to account for the design inherent in given phenomena. That is why Dembski's second book, NFL, was subtitled "Why Specified Complexity Cannot Be Purchased without Intelligence," intelligence being understood here through information theory (one of the main topics of the book).

Frost122585, December 2, 2007, 05:11 PM PDT
Frost, isn’t it the case that lower-probability objects can also exhibit CSI? The UPB is a model for absolute improbability, and calculating hyper-low odds is extremely impractical in any event. Besides, In my example above, a genetic mutation might increase total information (and could increase improbability) but decrease specification. So the measure of CSI can’t be simply equated to improbability above a certain measure.
Excellent point. I would then say that the probability of that mutation would have to be defined. I don't think lower-probability objects can display definite CSI, because we don't know what is or isn't designed unless we can assess it as improbable given nature's own resources. This requires a probability test of some kind. Specification needs to be taken into account along with probability. A mutation might increase complexity but at the same time hide the ID in the object. Also, Dembski points out that "randomly picking 250 proteins and having them all fall among those 500 therefore has probability (500/4,289)^250, which has order of magnitude 10^-234 and falls considerably below the universal probability bound of 10^-150." So the UPB is a good test for SC. Now, it is a logical truth that designers can and do design things that are simple, with low or high specification, but ID needs to be safe and secure in its specificity and explicitness in order to be a scientific theory. Dembski uses the UPB to be very conservative.

Frost122585, December 2, 2007, 04:36 PM PDT
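
As a quick check of the Dembski figure quoted in the comment above (a sketch only; the 500-of-4,289 protein numbers are taken from the comment as given, not re-derived from Dembski's text):

import math

# Probability that 250 independent picks all land in a favoured set of
# 500 out of 4,289 possibilities, on a log10 scale.
log10_p = 250 * math.log10(500 / 4289)
print(log10_p)  # about -233.4, i.e. roughly 10^-234, well below the 10^-150 bound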
Getawitness, you stated: "In fact, as I think about it, it may be that an increase of random information in a genome could lead to a decrease of CSI!" BINGO, getawitness: that is the principle of Genetic Entropy! This principle holds for the genomes of all living organisms. Viruses only "seemingly" get by this principle because they decrease far more complexity in the organism they infect than they add themselves. Thus, like rocks rolling off the side of a mountain, Genetic Entropy is obeyed even down to the level of viruses! All mutations that are scrutinized as beneficial to the genome turn out to be detrimental in some fashion. All sorts of mutations have been offered here on UD as positive proof of evolution (nylon bacteria, styrene bacteria, antibiotic-resistant bacteria, heavy-metal bacteria, etc.), but all fail to generate complexity, and all decrease complexity in some fashion from the "parent" type. As well, ENCODE is revealing that the genome is in fact devoid of any "swaths" of junk DNA and is in fact a "complex interwoven network" that appears to be spread throughout the ENTIRE genome! This was only a study of 1% of the genome, so the functionality is sure to be shown to encompass the entire genome in short order. This is all in accordance with the Theistic postulation of "front-loaded" parent species for all sub-speciation events. The fact that all sub-speciation events into new environments occur with a loss in complexity stays in accordance with Genetic Entropy and indicates that once design is implemented the "Designer" no longer tinkers with the genome of the "sub-species." This is all fine and well, for now the ID/Genetic Entropy model can rest its postulations on both the second law and on the law of conservation of information. Is there a formula for figuring out CSI? I don't know. But, like you, I think and hope it should be possible to figure CSI in a rigorous manner, though I really have nothing more than a few hunches to back up that thought. For now, to measure information, I primarily follow the changes in complexity of an organism to find whether it has gained or lost it. And as I stated before, all mutations claimed as proof of evolution decrease complexity.

bornagain77, December 2, 2007, 04:31 PM PDT
Bork,
There is no doubt that ID has a philosophical upper hand- but stating that the future will tend towards it is a stretch.
Are computers becoming more or less intelligent? Are cars losing technological advancement? Even if we reduced the complexity of a car to burn less fuel, it would be for the purpose of dealing with environmental issues of consumption and/or possibly climate change. This would be a reduction in complexity but an increase in specification. What are the odds a car could be formed by a tornado going through a junkyard? Then, how much lower are the odds that a second tornado would come by and make that car more efficient just when the environment requires it to be? Selfishness is purposeful. If I want my kid to live a long life because I WANT him to experience the complexity, beauty and mystery of being alive, I have a selfish motivation here. I reject the philosophical/ethical debate that ego or selfishness is bad. This is what gives us great competition. It is why people want to know more about origins. I could go on and on. The downside to selfishness is that not everyone's goals align perfectly, and this is where the selfishness of enlightened individuals comes in. We want to help the world coexist peacefully and lovingly even though our motives are driven by "our" forces within, forces that ultimately display extreme SC and that I believe require an ID to account for.

Frost122585, December 2, 2007, 04:22 PM PDT
Frost, isn't it the case that lower-probability objects can also exhibit CSI? The UPB is a model for absolute improbability, and calculating hyper-low odds is extremely impractical in any event. Besides, in my example above, a genetic mutation might increase total information (and could increase improbability) but decrease specification. So the measure of CSI can't be simply equated to improbability above a certain measure.

getawitness, December 2, 2007, 04:07 PM PDT
getawitness, good question...
"I have seen him put forward methods he says determine the presence of CSI but not for determining the amount of CSI."
Well, I am not sure about this, but I too have read NFL, and to me the Universal Probability Bound (UPB) tells us when an event or object exists that is more improbable than all of the universe's natural resources can purchase. The UPB, if I remember correctly, is 10^-150. Therefore, anything falling outside of this which also has an arbitrary given pattern, or a specification, would be a candidate for a design inference. I would then assume that the lower the probability, the greater the CSI. So, an object displaying SC of 10^-151 displays greater SC than one that has SC with odds of 10^-150. Remember, the design inference is about SC, which is then inferred into CSI. SC requires CSI, and the lower the odds, the more CSI is required. I don't know too, too much about this topic, so I could be missing something. Nonetheless, this is my current understanding of NFL, SC and CSI. One can infer that if we took every example of SC and combined their probabilities, you would very likely have a representation of a universe that is extremely improbable without the guiding force of an ID. This is an ultimate SC argument for the universe's fine-tuning.

Frost122585, December 2, 2007, 03:58 PM PDT
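
One way to connect the probability thresholds above to the 500-bit figure used elsewhere in this thread is to convert probabilities to bits; the conversion below is the standard Shannon surprisal, -log2(p), offered as a rough illustration rather than a quotation from Dembski:

import math

def bits(probability):
    # Shannon surprisal: the information, in bits, of an event with this probability
    return -math.log2(probability)

print(bits(1e-150))  # ~498.3 bits, roughly the 500-bit threshold
print(bits(1e-151))  # ~501.6 bits: slightly more improbable, slightly more bits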
Thanks poachy. BA77, the amount of information in a piece of DNA is in principle calculable. But I may not have been clear: I want to distinguish between total information and CSI. Is all the information in DNA CSI? I doubt it. For example, a random mutation could "add" information to the genome, but it would not "add" CSI. In fact, as I think about it, it may be that an increase of random information in a genome could lead to a decrease of CSI! (Assuming that CSI is a mathematically coherent and quantifiable concept.) In order to make meaningful comparisons among species -- again, assuming common descent, which I take for granted -- you'd have to figure out how much of the total information in a genome is CSI.

getawitness, December 2, 2007, 03:37 PM PDT
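
To illustrate the "total information" side of the distinction drawn in the comment above, here is a minimal sketch of a per-base Shannon measure for a DNA string. It is a toy model of my own: it treats each base independently and says nothing at all about specification or CSI.

import math
from collections import Counter

def shannon_bits(seq):
    # Total Shannon information of a sequence under a simple per-symbol model:
    # sequence length times the entropy (in bits) of the observed base frequencies.
    counts = Counter(seq)
    n = len(seq)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy

original = "ATGGCCATTGTAATGGGCCGC"
mutated = original + "GATTACA"  # a random insertion raises the total, functional or not
print(shannon_bits(original), shannon_bits(mutated))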
getawitness, I have always struggled with how to operationalize the fundamentals of ID (perhaps because I am a layman and not a biologist). You have made what is, to me, a fantastic suggestion for a research program! It is probably old hat to Patrick and them, but it really made a lightbulb go off for me.

poachy, December 2, 2007, 03:15 PM PDT
A couple of tidbits on information capacity that may help: "DNA molecules contain the highest known packing density of information. This exceedingly brilliant storage method reaches the limit of the physically possible, namely down to the level of single molecules. At this level, the information density is more than 10^21 bits per cm^3." W. Gitt, In The Beginning Was Information, pg. 195. "Man is undoubtedly the most complex information processing system existing on the earth. The total number of bits handled daily in all information processing events occurring in the human body is 3 x 10^24. The number of bits being processed daily in the human body is more than a million times the total amount of human knowledge stored in all the libraries of the world, which is about 10^18 bits." W. Gitt, In The Beginning Was Information, pg. 88.

bornagain77, December 2, 2007, 02:34 PM PDT
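
The Gitt packing-density figure quoted above can be sanity-checked with rough geometry. Assuming a DNA base pair occupies on the order of one cubic nanometre and stores 2 bits (both assumptions are mine, not Gitt's), the density comes out in the same range:

# Rough sanity check of the quoted DNA information density.
bits_per_nm3 = 2                    # ~2 bits per base pair, ~1 nm^3 per base pair (assumed)
nm3_per_cm3 = (1e7) ** 3            # 1 cm = 10^7 nm
print(bits_per_nm3 * nm3_per_cm3)   # 2e+21 bits per cm^3, consistent with "more than 10^21"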
Patrick, I've read NFL carefully and TDI a little less so. I have seen him put forward methods he says determine the presence of CSI, but not methods for determining the amount of CSI. This would be important for figuring out when a design event happened. For example, if you accept (as I do) common descent, then it would be interesting to calculate the amount of CSI in two related species. If one species had more CSI than another, then either the ancestor species had more than either (the front-loading idea, if I understand that correctly) or there was a design event in the interim. The grail of such an investigation would be to show quantitatively that a species had definitively more CSI than its ancestor species. If ID is correct in its central contention that new CSI requires intelligence, then we would know when a design event occurred. One could even write a history of biological design events. But that would all depend on some way to measure the amount of CSI (as well as confirmation of the "CSI is created only by intelligence" premise).

getawitness, December 2, 2007, 02:10 PM PDT
Patrick, this discussion is really interesting, but Dr. Dembski's math is over my head. While we wait for getawitness to opine, do you have any numerical values that would rank, based on your understanding, the specified complexities of the bacterial flagellum and the ribosome? (These are just random examples, and if you have done the calculations on other examples, that would be helpful instead.)

Daniel King, December 2, 2007, 01:59 PM PDT
getawitness, try reading Dembski's books to see how CSI is calculated.

Patrick, December 2, 2007, 10:10 AM PDT
magnan, you stated: "In principle it can be measured, because there is a general relationship between information and entropy." I just wanted to back this assertion up a little bit: "Gain in entropy always means loss of information, and nothing more." - Gilbert N. Lewis. Gilbert Newton Lewis (October 23, 1875 - March 23, 1946) was a famous American physical chemist known for his 1902 Lewis dot structures, his 1916 paper "The Atom and the Molecule," which is the foundation of modern valence bond theory, developed in coordination with Irving Langmuir, and his 1923 textbook Thermodynamics and the Free Energy of Chemical Substances, written in coordination with Merle Randall, one of the founding books in chemical thermodynamics. In 1926, Lewis coined the term "photon" for the smallest unit of radiant energy. As well, the inverse of entropy is how we get the information content of the Big Bang: 10^10^123 (that's a 1 with 10^123 zeros to the right) bits of information, or, as said another way: "The initial entropy of the universe had to be within one part in 10^10^123!" - Roger Penrose. Of course, when measuring Genetic Entropy you have to measure it by loss of "functional" information, i.e., loss of specific within-species variability and within-order/family variability of a specific number of species. Maybe someone can come up with a more specific measure for specified complexity at the molecular level, but Genetic Entropy seems to be the best way right now to measure the specified information content of a genome.

bornagain77, December 1, 2007, 08:20 PM PDT
getawitness, this gets into metaphysical areas. If specified complexity were actually in principle not quantifiable or measurable, then science would have nothing real to say about it. For metaphysical naturalists this means it doesn't exist. But observation of living organisms reveals huge amounts of ordered complexity which can be roughly measured and compared, organism to organism. A man has more than a mouse, a mouse has more than an amoeba, and an amoeba has more than a bacterium. This is common sense aided by intuition. However, specified complexity is a characteristic that can in principle also be scientifically quantified and measured. Leslie Orgel used the term specified complexity (meaning much the same thing) in his book The Origins of Life (1973): "Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity." Charles B. Thaxton discussed the quantification of specified complexity in his book The Mystery of Life's Origin: Reassessing Current Theories (1984). He points out that "...only certain sequences of amino acids (out of all randomly possible) in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being in a specified sequence." So the concept of specified complexity has a seriously considered history in science. In principle it can be measured, because there is a general relationship between information and entropy. Thaxton goes on to say, "If we want to convert a random polymer into an informational molecule, we can determine the increase in information (as defined by Brillouin) by finding the difference between the negatives of the entropy states for the initial random polymer and the informational molecule." Brillouin information is closely related to Shannon information. This shows that specified complexity is not purely subjective but is, as intuition tells us, an objective indicator. For amino acid polymers it is an indicator of how tightly the physical system must be constrained (specified) in order to obtain a functional protein.

magnan, December 1, 2007, 07:33 PM PDT
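
For readers unfamiliar with the Brillouin measure that Thaxton invokes, its usual form (stated here from standard information-theory usage, not quoted from Thaxton) counts how much the space of allowed states shrinks when a constraint such as "must be a functional sequence" is imposed:

I = k \ln\left(\Omega_{\text{random}} / \Omega_{\text{specified}}\right) = S_{\text{random}} - S_{\text{specified}}, \qquad S = k \ln \Omega

Here Omega_random counts all polymer sequences of the given length, Omega_specified counts only the functional ones, and choosing k = 1/ln 2 expresses the result in bits: the more of sequence space the specification rules out, the more Brillouin information the specified molecule carries.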
that Darwinism is the Enron of biology?
O'Leary, I love this analogy. What if all the good little Darwinists who dogmatically oppose ID had to stake jail time, or even just their teaching careers, the next time their theory failed to account for the hard evidence or completely failed to predict an event? How many people do you think would still be selling the great infallible truth of Darwinism if this were the case? The promissory notes of Darwinism are even greater than those used to inflate Enron's value.
Frost122585
December 1, 2007, 06:02 PM PDT
magnan, if CSI can't be measured but only noted in terms of presence or absence, then saying that CSI can't be added without intelligence is meaningless in scientific terms. You may be right that CSI "is a basic intuition." Intuition, however, is not science.

getawitness, December 1, 2007, 04:40 PM PDT
getawitness, of course random genetic variation can create Shannon information, which grows with the number of characters or bits and with the improbability of the string (it is the log of the inverse of its probability). Randomly inserting randomly selected letters and words into Shakespeare's Hamlet will certainly increase the Shannon information content. But it certainly won't increase the complex specified information content as defined by Dembski. In fact, it will degrade it. Spilling a can of alphabet soup on the counter will produce a pattern of letters representing a lot of Shannon information, but not specified complex information.

I will quote Dembski on specified complexity: "Life is both complex and specified. The basic intuition here is straightforward. A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction-set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified." The first component of specified complexity is great improbability given the total sequence space. This is the criterion of complexity, which is another term for the amount of Shannon information. The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex, high Shannon information), it must also conform to an independently given, detachable pattern.

You say, "Though Shannon information is quantitative, I'm not sure CSI is. In his writings, Dr. Dembski seems to focus on the presence of CSI rather than the amount of CSI. Can anybody point to a rigorous method for determining the amount of CSI rather than its presence? Is there a quantitative unit for CSI?" I don't know. But so what - this is a basic intuition that identifies clearly unique characteristics of living organisms and human intelligent products like Shakespearean plays.

You say, "CSI remains pretty much a 'boutique' term restricted to ID theorists. It's not used widely in mathematical or biological literature." So what, especially considering the deep hostility of mainstream science to anything challenging reductionist materialism and its application to biology.

getawitness: "If novel information (in the Shannon sense) can be created in the genome, can it become functional?" Your term "functional" basically refers to the same thing as "specified complexity." For the information to be functional, it must be organized and specifically patterned to the organism, not random. NDE theory contends that the way it gets there is through the culling of selection in populations. Therefore NDE theory assumes the novel functional information (additional specified complexity) came from selection. Selection for "fitness" is the forcing, filtering function that imposes order.

magnan, December 1, 2007, 03:59 PM PDT
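
The alphabet-soup contrast above can be made concrete with a small sketch (Python; a toy model that assumes 26 equally likely letters per position, which is my simplification, not Dembski's):

import math

def surprisal_bits(length, alphabet_size=26):
    # Shannon information of one particular string of `length` symbols,
    # assuming each symbol is drawn uniformly from the alphabet.
    return length * math.log2(alphabet_size)

# A 100-letter spill of alphabet soup and a 100-letter stretch of a sonnet carry the
# same ~470 bits under this model; only the sonnet is also specified, and that
# specification is exactly what this count does not capture.
print(surprisal_bits(100))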
magnan [18],
You apparently don’t understand the ND theory. The origin of variation is genetic mutation and all the other types of genetic change. In neoDarwinian theory all of these sources of genetic variation are (must be) random with respect to fitness. The random changes constitute Shannon information in the sense of mere data like random bit strings, but do not constitute specified, purposeful, organized information as represented by living organisms.
I always enjoy being told what I don't understand. Recall that Denyse located a problem "with the idea that natural selection functions as a mechanism for creating information." No mention of CSI here. As I pointed out, this is not what evolution says, and it's a rookie mistake. You apparently agree with me that mutation can create information in the Shannon sense but take the standard ID position that it doesn't create information in the sense of CSI. There are a few things to note here.

1. So we agree that mutation can create information in the Shannon sense. I'd add that the amount of Shannon information in the genome is in principle measurable. (As is common in science, of course, there are disagreements about the best way to do that.) For example, gene duplication can clearly add Shannon information to the genome.

2. Though Shannon information is quantitative, I'm not sure CSI is. In his writings, Dr. Dembski seems to focus on the presence of CSI rather than the amount of CSI. Can anybody point to a rigorous method for determining the amount of CSI rather than its presence? Is there a quantitative unit for CSI?

3. CSI remains pretty much a "boutique" term restricted to ID theorists. It's not used widely in mathematical or biological literature.

4. If novel information (in the Shannon sense) can be created in the genome, can it become functional? Certainly NDE does not distinguish in some black and white sense between useless and useful information. As I understand the NDE model, a new piece of information might have no initial function but can hang around anyway if it's not deleterious and doesn't get eliminated by chance. It might operate in tandem with other genes to create weakly beneficial functions that strengthen over time. That's the claim, anyway.

getawitness, December 1, 2007, 06:26 AM PDT
Magnan: The problem is that the random variations have to get to the organised complexity and associated functionally specified, complex, often fine-tuned information required for the sort of biotech we see in cells and at the foundation of body plans. That puts the information way beyond 500 - 1,000 bits' worth of info-carrying capacity [a better term for what Shannon was dealing with!], and that in turn means the probabilistic resources of the observed cosmos have long since been surpassed. In short, as I discuss in my always linked, App 1 section 6, the problem is to find the islands and archipelagos of functionality in the config space, but the space is obviously too large for the gamut of our observed cosmos to plausibly access these regions. And that is why we see the resort to a posteriori, ad hoc metaphysical speculation on a quasi-infinite multiverse. (So, if you have moved to the province of philosophy, that opens up the full range of live-option worldviews and the issues of comparative difficulties. Then, too, it is improper to exclude certain live-option views because key parts of the ruling elites in relevant institutions don't like those views. Or else you are resorting to selective hyper-skepticism and censorship, or outright intimidation and worse, to back it up.) GEM of TKI

kairosfocus, December 1, 2007, 03:33 AM PDT
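
The "500 - 1,000 bits" range in the comment above ties back to Dembski's universal probability bound. As usually presented (this is my paraphrase of the standard derivation, not a quotation from the comment), it multiplies the estimated number of elementary particles in the observable universe, the maximum number of state changes per second (the inverse of the Planck time), and a generous bound on the age of the universe in seconds:

10^{80} \times 10^{45} \times 10^{25} = 10^{150} \approx 2^{498}

On this estimate, any specification requiring much more than about 500 bits exceeds the probabilistic resources being appealed to.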
getawitness: "I'm not making a claim for the origin of information. I'm just saying that nobody in the evolutionary camp, as far as I can tell, is locating that origin in natural selection." You apparently don't understand the ND theory. The origin of variation is genetic mutation and all the other types of genetic change. In neoDarwinian theory all of these sources of genetic variation are (must be) random with respect to fitness. The random changes constitute Shannon information in the sense of mere data like random bit strings, but do not constitute specified, purposeful, organized information as represented by living organisms. So if all of the specified information of complex organisms is generated by RV + NS as claimed by Darwinists, the only source of this information must be in the cumulative results of natural selection.

magnan, December 1, 2007, 03:10 AM PDT
I thought evolutionists denied that there was a way to measure information. If it is a measureless quantity, then they can claim that it is trivial. Since we have all sorts of different ways to measure it, clearly we win. And I think Stephen J. Gould did claim that and borrowed his ideas from Weismann. Of course, he was a Marxist and had all sorts of kooky ideas that no one else agreed with at first.

digdug24, November 30, 2007, 06:56 PM PDT
digdug, I'm not making a claim for the origin of information. I'm just saying that nobody in the evolutionary camp, as far as I can tell, is locating that origin in natural selection.

getawitness, November 30, 2007, 06:49 PM PDT
GAW, are you saying that variation = information? Because, uh, I think some people have scientifically defined information very strictly, and I doubt that your definition fits the criterion. How do you measure your information? If you tell me how you measure yours, I'll tell you how I measure mine.

digdug24, November 30, 2007, 06:41 PM PDT
Denyse,
the idea that natural selection functions as a mechanism for creating information
What "Darwinist" has made that claim? Evolution claims that novelty comes from variation, not from selection. I'm surprised that you made such a mistake. Also, there's a comma error in your first sentence.

getawitness, November 30, 2007, 06:33 PM PDT
You'd think that by drinking more, Darwinists would be having more kids due to their somewhat looser morals. The odds would point to more out-of-wedlock kids. I guess the liberal tendency toward abortion is what holds their numbers down. Haha, a little selection weeding out the liberals, gotta like that.

Dog_of_War, November 30, 2007, 05:24 PM PDT
Gripe water is what you give to babies with colic.

Janice, November 30, 2007, 05:13 PM PDT
There is no doubt that ID has a philosophical upper hand, but stating that the future will tend towards it is a stretch. Design often brings up purpose, as things are designed for a purpose. If we believe we are designed, then we might conclude we have a purpose, and it is one not decided by us. Many people don't want to be told what to do; I don't see a future of purpose-driven individuals, but rather one of selfishness. I am with tyke: ID needs research. Interpreting scientific journals isn't going to cut it. A few mavericks in the scientific field aren't either. Unfortunately, ID is seen as anti-science, and research is going to take more than funding; it will need a cultural defense to persuade people to read the papers and even review them.

bork, November 30, 2007, 05:05 PM PDT
Gripe water, could that be spit-ups?

Collin, November 30, 2007, 04:17 PM PDT
