
Voice from the audience: “So when will Darwinism be history?”


Last night I was out giving a talk on the origin and development of the intelligent design controversy when an engineer in the audience asked me, “How long do you give Darwinism before it collapses?”

I found it difficult to answer because I am not psychic; I must rely on near-term, high-impact information when making predictions.

(For example, I figured that ID would become big news in the middle of the first decade of the 21st century because events recorded around 1991-2001 in the United States could have no other outcome – absent, of course, a nuclear holocaust or some other “all-bets-are-off” scenario, but you can’t let unlikely events distract you when you are making predictions based on the flow of normal events.)

In response to him, I pointed out that the Catholic Church is now spreading the news on prayer cards in many languages around the world that “we are not some casual and meaningless product of evolution. Each of us is the result of a thought of God.” That is certain to have impact, but I am uncertain how to evaluate its strength.

So finally, I told him, “Look, I don’t think we are going to get anywhere understanding the origin of life or of species until we understand what information is and how it relates to the other factors in the universe.”

Information in life forms clearly does not arise the way Darwin thought it did. Even species don’t seem to arise the way Darwin thought they did. The recent challenge to demonstrate it on this blog did not turn up much. And that was supposed to be Darwin’s big contribution … sigh …

But oh, it’s been worse. The Washington Post was reduced to flogging the idea that the introduction of Ontario squirrels to the Washington area by an ill-advised naturalist in the early twentieth century was an instance of natural selection at work. Yeah. Read all about it ….

Not quite sure how to answer the guy’s question, I was reminded of someone I had quoted in By Design or by Chance?:

The key to a scientific understanding of design is not theology, but information theory. If design is a part of nature, then the design is embedded in life as information. But many people are not used to thinking in terms of an immaterial quantity like information. As G.C. Williams writes:

“Information doesn’t have mass or charge or length in millimeters. Likewise, matter doesn’t have bytes. You can’t measure so much gold in so many bytes. It doesn’t have redundancy, or fidelity, or any of the other descriptors we apply to information. This dearth of shared descriptors makes matter and information two separate domains of existence, which have to be discussed separately, in their own terms.” (G.C. Williams, “A Package of Information,” in J. Brockman, ed., The Third Culture: Beyond the Scientific Revolution (New York: Simon and Schuster, 1995), p. 43.)

Williams’ two separate domains unite in life forms. But how separate are these domains?

Have we misunderstood the history of life because we are blinded by the influence of materialist theories? Do we look for a materialist explanation for information (= Darwinism) when it doesn’t and can’t exist?

So I ended up telling the questioner that I can predict this far: Scientists will not find serious answers by trying to prop up Darwinism but by looking more closely at the nature of information and how it relates to matter.

When we understand the history of life better, Darwin’s natural selection will be shown to play at best a minor, conservative role in maintaining fitness. We have yet to discover the patterns that govern the history of life.

I now wish I had remembered to point out that, historically, scientists have spent a lot of time and energy defending failing theories before entertaining better ones.

 

56 Replies to “Voice from the audience: ‘So when will Darwinism be history?’”

  1. Patrick says:

    I doubt it’ll ever completely die. It’ll lose relevance as its supporters die of old age…but I don’t expect them to change their minds.

  2. Tiggy says:

    “Information in life forms clearly does not arise the way Darwin thought it did.”

    Can someone please give me a scientifically rigorous definition of information as it applies to life forms? How does one measure the information in a life form? If information is not rigorously defined or measurable as it applies to life forms, how can one tell whether the information in each subsequent generation of animals has increased or decreased?

    I have searched the ID literature thoroughly but haven’t found explanations for this anywhere. Thanks for assisting this neophyte!

  3. Ekstasis says:

    So, just another way of expressing it: what the NDEs believe is that the natural order of design (information) followed by structure (manifestation or implementation of the design) was reversed. Much like a kite that ends up flying first and only later, after it has spent some time in the air, develops an electronic piloting system on board. And they find ID implausible!

  4. BarryA says:

    Tiggy,

    Here is a brief introduction to the subject:

    DNA and Other Designs

    Stephen C. Meyer

    Copyright (c) 2000 First Things 102 (April 2000): 30-38.

    . . . In 1953, James Watson and Francis Crick elucidated the structure of the DNA molecule. Soon thereafter, molecular biologists discovered how DNA stores the information necessary to direct protein synthesis. In 1955 Francis Crick first proposed the “sequence hypothesis” suggesting that the specificity of amino acids in proteins derives from the specific arrangement of chemical constituents in the DNA molecule. According to the sequence hypothesis, information on the DNA molecule is stored in the form of specifically arranged chemicals called nucleotide bases along the spine of DNA’s helical strands. Chemists represent these four nucleotides with the letters A, T, G, and C (for adenine, thymine, guanine, and cytosine). By 1961, the sequence hypothesis had become part of the so–called “central dogma” of molecular biology as a series of brilliant experiments confirmed DNA’s information–bearing properties.

    As it turns out, specific regions of the DNA molecule called coding regions have the same property of “sequence specificity” or “specified complexity” that characterizes written codes, linguistic texts, and protein molecules. Just as the letters in the alphabet of a written language may convey a particular message depending on their arrangement, so too do the sequences of nucleotide bases (the A’s, T’s, G’s, and C’s) inscribed along the spine of a DNA molecule convey a precise set of instructions for building proteins within the cell. The nucleotide bases in DNA function in precisely the same way as symbols in a machine code. In each case, the arrangement of the characters determines the function of the sequence as a whole. As Richard Dawkins has noted, “The machine code of the genes is uncannily computer–like.” In the case of a computer code, the specific arrangement of just two symbols (0 and 1) suffices to carry information. In the case of DNA, the complex but precise sequencing of the four nucleotide bases (A, T, G, and C) stores and transmits the information necessary to build proteins. Thus, the sequence specificity of proteins derives from a prior sequence specificity—from the information—encoded in DNA . . .

    The empirical difficulties that attend self–organizational scenarios can be illustrated by examining a DNA molecule. The . . . structure of DNA depends upon several chemical bonds. There are bonds, for example, between the sugar and the phosphate molecules that form the two twisting backbones of the DNA molecule. There are bonds fixing individual (nucleotide) bases to the sugar–phosphate backbones on each side of the molecule. Notice that there are no chemical bonds between the bases that run along the spine of the helix. Yet it is precisely along this axis of the molecule that the genetic instructions in DNA are encoded.

    Further, just as magnetic letters can be combined and recombined in any way to form various sequences on a metal surface, so too can each of the four bases A, T, G, and C attach to any site on the DNA backbone with equal facility, making all sequences equally probable (or improbable). The same type of chemical bond occurs between the bases and the backbone regardless of which base attaches. All four bases are acceptable; none is preferred. In other words, differential bonding affinities do not account for the sequencing of the bases. Because these same facts hold for RNA molecules, researchers who speculate that life began in an “RNA world” have also failed to solve the sequencing problem—i.e., the problem of explaining how information present in all functioning RNA molecules could have arisen in the first place . . .

    To see the distinction between order and information, compare the sequence “ABABABABABABAB” to the sequence “Time and tide wait for no man.” The first sequence is repetitive and ordered, but not complex or informative. Systems that are characterized by both specificity and complexity (what information theorists call “specified complexity”) have “information content.” Since such systems have the qualitative feature of aperiodicity or complexity, they are qualitatively distinguishable from systems characterized by simple periodic order. Thus, attempts to explain the origin of order have no relevance to discussions of the origin of information content. Significantly, the nucleotide sequences in the coding regions of DNA have, by all accounts, a high information content—that is, they are both highly specified and complex, just like meaningful English sentences or functional lines of code in computer software.

    Yet the information contained in an English sentence or computer software does not derive from the chemistry of the ink or the physics of magnetism, but from a source extrinsic to physics and chemistry altogether. Indeed, in both cases, the message transcends the properties of the medium. The information in DNA also transcends the properties of its material medium. Because chemical bonds do not determine the arrangement of nucleotide bases, the nucleotides can assume a vast array of possible sequences and thereby express many different biochemical messages.

    If the properties of matter (i.e., the medium) do not suffice to explain the origin of information, what does? Our experience with information–intensive systems (especially codes and languages) indicates that such systems always come from an intelligent source—i.e., from mental or personal agents, not chance or material necessity. This generalization about the cause of information has, ironically, received confirmation from origin–of–life research itself. During the last forty years, every naturalistic model proposed has failed to explain the origin of information—the great stumbling block for materialistic scenarios. Thus, mind or intelligence or what philosophers call “agent causation” now stands as the only cause known to be capable of creating an information–rich system, including the coding regions of DNA, functional proteins, and the cell as a whole.

    The entire article can be found at http://www.firstthings.com.
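[Ed.: Meyer’s order-versus-information distinction can be roughly illustrated with a compression test: a repetitive string like “ABAB…” is highly compressible, while an aperiodic English sentence of the same length is not. A minimal Python sketch; note that compressed size is only a crude statistical stand-in, not a measure of Meyer’s “specified complexity”.]

```python
import zlib

def compressed_size(s: str) -> int:
    """Bytes needed to store s after zlib compression (level 9)."""
    return len(zlib.compress(s.encode("utf-8"), 9))

ordered = "AB" * 15                           # repetitive order, 30 chars
sentence = "Time and tide wait for no man."   # aperiodic text, 30 chars

# The repetitive string collapses under compression; the aperiodic
# sentence barely compresses at all, despite the equal lengths.
print(compressed_size(ordered) < compressed_size(sentence))  # True
```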

  5. Mats says:

    In a way, I agree with Patrick on this one. Darwinism will not die totally, but it might lose great chunks of its cultural and scientific relevance. I guess that is some form of death. Probably one needs first to say what he means by “death of Darwinism”. Does he mean death within the scientific community?

    Secondly, we must not forget Darwinism’s great appeal:

    “Evolution is unproved and unprovable. We believe it only because the only alternative is special creation, and that is unthinkable.” (Sir Arthur Keith, in Wysong, 1976, p. 31)

    “Evolution itself is accepted by zoologists, not because it has been observed to occur…or can be proved by logical coherent evidence, but because the only alternative, special creation, is clearly incredible.” (Watson D.M.S., “Adaptation,” Nature, Vol. 123, 1929, p. 233, in Wysong R.L., The Creation-Evolution Controversy, 1976, p. 31)

  6. Inquisitive Brain says:

    Information in life forms clearly does not arise the way Darwin thought it did.

    Somebody correct me if I’m wrong, but Darwin knew nothing of “biological information,” right?

    Denyse, are you conflating information with Darwin’s view of the biological process of variation-selection-inheritance?

    Did Darwin even know anything about information in general? Was that even on the academic radar?

    I’m only slightly familiar with Darwin’s views, but I’ve read the Origin several times, once for a class on evolution and another time when writing a research paper on the history of biology. I have not read Darwin’s letters to friends et al, so my knowledge of his wider view could be limited.

  7. BarryA says:

    Inquisitive Brain,

    Information was not on anyone’s radar until after the discovery of the structure of DNA in the 1950s, a hundred years after Darwin wrote the Origin.

  8. StuartHarris says:

    Gil Dodgen told me once that Darwinism will end when a critical mass of young students and researchers in the life sciences come to the conclusion that they cannot make any original contributions to science by following the Darwinian paradigm. They will realize that following Darwinism will consign them to being stamp collectors rather than real scientists. This realization will at some point lead them to information sciences and to new non-Darwinian conclusions.

    Stu Harris
    http://www.theidbookstore.com

  9. ofro says:

    StuartHarris:
    ” ….. young students and researchers in the life sciences come to the conclusion that they cannot make any original contributions to science by following the Darwinian paradigm. They will realize that following Darwinism will consign them to being stamp collectors rather than real scientists.”

    I think you are painting with a fairly broad brush. Do you mean to say that they can’t work on finding a cure for cancer, diabetes or heart disease?

  10. Tiggy says:

    Thanks BarryA, I am already quite familiar with the workings of DNA. However, what I was asking for is a rigorous scientific definition of information as it applies to life forms, not an analogy to English sentences. Are you equating information with the complexity of the DNA molecules? If so, there are many known natural processes that increase DNA complexity (gene duplication with point mutations, etc.) and thus produce novel morphological changes when expressed. Also, I am still curious as to how to measure the information content of a life form. Any info on that? Thanks again for responding. I want to be sure I understand the ID position correctly.

  11. johnnyb says:

    Tiggy —

    I think the problem is mainly that information isn’t a single quantity. I like Gitt’s classes of information, though no one has yet developed a quantitative version of any of the additional levels. Currently we can only talk quantitatively about statistical information, which is not very relevant.

    I think we are going to find that the major constraints lie at the level of semantics. Shared semantics is what allows pieces to join together and work. However, I do not think it will be quantifiable with a number. Instead it will be envisaged as “rules” and the “importance of rules”. And I think we will find a distinction between “core” and “ancillary” rules, and that “core” rules cannot be added to stochastically.

    I defend such a notion here if anyone is interested.

  13. Mats says:

    Ofro,

    They sure can find cures for whatever disease, but they won’t find those cures because of Darwinian assumptions.

  14. Tiggy says:

    Thanks again BarryA. I’ve already read that paper by Dr. Dembski some time ago too. It’s probably me, but I just can’t find any place where the paper gives a definition of information as it applies to life forms. There are lots of nifty analogies and examples using probabilities of poker hands, and archers shooting arrows, and rats in a maze, etc., but no rigorous definition of information as it applies to life forms. I would greatly appreciate it if you could maybe copy the relevant passage and post it here, thanks. Also, I couldn’t find anything in the paper about the information content of DNA as in your first answer. As I asked before, does one equate information with the complexity of the DNA molecules?

    “I think the problem is mainly that information isn’t a single quantity. I like Gitt’s classes of information, though noone has yet developed a quantitative version of any of the additional levels. We can only, currently, quantitatively talk about statistical information, which is not very relevant.”

  15. Tiggy says:

    “They sure can find cures for whatever disease, but they won’t find those cures because of Darwinian assumptions.”

    Just curious, Mats – what ID assumptions would you use that are different from ‘Darwinian assumptions’, and how would using those different assumptions be better for finding cures for diseases?

  16. Michaels7 says:

    “If so, there are many known natural processes that increase DNA complexity (gene duplication with point mutations, etc.) and thus produce novel morphological changes when expressed.”

    Tiggy, what are the novel morphological changes you’re referencing?

    ofro,
    What macro-evolutionary skills are required to cure cancer? Recognizing mutations that cause disease, malformations, etc., leads to understanding 1) how to repair specific mutations, 2) other regulatory factors, 3) blocking factors, and 4) what caused the original mutation.

    Therefore reverse-engineering cell design, genetics, and microstructures leads to insight and cures, or to advice on modified behavior patterns.

    Whereas RM (random mutation) guessing about past scenarios leads only to more guessing about past scenarios.

    OTOH, GMO design is succeeding and moving forward. The evidence is in practical commercial applications.

    We’ve been down this road before. Macroevolution is not required understanding for doing operational genetics or medical care. What is required is computational, engineering and systems analysis.

    There seems to be a confusion of mutations with the on/off switches that regulate anatomical designs. The evolution paradigm overstates mutations’ power.

  17. idnet.com.au says:

    Tiggy,

    You said you want a definition of “information as it applies to life forms, not an analogy to English sentences.”

    Most science is described in words that only have meaning by analogy. I think the analogy between language and DNA sequences is very compelling, and is used by Darwinian evolutionary biologists and ID evolutionary biologists alike.

    If we take a single-cell, single-species, entirely functional genome, we have no redundancy. There is a minimum description of that genome. That may represent the information content of the genome. There is also spatial and structural information that is most likely needed to make the cell.

    There has been recent work on Mycoplasma trying to determine the minimum genetic information needed for a functioning organism. The answer is in the ballpark of 400 or so DNA-encoded, specified long protein sequences. This may be close to the minimum of specified information needed for life.

    You also wrote “there are many known natural processes that increase DNA complexity (gene duplication with point mutations, etc.) and thus produce novel morphological changes when expressed.”

    What you refer to is microevolutionary change through adaptive mutation. This has not been shown to be very powerful. I think it is ID’s contention that this process is highly constrained and merely tinkers around the adaptive edges of creative processes.

    The concept of specified complexity is used to denote the difference between complexity and information content. In very small systems like Mycoplasma, it may possibly be quantifiable, but in very large genomes, like ours, there may be many layers of complexity and many forms of compression that remain to be discovered. A good example of compression is the specific immune globulin production mechanism. Here millions of highly specific products arise from relatively few genes. These systems scream out design.
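[Ed.: The “minimum description” idea above invites a back-of-envelope calculation. Assuming, purely for illustration, ~400 protein-coding genes averaging 1,000 base pairs each (the gene-length figure is an assumption, not from the Mycoplasma studies), with 2 bits per base for an equiprobable four-letter alphabet:]

```python
# Back-of-envelope sketch only; BP_PER_GENE is an illustrative assumption.
GENES = 400          # roughly the Mycoplasma ballpark mentioned above
BP_PER_GENE = 1_000  # assumed average coding length per gene
BITS_PER_BASE = 2    # log2(4) for an equiprobable A/T/G/C alphabet

total_bits = GENES * BP_PER_GENE * BITS_PER_BASE
total_bytes = total_bits // 8
print(total_bits, total_bytes)  # 800000 100000
```

That is, a raw upper bound of about 100 kilobytes, before any of the compression or layering idnet.com.au mentions.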

  18. johnnyb says:

    Inquisitive —

    Of course Darwin knew of biological information, though he certainly didn’t have much background in information theory, nor did he know exactly what was being transmitted. But reproduction itself requires a concept of information. Information is central to biology, even before the information age. The way Darwin attempted to solve part of the problem (which indicates he at least knew it was a problem) is called “pangenesis”. It was also knowledge of the information issue that led Weismann to reject pangenesis and propose the “germ line”; while I disagree that the Weismann barrier is impassable, the germ-line/soma distinction has been established essentially beyond question. This was all based on ideas of biological information which were present at the time.

    A way of formulating information questions before the “information age” or before knowledge of DNA is like this:

    We know that organism C is like its parents, organism A and organism B. What is it that causes these similarities? For something to be inherited, there had to be a “something” to inherit. So, what creates that something? Darwin proposed natural selection operating on random modifications (though he thought these were somatic changes, not internal mutations as current neo-Darwinists do). Even with a primitive view of information, the information question was still central to biology.

  19. idnet.com.au says:

    I come from the land down under. You guys get to have all your fun while I have to go to work. See you later.

  20. johnnyb says:

    BarryA —

    For a peer-reviewed paper that comes to essentially the same conclusion, see Three subsets of sequence complexity and their relevance to biopolymeric information:

    But under no known circumstances can self-ordering phenomena like hurricanes, sand piles, crystallization, or fractals produce algorithmic organization. Algorithmic “self-organization” has never been observed [70] despite numerous publications that have misused the term [21,151-162]. Bona fide organization always arises from choice contingency, not chance contingency or necessity. [emphasis mine]

  21. bebbo says:

    Patrick said: “I doubt it’ll ever completely die. It’ll lose relevance as its supporters die of old age…but I don’t expect them to change their minds.” It’s what, nearly 150 years since the publication of Darwin’s Origin of Species. How many supporters do you think have died during that time?

  22. Charlie says:

    bebbo,
    Why?

  23. Michaels7 says:

    As to NDE’s demise or the end of Darwinist dogma: it’s already begun on many practical levels. When Intel’s president refused to fully endorse the SciAm editor’s position to push more forcefully for Darwinism in academics, that was a rebuff from the practical world to the impractical. Why would Intel care about a historical science, entangling itself in speculative arguments that are so divisive and as yet unproven?

    Likewise, Microsoft’s 2020 research program noted the need for more computational systems, math and engineering focus. No one would recommend large outlays on systems research unless they were sure of the direction. They’re putting their money on engineering, software and hardware design concepts, math and physics. I didn’t read anything in the PDF paper which said they were going to try to randomly mutate a new lifeform. Perhaps they too are aware of the failures? Fruit-fly experiments? No, it was all practical recommendations for upgrading the biological sciences with more emphasis on core math, physics, computers and engineering.

    Very practical: they realized we’re still at the beginning of understanding all the internal nano-mechanisms of genetics and cellular interactions.

    It is one thing to identify parts, quite another to understand fully how to reverse engineer and produce working micro replicas. The more science engineers mimic different pieces and produce such replicas – the more the Design Paradigm will take hold.

    On the practical level, it’s already happening. Intellectual elites, especially those married to Darwinian dogma, will never let go. Much like an eternal vow, it will follow them to their graves.

  24. John A. Davison says:

    Darwinism BECAME history in 1873, 12 years after its inception when St George Jackson Mivart asked the question – How can natural selection be involved with a structure that has not yet appeared? Like all other critics of the Darwinian myth, Mivart never existed. Don’t take my word for it. Examine the indexes of the several books by Mayr, Gould, Provine and Dawkins and try to find reference to my several sources all critics of Darwinism. They don’t exist either. The Darwinian, atheist “prescribed” ideologues even found it necessary to dismiss two of their own, Julian Huxley, who rightfully claimed evolution was finished, and Theodosius Dobzhansky who proved that selection cannot exceed the species barrier. What is even more baffling is why Huxley and Dobzhansky remained Darwinians! I guess they were just “born that way.”

    It is hard to believe isn’t it?

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  25. bebbo says:

    Charlie responded to my comment by asking “why?”. The point was that many supporters of Darwinism have already died since 1859, but Darwinism (as you people like to call it) continues as the scientific explanation for the origin of species. In other words, you don’t need its supporters to die; you need a scientific theory that better explains the data and is fruitful for research. There’s been more than 150 years for ID to become just that. That it hasn’t should tell you something important about ID.

  26. Charlie says:

    bebbo,

    There’s been more than 150 years for ID to become just that. That it hasn’t should tell you something important about ID.

    Should it really?
    I am tempted to play the “why” game some more, but I’m afraid you aren’t getting the point.
    Your challenge “how many Darwinists have died” has nothing to do with Patrick’s thought “I doubt it’ll ever completely die. It’ll lose relevance as its supporters die of old age…but I don’t expect them to change their minds.”
    Now you’ve highlighted the reason that your question is irrelevant and you don’t even understand why.
    During the 150 years that your Darwinists were dying, the cell has gone from a black box to an integrated factory to a city of complex information networks. The paradigm of “simple to complex” has been turned on its head. The mantra of “slow, steady and inexorable incremental change” has failed. Those who have been able to cling to Darwinian ideas in the face of these changes will be less and less able to convince those coming up behind them – those who’ve grown up with the ideas of the information age and the lessons of ID.
    What should 150 years of Darwinism really tell us about the effects of a decade or so of ID? That Darwinism is done.

  27. Tiggy says:

    “Most science is described in words that only have meaning by analogy.”

    Not in this case it doesn’t. When I see people saying “natural processes can’t increase information in life forms,” I need to know specifically what can’t be increased, and specifically how that quantity is measured, so one can determine whether it increased or not.

    “What you refer to is microevolutionary changes through adaptive mutation. This has not been shown to be very powerful. I think it it is ID’s contention that this process is highly constrained and merely tinkers around the adaptive edges of creative processes.”

    Look guys, I’m not arguing against ID; I was merely asking for your definition of information as it applies to life forms. If you don’t have one, that’s OK. If it’s the length of the coding region of the DNA strand, that’s OK. If it’s something else, that’s OK. Please just pick one definition and stick with it. It only makes me scratch my head that the term is used so much by ID proponents without being formally defined or measurable.

  28. Smidlee says:

    As long as there is atheism and materialism there will be some form of Darwinism. So I doubt Darwinism will ever die, since it can always fall back on an “evolution of the gaps” defense. There has always been the idea that nature is its own creator. Many of the ancient idols were nature gods.

  29. Mats says:

    Just curious, Mats – what ID assumptions would you use that are different from ‘Darwinian assumptions’, and how would using those different assumptions be better for finding cures for diseases?

    Good question!
    I guess the most obvious area where the design hypothesis would be more successful than the Darwinian hypothesis is in regard to so-called “junk DNA”. Dr. Stephen C. Meyer, in his debate with Dr. Peter Ward, effectively said that the Darwinian framework led scientists to a dead end. You may say, “Well, he is an ID scientist, so he has every motive to attack Darwin.” Well, Scientific American published pretty much the same conclusion a few years back.

    In 2003, an article in Scientific American about the functionality of so-called “junk-DNA” called our failure to recognize introns as functional within the cell “one of the biggest mistakes in the history of molecular biology”:

    Yet the introns within genes and the long stretches of intergenic DNA between genes, Mattick says, “were immediately assumed to be evolutionary junk.”

    […]

    About two thirds of the conserved sequences lie in introns, and the rest are scattered among the intergenic “junk” DNA. “I think this will come to be a classic story of orthodoxy derailing objective analysis of the facts, in this case for a quarter of a century,” Mattick says. “The failure to recognize the full implications of this—particularly the possibility that the intervening noncoding sequences may be transmitting parallel information in the form of RNA molecules—may well go down as one of the biggest mistakes in the history of molecular biology.” (The Unseen Genome: Gems Among the Junk by Wayt T. Gibbs, Scientific American (November, 2003), emphasis added)

    http://www.evolutionnews.org/2.....s_a_2.html

  30. Tiggy says:

    Mats, maybe I should have been more specific:

    What ID assumptions would you use that are different from current Darwinian assumptions, and how would using those different assumptions be better for finding cures for diseases? It is that second question that I am really curious to hear your answer to.

    Do you know of anyone anywhere in medicine using the ID paradigm to produce successful results where the evolutionary paradigm didn’t?

  31. sagebrush gardener says:

    Tiggy,

    When I see people saying “natural processes can’t increase information in life forms,” I need to know specifically what can’t be increased, and specifically how that quantity is measured, so one can determine whether it increased or not.

    That’s a very interesting question (to me, anyway). There are, or so I’ve read, ways to measure the information content of a list of symbols. Is this applicable to the information content of the genome and has anyone attempted to do this?
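[Ed.: For what it’s worth, the standard measure for a list of symbols is Shannon entropy, which quantifies only statistical information (the limitation johnnyb notes above), not meaning or function. A minimal Python sketch:]

```python
import math
from collections import Counter

def shannon_entropy(seq: str) -> float:
    """Shannon entropy of a symbol sequence, in bits per symbol."""
    n = len(seq)
    counts = Counter(seq)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A maximally mixed four-letter sequence carries log2(4) = 2 bits per
# symbol; a single repeated letter carries none.
print(shannon_entropy("ATGC" * 10))  # 2.0
print(shannon_entropy("A" * 40))     # 0.0
```

Applying this to a genome treats it as a bare symbol string, which is exactly why such measures say nothing about whether a sequence is functional.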

  32. 32
    ofro says:

    Mats,
    “About two thirds of the conserved sequences lie in introns, and the rest are scattered among the intergenic “junk” DNA.”

    This joyful presentation of junk DNA as a failure of Darwinism is not quite fair and might well be a bit premature. It is true that in the beginning phase of genomic analysis researchers had no idea what intergenic and intronic DNA might do. So the tentative conclusion was that it might be good-for-nothing, or “junk.” Because of the systematic work of just these Darwinian workers, it turns out that there are regulatory elements not only in the promoters immediately upstream and downstream of a gene but also in introns and in regions that are at times far away from the gene. And there are regions that control the expression levels of genes in a long stretch encompassing several genes, etc. All this was worked out not because somebody said something like “there must be some design somewhere since it is DNA,” but because there were observations, for example with respect to gene regulation, that were not consistent with the notion of a short promoter stretch regulating all of that gene’s expression.

    So at present, ID proponents are happily proclaiming “we told you so,” even though I would like to hear precisely what ID predicted to begin with. However, so far the non-junk DNA still makes up only a fraction of the entire genomic DNA; after all, on average there are 27 genes per million basepairs in the human genome, while an average gene might need only something like several thousand basepairs to code for a protein. Mammalian genomes are full of repeats, satellites, short and long interspersed elements, etc. To say nothing of stretches referred to as pseudogenes, which are readily explained by Darwinian mechanisms but whose design is a lot less understood.

    There is still a _very_ good chance that when the smoke has cleared, a lot of a mammalian genome is “junk,” i.e. has no specified function except “hanging out” because it is a remnant of earlier duplications, rearrangements, etc. Or does ID assign a role to every bit of genomic DNA?

  33. 33
    BarryA says:

    Tiggy,

    Since you are still asking for a definition of information, you apparently did not follow the link that I gave you to the requested definition. This leads me to believe you are not really interested in the answer to your question and are only being tendentious. In any event, here is an excerpt from the article you refused to go read:

    “What then is information? The fundamental intuition underlying information is not, as is sometimes thought, the transmission of signals across a communication channel, but rather, the actualization of one possibility to the exclusion of others. As Fred Dretske (1981, p. 4) puts it, “Information theory identifies the amount of information associated with, or generated by, the occurrence of an event (or the realization of a state of affairs) with the reduction in uncertainty, the elimination of possibilities, represented by that event or state of affairs.” To be sure, whenever signals are transmitted across a communication channel, one possibility is actualized to the exclusion of others, namely, the signal that was transmitted to the exclusion of those that weren’t. But this is only a special case. Information in the first instance presupposes not some medium of communication, but contingency. Robert Stalnaker (1984, p. 85) makes this point clearly: “Content requires contingency. To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth.” For there to be information, there must be a multiplicity of distinct possibilities any one of which might happen. When one of these possibilities does happen and the others are ruled out, information becomes actualized. Indeed, information in its most general sense can be defined as the actualization of one possibility to the exclusion of others (observe that this definition encompasses both syntactic and semantic information).”

  34. 34
    Emkay says:

    John A. Davison — ‘What is even more baffling is why Huxley and Dobzhansky remained Darwinians! I guess they were just “born that way.”’

    Julian Huxley actually did give the reason why he remained a devout Darwinist. On Page 223 of “Essays of a Humanist,” (1966) he confesses: “The sense of spiritual relief which comes from rejecting the idea of God as a super-human being is enormous.”

    Sir Julian was even more explicit when, a decade later, as director of the United Nations Educational, Scientific and Cultural Organization (Unesco), he explained to a PBS interviewer who wondered why the scientific and cultural elite of the West had so enthusiastically embraced Darwinism. He said:

    “I suppose we leapt at ‘Origins’ because the idea of God interfered with our sexual mores.”

    “Science,” and even the pseudo-science, had little to do with it. It was all about wanting to be a Party Anemaal!


    A past evolution is unsupported by the evidence, a present evolution indemonstrable.

  35. 35
    Strangelove says:

    BarryA, can a number be applied with this definition? I read it, but I still don’t understand how I measure information. Talk slowly, I’m no information expert.

    Start with a sequence of numbers or letters and step me through the process.
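
    For instance – and I’m only guessing this is the sort of recipe meant – the textbook Shannon calculation over a string’s symbol frequencies goes like this (a rough sketch; the name shannon_bits_per_symbol is just illustrative):

```python
import math
from collections import Counter

def shannon_bits_per_symbol(sequence):
    """Shannon entropy in bits per symbol, estimated from the
    observed frequency of each symbol in the sequence."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Four equally frequent letters give 2 bits per symbol,
# the maximum for a 4-letter alphabet:
print(shannon_bits_per_symbol("ACGTACGTACGT"))  # 2.0

# A heavily biased sequence carries less per symbol:
print(shannon_bits_per_symbol("AAAAAAAACGT"))   # well under 2
```

    Whether counting symbol frequencies like this is the right measure for “information in life forms” is, of course, exactly what is in dispute here.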

  36. 36
    sagebrush gardener says:

    Strangelove,

    Here’s an article on Kolmogorov complexity that, I admit, goes completely over my head. Maybe you can get something from it!

    http://en.wikipedia.org/wiki/K.....complexity

    And here is a simplified introduction to Shannon theory and Kolmogorov-Chaitin theory as they relate to the information content of DNA:

    http://www.talkorigins.org/ori.....feb01.html

    Very interesting stuff. It makes me wish I was smart!

    [Kolmogorov-Chaitin information theory] has been used to discuss the information content of DNA (because with DNA, the information content is not determined solely by the gene sequence, but by the machinery that processes it).

  37. 37
    sagebrush gardener says:

    Strangelove,

    As someone once said, I don’t have any solution, but I certainly admire the problem!

  38. 38
    Tiggy says:

    “Since you are still asking for a definition of information, you apparently did not follow the link that I gave you to the requested definition. This leads me to believe you are not really interested in the answer to your question and are only being tendentious. In any event, here is an excerpt from the article you refused to go read:”

    BarryA, I have never just asked for a definition of information; I asked for a definition of information as it applies to life forms, and I asked how to measure the information content of life forms. I only ask this because Ms. O’Leary referred to the information in life forms in her article.

    I had indeed read that paper before I ever posted here, and I read it again after you linked to it. Thank you for posting the generalized definition of information from the paper, although it doesn’t answer the questions asked. Now, as a favor, would you please post the portion of the paper that defines and tells how to measure information in life forms? I just can’t find it. Thanks – I’m not trying to be difficult, I’m just confused as to why I can’t get a straight answer to what I thought was a simple question.

  39. 39
    Tiggy says:

    Thanks sagebrush gardener, that TalkOrigins piece on K-C information really is interesting stuff, and along the lines of what I hoped to learn about measuring information in life forms. However, this part

    “The creationist trick is to say that the term “entropy” means the same thing in both Shannon and K-C information theories. If that’s true, then you can take a measure of the information content of DNA, using K-C terms, and then argue that on the basis of Shannon theory, the information content of the DNA can never increase.

    The flaw here is actually pretty subtle. K-C says nothing about how information content can change. It simply talks about how to measure information content, and what, in fact, information content means in a mathematical/computational sense. But Shannon is working in a very limited field where there is a specific, predetermined upper bound on information content. K-C, by definition, has no such upper bound.

    Adding randomness to a system adds noise to the system. By Shannon theory, that means that the information content of the system decreases. But by K-C theory, the information content will likely increase by the addition of randomness. K-C allows noise to increase information content; Shannon doesn’t. Mix the two, you get something nonsensical, but you can create some very deep looking stuff that looks very dazzling to people who aren’t trained in either form of information theory.”

    …seems to directly contradict the paper that BarryA referred me to. Any ideas which is correct, and why?
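
    One half of that contrast is at least easy to demonstrate. Using a general-purpose compressor as a crude, imperfect stand-in for K-C description length (true Kolmogorov complexity is uncomputable), adding randomness makes a sequence need a longer description. A rough sketch – compressed_size is just an illustrative helper, not anything from either paper:

```python
import random
import zlib

def compressed_size(s):
    """Length in bytes of the zlib-compressed string: a crude upper
    bound on its Kolmogorov (description-length) complexity."""
    return len(zlib.compress(s.encode()))

random.seed(0)  # make the "random" sequence reproducible
repetitive = "ACGT" * 250                                    # 1000 highly regular letters
noisy = "".join(random.choice("ACGT") for _ in range(1000))  # 1000 random letters

# The random sequence resists compression, so its estimated
# description length (the K-C sense of information) is much larger:
print(compressed_size(repetitive))
print(compressed_size(noisy))
```

    By the Shannon picture, that same noise counts against the information content – which is exactly the mixing of measures the TalkOrigins piece warns about.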

  40. 40
    russ says:

    Tiggy, there seems to be some point behind your insistence that someone here supply a means of measuring information in life forms. What is it? What is the object of your apparent skepticism? You don’t seem “just curious”; rather, your posts come across as a challenge.

  41. 41
    russ says:

    “Do you know of anyone anywhere in medicine using the ID paradigm to produce successful results where the evolutionary paradigm didn’t?” – Tiggy

    Tiggy, I’m a layperson, so bear with me.

    Has NDE contributed anything to the search for disease cures? I ask because a while back there was a quote here from some eminent man of science who said that NDE had contributed absolutely nothing to the success of his research. Maybe someone else can link to the UD thread that I’m thinking of.

  42. 42
    russ says:

    Tiggy, let me answer my own question:

    Here’s what Massimo Pigliucci, a prominent defender of NDE, has to say. Correct me if I’m wrong, though, because he seems to be talking about microevolution, which no one in ID disagrees with…

    http://www.actionbioscience.or.....iucci.html

    ActionBioscience.org: How does evolution contribute to the understanding of human disease and medicine?

    Pigliucci: There is an entire field that has been developed over the last 20 years called evolutionary medicine. The idea of evolutionary medicine is that human beings are animals like any other species. We are not outside of nature. As such we are subject to the same sort of natural phenomena, including natural selection and other types of evolutionary mechanisms. So evolutionary medicine tries to understand the origin of disease, why we have certain kinds of disease, and how we can fight them using evolutionary principles. Here are two examples:
    Antibiotic use and HIV/AIDS treatment are examples of evolutionary medicine. One of the typical examples is the idea that is essentially evolutionary when we use antibiotics for our ailments. We should use antibiotics in an intelligent way. For example, we should be using multiple antibiotics in a careful regimen. If we use single antibiotics and we don’t use them carefully enough, what we do is cause natural selection in the pathogen to select for resistance. The origin of resistance in antibiotics is an imminently evolutionary mechanism, and if we understand how evolution works, then we can avoid it or at least we can slow it down.
    The same situation goes for the most successful approaches to complex diseases such as HIV/AIDS. One of the best approaches to fight that kind of battle is, in fact, to bombard the population of viruses with a variety of responses, not just with one. For the same reason as multiple antibiotics. The virus evolves very rapidly to respond with resistance to individual medical solutions or medications. When we use multiple ones, what we are doing is using the basic principle of evolution–living organisms simply cannot evolve resistance to complex environments because they cannot count on multiple divisions happening at the same time. That is an important principle that comes out of evolution.

  43. 43
    Tiggy says:

    Tiggy, there seems to be some point behind your insistence that someone here supply a means of measuring information in life forms. What is it? What is the object of your apparent skepticism? You don’t seem “just curious”; rather, your posts come across as a challenge.

    I have not insisted on anything. I have politely asked several times for an explanation for how to measure the information content in life forms. I only ask because I honestly don’t understand how one can claim ‘information can’t increase in life forms’ when there is no way to measure what the information was to begin with. If you can’t measure something, how can you know if it increased, decreased, or stayed the same?

    This is not a challenge, it’s an honest question that not one person here has been able to clue me in on. I know very little about the details of ID, which is why I’m politely asking. What am I to think about the hostile non-answers I keep receiving?

  44. 44
    John A. Davison says:

    #34 by Emkay

    I take it from your chronic parody of my signature that you deny evolution (reproductive continuity with change) in any form. Would you please present your synopsis of the present situation and how it got to be that way?

    Incidentally the alternative to Darwinism is not “special creation.” The only inescapable requisite is one or more creations and/or front-loadings by one or more programmers (I call them Big Front Loaders or BFLs) an unknown number of times and at unknown sites in the geological column.

    It is unthinkable to deny organic evolution.

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  45. 45
    sagebrush gardener says:

    Tiggy,

    I am trying to make sense of a subject that is out of my field, and there must be someone who can address your questions better than I can. But I feel that we are both making an honest attempt to learn more, so I’ll give it the best shot I can. Maybe some good will come of it, even if it’s only to prod someone more knowledgeable into clearing up my misconceptions.

    You have asked “for an explanation for how to measure the information content in life forms.” I don’t think that anyone can yet say with precision that (for example) the human genome has information content x, while the fruit fly genome has information content y. We can speak in generalities, but there are many difficulties in measuring this precisely (or even defining what “information” is). For example, how would you rate non-coding DNA? Should it be given the same weight in calculations as DNA that has a known function? At this time I think we have not even quite arrived at the point where we know how much we don’t know about the genetic code.

    [The] TalkOrigins piece on K-C information … seems to directly contradict the paper that BarryA referred me to. Any ideas which is correct, and why?

    I’m sorry — I really tried hard to understand this. I studied both Chu-Carroll’s post at T-O and Dembski’s paper for a couple of hours until my eyes crossed and my head started to spin. I learned a lot about information theory and the difference between the Shannon definition of information vs. the K-C definition of information, but I wasn’t able to see how Chu-Carroll’s post was relevant to Dembski’s argument. I am not even sure that Chu-Carroll was specifically referring to Dembski’s argument here, or if so, if he was characterizing it fairly.

    Chu-Carroll at T-O wrote:

    The creationist trick is to say that the term “entropy” means the same thing in both Shannon and K-C information theories. If that’s true, then you can take a measure of the information content of DNA, using K-C terms, and then argue that on the basis of Shannon theory, the information content of the DNA can never increase.

    Dembski parenthetically refers to both Shannon and K-C theories, but I was unable to see where he was basing his conclusion on these, let alone confusing the two. Maybe someone who understands this better than me can explain.

  46. 46
    Chris Hyland says:

    “So when will Darwinism be history?”

    It very much depends on what your definition of ‘Darwinism’ is.

    If it means something like ‘all variation is caused by DNA copying errors’ or ‘the direction of adaptive change is determined solely by natural selection’, or even ‘the same variation that changes moth colour and finch beaks is responsible for turning a hippo into a whale’, then I’d say pretty soon. But if it means something like ‘the major mechanisms of evolution that have been deduced have no discernible direction or goal’, then I can’t see that happening until another theory is proposed that has much better explanatory power.

    P.S. I know I seem to be saying that ‘textbook Darwinism’ will die. See the previous literature-bluffing thread for my thoughts on that. Long story short, if Sean Carroll and Wallace Arthur wrote a textbook on evolution I’d probably be happy.

  47. 47
    John A. Davison says:

    I thought I had presented an alternative to the Darwinian fairy tale with “A Prescribed Evolutionary Hypothesis,” Rivista di Biologia 98: 155-166, 2005. It is also widely available on the internet and was once here as well. Darwinism is the most failed hypothesis in the history of experimental science, while absolutely nothing pleads against the PEH. Everything we are now learning from molecular biology and chromosome structure and function indicates a preprogrammed emergent scenario for both ontogeny and phylogeny, a scenario in which no role for chance is or ever was apparent.

    What is happening is a perfect vindication for what Leo Berg claimed in 1922:

    “Evolution is in a great measure an unfolding of pre-existing rudiments.”
    Nomogenesis, page 406.

    “Neither in the one nor in the other is there room for chance.”
    ibid page 134.

    I only wish he had used the past tense in both statements.

    Bateson had preceded him as I noted previously here at Uncommon Descent and elsewhere.

    The persistence of the Darwinian hoax can only be explained as a manifestation of a congenital refusal to accept what is so obvious to some of us and which was so unambiguously presented by Einstein:

    “EVERYTHING is determined… by forces over which we have no control.”
    my emphasis

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  48. 48
    BarryA says:

    John writes: “It is unthinkable to deny organic evolution.”

    John, you should get out more often. One cannot disprove, in principle, the YEC claim that the entire universe was created on October 23, 4004 BC, just after lunch. God could have created in an instant a universe with an illusion of vast age. Am I saying this is what happened? No. I’m just saying it’s “thinkable.”

  49. 49
    Tiggy says:

    “You have asked “for an explanation for how to measure the information content in life forms.” I don’t think that anyone can yet say with precision that (for example) the human genome has information content x, while the fruit fly genome has information content y. We can speak in generalities, but there are many difficulties in measuring this precisely (or even defining what “information” is). For example, how would you rate non-coding DNA? Should it be given the same weight in calculations as DNA that has a known function? At this time I think we have not even quite arrived at the point where we know how much we don’t know about the genetic code.”

    THANK YOU!! That is exactly what I have been trying (unsuccessfully) to get an answer to. If one can’t define information or measure information content in a life form (I sure can’t do it), how can one claim information in a life form can’t increase?

    I will continue searching the ID literature for an answer, as I’ve pretty much given up hope for getting one here. I will also ponder why no one on an ID blog could provide this answer, assuming one exists.

  50. 50
    John A. Davison says:

    BarryA

    I say it is unthinkable. What do you intend to do about it? Ban me? That is my usual fate at forums where I have presented my convictions. At seventy-eight, and not in the best of health, I can assure you I couldn’t care less.

    “Of the few innocent pleasures left to men past middle life – the jamming of common sense down the throats of fools is perhaps the keenest.”
    Thomas Henry Huxley

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  51. 51
    Patrick says:

    I chose to quote Richard Dawkins in case you have a problem with certain sources:

    The technical definition of “information” was introduced by the American engineer Claude Shannon in 1948. An employee of the Bell Telephone Company, Shannon was concerned to measure information as an economic commodity. It is costly to send messages along a telephone line. Much of what passes in a message is not information: it is redundant. You could save money by recoding the message to remove the redundancy. Redundancy was a second technical term introduced by Shannon, as the inverse of information. Both definitions were mathematical, but we can convey Shannon’s intuitive meaning in words.

    Redundancy is any part of a message that is not informative, either because the recipient already knows it (is not surprised by it) or because it duplicates other parts of the message. In the sentence “Rover is a poodle dog”, the word “dog” is redundant because “poodle” already tells us that Rover is a dog. An economical telegram would omit it, thereby increasing the informative proportion of the message. “Arr JFK Fri pm pls mt BA Cncrd flt” carries the same information as the much longer, but more redundant, “I’ll be arriving at John F Kennedy airport on Friday evening; please meet the British Airways Concorde flight”. Obviously the brief, telegraphic message is cheaper to send (although the recipient may have to work harder to decipher it – redundancy has its virtues if we forget economics). Shannon wanted to find a mathematical way to capture the idea that any message could be broken into the information (which is worth paying for), the redundancy (which can, with economic advantage, be deleted from the message because, in effect, it can be reconstructed by the recipient) and the noise (which is just random rubbish).

    “It rained in Oxford every day this week” carries relatively little information, because the receiver is not surprised by it. On the other hand, “It rained in the Sahara desert every day this week” would be a message with high information content, well worth paying extra to send. Shannon wanted to capture this sense of information content as “surprise value”. It is related to the other sense – “that which is not duplicated in other parts of the message” – because repetitions lose their power to surprise. Note that Shannon’s definition of the quantity of information is independent of whether it is true. The measure he came up with was ingenious and intuitively satisfying. Let’s estimate, he suggested, the receiver’s ignorance or uncertainty before receiving the message, and then compare it with the receiver’s remaining ignorance after receiving the message. The quantity of ignorance-reduction is the information content. Shannon’s unit of information is the bit, short for “binary digit”. One bit is defined as the amount of information needed to halve the receiver’s prior uncertainty, however great that prior uncertainty was (mathematical readers will notice that the bit is, therefore, a logarithmic measure).

    In practice, you first have to find a way of measuring the prior uncertainty – that which is reduced by the information when it comes. For particular kinds of simple message, this is easily done in terms of probabilities. An expectant father watches the Caesarian birth of his child through a window into the operating theatre. He can’t see any details, so a nurse has agreed to hold up a pink card if it is a girl, blue for a boy. How much information is conveyed when, say, the nurse flourishes the pink card to the delighted father? The answer is one bit – the prior uncertainty is halved. The father knows that a baby of some kind has been born, so his uncertainty amounts to just two possibilities – boy and girl – and they are (for purposes of this discussion) equal. The pink card halves the father’s prior uncertainty from two possibilities to one (girl). If there’d been no pink card but a doctor had walked out of the operating theatre, shook the father’s hand and said “Congratulations old chap, I’m delighted to be the first to tell you that you have a daughter”, the information conveyed by the 17 word message would still be only one bit.

    ……..

    DNA carries information in a very computer-like way, and we can measure the genome’s capacity in bits too, if we wish. DNA doesn’t use a binary code, but a quaternary one. Whereas the unit of information in the computer is a 1 or a 0, the unit in DNA can be T, A, C or G. If I tell you that a particular location in a DNA sequence is a T, how much information is conveyed from me to you? Begin by measuring the prior uncertainty. How many possibilities are open before the message “T” arrives? Four. How many possibilities remain after it has arrived? One. So you might think the information transferred is four bits, but actually it is two. Here’s why (assuming that the four letters are equally probable, like the four suits in a pack of cards). Remember that Shannon’s metric is concerned with the most economical way of conveying the message. Think of it as the number of yes/no questions that you’d have to ask in order to narrow down to certainty, from an initial uncertainty of four possibilities, assuming that you planned your questions in the most economical way. “Is the mystery letter before D in the alphabet?” No. That narrows it down to T or G, and now we need only one more question to clinch it. So, by this method of measuring, each “letter” of the DNA has an information capacity of 2 bits.

    Whenever prior uncertainty of recipient can be expressed as a number of equiprobable alternatives N, the information content of a message which narrows those alternatives down to one is log₂N (the power to which 2 must be raised in order to yield the number of alternatives N). If you pick a card, any card, from a normal pack, a statement of the identity of the card carries log₂52, or 5.7 bits of information. In other words, given a large number of guessing games, it would take 5.7 yes/no questions on average to guess the card, provided the questions are asked in the most economical way. The first two questions might establish the suit. (Is it red? Is it a diamond?) The remaining three or four questions would successively divide and conquer the suit (is it a 7 or higher? etc.), finally homing in on the chosen card. When the prior uncertainty is some mixture of alternatives that are not equiprobable, Shannon’s formula becomes a slightly more elaborate weighted average, but it is essentially similar.

    ………..

    Remember, too, that even the total capacity of genome that is actually used is still not the same thing as the true information content in Shannon’s sense. The true information content is what’s left when the redundancy has been compressed out of the message, by the theoretical equivalent of Stuffit. There are even some viruses which seem to use a kind of Stuffit-like compression. They make use of the fact that the RNA (not DNA in these viruses, as it happens, but the principle is the same) code is read in triplets. There is a “frame” which moves along the RNA sequence, reading off three letters at a time. Obviously, under normal conditions, if the frame starts reading in the wrong place (as in a so-called frame-shift mutation), it makes total nonsense: the “triplets” that it reads are out of step with the meaningful ones. But these splendid viruses actually exploit frame-shifted reading. They get two messages for the price of one, by having a completely different message embedded in the very same series of letters when read frame-shifted. In principle you could even get three messages for the price of one, but I don’t know whether there are any examples.

    It is one thing to estimate the total information capacity of a genome, and the amount of the genome that is actually used, but it’s harder to estimate its true information content in the Shannon sense. The best we can do is probably to forget about the genome itself and look at its product, the “phenotype”, the working body of the animal or plant itself. In 1951, J W S Pringle, who later became my Professor at Oxford, suggested using a Shannon-type information measure to estimate “complexity”. Pringle wanted to express complexity mathematically in bits, but I have long found the following verbal form helpful in explaining his idea to students.

    We have an intuitive sense that a lobster, say, is more complex (more “advanced”, some might even say more “highly evolved”) than another animal, perhaps a millipede. Can we measure something in order to confirm or deny our intuition? Without literally turning it into bits, we can make an approximate estimation of the information contents of the two bodies as follows. Imagine writing a book describing the lobster. Now write another book describing the millipede down to the same level of detail. Divide the word-count in one book by the word-count in the other, and you have an approximate estimate of the relative information content of lobster and millipede. It is important to specify that both books describe their respective animals “down to the same level of detail”. Obviously if we describe the millipede down to cellular detail, but stick to gross anatomical features in the case of the lobster, the millipede would come out ahead.

    But if we do the test fairly, I’ll bet the lobster book would come out longer than the millipede book. It’s a simple plausibility argument, as follows. Both animals are made up of segments – modules of bodily architecture that are fundamentally similar to each other, arranged fore-and-aft like the trucks of a train. The millipede’s segments are mostly identical to each other. The lobster’s segments, though following the same basic plan (each with a nervous ganglion, a pair of appendages, and so on) are mostly different from each other. The millipede book would consist of one chapter describing a typical segment, followed by the phrase “Repeat N times” where N is the number of segments. The lobster book would need a different chapter for each segment. This isn’t quite fair on the millipede, whose front and rear end segments are a bit different from the rest. But I’d still bet that, if anyone bothered to do the experiment, the estimate of lobster information content would come out substantially greater than the estimate of millipede information content.

    Then of course you could buy one of Bill’s books. We’re concerned with CSI or “usable information”.
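
    Dawkins’s log₂N measure is easy to check for yourself. A minimal sketch (the helper name bits is mine, not Shannon’s or Dawkins’s):

```python
import math

def bits(n_alternatives):
    """Shannon information, in bits, of a message that narrows
    N equiprobable alternatives down to one: log2(N)."""
    return math.log2(n_alternatives)

print(bits(2))   # pink card vs. blue card: 1.0 bit
print(bits(4))   # one DNA letter (T, A, C or G): 2.0 bits
print(bits(52))  # the identity of a playing card: ~5.7 bits
```

    As the quote says, the non-equiprobable case just replaces log₂N with a weighted average over the alternatives.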

  52. 52
    Michael "Tutu" Tuite says:

    If volume of publication is any indication of a research field’s vitality, then (pardon me Twain) the rumors of evolution’s death have been greatly exaggerated. A quick search of citations on Web of Science yielded this steady climb in research output in evolution over the past half-dozen years:

    2005 : 31,559
    2004 : 30,299
    2003 : 27,315
    2002 : 25,231
    2001 : 24,192
    2000: 22,738

    How soon should we expect to see the increasing trend of intelligent design publications exceed the anticipated diminishing volume of evolution-related publications?

  53.
    Charlie says:

    Michael,
    There might be some significance in that tally – Barbara Forrest and the NCSE certainly think it speaks well of neo-Darwinism.
    On the other hand, it could just indicate the battle mode the defenders are in. Perhaps, like Letterman’s Bin Laden (“Oh yeah, death to America”) they are sometimes just repeating an obligatory mantra – a little tag-on, irrelevant to the body of the work, meant to mollify.
    I have an article on the "1 in a million" genetic code and how precisely suited it is to information transfer and error correction. There is a paragraph at the end, "How could this have evolved?", which answers itself with something like "some say it couldn't; this guy thinks it could have." Evolution was irrelevant to the feature being described.
    Or, in another article here, talking about the "mystery ape" (just a popular magazine, but the point holds), one line is added near the end: "we may be seeing evolution in action." Nothing in the article supported that inference, nor did it add anything to the interpretations.
    Or this, from Scientific American, on the Big Bang. Why would an article explaining the Big Bang need two introductory paragraphs about Darwinian evolution? Why would it need to reinforce the theory?

    A century and a half after On the Origin of Species, biologists still debate the mechanisms and implications (though not the reality) of Darwinism, while much of the public still flounders in pre-Darwinian cluelessness.

    These of course are more popular magazines, and perhaps not the ones in your survey, but the intent is obvious – circle the wagons, get the word out, evolution is true!

  54.
    ofro says:

    Question somewhat related to the topic of the “demise” of Darwinism:

    So the theory of evolution will be gone in the relatively near future. What about nature? Is there a notion in ID of how fauna and flora might look in, say, 30 million years?

    According to evolutionary theory, evolution will go on: new species will emerge, better adapted to present or changing environments. There is even a chance that humans, as we see each other now in the mirror, may come to look quite different due to genetic drift or other evolutionary driving forces. Perhaps we (they?) would be so different that we/they would qualify for a different species name.

    According to ID, would the human “design” remain the same more or less in perpetuity?
    (hoping, of course, that humans haven’t destroyed each other and the earth by then).

  55.
    tinabrewer says:

    ofro: the design inference does not say what you impute to it. It does not say that humans, or any other species, cannot change over time. Nor does it specify the mechanism by which such change will or will not take place. It only infers that certain aspects of nature are best explained as the result of intelligent agency rather than blind, purposeless processes. That's all. I myself am quite a fan of Rupert Sheldrake, whose views are dramatically evolutionary while at the same time completely non-materialist. He argues for the existence of "information fields" that govern morphology and evolution. You might check his ideas out. While I cannot say whether he would be friendly to ID, I can say for certain that it is entirely possible to be a fan of his ideas while standing inside the "big tent" of ID.

  56.
    John A. Davison says:

    Richard Dawkins is unquestionably the biggest con artist in the history of scientific communication. He is never cited in peer-reviewed papers because he has never presented a single tangible piece of data. He lives in his own world all alone. He no longer even lectures at Oxford. He is probably afraid a student might ask him a question. He is the perfect demonstration of the Prescribed Evolutionary Hypothesis, a victim of his own twisted, irreversible, congenital heritage.

    I personally offered him the opportunity to present his views at my blog when I sponsored the "First Annual Tournament of Evolutionary Mechanisms." He never even responded. Gould never responded to my questions or to the receipt of my reprints either. For a while Mayr did, but that stopped when he found it necessary to remind me of how many "thousands of words I have written on the subject of organic evolution." It should surprise no one that these three most influential Darwinian propagandists have collectively earned my characterization as the "Three Stooges" of the evolutionary literature.

    It is hard to believe, isn't it?

    “A past evolution is undeniable, a present evolution undemonstrable.”
