Intelligent Design

New revelations on gene expression

Research led by Prof Frank Gannon has uncovered new revelations on possible ways to switch genes on and off and how cells interpret their DNA.

Only some genes are expressed in any given tissue: proteins active in nerve cells are not expressed in the liver. How this is controlled is complex. One fundamental factor is whether the DNA is tagged or modified (methylated) in the region of a gene. This is important in gene expression and in balancing the levels of proteins in different cell lines.

Although gene methylation (the chemical tagging that switches a gene on or off) was thought to be stable and unchangeable, this is not the case. Things are even more complicated than previously thought. Transient, cyclical and dynamic methylation is a general phenomenon occurring at many different genes and in many different cell types.
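The on/off behaviour described above can be sketched as a toy model in Python (purely illustrative: the `Gene` class, its one-line expression rule, and the cycling loop are inventions for this example, not a biological model):

```python
# Toy model: promoter methylation silences a gene; removing the tag
# (demethylation) re-activates it. Purely illustrative -- real methylation
# dynamics involve methyltransferases, demethylases, and chromatin context.

class Gene:
    def __init__(self, name, methylated=False):
        self.name = name
        self.methylated = methylated

    def is_expressed(self):
        # Simplified rule: expression requires an unmethylated promoter.
        return not self.methylated

    def methylate(self):
        self.methylated = True

    def demethylate(self):
        self.methylated = False

gene = Gene("example_gene")   # hypothetical gene name
states = []
for _ in range(2):            # two on/off cycles
    states.append(gene.is_expressed())
    gene.methylate()
    states.append(gene.is_expressed())
    gene.demethylate()

print(states)  # [True, False, True, False]
```

The point of the papers is precisely that this toggling can be transient and cyclical rather than a one-time, permanent switch.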

Yet another unexpected level of complexity needing a “just so story” from Darwinian devotees. The obvious Design is only an illusion!

37 Replies to “New revelations on gene expression”

  1. 1
    jpark320 says:

    Epigenetics… what a wonderful thing.

    Seriously, I really want to hear how Darwinists will explain this. Environmental pressures selecting for epigenetic phenomena? Crazy talk!

  2. 2
    leo says:

    jpark320,

    Why not? There is certainly a phenotypic result to such actions. Aside from your reasoned argument of:
    “Crazy talk!”
    why don’t you think epigenetic changes can come about through evolutionary forces?
    To my mind, imprinting, histone modifications, etc. actually lend themselves quite nicely to an evolutionary perspective. Fairly basic changes at the genetic level lead to vast changes in chromatin formation, histone deposition, methylation, acetylation. A change in CpG motifs (through repeats, transposons, or simple mutations) changes methylation patterns and therefore changes gene expression. It is also an excellent defense mechanism, as bacterial and viral species have a much higher GC content, which would select for some of these phenomena. I did some undergrad work on imprinted genes; we looked for them by manual sequencing and SNP screening. Not exactly high-tech stuff, but simple changes can cause far-reaching effects.

    Now, I cannot comment on these two papers at this time, as I have yet to read them. I clearly don’t have the money that you do to afford a Nature subscription. I’ll have to look at work tomorrow.

    However, apart from any of that, seeing as you are so sure that this phenomenon did not form through evolution and you abhor the ‘just so story’, please, let me know how this was designed? And no ‘just so stories’ please.

  3. 3

    I’ve always thought that the mechanism for cell differentiation will be irreducibly complex. If you aren’t a materialist, it wouldn’t have taken a rocket scientist to figure that out.

  4. 4
    jerry says:

    leo,

    Glad to see you discussing biology. From what little I understand, the egg contains mucho enzymes that are accessed at conception to affect gestation. It is only much later that transcribed proteins start playing a major role. Is that your understanding?

    So do such enzymes present in the egg qualify as epigenesis? They were created by the genome of the mother. Also, the genome of the mother created the egg cell and any spatial configurations that might affect cell division and lead to different genes being expressed at different times and different places in development. Is this epigenesis or genetics?

    Are there any other factors besides these that affect cell division and cell differentiation that are known?

    I find this a fascinating topic: just why are certain genes expressed and others not as cells differentiate? This seems to be the origin of cell types, and what causes it will be interesting to learn.

    Methylation has been called epigenesis, but is it really? Suppose the cause of it is found to be some genetic component. While it does not change the DNA sequence, it does put a layer on top of it. Is any of this inherited?

    These are just questions, since much of this is new and interesting. I do not think this will affect the ID vs. naturalism debate very much, because I personally think the complexity is so immense as to defy any naturalistic method of happening; this is just another layer of complexity, so it will add little to a debate which already seems one-sided. I am sure you will disagree, but that is not the issue now, which is trying to understand this phenomenon.

  5. 5
    jpark320 says:

    @ Leo

    (Note: I hope my words don’t come as hostile! Sometimes in these dialogs it seems that way, but I’m really not!)

    I hope you understand my comment was not intended to go in depth on the inadequacy of environmental pressure to produce such changes. I think that is an unfair task, esp. on a blog where you can simply post to cheer on your side, right? For instance, I really like the Shaq trade to Phoenix; can’t I just come on a blog and say “GO SHAQ” without explaining myself?

    You want me to tell you how this is designed? Well, I’ll try, but I don’t think you will like the answer: God designed the genome that way. That’s why it’s called intelligent design, b/c no natural process known to man can create such complex order. The intricacy of the genome screams out that it was programmed that way, not that it became so complex through happenstance. If you want a more satisfying answer (this probably isn’t, but I’m gonna try anyway): epigenetic control of the genome was part of the initial design of the cell and did not come about after successive environmental pressure. Things like hair color, body habitus, and natural ranges of intellectual ability may have, but from a design perspective (dare I say logical), epigenetics is not something that came about via natural selection; much like the genome itself, it was in place with the design of the first organism.

    In the example you gave, you started a priori with a complex organism that can already control itself epigenetically, so of course if you have a fully programmed/designed genome with the epigenetics already in place, slight changes in it will cause different methylation patterns.

    My question was more along the lines of: without intelligence, from the primitive soup, how did nucleic acids naturally come about in such a way that they not only reproduced, but were environmentally pressured to control themselves in an intricate way of self-checking and organizing themselves via methylation? You mentioned histones, as if it is obvious how that much supercoiling could evolve in the first place.

    The question I’m asking is not “how can mutations in a designed genome react to environmental pressures through existing epigenetic phenomena” but rather “how did epigenetics arise without design?” Currently, the tools we know must have been at evolution’s disposal BEFORE the era of epigenetics are highly inadequate to create such a phenomenon.

    With Respect,

    jpark320

  6. 6
    idnet.com.au says:

    leo

    Design is not a “just so story”. We already know that intelligent engineers often design systems with feedback and control systems.

    We have no demonstrated mechanism for natural law creating complex systems.

  7. 7
    leo says:

    jerry,

    It’s good to talk about it. My main area is pathogens, so I fully admit from the start that this is outside my range, though I know a little about it and these papers seem really interesting.

    As far as my understanding goes, epigenetics refers to modifications to the DNA (like methylation) and chromatin structure that are (at least were thought to be; perhaps no longer, as these papers point out) stable through cell division, meaning differences at the chromosome level as opposed to the sequence level. That being said, I don’t think that maternal enzymes would qualify unless they are involved directly in modifying the genome, i.e. methylases.

    I can talk a little bit about imprinting, as this is something that I worked on. Back then (2001), no one was quite sure what caused imprinting, just that it occurred and it was an important force in development (though most thought methylation was the key).

    Briefly, an imprinted gene is one in which either the maternal or the paternal copy is expressed while the other copy of the exact same gene is not (this can occur spatially (only in certain tissues), temporally (only during certain stages of development), or totally). It turns out that the copy that is not expressed is methylated somewhere in the promoter, which stops it from being turned on, or there are histone modifications which do not allow the chromosome to uncoil and hence leave the gene inaccessible.

    The problem being, in the germline the imprint is lost and then must be re-formed according to the sex of the individual. Meaning, if the father passes on his maternal copy of a chromosome to a child, it has to carry a paternal imprint. Thus, the modifications are not genetic, but epigenetic: they occur above the sequence.

    Now, is the ultimate cause genetic? Likely, but the action occurs on the chromosome or DNA level as opposed to the sequence level and that is why the term epigenetic is applied. It refers to the site of action as opposed to the cause (at least in my understanding). And, of course, there are evolutionary hypotheses as to why/how this occurred.
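The imprint-resetting rule described above can be sketched as a small Python toy (illustrative only: the dictionaries, the `make_gamete` helper, and the "methylated copy is silenced" rule are simplifications invented for this example, not a model of real imprinting machinery):

```python
# Toy sketch of genomic imprinting for a paternally-imprinted gene:
# the copy inherited from the father is methylated (silenced),
# so only the maternal copy is expressed. Purely illustrative.

def make_gamete(allele, parent_sex):
    """In the germline the old imprint is erased and re-set
    according to the sex of the transmitting parent."""
    return {"allele": allele["allele"],
            "methylated": (parent_sex == "father")}

def expressed_alleles(maternal, paternal):
    # Only unmethylated copies are transcribed.
    return [a["allele"] for a in (maternal, paternal) if not a["methylated"]]

# A father passes on the chromosome he received from HIS mother...
grandmaternal = {"allele": "A1", "methylated": False}
from_father = make_gamete(grandmaternal, "father")  # ...re-marked paternally
from_mother = make_gamete({"allele": "A2", "methylated": True}, "mother")

print(expressed_alleles(from_mother, from_father))  # ['A2']
```

The key point the sketch captures is that the mark follows the sex of the transmitting parent, not the history of the chromosome: the grandmaternal copy acquires a paternal (silencing) imprint when passed through the father.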

    If these papers say that methylation can cycle on/off a promoter, that has far reaching consequences. I am very interested to read them.

  8. 8
    leo says:

    jpark320,

    The issue I have is that you have no evidence to back up your hypothesis. But fair enough, you believe what you believe.

    The only issue I have is with the question you ask.

    how did epigenetics arise without design

    I think the proper question to ask, and I believe the one that most scientists who study the issue ask, is:

    How did epigenetics arise?

  9. 9
    jpark320 says:

    Leo,

    I have plenty of evidence; the thing is, you won’t accept it. From what I have seen you post, I haven’t seen a shred of evidence either! So I guess we’re even. But you are the one coming to an ID website; maybe you have the onus to back up what you say.

    I’m sure that atheist/naturalist scientists approach the question that way, and I’m okay with that.

    The question I am posing should be:

    Can epigenetics arise without design?

    This is a different question than yours:

    How did epigenetics arise?

    And I find mine more interesting and answered in the negative.

  10. 10
    Joseph says:

    And I thought there was a little “Madonna” molecule floating around with the tag “Express Yourself…”

  11. 11
    gpuccio says:

    leo:

    The problems raised by these two papers are relevant to a big group of questions which have been discussed rather exhaustively in recent threads.

    I will try to sum up a few aspects here from the point of view of ID, because I am indeed convinced that this area of study will prove to be extremely important for the ID position, and extremely uncomfortable for the Darwinian approach.

    The problem here is not strictly epigenetics in itself. The problem is that we have a genome whose main purpose, according to the general approach, seems to be to define the proteome of a species: no longer according to a one gene -> one protein model, but in the end the result is something like this:

    Species: humans

    Genome (protein coding): about 25,000 genes (1.5% of the total)

    Proteome: ? (probably hundreds of thousands of proteins)

    Well, that would still be simple if we were unicellular beings, and if the single cell expressed all of that. In that case, we would “only” have to explain how evolution came to realize those 25,000 genes, each one of which is well beyond the search possibilities of the whole universe, and in some way coordinated their relationship, sequence, expression, and so on. The bulk of the ID arguments (CSI, IC) has the purpose of showing that that kind of complexity cannot in any way be explained by the present theory (RM + NS), and that it can only be explained by assuming design.

    But, of course, things are not “that simple”. We are not unicellular beings. So there is another “small” problem to be explained: each of the billions of cells in our organism has a different transcriptome; that is, it selects a different set of those 25,000 genes to be expressed, and in different measures, and therefore realizes a different proteome. That is the cause of what any cell is: a leukocyte, a hepatocyte, a fibroblast, an adult stem cell, a spermatocyte, and so on. At the same time, it is also the cause of the special “state” the cell is in (stage of the cell cycle, apoptosis, differentiation, and so on). Moreover, the state of each cell is coordinated by intercellular messages of all kinds (thousands of different cytokines, neural control, environmental variables, and so on) from both nearby and immensely distant structures.

    I would definitely affirm that all that adds various new levels of complexity to the “simple” complexity of 25,000 protein coding genes.

    The problem is of two kinds:

    1) How is that complexity implemented? In other words, how does it work?

    2) How did that implementation “evolve”? Does it require design?

    About point 2): if you are aware of the ID arguments about design and CSI, and of the total failure of naturalistic theories in trying to explain the designed complexity of biological beings, then, whether you agree with those arguments or not (and I would bet that you don’t agree…), you can easily understand that as we discover ever newer and deeper levels of complexity, the ID arguments become stronger (if that was ever necessary).

    Regarding point 1), I would like to reaffirm here that we don’t know how that works. While we can explain (in some measure) how a protein is synthesized starting from its gene, because we know the genetic code by which the sequence information is stored in protein-coding DNA, and the mechanisms of transcription and translation, we have no idea of where the different transcriptomes are coded, or how their differential “reading” is implemented.

    Why am I saying that the different transcriptomes have to be coded somewhere? And that the differential, sequential procedures for their ordered implementation have to be coded somewhere?

    Because there is no other possibility, unless you believe in magic. Let’s be simple:

    a) An adult human body is made of (approximately) 10^14 cells, which can be categorized into a great number of different cell types, and have different spatial locations, differentiation, states, characteristics, global information, and so on.

    b) All those cells derive from one single cell, the zygote.

    c) The genome in all those cells (with minor exceptions) is the same.

    d) The transcriptome and proteome are practically different in each single cell: very different in different cell types, less different within the same cell type, according to cell state, differentiation level, specific activity and response to environment, and so on.

    e) As I have shown in a previous thread, commenting on a similar reflection by Stuart Kauffman, the search space of all possible transcriptomes is enormous, even if calculated with the gross simplification of assuming that each gene may have only two states (on/off), which would give 2^25,000 possible states, and it becomes much bigger if we consider that the “level” of activation is crucial, and that each gene can really produce many different proteins.

    f) In other words, as the genome is the same in all cells, if each cell at any given time had to select its transcriptome and proteome unguided, even in response to “environmental” signals (whatever they may be), that would immediately produce complete anarchy, and any complex multicellular organization would be absolutely impossible (especially one involving 10^14 differentiated cells).

    g) It is therefore obvious that each living cell of a multicellular organism is, at any given moment, executing some specific, ordered and functional program of information, which allows the cell to “know” which genes to transcribe, at what level, in what order, for what time, and so on. You may believe that it happens through a series of lucky feedbacks in response to random environmental variables, but I certainly don’t, and I hope most reasonable people will find that argument quite compelling.
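The 2^25,000 figure is easy to check; a short Python sketch (standard library only, taking the comment's own round number of 25,000 genes) shows just how large even the crude on/off state space is:

```python
import math

genes = 25_000            # protein-coding genes, per the comment above
states = 2 ** genes       # on/off states only -- the gross simplification

digits = len(str(states))  # number of decimal digits in the exact count
print(digits)              # 7526, i.e. about 10^7525 possible transcriptomes

# Same result without constructing the huge integer:
print(math.floor(genes * math.log10(2)) + 1)  # 7526
```

Even under this crudest simplification the count runs to about 7,500 digits; for comparison, the number of atoms in the observable universe is usually estimated at around 10^80.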

    So, the problem is still there. Where is that information, and how does it unfold in a specific order, which has to take into account not only the necessities of each single cell, but also, and especially, the general plan and necessities of the unfolding macro-organism?

    The answer can only be: somewhere in the cell itself. It certainly is there in the zygote, and it probably is still there in descending cells.

    Is it in the genome? I think it certainly is, at least in part. Where? Most likely, in the 98.5% which is not protein coding. How? That nobody really knows (although something is being discovered, but still really little).

    Is it elsewhere? In the cytoplasm? I think it certainly is, at least in part. That’s where epigenetics comes in. We have to remember that cloning experiments show that, in some way, the oocyte’s cytoplasm has the power to recruit the full informational potential from the genome of a differentiated cell, a fact which really looks like a miracle.

    Probably, the best answer is that a complex interaction between the genome, especially its non-coding parts, and the rest of the cell allows the expression of that complex procedural information, and nobody knows how. But in that complex set (genome + epigenetic factors), which is the same as saying in the cell as a whole, in some way all that information must be present: the general plan for the organism, the set of functional transcriptomes, the order and allocation of the different functional transcriptomes to each different cell at any given moment, the coordination of it all, the error-management software, the fundamental responses to a huge set of outward stimuli, the project for the complex regulatory networks (cytokines, immune system, nervous system), and so on.

    You see, it all has to be there, somewhere, somehow. In that first cell. Because it did not happen just once, by luck. It happens every day, in different but essentially similar forms. Therefore, it is controlled. There is no doubt about it.

    It all has to be there, and nobody knows where, and nobody knows how.

    And, obviously, it had to be built in some way. All that information. All that coordination. Which we still cannot understand or even localize.

    That’s not only the sequence of one protein. That’s not only the aggregation of a molecular machine. That’s more, much more.

    And if the ID arguments about the generation of biological information are true (and they are!) for the generation of functional proteins and of molecular machines, how much more will they be true for the generation of the procedural information for the different transcriptomes, for the body plan, for the tissue plan, for the organ plan, for the coordination networks, in other words for all the higher levels of aggregation of information, of intelligent, complex, functional information, which are absolutely indispensable for multicellular life, which have to be there, and which will be gradually discovered and slowly understood?

  12. 12
    jerry says:

    gpuccio,

    Way to go!!!

  13. 13
    thoracicduck says:

    Rather than just have the readers of this blog debate what they think the paper says and how its content relates to ID, why doesn’t someone ask the authors?

    I assume that someone associated with this site has the academic credentials to contact the authors as colleagues and ask them how their research provides evidence for intelligent design and/or presents problems for current evolutionary theory.

  14. 14
    Turner Coates says:

    gpuccio,

    I’m convinced of your goodwill, and I certainly won’t accuse you of intentional deception, but you’re perpetuating a centuries-old fallacy:

    Well, that would still be simple if we were unicellular beings, and if the single cell expressed all of that. In that case, we would “only” have to explain how evolution came to realize those 25,000 genes, each one of which is well beyond the search possibilities of the whole universe, and in some way coordinated their relationship, sequence, expression, and so on.

    The fallacy lies in assuming that natural processes had to “hit” the “target” you specify, rather than did produce “ratchets” that, with billions of “clicks” over hundreds of millions of years, managed to latch in complex methods of self-preservation and self-propagation. No evolutionary theorist would tell you that the trajectory of life on earth might not diverge radically with slight changes in initial conditions. No evolutionary theorist claims that the universe searched for the human genetic network.

    Evolutionary theory says that systems that “work differently” do emerge from systems that do the work of survival and reproduction, but does not say that any specified system had to emerge. And that is why arguments from improbability, in their sundry forms, are utterly inappropriate.

    Yes, I’m aware that William Dembski took a shot at penalizing “specification dredging” in his latest (last?) revamp of complex specified information. His reasoning hinges on a revolutionary approach to statistical hypothesis testing. I said revolutionary, not correct. The correctness of what he’s done to Fisherian statistics has nothing to do with ID per se, and it would take clinical paranoia to claim that his reputation as an ID advocate would keep him from getting fair reviews at statistics journals. For that matter, he could submit under a pseudonym, e.g., Student or Finch. I completed only a graduate minor in statistics, so my belief that the approach is wrong doesn’t carry much weight. But the fact that he’s had 3-4 years to get it through peer review as pure statistics, and has failed, is a strong hint that the statistics scholars are with me.

    The argument from improbability is no less wrong now than it has been for hundreds of years. Given a population of living organisms — the starting point for evolutionary theory — variation and adaptation are certain. The theory of evolution does not specify where adaptation will lead. Present-day arguments from improbability push specification on evolutionary theory, and then make shows of enormous and minuscule numbers for those who did not recognize the initial error.

    Plug some different elements into the argument from improbability, and you can show that it’s impossible that anyone ever won the lottery.

  15. 15
    JGuy says:

    leo wrote:

    The only issue I have is with the question you ask.

    how did epigenetics arise without design

    I think the proper question to ask, and I believe the one that most scientists who study the issue ask, is:

    How did epigenetics arise?
    [my bold emphasis]

    Answer:

    It was designed.

  16. 16
    DaveScot says:

    Turner

    And that is why arguments from improbability, in their sundry forms, are utterly inappropriate.

    This is a gross misunderstanding of physics. Statistical mechanics (the probabilities of things happening or not happening) is the core of our understanding of nature above the quantum scale. Without it we’d be lost in a vast maze, never able to predict anything. By discounting probabilities you discount physics. This is a basic problem with most biologists: their understanding of nature seems to stop with chemistry. They have no appreciation for the physics which explains chemistry.

  17. 17
    Ekstasis says:

    gpuccio, fantastic presentation of the challenging mountain that was somehow scaled!! Embedded in the discussion you say “… the fundamental responses to a huge set of outward stimuli …”

    I would like to ask a very rudimentary question, and I would very much appreciate the patience required for an answer. The whole stimulus-response thing is a very simple concept. I was wondering how it supposedly evolved, whether at the cellular level or above. In order to benefit an organism for natural selection, it appears that a stimulus-response mechanism must have four components in place and operating: 1. “perceive” the incoming stimulus, whether in the form of light, pressure, sound waves, etc.; 2. communicate the perception to a processing and control center, whether the cell nucleus, organism brain, etc.; 3. formulate a decision based on the stimulus, i.e., some sort of response; and 4. execute the response, whatever that might be.

    Could someone enlighten me about how such a mechanism might have evolved, not once but a myriad of times in different forms at multiple levels (cell, organ, and organism), with all four components, in an undirected process?

  18. 18
    irreducible_complacency says:

    Turner, I agree with you (sorry, Dave) about the limitations of applying the Fisherian approach to ID. I was curious as to your views on the applicability of the Bayesian approach to ID.

    It seems to be more fruitful, at first approximation, because we can actually use the knowledge we have of natural designers in constructing hypotheses, as opposed to the sterile argument from improbability that is confounded with what we don’t know.

    The first problem that comes to mind is the problem of category error in the assumption that natural designers (caterpillar cocoons, coral reefs, beaver dams, bird nests, etc) are the same sorts of designers that would hang the sun and moon and stars.

    But as Christians we know that this is the sort of designer we are talking about, and further that only this particular designer is sufficient even to allow for the logical methodology necessary to account for science itself.

    So it seems to be a damned-if-you-do, damned-if-you-don’t situation. I am curious as to what you think is the appropriate maneuver for escaping both Scylla and Charybdis.

  19. 19
    DaveScot says:

    thoracic

    Experimental science gives us data. The data belongs to no person or theory. Individuals interpret the data as to how it fits or doesn’t fit with particular theories or hypotheses. Given that the vast majority of those working in experimental biology have an a priori commitment to neo-Darwinian evolution they seldom if ever interpret data as fitting with any other theoretical framework. Indeed, if they do try to fit it into a different framework they run a very real risk of being Expelled. Thus we get all these gratuitous and completely unnecessary explanations of how experimental results accord with the neo-Darwinian narrative lest someone accuse the experimenter of wandering off the reservation. Contacting the authors and asking them if they interpret their data as being contrary to neo-Darwinian theory is like asking them if they wish to say something that will ruin their careers. They won’t. Their careers are more important to them than the historical narrative of mud to man evolution.

  20. 20
    Ekstasis says:

    thoracic says: “Rather than just have the readers of this blog debate what they think the paper says and how its content relates to ID, why doesn’t someone ask the authors?”

    Just to add my one and a half cents,
    if we must go and ask the authors about every written document before evaluating and interpreting it, I guess we will simply have to throw out all the works of Shakespeare, Plato, etc.

  21. 21
    Borne says:

    Turner: “The argument from improbability is no less wrong now that it’s been for hundreds of years. Given a population of living organisms — the starting point for evolutionary theory — variation and adaptation are certain.”

    1) You really need to assimilate DaveScot’s remark, which is totally correct.

    2) When you introduce the “starting point” as a “population of living organisms”, as you’ve done, you’ve conveniently brushed off a great part of the Darwinian problem in one easy, but erroneous, sweep of illogic. Darwinists persistently pretend that the origin of life is not their domain. But without DNA/RNA, Darwinism is dead in the water before it gets wet. You cannot just brush off how DNA/RNA “evolved”: where does Darwinism actually start to explain anything at all if it can’t explain the evolution of the first living self-replicating organism?

    “Although a biologist, I must confess that I do not understand how life came about. Of course, it depends on the definition of life. To me, autoreplication of a macromolecule does not yet represent life. Even a viral particle is not a life organism, it only can participate in life processes when it succeeds in becoming part of a living host cell. Therefore, I consider that life only starts at the level of a functional cell. The most primitive cells may require at least several hundred different specific biological macromolecules. How such already quite complex structures may have come together, remains a mystery to me. The possibility of the existence of a Creator, of God, represents to me a satisfactory solution to this problem.”

    (Werner Arber [Professor of Microbiology at the University of Basel, Switzerland; shared the Nobel Prize in Physiology/Medicine in 1978], “The Existence of a Creator Represents a Satisfactory Solution,” in Margenau H. & Varghese R.A., eds., “Cosmos, Bios, Theos: Scientists Reflect on Science, God, and the Origins of the Universe, Life, and Homo Sapiens,” [1992], Open Court: La Salle IL, 1993, Second Printing, pp. 142-143)

    Pretending that Darwinism does not relate to how the first living cell came about (which it nevertheless claims arose through material forces alone!), but only to the progression of RM + NS after life appeared, is mere escapism. No DNA/RNA = no living organism.

    3) There is a lot of mere “brush-off” of the most obvious things in your statement. You’re basically saying, if I understand you, that probabilities have nothing to do with reality because, in bio-systems at least, given enough time, adaptive abilities in bio-organisms will produce any level of complexity and morphology you wish to imagine.

    You then, amazingly, mention lotteries, for which probabilities vary but which remain in the realm of possibility, such as, say, the “6/49”, which gives a ticket holder a chance of about 1 in 13 million of winning. But 1 in 13 million (≈ 10^7.15) is a very far cry from the estimated 1 in 10^120 required for macro-evolution. That puts it in the realm of near impossibility.
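The 6/49 odds quoted here are just the binomial coefficient C(49, 6); a quick Python check (standard library only) confirms the figure and its base-10 logarithm:

```python
import math

tickets = math.comb(49, 6)    # ways to choose 6 numbers out of 49
print(tickets)                # 13983816 -- the "1 in ~13 million" odds

print(round(math.log10(tickets), 2))  # 7.15, i.e. roughly 10^7.15
```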

    But you may say that it could have happened anyway. Suppose we agree. The problem then is that macro-evo claims this has happened many millions of times over in the last 4.5 billion years.

    Put it this way,

    “Zircon dating, which calculates a fossil’s age by measuring the relative amounts of uranium and lead within the crystals, had been whittling away at the Cambrian for some time. By 1990, for example, new dates obtained from early Cambrian sites around the world were telescoping the start of biology’s Big Bang from 600 million years ago to less than 560 million years ago. Now, with information based on the lead content of zircons from Siberia, virtually everyone agrees that the Cambrian started almost exactly 543 million years ago and, even more startling, that all but one of the phyla in the fossil record appeared within the first 5 million to 10 million years. ‘We now know how fast fast is,’ grins Bowring. ‘And what I like to ask my biologist friends is, How fast can evolution get before they start feeling uncomfortable?’”

    (Nash J.M., “When Life Exploded,” Time, December 4, 1995, p. 74)

    In any case a mere dismissal of statistical mechanics from natural combinatorial occurrences in bio-systems is just as off base as it would be in any other domain.

  22. 22
    larrynormanfan says:

    jpark320, in [9] above you write:

    The question I am posing should be:

    Can epigenetics arise without design?

    This is a different question than yours:

    How did epigenetics arise?

    And I find min[e] more interesting and answered in the negative.

    I think both questions are interesting. But leo’s question is more so because, unlike yours, it’s not a yes/no question. (In my view, how questions are almost always more interesting than yes/no questions.) More to the point, leo’s question is more amenable to scientific investigation.

  23. 23
    gpuccio says:

    Turner Coates:

    Thank you for stimulating me to a discussion about the things I love most. I appreciate that you recognize my “goodwill”, but I really don’t need that, I just need confrontation, hard confrontation, on these problems. Are you with me? Let’s begin.

    You say:

    “ The fallacy lies in assuming that natural processes had to “hit” the “target” you specify, rather than did produce “ratchets” that, with billions of “clicks” over hundreds of millions of years, managed to latch in complex methods of self-preservation and self-propagation.”

    First of all, it’s not a fallacy. It’s a very reasonable assumption, while yours is a very unreasonable one. I have to remark that the “target” I specify is the only known target with the characteristics we are investigating. Only the living beings we know are alive. They are alive in different ways, with an astonishing variety of big and small details, but they share some common fundamental principles and structures.

    So, if “you” are assuming, because it’s more comfortable for your ideology, that there are a lot of other possible functional targets which can lead to a similar expression of what we call life, well, you are welcome. This is a free world. But it’s you who are speaking of myths, not I. Where are those targets? Show me at least a theoretical model of them. Show me at least a theoretical reason why they should exist.

    And it’s not even true that all darwinists or materialists agree with your point of view. I am aware that the problem of whether, given the same conditions, life would emerge in the same way, or indeed whether it would emerge at all, is a source of great debate even among traditional scientists. What about other planets? Did life evolve the same way there, or are they peopled by strange living spirals of steel, or something else? Do you know? Can you infer?

    I am not assuming anything, least of all a fallacy. I am just reasoning with what I can observe. And what I can observe are living beings, and the unique mystery which we call life. I am reasoning with my experience of life (which, I suppose, is also yours). What are you reasoning with?

    Excuse me if I call you to empiricism, but it’s the least I can do with those who expose my fallacies… 🙂

    Anyway, if you insist on your imaginary targets, let’s suppose they exist. So, just try to bet: how big a part of the informational search space corresponds to functional, living, replicating beings? Think of that. We have time to speak of the search space; it’s a wonderful subject…

    You say:

    “Evolutionary theory says that systems that “work differently” do emerge from systems that do the work of survival and reproduction, but does not say that any specified system had to emerge. And that is why arguments from improbability, in their sundry forms, are utterly inappropriate.”

    Evolutionary theory can say what it likes. We are not obliged to believe it, at least as long as this is still a free world. Arguments from improbability are perfectly appropriate, always. Indeed, it’s arguments from probability which I would expect from a scientific theory.

    In other words, if you say that functional systems with certain characteristics (survival and reproduction) could emerge in a random search space, it’s you who have the duty to show how “probable” that is, before I can even start to believe you. That’s how science works. I understand you have your faith, but unfortunately I can’t share it. So, please, give me an argument from probability: show me how probable your nonexistent targets are in a random search through the appropriate search space, considering that I have already shown you how utterly improbable the existing target of the existing life is in that space. Then, and only then, will you be making a sensible argument.

    You say:

    “ Yes, I’m aware that William Dembski took a shot at penalizing “specification dredging” in his latest (last?) revamp of complex specified information. His reasoning hinges on a revolutionary approach to statistical hypothesis testing.”

    I can’t see anything revolutionary in Dembski’s statistical approach. His approach is perfectly fisherian, a classical hypothesis testing scenario. Dembski’s “revolutionary” approach (though really, it had already been suggested by others before) is to define specification and to create the context of CSI. But his treatment of probability is completely classical. Once we have defined what specified information is (and that’s in no way a statistical question) it’s very easy to compute its probabilities in a very classical way, and to test hypotheses related to it in a classical hypothesis testing fisherian scenario (probability of the null hypothesis). In this case, the null hypothesis is random search, and you will admit that Dembski has conceded to the “adversary” a really low alpha level (1:10^150). Where is the revolution?
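
    A minimal sketch of the Fisherian rejection logic gpuccio describes here: reject the chance (null) hypothesis when the probability of the observed event under that hypothesis falls below a preset alpha level. The function name and the coin-flip example are illustrative, not drawn from Dembski’s papers:

```python
# Fisherian-style test with Dembski's very conservative rejection bound.
# reject_chance() is an illustrative name; the bound 1e-150 is the
# "1 in 10^150" alpha level mentioned above.

def reject_chance(p_under_null: float, alpha: float = 1e-150) -> bool:
    """Reject the chance hypothesis if the event is too improbable under it."""
    return p_under_null < alpha

# Example: 500 heads in a row from a fair coin has probability 2^-500,
# which is roughly 10^-150.5, just below the bound.
print(reject_chance(2.0 ** -500))  # True: chance hypothesis rejected
print(reject_chance(2.0 ** -498))  # False: ~1.2e-150, just above the bound
```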

    You say:

    “ I said revolutionary, not correct. The correctness of what he’s done to Fisherian statistics has nothing to do with ID per se, and it would take clinical paranoia to claim that his reputation as an ID advocate would keep him from getting fair reviews at statistics journals.”

    For the “correctness of what he’s done to Fisherian statistics”, see above. Regarding paranoia, well, I am paranoid. Clinically paranoid. But I am not stupid.

    You say:

    “ But the fact that he’s had 3-4 years to get it through peer review as pure statistics, and has failed, is a strong hint that the statistics scholars are with me.”

    Easy to say, when you are on the side of the powerful. It reminds me of the old arguments of our not-so-much-missed ReligionProf. Maybe predictions are not always necessary for scientific success, but it seems that some conformism always has its fascination.

    You say:

    “ The argument from improbability is no less wrong now than it’s been for hundreds of years.”

    Or no less right. Truth does not usually change with time.

    You say:

    “ Given a population of living organisms — the starting point for evolutionary theory — variation and adaptation are certain.”

    That’s faith, if I ever saw one. Besides, I appreciate that you have carefully selected your starting point. Where did you find it?
    And anyway, I have to remark again that each single new complex functional protein is beyond Dembski’s UPB. Each single protein. Not to speak of molecular machines, network regulations, and so on. And those things can work only within the existing target of biological life, they are not supposed to work with spirals-of-steel people.
    But I forgot, you don’t accept Dembski. You have fisherian objections. And, after all, he is not so much peer reviewed (a little bit he is, I believe).

    You say:

    “ The theory of evolution does not specify where adaptation will lead.”

    There are, indeed, many other things that it does not specify.

    “ Present-day arguments from improbability push specification on evolutionary theory, and then make shows of enormous and minuscule numbers for those who did not recognize the initial error.”

    Again, “which” initial error? And please, if you could just give a little bit of serious attention to those “ enormous and minuscule numbers”, and maybe get back to your graduate minor in statistics, you could perhaps change your mind (I am not being ironic here, believe me. Just try).

    “ Plug some different elements into the argument from improbability, and you can show that it’s impossible that anyone ever won the lottery.”

    What do you mean? People do win the lottery, and that’s perfectly in accord with the same statistical laws Dembski uses for his computations. But try this: print 10^150, or more, different tickets, then destroy randomly all of them except 10^9. Then sell the remaining billion tickets, and extract the winner among all the original 10^150 tickets. And then, wait for the winner… Good luck!

    Is that so difficult to understand?
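
    gpuccio’s ticket experiment can be simulated at a drastically reduced scale (10^150 tickets is obviously out of reach on a computer); a sketch with the numbers shrunk so it runs in milliseconds:

```python
import random

# Scaled-down sketch of the thought experiment above: the winner is drawn
# among ALL printed tickets, but only a tiny fraction were ever sold.
# gpuccio's version uses 10**150 printed and 10**9 sold.
printed = 10**6
sold = 10**3

# Analytically, the chance that ANY sold ticket wins is sold / printed.
p_sold_wins = sold / printed
print(p_sold_wins)  # 0.001

random.seed(42)  # reproducible run
trials = 10_000
wins = sum(random.randrange(printed) < sold for _ in range(trials))
print(wins / trials)  # empirical rate, close to 0.001
```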

  24. 24
    Joseph says:

    Did someone say epigenetics?

    There happens to be a smallish Mexican “amphibian”- the axolotl.

    I said “amphibian” because although this thing can reproduce in this (larval) stage of (arrested) development, its true adult form (a salamander) was realized once some were put into a lake in Paris that was richer in iodine.

    It was (once) classified in two different suborders! – In one suborder as the (larval) axolotl and in another suborder as the (adult) Amblystoma. Oops.

    It appears that the extra iodine allowed for full production of the hormone required for metamorphosis.

  25. 25
    JPCollado says:

    thoracicduck:

    “Rather than just have the readers of this blog debate what they think the paper says and how its content relates to ID, why doesn’t someone ask the authors?”

    I doubt if the authors are willing to put their careers on the line for making a candid admission like that. In an era where such confessions could cost you dearly, it is best to give a nod through subversive means.

  26. 26
    sparc says:

    I never met

    transcripted proteins

    in 27 years in biology. Maybe you should have googled them beforehand.

  27. 27
    jpark320 says:

    @ larrynormanfan

    I think both questions are interesting. But leo’s question is more so because, unlike yours, it’s not a yes/no question. (In my view, how questions are almost always more interesting than yes/no questions.) More to the point, leo’s question is more amenable to scientific investigation.

    Hard to really disagree with that eh 😛

    I think what I was going after is that after investigating the “how,” my prediction would be that RM + NS would not be sufficient to bring about epigenetic phenomena. Only then would I ask my “not as interesting” Q!

  28. 28
    jerry says:

    sparc,

    A transcripted protein is one that is the result of the transcription process of the cell as opposed to one that appears from outside the organism. If you want to add translation etc, go ahead.

    Proteins can be created in a laboratory and thus would not be a transcripted protein. But that was not what I was talking about.

    Now in a zygote there are apparently many proteins that were placed there during development of the mother and not produced by the transcription process in the cells of the embryo. The zygote/embryo uses these proteins but does not produce them. Eventually after a time the cells will start to produce all their own proteins.

    Don’t ask me what these proteins do, because I do not think they are understood completely.

    I would think that after 27 years in biology you could have figured out what was meant, and if it wasn’t precise, then offered a better description/explanation. The fact that you didn’t is interesting. Now that we are aware of your background in biology, we will be expecting constructive help from you in the future. You have the experience to help and clarify.

  29. 29
    jerry says:

    Turner Coates,

    you said

    “The fallacy lies in assuming that natural processes had to “hit” the “target” you specify, rather than did produce “ratchets” that, with billions of “clicks” over hundreds of millions of years, managed to latch in complex methods of self-preservation and self-propagation. No evolutionary theorist would tell you that the trajectory of life on earth might not diverge radically with slight changes in initial conditions. No evolutionary theorist claims that the universe searched for the human genetic network.”

    If I am reading you correctly, this comment is about the origin of life. This means that there may be untold numbers of systems of chemical activity that could be called life, and the one we observe and analyze is just one of these untold billions of billions of systems. Thus, the system we observe is just one that happened to emerge; given slightly different circumstances, none may have emerged, or a completely different system altogether, from this universe of possible systems, would have emerged. Thus, we cannot use the improbability argument, since there are possibly an infinite number of systems possible and the one we live in is just the lucky draw of the cards.

    My guess is that for this to be a viable argument, one would have to design different initial conditions and see if a different system emerged. Until one demonstrates that other systems are possible, one cannot argue that they could exist. It is only wishful speculation. I do not know if you are aware of origin-of-life research, but if they could find any such system, even a minimal one unrelated to the current system of life, it would be an unbelievable finding.

    Until such time as other systems are shown to be likely, the possible existence of such a system has to be considered highly improbable, and arguments from improbability are then appropriate for the current system.

    Next I will address your comment about evolution, once our system arrived.

  30. 30
    jerry says:

    Turner Coates,

    you said

    “Evolutionary theory says that systems that “work differently” do emerge from systems that do the work of survival and reproduction, but does not say that any specified system had to emerge. And that is why arguments from improbability, in their sundry forms, are utterly inappropriate.
    The argument from improbability is no less wrong now than it’s been for hundreds of years. Given a population of living organisms — the starting point for evolutionary theory — variation and adaptation are certain. The theory of evolution does not specify where adaptation will lead. ”

    There are a lot of things wrong with this. You are assuming your conclusions in order to make your argument and this is called begging the question.

    You have no proof that natural processes can produce the variation needed to go in several different directions, let alone an infinite number of directions. There is absolutely no evidence of this. You are starting here with a gene pool, maybe a primitive one, and assuming that gene pool can be expanded in an almost infinite number of meaningful ways. There is no proof of this. It is your assumption, and it is used as proof for your argument. You have to prove it, not assume it.

    The typical answer is deep time, because it is assumed anything is possible in deep time. Look at your phrase “billions of ‘clicks’ over hundreds of millions of years.” One can imagine all sorts of possibilities happening, and here you are using probability to say that some path would emerge because variation and adaptation are certain. But how much variation and adaptation is possible? You are assuming something you have to prove. It is like the earlier argument that there are an infinite number of possible systems of life. Here you are arguing there are an infinite number of possible life forms, and the ones we see are just some small subset of this infinite number. Slightly different conditions would lead to the emergence of a myriad of other forms, and there is no reason to suspect that this other set of organisms would look anything like what we see now, or even have an intelligent species, or maybe several intelligent species. It is the infinite-number-of-targets argument again. (By the way, if you do not like my use of “infinite”, just substitute a really large number. Gazillions will do.)

    But now we are into a Darwinian argument and the likelihood that such a scenario could play out. We are in the land of probabilities again. But suppose Darwinian processes cannot produce any other system, or even the system we observe on our planet today. You are assuming they can, but again you must prove it, not assume it. How then can the argument be made that our current set of life forms is only one of an infinite set?

    One of the things constantly touted by Darwinists is that Darwinian processes are not random. By that they mean that once the gene pool is given, the range of forms possible is extremely limited. Environmental pressures will produce a large number of different forms, but the forms are limited to an extremely small subset of all possible forms. They are limited by the gene pool. So given the gene pool and the environment, the species are not completely determined, but they are severely constrained. Big-time constrained.

    The Darwinian way out of this conundrum is to hypothesize that the gene pool is highly expandable through the generation of new elements. And here we have the Achilles heel of Darwinian processes: the introduction of new variation. If no naturalistic processes can introduce new variation of any consequence, but new variation shows up in evolutionary history, what are its likely sources? If law and chance cannot explain the new variation, then where did it come from? And if law and chance cannot explain what we have now, how can one argue that there are another infinite number of possibilities out there?

    You cannot assume variation will come by naturalistic processes; you have to show it has come, and has come many times, by naturalistic processes. Else that old devil, probability, raises its ugly head. One must use probability to show how likely new variation is to occur as an argument for or against naturalistic processes. All the evidence now points to naturalistic processes not producing any meaningful new variation. Challenge that and we are into the real debate about evolution.

    Yes, Turner, natural selection does happen but it is extremely limited in what it can do and that devil probability has come back to haunt your argument.

    I always find the argument that we are just one of an infinite number of possible variations, and thus really nothing special, an interesting argument for humans to make. Essentially, to feel righteous about the conclusion that we are meaningless, one has to hypothesize that all the qualities we possess are nothing more than a fluke; but to support that hypothesis, one then turns around and uses these improbable qualities to conclude that we are indeed a fluke. This is especially ironic when one has to invoke all sorts of incredibly improbable arguments to get to that position. But whatever works.

  31. 31
    DaveScot says:

    Turner

    variation and adaptation are certain

    That’s not just theoretically wrong it’s empirically wrong. There are a great many “living fossils”. Nothing is certain.

    Moreover, I’ve blogged about two phenotypically identical organisms (only an expert on them can tell the two apart) that occupy the same ecological niche yet they don’t interbreed. Both these organisms (a worm barely visible to the naked eye) were sequenced and via molecular clock were determined to have been reproductively isolated for 240 million years. So by genotype they’re as different as mammals and birds but by phenotype they’re distinguishable only by experts.

    If variation and adaptation are certain results of Darwinian evolution then you can toss the theory in the trash can right now because it’s empirically disproven.

  32. 32
    Turner Coates says:

    gpuccio,

    The design theorist pursuing the argument from specified complexity bears the burden of bounding probabilities of events that include points in the sample space that have not been observed empirically. Evolutionary theory does not hinge on knowledge of those probabilities. And no design theorist has had any notable success at providing them. How long ago did William Dembski promise that a computation of the CSI of the bacterial flagellum was forthcoming? Why have we not yet seen it?

    CSI is not only incomputable as a practical matter, but in some cases incomputable in the sense of computability theory. A forthcoming book, Design by Evolution, will go into the details of this.

    This book showcases the state of the art in evolutionary algorithms for design. The chapters are organized by experts in the following fields: evolutionary design and “intelligent design” in biology, art, computational embryogeny, and engineering.

    Sorry about the scare quotes.

    There is in Dembski’s work no rigorous analysis of his huge extension of Fisherian hypothesis testing. He basically states it and provides plain-language exposition. The extension most definitely has NOT appeared in any journal. And Design by Evolution shows, for instance, that the CSI of a program computing a partial recursive function is incomputable. (The argument invokes the fact that any nontrivial property of a partial recursive function is incomputable. If you’re emphasizing the informational aspects of life, this spells serious trouble for design inference.)
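
    The incomputability claim here rests on Rice’s theorem (every nontrivial semantic property of a partial recursive function is undecidable). The standard reduction behind it can be sketched in a few lines; everything below is illustrative — `simulate`, `witness`, and `has_property` are hypothetical names, not real APIs:

```python
# Sketch of the Rice's-theorem reduction mentioned above. Assume P is a
# nontrivial semantic property such that the everywhere-undefined function
# lacks P while some known program `witness` has it.

def build_probe(machine_src: str, inp: str) -> str:
    # Source of a program that first simulates machine_src on inp, and only
    # then behaves like `witness`. It therefore has property P exactly when
    # machine_src halts on inp.
    return (
        "def probe(x):\n"
        f"    simulate({machine_src!r}, {inp!r})  # loops forever if no halt\n"
        "    return witness(x)\n"
    )

def halts(machine_src: str, inp: str, has_property) -> bool:
    # A total decider `has_property` for P would thereby decide the halting
    # problem, which is impossible; hence no such decider can exist.
    return has_property(build_probe(machine_src, inp))

print(build_probe("M_src", "w"))
```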

    Your claim that we should focus only on what actually is present in living things does not jibe with Dembski’s latest treatment of CSI. What might have happened is very much a part of the formulation. And a fundamental problem is that when we understand a phenomenon poorly, that generally equates to high entropy of a distribution on a space of possible observations. When we come to understand the phenomenon better, this generally equates to entropy reduction. The CSI of an entity will generally be higher when the entropy is high (we understand poorly), and lower when the entropy is low (we understand well). And that amounts to saying that, while the argument from specified complexity is not necessarily argument from ignorance, it is typically argument from ignorance when poor models are used to estimate or bound probabilities (complexity).

    It does not fall to me to tell you just how big the space of plausible life forms is. You have to be able to tell me how small it is to make your argument.

  33. 33
    Turner Coates says:

    Dave,

    variation and adaptation are certain

    That’s not just theoretically wrong it’s empirically wrong. There are a great many “living fossils”. Nothing is certain.

    Errors in transcription of information are guaranteed by the Second Law. Some errors will produce individuals that are adapted to the environment in ways the parents were not. To say that variation and adaptation are certain is not to say that everything changes. Thus to argue that something has not changed is to provide no contradiction.

    I think everyone here agrees that the vast majority of species that ever existed on earth are now extinct. I don’t see how you reach the conclusion that there are a great many living fossils. The number of such species is a small fraction of the species in existence today, let alone all those that ever existed.

  34. 34
    Turner Coates says:

    gpuccio,

    it’s very easy to compute its probabilities in a very classical way, and to test hypotheses related to it in a classical hypothesis testing fisherian scenario (probability of the null hypothesis). In this case, the null hypothesis is random search, and you will admit that Dembski has conceded to the “adversary” a really low alpha level (1:10^150). Where is the revolution?

    Sorry, but you’re clearly not referring to Specification: The Pattern That Signifies Intelligence. There is no reference to search. The constant 10^120 figures in the computation of CSI, not 10^150. And if you think 10^-120 is a confidence level (alpha level), you’re terribly lost.

    Also, you breeze right along with rhetoric as to how straightforward and classical the formulation is. Evidently you’re unaware of the highly unusual role of semiotic agents. I’ve never seen another statistical test that incorporated a construct remotely like semiotic agents. Would you care to show me one?

  35. 35
    Ekstasis says:

    One more look at Turner Coates’ statement: “Evolutionary theory says that systems that “work differently” do emerge from systems that do the work of survival and reproduction, but does not say that any specified system had to emerge. And that is why arguments from improbability, in their sundry forms, are utterly inappropriate.”

    The same narrow targets were hit multiple times, were they not? Birds, mammals (bats), and insects supposedly independently evolved the ability for flight, did they not? And yet, they utilize the same or similar aeronautical principles? Fish adapted to life in the water, and mammals (whales, dolphins) did the same, independently evolving many of the same principles (with some obvious differences as well such as gills)?

    Does this not reinforce the entire probability problem?

    And then we find that life somehow discovered all sorts of advanced engineering principles. Not all of course, but an amazingly large number of the engineering methods have been previously discovered in nature. Natural adhesives for climbing is just one simple example. Or hydraulics. The list goes on and on.

    We may not be sure of how many potential pathways exist in the search universe. However, as said by so many on this thread, the real question is how did undirected material processes hit so many very narrow targets, even multiple times?

  36. 36
    Borne says:

    Ekstasis:

    “And then we find that life somehow discovered all sorts of advanced engineering principles. Not all of course, but an amazingly large number of the engineering methods have been previously discovered in nature. Natural adhesives for climbing is just one simple example. Or hydraulics. The list goes on and on.”

    Indeed it does!
    How does an evolutionist explain organisms that create their own explosive weapons, such as the bombardier beetle? By just-so stories, of course! Stories that are in fact so simplistic and even childish that no one can take them seriously.
    How do they explain wasps that produce larvae that inject mind-control chemicals into a spider host? Same answer.

    How do they explain ants that collectively build intelligently designed traps to catch insects much larger than themselves? Same.

    Or what about insects that inject necrotoxins, anti-coagulants or anesthetic chemicals into prey through incredibly well designed injection devices?
    Sonar echo-location?
    Infrared sensors (mosquitoes)?
    Pigment changing camouflage mechanisms?
    Stronger than steel webs?
    …..
    Add as many design marvels as you please, the answer is always the same – another just-so story with some lame postulation about enough time and chance.

    Yeah, but you must remember that all these 13-14 million species are supposed to have been constructed within the same span of time!

    Sooner or later the RM + selection + time equation begins to look awfully suspicious!

    Of course the probability of any of these things occurring through RM + NS is ridiculously low, but hey now we are seeing the Darwinists denying that probabilities (in this case very obvious ones) even count!!

    Amazing the lengths they will go to hang on to their denial of reality.

    “Older folk in the know told me that selection didn’t operate to make complicated things out of complicated things, only to make complex things out of simple ones. I couldn’t understand how anything of the sort could be true, because, unlikely as it was, it would surely be less difficult to make a rabbit out of a potato than to make a rabbit out of sludge, which is what people said had happened, people with line after line of letters after their names who should have known what they were talking about, but obviously didn’t.”
    “Two points of principle are worth emphasis. The first is that the usually supposed logical inevitability of the theory of evolution by natural selection is quite incorrect. There is no inevitability, just the reverse. It is only when the present asexual model is changed to the sophisticated model of sexual reproduction accompanied by crossover that the theory can be made to work, even in the limited degree to be discussed …. This presents an insuperable problem for the notion that life arose out of an abiological organic soup through the development of a primitive replicating system. A primitive replicating system could not have copied itself with anything like the fidelity of present-day systems …. With only poor copying fidelity, a primitive system could carry little genetic information without L [the mutation rate] becoming unbearably large, and how a primitive system could then improve its fidelity and also evolve into a sexual system with crossover beggars the imagination.”

    (Hoyle, F., “Mathematics of Evolution,” [1987], Acorn Enterprises: Memphis TN, 1999, p.2 & 20)

  37. 37
    gpuccio says:

    Turner Coates:

    I cannot answer your vague hints against Dembski’s formalism, for two reasons:

    1) They are, indeed, too vague and unspecified, referring to still unpublished sources. When those sources are published, Dembski and others will have the possibility to evaluate them and, if they want, to respond.

    2) I am not a mathematician, and it is not my job to address that kind of discussion at a detailed technical level.

    I can, however, address some of the more substantial points you make.

    You say:

    “The design theorist pursuing the argument from specified complexity bears the burden of bounding probabilities of events that include points in the sample space that have not been observed empirically. Evolutionary theory does not hinge on knowledge of those probabilities.”

    I can’t agree with you. The burden of any theory is to be plausible. In other words, a theory is worth the attention of a scientific audience only if it, in some believable way, succeeds in explaining known observations. That does not necessarily make a theory generally accepted, but it is an important prerequisite for its relevance.

    The plausibility of a theory depends critically on the nature of the theory itself. If a theory, let’s say darwinian evolution, relies heavily on random causes for its explanations, it is simply natural that it has the burden to at least attempt to compute realistically the probabilities of the random events it assumes. Therefore, your assertion that “Evolutionary theory does not hinge on knowledge of those probabilities” is, in itself, an admission of a fundamental weakness of the theory.

    ID, on the contrary, explicitly addresses the problem of those probabilities, which are as important to ID in order to falsify darwinism as they should be to darwinism in order to affirm its plausibility. The fact that darwinists, or just you yourself, may disagree on how Dembski or other IDists consider the problem of those probabilities is perfectly natural, and should be the object of serious scientific debate. But the fact is that darwinists just dismiss and strenuously criticize any objection based on serious probability evaluations, and remain entrenched behind a vague defense of the kind: “those probabilities cannot be computed, and we don’t need them”. That, for a theory where pure randomness is the only source of variation and of useful information, is a very serious flaw.

    You say:

    “Your claim that we should focus only on what actually is present in living things does not jibe with Dembski’s latest treatment of CSI. What might have happened is very much a part of the formulation.”

    On that point I don’t need to rely on Dembski’s formalism, because this is a much more empirical problem. Even if we admit a very theoretical concept of “what might have happened”, the only generic field of application for it could be OOL. So, let’s pretend that OOL could have started in many other ways. That’s not a big problem, because OOL is a question for which there is no plausible model outside ID, and not only for reasons of probability. OOL is just impossible, not merely improbable, in all the kinds of models which have been offered up to now.

    Anyway, let’s pretend that OOL happened (randomly) the way it happened, but that it could have (randomly) happened in many other ways. Let’s pretend that the target of all possible OOL modalities is numerous enough that it represents a significant portion of the whole search space of all the possible modalities, including those where no life ever originates (it’s false, it’s evidently impossible, but let’s leave that behind us, for the moment).

    So, we were lucky enough, and one of the many (impossible) OOL modalities occurred. Let’s say an RNA world, followed by the completely irrational transition to DNA-RNA-protein systems. So, after having bypassed a few billion impossibilities, we have our living precursors, something like modern bacteria and archaea, I suppose. Maybe “just” ancient/modern bacteria and/or archaea. We have membranes, we have the genetic code, we have the transcription and translation kits, and a few hundred functional proteins, luckily well interrelated and cooperating. After all, proteins may be of better character than humans.

    Well, the point I want to make is that such a scenario, now, does create important bounds on “what might happen”. Indeed, we are no longer in a scenario where many really useful things can happen. If you start from a very complex and organized system, and not from scratch, the range of possible improvements, and especially of possible “simple” improvements, becomes extremely restricted. Why? Because many things are already set, many choices have already been made, and not many options are left if you have to proceed by relatively simple and cumulative steps, least of all if you have to develop completely new perspectives, such as transforming prokaryotes into eukaryotes, unicellular beings into multicellular beings, cows into whales, and so on. I mean, we are not speaking here of small improvements to existing functions. We are talking of great revolutions of thought, new forms, new functions.

    It’s the experience of anybody who has worked at programming (or at any similar creative enterprise) that, in many cases, you can’t go on just re-utilizing, with variations, the existing solutions: to be successful, you necessarily have to destroy part of what has been done, and to start again, at least in part, from scratch. Sometimes completely from scratch.

    And still, if we don’t believe in design, we have to hypothesize changes which are really beyond any conceivable limit: new proteins, new regulations, new aggregations of functions, and so on. You know well how big just the search space of a medium protein is. You probably know that most proteins act as complexes. You probably know that most complex proteins act in cascades. You probably know that those cascades are intricately interrelated, and often distance regulated, in many more ways than we have, up to now, discovered. You know that there are checkpoints, proof-readings, targeted use of induced random mutation in selected systems (see the immune system), neural controls, and so on.

    But still, you stick to a theory which “does not hinge on knowledge of those probabilities”, and criticize design theorists like Dembski who are trying to bring some realism and sense into all that, and to give it, even with great difficulty, a serious formalism.

    Just to cite a phrase often used here: biological beings scream design. Darwinists are welcome to affirm that this isn’t true, that, although it seems counter-intuitive, design is not there, that it is only an appearance. But please, bring something tangible to convince us. After all, quantum mechanics is, in many of its parts, counter-intuitive. But it has a very impressive and rigorous mathematical and empirical background, which makes those counter-intuitive models really plausible if you can plunge deeply into their mathematical beauties (even if Einstein himself had difficulties coping…). Can you say the same of darwinism?

    Finally, I would like to understand why you say that I am terribly lost when I say that Dembski uses his UPB (be it 1:10^120 or 1:10^150, that’s no big matter; it is, anyway, too generous) as an alpha level. Maybe I have understood incorrectly, after all I am not a mathematician, and I am ready to correct myself.

    What I meant is just that, once we have in some way defined a subset of specified targets, and we have to evaluate the null hypothesis that an observed specified target is the product of random causes (the alternative hypothesis being that it can be traced to non-random causes, specifically design), you have to compute the probability under the null hypothesis by considering the number of specified configurations against the total number of possible configurations. If the probability under that null hypothesis (random causes) is less than the UPB, it is rejected. That appears to me as perfectly traditional fisherian hypothesis testing, with the UPB as the selected alpha level, that is, the level of improbability beyond which we reject the null hypothesis of randomness. I would appreciate a more detailed critique of that simplified approach (and not of the formalism of specification, which, I am well aware, is different and more complex than that).

    In other words, if the 10^9 sold tickets are the only “functional” subset, and the 10^150 printed tickets are the whole configuration space, and still there is a winner, not just once but millions of times, how can we not reject the null hypothesis that nobody has intelligently tampered with the draws?
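    The arithmetic behind this lottery analogy can be sketched in a few lines. This is only a toy computation using the illustrative figures above (10^9 “sold tickets”, a 10^150 configuration space, a 1:10^150 bound), and it assumes independent draws; it is not a model of any real biological process:

    ```python
    from fractions import Fraction

    # Toy figures from the lottery analogy (illustrative, not empirical data).
    FUNCTIONAL = 10**9           # "sold tickets": the specified subset
    SPACE = 10**150              # "printed tickets": the whole configuration space
    UPB = Fraction(1, 10**150)   # the universal probability bound, used as alpha

    # Probability that one random draw lands in the functional subset.
    p_one = Fraction(FUNCTIONAL, SPACE)   # 10^-141: tiny, but still above the UPB

    def p_repeated(m):
        """Probability that m independent draws ALL land in the subset."""
        return p_one ** m

    # A single "win" does not cross the bound by itself, but repeated wins
    # drive the chance hypothesis below the alpha level almost immediately.
    print(p_one < UPB)           # single win: not yet below the bound
    print(p_repeated(2) < UPB)   # two wins: reject the chance hypothesis
    ```

    The point the computation makes is that the alpha-level framing turns on repetition: one improbable hit can stay above the bound, while independent repeated hits multiply probabilities and cross it very quickly.
    
    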
