Intelligent Design

Treasure in the Genetic Goldmine: PZ Myers Fails on “Junk DNA”

Readers may recall my encounter with developmental biology professor PZ Myers earlier this year. In that brief interaction, I came to appreciate Myers’s ability to charm his adoring fans and followers irrespective of the scientific robustness of his claims, or the accuracy with which he represents the views of those with whom he disagrees.


34 Replies to “Treasure in the Genetic Goldmine: PZ Myers Fails on ‘Junk DNA’”

  1. 1
    PaV says:

    Excellent summary of important findings. Thanks Jonathan M.

  2. 2
    paulmc says:

    Jonathan, we need to look a little more closely at Sorek and Ast (2003) here. We should be careful to note that they observed conserved regions of introns, not that the entire intron is conserved, as one could conclude from what you’ve written. They note we are talking about “103 bases in the upstream intron and 94 bases in the downstream intron”. Introns within human coding sequences average 3,749 bp (Hong et al. 2006). Even if these conserved stretches occurred in all genes (instead of a minority of genes), this is still a trifling proportion of the intron sequence.

    Hence, we need to be clear here that there are small islands of function in introns – and of course there are lots of other examples of functional intronic tidbits than the one discussed here. But let’s keep this in perspective – the overwhelming majority of intronic sequences 1) lack any known function, 2) lack the sequence conservation indicative of function, 3) occur without evidence of facilitating alternative splicing, and 4) accumulate in species with sufficiently small population sizes that they are unable to remove all slightly deleterious mutations (such as intronic expansions).
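paulmc’s proportion argument can be checked with the figures quoted above. A minimal sketch, treating the 103 and 94 conserved bases and the 3,749 bp mean intron length as the only inputs:

```python
# Figures quoted above, from Sorek & Ast (2003) and Hong et al. (2006)
conserved_upstream = 103    # conserved bases in the upstream intron
conserved_downstream = 94   # conserved bases in the downstream intron
mean_intron_length = 3749   # average human intron length (bp)

# Combined conserved fraction across the two flanking introns
conserved_fraction = (conserved_upstream + conserved_downstream) / (2 * mean_intron_length)
print(f"Conserved fraction of the flanking introns: {conserved_fraction:.1%}")  # ~2.6%
```

So even in the genes where these conserved stretches occur, they cover only a few percent of the flanking intron sequence, which is the "small islands of function" point.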

  3. 3
    gpuccio says:

    paulmc:

    I think you are really misrepresenting the situation.

    Here is the beginning of Sorek and Ast’s paper:

    “The recently published draft sequence of the mouse genome (Waterston et al. 2002) facilitates a great advance in searching for cis-regulatory sequence elements. The 75 million years that have passed since the divergence of the ancestor of the human and mouse lineages allowed a substantial divergence in neutral DNA; the constraint on functional elements has kept them conserved. Indeed, homologous human and mouse exons are, on the average, 85% identical in their sequences, but introns are more poorly conserved: 60% of the nonexonic sequences are nonalignable, and in the alignable regions the average identity level is 69% (Waterston et al. 2002). However, numerous regions that are conserved between human and mouse are also found in introns (Hardison et al. 1997). Comparison between the human chromosome 21 and the corresponding genomic sequences in mouse revealed that only one-third of the conserved blocks are exons (Dermitzakis et al. 2002). The other two-thirds of highly conserved sequences are intronic and intergenic. These conserved elements were found to be unexpressed in microarray experiments. Thus, the conclusion was that they are probably cis-regulatory sequence elements, but no function could be assigned to most of them (Dermitzakis et al. 2002). We decided to check the possible correlation between the conserved intronic sequences and alternative splicing regulation.”

    As you can see, things are a little bit different from what you stated. It is true that introns show in general less conservation than exons, but 40% of intron sequence shows some conservation, and two-thirds of highly conserved sequences are intronic and intergenic.

    I would not say that this amounts to “the overwhelming majority of intronic sequences lack the sequence conservation indicative of function”.

    As can be seen from what I quoted, a significant part of introns, even if not the majority, shows conservation. Sorek and Ast just chose to concentrate their work on the possible function in alternative splicing of the regions flanking exons.

    Moreover, as the function of non coding DNA in general, and of introns in particular, is probably mainly regulatory, it is perfectly possible that they can change more from one species to another than protein-coding segments do, for functional reasons. It is extremely likely, indeed, that the differences between species, especially at the high end of evolution (we are discussing human and mouse here), are mainly regulatory, rather than mediated by big differences in the effector protein genes. So, while good conservation almost certainly denotes function, we cannot assume that lower conservation equals lack of function, as we can see in the approach of searching for specific human functional genes by looking for HARs.

  4. 4
    Joe says:

    All introns have at least ONE function-> to separate exons to allow for alternative gene splicing.

    Differences in introns- those non-conserved areas, could either be species specific or just physical space to hold data (as RAM does in a computer).

  5. 5
    paulmc says:

    As you can see, things are a little bit different from what you stated. It is true that introns show in general less conservation than exons, but 40% of intron sequence shows some conservation, and two-thirds of highly conserved sequences are intronic and intergenic.

    I would not say that this amounts to “the overwhelming majority of intronic sequences lack the sequence conservation indicative of function”.

    No, 60% are not able to be aligned at all, while 40% are alignable, but lack the degree of sequence conservation of exons, as I stated. There are only small sections that have high degrees of sequence conservation.

    It is extremely likely, indeed, that the differences between species, especially at the high end of evolution (we are discussing human and mouse here), are mainly regulatory, rather than mediated by big differences in the effector protein genes.

    I would agree that regulatory differences are important in the ontogeny and evolution of complex multicellular organisms (let’s not say the ‘high end’). This does not indicate that a large amount of intronic DNA is regulatory in any way. Introns comprise 30% of the genome. If 40% of this were implicated in regulatory functions, that would be 12% of the total genome. This is 6 times the volume of regulatory DNA that is currently known. Such a finding would be both dramatic and unlikely.
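paulmc’s percentages here can be verified directly. Note that the ~2% figure for currently known regulatory DNA is only implied by his “6 times” claim, so it is an inference in this sketch, not a quoted number:

```python
intron_fraction = 0.30     # introns as a fraction of the genome (paulmc's figure)
alignable_fraction = 0.40  # fraction of intronic sequence that is alignable

# If every alignable intronic base were regulatory:
hypothetical_regulatory = intron_fraction * alignable_fraction
print(f"Hypothetical regulatory DNA: {hypothetical_regulatory:.0%} of the genome")  # 12%

# Implied by the "6 times the volume currently known" claim:
known_regulatory = hypothetical_regulatory / 6
print(f"Implied currently known regulatory DNA: {known_regulatory:.0%} of the genome")  # 2%
```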

    So, while good conservation almost certainly denotes function, we cannot assume that lower conservation equals lack of function, as we can see in the approach to the search of specific human functional genes by looking for HARs.

    Actually, high degrees of conservation indicate high degrees of purifying selection, not function. Case in point: conserved non-coding elements in mice have been deleted without any observable effect, despite the researchers choosing UCEs with a high probability of producing a phenotype. This indicates that, despite their conservation, they do not play an important biological role. One reason they remain conserved is that many CNEs are implicated in disease – upon mutation they produce phenotypes. Hence they are subject to purifying selection.

    This is particularly important if we now return to introns. Lynch argues that the expansion of introns is achieved through fixation of a series of slightly deleterious mutations. Each insertion introduces new sequences of DNA that can mutate, for example sometimes producing spurious promoters that will then mess with gene regulation. Hence, introns too may be subject to purifying selection (removing such occurrences) without having function.

  6. 6
    gpuccio says:

    paulmc:

    that is 12% of the total genome. This is 6 times the volume of regulatory DNA that is currently known. Such a finding would be both dramatic and unlikely.

    Let’s see…

    Case in point: conserved non-coding elements in mice have been deleted without any observable effect, despite the researchers choosing UCEs with a high probability of producing a phenotype. This indicates that, despite their conservation, they do not play an important biological role.

    I would not be so sure. Some functions are not immediately observable in the phenotype, if we don’t know what they are.

    This is particularly important if we now return to introns. Lynch argues that the expansion of introns is achieved through fixation of a series of slightly deleterious mutations. Each insertion introduces new sequences of DNA that can mutate, for example sometimes producing spurious promoters that will then mess with gene regulation. Hence, introns too may be subject to purifying selection (removing such occurrences) without having function.

    You can be happy with that kind of “explanation”. I am not.

  7. 7
    paulmc says:

    I would not be so sure. Some functions are not immediately observable in the phenotype, if we don’t know what they are.

    Fine. Of course that’s true by definition, but let’s at least acknowledge that your counter is only conjecture and that there is no evidence for your position. And let’s also acknowledge that the dramatic effects the authors expected did not happen – so any speculative phenotype is likely very mild. With these things in mind and considering that we’re talking about UCEs, we must conclude that there isn’t a nice correlation between conservation and critical function after all. Therefore my point stands: conservation correlates directly with purifying selection, not with function.

    You can be happy with that kind of “explanation”. I am not.

    Well, I am not happy with that sentence constituting an argument against my position. Read Lynch et al. (2011) and see if you change your mind when the argument has been backgrounded and fleshed out. There are several quite elegant arguments presented therein that support the hypothesis. Another Lynch paper on organellar evolution is particularly illuminating: the differences in mitochondrial genomic structure between land plants and animals reflect the differences in their mt mutation rates, while the similarities in plant and animal nuclear genomic structure reflect the similarities in their genomic mutation rates.

  8. 8
    gpuccio says:

    paulmc:

    I will take time to read Lynch’s paper as soon as I can.

    For the moment, I would say that we should admit that the problem of non coding DNA’s functions remains open: there is certainly evidence for many functions in it, evidence that has been growing, but we cannot at present say how much of it is functional, and how.

  9. 9
    paulmc says:

    Excellent, I look forward to your response when you do have the chance.

    For the moment, I would say that we should admit that the problem of non coding DNA’s functions remains open: there is certainly evidence for many functions in it, evidence that has been growing, but we cannot at present say how much of it is functional, and how.

    While we continue to identify functional fragments in non-coding DNA, this barely makes a mark on the total volume of such DNA.

    Conversely, there are positive arguments for junk. For example, there are reasons to believe that there are limits on the number of conserved, potentially functional bases in mammalian genomes, which would mean most of the genome couldn’t be functional. There is evidence of low sequence conservation across most of the genome, indicating a lack of function. Also, there is evidence that most accumulating sequences (introns and transposable elements) are the result of neutral evolution and not positive selection. This is why, on balance, I feel justified in making the junk claim.

  10. 10
    gpuccio says:

    paulmc:

    Yes, but you are not considering the following:

    a) Non conserved sequences can be functional just the same, if their function is more regulatory and more species specific.

    b) What appears as neutral evolution could well be a form of design.

    So, I give serious consideration to your arguments, because they are intelligent and pertinent, but I still think that the problem remains wide open, and that accumulating data will change the perspective very much.

  11. 11
    paulmc says:

    a) Are you arguing that the lack of apparent conservation could reflect recent shifts in gene regulatory function? Wouldn’t this likely be reflected in lineage-specific increases in rates, and high degrees of conservation between similar species that regulate their genes similarly (e.g. sister species of rats)?

    b) I am not sure how this would work – could you expand that idea a little?

  12. 12
    gpuccio says:

    paulmc:

    a) All depends on the type of regulatory function. I believe we still do not understand the specific regulatory networks that control, for instance, instinctive behaviour, or even transcriptome diversity. What I am saying is that, while we understand quite well the final effector genes (the protein coding part) and some of their proximate regulation, we still have almost no understanding of higher level procedures and control. I don’t believe that such information can exist only in the form of feedback loops, and we really don’t understand its control. That control could be more species specific than we believe, and could in some way involve non coding DNA.

    b) One of the possible ways a designer can gradually modify genomes according to a plan is by modifying non coding DNA through what would appear as neutral evolution, preparing new genes. That could be accomplished either by single point mutations or by transposon activity. The point is, functional information does have the formal properties of random sequences, except for its conveyed function. Sequence modifications that prepare a future function would appear as neutral modifications until the new function is achieved and activated. This is a possible model, but I think that the origin of new genes from non coding DNA, especially by transposon activity, is in some way documented, or at least proposed.

  13. 13
    Petrushka says:

    b) What appears as neutral evolution could well be a form of design.

    Since your version of design seems to take place continuously and at the same rate as evolution and at the same times and places as evolution, and requires no visible or testable intervention, what would be incompatible with design?

  14. 14
    gpuccio says:

    Petrushka:

    You make a common mistake. The problem is not what would be incompatible with design, but rather what would be incompatible with design detection.

    I will be more clear. The point in ID is that design is detectable. Something could be designed, but if the design is not detectable, ID theory cannot say anything useful about that situation. A design inference is not useful in those cases (one could still believe that the object is designed for non-scientific reasons, such as religious convictions, but that has nothing to do with ID).

    To be detectable, design must be complex enough. IOWs, the designed object must exhibit dFSCI.

    Now, my point here with paulmc was that what appears as neutral evolution could be a gradual approach to a final functional state.

    As you yourself always emphasize, protein sequences are formally pseudo-random. That means that the mere mathematical form of the sequence could easily be interpreted as a random sequence (except for some basic functional biochemical constraints, not really very relevant). What really makes a protein sequence functional is the final form of the protein, and its ability to implement a specific biochemical function.

    When we observe that function, we can know that dFSCI is present. If the function is not yet achieved, and if our understanding of the process that is taking place and of the laws of protein folding and protein function is limited (as it is), we can well observe those modifications that are gradually accumulating the necessary dFSCI, and consider them random neutral mutations.

    That’s exactly the argument I have tried to make for the human “de novo gene” many times discussed here. A segment of non coding DNA appears in primates, is in some way structured by transposon activity, and finally, in humans and only in humans, undergoes a final set of 4 mutations that generate an ORF and a (supposedly) functional protein of 184 AAs.

    That’s an example of how design could be detected, given appropriate experimental findings.

  15. 15
    Petrushka says:

    When we observe that function, we can know that dFSCI is present.

    When you observe the arrow, you draw the target. From this you know exactly nothing about the history of the object.

    Rather than observing ongoing processes and extrapolating, you construct a fantasy creature, the mythical designer, which has never been observed, has no testable attributes, no capabilities, no limitations, no time or place of action, no method of action.

    Just poof.

    Your particular version of the fantasy is even more extraordinary than the typical ID creation, because your designer looks almost exactly like evolution and uses evolution to design.

  16. 16
    Petrushka says:

    A segment of non coding DNA appears in primates, is in some way structured by transposon activity, and finally, in humans and only in humans, undergoes a final set of 4 mutations that generate an ORF and a (supposedly) functional protein of 184 AAs.

    What was the dFSCI of the sequence when only 180 AAs were in place?

  17. 17
    Petrushka says:

    One reason they remain conserved is because many CNEs are implicated in disease – upon mutation they produce phenotypes. Hence they are subject to purifying selection.

    It seems that once mutated they would meet the objective criterion for dFSCI.

  18. 18
    lastyearon says:

    The Intelligent Mutator.

  19. 19
    paulmc says:

    a) All depends on the type of regulatory function. I believe we still do not understand specific regulatory networks that control, for instance, instinctive behaviour, or just the transcriptome diversity.

    But how does this relate to the issue of a lack of conservation that we were discussing?

    b) One of the possible ways a designer can gradually modify genomes according to a plan is by modifying non coding DNA by what would appear as neutral evolution, and preparing new genes

    In theory perhaps. However, the gradual part of it seems hard to understand. It also seems difficult to conceive at a population level, without overturning our current understanding of population genetics.

  20. 20
    gpuccio says:

    paulmc (4.1.1.1.1):

    But how does this relate to the issue of a lack of conservation that we were discussing?

    In the sense that if specific behaviours or transcriptomes make a species what it is, we should expect “de novo” organization of the parts of the genome involved in those processes.

    It also seems difficult to conceive at a population level, without overturning our current understanding of population genetics.

    Why? Again, look at the example I give about the supposed human “de novo gene”:

    http://www.ploscompbiol.org/ar.....bi.1000734

    A segment of non coding DNA “appears” in primates, partially because of transposon activity. It is never translated in non-human primates. In humans, it becomes (possibly) an ORF through 4 final mutations, and the resulting protein of 184 AAs is (possibly) functional.

    Neutral evolution or design? Here, only ID theory and a serious computation of the target space can help decide.

  21. 21
    gpuccio says:

    lastyearon:

    It would be too easy to just answer:

    “The Not Intelligent Poster”

    so I will not do it 🙂

  22. 22
    gpuccio says:

    Petrushka (4.1.2.1.2)

    Oh, no! No again. I have answered, to others and to you. I paste here my answer to Gordon Davisson, already pasted to you on another occasion, about the dFSCI of a quasi completed sequence:

    “I discussed some general problems with this approach earlier, but let me take a closer look at this particular argument. I think it’s pretty clear that evolutionary processes can produce increases in dFSCI, at least if your measure of dFSCI is sufficiently well-behaved. Consider that there exist point mutations that render genes nonfunctional, which I assume that you’d consider a decrease in dFSCI. Point mutations are essentially reversible, meaning that if genome A can be turned into genome B by a single point mutation, B can also be turned into A by a single point mutation. Therefore, the existence of point mutations that decrease dFSCI automatically implies the existence of point mutations that increase dFSCI.
    Ah! Now we are coming to something really interesting. I must say that I have really appreciated your discussion, and this is probably the only point where you are explicitly wrong. No problem, I will try to show why.
    Please go back to my (quick) definition of dFSCI in my post number 9 here. I quote myself:
    “No. The dFSCI of an object is a measure of its functional complexity, expressed as the probability to get that information in a purely random system.
    For instance, for a protein family, like in Durston’s paper, that probability is the probability of getting a functional sequence with that function through a random search or a random walk starting from an unrelated state (which is more or less the same).”
    Well, maybe that was too quick, so I will be more detailed.
    a) We have an object that can be read as a digital sequence of values.
    b) We want to evaluate the possible presence of dFSCI in that object.
    c) First of all we have to explicitly define a function for the digital information we can read in the object. If we cannot define a function, we cannot observe dFSCI in that object. It is a negative. Maybe a false negative. There are different ways to be a false negative. The object could have a function but not be complex enough: it could still be designed, but we cannot say. Or we could not be able to understand the code or the function in the object.
    d) So, let’s say that we have defined a function explicitly. Then we measure the dFSCI for that function.
    e) To do that, we must measure the functional (target) space and the search space. Here various possibilities can be considered to approximate these measures. For protein genes, the best way is to use the Durston method for protein families.
    f) The ratio of the target space to the search space is the complexity of our dFSCI for that object and that function. What does it express? As I said, it expresses one of two things, which are more or less equivalent:
    f1) The probability of obtaining that functional sequence from scratch in a purely random system: IOWs, for a protein gene, the probability of obtaining any sequence that produces a protein with that function in a system that builds up sequences just by adding nucleotides randomly.
    f2) The probability of obtaining that functional sequence through a random walk. That is more relevant to biology, because the usual theory for genes is that they are derived from other, existing sequences through variation. But the important point, that I have explicitly stated in my previous post, is that it expresses “the probability of getting a functional sequence with that function through … a random walk starting from an unrelated state”.
    Starting from an unrelated state. That’s the important point. Because that’s exactly what happens in biology.
    Basic protein domains are unrelated states. They are completely unrelated at the sequence level (you can easily verify that by going to the SCOP site). Each basic protein domain (there are at least 2000) has less than 10% homology with any other. Indeed, the less-than-10%-homology rule yields about 6000 unrelated domains.
    Moreover, they also have different structure and folding, and different functions.
    So the question is: how does a new domain emerge? In the example I cited about the human de novo gene, it seems to come from non coding DNA. Many examples point to transposon activity. In no case is a functional, related precursor known. That’s why dFSCI is a good measure of the functional information we have to explain.
    Let’s go to your argument. You say:
    “Consider that there exist point mutations that render genes nonfunctional, which I assume that you’d consider a decrease in dFSCI.”
    No. That’s wrong. We have two different objects. In A, I can define a function and measure dFSCI. In B, I cannot define a function, and dFSCI cannot be measured. Anyway, I could measure the dFSCI implicit in a transition from B to A. That would indeed be of one aminoacid (about 4 bits).
    And so? If you have a system where you already have B, I will be glad to admit that the transition from B to A is of only 4 bits, and it is perfectly in the range of a random system. IOWs, the dFSCI of that specific transition is of only 4 bits.
    But you have to already have B in the system. B is not unrelated to A. Indeed, you obtained B from A, and that is the only way you can obtain exactly B.
    So, can you see why your reasoning is wrong? You are not using the concept of dFSCI correctly. dFSCI tells us that we cannot obtain that object in a purely random system. It is absolutely trivial that we can obtain that object in a random system starting from an almost identical object. Is that a counter argument to dFSCI and its meaning? Absolutely not.
    For instance, if you can show that a basic protein domain could have originated from an unrelated state through an intermediate that is partially related and is naturally selectable (let’s say from A to A1 to B, where A and B are unrelated, A1 is an intermediate between A and B, and A1 is naturally selectable), then we are no more interested in the total dFSCI of B. What we have to evaluate is the dFSCI of the transition from A to A1, and the dFSCI of the transition from A1 to B. The assumption is that A1 can be expanded, and its probabilistic resources multiplied. Therefore, if the two (or as many as you want) transitions have low dFSCI, and are in the range of the biological systems that are supposed to generate them, then the whole system can work.”
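The dFSCI measure described in the quoted exchange is the ratio of target space to search space, which is conventionally expressed in bits as the negative log2 of that ratio. A minimal sketch under that definition; the single-amino-acid case follows from a 20-letter alphabet (matching the "about 4 bits" above), while the 150-aa domain numbers are purely illustrative and not taken from the thread:

```python
import math

def dfsci_bits(target_space: float, search_space: float) -> float:
    """dFSCI as defined above: -log2 of the target/search space ratio."""
    return -math.log2(target_space / search_space)

# A transition that fixes one specific amino acid out of 20:
# target space = 1 sequence, search space = 20 possibilities.
print(f"{dfsci_bits(1, 20):.2f} bits")  # ~4.32, the "about 4 bits" mentioned above

# A hypothetical 150-aa domain where 1 in 10^77 sequences is functional
# (illustrative numbers only):
search = 20 ** 150
target = search / 1e77
print(f"{dfsci_bits(target, search):.0f} bits")
```

This also makes the B-to-A point in the quote concrete: the dFSCI of a one-step transition is tiny, while the dFSCI from an unrelated starting state is the full figure.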

  23. 23
    gpuccio says:

    Petrushka (4.1.2.1.1):

    When you observe the arrow, you draw the target.

    Wrong. I observe the object (the functional sequence) and the target (its function). And I define the function (give an objective, shareable form to my recognition of the function, to be used in scientific reasoning).

    Please, let’s stop this nonsense about “drawing the target”. It’s simply stupid propaganda. When I study ATP Synthase, and I realize that it is a motor machine that transforms ADP into ATP, creating a store of chemical energy, and that the energy for that comes from a mechanism that exploits a proton gradient to create mechanical movement, am I “drawing the target”? No. I am simply recognizing what is there. I cannot say that ATP Synthase catabolizes glucose, because that’s not what it does.

    So, please, stop it. I don’t draw anything. The function is there, objectively recognizable by any conscious intelligent observer. It is true, we can use different words and categories to define the function in our language: we can be more or less generic or detailed, for instance, as has been made clear in my answer to DrREC. But the function is there. It’s not the observer that imagines it, or arbitrarily defines it.

    From this you know exactly nothing about the history of the object.

    True. The history must be studied and investigated separately. And so?

    Your particular version of the fantasy is even more extraordinary than the typical ID creation, because your designer looks almost exactly like evolution and uses evolution to design.

    Evolution is a word without meaning, or worse with too many meanings. Please, be precise when you write.

    My designer does not “look almost exactly like evolution”. He is a conscious intelligent being. He designs biological information, that information that allows evolution.

    The result of his design looks exactly like what we observe in the biological world: the biological beings, the genomes, the proteomes. The design model explains those things, while the neo darwinian model can’t do that.

    My designer does exactly the things that are observed in natural history. He inputs very intelligent, purposeful, and extremely complex information, often rather suddenly. He determines OOL, the transition from prokaryotes to eukaryotes, the Ediacara and Cambrian explosions, and each single new protein domain that is necessary when it is necessary.

  24. 24
    Petrushka says:

    The reason it’s drawing the bull’s eye after the arrow has landed is that there was no specification prior to the event.

    What you are doing is called retrospective astonishment. You are defining the target after it has been achieved.

    Try that on your statistics teacher.

    Now if you can predict the design of a complex biological machine before it exists (isn’t that what designing is?) I would be impressed. That would demonstrate that design is possible.

  25. 25
    material.infantacy says:

    “The reason it’s drawing the bull’s eye after the arrow has landed is that there was no specification prior to the event.”

    The sequence which produces the function is the specification. The function exists independent of any observation of it, as does its sequence. This is unequivocal.

    “You are defining the target after it has been achieved.”

    The target is the function, and exists entirely independent of any observer’s defining a specification for it. The specification is the sequence. The sequence exists in an extremely narrow range of sample space. It’s called specified complexity, and it’s the very thing requiring an explanation.

    Sequence specificity is a highly contingent and extremely specific state of affairs. Denying it exists, and claiming that the observer is somehow cheating by assigning a specification, is patent absurdity.

    Functions are not arbitrary, nor are the sequences which specify them. Since an extreme minority of sequences can possibly be functional, their function, along with their specification (the sequence) is an aspect of objective reality, and not subject to denials nor equivocations.

  26. 26
    PeterJ says:

    I have been following this thread, and many like it, for quite some time now, and I can’t help but think that Petrushka will simply argue against the evidence no matter what.

    To be honest I used to look forward to Petrushka’s posts, and whatever would follow them, greatly valuing his contributions, but this morning I can’t help but see just how dogmatized his mind really is.

    ‘Denying it exists, and claiming that the observer is somehow cheating by assigning a specification, is patent absurdity.’

    Take note Petrushka. You are verging on sounding very silly.

  27. 27
    kairosfocus says:

    MI:

    The sequence which produces the function is the specification. The function exists independent of any observation of it, as does its sequence. This is unequivocal . . . .

    The target is the function, and exists entirely independent of any observer’s defining a specification for it. The specification is the sequence. The sequence exists in an extremely narrow range of sample space. It’s called specified complexity, and it’s the very thing requiring an explanation.

    Sequence specificity is a highly contingent and extremely specific state of affairs. Denying it exists, and claiming that the observer is somehow cheating by assigning a specification, is patent absurdity.

    Functions are not arbitrary, nor are the sequences which specify them. Since an extreme minority of sequences can possibly be functional, their function, along with their specification (the sequence) is an aspect of objective reality, and not subject to denials nor equivocations.

    Prezactly!

    GEM of TKI

  28. 28
    gpuccio says:

    Petrushka:

    What you are doing is called retrospective astonishment. You are defining the target after it has been achieved. Try that on your statistics teacher.

    Well, I suppose that the reason why ordered states are never spontaneously achieved by the molecules of a gas is that some observer is drawing a target somewhere? Try that on your physics teacher, and please review the second law.

    Small subsets are simply small, and finding them is simply unlikely. Try that on your statistics teacher.

    So, “there was no specification prior to the event”? So, the laws by which some form of energy can be transformed into another form did not exist before the ATP Synthase emerged? You seem not to understand that functions are smart ways to achieve something by existing laws, sometimes almost in spite of them. That is the specification. Achieving something.

    ATP synthase does achieve something, whether you want to admit it or not. Random noise achieves nothing. Try that on your common sense teacher.

  29. 29
    kairosfocus says:

    GP:

    I would add, that a relatively small blind sample of a very large space is per sampling theory, not going to plausibly pick up relatively unusual, and isolated subsets.

    The case I have repeatedly given is that a sample which maxes out the quantum-state resources of our solar system will be as one straw to a cubical haystack 3 1/2 light days across, when set against the possibilities of a string of just 500 bits.

    Even if our solar system out to Pluto was in the stack, it would be utterly implausible for a one-straw sized blind sample to pick up anything but straw.
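    As a back-of-envelope check on the scale being described, here is a minimal sketch. The figures are my own round numbers, not the commenter's: a 500-bit string has 2**500 possible configurations, and 10**102 is taken here purely as an assumed, generous cap on the number of blind trials available.

```python
# Illustrative arithmetic for the sampling claim above. Both numbers are
# assumptions for the sketch: 2**500 is the configuration space of a
# 500-bit string, and 10**102 is an assumed generous cap on blind trials.
space = 2 ** 500            # ~3.27e150 possible configurations
sample = 10 ** 102          # assumed upper bound on blind samples
fraction = sample / space   # share of the space a maximal blind search covers
print(f"{fraction:.2e}")    # a vanishingly small fraction of the space
```

Even granting an enormous number of trials, the fraction of the space sampled remains negligibly small, which is the point the straw-and-haystack image is making.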

    And anyone who has done a bit of inferential statistics will know why.

    If you don’t, try the following thought exercise:

    1: Plot out a Normal distribution curve on a sheet of bristol board, and put it on the ground, marking the 0.1% tails.

    2: Get a step ladder and a few darts.

    3: Go up the ladder and blindly drop darts until 30 scatter across the curve.

    4: We can take it that the odds of being in any given region are proportional to the area.

    5: It should not be hard to see that it is highly unlikely that you will have a dart hole in the 0.1% of area tails, or even the 1% tails.

    6: In short, you would get a picture of the distribution with the sample, but the narrow, special zones will be swamped out by the bulk.

    The point of the design inference on functionally specific and complex states is that, by definition, complex function (which will be special and tightly constrained in many ways; think about getting just a nut and bolt to come together and stay tight by chance) will be rare in a space of arbitrarily possible configs. Complexity here means that we have so many possible configs that the solar system or the observable cosmos will not be enough to sample more than a tiny fraction.

    The balance of statistical weights of clusters of possible states, will be such that the same challenge will happen, but to a far more extreme degree.

    None of this is particularly hard to understand or is seriously objectionable, and it has been presented in detail, over and over again.

    So, it is plain that this is objection based on selective hyperskepticism.

    As for the notion that, say the functionality of ATP synthase as a nanotech electric motor is an arbitrary painting of the target after the arrow lands, let us just say that Tesla and others had long since invented the electric motor and the relevant constraints, issues and theories were long since understood.

    The enzyme was RECOGNISED to be a case of a motor [looks like a Nobel Prize was won over it], a two-port where ion concentration gradients and related forces are used on the input, energy supply port, to drive the output, mechanical energy output port.

    Refusal to acknowledge patent and well grounded facts of observation, which is what we are now seeing, is not a good sign.

    Sad to have to say.

    GEM of TKI

  30. 30
    kairosfocus says:

    F/N: A good way to see this is to ask how much you would be willing to put up to win, say, $1,000, on the chance that 30 strikes would include at least one hit in a 1%, a 0.1%, a 0.01%, etc., target zone. Then, think about the fraction of a cubical hay bale 3 1/2 light days across that would be taken up by our solar system out to Pluto. Then, think about picking a one-straw-sized sample of that haystack. Do you think that a bet towards getting the $1,000 would have any reasonable chance of hitting pay-dirt, or would it to all intents and purposes be money tossed away?

  31. 31
    kairosfocus says:

    F/N 2: First, a happy Christmas to all.

    I just want to suggest to P and others that they may find it helpful to see here on. I hope this will help them clarify the issue, and see why the observation of FSCO/I in action is not merely subjective. (NB: ALL conscious human experiences, including acts or states of knowing, observing, analysing, evaluating, are necessarily subjective. For, we are subjects. But, many such things are also objective and warranted. FSCO/I is one of these.)

  32. 32
    gpuccio says:

    KF:

    Happy Christmas to you, and to all the friends here, on both sides, and from my heart. And a special, sincere wish to Petrushka, who has been a loyal companion and adversary for so long.

    I would like to quote here a passage from The Brothers Karamazov that I have always liked:

    “And what have Russian boys been doing up till now, some of them, I mean? In this stinking tavern, for instance, here, they meet and sit down in a corner. They’ve never met in their lives before and, when they go out of the tavern, they won’t meet again for forty years. And what do they talk about in that momentary halt in the tavern? Of the eternal questions, of the existence of God and immortality. And those who do not believe in God talk of socialism or anarchism, of the transformation of all humanity on a new pattern, so that it all comes to the same, they’re the same questions turned inside out. And masses, masses of the most original Russian boys do nothing but talk of the eternal questions! Isn’t it so?”

    I like to think that we here, on both sides, are those “Russian boys”. And I am happy to be part of the gang!

  33. 33
    material.infantacy says:

    Merry Christmas, KF, GP, and all.

  34. 34
    kairosfocus says:

    Many happy returns, MI and all. KF

Leave a Reply