Larry Moran has decided to educate me about junk DNA. I appreciate the level of detail he has provided. I am not an expert in this field. I do however have a brain and, as a physicist, a vastly superior brain (I joke, sort of). I am not an IDiot, nor am I a larey moron (nor is he), and I like to see clear careful thought. I do not see this in a lot of the anti-ID polemics on the internet, nor in general presentations of evolution in the media. Thus, Larry’s latest posts are much more edifying to read. However, I still don’t agree with all the reasoning, and I don’t think he has told both sides of the argument. For each post, I will first summarise the latest lesson (the facts), and then offer my critique of the reasoning that leads Larry to junk DNA. (By the way, * indicates “see response to post 4 below”)
POST 1: Pervasive Transcription
Larry gives a useful historical narrative of the development of knowledge. In the 70s, we discovered pervasive transcription, but did not discover the function, if any, of much of it. In the 80s, we discovered pseudogenes, especially transposon pseudogenes. In the 90s, we concluded that most of these transcripts were rapidly degraded introns, but in the 2000s we realised that there were hundreds of other sequences being transcribed.
Larry thinks this (especially the situation from the 80s on) amounts to solid evidence for junk DNA, but I honestly don’t see how it does. I think it’s a case of confirmation bias (something he accused us of previously). You see, those who really understand the theory, as Larry does, expect to see lots of junk, for the simple reason that chance mutations are the only really creative force in this theory, and you need to accumulate a lot of mutations to obtain significant innovations; otherwise natural selection will keep you pinned on a locally optimal genotype (natural selection only winnows the results, in all variants of Evolutionism* including Larry’s, but in Larry’s variant chance is given a somewhat longer leash).

So they didn’t find the function. They found some stuff you could interpret as damaged genes. Yay, that fits! (That’s the confirmation.) So they told everyone it’s junk. But they haven’t proven that. Indeed, some of those pseudogenes have already turned out to be functional, with a good functional reason why they should appear similar to true genes, so now the ‘damaged gene’ hypothesis is also questionable, especially if one allows the possibility it was engineered, which every good scientist should. Oops! But since it doesn’t fit the junk story, we pooh-pooh it and don’t tell people (that’s the bias).

Now, I appreciate that it is very difficult in principle to prove non-function, and so lesser evidences could be accepted. Indeed, it is almost always difficult to prove a negative to those who want to believe: at the Origin of Life we have an absurdly strong appearance of a negative, and more than 50 years of experimental evidence and discoveries that compound the problem from an ateleological point of view, yet it does not seem to dissuade those who are committed to limiting science to natural (unintelligent) causes.
The difference in this case is that there are already counterexamples (here’s another one – Beta Globin Pseudogene) which show that at least some pseudogenes have function; clearly pseudogenes cannot simply be assumed to be junk.
Could it be that many pseudogenes are functional under special conditions? Yes (same for many other classes of ‘junk’). Could it be that some are indeed genuinely broken genes, or copies of genes? Yes, even from an ID perspective that is a possibility. But in themselves, pseudogenes are simply not ‘solid evidence’ for junk DNA.
POST 2: Rare Transcripts [my immediate reactions are in square brackets]
Larry thinks rarely transcribed mRNA (RNA of the kind that gets translated into proteins) is unlikely to make much difference in terms of function. [that makes certain assumptions, for example, about how sensitive target systems could be to a single transcript]
Most RNA (measured by mass) in the cell comes from exons and is for building proteins. [ok, good]
Despite this, a slight majority of RNA (measured by sequences represented) comes from introns, the stuff between exons. But this large number of sequences accounts for only about 5.8% by mass. [ok, interesting – but I note from your previous post that rapid degradation of introns would explain this]
Accidental transcription of RNA is sufficient as a hypothesis to explain away these relatively low levels of abundance of most sequences. [hmm, yes, but functional, intentional transcription would also be sufficient, so where is the evidence that it is junk? all transcription has an element of random timing. that is not the same as accidental.]
This argument reveals a materialist way of looking at things: measuring RNA’s importance by mass. However, there is no reason why all the DNA should be transcribed at anything like the same level.

Take my own physics simulation computer code. I estimate 90% of it is used for a total of less than 1% of the run time, while perhaps 5% of it uses up 90% of the processing resources. That does not mean the 90% is junk. It is necessary framework, and the computationally expensive part is not the only important part of the code. Translate this into DNA. Why should the whole genome be highly expressed (or transcribed) all the time? Some tasks that the cell does are limited by the need to accumulate a large quantity of a particular type of physical material (protein, or something built by proteins). Other tasks are not so expensive in material, but still critical. Personally, I would expect a large fraction of the genome to be relevant only to embryos. In computer speak this would be the ‘initialisation process’ of an organism. Once initialised, the organism is not likely to use that part of the program again. The genome of a higher organism has to do a huge number of advanced things that no human code has yet attempted, but why should the basic principles be different?

And why should we assume that the business of cells and their genomes consists only in the accumulation of resources and the building of basic material structures? Or take a modern economy. Do we measure it only by the ore dug and the bridges built, the physical activity? Or is the information-processing activity also important?

That brings us to a second point, about introns and other pieces of RNA that are degraded rapidly. Are introns really non-functional? Is it possible that they affect alternative splicing, or regulation in some other way? Yes, of course it’s possible. Think of transient flashes of light down glass cables that degrade almost instantly. Materially they are almost not there.
So it may be with much of the genome – we have pieces of information being carried and processed on a very short-lived medium. Perhaps protein-coding mRNA has a slightly longer lifetime because it has to leave the nucleus. Let us please not measure function by relative mass of RNA or protein.
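The computer-code analogy above can be made concrete with a minimal, purely illustrative Python sketch (the routine names and the call counter are my own inventions, not anyone’s real simulation code): a once-only ‘initialisation’ routine makes up most of the source, while a tiny hot loop accounts for almost all of the executions, so measuring importance by ‘mass’ of activity would wrongly suggest the initialisation code is junk.

```python
from collections import Counter

calls = Counter()  # how often each routine runs: the "mass" of activity

def initialise(state):
    # The bulk of the "code": runs exactly once, like an embryonic
    # "initialisation process", but everything afterwards depends on it.
    calls["initialise"] += 1
    state["offset"] = 10
    state["total"] = 0

def hot_step(state, i):
    # A tiny routine that dominates the execution count.
    calls["hot_step"] += 1
    state["total"] += i + state["offset"]

state = {}
initialise(state)
for i in range(1000):
    hot_step(state, i)

# hot_step accounts for ~99.9% of all calls, yet initialise is
# indispensable: without it, hot_step would fail on a missing key.
```

Counted by calls, `initialise` is a thousandth of the activity; deleting it still breaks the program.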
“The onus should be on those who claim function to support their case. It’s not up to the opponents of pervasive functional transcription to prove lack of function. Function is not the default option as long as you understand that transcription is not perfect.”
One must tread carefully here. I understand the problem: it is very difficult to prove a negative like ‘lack of function’, but in principle it is easy to disprove it by finding the function (assuming your work-force does not understand your Evolutionist* theory too well). On the other hand, if one assumes ‘function’ but does not know the function, it is still difficult to prove that there is no function. Therefore it is safer to assume the negative because at least that way you are assuming the thing that can be most easily corrected, in principle. Fine.
However, that is an epistemological strategy – a human strategy for coping with incomplete knowledge. It should not be confused with ontology – the things that are. In fact, that confusion is a deep problem for the whole of Western society over the last few centuries, but that goes way beyond the scope of our discussion here. Strictly speaking, the above quote is an appeal to presume DNA is junk. Earlier we had the rather stronger claim that there is strong evidence for junk. Note the difference, everyone.
POST 3: Promoters
Larry explains in detail how promoters work, and why there is variation in binding strengths, and thus a continuum of levels of gene promotion going on. It was very interesting, and I am not objecting to any issues of fact here.
But it all misses the point. I already acknowledged that RNA is transcribed at different levels, and that RNAP binds at non-promoter sites. Although I just learnt a whole lot more, everything Larry describes is consistent with my earlier suggestion that promoters mean ‘bind here more often’ rather than ‘bind here only’. All I am saying is that an intelligent designer could take into account the subtleties and limitations of the underlying operating system, and use the fact that everything is transcribed occasionally, and make almost every DNA sequence functional.
*POST 4: Darwinism
So, Larry does not want to be called a Darwinian. I hear him when he complains about conflating scientific issues with things like Social Darwinism. But that is not what I think of when I use the term, and I think his real motivation is that he wants us to call him an ‘evolutionary biologist’, emphasising his mainstream-ness. The first problem with that is that I have good friends who are evolutionary biologists, and also believe in ID, so that phrase really doesn’t capture the difference between Larry and us. The second problem is that whatever other processes evolution may include, Darwin’s mechanism is the only candidate to replace design. It is the only thing that winnows the chaff (junk) and skews probabilities in the direction of anything that would look like design (with adaptation/function standing in as a proxy for design). If Larry believes in evolution without new adaptations, then really he ought to join our club. Nearly everyone believes in neutral evolution, if I may interpret that as lots of essentially random, non-adaptive, minor changes. Neutral evolution really isn’t problematic to anyone. It’s the Darwinism that is problematic. If Larry is really innocent of the sin of Darwinism, then I am very sorry for calling him a Darwinian. For the purposes of this post I have called him an Evolutionist (capital E and –ist to denote the ideology that goes way beyond the observable facts of evolution) in the hope that he will not have a heart attack.
POST 5: Evidence for Junk
At last we get to evidence for junk. Larry thinks that the C-value paradox (the fact that the sizes of genomes often seem to bear no relation to the complexity of the organisms) indicates that most of the stuff is junk. In general, in fact, Larry thinks that copy number variation (having multiple copies of genetic elements) is evidence of junk. Then he refers us to insertions and deletions (including large deletions in mice) that appear to have no effect.
Firstly, some stuff probably genuinely is junk. Evolution creates junk, and there has been at least some evolution, so there must be some junk. For the mouse examples, I accept that as evidence for junk. On the other hand, it is not proof of junk: the fact that those researchers did not find an effect from deleting huge chunks of mouse genome does not necessarily mean there is no effect from those sequences. It could be that those regions are activated or relevant only under special conditions in the wild.
Secondly, having multiple copies of a code does not mean the code is junk, or that any of the copies are junk. Many cases of the C-value paradox can be partly explained by simple polyploidy. But apart from that, again from computer science, there are reasons why one might prefer either a smaller or a larger code that does much the same thing. Generally one wants small codes that can be stored and loaded efficiently. However, there are conditions under which one might want a larger code: for instance, if you ‘unroll’ a loop (that is, copy the loop body out several times so you don’t have to return to the start of the cycle quite as often). A biological equivalent might be a copied gene that speeds up the transcription, and thence expression, of a particular protein. It might be less efficient for cell division, but there are other advantages. Indeed, in some cases the purpose could be to slow cell division. Also, some computer codes edit themselves to adapt to particular needs. Could it be that different tissue types have different copy numbers of certain genes? Might there be a design reason for having ‘jumping genes’?

Getting way off on a tangent: is it possible that there are genes that are designed to jump between cells? Is it possible that pathological viruses evolved from something designed and endogenous, for example something that locally changed the properties of some tissue (think of a wart: that’s a virus, but self-limiting and benign)? Have virus-like elements been associated with placenta formation? Could it be that knocking out certain ERVs would prevent embryos from developing? Of course that’s crazy talk, to an Evolutionist, because it’s all fundamentally and originally junk.
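The loop-unrolling idea can be sketched in a few lines of Python (illustrative only; real unrolling is normally done by a compiler on machine code): the unrolled version contains roughly four times as much loop-body code yet computes exactly the same result, a case where the larger ‘code’ is not junk but a deliberate trade of size for speed.

```python
def sum_rolled(xs):
    # Ordinary loop: smallest code, one addition per pass.
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    # Unrolled by a factor of 4: the body is copied out so we return
    # to the top of the loop a quarter as often. Bigger code, same job.
    total, i, n = 0, 0, len(xs)
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:  # mop up any leftover elements
        total += xs[i]
        i += 1
    return total

data = list(range(103))
assert sum_rolled(data) == sum_unrolled(data) == sum(data)
```

Both functions give the same answer on any input; only the size of the code differs.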
“They need to tell us why the vast majority of defective transposons evolved a function.”
This does appear to be a perfect example of how Evolutionists can be utterly circular in their thinking. Something is functional, but because to them it looks like junk, they still call it junk. What they should be doing is questioning whether transposons in general are junk!
I will finish off by linking to some of my extra-curricular education. This is a recent example of junk DNA turning out to be anything but:
Brain Development Is Guided by Junk DNA That Isn’t Really Junk
More science, everybody, please. And more free thought.