Uncommon Descent Serving The Intelligent Design Community

Atheist biologist makes an excellent case for Intelligent Design


Matthew Cobb is a professor of zoology at the University of Manchester and a regular contributor over at Why Evolution Is True. Recently, while critiquing a cartoon from xkcd (shown above), he argued that our DNA is the mindless product of a series of historical accidents. But then he let the cat out of the bag, at the end of his post:

On a final note, in some cases, within this amazing noise, there are also astonishing examples of complexity which do indeed appear to be the result of optimisation – and they would boggle the mind of anyone, not just a cocky computer scientist in a hat. In Drosophila there is a gene called Dscam, which is involved in neuronal development and has four clusters of exons (bits of the gene that are expressed – hence exon – in contrast to the apparently inert introns).

Each of these exons can be read by the cell in twelve, forty-eight, thirty-three or two alternative ways. As a result, the single stretch of DNA that we call Dscam can encode 38,016 different proteins. (For the moment, this is the record number of alternative proteins produced by a single gene. I suspect there are many even more extreme examples.)
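The arithmetic behind that 38,016 figure is worth making explicit: one variant is chosen from each of the four exon clusters, so the number of distinct proteins is simply the product of the cluster sizes quoted above. A quick check in Python:

```python
# Dscam alternative splicing: one variable exon is chosen from each of
# four clusters, so the number of distinct mRNAs is the product of the
# cluster sizes given in the quoted passage.
cluster_sizes = [12, 48, 33, 2]

total = 1
for n in cluster_sizes:
    total *= n

print(total)  # 38016
```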

Cobb triumphantly concluded:

In other words, DNA is even more complicated than [xkcd cartoonist] Randall [Munroe] imagines – it is historical, messy, undesigned. And when occasionally it is optimised, the degree of complexity is mind-boggling. Biology is not quite impossible, it is just incredibly difficult!

But the damage was done. Even as he chided cartoonist Randall Munroe for claiming that DNA is subject to “the most aggressive optimisation process in the universe” and insisted that our genes are “a horrible, historical mess” consisting mostly of junk DNA, and that they are really the product of mindless tinkering rather than design, Cobb was forced to concede that amidst all this chaos, there were indeed some “astonishing examples of complexity which do indeed appear to be the result of optimisation” which “would boggle the mind of anyone, not just a cocky computer scientist in a hat.”

Intelligent Design supporters are often accused of appealing to something called an API: an Argument from Personal Incredulity. The acronym comes from Professor Richard Dawkins. The reasoning is supposed to go like this: I cannot imagine how complex structure X could have come about as a result of blind natural processes; therefore an intelligent being must have created it. This, Dawkins rightly points out, is not a rational argument. Certainly it has no place in a science classroom.

But my own conversion to Intelligent Design was not based on an API, but on something which I have decided to call the STOMPS Principle. STOMPS is an acronym for: Smarter Than Our Most Promising Scientists. The reasoning goes like this: if I observe a complex system which is capable of performing a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, then it would be rational for me to assume that the system in question was also designed. That is not to say that nothing will shake my conviction, but if you claim that an unguided natural process could have done the job, then I am going to demand that you explain how the process in question could have accomplished this stupendous feat. I shall demand a specification of a mechanism, and a demonstration that this mechanism is at least capable of generating the complex system we are talking about, within the time available, without appealing to mathematical miracles (like winning the Powerball Jackpot ten times in a row). To demand any less would be the height of irrationality.
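To put a number on the "mathematical miracle" bar, here is a back-of-envelope sketch. It assumes the modern Powerball single-ticket jackpot odds of roughly 1 in 292,201,338 (that exact figure is my assumption; the post itself does not supply one):

```python
import math

# Assumed odds: roughly 1 in 292,201,338 per jackpot (modern Powerball;
# this figure is an assumption, not taken from the post).
p_single = 1 / 292_201_338

# Probability of winning ten consecutive jackpots.
p_ten = p_single ** 10

print(f"p_ten ~ 10^{math.log10(p_ten):.1f}")  # roughly 10^-84.7
```

For scale, that probability is far below one over the commonly cited estimate of about 10^80 atoms in the observable universe.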

Professor Matthew Cobb concedes that our junky DNA contains genes which code for proteins. He concedes that within the “noise” of our junky DNA, there are also “astonishing examples of complexity which do indeed appear to be the result of optimisation,” and that the complexity of this DNA code would “boggle the mind” of even “a cocky computer scientist in a hat.” This sounds like a perfect example of a case where the STOMPS Principle could be legitimately invoked. If Nature contains systems which accomplish a feat in a manner which is far better than what our best scientists can do, then it’s reasonable to infer that these systems were intelligently designed.

At this point, some evolutionists may respond by invoking what philosopher Daniel Dennett has termed Leslie Orgel’s second law: “Evolution is cleverer than you are.” The relevant question here is: cleverer at what? We have seen that all living things employ a genetic code: a set of rules by which information encoded in genetic material (DNA or RNA sequences) is translated into proteins (amino acid sequences) by living cells. Despite diligent inquiry on our part, we have yet to uncover a single instance in Nature of unguided processes generating a code of any sort – let alone one which would “boggle the mind” of even “a cocky computer scientist in a hat.” Whatever else evolution might be clever at, code-making is hardly its forte.
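To be concrete about what “a code” means here: the genetic code is, functionally, a lookup table from three-base codons to amino acids. A toy sketch, using only a handful of the 64 standard codon assignments:

```python
# Toy illustration of the genetic code as a lookup table.
# Only a small subset of the standard 64-entry RNA codon table is shown.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UGA": "Stop",
}

def translate(rna):
    """Read an RNA string three bases at a time into amino acids,
    stopping at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUGA"))  # ['Met', 'Phe', 'Gly']
```

The full table has 64 entries; the point is only that translation is a rule-governed mapping from one alphabet to another.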

But, we shall be told, evolution refines the code in our DNA all the time – through natural selection winnowing random mutations, as well as purely chance-driven processes such as genetic drift. Who are we to say that it could not have generated this code by an incremental series of refinements, over billions of years?

I was a computer programmer for ten years, so I think I know what it means to refine computer code. Evolution doesn’t do anything like that: what it does is corrupt the code in organisms’ cells, in ways that occasionally turn out to improve those organisms’ prospects for survival. That might be good for the organisms, but from a coding perspective, it isn’t “good” at all: it’s just the corruption of a code. And corruption is the opposite of generation.

So when I hear someone tell me that “nature, heartless, witless nature” could have not only generated a code, but generated one which even our brightest scientists are in awe of, my response is: “You’re pulling my leg.”

Finally, I’d like to address Professor Matthew Cobb’s argument that “[o]ur genes are not perfectly adapted and beautifully designed,” because our DNA is littered with junk: they are instead the product of “evolution and natural selection.” My response to that argument is: so what? Even if Professor Cobb is right about junk DNA – and I’m inclined to think he is (for reasons I’ll discuss in another post) – that’s beside the point. At most, it shows that DNA which doesn’t code for anything wasn’t designed. But my question is: what about the DNA which does code for proteins, and which does so in a manner that boggles the ingenuity of our brightest minds? Professor Cobb, it seems, is missing the wood for the trees here.

Junk DNA might be described as degenerate code – but there has to be a code in the first place, before it can degenerate. The existence of junk DNA cannot be used as an argument against design: all it establishes is that the designer of our DNA – whether out of benign neglect, laziness, illness, or ignorance that something has gone amiss – doesn’t always fix the code he created, when it becomes corrupted. Accordingly, junk DNA cannot be used as a legitimate argument against the proposition that the DNA in our cells which codes for genes was designed.

A personal story

A few years ago, I came across an article by an Australian botanist (who is also a creationist) named Alex Williams, entitled, “Astonishing Complexity of DNA Demolishes Neo-Darwinism” (Journal of Creation, 21(3), 2007). At the time I knew very little about specified complexity and other terms in the Intelligent Design lexicon. I heartily dislike jargon, and I was having difficulty deciding whether there was any real scientific merit to the Intelligent Design movement’s claim that certain biological systems must have been designed. But when I read Alex Williams’ article, the case for Intelligent Design finally made sense to me. What impressed me most, with my background in computer science, was that the coding in the cell was far, far more efficient than anything that our best scientists could have come up with. Here are some excerpts from the article:

The traditional understanding of DNA has recently been transformed beyond recognition. DNA does not, as we thought, carry a linear, one-dimensional, one-way, sequential code—like the lines of letters and words on this page… DNA information is overlapping-multi-layered and multi-dimensional; it reads both backwards and forwards… No human engineer has ever even imagined, let alone designed an information storage device anything like it…

  • There is no ‘beads on a string’ linear arrangement of genes, but rather an interleaved structure of overlapping segments, with typically five, seven or more transcripts coming from just one segment of code.
  • Not just one strand, but both strands (sense and antisense) of the DNA are fully transcribed.
  • Transcription proceeds not just one way but both backwards and forwards.
  • There is not just one transcription triggering (switching) system for each region, but many.

(Bold emphasis mine – VJT.)
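Williams’ “both strands” point is easy to illustrate in code: the antisense strand is the reverse complement of the sense strand, so a single stretch of DNA presents two different readable sequences. A minimal sketch (the sequences are made up for illustration):

```python
# The antisense strand is the reverse complement of the sense strand:
# read the sequence backwards, swapping each base for its pairing partner.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(dna):
    return "".join(COMPLEMENT[base] for base in reversed(dna))

sense = "ATGGCCTAA"           # arbitrary example sequence
antisense = reverse_complement(sense)

print(sense)      # ATGGCCTAA
print(antisense)  # TTAGGCCAT
```

One physical molecule, two distinct sequences available for transcription.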

I’d like to make it clear that as someone who believes in a 13.8 billion-year-old universe and in common descent, I do not share Williams’ creationist views. In particular, I think his argument for a young cosmos, based on Haldane’s dilemma, rests on faulty premises. But I do think that Williams is on solid scientific ground when he writes that no human engineer has ever even imagined, let alone designed an information storage device anything like DNA. Here we have an appeal to the STOMPS principle: DNA encodes information in a way which is far cleverer than anything that our most intelligent programmers could have designed, so it is reasonable to infer that DNA itself was designed by a superhuman intelligent agent.

I’d like to conclude this post with a quote from someone whose impartiality is not in doubt: Bill Gates, the co-founder of Microsoft Corporation, who is also an agnostic:

Biological information is the most important information we can discover, because over the next several decades it will revolutionize medicine. Human DNA is like a computer program but far, far more advanced than any software ever created.
(The Road Ahead, Penguin: London, Revised Edition, 1996 p. 228.)

If an agnostic like Bill Gates, who is an acknowledged expert on computing, thinks that the complexity of human DNA surpasses that of any human software design, then it is surely reasonable to infer that human DNA – or at the very least its four-billion-year-old progenitor, the DNA in the first living cell – was originally designed by some superhuman Intelligence.

Professor Cobb is undercut by one of his own commenters

We have seen that Professor Matthew Cobb’s argument against DNA having been designed is a philosophically flawed one. But reading through the comments attached to his post, I came across two comments by a reader named Eric (see here and here) which blew Professor Cobb’s case right out of the water, from a computing perspective:

… Matthew’s comment “Our genes are not perfectly adapted and beautifully designed. They are a horrible, historical mess” makes the analogy to human programming better, not worse….

I would guess that the entire etymology of computer programming languages is a result of historical contingency (i.e. a horrible, historical mess) as much as it is a result of optimization or rational choice. The reason Java forms the basis of so many internet-based languages is because that’s what was included in the earliest version of Netscape Navigator, which captured the market at the time. And the reason there are so many Visual Basic type programming languages is because Basic is what ran on the first generation of IBM personal computers. Geez, I know labs that were programming their nuclear physics detector setups in Fortran in the 1990s, and that is a language invented for use with punch cards.

Now computer programming languages will probably always require a more formal and rigorous syntax than natural language, but IMO the specific formal syntaxes that we used today are more due to the vagaries of human history than they are any sort of rational choice of the best options.

For that matter, why the frak do we even bother with www? Http vs. Https? That’s four redundant and therefore worthless letters out of five, the equivalent of 80% “junk DNA” in one of the most common and most recent human-built computer syntaxes. What sense does it make? None. Why do we have it? History.

Eric makes a very interesting point here. What do readers think?

(H/t: Denyse O’Leary.)

Comments
The most mind boggling thing about Darwinism is how utterly stupid it is. Darwinists are all a bunch of mental midgets playing scientists. How did this chicken-shit voodoo religion become the state religion?

Mapou
November 19, 2015 05:12 PM PDT
Go back and read just about every post Gil Dodgen ever wrote.

Gil is a gifted musician, computer programmer and former atheist who was Richard Dawkins’ worst nightmare, and who single-handedly exposed the vacuity of Darwinism using ground-breaking simulations such as throwing computers out of airplanes. – Granville Sewell

Gil’s point is simple and brilliant. If you are trying to model how molecular errors could accumulate to produce animals and plants, why assume that the only molecules in the cell subjected to errors are the DNA? If you really want to see what unintelligent forces alone can accomplish, through accidents, you should assume accidents can occur in other parts of the organism.

steveh
November 19, 2015 05:00 PM PDT
Sounds like another API to me.

Seversky
November 19, 2015 04:57 PM PDT
Biology is not quite impossible, it is just incredibly difficult!

...yet we understand enough to know it evolved via NS+RM, as well as know what is useful and what isn’t? Not only that, it’s apparently ever changing, so like technology, what’s relevant today may be invalid/irrelevant tomorrow. Biology is therefore nothing more than a wild goose chase: because evolution (RM+NS) happens all the time, we will always be behind. Is biology therefore impossible since we will always be lagging behind? If not, why not?

computerist
November 19, 2015 04:55 PM PDT
a few notes:
The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin – video
http://www.metacafe.com/watch/8593991/

Landscape of transcription in human cells – Sept. 6, 2012
Excerpt: Here we report evidence that three-quarters of the human genome is capable of being transcribed, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs. These observations, taken together, prompt a redefinition of the concept of a gene.,,, Isoform expression by a gene does not follow a minimalistic expression strategy, resulting in a tendency for genes to express many isoforms simultaneously, with a plateau at about 10–12 expressed isoforms per gene per cell line.
http://www.nature.com/nature/journal/v489/n7414/full/nature11233.html

Time to Redefine the Concept of a Gene? – Sept. 10, 2012
Excerpt: As detailed in my second post on alternative splicing, there is one human gene that codes for 576 different proteins, and there is one fruit fly gene that codes for 38,016 different proteins! While the fact that a single gene can code for so many proteins is truly astounding, we didn’t really know how prevalent alternative splicing is. Are there only a few genes that participate in it, or do most genes engage in it? The ENCODE data presented in reference 2 indicates that at least 75% of all genes participate in alternative splicing. They also indicate that the number of different proteins each gene makes varies significantly, with most genes producing somewhere between 2 and 25. Based on these results, it seems clear that the RNA transcripts are the real carriers of genetic information. This is why some members of the ENCODE team are arguing that an RNA transcript, not a gene, should be considered the fundamental unit of inheritance.
http://networkedblogs.com/BYdo8

Duality in the human genome – Nov. 28, 2014
Excerpt: The gene, as we imagined it, exists only in exceptional cases. “We need to fundamentally rethink the view of genes that every schoolchild has learned since Gregor Mendel’s time. Moreover, the conventional view of individual mutations is no longer adequate. Instead, we have to consider the two gene forms and their combination of variants,”,,, “Our investigations at the protein level have shown that 96 percent of all genes have at least 5 to 20 different protein forms.,,,
http://medicalxpress.com/news/2014-11-duality-human-genome.html

Design In DNA – Alternative Splicing, Duons, and Dual coding genes – video (5:05 minute mark)
http://www.youtube.com/watch?v=Bm67oXKtH3s#t=305

Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation – George Montañez, Robert J. Marks II, Jorge Fernandez and John C. Sanford – published online May 2013
Excerpt: In the last decade, we have discovered still another aspect of the multi-dimensional genome. We now know that DNA sequences are typically “poly-functional” [38]. Trifonov previously had described at least 12 genetic codes that any given nucleotide can contribute to [39,40], and showed that a given base-pair can contribute to multiple overlapping codes simultaneously. The first evidence of overlapping protein-coding sequences in viruses caused quite a stir, but since then it has become recognized as typical. According to Kapranov et al., “it is not unusual that a single base-pair can be part of an intricate network of multiple isoforms of overlapping sense and antisense transcripts, the majority of which are unannotated” [41]. The ENCODE project [42] has confirmed that this phenomenon is ubiquitous in higher genomes, wherein a given DNA sequence routinely encodes multiple overlapping messages, meaning that a single nucleotide can contribute to two or more genetic codes. Most recently, Itzkovitz et al. analyzed protein coding regions of 700 species, and showed that virtually all forms of life have extensive overlapping information in their genomes [43].
Conclusions: Our analysis confirms mathematically what would seem intuitively obvious – multiple overlapping codes within the genome must radically change our expectations regarding the rate of beneficial mutations. As the number of overlapping codes increases, the rate of potential beneficial mutation decreases exponentially, quickly approaching zero. Therefore the new evidence for ubiquitous overlapping codes in higher genomes strongly indicates that beneficial mutations should be extremely rare. This evidence combined with increasing evidence that biological systems are highly optimized, and evidence that only relatively high-impact beneficial mutations can be effectively amplified by natural selection, lead us to conclude that mutations which are both selectable and unambiguously beneficial must be vanishingly rare. This conclusion raises serious questions. How might such vanishingly rare beneficial mutations ever be sufficient for genome building? How might genetic degeneration ever be averted, given the continuous accumulation of low impact deleterious mutations?
References: 38. Sanford J (2008) Genetic Entropy and the Mystery of the Genome. FMS Publications, NY. Pages 131–142. 39. Trifonov EN (1989) Multiple codes of nucleotide sequences. Bull of Mathematical Biology 51:417–432. 40. Trifonov EN (1997) Genetic sequences as products of compression by inclusive superposition of many codes. Mol Biol 31:647–654. 41. Kapranov P, et al (2005) Examples of complex architecture of the human transcriptome revealed by RACE and high density tiling arrays. Genome Res 15:987–997. 42. Birney E, et al (2007) Encode Project Consortium: Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 447:799–816. 43. Itzkovitz S, Hodis E, Sega E (2010) Overlapping codes
http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006

Biological Information – Overlapping Codes 10-25-2014 by Paul Giem – video
https://www.youtube.com/watch?v=OytcYD5791k&index=4&list=PLHDSWJBW3DNUUhiC9VwPnhl-ymuObyTWJ

Overlapping Genetic Codes 12-6-2014 by Paul Giem – video
https://www.youtube.com/watch?v=3WZy0n60_ZU

Second, third, fourth… genetic codes – One spectacular case of code crowding – Edward N. Trifonov – video
https://www.youtube.com/watch?v=fDB3fMCfk0E
In the preceding video, Trifonov elucidates codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states, “Reading only one message, one gets three more, practically GRATIS!”. And please note that this was just an introductory lecture in which Trifonov covered only the very basics and left many of the other codes out – codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 codes listed on a slide, although the writing was too small for me to read. Concluding slide of the lecture (at the 1 hour mark):
"Not only are there many different codes in the sequences, but they overlap, so that the same letters in a sequence may take part simultaneously in several different messages." Edward N. Trifonov - 2010
bornagain
November 19, 2015 04:48 PM PDT
"The existence of junk DNA cannot be used as an argument against design." Even less can it be used as an argument against design by the God of the Bible. To those who believe in the fall and the curse that followed it, degradation of the genome through the accumulation of junk would be unsurprising. I am not endorsing this admittedly speculative view. I am merely pointing it out.

Barry Arrington
November 19, 2015 04:07 PM PDT
ppolish: "I can’t think of a single Intelligently Designed system/process that does not generate waste ie junk. It’s a Law I think."

Sometimes it's intentional and desirable. See #7.

mike1962
November 19, 2015 04:02 PM PDT
I can't think of a single Intelligently Designed system/process that does not generate waste ie junk. It's a Law I think.

ppolish
November 19, 2015 03:59 PM PDT
"As a result, the single stretch of DNA that we call Dscam can encode 38,016 different proteins." As a software developer of some skill and expertise, I marvel. This is vastly beyond the skill level of anyone I know.

bFast
November 19, 2015 03:56 PM PDT
Enjoyable post, VJT.

"The existence of junk DNA cannot be used as an argument against design: all it establishes is that the designer of our DNA – whether out of benign neglect, laziness, illness, or ignorance"

Or passive intention, if the productions of the evolutionary system were good enough for the designer's purposes. For example, if I were using a genetic algorithm to design an antenna, I might prefer that the antenna be extremely optimized and aesthetically pleasing to me. But I may decide an ugly antenna is acceptable if the best antenna turns out to be ugly and the antenna's efficiency was the top priority. It appears as if biological organisms are brilliantly optimized in some respects and not so much in others. This may be perfectly acceptable to the designer for a variety of reasons.

EDIT: an example of intentional noise introduced into a system. In audio engineering, up-converting a lower digital format to a higher one, say from 16 bits to 32 bits, typically uses a technique that introduces random values in the bottom 16 bits of the 32-bit result. Believe it or not, this results in a more natural sound to human listeners. Someone analyzing the system without this understanding of human perception might easily conclude that the system had a serious flaw in it! But the random "junk" in the data is intentional and desirable.

mike1962
November 19, 2015 03:52 PM PDT
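[Ed.: a minimal sketch of the up-conversion mike1962 describes. Note that audio dither is more commonly applied when reducing bit depth; this only implements the padding-with-random-low-bits idea as he states it, and the function name is mine.]

```python
import random

def upconvert_with_dither(sample_16, rng=None):
    """Pad an unsigned 16-bit sample to 32 bits, filling the new
    low-order bits with random values instead of zeros."""
    if not 0 <= sample_16 < 2**16:
        raise ValueError("expected an unsigned 16-bit sample")
    rng = rng or random.Random()
    # Original sample goes in the top 16 bits; random "junk" below.
    return (sample_16 << 16) | rng.getrandbits(16)

s = upconvert_with_dither(0x1234)
print(hex(s >> 16))  # 0x1234 -- the original sample survives in the top bits
```

The "junk" in the bottom bits is random by design, yet the signal is fully recoverable from the top bits, which is exactly the point being made.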
Mapou and I are a bit of a two-punch on "combinatorial expansion". But we come at it from two different directions. Mapou's point is straightforward, coming directly from mathematics. Mine comes from semiotics. What is PHYSICALLY REQUIRED to have a formal system of combinatorial expansion?

First, you have to have a semiotic system. No small thing.

Second, you have to have the simultaneous appearance of three independent instances of physicochemical arbitrariness, each encoded within the information that they make possible … or the system simply won't function.

Mr Cobb should read more. I bet he's never even considered what is physically required to produce the system he thinks is such a mess. :)

Upright BiPed
November 19, 2015 02:35 PM PDT
Would be good to have Mr Dodgen contributing again.

Jack Jones
November 19, 2015 02:28 PM PDT
@2, You actually need a citation to understand that the combinatorial explosion is a show stopper when it comes to random searches? You're pathetic, Bob. Like all Darwinists, you're a moron.

Mapou
November 19, 2015 01:41 PM PDT
Bob, Go back and read just about every post Gil Dodgen ever wrote.

Barry Arrington
November 19, 2015 01:39 PM PDT
As any programmer can tell you, the combinatorial explosion kills all stochastic search mechanisms (e.g., RM+NS) dead.
[citation needed]

Bob O'H
November 19, 2015 01:36 PM PDT
Cobb is just another example in a long list of gutless impostors who pretend to be scientists. They knowingly teach lies in order to advance and protect their careers. There is no honor within the scientific community. Almost all biologists are outright liars. The very term "evolutionary biologist" betrays their pseudoscience and complicity in perpetrating the lie. Why? Because it assumes the a priori correctness of the theory (Darwinian evolution) that biology is supposed to be trying to falsify. As any programmer can tell you, the combinatorial explosion kills all stochastic search mechanisms (e.g., RM+NS) dead.

Mapou
November 19, 2015 01:31 PM PDT
