Uncommon Descent Serving The Intelligent Design Community

On Signature in the Cell, Robert Saunders Still Doesn’t Get It


At his Wonderful Life blog, geneticist Robert Saunders has responded to my recent takedown of his “critique” of Stephen Meyer’s arguments for intelligent design, offered and defended in Signature in the Cell. Of course, it wouldn’t be an anti-ID article without its share of condescending rhetoric. Saunders claims that I “have absorbed a typical strategy beloved of Intelligent Design creationists: of devising neologisms that don’t correspond to normally used science terminology, and combined this with ignorance of biology.” I have no doubt that Dr. Saunders is informed about his discipline, but the arguments he presents here are weak.


Giving Mr Saunders a second chance...(and just documenting the exchange)
Ahhh, so now you would like to rewrite the events so as to place yourself on the firm grounds of decorum at the point where you started deleting comments. But there is a problem with that. You were presented an argument for the purposes of discussing genomic information transfer and its observable dynamic properties. That argument leads to some potential conclusions which are in conflict with your personal worldview. Your response, therefore, was to immediately disengage the argument, leaving a disparaging remark about its content (which you entirely avoided). I then reminded you that "mockery was a political response, not a scientific one” and therefore your remarks “could not suffice as a valid response to the observations of physical evidence”. And THAT is the comment you deleted. So clearly, it is not decorum that caused you to delete my comments; it was something, let us say, less genuine. And besides, I see from your comments (as is typically the case) that you have no problem flinging insults. The question, of course, is do you have the intellectual sovereignty to address the evidence against you?
Upright BiPed
Petrushka: Again, I agree with you. (I am slightly worried... :) ). gpuccio
Good. It will also be interesting to see what comes up in the evolution of regulatory networks, which certainly accounts for most evolution in metazoans and plants. Petrushka
Petrushka: On one thing I absolutely agree with you: I do hope that a lot of new and good research be done about the protein functional space. gpuccio
Good point. It is known that certain DNA has function, independent of sequence. DNA is not just a database, although it is certainly that. It is also the file structure and the physical structure as instantiated in physical elements. There are aspects of DNA that are functional as physical spacings, positionings, etc., that have nothing to do with the particular sequence. In reality, DNA is both software and hardware. In considering function of DNA, we therefore have to consider not just software functions, but the actual physical system architecture. This is why the old "genes to RNA to proteins to us" mantra, on which the junk DNA myth was based, is so woefully myopic in its view. My assessment is that there is very little junk DNA -- likely no more than 5-10%. But ID is able to accept that there is some amount of junk. Eric Anderson
Another post goes vamoose from Robert Saunders's blog...
Yet you removed my post when I reminded you that "mockery is not a scientific response, but a political one". One which "cannot suffice as an answer to physical evidence". You've proven that you have no answer to that physical evidence, and are prepared to grant yourself the ideological right to ignore it. How does it feel to be a student of modern liberal arts, and a scientist no less, who censors physical evidence if that evidence is unflattering to your personal worldview?
...time to move on. Upright BiPed
As already discussed, the measurement of dFSCI applies only to a transition or search or walk that is reasonably random. And it requires knowledge of the sequence's history to determine whether it is the result of a single creation event or the current state of a series of small changes. As far as I know, Thornton is the only researcher who has tested whether two known states (cousin sequences) can be bridged by a functional transition state. I doubt if he will be the last. Petrushka
Right- good catch, and in my scenario even that could be recycled. For example, the "weasel" program had set resources from which to get its variation, ie the alphabet and spaces. Living organisms have spare DNA, ie the building blocks of the genetic alphabet. Another idea is that the truly non-functional DNA (just its physical presence) is a software way-station, like RAM, holding instructions for, say, the turning of the DNA/histone spool when the genes inside the spool are needed for transcription. And seeing that imagination is evidence for "them", I am sure I could imagine a function for all alleged junk DNA. Joe
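The "weasel" program mentioned above can be sketched in a few lines. This is a generic reconstruction of Dawkins' demonstration, not his original code: the fixed target phrase and the 27-character alphabet are the "set resources" Joe refers to, while the population size and mutation rate below are arbitrary illustrative choices.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # the fixed resource pool

def score(s: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy the string, randomly replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def weasel(seed: int = 0, pop: int = 100, max_gens: int = 1000):
    """Cumulative selection: keep the best of each generation (parent included)."""
    random.seed(seed)
    best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    gens = 0
    while best != TARGET and gens < max_gens:
        children = [mutate(best) for _ in range(pop)]
        best = max(children + [best], key=score)  # parent retained: score never drops
        gens += 1
    return best, gens

best, gens = weasel()
```

Because the parent is retained each generation, the score is monotone non-decreasing and the run converges on the target long before the generation cap.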
GCUGreyArea: My definition of complexity in dFSCI is very simple: given a digital string that carries the information for an explicitly defined function, complexity (expressed in bits) is simply -log2 of the ratio between the number of functional states (the number of sequences carrying the information for the function) and the search space (the number of possible sequences). More in detail, some approximations must be made. For a protein family, the search space will usually be calculated for the mean length of the proteins in that family, as 20^length. The target space (the number of functional sequences) is the most difficult part to evaluate. The Durston method gives a good approximation for protein families, while in principle it can be approximated even for a single protein if enough is known about its structure-function relationship (that cannot easily be done at present, but knowledge in that field is growing). This ratio expresses well the probability of finding the target space by a random search or a random walk from an unrelated state. dFSCI can be categorized in binary form (present or absent) if a threshold is established. The threshold must obviously be specific for each type of random system, and take into account the probabilistic resources available to the system itself. For a generic biological system on our planet, I have proposed a threshold of 150 bits (see a more detailed discussion here): https://uncommondescent.com/intelligent-design/id-foundations-11-borels-infinite-monkeys-analysis-and-the-significance-of-the-log-reduced-chi-metric-chi_500-is-500/comment-page-1/#comment-410355 As already discussed, the measurement of dFSCI applies only to a transition or search or walk that is reasonably random. Any explicitly known necessity mechanism that applies to the transition or search or walk will redefine the dFSCI for that object.
Moreover, it is important to remember that the value of dFSCI is specific for one object and for one explicitly defined function. gpuccio
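The arithmetic gpuccio describes can be sketched as a toy calculation. This is only an illustration of the formula as stated (-log2 of target space over search space, a 20-letter amino-acid alphabet, and his proposed 150-bit threshold); the target-space figure below is a made-up placeholder, not a Durston estimate for any real protein family.

```python
import math

def dfsci_bits(target_space: float, search_space: float) -> float:
    """dFSCI in bits: -log2 of (functional sequences / possible sequences)."""
    return -math.log2(target_space / search_space)

def protein_search_space(length: int) -> float:
    """Search space for a protein of a given length: 20^length."""
    return 20.0 ** length

THRESHOLD_BITS = 150  # gpuccio's proposed threshold for a generic biological system

# Toy example: a length-100 protein family with an assumed 10^20 functional sequences.
length = 100
search = protein_search_space(length)   # 20^100, about 1.27e130
target = 1e20                           # placeholder, not a measured value
bits = dfsci_bits(target, search)       # about 365.8 bits
exhibits_dfsci = bits > THRESHOLD_BITS
```

With these placeholder numbers the string would be categorized as exhibiting dFSCI, since 365.8 bits exceeds the 150-bit threshold.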
Wikipedia has a good page on complexity:
Specific meanings of complexity. In several scientific fields, "complexity" has a specific meaning:

- In computational complexity theory, the amounts of resources required for the execution of algorithms are studied. The most popular types of computational complexity are the time complexity of a problem, equal to the number of steps that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm; and the space complexity of a problem, equal to the volume of the memory used by the algorithm (e.g., cells of the tape) that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm. This allows computational problems to be classified by complexity class (such as P, NP, ...). An axiomatic approach to computational complexity was developed by Manuel Blum. It allows one to deduce many properties of concrete computational complexity measures, such as time complexity or space complexity, from properties of axiomatically defined measures.
- In algorithmic information theory, the Kolmogorov complexity (also called descriptive complexity, algorithmic complexity or algorithmic entropy) of a string is the length of the shortest binary program that outputs that string. Different kinds of Kolmogorov complexity are studied: the uniform complexity, prefix complexity, monotone complexity, time-bounded Kolmogorov complexity, and space-bounded Kolmogorov complexity. An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches to Kolmogorov complexity. It is possible to treat different kinds of Kolmogorov complexity as particular cases of axiomatically defined generalized Kolmogorov complexity. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to Kolmogorov complexity was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
- In information processing, complexity is a measure of the total number of properties transmitted by an object and detected by an observer. Such a collection of properties is often referred to as a state.
- In business, complexity describes the variances and their consequences in various fields such as product portfolio, technologies, markets and market segments, locations, manufacturing network, customer portfolio, IT systems, organization, processes, etc.
- In physical systems, complexity is a measure of the probability of the state vector of the system. This should not be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never conflated and considered equal, as is done for the notion of entropy in statistical mechanics.
- In mathematics, Krohn-Rhodes complexity is an important topic in the study of finite semigroups and automata.
- In Intelligent Design theory, complexity refers to the number of bits needed to achieve “something”.
- In software engineering, programming complexity is a measure of the interactions of the various elements of the software. This differs from the computational complexity described above in that it is a measure of the design of the software.

There are different specific forms of complexity:

- In the sense of how complicated a problem is from the perspective of the person trying to solve it, limits of complexity are measured using a term from cognitive psychology, namely the hrair limit.
- Complex adaptive system denotes systems that have some or all of the following attributes: the number of parts (and types of parts) in the system and the number of relations between the parts is non-trivial (however, there is no general rule to separate "trivial" from "non-trivial"); the system has memory or includes feedback; the system can adapt itself according to its history or feedback; the relations between the system and its environment are non-trivial or non-linear; the system can be influenced by, or can adapt itself to, its environment; and the system is highly sensitive to initial conditions.
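Kolmogorov complexity as described in the excerpt is uncomputable in general, but any lossless compressor gives an upper bound: if a string compresses well, its descriptive complexity is low. A minimal sketch using Python's standard zlib module (the particular strings and lengths below are illustrative choices, not claims from the excerpt):

```python
import random
import zlib

def complexity_upper_bound_bits(data: bytes) -> int:
    """Length in bits of the zlib-compressed form: an upper bound
    (up to an additive constant for the decompressor) on Kolmogorov complexity."""
    return 8 * len(zlib.compress(data, 9))

# A highly ordered string has a short description...
ordered = b"METHINKS" * 125  # 1000 bytes of repetition
# ...while a pseudo-random string of the same length does not compress.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))

ordered_bits = complexity_upper_bound_bits(ordered)
noisy_bits = complexity_upper_bound_bits(noisy)
```

The repetitive string compresses to a small fraction of its original size, while the pseudo-random string stays close to its raw length, matching the intuition that algorithmic complexity tracks incompressibility.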
Thanks UB, Wow, it has been a long time since I wrote Irreducible Complexity Reduced. I don't harbor any illusions that many people have actually read it, so thanks for the comment! One of these days I'll have to re-read it and see if I still agree with myself! :) It was a long (and somewhat tedious) analysis of a very nuanced point, so not a very interesting read for a lot of folks, but (I hope) it laid out in some detail the rhetorical context in which IC is (and can be) fruitfully discussed. Bill Dembski is the one who encouraged me to submit it, even though it critiqued some of his analogies and his approach to explaining IC (although it certainly supports Dembski's and Behe's basic arguments). Eric Anderson
gpuccio, excellent summary. One tiny thought: Maybe you could retitle it "CSI for Dummies" (I guess I'm suggesting that the ID argument takes the existence of CSI plus our present knowledge of cause and effect in the universe, and draws an inference from that. What you've laid out is an excellent way of getting to the idea of the existence of CSI). "After all, Windows 7 could be the result of causal fluctuations in a quantum field . . ." LOL! Eric Anderson
Joe: 2A - Also, ID certainly doesn't have a problem with some amount of junk as a general matter (completely independent of any fall from grace). We know that machines, over time, degrade, break down, and so on. Eric Anderson
markf, you are in danger of becoming Exhibit A for my statement. I am quite familiar with Dembski's work and he most certainly does not define information as little more than "no known plausible explanation." You have misunderstood or are misrepresenting his work. First, the kind of information ID is interested in is complex specified information. That is the kind of information Dembski is interested in. He certainly understands that there is information which is neither complex nor specified. Otherwise, there would be no need for the adjectives "complex" and "specified" to apply to a particular type of information. Dembski never argues that information in general has no plausible explanation. Second, Dembski never argues that complex specified information is simply that which has no plausible explanation (by that, I presume you mean no naturalistic explanation). You have mixed up your arrows of causation/definition. There are three known types of causes in the universe: law, chance, intelligence. The fact that a particular feature cannot be explained by law or chance strengthens the inference that it may have been caused by intelligence. However, the inference is not only a negative one. Quite the contrary. Our uniform and repeated experience shows that complex specified information always comes from an intelligence. Thus, as a historical science looking at the inference to the best explanation for the cause of complex specified information when we see it, intelligence can be inferred as the most likely cause. Your complaint demonstrates that you do not understand the argument. Third, information is an objective reality. It is not simply a definitional label applied to that which has "no known plausible explanation." Information objectively exists. The question is where it comes from. Do you acknowledge that there is information in DNA? Do you acknowledge that there is complex specified information in DNA? 
If so, then we can have a rational discussion about where that information may have come from. Eric Anderson
1- ID is a "God of the gaps" argument in the same way archaeology is a "builder of the gaps" argument and forensics is a "criminal of the gaps" argument. 2- Junk DNA Jonathan M wrote:
Saunders further raises questions regarding the notion that ID predicts function for so-called "junk DNA." Indeed, this is one area where I think ID offers significant heuristic value. Where Darwinian thinking expects us to find junk and waste wherever we look in the cell, ID predicts the opposite -- that is to say, we should expect to find function and meaning. It seems unlikely that a designer would fill our genomes with millions of bases of nonsensical "filler." If ID is correct, therefore, it is to be expected that much of this "filler" will be found to be functional (though granting that various scenarios may well have resulted in the accumulation of some level of genuine junk).
A- Darwinism does not expect junk but can explain it. B- ID can explain junk: Christian IDists should be familiar with the fall from grace, but also non-functioning DNA could be a reservoir for future components - a stockpile, so to speak. Joe
OT: Human cells build protein cages to trap invading Shigella Excerpt: Mostowy continue to investigate the properties of the individual septins, which total 14 in humans, to understand how they associate with other proteins as parts of complex nano-machines. http://www.physorg.com/news/2011-12-human-cells-protein-cages-invading.html bornagain77
And Mark, exactly why is it not considered by you to be much more of a 'neo-Darwinism of the Gaps' argument to attribute the generation of functional information to a material process that has never, ever, before been seen generating functional information above the level of functional information that was already observed being present, whereas intelligence uniformly, and repeatedly, has uniquely been observed generating such functional information above that which is already present? Is it not really 'magic' that you yourself are postulating with this 'neo-Darwinism of the gaps' argument, since it has never been observed before, whereas you yourself generate functional information every time you write a sentence? Or are you saying that you are not really intelligent, i.e. that you are 'absolutely stupid' as gpuccio put it? :) bornagain77
UB: Good work as always. I am always amazed at how intelligent people cannot grasp even the most simple and obvious truths when they are ideologically biased. Just to try some "ID for dummies": Functionally specified information. It's not difficult. I need a string of bits that contains the minimal information necessary to do something. The information that is necessary to do something is functionally specified. Isn't that simple? Complexity: How many bits do I need to achieve that "something"? It's not difficult. It is simple. Programmers know very well that, if they want more functions, they have to write more code. Let's take the minimal code that can do some specific thing. That is the specified complexity for that function. Mark will say that if we think that Windows 7 is designed, that is a God of the Gaps argument, because we are just saying that we have no plausible explanation for it other than Microsoft. Well, Mark is obviously intelligent, so how can I challenge that kind of statement with my low IQ? :) After all, Windows 7 could be the result of causal fluctuations in a quantum field, filtered by the omnipresent principle of natural computation. Ah, those IDiots and their God of the Gaps arguments! gpuccio
Eric: I am happy to realize that maybe my intelligence is normal! :) gpuccio
Mark: I am happy to realize that I am absolutely stupid! :) gpuccio
Hello JDNA and Eric, JDNA, I think your summary is spot on. Of course, that paints me in a favorable light, so my judgment may be suspect. ;) Really, thank you for the kind words. Eric, Thank you as well. I've seen those same arguments. What I think the semiotic argument adds is an accounting of the physicality. -- by the way, you have the same name as the guy who wrote "Irreducible Complexity Reduced". If you happen to be that guy, then I would imagine you have a special appreciation for the fact that semiosis is the most prolific example of IC in biology. Upright BiPed
Funnily enough I view the ability to understand why the ID argument is in the end a "God of the Gaps" argument as a somewhat advanced IQ test. It takes quite a lot of thinking to realise that "information" as defined by ID means little more than "no known plausible explanation". You have to go back to Dembski's work and read it in detail. markf
very few will attempt to take on the fundamental argument. In fact it is telling how many have fled from your execution of it. Liddle gets credit for taking a crack at it. But she had no choice; she couldn't engage you indirectly for months on the topic and then sit back and watch the Retreat of Moran and not execute some cover fire. Her response was more to save face than refute. She didn't want to Dawkins the challenge. junkdnaforlife
...he then removed my post because I reminded him that "mockery was a political response, not a scientific one" and therefore "could not suffice as a valid response to observations of physical evidence". He seems prepared to grant himself an exception nonetheless. Upright BiPed
Upright BiPed, You've laid out some very interesting and valuable thoughts regarding the semiotics involved in DNA (as elsewhere). Unfortunately, the point seems lost on some proponents of a naturalistic creation story. In many cases it seems they are unwilling, to the point of being literally unable, to grasp the basic concepts of information, representation, code, etc. I've seen so many feeble attempts to deny that there is any meaningful information in DNA, either by trying to dance the definition of the word "information" on the head of a pin, or by making the absurd allegation that the information doesn't exist apart from the physical representation of it. Is there another example we could use from ordinary everyday experience that would be so obvious it could not be denied, or are we doomed to never be able to have a productive discussion with such individuals? Again, I view the ability to accept the fact of information content and symbolic representation in the cell as something of a basic IQ test. If someone can't (or won't) acknowledge them, it is hard to have a rational discussion about where they came from. Eric Anderson
Mr. Saunders is eager to inform me that he would like to be immediately counted among those scientists who have no viable response to physical evidence which challenges their religious ideology. His famous last words: "I think that restrictions on comment length are well-advised, having looked at this comment." Upright BiPed
NOTE: I was trying to post to Robert Saunders's site, but his blog will only allow short comments, so I am posting here and will leave him a link. Mr Saunders, All competitiveness aside, I do hope you'll choose to return and revise your remarks regarding the semiotic argument for design. I am certainly willing to admit that I wish this for entirely self-serving reasons. The problem is this: I can't find an opponent who can successfully attack the argument on its merits. Like any empirical argument of substance, it is based upon a limited number of premises which are fully supported by observable evidence. The use of terms follows their widely-understood textbook definitions, the argument makes no violations of fact, and has no internal contradictions. To the contrary, since the argument discusses the inter-related physical entailments of recorded information, each individual observation coherently supports the others. Here are some observations of information transfer which you may wish to attack: 1) Information (from the Latin verb informare: to give form, to in-form) is recorded by a representational arrangement of matter/energy to cause a specific physical effect within a system. 2) For one thing to represent something else within a system, it must be separate from it. 3) If a physical arrangement of matter/energy is separate from a physical effect, then there must be something that establishes the physical relationship between the two; a second arrangement of matter/energy (a physical protocol). 4) Without the second arrangement of matter to establish the relationship between the first arrangement and the resulting effect, that relationship would not exist within the system. 5) The observed dynamic property at work is that each of these physical things (the physical representations, the physical protocols, and the physical effects) remain discrete from one another. 
These observations lead to a list of four physical entailments of recorded information transfer, each of which can be observed to exist in any transfer of recorded information leading to an effect. They are: a) the existence of an arrangement of matter acting as a physical representation, b) the existence of an arrangement of matter to establish the relationship between a representation and the effect it represents within a system (the protocol), c) the existence of physical effects being driven by the input of the representations, and d) the dynamic property that they each remain discrete. Observations of systems that satisfy these four requirements confirm the existence of semiotic information transfer. You will notice that for such systems to operate requires two immaterial properties to be instantiated in the physical objects within the system (the physical representation carries an immaterial abstraction of an effect, and the physical protocol carries an immaterial rule that "this maps to that") and further, that these two objects must be coordinated, one to the other. Each of these entailments is found in every example of recorded information transfer - no matter whether that information is bound to humans, or human intelligence, or other living things, or non-living machines. Each of these entailments is also precisely exhibited by the transfer of information from the genome during protein synthesis. The prevailing first reaction is for the opponent to throw up his hands and declare "It's just chemical". But that objection is fully rejected in the argument. 
However, in that regard (having the physical dynamics of information transfer on the table for observation), I also have asked the following question: "If in one instance we have a thing that actually is a symbolic representation, and in another we have something that just acts like a symbolic representation – then someone can surely look at the physical evidence and point to the distinction between the two". I have yet to have anyone take up an answer to that question. As I asked Dr Moran before he vacated the argument, and so now I will ask you: do you agree with these observations, or do you have evidence that attaching cytosine to thymine to adenine is mapped to “bind leucine next” in any physical context? - - - - - - - Here is the original summary argument made to Dr Moran. Here is Dr Liddle's strained (anthropocentric) response. And here is my response to Dr Liddle. You may want not to follow in her footsteps, although I must commend Dr Liddle for being the only opponent thus far who has had the intellectual sovereignty to actually address the argument. (I am still awaiting her response to my rebuttal). Upright BiPed
"but failing to even understand the argument is pretty sad ... If someone can’t even understand the basic and simple argument ID makes, then it is a pretty good sign that they are either incapable or unwilling to even have a rational discussion." I challenged Mr Saunders to attack the semiotic argument for design, perhaps teasing him a bit about being able to provide a more substantive response than either Larry Moran or Dr Liddle. I suppose I was hoping that a bit of gratuitous tempting would motivate him to actually address the physical evidence instead of just folding his cards. Alas, my hopes were dashed. He simply said it "makes little sense". Upright BiPed
But I explained in the very section he quotes why this is emphatically not the case. ID is based on standard principles of scientific methodology. When dealing with past events, scientists conventionally use the historical (abductive) method of inference to the best explanation. In order to reconstruct what happened in the remote past, scientists allow their present experience of cause and effect to guide their search for the best explanation. Thus, it follows that intelligence is the best -- most causally adequate -- candidate explanation for the information intrinsic to the hereditary molecules of DNA and RNA. I am having difficulty seeing how this can possibly be construed as a "god-of-the-gaps" argument.
You are absolutely right. Saunders' complaints about god-of-the-gaps are based on a misunderstanding of the ID argument. The ID argument is quite simple and limited. One can of course dispute whether design is the best explanation in specific situations, but failing to even understand the argument is pretty sad. I view the ability to understand the ID argument as something of a basic IQ test. If someone can't even understand the basic and simple argument ID makes, then it is a pretty good sign that they are either incapable or unwilling to even have a rational discussion. Eric Anderson
