
Keith S in a muddle over meaning, macroevolution and specified complexity


One of the more thoughtful critics of Intelligent Design is Keith S, from over at The Skeptical Zone. Recently, Keith S has launched a barrage of criticisms of Intelligent Design on Uncommon Descent, to which I have decided to reply in a single post.

Is Dembski’s design inference circular?

Keith S’s first charge is that Intelligent Design proponents have repeatedly ignored an argument he put forward two years ago in a comment on a post at TSZ (19 October 2012, at 5:28 p.m.), showing that Dr. William Dembski’s design inference is circular. Here is his argument:

I’ll contribute this, from a comment of mine in the other thread. It’s based on Dembski’s argument as presented in Specification: The Pattern That Signifies Intelligence.

Here’s the circularity in Dembski’s argument:

1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it could not have been produced by unguided evolution or any other unintelligent process.

7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

I’m sorry to say that KeithS has badly misconstrued Dembski’s argument: he assumes that the “could not” in premise 1 refers to absolute impossibility, whereas in fact, it simply refers to astronomical improbability. Here is Dr. Dembski’s argument, restated without circularity:

1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes.

2. We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that it is certain beyond reasonable doubt that unintelligent processes did not produce the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it is certain beyond all reasonable doubt that it was not produced by unguided evolution or any other unintelligent process.
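To make steps 3 to 5 concrete, here is a minimal sketch in Python of the calculation as Dembski describes it in Specification: The Pattern That Signifies Intelligence. The 10^120 bound on available trials comes from that paper; the value of P(T|H) and the specificational resources used below are purely illustrative assumptions of mine, not figures for any real biological structure.

```python
import math

def specified_complexity_bits(p_t_given_h, spec_resources=1.0, trials=1e120):
    """Negative log (base 2) of the multiplicative formula in steps 3-4.

    p_t_given_h    -- P(T|H): probability of producing the pattern by
                      "Darwinian and other material mechanisms"
    spec_resources -- phi_S(T): the specificational resources (how many
                      patterns are at least as simple as the one observed)
    trials         -- bound on the number of opportunities for the event
                      (Dembski's 10^120)
    """
    return -math.log2(trials * spec_resources * p_t_given_h)

# Illustrative numbers only: a specified pattern whose chance hypothesis
# gives P(T|H) = 10^-200, with 10^10 specificational resources.
sc = specified_complexity_bits(1e-200, spec_resources=1e10)
print(round(sc, 1))        # about 232.5 bits
print(sc > 1)              # True: the threshold in step 5 is exceeded
```

In this formulation the replicational resources sit inside the product, so the step-5 threshold amounts to asking for a positive bit count (Dembski uses 1 bit); gpuccio's dFSCI approach, discussed later in this post, instead leaves the resources out of the product and uses a flat 500-bit cutoff.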

I conclude that KeithS’s claim that Dr. Dembski’s design argument is circular rests upon a misunderstanding of the argument.

Keith S’s bomb, and why it falls flat

Three weeks ago, on Barry Arrington’s post titled No Bomb After 10 Years, KeithS put forward what he considered to be a devastating argument against Intelligent Design: that unguided evolution is literally trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things.

The argument, in a nutshell, goes like this:

1. We observe objective nested hierarchies (ONH)
2. Unguided evolution explains ONH
3. A designer explains ONH, but also a trillion alternatives.
4. Both unguided evolution and a designer are capable of causing ONH.
Conclusion: Unguided evolution is a trillion times better at explaining ONH.
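Spelled out as a likelihood ratio, the reasoning runs as follows. The trillion figure is Keith S's own rhetorical estimate; the probabilities below are illustrative placeholders, and the rest of this post disputes both of them.

```python
# Keith S's likelihood-ratio reasoning, with illustrative numbers only.
n_design_options = 10**12                    # patterns a designer could have chosen
p_onh_given_unguided = 1.0                   # branching descent is assumed to yield ONH
p_onh_given_design = 1 / n_design_options    # no reason to prefer ONH over the rest

bayes_factor = p_onh_given_unguided / p_onh_given_design
print(f"{bayes_factor:.0e}")                 # 1e+12: "a trillion times better"
```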

I responded to this argument in my post, Why KeithS’s bomb is a damp squib, which made five points in reply to Keith S. My second point was as follows:

The problem is that KeithS has conflated two hypotheses: the hypothesis of common descent (which is very well-supported by the evidence that objective nested hierarchies exist in living things), and the hypothesis of unguided evolution (which he also claims is well-supported by the evidence that objective nested hierarchies exist in living things).
The first hypothesis is indeed well-supported by the evidence, as the only known processes that specifically generate unique, nested, hierarchical patterns are branching evolutionary processes. The probability that any other process would generate such hierarchies is vanishingly low.

But if KeithS wishes to argue against intelligently guided evolution, then the two alternative hypotheses he needs to consider are not:

A: a branching evolutionary process (also known as a Markov process) generated the objective nested hierarchies we find in living things; and

~A: an Intelligent Designer generated these objective nested hierarchies, but instead:

A: an unguided process generated the objective nested hierarchies we find in living things; and

~A: an intelligently guided process generated these objective nested hierarchies.

The point KeithS makes in his essay is that on hypothesis ~A, the likelihood of B (objective nested hierarchies in living things) is very low. However, it is also true that on hypothesis A, the likelihood of B is very low, as the vast majority of unguided processes don’t generate objective nested hierarchies.

KeithS’s reply here (in comment 76):

That’s not true.
In reality, mutation rates are low enough and vertical inheritance predominates enough that we can treat unguided evolution as a Markov process.

My reply:
Here, Keith S attempts to rebut my argument that “the vast majority of unguided processes don’t generate objective nested hierarchies” by pointing out (correctly) that the unguided evolution we observe during the history of animal life on Earth – if we ignore the prokaryotes here and focus on the 30 major taxa of animals, as Theobald does in his 29 Evidences for Macroevolution – is indeed a Markov process, since vertical inheritance predominates. However, this is not germane to the mathematical argument I put forward. The question is not whether a Markov process did indeed generate the 30 taxa of animals living on Earth, but rather whether the only unguided processes in Nature that would have been capable of generating various groups of animals on some planet harboring life were Markov processes (which are the only processes known to automatically generate objective nested hierarchies).

For instance, we might imagine a natural process X that generates various types of animals on life-bearing planet Z, where these animals do not exhibit objective nested hierarchies. This is just as fair – or just as unfair – as Keith S arguing that an Intelligent Designer might have produced various types of animals which did not exhibit objective nested hierarchies.

The only way for Keith S to refute the hypothetical scenario I proposed would be to argue that life-forms which did not exhibit objective nested hierarchies would not be viable (over the long-term), for some reason – which implies that the only life-forms we are likely to find in the cosmos are ones which do exhibit these hierarchies. But if that were the case, then the same argument would explain equally well why a Designer would refrain from making life-forms which did not exhibit objective nested hierarchies. And in that case, the Designer hypothesis explains the presence of objective nested hierarchies in living things just as well as the hypothesis of unguided evolution.

Why Ockham’s razor fails to support Keith S’s claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life

In an effort to further discredit Intelligent Design, Keith S appeals to Ockham’s razor. Now I’ll address that argument in a moment, but for now, let’s just suppose (for the sake of argument) that Keith S is right, and that Intelligent Design is a redundant hypothesis, when it comes to explaining the properties of living things. Even if that were the case, that’s not the same thing as the mathematical claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life. (We don’t say, for instance, that the hypothesis that angels push the planets round the Sun is trillions of times worse than the hypothesis that they are moved by the forces postulated in Newtonian mechanics; we just say that we have no need for the former hypothesis.) Ockham’s razor is a non-quantitative device for eliminating unnecessary explanations; hence it cannot be used to support quantitative claims regarding the superiority of one hypothesis over another.

I conclude that Keith S’s appeals to Ockham’s razor are completely beside the point. Even if he is right – and as we’ll see below, there are excellent grounds for thinking that he isn’t – the mathematical argument against Intelligent Design is invalid.

Keith S’s Fourfold Challenge and the Rain Fairy

And now, without further ado, let’s have a look at Keith S’s Fourfold Challenge (see also here):

Some more questions for the ID supporters out there:

1. Bob is walking through the desert with his friend, a geologist. They come across what appears to be a dry streambed. After some thought, Bob states that every rock, pebble, grain of sand and silt particle was deliberately placed in its exact position by a Streambed Designer. His friend says “That’s ridiculous. This streambed has exactly the features we would expect to see if it was created by flowing water. Why invoke a Streambed Designer?”

Who has the better theory, Bob or his friend?

2. Bob is invited to the scene of an investigation by a friend who is an explosive forensics expert. They observe serious damage radiating out in all directions from a central point, decreasing with distance, as if an explosion had taken place. Bob’s friend performs some tests and finds large amounts of explosive residue. Bob says, “Somebody went to a lot of trouble to make it look like there was an explosion here. They even planted explosive residue on the scene! Of course, there wasn’t really an explosion.”

Who has the better theory, Bob or his friend?

3. Bob and another friend, an astronomer, observe the positions of the planets over several years. They determine that the planets are moving in ellipses, with the sun at one of the foci. Bob says, “Isn’t that amazing? The angels pushing the planets around are following exactly the paths that the planets would have followed if gravity had been acting on them!” The astronomer gives Bob a funny look and says “Maybe gravity is working on those planets, with no angels involved at all. Doesn’t that seem more likely to you?”

Who has the better theory, Bob or his friend?

4. Bob is hanging out at the office of a friend who is an evolutionary biologist. The biologist shows Bob how the morphological and molecular data establish the phylogenetic tree of the 30 major taxa of life to an amazing accuracy of 38 decimal places. “There couldn’t be a better confirmation of unguided evolution,” the biologist says. “Don’t be ridiculous,” Bob replies. “All of those life-forms were clearly designed. It’s just that the Designer chose to imitate unguided evolution, instead of picking one of the trillions of other options available to him.”

Who has the better theory, Bob or his friend?

Share your answers with us. Did your answers to the four questions differ? If so, please explain exactly why.
And ponder this: If you are an ID supporter, then you are making exactly the same mistake as Bob does in the four examples above, using the same broken logic. Isn’t that a little embarrassing? It might be time to rethink your position.

And don’t forget the Rain Fairy.

Keith S describes the Rain Fairy hypothesis here:

The only designer hypothesis that fits the evidence is one in which the designer mimics (by desire, coincidence, or limitation) the patterns of unguided evolution. The only Rain Fairy hypothesis that fits the evidence is one in which the Rain Fairy mimics (by desire, coincidence, or limitation) the patterns of unguided meteorology. Any reasonable person will reject the Rain Fairy and Designer hypotheses in favor of their competitors, which explain the evidence far, far better.

I’d like to make two points in reply. The first is that for each of the non-biological phenomena figuring in KeithS’s examples – streambeds, chemical explosions, the movement of the planets and weather patterns – there is an overarching natural hypothesis which explains all of their features. By contrast, in Keith S’s example relating to the tree of life, the Darwinian hypothesis of branching evolution explains only the patterns we find in the tree of life. It does not explain the other features of living things. In other words, Darwinian evolution (or mutation-driven evolution, for that matter) needs to be able to provide a comprehensive theory of living things and their properties before we can confidently declare that we have no need for the hypothesis of Intelligent Design.

The second (and related) point I’d like to make with respect to the Rain Fairy example is that meteorological phenomena exhibit no patterns with a high degree of specified complexity – and even if they did, none of these patterns is functional. The biological world, by contrast, is rife with patterns exhibiting a high degree of functional specified complexity – proteins, for instance. Hence the Rain Fairy analogy does not hold.

Why ID supporters would not be fazed if an unguided process could be shown to have generated the objective nested hierarchy found in animals

But let us be generous, and suppose (for argument’s sake) that Keith S can come up with a good natural reason showing why (a) the only kinds of animals that are likely to be generated on a life-bearing planet by unguided processes will be ones exhibiting objective nested hierarchies, whereas (b) an Intelligent Designer would not be bound by such constraints. Even so, Keith S’s argument is still vulnerable to the third objection which I listed in my post, Why KeithS’s bomb is a damp squib:

My third point is that KeithS’s argument assumes that the genetic and morphological features on the basis of which living things are classified into objective nested hierarchies were generated by the same (unguided, Markovian) processes which generated the branches in the hierarchies. This is unlikely, even on a standard evolutionary view: features take time to evolve, and therefore would presumably have appeared at some time subsequent to the branch nodes themselves. Thus it could well be the case that while unguided processes explain the existence of objective nested hierarchies in the living world, guided processes are required to explain some or all of the features in these hierarchies. (Italics added – VJT.)

Features that might need to be explained by guided processes include new proteins appearing in animals, as well as new cell types in distinct lineages of animals and the appearance of new control hierarchies regulating body plans in animals.

Unfortunately, KeithS’s reply here (in comment 89 on my post) misses the point I was trying to make:

I’m not sure why you think this is an issue. The taxa in a cladogram are always at the ends of the branches, never at the nodes.

It isn’t enough to show that guided processes might be involved. You need to show that they must be involved, because otherwise you are still at the trillions-to-one disadvantage.

In his first sentence, Keith S makes a valuable concession, without realizing it. He concedes that the processes which generated the branches in the tree of animal life need not be the same as the processes which generated the features which distinguish the various types of animals. Hence it could be the case that the former are unguided, while the latter are guided. That was the point I wished to make. Arguing against Intelligent Design by appealing to the branching process which generated the tree of life is futile, because ID advocates don’t regard the branching process as evidence of intelligent design in the first place. In other words, even if unguided evolution is trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things, ID advocates can still answer: “So what? At best, you’ve shown that the unguided branching processes are a better explanation for objective nested hierarchies in living things; but you’ve failed to demonstrate that these processes are sufficient to explain the characteristics of living things.”

Keith S goes on to point out, correctly, that “It isn’t enough to show that guided processes might be involved.” Intelligent Design proponents need to show that guided processes must be involved in generating these features. He spoils his argument somewhat by referring to the “trillions-to-one disadvantage” which the Intelligent Design hypothesis allegedly suffers from (and which I’ve discredited above). Nevertheless, Ockham’s razor alone would suffice to rule Intelligent Design out of court, unless ID advocates could demonstrate the insufficiency of unguided processes to explain the biological features of animal life. So the question we need to answer is: are there any barriers to the evolution of the 30 major groups of animals, via unguided processes?

Barriers to macroevolution – they’re real!

Keith S rightly contends that the onus is on the Intelligent Design proponent to demonstrate the existence of barriers to macroevolution. My recent post, titled, Barriers to macroevolution: what the proteins say, described one such barrier: the evolution of proteins. (As any biochemist will tell you, there are many kinds of proteins which are unique to each of the 30 major taxa of animals, so this problem is quite separate from the origin-of-life problem.) I’ll quote just the first three paragraphs of my post:

KeithS has been requesting scientific evidence of a genuine barrier to macroevolution. The following is a condensed, non-technical summary of Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds. Since (i) proteins are a pervasive feature of living organisms, (ii) new proteins and new protein folds have been continually appearing throughout the four-billion-year history of life on Earth, and (iii) at least some macroevolutionary events must have involved the generation of new protein folds, it follows that if Dr. Axe’s argument is correct and neo-Darwinian processes are incapable of hitting upon new functional protein folds, then there are indeed genuine barriers to macroevolution, in at least some cases. The argument put forward by Dr. Axe is robustly quantifiable, and it is fair to say that Dr. Axe carefully considers the many objections that might be put forward against his argument. If there is a hole in his logic, then I defy KeithS to find it.

Finally I would like to thank Dr. Axe for putting his paper online and making it available for public discussion. The headings below are my own; the text is entirely taken from his paper.

Abstract

Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem – the sampling problem – was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.

I then issued a further invitation to Keith S to respond in a subsequent comment:

KeithS,
I only have a few minutes, but I’d like to say that you are welcome to post scientific criticisms of Dr. Axe’s argument on this thread, if you have any.

Another commenter on the thread invited him to do the same:

I think that you would gain much credibility with many, if you were to take that advice. Why not start with scientific responses to the issues raised in “Barriers to Macroevolution: what the proteins say”.

And what was KeithS’s response? An appeal to circular, blatantly question-begging logic!

If you’ve been following UD lately, you’ll know that I have presented an argument demonstrating that ID is literally trillions of times worse at explaining the evidence when compared to unguided evolution.

And I’ve been trying to tell Keith S that the evolution of proteins constitutes just such a barrier to macroevolution, by appealing to the paper by Dr. Douglas Axe from which I quoted above.

To my dismay and disappointment, the rest of my thread on Barriers to macroevolution was taken up with an arcane discussion of censorship of previous posts on Uncommon Descent, which is neither here nor there.

I repeat my challenge: can Keith S kindly tell me what’s wrong with the reasoning in Dr. Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, which I summarized in a non-technical form in my recent post?

In a muddle over meaning

Not content with leaving matters there, Keith S issued a challenge of his own over at gpuccio’s post, An attempt at computing dFSCI for English language. In his post, gpuccio wrote:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB [Universal Probability Bound – VJT]. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

I don’t want to discuss the mathematics behind gpuccio’s calculation here, except to say that, if anything, it erred on the side of generosity, in conceding the existence of a pool of 200,000 English words (an underestimate, by the way), and asking what percentage of 600-letter sequences made up entirely of these words would constitute a meaningful sonnet. Some commenters objected that there isn’t a clear black-and-white dividing line between meaningful poetry and meaningless strings of words which obey the rules of English syntax, as the history of the Ern Malley hoax shows. But let’s face it: if we saw a message with the words, “Colorless green ideas sleep furiously” written 100 times, we’d all conclude that it was designed, either directly (by a human being) or indirectly (by a computer programmed by a human being).

In my opinion, however, a much fairer question to ask would be: if we received a binary signal from outer space and decoded it into (say) ASCII code, only to find that it spelt out a Shakespearean sonnet, what would the odds be that it was generated via an unguided process? I believe this example is a more appropriate one, as it doesn’t start with a pool of words, or even letters, but with simple binary signals which can be used to make letters, which can be arranged into English words, which can in turn be arranged into meaningful sentences. And even if the boundary between meaningful and meaningless sentences is a little blurry at times, the boundary between syntactically valid sentences and sentences with bad syntax is a lot clearer and less ambiguous. Using my analogy, we can certainly show that the odds of a binary signal from space spelling out a sonnet of any kind are less than 1 in 2^500.
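A back-of-the-envelope sketch of that scenario, under two illustrative assumptions of my own (the signal decodes as 7-bit ASCII, and the sonnet runs to roughly 600 characters, as in gpuccio's example):

```python
chars = 600
bits_per_char = 7               # assumed 7-bit ASCII decoding
raw_bits = chars * bits_per_char
print(raw_bits)                 # 4200 bits of raw signal to account for

# The 1-in-2^500 cutoff used in this post, in base-10 terms:
print(2 ** 500 > 10 ** 150)     # True: 2^500 is a little over 10^150
```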

And what was Keith S’s devastating reply to gpuccio? The main points that he makes can be found in comments 9, 11 and 13 on gpuccio’s post. I’ll address them one at a time.

gpuccio,

We can use your very own test procedure to show that dFSCI is useless.

Procedure 1:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.

Procedure 2:

1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.

The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

Even your own test procedure shows that dFSCI is useless, gpuccio.

Keith S’s argument misses the point here. What he fails to ask is: why did we choose 600 characters as a cutoff point and not six? Because we can show that unguided processes are fully capable of generating six-character strings, like “Stop it”.
If I discovered a binary signal from outer space that spelt out these characters when converted into ASCII, I certainly would not conclude that it was designed.

On the other hand, we can calculate that the probability of unguided processes coming up with a meaningful 600-character string is so low that we would not expect this event to happen even once in the history of the observable cosmos – in other words, the probability is less than 1 in 2^500, or roughly 1 in 10^150. Since the string in question is specified (as it has a semantic meaning), a design inference is warranted.
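Here is how the numbers quoted from gpuccio's post fit together. The 32-symbol alphabet is my own assumption, inferred from his "about 3000 bits" for 600 characters; the 2^2500 bound on the target space is his.

```python
from math import log2, log10

length = 600
alphabet = 32                                  # assumed: 2^5 symbols per position
search_space_bits = length * log2(alphabet)    # 3000 bits, as in gpuccio's post

target_space_bits = 2500                       # gpuccio's assumed upper bound (log2)
functional_bits = search_space_bits - target_space_bits
print(functional_bits)                         # 500.0 bits, the UPB cited above

print(500 * log10(2))                          # ~150.5: 1 in 2^500 is below 1 in 10^150
```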

Keith S continues:

gpuccio,

We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless.

The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.

There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular.

What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular.

dFSCI is a fiasco.

Gpuccio’s calculations were perfectly appropriate for the class of entities he was discussing – namely, character strings. Character strings are not alive, so they are incapable of evolving by the non-random process of natural selection.
In addition, natural selection does not select for semantic meaning; what it selects for is functionality. The latter can be refined over the course of time by evolution, whereas the former cannot, as unguided evolution is blind to it.
Of course, that leaves us with the question of whether gpuccio’s post can be used to undermine the theory of evolution by natural selection. But gpuccio never discussed that question in his post, which was simply an attempt to calculate the dFSCI in a Shakespearean sonnet.

Finally, Keith S writes:

Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it.

KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s.

All three concepts are fatally flawed and cannot be used to detect design.

I repeat: if Keith S wants a decent probabilistic calculation which takes account of “Darwinian and other material mechanisms”, then why doesn’t he respond to the probability calculations contained in the paper I cited above by Dr. Axe (see pages 10 and 11), which is titled “The Case Against a Darwinian Origin of Protein Folds”? Do that, Keith S, and then we’ll talk.
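For readers who want to check the scale of the figures in Axe's paper, here is a minimal sketch. The code is mine; the input figures are Axe's, as quoted at length in kairosfocus's comment below.

```python
from math import log10

# Maximum number of "lucky survivor" cells available to sample new sequences:
years, generations_per_year, population = 5e9, 1e4, 1e10
lucky_survivors = years * generations_per_year * population     # 5 x 10^23

# Sequence indifference implied if a 300-residue function were findable
# within that sample: the 300th root of 1/(5 x 10^23).
tolerance = (1 / lucky_survivors) ** (1 / 300)
print(round(tolerance, 2), round(tolerance * 20))   # ~0.83, i.e. about 17 of 20 amino acids

# Measured prevalence of functional sequences (length 153), rescaled to 300 residues:
prevalence_exponents_153 = [-77, -53]
prevalence_exponents_300 = [e * 300 / 153 for e in prevalence_exponents_153]
print([round(e) for e in prevalence_exponents_300])              # [-151, -104]

# Shortfall relative to the 1-in-5x10^23 cutoff, in orders of magnitude:
cutoff_exponent = log10(lucky_survivors)                         # ~23.7
print([round(-e - cutoff_exponent) for e in prevalence_exponents_300])   # [127, 80]
```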

Comments
KF @ 37
The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10^23). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.
Excerpt from Interplay of physics and evolution in the likely origin of protein biochemical function (full paper here):
The results for 1,284,577 entries extracted from the ChEMBL15 (32) and BindingDB (33) databases are reported. We note that the reported number of ligand–receptor interactions is a lower bound as many such interactions are currently uncharacterized. Even so, in more than 1,400 ligands, each binds to 40 or more nonhomologous proteins. Thus, there is considerable experimental evidence that a given ligand interacts with many proteins in a proteome; viz. such interactions are quite promiscuous. The clear implication is that the fundamental physical–chemical properties of proteins are sufficient to explain many of their structural and molecular functional properties
In simple terms, proteins can easily bind small molecules and mutations can easily find amino acid sequences that generate functional proteins.

Me_Think
November 14, 2014 at 07:27 AM PST
Andre sneeringly claimed: "Duh…. Crystals lack complexity!" I have some questions for you, Andre, and for all other IDists: Are atoms complex? Is there CSI-dFSCI-FSCO/I in atoms? Are atoms intelligently designed? Is light complex? Is there CSI-dFSCI-FSCO/I in light? Is light intelligently designed? Is gravity complex? Is there CSI-dFSCI-FSCO/I in gravity? Is gravity intelligently designed? Is the universe a 'system'? Is the entire universe intelligently designed?

Reality
November 14, 2014 at 06:54 AM PST
If you read post #40 then thank you. If you didn't read it, too bad. You've missed something important to keep in mind in many of your future discussions. Go back and read it. Still have time. :) Now you may continue your interesting discussion on stats calculations. :)

Dionisio
November 14, 2014 at 06:47 AM PST
Well, well, kairosfocus starts off with his usual "loaded with personalities" incendiary, accusatory pomposity, and Joe continues his grunting, accusatory one-liners even though Barry issued a "final warning" to him days ago. What a surprise. Not. Barry, did your final warning to Joe mean anything or was it just an empty bluff? Look at the comments so far in this thread, especially by kairosfocus, Joe, and Andre. Who's trying to start a "quarrel"? Joe grunted: "It’s as if our opponents don’t know anything about science and they think that helps them somehow." "And we know that you and your ilk do not value open discussion." "That keith s can’t get that fact demonstrates he is not into an open discussion. keith s wants to dominate discussions with his strawmen, lies and misrepresentations." "Obviously you don’t know anything about science..."Reality
November 14, 2014 at 06:40 AM PST
#39 addendum Ok, no need to click on any link. Here it is: *********************************************************** *********************************************************** *********************************************************** Very interesting summary written by gpuccio:
Indeed, what we see in research about cell differentiation and epigenomics is a growing mass of detailed knowledge (and believe me, it is really huge and daily growing) which seems to explain almost nothing. What is really difficult to catch is how all that complexity is controlled. Please note, at this level there is almost no discussion about how the complexity arose: we have really non idea of how it is implemented, and therefore any discussion about its origin is almost impossible. Now, there must be information which controls the flux. It is a fact that cellular differentiation happens, that it happens with very good order and in different ways in different species, different tissues, and so on. That cannot happen without a source of information. And yet, the only information that we understand clearly is then protein sequence information. Even the regulation of protein transcription at the level of promoters and enhancers by the transcription factor network is of astounding complexity. Please, look at this paper: Uncovering Enhancer Functions Using the ?-Globin Locus. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4199490/pdf/pgen.1004668.pdf In particular Fig. 2. And this is only to regulate the synthesis of alpha globin in red cells, a very straightforward differentiation task. So, I see that, say, 15 TFs are implied in regulating the synthesis of one protein, I want to know why, and what controls the 15 TFs, and what information guides that control. My general idea is that, unless we find some completely new model, information that guides a complex process, like differentiation, in a reliable, repetitive way must be written, in some way, somewhere. That’s what I want to know: where that information is written, how it is written, how does it work, and, last but not least, how did it originate? — gpuccio
Dionisio
November 14, 2014 at 06:40 AM PST
Please, forget for a moment all these discussions about stats calculations and all that interesting stuff. Please, pay attention to this: read carefully this very important message gpuccio wrote in another thread: https://uncommondescent.com/evolution/a-third-way-of-evolution/#comment-528351 That's all. Thank you.

Dionisio
November 14, 2014 at 06:33 AM PST
MT, tangential, we are discussing particular classes of functional specificity. Yes at its own scale fine tuning of physics and cosmos enabling C-Chemistry, aqueous medium terrestrial planet in habitable zones life is an interesting design inference issue but that is not our focus here. As a simple point, break a grain of parrot fish poo [yup, that's what it really is . . . but nicely dried out as rock], and you still have grains of coral sand, just a bit smaller. Break a protein AA string and generally no functional protein. KF

kairosfocus
November 14, 2014 at 06:13 AM PST
F/N: Let's kick Axe's remarks into play from his recent (2010) paper, abstract and pp 9 - 11: ______________ ABSTRACT: >> Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a miniscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a care -ful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that rela-tively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence . . . >> Pp 5 - 6: >> . . . we need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontane-ous mutations to produce any new species-wide trait, meaning a trait that is fixed within the population through natural selection (i.e., selective sweep). Bacterial species are most conducive to this because of their large effective population sizes. 3 So let us assume, generously, that an ancient bacterial species sustained an effective population size of 10 ^10 individuals [26] while passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5 × 10 ^ 23 (= 5 × 10^ 9 x 10^4 x 10 ^10 ) cells that happen (by chance) to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5 × 10 ^23 ‘lucky survivors’ are the cells available for spontaneous muta-tions to accomplish whatever will be accomplished in the species. This number, then, sets the maximum probabilistic resources that can be expended on a single adaptive step. Or, to put this another way, any adaptive step that is unlikely to appear spontaneously in that number of cells is unlikely to have evolved in the entire history of the species. In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300 [27]). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions 4 , making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^ 23 . And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure. 
Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as 20 ^300 / 5×10^ 23 ~ 10 ^366 . In other words, we are supposing that particular functions requiring a 300-residue structure are real-izable through something like 10 ^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5×10 23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suit-able on average at any given position? The answer is calculated as the 300 th root of (5×10 23 ) -1 , which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoid-ing three or so unacceptable amino acids at each position along their lengths. No study of real protein functions suggests anything like this degree of indifference to sequence. In evaluating this, keep in mind that the indifference referred to here would have to charac-terize the whole protein rather than a small fraction of it. Natural proteins commonly tolerate some sequence change without com- plete loss of function, with some sites showing more substitutional freedom than others. But this does not imply that most mutations are harmless. Rather, it merely implies that complete inactivation with a single amino acid substitution is atypical when the start-ing point is a highly functional wild-type sequence (e.g., 5% of single substitutions were completely inactivating in one study [28]). This is readily explained by the capacity of well-formed structures to sustain moderate damage without complete loss of function (a phenomenon that has been termed the buffering effect [25]). Conditional tolerance of that kind does not extend to whole proteins, though, for the simple reason that there are strict limits to the amount of damage that can be sustained. A study of the cumulative effects of conservative amino acid substitutions, where the replaced amino acids are chemically simi-lar to their replacements, has demonstrated this [23]. Two unrelat-ed bacterial enzymes, a ribonuclease and a beta-lactamase, were both found to suffer complete loss of function in vivo at or near the point of 10% substitution, despite the conservative nature of the changes. Since most substitutions would be more disruptive than these conservative ones, it is clear that these protein functions place much more stringent demands on amino acid sequences than the above supposition requires. Two experimental studies provide reliable data for estimating the proportion of protein sequences that perform specified func -tions [--> note the terms] . One study focused on the AroQ-type chorismate mutase, which is formed by the symmetrical association of two identical 93-residue chains [24]. These relatively small chains form a very simple folded structure (Figure 5A). The other study examined a 153-residue section of a 263-residue beta-lactamase [25]. That section forms a compact structural component known as a domain within the folded structure of the whole beta-lactamase (Figure 5B). 
Compared to the chorismate mutase, this beta-lactamase do-main has both larger size and a more complex fold structure. In both studies, large sets of extensively mutated genes were produced and tested. By placing suitable restrictions on the al-lowed mutations and counting the proportion of working genes that result, it was possible to estimate the expected prevalence of working sequences for the hypothetical case where those restric-tions are lifted. In that way, prevalence values far too low to be measured directly were estimated with reasonable confidence. The results allow the average fraction of sampled amino acid substitutions that are functionally acceptable at a single amino acid position to be calculated. By raising this fraction to the power ?, it is possible to estimate the overall fraction of working se-quences expected when ? positions are simultaneously substituted (see reference 25 for details). Applying this approach to the data from the chorismate mutase and the beta-lactamase experiments gives a range of values (bracketed by the two cases) for the preva-lence of protein sequences that perform a specified function. The reported range [25] is one in 10 ^77 (based on data from the more complex beta-lactamase fold; ? = 153) to one in 10 ^53 (based on the data from the simpler chorismate mutase fold, adjusted to the same length: ? = 153). As remarkable as these figures are, par-ticularly when interpreted as probabilities, they were not without precedent when reported [21, 22]. Rather, they strengthened an existing case for thinking that even very simple protein folds can place very severe constraints on sequence. Rescaling the figures to reflect a more typical chain length of 300 residues gives a prevalence range of one in 10 ^151 to one in 10 ^104 . On the one hand, this range confirms the very highly many-to-one mapping of sequences to functions. The corresponding range of m values is 10 ^239 (=20 ^300 /10 ^151 ) to 10 ^286 (=20 ^300 /10 ^104 ), meaning that vast numbers of viable sequence possibilities exist for each protein function. But on the other hand it appears that these functional sequences are nowhere near as common as they would have to be in order for the sampling problem to be dis-missed. The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10 23 ). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.>> Pp 9 - 11: >> . . . If aligned but non-matching residues are part-for-part equivalents, then we should be able to substitute freely among these equivalent pairs without impair-ment. Yet when protein sequences were even partially scrambled in this way, such that the hybrids were about 90% identical to one of the parents, none of them had detectable function. Considering the sensitivity of the functional test, this implies the hybrids had less than 0.1% of normal activity [23]. So part-for-part equiva-lence is not borne out at the level of amino acid side chains. In view of the dominant role of side chains in forming the bind-ing interfaces for higher levels of structure, it is hard to see how those levels can fare any better. 
Recognizing the non-generic [--> that is specific and context sensitive] na-ture of side chain interactions, Voigt and co-workers developed an algorithm that identifies portions of a protein structure that are most nearly self-contained in the sense of having the fewest side-chain contacts with the rest of the fold [49]. Using that algorithm, Meyer and co-workers constructed and tested 553 chimeric pro-teins that borrow carefully chosen blocks of sequence (putative modules) from any of three natural beta lactamases [50]. They found numerous functional chimeras within this set, which clearly supports their assumption that modules have to have few side chain contacts with exterior structure if they are to be transport-Able. At the same time, though, their results underscore the limita-tions of structural modularity. Most plainly, the kind of modular-ity they demonstrated is not the robust kind that would be needed to explain new protein folds. The relatively high sequence simi-larity (34–42% identity [50]) and very high structural similarity of the parent proteins (Figure 8) favors successful shuffling of modules by conserving much of the overall structural context. Such conservative transfer of modules does not establish the ro-bust transportability that would be needed to make new folds. Rather, in view of the favorable circumstances, it is striking how low the success rate was. After careful identification of splice sites that optimize modularity, four out of five tested chimeras were found to be completely non-functional, with only one in nine being comparable in activity to the parent enzymes [50]. In other words, module-like transportability is unreliable even under extraordinarily favorable circumstances [--> these are not generally speaking standard bricks that will freely fit together in any freely plug- in compatible pattern to assemble a new structure] . . . . Graziano and co-workers have tested robust modularity directly by using amino acid sequences from natural alpha helices, beta strands, and loops (which connect helices and/or strands) to con-struct a large library of gene segments that provide these basic structural elements in their natural genetic contexts [52]. For those elements to work as robust modules, their structures would have to be effectively context-independent, allowing them to be com-bined in any number of ways to form new folds. A vast number of combinations was made by random ligation of the gene segments, but a search through 10^8 variants for properties that may be in-dicative of folded structure ultimately failed to identify any folded proteins. After a definitive demonstration that the most promising candidates were not properly folded, the authors concluded that “the selected clones should therefore not be viewed as ‘native-like’ proteins but rather ‘molten-globule-like’” [52], by which they mean that secondary structure is present only transiently, flickering in and out of existence along a compact but mobile chain. This contrasts with native-like structure, where secondary structure is locked-in to form a well defined and stable tertiary Fold . . . . With no discernable shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how per-vasive this problem is . . . 
Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of se-quence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype. [--> Recall, courtesy Wiki, phenotype: "the composite of an organism's observable characteristics or traits, such as its morphology, development, biochemical or physiological properties, phenology, behavior, and products of behavior (such as a bird's nest). A phenotype results from the expression of an organism's genes as well as the influence of environmental factors and the interactions between the two."] As a rough way of gauging how many new domains are typi-cally required for new adaptive phenotypes, the SUPERFAMILY database [54] can be used to estimate the number of different protein domains employed in individual bacterial species, and the EcoCyc database [10] can be used to estimate the number of metabolic processes served by these domains. Based on analysis of the genomes of 447 bacterial species 11, the projected number of different domain structures per species averages 991 (12) . Compar-ing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli,13 provides a rough figure of three or four new domain folds being needed, on aver-age, for every new metabolic pathway 14 . In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10 ^159 to one in 10 ^308 possibilities 15 , something the neo-Darwinian model falls short of by a very wide margin. >> _______________ Those who argue for incrementalism or exaptation and fortuitous coupling or the like need to address these and similar issues. KFkairosfocus
November 14, 2014 at 06:04 AM PST
Andre @ 30
Proteins = Complex + Specified
Coral Sand = specified
Plutonium = complex
At atomic level (ref comment @ 27) everything is complex (think electrons, muon, tau, quarks, gauge bosons) and specific (if not specific, the element will change to some other element).

Me_Think
November 14, 2014 at 06:01 AM PST
#33 Andre
Improbable and specified are the same characteristic?? I’m lost in what you are trying to say please clarify…..
I wrote something can be A and B at the same time - for example both improbable and specified (or large and red). That is not the same as saying they are the same characteristic. I also never wrote that something that has a chance value of 1 in 10^2500 is probable - but somehow you read that into something I wrote somewhere. If someone misunderstands what I have written I try to criticise myself for not being clear enough. But really I don't see how to be clearer in these cases.

markf
November 14, 2014 at 05:18 AM PST
@ Joe #13 'LoL!@ Me Think- All of science relies on observations- proper observations.' Oh no, Joe. As Dawkins pointed out so tellingly, things only appear to be what they appear to be. Empirical science is a busted flush.

Axel
November 14, 2014 at 05:17 AM PST
Mark Frank Improbable and specified are the same characteristic?? I'm lost in what you are trying to say please clarify.....

Andre
November 14, 2014 at 04:31 AM PST
Andre #22
So when something has a chance value of 10^2500 you deem it probable?
No. Where on earth did you get the idea I asserted that? Do you understand my point in #11 that something can be A and B at the same time when these are characteristics?

markf
November 14, 2014 at 04:24 AM PST
F/N: I probably should note that I do reckon with darwinian evo mechanisms by pointing out the role of incremental hill climbing and its limitations in the context of the need to account for novel body plans that exhibit massive FSCO/I. The problem is that FSCO/I locks you to islands of function on the requisites of specific interactions to gain functionality, and to get the requisite complexity incrementally you have to cross seas of non function landing you back in the challenge of blind chance walks with drift that is not correlated with locations of islands of function. The notion, usually implicit, of a vast continent of functionality that is incrementally accessible in a branching tree pattern, lacks empirical warrant. Just look at the discussions on teh challenge to ground the tree of life empirically over the past two years and you will see copious documentation, so again I have been strawmannised. Darwinian mechanisms may explain minor changes such as loss of eyes or wings, or finch beak variations or possibly industrial melanism and insecticide or drug resistance, typically by breaking things and facing a fitness cost, but not the creative origin of body plans requiring 10 - 100+ mn bases of fresh functionally co-ordinated genetic info. If you have an answer to this, the offer to host the essay is still open after two years and more. KFkairosfocus
November 14, 2014 at 04:00 AM PST
Me think.....

Proteins = Complex + Specified
Coral Sand = specified
Plutonium = complex

the key here about specified complexity is that the parts are packed in a very specific way to produce a very specific effect.

Andre
November 14, 2014 at 03:54 AM PST
MT: observation is what sets the ball rolling and keeps it in bounds in science. I used to reach students O Hi PET: observe, hypothesise, infer and predict empirically test, where of course there was a student in some of those classes called . . . Pet (now, a medical doctor, I see and chat with her dad -- a retired Police Sergeant and MBE, every so often). We are no longer in C19 when we could vaguely say protoplasm. We know that proteins depend on specifically sequenced AA strings, folding [with chaperoning and prions with scrapies and mad cow disease or even maybe Alzheimer's lurking in the wings], and coded numerically controlled machines. That, one has a false negative (here: not recognising FSCO/I because of lack of proper instruments and work due to state of the art) and is not able as yet c C19 to make relevant observations does nothing to side track the reality that we do have observed FSCO/I to deal with in the cell. Besides, c 1804, Paley had long since put ont eh table the thought exercise of the time-keeping, self replicating watch as a context that was already deeply insightful and suggestive on the issues of FSCO/I. So even macro-level observations and a careful use of the vera causa principle would have counselled caution even then. And post 1953 - 1970, we no longer have any such excuses. We can and do make observations and analysis that point clearly to the FSCO/I in the cell, and we need to reflect on the issues of getting to FSCO/I. At OOL and again at origin of body plans. KFkairosfocus
November 14, 2014 at 03:50 AM PST
Me Think. We also know that natural laws are capable of making crystals....... sonnets not so much.... Ever seen a sonnet blown by the wind? The water perhaps wrote one? Gravity perhaps? strong nuclear forces? Anything other than intelligence create a sonnet before? Has this been observed? So in our uniform experience, we know natural forces can not write sonnets......

Andre
November 14, 2014 at 03:49 AM PST
Andre, Here's my question @ 21: Let's take this further. If we observe Proteins, Coral Sand and Plutonium at atomic level, which do you think will be more complex and which will have high dFSCI? Note: Atomic structures are specific and have complexity.

Me_Think
November 14, 2014 at 03:48 AM PST
Correction: So, why would a sonnet or any sentences in any script qualify for dFSCI calculation, whereas crystals don't?

Me_Think
November 14, 2014 at 03:46 AM PST
Me_Think Duh.... Crystals lack complexity!

Andre
November 14, 2014 at 03:46 AM PST
Andre @ 21,
The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity
So, why would a sonnet qualify for dFSCI calculation, whereas crystals don't?

Me_Think
November 14, 2014 at 03:44 AM PST
Here is a cool website to learn about probabilities in math...... http://www.mathsisfun.com/data/probability.html

Andre
November 14, 2014 at 03:42 AM PST
Mark F So when something has a chance value of 10^2500 you deem it probable?

Andre
November 14, 2014 at 03:37 AM PST
Me_Think. Leslie Orgel answered that already.... Did you not get the memo? "In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity."

Andre
November 14, 2014 at 03:33 AM PST
A level playing field means using the correct tools for each object.

Joe
November 14, 2014 at 03:30 AM PST
Joe, This is a thought experiment, which is pretty routine. If your aim is to detect design, shouldn't you give a level playing field? Let's take this further. If we observe Proteins, Coral Sand and Plutonium at atomic level, which do you think will be more complex and which will have high dFSCI?

Me_Think
November 14, 2014 at 03:24 AM PST
Me Think, Obviously you don't know anything about science as science requires proper observations. We do not use a microscope to observe planets. All objects require the proper observation tools.

Joe
November 14, 2014 at 03:21 AM PST
Joe @ 16 Shouldn't both structures be observed at equal resolution if the objective is to detect design?

Me_Think
November 14, 2014 at 03:14 AM PST
Me Think:
I still want to know why coral sand should be observed with naked eyes where as protein should be observed using EM for dFSCI calculation.
I told you why.

Joe
November 14, 2014 at 03:11 AM PST
keith s:
I know that you personally value open discussion,
And we know that you and your ilk do not value open discussion. CSI exists regardless of how it was formed. That keith s can't get that fact demonstrates he is not into an open discussion. keith s wants to dominate discussions with his strawmen, lies and misrepresentations.

Joe
November 14, 2014 at 03:10 AM PST