
Keith S in a muddle over meaning, macroevolution and specified complexity


One of the more thoughtful critics of Intelligent Design is Keith S, from over at The Skeptical Zone. Recently, Keith S has launched a barrage of criticisms of Intelligent Design on Uncommon Descent, to which I have decided to reply in a single post.

Is Dembski’s design inference circular?

Keith S’s first charge is that Intelligent Design proponents have repeatedly ignored an argument he put forward two years ago in a comment on a post at TSZ (19 October 2012, at 5:28 p.m.), purporting to show that Dr. William Dembski’s design inference is circular. Here is his argument:

I’ll contribute this, from a comment of mine in the other thread. It’s based on Dembski’s argument as presented in Specification: The Pattern That Signifies Intelligence.

Here’s the circularity in Dembski’s argument:

1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it could not have been produced by unguided evolution or any other unintelligent process.

7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

I’m sorry to say that KeithS has badly misconstrued Dembski’s argument: he assumes that the “could not” in premise 1 refers to absolute impossibility, whereas in fact, it simply refers to astronomical improbability. Here is Dr. Dembski’s argument, restated without circularity:

1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes.

2. We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that it is certain beyond reasonable doubt that unintelligent processes did not produce the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it is certain beyond all reasonable doubt that it was not produced by unguided evolution or any other unintelligent process.
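
To make the arithmetic in steps 3, 4 and 5 concrete, here is a minimal sketch in Python. The probability below is an assumed, illustrative value rather than a real estimate of P(T|H), and I fold Dembski’s specificational-resources factor into the threshold for simplicity:

```python
import math

def sc_bits(p_t_given_h: float) -> float:
    """Bits of specified complexity: the negative log base 2 of P(T|H).
    The smaller the probability, the higher the SC value."""
    return -math.log2(p_t_given_h)

p = 2.0 ** -520        # assumed probability under "Darwinian and other material mechanisms"
bits = sc_bits(p)      # 520.0 bits
THRESHOLD = 500        # the universal probability bound discussed later in this post

# Step 5: if the SC value exceeds the threshold, design is inferred.
print("design inferred" if bits > THRESHOLD else "no design inference")
```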

I conclude that KeithS’s claim that Dr. Dembski’s design argument is circular rests upon a misunderstanding of the argument.

Keith S’s bomb, and why it falls flat

Three weeks ago, on Barry Arrington’s post, titled, No Bomb After 10 Years, KeithS put forward what he considered to be a devastating argument against Intelligent Design: that unguided evolution is literally trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things.

The argument, in a nutshell, goes like this:

1. We observe objective nested hierarchies (ONH)
2. Unguided evolution explains ONH
3. A designer explains ONH, but also a trillion alternatives.
4. Both unguided evolution and a designer are capable of causing ONH.
Conclusion: Unguided evolution is a trillion times better at explaining ONH.
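
The quantitative claim in the conclusion amounts to a likelihood ratio. Here is a minimal sketch of that arithmetic; the uniform-choice assumption behind the designer’s “trillion alternatives” is Keith S’s premise, not mine, and it is precisely what is contested below:

```python
# Likelihood-ratio reading of the "trillion times better" claim.
p_onh_given_unguided = 1.0            # premise 2: unguided (branching) evolution predicts ONH
n_designer_options = 10 ** 12         # premise 3: a designer has ~a trillion alternatives
p_onh_given_design = 1.0 / n_designer_options   # assumes the designer picks uniformly at random

likelihood_ratio = p_onh_given_unguided / p_onh_given_design
print(f"{likelihood_ratio:.0e}")      # 1e+12: "a trillion times better"
```

Both inputs to this ratio are challenged below: the claim that unguided processes predict an ONH, and the uniform-choice assumption behind the trillion alternatives.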

I responded to this argument in my post, Why KeithS’s bomb is a damp squib, which made five points in reply to Keith S. My second point was as follows:

The problem is that KeithS has conflated two hypotheses: the hypothesis of common descent (which is very well-supported by the evidence that objective nested hierarchies exist in living things), and the hypothesis of unguided evolution (which he also claims is well-supported by the evidence that objective nested hierarchies exist in living things).
The first hypothesis is indeed well-supported by the evidence, as the only known processes that specifically generate unique, nested, hierarchical patterns are branching evolutionary processes. The probability that any other process would generate such hierarchies is vanishingly low.

But if KeithS wishes to argue against intelligently guided evolution, then the two alternative hypotheses he needs to consider are not:

A: a branching evolutionary process (which can be modeled as a Markov process) generated the objective nested hierarchies we find in living things; and

~A: an Intelligent Designer generated these objective nested hierarchies, but instead:

A: an unguided process generated the objective nested hierarchies we find in living things; and

~A: an intelligently guided process generated these objective nested hierarchies.

The point KeithS makes in his essay is that on hypothesis ~A, the likelihood of B (objective nested hierarchies in living things) is very low. However, it is also true that on hypothesis A, the likelihood of B is very low, as the vast majority of unguided processes don’t generate objective nested hierarchies.

KeithS’s reply here (in comment 76):

That’s not true.
In reality, mutation rates are low enough and vertical inheritance predominates enough that we can treat unguided evolution as a Markov process.

My reply:
Here, Keith S attempts to rebut my argument that “the vast majority of unguided processes don’t generate objective nested hierarchies” by pointing out (correctly) that the unguided evolution we observe during the history of animal life on Earth – if we ignore the prokaryotes here and focus on the 30 major taxa of animals, as Theobald does in his 29 Evidences for Macroevolution – is indeed a Markov process, since vertical inheritance predominates. However, this is not germane to the mathematical argument I put forward. The question is not whether a Markov process did indeed generate the 30 taxa of animals living on Earth, but rather whether the only unguided processes in Nature that would have been capable of generating various groups of animals on some planet harboring life were Markov processes (which are the only processes known to automatically generate objective nested hierarchies).
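
To illustrate why branching processes with vertical inheritance automatically yield objective nested hierarchies (the point at issue in this exchange), here is a toy simulation of my own; the tree depth and character labels are arbitrary:

```python
import itertools

_new_char = itertools.count()

def evolve(lineage, depth):
    """Toy branching process with strictly vertical inheritance:
    each daughter branch gains one new heritable character, then splits again."""
    if depth == 0:
        return [lineage]
    taxa = []
    for _ in range(2):                          # bifurcation at every node
        child = lineage | {next(_new_char)}     # a new character, inherited thereafter
        taxa.extend(evolve(child, depth - 1))
    return taxa

taxa = evolve(frozenset(), depth=3)             # 8 terminal taxa

# Each character defines the group of taxa that carry it. Under strictly
# vertical inheritance, any two such groups must be nested or disjoint,
# which is exactly what an objective nested hierarchy requires.
groups = [{i for i, t in enumerate(taxa) if c in t} for c in set().union(*taxa)]
assert all(a <= b or b <= a or not (a & b) for a in groups for b in groups)
print("All character-defined groups nest: an objective nested hierarchy.")
```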

For instance, we might imagine a natural process X that generates various types of animals on life-bearing planet Z, where these animals do not exhibit objective nested hierarchies. This is just as fair – or just as unfair – as Keith S arguing that an Intelligent Designer might have produced various types of animals which did not exhibit objective nested hierarchies.

The only way for Keith S to refute the hypothetical scenario I proposed would be to argue that life-forms which did not exhibit objective nested hierarchies would not be viable (over the long term), for some reason – which implies that the only life-forms we are likely to find in the cosmos are ones which do exhibit these hierarchies. But if that were the case, then the same argument would explain equally well why a Designer would refrain from making life-forms which did not exhibit objective nested hierarchies. And in that case, the Designer hypothesis explains the presence of objective nested hierarchies in living things just as well as the hypothesis of unguided evolution.

Why Ockham’s razor fails to support Keith S’s claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life

In an effort to further discredit Intelligent Design, Keith S appeals to Ockham’s razor. I’ll address that argument in a moment, but for now, let’s suppose (for the sake of argument) that Keith S is right, and that Intelligent Design is a redundant hypothesis when it comes to explaining the properties of living things. Even if that were the case, that’s not the same thing as the mathematical claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life. (We don’t say, for instance, that the hypothesis that angels push the planets round the Sun is trillions of times worse than the hypothesis that they are moved by the forces postulated in Newtonian mechanics; we just say that we have no need for the former hypothesis.) Ockham’s razor is a non-quantitative device for eliminating unnecessary explanations; hence it cannot be used to support quantitative claims regarding the superiority of one hypothesis over another.

I conclude that Keith S’s appeals to Ockham’s razor are completely beside the point. Even if he is right – and as we’ll see below, there are excellent grounds for thinking that he isn’t – the mathematical argument against Intelligent Design is invalid.

Keith S’s Fourfold Challenge and the Rain Fairy

And now, without further ado, let’s have a look at Keith S’s Fourfold Challenge (see also here):

Some more questions for the ID supporters out there:

1. Bob is walking through the desert with his friend, a geologist. They come across what appears to be a dry streambed. After some thought, Bob states that every rock, pebble, grain of sand and silt particle was deliberately placed in its exact position by a Streambed Designer. His friend says “That’s ridiculous. This streambed has exactly the features we would expect to see if it was created by flowing water. Why invoke a Streambed Designer?”

Who has the better theory, Bob or his friend?

2. Bob is invited to the scene of an investigation by a friend who is an explosive forensics expert. They observe serious damage radiating out in all directions from a central point, decreasing with distance, as if an explosion had taken place. Bob’s friend performs some tests and finds large amounts of explosive residue. Bob says, “Somebody went to a lot of trouble to make it look like there was an explosion here. They even planted explosive residue on the scene! Of course, there wasn’t really an explosion.”

Who has the better theory, Bob or his friend?

3. Bob and another friend, an astronomer, observe the positions of the planets over several years. They determine that the planets are moving in ellipses, with the sun at one of the foci. Bob says, “Isn’t that amazing? The angels pushing the planets around are following exactly the paths that the planets would have followed if gravity had been acting on them!” The astronomer gives Bob a funny look and says “Maybe gravity is working on those planets, with no angels involved at all. Doesn’t that seem more likely to you?”

Who has the better theory, Bob or his friend?

4. Bob is hanging out at the office of a friend who is an evolutionary biologist. The biologist shows Bob how the morphological and molecular data establish the phylogenetic tree of the 30 major taxa of life to an amazing accuracy of 38 decimal places. “There couldn’t be a better confirmation of unguided evolution,” the biologist says. “Don’t be ridiculous,” Bob replies. “All of those life-forms were clearly designed. It’s just that the Designer chose to imitate unguided evolution, instead of picking one of the trillions of other options available to him.”

Who has the better theory, Bob or his friend?

Share your answers with us. Did your answers to the four questions differ? If so, please explain exactly why.
And ponder this: If you are an ID supporter, then you are making exactly the same mistake as Bob does in the four examples above, using the same broken logic. Isn’t that a little embarrassing? It might be time to rethink your position.

And don’t forget the Rain Fairy.

Keith S describes the Rain Fairy hypothesis here:

The only designer hypothesis that fits the evidence is one in which the designer mimics (by desire, coincidence, or limitation) the patterns of unguided evolution. The only Rain Fairy hypothesis that fits the evidence is one in which the Rain Fairy mimics (by desire, coincidence, or limitation) the patterns of unguided meteorology. Any reasonable person will reject the Rain Fairy and Designer hypotheses in favor of their competitors, which explain the evidence far, far better.

I’d like to make two points in reply. The first is that we have comprehensive natural explanations for all of the features of the non-biological phenomena which figure in Keith S’s examples: streambeds, chemical explosions, the movement of the planets and weather patterns. By contrast, in Keith S’s example relating to the tree of life, the Darwinian hypothesis of branching evolution explains only the patterns we find in the tree of life. It does not explain the other features of living things. In other words, Darwinian evolution (or mutation-driven evolution, for that matter) needs to be able to provide a comprehensive theory of living things and their properties before we can confidently declare that we have no need for the hypothesis of Intelligent Design.

The second (and related) point I’d like to make with respect to the Rain Fairy example is that meteorological phenomena exhibit no patterns with a high degree of specified complexity – and even if they did, none of these patterns is functional. The biological world, by contrast, is rife with patterns exhibiting a high degree of functional specified complexity – proteins, for instance. Hence the Rain Fairy analogy does not hold.

Why ID supporters would not be fazed if an unguided process could be shown to have generated the objective nested hierarchy found in animals

But let us be generous, and suppose (for argument’s sake) that Keith S can come up with a good natural reason showing why (a) the only kinds of animals that are likely to be generated on a life-bearing planet by unguided processes will be ones exhibiting objective nested hierarchies, whereas (b) an Intelligent Designer would not be bound by such constraints. Even so, Keith S’s argument is still vulnerable to the third objection which I listed in my post, Why KeithS’s bomb is a damp squib:

My third point is that KeithS’s argument assumes that the genetic and morphological features on the basis of which living things are classified into objective nested hierarchies were generated by the same (unguided, Markovian) processes which generate the branches in the hierarchies. This is unlikely, even on a standard evolutionary view: features take time to evolve, and therefore would presumably have appeared at some time subsequent to the branch nodes themselves. Thus it could well be the case that while unguided processes explain the existence of objective nested hierarchies in the living world, guided processes are required to explain some or all of the features in these hierarchies. (Italics added – VJT.)

Features that might need to be explained by guided processes include new proteins appearing in animals, as well as new cell types in distinct lineages of animals and the appearance of new control hierarchies regulating body plans in animals.

Unfortunately, KeithS’s reply here (in comment 89 on my post) misses the point I was trying to make:

I’m not sure why you think this is an issue. The taxa in a cladogram are always at the ends of the branches, never at the nodes.

It isn’t enough to show that guided processes might be involved. You need to show that they must be involved, because otherwise you are still at the trillions-to-one disadvantage.

In his first sentence, Keith S makes a valuable concession, without realizing it. He concedes that the processes which generated the branches in the tree of animal life need not be the same as the processes which generated the features which distinguish the various types of animals. Hence it could be the case that the former are unguided, while the latter are guided. That was the point I wished to make. Arguing against Intelligent Design by appealing to the branching process which generated the tree of life is futile, because ID advocates don’t regard the branching process as evidence of intelligent design in the first place. In other words, even if unguided evolution is trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things, ID advocates can still answer: “So what? At best, you’ve shown that the unguided branching processes are a better explanation for objective nested hierarchies in living things; but you’ve failed to demonstrate that these processes are sufficient to explain the characteristics of living things.”

Keith S goes on to point out, correctly, that “It isn’t enough to show that guided processes might be involved.” Intelligent Design proponents need to show that guided processes must be involved in generating these features. He spoils his argument somewhat by referring to the “trillions-to-one disadvantage” which the Intelligent Design hypothesis allegedly suffers from (and which I’ve discredited above). Nevertheless, Ockham’s razor alone would suffice to rule Intelligent Design out of court, unless ID advocates could demonstrate the insufficiency of unguided processes to explain the biological features of animal life. So the question we need to answer is: are there any barriers to the evolution of the 30 major groups of animals, via unguided processes?

Barriers to macroevolution – they’re real!

Keith S rightly contends that the onus is on the Intelligent Design proponent to demonstrate the existence of barriers to macroevolution. My recent post, titled, Barriers to macroevolution: what the proteins say, described one such barrier: the evolution of proteins. (As any biochemist will tell you, there are many kinds of proteins which are unique to each of the 30 major taxa of animals, so this problem is quite separate from the origin-of-life problem.) I’ll quote just the first three paragraphs of my post:

KeithS has been requesting scientific evidence of a genuine barrier to macroevolution. The following is a condensed, non-technical summary of Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds. Since (i) proteins are a pervasive feature of living organisms, (ii) new proteins and new protein folds have been continually appearing throughout the four-billion-year history of life on Earth, and (iii) at least some macroevolutionary events must have involved the generation of new protein folds, it follows that if Dr. Axe’s argument is correct and neo-Darwinian processes are incapable of hitting upon new functional protein folds, then there are indeed genuine barriers to macroevolution, in at least some cases. The argument put forward by Dr. Axe is robustly quantifiable, and it is fair to say that Dr. Axe carefully considers the many objections that might be put forward against his argument. If there is a hole in his logic, then I defy KeithS to find it.

Finally I would like to thank Dr. Axe for putting his paper online and making it available for public discussion. The headings below are my own; the text is entirely taken from his paper.

Abstract

Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem – the sampling problem – was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.
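
The “sampling problem” named in the abstract is easy to state numerically. The figures below are illustrative round numbers of my own choosing, not values taken from Dr. Axe’s paper:

```python
from math import log10

residues = 300                         # length of a modest protein domain
log_seq_space = residues * log10(20)   # log10 of the number of possible sequences (~390)
log_trials = 40                        # assumed log10 cap on sequences life has ever sampled

print(f"sequence space: ~10^{log_seq_space:.0f}")
print(f"fraction sampled: ~10^{log_trials - log_seq_space:.0f}")   # ~10^-350
```

Whether that minuscule sampling fraction matters then depends entirely on how common functional folds are within the space, which is the question Dr. Axe’s paper addresses.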

I then issued a further invitation to Keith S to respond in a subsequent comment:

KeithS,
I only have a few minutes, but I’d like to say that you are welcome to post scientific criticisms of Dr. Axe’s argument on this thread, if you have any.

Another commenter on the thread invited him to do the same:

I think that you would gain much credibility with many, if you were to take that advice. Why not start with scientific responses to the issues raised in “Barriers to Macroevolution: what the proteins say”.

And what was KeithS’s response? An appeal to circular, blatantly question-begging logic!

If you’ve been following UD lately, you’ll know that I have presented an argument demonstrating that ID is literally trillions of times worse at explaining the evidence when compared to unguided evolution.

And I’ve been trying to tell Keith S that the evolution of proteins constitutes just such a barrier to macroevolution, by appealing to the paper by Dr. Douglas Axe from which I quoted above.

To my dismay and disappointment, the rest of my thread on Barriers to macroevolution was taken up with an arcane discussion of censorship of previous posts on Uncommon Descent, which is neither here nor there.

I repeat my challenge: can Keith S kindly tell me what’s wrong with the reasoning in Dr. Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, which I summarized in a non-technical form in my recent post?

In a muddle over meaning

Not content with leaving matters there, Keith S issued a challenge of his own over at gpuccio’s post, An attempt at computing dFSCI for English language. In his post, gpuccio wrote:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB [Universal Probability Bound – VJT]. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

I don’t want to discuss the mathematics behind gpuccio’s calculation here, except to say that it erred unduly on the side of generosity, in conceding the existence of a pool of 200,000 English words (an under-estimate, by the way), and asking what percentage of 600-letter sequences made up entirely of these words would constitute a meaningful sonnet. Some commenters objected that there isn’t a clear black-and-white dividing line between meaningful poetry and meaningless strings of words which obey the rules of English syntax, as the history of the Ern Malley hoax shows. But let’s face it: if we saw a message with the words, “Colorless green ideas sleep furiously” written 100 times, we’d all conclude that it was designed, either directly (by a human being) or indirectly (by a computer programmed by a human being).
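
For readers who do want the arithmetic, gpuccio’s calculation reduces to a subtraction in log space. The 32-symbol alphabet below is my simplification to match his “about 3000 bits”; the 2^2500 bound on the target space is his stated assumption:

```python
from math import log2

bits_per_char = log2(32)                   # letters plus punctuation: 5 bits per character
search_space_bits = 600 * bits_per_char    # 3000 bits for a 600-character sonnet
target_space_bits = 2500                   # assumed: fewer than 2**2500 meaningful strings

dFSCI_bits = search_space_bits - target_space_bits
print(dFSCI_bits)                          # 500.0, meeting Dembski's 500-bit bound
```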

In my opinion, however, a much fairer question to ask would be: if we received a binary signal from outer space and decoded it into (say) ASCII code, only to find that it spelt out a Shakespearean sonnet, what would the odds be that it was generated via an unguided process? I believe this example is a more appropriate one, as it doesn’t start with a pool of words, or even letters, but with simple binary signals which can be used to make letters, which can be arranged into English words, which can in turn be arranged into meaningful sentences. And even if the boundary between meaningful and meaningless sentences is a little blurry at times, the boundary between syntactically valid sentences and sentences with bad syntax is a lot clearer and less ambiguous. Using my analogy, we can certainly show that the odds of a binary signal from space spelling out a sonnet of any kind are less than 1 in 2^500.
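
For concreteness, the decoding step in this thought experiment is trivial; the toy signal below is a made-up stand-in, not an actual message:

```python
def decode_ascii(bitstring: str) -> str:
    """Group a binary signal into 8-bit chunks and decode them as ASCII."""
    return "".join(chr(int(bitstring[i:i + 8], 2)) for i in range(0, len(bitstring), 8))

signal = "".join(format(ord(c), "08b") for c in "Shall I")   # toy stand-in signal
print(decode_ascii(signal))                                  # "Shall I"
```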

And what was Keith S’s devastating reply to gpuccio? The main points that he makes can be found in comments 9, 11 and 13 on gpuccio’s post. I’ll address them one at a time.

gpuccio,

We can use your very own test procedure to show that dFSCI is useless.

Procedure 1:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.

Procedure 2:

1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.

The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

Even your own test procedure shows that dFSCI is useless, gpuccio.

Keith S’s argument misses the point here. What he fails to ask is: why did we choose 600 characters as a cutoff point and not six? Because we can show that unguided processes are fully capable of generating six-character strings, like “Stop it”.
If I discovered a binary signal from outer space that spelt out these characters when converted into ASCII, I certainly would not conclude that it was designed.

On the other hand, we can calculate that the probability of unguided processes coming up with a meaningful 600-character string is so low that we would not expect this event to happen even once in the history of the observable cosmos – in other words, the probability is less than 1 in 2^500, or 1 in 10^150. Since the string in question is specified (as it has a semantic meaning), a design inference is warranted.
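
For the record, the two forms of the bound quoted here do agree; a one-line check:

```python
from math import log10

print(500 * log10(2))   # ~150.5, so a probability of 1 in 2**500 is roughly 1 in 10**150
```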

Keith S continues:

gpuccio,

We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless.

The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.

There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular.

What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular.

dFSCI is a fiasco.

Gpuccio’s calculations were perfectly appropriate for the class of entities he was discussing – namely, character strings. Character strings are not alive, so they are incapable of evolving by the non-random process of natural selection.
In addition, natural selection does not select for semantic meaning; what it selects for is functionality. The latter can be refined over the course of time by evolution, whereas the former cannot, as unguided evolution is blind to it.
Of course, that leaves us with the question of whether gpuccio’s post can be used to undermine the theory of evolution by natural selection. But gpuccio never discussed that question in his post, which was simply an attempt to calculate the dFSCI in a Shakespearean sonnet.
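
For readers unfamiliar with the distinction the two sides are arguing over, here is the standard toy contrast between blind sampling and cumulative selection (a sketch of Dawkins’s “weasel” illustration, written from memory; it illustrates Keith S’s point about selection, and is not a model of anything biological):

```python
import random

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

# Blind sampling: the chance of drawing the target string in one try.
p_blind = (1 / len(CHARS)) ** len(TARGET)        # ~1e-40

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy a string, changing each character with a small probability."""
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

# Cumulative selection: keep the closest of 100 mutated copies each generation.
current = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while current != TARGET:
    generations += 1
    current = max((mutate(current) for _ in range(100)),
                  key=lambda s: sum(a == b for a, b in zip(s, TARGET)))

print(f"blind-sampling probability: {p_blind:.1e}")
print(f"cumulative selection reached the target in {generations} generations")
```

Whether this kind of selection is available to character strings at all, and whether it selects for meaning rather than function, is exactly what is in dispute above.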

Finally, Keith S writes:

Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it.

KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s.

All three concepts are fatally flawed and cannot be used to detect design.

I repeat: if Keith S wants a decent probabilistic calculation which takes account of “Darwinian and other material mechanisms”, then why doesn’t he respond to the probability calculations contained in the paper I cited above by Dr. Axe (see pages 10 and 11), which is titled “The Case Against a Darwinian Origin of Protein Folds”? Do that, Keith S, and then we’ll talk.

Comments
HeKS, forgive me. I was tired and did not remember to notify you of your headlined comment, here. KF kairosfocus
Ok, well I'll await your response over there. I'll look for it tomorrow. HeKS
HeKS, KF has started yet another thread to discuss my argument, using your comment above as the OP, so let's continue our discussion there. keith s
In this thread, I noticed Keiths posting a summary of his supposed 'bomb' argument. I haven't been around much lately and haven't seen too much of the discussion around his argument that has apparently been taking place, but seeing his summary I decided to offer a few initial thoughts and ask a few questions. Keith responded by pointing me to his original article at TSZ. After reading it, I came away thinking his argument was worse than I had originally thought and asked that he respond to my previous comments so we could move forward from there as we have time. He asked me to repost my comments here, so that's what I'm doing. The following will be the brief history of our interactions in that thread and then Keith can respond as he sees fit. Keith's summary of his argument was posted in this comment. This was my initial response:

-------------------------

I haven’t been around too much lately cause I’ve been busy with other stuff, but seeing your argument in #59, I have a few questions and then, if I have time, I might address it further in coming days. You say:
3. We know that unguided evolution exists. Even the most rabid IDer/YEC will admit that antibiotic resistance can evolve
This seems to me like a cheat. First, in order for this claim to have any value at all for your argument, we would have to assume that the biological processes that make unguided evolution (if indeed it is unguided) even possible are themselves not designed. A system can be designed to allow for inputs that are not specifically predicted and generate outputs that are not specifically intended, and yet the framework that allows for this to happen can be designed to specifically fulfill this purpose. It’s also possible for a system to be designed to generate outputs within certain constraints when it receives one or more of a wide range of predicted inputs. Furthermore, a system can be designed to degrade gracefully when certain functions or data become unavailable, so that the system as a whole can continue functioning in some form, though it sports a lesser array of features, or, alternatively, it can throw up some kind of fatal error that completely crashes the system when core features or data are missing. People who program for the web and for various browsers and devices (desktop, tablet, phone) do this kind of thing all the time, and programming and markup languages include features to make this kind of stuff easier.

So, when you say that we know that unguided evolution exists, all we really know is that specific events happen that we couldn’t predict in advance, and they sometimes result in relatively minor changes in organisms. We do not know that the systems that allow this to happen were not designed or that the possibility of this happening was not a specifically intended function of the system to allow for biological diversity and adaptation to changing environments.

Second, your argument assumes that this “unguided evolution”, if it exists, is of a sort that it could, at least in principle, offer some kind of possible explanation for the macroevolutionary changes that would be necessary to produce an objective nested hierarchy naturalistically, even if it would require a large degree of extrapolation. The problem is that the type of “unguided evolution” we observe is not one that is observed to add novel functional information to the genome. It slightly alters and degrades genetic information, and it breaks existing functions or sometimes fixes functions that had previously been broken by simple point mutations, but we do not see it adding brand new complex (in the sense of “many well-matched parts”) functionality that didn’t exist before.

So the type of “unguided evolution” that “even the most rabid IDer/YEC” observes is not of the kind that they would have any reason to think can offer, even in principle, a possible explanation for the macroevolutionary changes needed to produce an ONH naturalistically at any point that the ONH requires a significant increase in functional genetic information. Wherever that would be necessary, any appeal to the known existence of “unguided evolution” as a basic feature of reality would not even simply be an extreme unwarranted extrapolation of the available evidence, but would actually be the misleading invocation of a process that does pretty much exactly the opposite of what we observe “unguided evolution” doing.

So, if by your #3 you mean something like this: We know that there exists an unguided natural mechanism of a sort that might, at least in principle, be able to explain the significant increases in functional genetic information at particular nodes of the supposed ONH of life. Then I have to say, no, we don’t know of any such thing.
We don’t know that the apparently “unguided evolution” we observe is not made possible by designed systems intended to allow for that evolution to happen in the first place, and we don’t know that there exists any unguided mechanism that could, in principle, account for significant increases in functional genetic information or significant changes in body plans, whereas as we do know of constraints that would seem to prevent such things. Of course, if you want to say that the ONH results from a gradual and unguided degrading of genetic information, that could work, at least to a certain point, and could be viewed as a reasonable extrapolation of the “unguided evolution” we observe. Of course, this raises the question of where the high information-content of the ancestor genome came from in the first place and we would have to account for the places in the hierarchy where a significant increase or change in functional information seems to have arisen.
4. We don’t know that the putative designer exists, so ID is already behind in the race.
We don’t begin with a knowledge that the designer exists, but we do know that intelligent design exists as a form of causation, that it is capable of generating significant amounts of functional information, and that it is capable of arranging many parts into complicated relationships that carry out specific functions. We even know that human intelligent design is capable of building molecular machines, as in the work of Dr. James Tour. So, in terms of invoking some kind of causal force or mechanism that is actually known to exist and that could, in principle, explain what we see in nature at various nodes of the alleged ONH, including systems that would allow for the graceful degrading of genetic information, ID is far ahead in the race.
UE is literally trillions of times better than design at explaining the evidence …. 12. If we take that approach and assume, temporarily and for the sake of argument alone, that the designer is responsible for the diversity of life, we can see that ID does not predict an objective nested hierarchy out of the trillions of possibilities.
What are these trillions of possibilities? How did you come up with “trillions”, since you say “literally trillions”? Can you give me some examples of how else the designer might have designed life? How many ways might he have designed life if we don’t assume that he designed every current species in its current form all at once? How many of those trillions of ways require that the designer ignore efficient and flexible design principles? Or that he endlessly reinvent the wheel? Also, what method are you using to reasonably constrain predictions of what approach the designer might use, and what pattern to life might ensue, without any knowledge or hypothesis of what the designer was wanting to achieve or even what degree of specificity the designer might have had in mind for the species we currently observe?

Anyways, those are a few initial thoughts I have about your argument. There’s probably not much point in going any further or addressing any other issues until I hear your thoughts on this stuff.

-------------------------

Keiths suggested I might want to do some background reading on his argument (i.e. read his original article, which he linked me to) before trying to tackle it. I did, and responded with the following:

-------------------------

I went and read your article at TSZ as requested. Having done so, I now think your argument is worse than I originally thought, so why don’t you start by addressing what I said and we can go from there as we have time.

-------------------------

He responded with:

-------------------------

HeKS,
I went and read your article at TSZ as requested. Having done so, I now think your argument is worse than I originally thought…
A lot of people say things like that. Then they try to refute the argument, and fail. It’s been almost a month now with no refutation.
…so why don’t you start by addressing what I said and we can go from there as we have time.
Repost your comment on this thread, which is the most recent thread discussing my argument. I’ll respond there.

-------------------------

And now my brief response to that:
A lot of people say things like that. Then they try to refute the argument, and fail. It’s been almost a month now with no refutation.
A lot of people have said that the more they understand your argument the worse an argument it seems to them? That's not really surprising. You say that it has been almost a month with no refutation, but are we really supposed to expect that you would readily admit a refutation to an argument which you are obviously quite fond of? It tends to be the case that when someone offers an argument that appears to be poorly reasoned and then goes on to loudly promote that argument as a powerful refutation of an opposing point of view, it is highly unlikely that the person will be prone to recognizing when serious flaws are pointed out in it, much less that it has been soundly refuted. I highly doubt that I'm going to convince you that your argument is flawed, or that anyone else could either, but some of the flaws seem rather obvious. Anyway, please offer some response on my initial comments and questions, and feel free to ask for clarification if you don't understand any particular point I'm making. HeKS
keiths:
In my 2-dimensional landscape, one dimension is vertical and the other is horizontal. You even quoted me saying exactly that:
indeed. that's why I was mocking you. http://en.wikipedia.org/wiki/Horizontal_and_vertical Mung
keiths:
If I’m banned, y’all can find me at The Skeptical Zone.
lol. Get over yourself. Mung
Andre: How did an unguided process create a guided process to prevent unguided processes from happening, for people that have this all figured out you sure are elusive

Still not sure what you're asking. If you want a detailed description of historical events, that's not known at this time. Bacteria have mechanisms to cause cell death, and long ago bacteria are thought to have formed symbiotic relationships with cells. The bacterial machinery of cell death was coopted for the advantage of both the bacteria and the host. However, as we said, the details are not known. The basic process is still variation and natural selection, though. Zachriel
Zachriel How did an unguided process create a guided process to prevent unguided processes from happening, for people that have this all figured out you sure are elusive..... I know what PCD does in cell health and disease management.... PCD kills the cells between fingers during development. Just tell me how PCD emerged please...... Andre
IOW Zachriel has no idea how unguided evolution could have produced such a thing. Joe
Andre: I’m asking you to demonstrate how Apoptosis, autophagy and necrosis evolved

We already answered your question with regard to apoptosis. The details of the evolutionary history are still murky, but the basic principle is that it can sometimes be advantageous for a cell to die in order to propagate its clone. The result is the propagation of the genome, including the trait regarding apoptosis. Zachriel
Zachriel 1.) I'm asking you to demonstrate how Apoptosis, autophagy and necrosis evolved........ How did unguided evolution build a guided process to prevent unguided processes from happening? 2.) PCD is vital to multi and unicellular organisms, it stops working and the organism dies...... so please enlighten us how this evolutionary conserved mechanism emerged? Andre
Andre: How did unguided processes create this guided process to prevent unguided processes from happening? Sorry, thought you were asking how apoptosis evolved, not the origin of life. Andre: When PCD becomes dysregulated the organism dies….. It is evolutionary conserved to just to boot…. Huh? Zachriel
Zachriel That is all pretty interesting but it does not answer the question..... How did unguided processes create this guided process to prevent unguided processes from happening? When PCD becomes dysregulated the organism dies..... It is evolutionary conserved to just to boot..... Andre
vjtorley: With Intelligent Design, the situation is reversed: unguided processes are unable to explain the origin of protein folds.

Gap argument, which is filled by what we know about the frequency of folds in sequence space.

vjtorley: What about the hypothesis that an unguided process X generated the ONHs we find in Nature, while another unguided process [a Markovian one] accounts for observed microevolution?

Observed rates of mutation roughly match historical rates from the posited phylogeny.

vjtorley: A: a single, unguided process generated (i) the objective nested hierarchies that we find in living things on Earth and (ii) the actual microevolution we observe, which is a Markov process, with slow mutation rates and predominantly vertical inheritance;

While it's reasonable to say that the branching process is *intrinsic* (that is, it's ad hoc to say a designer created each of billions of furcations), we can't tell from the existence of the objective nested hierarchy whether the overall pattern of the tree has been shaped by design, or even whether the process was started with a particular goal in mind. Think of a Cosmic Gardener who plants the tree, then prunes it as necessary to create the desired shape. http://www.thelovelyplants.com/wp-content/uploads/2010/11/topiary-sculpture.jpg Other evidence is required to explain the shape of the tree (natural selection, contingent events). However, the historical branching process is strongly supported by the objective nested hierarchy. Fish and fishers share a common ancestor.
HAMLET: A man may fish with the worm that hath eat of a king, and eat of the fish that hath fed of that worm. KING CLAUDIUS: What dost thou mean by this? HAMLET: Nothing but to show you how a king may go a progress through the guts of a beggar. http://www.rhymezone.com/r/gwic.cgi?Path=shakespeare/tragedies/hamlet/iv_iii//&Word=a+man+may+fish+with+the+worm+that+hath+eat+of+a#w
Zachriel
Andre: Then you need to explain this {programmed cell death}!

Evolutionary success depends on successful reproduction. Each cell exists because it is a continuation of a long line of parent cells. However, the cell can continue either by reproducing itself, or by helping another closely related cell reproduce. In metazoa, apoptosis isn't a thorny issue because the body's cells are clones. Salmon die after spawning, but this doesn't matter in the long run, as long as they have offspring. What's interesting is seeing apoptosis in single-celled organisms. You would think it would be every cell for itself! But apoptosis in single-celled organisms isn't too surprising when you think about it. Single-celled organisms generally reproduce by cloning, so a colony of such organisms is going to share almost the exact same genetic content. If the neighboring cell survives, it's almost as if the original cell survived. So, roughly speaking, what helps the colony helps each cell. Zachriel
Hi Keith S, I'm in a bit of a hurry now, but I'll just respond to two comments of yours. First, re the Rain Fairy and the other invisible designer hypotheses, you write:
Those explanations are extremely poor because they are unfalsifiable. Unguided meteorology is a much better explanation than the Rain Fairy because it makes testable predictions that are resoundingly confirmed. Ditto for orbital mechanics. Unguided evolution is a much better explanation than the Designer for the same reason.
I would say that those explanations are extremely poor because the invisible agent does no extra work: there's nothing we can observe that he/she can explain and that unguided natural processes can't. With Intelligent Design, the situation is reversed: unguided processes are unable to explain the origin of protein folds. You also write:
I am not saying that we observe an ONH, and should therefore assume that unguided evolution is a Markov process. It’s the other way around: we can see that actual, observed evolution is a Markov process, with slow mutation rates and predominantly vertical inheritance. We know that those characteristics predict an ONH. We observe the predicted ONH, out of trillions of possibilities. This is a fantastic success for the UE hypothesis. You can’t do the same for the Designer, because you know nothing about him/her/it.
OK, so it seems to me that you are expanding the evidence set. You are saying the two competing hypotheses should be:

A: a single, unguided process generated (i) the objective nested hierarchies that we find in living things on Earth and (ii) the actual microevolution we observe, which is a Markov process, with slow mutation rates and predominantly vertical inheritance; and

~A: an intelligently guided process generated (i) and (ii).

You're saying that the second hypothesis is ad hoc, so the first hypothesis, which automatically predicts a Markovian process, is not. Two quick comments: first, your hypothesis A includes the fact that observed evolution is Markovian, so it amounts to saying, "A Markovian process generated a Markovian process [and all of the ONHs we observe in Nature as well]," which strikes me as cheating; second and more importantly, the two hypotheses are not mutually exhaustive. What about the hypothesis that an unguided process X generated the ONHs we find in Nature, while another unguided process [a Markovian one] accounts for observed microevolution? You need to be very careful about how you state these hypotheses.

But as I wrote above, even if you were right on this one, the really important question relates to barriers to macroevolution. Here's a question for you and Me_Think: You've copied excerpts from Wagner's book. What I'd like to know, in a nutshell, is this: what's wrong with Dr. Axe's argument, which I summarized for you on a recent thread? Which assumption of his is factually wrong, in your view? vjtorley
keith s:
I am not saying that we observe an ONH, and should therefore assume that unguided evolution is a Markov process. It’s the other way around: we can see that actual, observed evolution is a Markov process, with slow mutation rates and predominantly vertical inheritance. We know that those characteristics predict an ONH.
That is a big fat lie. There isn't any possible way that keith s could ever support what he posts. For one, unguided evolution can't even get beyond populations of prokaryotes given populations of prokaryotes to start with. For another, the process keith s mentions would produce numerous transitional forms which would ruin all attempts at constructing an objective nested hierarchy. So either keith s is totally and willfully ignorant or he is very dishonest to the point he believes his own lies. Joe
Keith S
There is an inescapable asymmetry here. Unguided evolution really is trillions of times better than ID at explaining the evidence.
Really? Then you need to explain this! http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2117903/ Please explain to me how unguided processes created a guided process to prevent unguided processes from happening? I await your answer with great anticipation....... Andre
vjtorley #76, Regarding the circularity issue, I'll refer you to my comments in these two threads: The Circularity of the Design Inference and Has Specified Complexity Changed? Moving on, you write:
Regarding my claim that you appeal to Ockham’s razor, you write:
I don’t remember invoking Ockham in that context. Could you refresh my memory by providing a quote and a link?
I was referring to your Fourfold challenge and your story of the Rain Fairy. I take it that the point here was that just as we don’t need to resort to the hypothesis of a Designer in order to explain streambed patterns, patterns in chemical explosions, the movement of the planets and meteorological patterns, we don’t need to invoke the hypothesis of a Designer in order to explain the diversity of life on Earth. That sounds like Ockham's razor to me - or as Laplace famously put it to Napoleon Bonaparte, "Sire, I have no need for that hypothesis."
No, I am not invoking Ockham's Razor there. (I could, but I'm not.) I am drawing a parallel between those four scenarios and ID. In each case, there is a straightforward natural explanation that fits the evidence perfectly, and an ad hoc design explanation that fits poorly because any possibility is compatible with it. No matter what meteorological data you observe, you can claim "The Rain Fairy did it." No matter what bizarre orbit a planet follows, you can say "The angels did it." No matter what pattern you observe in the morphological and molecular data, you can say "The Designer did it." Those explanations are extremely poor because they are unfalsifiable. Unguided meteorology is a much better explanation than the Rain Fairy because it makes testable predictions that are resoundingly confirmed. Ditto for orbital mechanics. Unguided evolution is a much better explanation than the Designer for the same reason.
The two competing, mutually exclusive and mutually exhaustive hypotheses are as follows: A: an unguided process generated the objective nested hierarchies that we find in living things on Earth; and ~A: an intelligently guided process generated these objective nested hierarchies.
No, they aren't. I am pitting actual, observed evolution of the kind posited by evolutionary biologists against a hypothetical designer of whom we know nothing.
You may respond that in fact, we observe that life-forms on Earth exhibit objective nested hierarchies, which are an inevitable result of Markov processes, and that we are therefore justified in going back and modifying hypothesis A to read: a Markov process generated the objective nested hierarchies that we find in living things on Earth. But that is tantamount to the fallacy of affirming the consequent. And a defender of Intelligent Design would be no less (and no more) justified in going back and modifying their hypothesis ~A to read: an intelligently guided process guided by a Designer Who wanted to create ONHs on Earth generated these objective nested hierarchies. Both moves amount to cheating.
Yes, both of those moves amount to cheating, but I am not making the move you attribute to me. I am not saying that we observe an ONH, and should therefore assume that unguided evolution is a Markov process. It's the other way around: we can see that actual, observed evolution is a Markov process, with slow mutation rates and predominantly vertical inheritance. We know that those characteristics predict an ONH. We observe the predicted ONH, out of trillions of possibilities. This is a fantastic success for the UE hypothesis. You can't do the same for the Designer, because you know nothing about him/her/it. You can't observe him/her/it. If you assume that the Designer creates an ONH, you are committing the Rain Fairy fallacy. There is an inescapable asymmetry here. Unguided evolution really is trillions of times better than ID at explaining the evidence. keith s
Keith S You've defended nothing, you can't even give me a simple answer on how unguided processes created a guided process to prevent unguided processes from happening..... You have nothing Keith, so blowing your own trumpet is futile...... To make a claim that unguided evolution is the best explanation Keith you have to show how this said unguided evolution created PCD...... PCD kills unguided evolution Keith because when these systems become dysregulated the organism dies Keith! You have to explain it before you can claim it Keith! Andre
vjtorley:
My initial reaction is that Dr. Wagner’s proposal may well be correct as an explanation of how metabolic pathways evolved over billions of years. However, it is proteins that are the workhorses of the cell, and the search space for a protein that can fold up properly and perform a biologically useful task is astronomically large. I would be very interested to see detailed calculations by Dr. Wagner, showing that this search space could have been traversed by life, during the past four billion years. If Dr. Wagner can do that, without invoking any new, highly specified laws of Nature to assist him, then his book will indeed be very bad news for Intelligent Design.
As Me_Think has already indicated, the protein "library" has similar characteristics to the metabolic library. It is, indeed, very bad news for ID. keith s
If I'm banned, y'all can find me at The Skeptical Zone. Just a reminder, as Barry might be warming up his ban hammer. Let's hope not. It's been fun defending The Bomb against your criticisms. keith s
vjtorley: However, it is proteins that are the workhorses of the cell, and the search space for a protein that can fold up properly and perform a biologically useful task is astronomically large. Gpuccio recently cited an interesting paper, Hayashi et al., Experimental Rugged Fitness Landscape in Protein Sequence Space, PLOS ONE 2006, and they found functional esterases in random sequence libraries. Using point mutation and selection, they achieved about 40% of the activity of the wild esterase, not bad considering they didn't include homologous recombination in the experiment. Zachriel
Before you ask: to be clear, there is nothing in the book specific to protein-folding probabilities (as opposed to the probabilities of finding new protein structures). No one thinks protein folding is a probabilistic mechanism; there are various mechanisms - both chemical and structural - for protein folding. You can see some methods Here Me_Think
Both metabolic and protein libraries are full of genotype networks composed of synonymous texts that reach far through a vast multidimensional hypercube, and both harbor unimaginably many diverse neighborhoods. They have much in common with each other, but little with human libraries. And that’s not surprising: They were here long before us.
And just as in the protein library, different neighborhoods are more like medieval villages than cookie-cutter suburbs. Each neighborhood contains many different shapes, and any two neighborhoods do not share many of them. All this hints that innovability in RNA follows the same rules as in proteins. And recent experiments show that this is indeed the case.
Me_Think
vjtorley @ 77, Wagner refers to genotype networks for the discovery of new proteins too; they are not exclusive to metabolism. Excerpt from the book:
In an ingenious experiment performed in the year 2000, Erik Schultes and David Bartel from the Massachusetts Institute of Technology blazed a trail through the RNA library.[59] The experiment started from two short RNA texts with fewer than a hundred letters each. The texts are far apart in the library and differ in many letters, but they are not just any two strings. Both molecules are enzymes—ribozymes, because they are composed of RNA rather than protein. Each of them wiggles into a different three-dimensional shape and catalyzes a different reaction. The first molecule can cleave an RNA string into two pieces, while the second does the exact opposite, joining two RNA strings by fusing their ends with atomic bonds. Let's call these enzymes the "splitter" and the "fuser." If you already had a splitter, and you needed to find a fuser somewhere in the library, would that be easy or hard? And what about the opposite, creating a splitter from a fuser? In other words, can you create a specific molecular innovation from either one of these molecules by exploring the library as evolution would? If you were ignorant about genotype networks, you would think that should be impossible, because the two molecules are far apart. And even if it were possible, it might be exceedingly difficult, since a single misstep that creates a defective molecule spells death in evolution. Undaunted, Schultes and Bartel started from one of the molecules and walked toward the other, modifying its letter sequence step by step while requiring that each such step preserve the molecule's function, just as natural selection would demand. They used their chemical knowledge to predict viable steps through the library, manufactured each candidate mutant as an RNA string, and asked whether it could still catalyze the same reaction as its ancestor. If not, they tried a different step.[60] What they found may no longer surprise you. Starting from the fuser, they were able to change forty letters in small steps toward the splitter without changing the molecule's ability to fuse two RNA strings. And starting from the splitter, they could also change about forty letters in small steps toward the fuser without changing its ability to split two RNA molecules. About halfway between the two molecules, something fascinating happened: Fewer than three further steps completely transformed the function of either molecule. They changed the fuser into a splitter and vice versa.[61] Like many good experiments, this one carries more than one powerful message. The first is that many RNA texts can express the molecular meaning of the starting fuser and splitter molecules. Second, trails connect these molecules in the library, and they allow you to find a new meaningful text, even if each step must preserve the old meaning. (Genotype networks make all this possible.) Third, while you walk along one of these trails, the innovation you are searching for will appear at some point in a small neighborhood near you.
Me_Think
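The "trail through the library" in the Schultes–Bartel excerpt can be caricatured in a few lines. This sketch rests on my own toy assumption — function depends only on two fixed "active site" letters — and is not Wagner's model or the actual ribozyme chemistry; it only shows how single-letter steps that each preserve function can traverse many letters between two distant sequences.

```python
# Crude sketch, assumptions mine (not Wagner's or Schultes & Bartel's):
# "function" depends only on two hypothetical active-site positions, so
# one-letter steps elsewhere are neutral and connect distant sequences.
KEY = {0: "G", 4: "C"}            # hypothetical active-site requirements

def functional(seq):
    return all(seq[i] == base for i, base in KEY.items())

start, target = "GAUACAUGGC", "GCGUCGCAAU"   # both functional, 8 letters apart
assert functional(start) and functional(target)

path, current = [start], list(start)
for i, base in enumerate(target):
    if current[i] == base:
        continue
    current[i] = base                # a single-letter step through the library
    assert functional(current)       # every intermediate keeps the function
    path.append("".join(current))

print(f"{len(path) - 1} neutral steps: {start} -> {''.join(current)}")
```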
From 1997:
Many Scientists See God's Hand in Evolution

While most US scientists think humans are simply smarter apes, at least 4 in 10 believe a creator "guided" evolution so that Homo sapiens are ruled by a soul or consciousness, a new survey shows. Scientists almost unanimously accept Darwinian evolution over millions of years as the source of human origins. But 40% of biologists, mathematicians, physicians, and astronomers include God in the process. http://ncse.com/rncse/17/6/many-scientists-see-gods-hand-evolution
For Darwinian theory: whether "evolution" is "guided" or not is a matter of religious faith, not science. Gary S. Gaulin
Hi Keith S, You write:
Vincent, Of all the points you raise in your OP, Axe’s argument is going to be the most fun for me to criticize, but also the most technically involved. I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents. Denyse did an OP on the book, thinking it was anti-Darwinian. Boy oh boy, was she ever wrong. This book is full of bad news for ID. It’s well-written and fascinating. I think that ID supporters will enjoy it, if they can get past the sinking feeling they’ll experience when they realize the dire implications for ID.
I've just been reading Mark Pagel's review of the book in Nature, here. It looks very interesting. There's also an interview with Dr. Wagner here. My initial reaction is that Dr. Wagner's proposal may well be correct as an explanation of how metabolic pathways evolved over billions of years. However, it is proteins that are the workhorses of the cell, and the search space for a protein that can fold up properly and perform a biologically useful task is astronomically large. I would be very interested to see detailed calculations by Dr. Wagner, showing that this search space could have been traversed by life, during the past four billion years. If Dr. Wagner can do that, without invoking any new, highly specified laws of Nature to assist him, then his book will indeed be very bad news for Intelligent Design. But as they say, the proof of the pudding is in the eating. I'd be very grateful if you could put up a post, Keith S, summarizing Dr. Wagner's calculations for proteins and showing how they invalidate Dr. Axe's arguments. Until then, the ball's in your court. By the way, I have a little free time today, as I'm not in an Internet cafe, so I'll fix up that problem you mentioned earlier with the question-marks that should be dashes (which happened because I had to save my post as a .txt file). Cheers. vjtorley
Hi Keith S, Thank you for your posts. Regarding your critique of Dr. Dembski's inference to Intelligent Design, you write that when you use the term, "could not have been produced," you are speaking of "practical impossibility, not absolute impossibility." Very well, then; I'm glad we've cleared that point up. You then quote an earlier comment of yours in which you attempt to expose the flaw in Dr. Dembski's argument:
Dembski's refinement runs into trouble... because he admits that to determine that a system has CSI, we must estimate the probability of its production by natural means. Systems with CSI have a low probability of arising through natural means. This renders the reasoning circular: 1. Some systems in nature cannot have been produced through undirected natural means. 2. Which ones? The ones with high CSI. 3. How do you determine the CSI of a system? Measure the probability that it was produced through undirected natural means. If the probability was vanishingly small, it has CSI. 4. Ergo, the systems that could not have been produced through undirected natural means are the ones which could not have been produced through undirected natural means.
Conclusion 4 may be tautologous, but it is not circular. And in any case, your reconstruction of Dr. Dembski's argument goes wrong in step 3. Intelligent Design proponents don't attempt to measure the probability that a system was produced through undirected natural means; rather, they attempt to calculate it. And in that case, the conclusion in step 4 should be:
4. Ergo, the systems that could not have been produced through undirected natural means are the ones for which we can calculate that the probability of their having been produced through undirected natural means is astronomically low.
The above conclusion is neither tautologous nor circular. Regarding my claim that you appeal to Ockham's razor, you write:
I don’t remember invoking Ockham in that context. Could you refresh my memory by providing a quote and a link?
I was referring to your Fourfold challenge and your story of the Rain Fairy. I take it that the point here was that just as we don't need to resort to the hypothesis of a Designer in order to explain streambed patterns, patterns in chemical explosions, the movement of the planets and meteorological patterns, we don't need to invoke the hypothesis of a Designer in order to explain the diversity of life on Earth. That sounds like Ockham's razor to me - or as Laplace famously put it to Napoleon Bonaparte, "Sire, I have no need for that hypothesis." In my post, I pointed out that Ockham's razor is a non-quantitative argument, which cannot be used to support claims that the hypothesis of unguided evolution is trillions of times better than the hypothesis of guided evolution. Regarding the ONHs [objective nested hierarchies] found in living things, you write:
If there are trillions of non-ONH options available for design, but only ONH is available to and produced by unguided evolution, and we see an ONH, then unguided evolution is trillions of times better at explaining that fact... We are talking about terrestrial life, and the question is this: Is the diversity of life on earth best explained by an unspecified designer, or by the operation of unguided evolution — the same process that we see producing microevolutionary phenomena such as antibiotic resistance? Let me stress this point: the designer is unspecified, but the competing hypothesis of unguided evolution is quite specific, has been observed, and is known to produce ONHs. We don't know anything about the designer, but we know a lot about UE.
This is fallacious reasoning. The two competing, mutually exclusive and mutually exhaustive hypotheses are as follows: A: an unguided process generated the objective nested hierarchies that we find in living things on Earth; and ~A: an intelligently guided process generated these objective nested hierarchies. You argue that the likelihood of objective nested hierarchies in living things on Earth is very low, on the hypothesis of an Intelligent Designer. That may or may not be so. However, even if it is true, it is also true that on the hypothesis of an unguided process, the likelihood of objective nested hierarchies in living things on Earth is also very low, as the vast majority of unguided processes don't generate objective nested hierarchies. It is no use for you to point out here that unguided Markov processes always generate objective nested hierarchies. That is quite correct, but what you need to show is that other unguided processes could not have generated a diverse range of life-forms on Earth, and that only Markov processes could have done the job. You may respond that in fact, we observe that life-forms on Earth exhibit objective nested hierarchies, which are an inevitable result of Markov processes, and that we are therefore justified in going back and modifying hypothesis A to read: a Markov process generated the objective nested hierarchies that we find in living things on Earth. But that is tantamount to the fallacy of affirming the consequent. And a defender of Intelligent Design would be no less (and no more) justified in going back and modifying their hypothesis ~A to read: an intelligently guided process guided by a Designer Who wanted to create ONHs on Earth generated these objective nested hierarchies. Both moves amount to cheating. Finally, you write that "the competing hypothesis of unguided evolution is quite specific, has been observed, and is known to produce ONHs." I'm sorry, but "unguided evolution" is about as non-specific as you can possibly get: all it rules out is the existence of a Designer. vjtorley
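As an aside for readers following the "calculate, not measure" point in vjtorley's reply above: the "bits of SC" arithmetic can be made concrete in a few lines. The sketch assumes the formula from Dembski's 2005 Specification paper, χ = −log2[10^120 · φS(T) · P(T|H)], and the input values are invented purely for illustration.

```python
from math import log2

# Minimal sketch of the "bits of SC" calculation debated above, assuming
# Dembski's 2005 formula; the inputs below are invented for illustration.
def specified_complexity(p_t_given_h, phi_s=1e20, replicational_resources=1e120):
    # chi = -log2(10^120 * phi_S(T) * P(T|H))
    return -log2(replicational_resources * phi_s * p_t_given_h)

chi = specified_complexity(p_t_given_h=1e-150)
print(f"chi = {chi:.1f} bits")   # ~33.2; chi > 1 is Dembski's design threshold
```

On this reading, the contested question is not the arithmetic but whether P(T|H) can be estimated independently, which is exactly where the circularity debate above bites.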
It is a false choice to ask whether the diversity of life on earth is "best explained by an unspecified designer, or by the operation of unguided evolution." Firstly, ID specifies the designer as intelligent. That's what the "I" in "ID" stands for. And we know lots about both intelligence and what it can produce, and the natural forces (including natural selection) to which unguided evolution is limited, by definition, and what they can produce (see Behe, 2007). And secondly, however long or short the odds against a posited designer's actions might be according to some contrived premise, it does not follow that "the same process that we see producing microevolutionary phenomena such as antibiotic resistance" therefore had to have brought about life in all of its complexities irrespective of its demonstrable ability to do so. As if all that were required to "be fruitful and multiply" were an ONH. Like I said, it's a false choice. After the identification of which, a brief citation of David Berlinski ought to suffice to illustrate the kinds of questions that are thereby being begged:
The interesting argument about the whale -- which is a mammal after all; it belongs to the same group of organisms as a dog, a human being, a chimpanzee or a tiger -- the interesting argument about a whale is, if its origins were land based originally, then we have some crude way of assessing quantitatively -- not qualitatively but quantitatively -- the scope of the project of transformation. The project is very simple. Let's put it in vividly accessible terms. You've got a cow. You want to teach it how to live all of its life in the open ocean, still retaining its air breathing characteristics. What do you have to do from an engineering point of view to change the cow into a whale? This is crude, but it gives you the essential idea. Now, if the same question were raised with respect to a car, and you asked what would it take to change a car into a submarine, we would understand immediately it would take a great many changes. The project is a massive engineering project of redesign and adaptation. Well, the same question occurs with respect to that proverbial cow. Virtually every feature of the cow has to be changed, it has to be adapted. But since we know that life on earth and life on the water are fundamentally different enterprises, we have some sense of the number of changes. You know, any time that science avoids coming to grips with numbers, and somehow immersing itself in perhaps an unavoidable, but certainly unattractive, miasma, here's a chance actually to put some numbers on calculations. We're not talking about genetics. We're talking about simple numbers. The skin has to change completely, it has to become impermeable to water. That's one change. Breathing apparatus has to change. A diving apparatus has to be put in place. Lactation systems have to be designed. The eyes have to be protected. Hearing has to be altered. Salivary organs have to be changed. Feeding mechanisms have to be changed, after all a cow eats grass, a whale doesn't. As I say, I've tried to do some of these calculations. The calculations are certainly, certainly not hard. But they're interesting, because I stopped at fifty thousand, that is, morphological changes. And don't forget these changes are not independent, they're all linked. If you change an organism's visual system you have to change a great many parts of its cerebellum, its cerebrum, its nervous system. All of these changes are coordinated. So when we're talking about an evolutionary sequence such as this, when we're talking about the cow to whale transition -- and I'm just using this as an easily accessible idea -- what's interesting about the cow to whale transition is that we can see, a different environment is going to impose severe design constraints on a possible evolutionary sequence. How are these constraints met, if there are roughly fifty thousand? If there are two million constraints, how were those met? And what does this suggest about what we should see in the fossil record? To my way of thinking, if Darwinian hypotheses are correct, it should suggest an enormous plethora of animals intermediary between, say, between Ambulocetus and the next step. That won't solve all problems, one wants to know what's directing this change, if anything. But at least it will put it in the ball park of quantitative estimate. Which is hardly ever done.
jstanley01
vjtorley writes:
Here, Keith S attempts to rebut my argument that “the vast majority of unguided processes don’t generate objective nested hierarchies” by pointing out (correctly) that the unguided evolution we observe during the history of animal life on Earth - if we ignore the prokaryotes here and focus on the 30 major taxa of animals, as Theobald does in his 29 Evidences for Macroevolution - is indeed a Markov process, since vertical inheritance predominates.
And mutation is slow enough that we are able to infer an objective nested hierarchy.
However, this is not germane to the mathematical argument I put forward. The question is not whether a Markov process did indeed generate the 30 taxa of animals living on Earth, but rather whether the only unguided processes in Nature that would have been capable of generating various groups of animals on some planet harboring life were Markov processes (which are the only processes known to automatically generate objective nested hierarchies).
No, because we aren't talking about other planets. We are talking about terrestrial life, and the question is this: Is the diversity of life on earth best explained by an unspecified designer, or by the operation of unguided evolution -- the same process that we see producing microevolutionary phenomena such as antibiotic resistance? Let me stress this point: the designer is unspecified, but the competing hypothesis of unguided evolution is quite specific, has been observed, and is known to produce ONHs. We don't know anything about the designer, but we know a lot about UE. That important asymmetry is at the root of the trillions-to-one advantage of evolution over ID. keith s
Mung, I can't believe you still don't get this. In my 2-dimensional landscape, one dimension is vertical and the other is horizontal. You even quoted me saying exactly that:
In a two-dimensional landscape, height still represents fitness, but horizontal motion is limited to one dimension — a line, rather than a plane. Motion is limited to two directions, right and left.
You are the mathematical genius who added 1 to 1 and got 3. keith s
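For anyone lost in the crossfire: a "two-dimensional landscape" in keith s's sense has one horizontal axis plus height-as-fitness. A minimal sketch, with a toy fitness function of my own choosing:

```python
import random

# Minimal sketch of a two-dimensional fitness landscape in keith s's sense:
# one horizontal coordinate x, with height f(x) as fitness. The fitness
# function and step size are my own choices for illustration.
def fitness(x):
    return -(x - 3.0) ** 2          # a single smooth peak at x = 3

rng = random.Random(1)
x = 0.0
for _ in range(2000):
    step = rng.choice((-0.1, 0.1))  # motion limited to two directions
    if fitness(x + step) >= fitness(x):
        x += step                   # selection keeps uphill (or neutral) moves
print(f"x = {x:.1f}")               # ~3.0: the walker climbs to the peak
```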
keiths, you're amusing, I'll give you that. You assert that I claimed a 2D landscape cannot contain a vertical dimension, and to demonstrate this you provide a quote where I am mocking your own ignorance. I understand how you, over at TSZ, are immune to criticism, but this is not TSZ. How is it that you, keiths, can find three dimensions in a 2D landscape? That's the question. And from this question you infer that I am confused about how many dimensions there are in a 2D landscape. Brilliant. It's just a scratch though. Right? Mung
Ok Keith, let's link the latest thread: How Keith’s “Bomb” Turned Into A Suicide Mission. Box
Box #69, Instead of reposting, let's link to our latest exchange and let readers judge for themselves. keith s
Keith #68, Why are you reposting this? Do I have to repost my rebuttal as well? Ok ... here goes:
What it boils down to is this: you state that there are trillions of options available for a designer and that he/she/it could have chosen any one of them, but we simply have no way of knowing. There is no grounding for your claim. We do not know if a designer is capable of producing trillions of different orderings of life – for all we know the designer's capability is limited to only one option. But even if there are trillions of options available, we have no way of knowing if there are compelling reasons – any reasons – for the designer to choose an ONH. That is the problem with free agents … And what we certainly cannot know is – your implicit claim – that the designer is completely indifferent about the ordering of life and that he based his decision on the roll of a die.
Box
DATCG:
If a Designer has 10 options it does not lead to a false assertion that a blind, unguided process is 10 times better at explaining occurrence of Z.
If there are trillions of non-ONH options available for design, but only ONH is available to and produced by unguided evolution, and we see an ONH, then unguided evolution is trillions of times better at explaining that fact. Here's how I expressed it in another thread:
Box, It’s astonishing to me that you still don’t get this, but let me try once more. Suppose you have two objects: 1. A coin with ONH stamped on both sides. 2. A trillion-sided die with ONH engraved on one and only one side. A friend of yours takes both objects into another room, out of your sight. She randomly picks one of the two objects and flips it. “I randomly picked one of the objects and flipped it, and it landed with ONH up,” she shouts to you. Your job is to guess which of the objects she flipped — the coin with ONH on both sides, or the trillion-sided die with ONH on only one side. If you can’t figure out the best answer, I’m afraid there’s little hope that you will ever understand my argument.
keith s
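The arithmetic behind the coin-and-die story is ordinary likelihood comparison. A minimal sketch, using the numbers from keith s's own illustration and assuming equal prior odds:

```python
# The likelihood arithmetic behind the coin-vs-die story above, using the
# numbers from keith s's own illustration (equal prior odds assumed).
p_onh_given_coin = 1.0      # ONH stamped on both sides of the coin
p_onh_given_die = 1e-12     # one ONH face on a trillion-sided die
prior_odds = 1.0            # she picked one of the two objects at random

posterior_odds = prior_odds * p_onh_given_coin / p_onh_given_die
print(f"odds of coin over die, given ONH: {posterior_odds:.0e} to 1")
```

Whether the trillion-sided die fairly models a designer's options is, of course, precisely what Box and vjtorley dispute; the sketch only shows that the trillions-to-one figure follows from the stipulated inputs, not that the inputs are right.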
Littlejohn: I believe our understanding of evolution has evolved far beyond just Darwinism.
Are you referring to a guided or an unguided process? Box
Mung @ 61
There are no gaps in the fitness landscape.
Sure there are - Dembski uses a white-noise landscape to fool you into thinking that evolution can't climb the peaks. The truth is there is no white-noise landscape in biology. Me_Think
littlejohn:
I believe our understanding of evolution has evolved far beyond just Darwinism.
Sure, but the basic Darwinian idea -- random variation with natural selection -- is still there, and very important.
It is also true that God still occupies the gap from geochemistry to biochemistry...
Until he gets squeezed out of that gap, too. keith s
Mung:
There are no gaps in the fitness landscape. So just where do you think God has retreated to, and why do you think God even needs to retreat?
Mung, Do you understand how ridiculous that statement makes you look to knowledgeable people? It's almost as good as this one, where you got confused because you thought a two-dimensional fitness landscape couldn't include a vertical dimension:
keiths:
In a two-dimensional landscape, height still represents fitness, but horizontal motion is limited to one dimension — a line, rather than a plane. Motion is limited to two directions, right and left.
Mung:
So in a two-dimensional landscape there are three dimensions? Left, Right. Up. Down. Define your terms. Horizontal. Plane. Motion. Landscape. In a two-dimensional landscape there is no height. In a two-dimensional landscape there is no landscape. There is no plane, in your two-dimensional landscape. Hah. Unbelievable.
Unbelievable, indeed. What kind of mathematical training have (or haven't) you had, Mung? keith s
We evolved from monkeys. I know it's true. Keiths proves it. Vishnu
... making assertions as a blind, unguided process is just a blind assertion. If a Designer has 10 options it does not lead to a false assertion that a blind, unguided process is 10 times better at explaining occurrence of Z. A designer utilizing intelligence, wisdom, and experience gained utilizes the best option for a given scenario to produce Z. Stated another way: why swim to London from Paris when you can fly, go by rail, or take a boat? Same with a blind versus a guided process. Why allow a blind algorithm to run making mistakes without error checking vs a guided one that sets boundaries for selected end goals utilizing error correction? What is the purpose of error correction? The selection process by intelligence to remove errors is the killer to a blind process. A blind process does not create error correction. It merely runs downhill. Why? Because a blind process cannot compare without first having a known selection criterion to match for outcome Z. Where does the matching criterion come from in a blind process? Let alone two blind processes evolving magically that must be utilized together? Search - Target. The faithful in blind processing want us to believe that selection criteria for the Error Code evolved, same as error-correcting devices and reading devices. Talk about a bomb! This is nearly as arrogant as the failed assertion that 98% of DNA was JUNK due to blind, unguided processes by the faithful Darwinist.
"And for good reason. The Bomb doesn’t merely tilt the scales in favor of unguided evolution — it blows ID out of all serious consideration as a contender."
More hyperbolic statements based upon accidental reasoning by an unguided process which can neither detect unguided nor guided processes without utilizing information-rich system target data. Yet the accidental process proclaims in truth he is blindly conceived due to... get this, his having fewer options. As if blind processes can make intelligible assertions at all without guidance to know whether fewer options are better. How does a Google car drive on a highway - a blind process? Did blind processors create it? Or intelligent agents? Following the logic of a blind processor, what will they say? Google engineers had 1000 more options than a blind, unguided process? Therefore, the blind-process option is 1000 times more likely to create a navigable car for the road? Simply because the blind process adheres to a series of nested loops? Or, wait... simply because the Designer of the car adheres to a series of nested loops? But... the kicker: the Designer of the car had more options. Thus, if you find a car on a road, with a navigation system, you now know it was blindly conceived of by a blind process. Ha! Since a designer has 1000 more options than the single option of a blind process, the blind-process loop is a 1000 times better explanation of the outcome of the Zzzz Car? Ha! Truly? What a joke. This is blind thought by desperate people. It's like being stuck on a perpetual U-turn. There is no logic they will not throw away to keep faithful in their religion. Bomb? Yes... a stink bomb, adrift of logic but steeped in fairy tales. DATCG
keiths:
It’s interesting to me that Dembski is now focusing his attention not on disproving unguided evolution — he failed to do that with his CSI notion — but rather on showing that even if evolution proceeds by Darwinian means, it is ultimately teleological because the fitness landscape contains information that must have been put there by Someone. His God has retreated to another gap.
There are no gaps in the fitness landscape. So just where do you think God has retreated to, and why do you think God even needs to retreat? Mung
Keith, #57 I believe our understanding of evolution has evolved far beyond just Darwinism. It is also true that God still occupies the gap from geochemistry to biochemistry, and perhaps one day, we will discover how the Creator made the arrangements for that as well. littlejohn
keiths:
I don’t remember invoking Ockham in that context. Could you refresh my memory by providing a quote and a link?
Oh please. You didn't actually say Ockham. Is that your defense? Mung
Has keiths finally stopped threatening us with his imminent banning? Mung
littlejohn:
Keith, you are mistaken, Wagner's book and the models he explores demonstrate a powerful design signal. The genotype network landscape that he proposes is a fail-safe, meaning the game is rigged so that evolution cannot fail, EVEN IF navigation is by a blind search, or random walk.
littlejohn, If you're willing to accept that evolution proceeds by random mutation, natural selection, and drift, then congratulations! You're a Darwinist! It's interesting to me that Dembski is now focusing his attention not on disproving unguided evolution -- he failed to do that with his CSI notion -- but rather on showing that even if evolution proceeds by Darwinian means, it is ultimately teleological because the fitness landscape contains information that must have been put there by Someone. His God has retreated to another gap. keith s
#51 Keith, you are mistaken, Wagner's book and the models he explores demonstrate a powerful design signal. The genotype network landscape that he proposes is a fail-safe, meaning the game is rigged so that evolution cannot fail, EVEN IF navigation is by a blind search, or random walk. He acknowledges that the physical universe is conceptual and computational, meaning that the universe was conceived, a concept. In Wagner's opinion, formalisms like mathematics and other forms of communication are intrinsic attributes of nature. But in reality, they are powerful design signals, and he even recognizes the fact that these things were discovered, not invented. He unfortunately assumes that it all came about by the principles of self-organization; however, as we all know, such perceptions are completely undemonstrated. He also seems to believe that evolvability evolved, but it has been present in full potency since the first life forms appeared. The point being, at this stage of the game, design is equally valid as any other hypothesis, and the fact that evolution is teleological is unavoidable. Now, the challenge is to demonstrate that evolution is intentional, and likely the very mechanism the Creator employed to fill the universe with life. littlejohn
Gordon #46: "PS: So far R has indulged in trollish behaviour, beyond the limits of reasonable behaviour. I have some ideas as to who or at least what sort of circle stands behind that handle and it is not pretty; the obvious interest there is not serious discussion but to trollishly poison the atmosphere and frustrate discussion by making ill-founded accusations and the like. I suggest you do not wish to find yourself an enabler of the sort of behaviour I am alluding to. As in don't feed the trolls." I assume that this is Mullings' code for "You're Banned". But really, is this trollish behaviour? From what I have read, Reality has simply identified multiple examples of trollish behaviour from ID proponents. He/she has never been rude, insulting or offensive. Unless pointing out examples of rude, insulting and offensive behaviour is somehow offensive to someone. From my perspective, Reality is just identifying opportunities for improvement. The fact that G. Elliot M. is incapable of identifying this is very telling. Please do better. centrestream
The only thing that fizzled out was KF's courage. keith s
So says the brave kairosfocus, who closed comments on those two threads because he was afraid of criticism. keith s
KS, your claims long since fizzled as question begging and in some respects strawmannish. As has been shown any number of times by any number of people. KF kairosfocus
Vincent, Of all the points you raise in your OP, Axe's argument is going to be the most fun for me to criticize, but also the most technically involved. I will be quoting liberally from Andreas Wagner's new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents. Denyse did an OP on the book, thinking it was anti-Darwinian. Boy oh boy, was she ever wrong. This book is full of bad news for ID. It's well-written and fascinating. I think that ID supporters will enjoy it, if they can get past the sinking feeling they'll experience when they realize the dire implications for ID. The 'islands of function' argument for ID was already unsustainable, but this book nails the coffin lid shut. Just thought I'd give readers advance notice in case they want to order the book or download it onto their e-readers. PS Thanks again, Denyse, for bringing the book to my attention. :-) keith s
beau, Are you kidding? I love commenting at UD, which is why I jumped at the chance to come back when Barry issued his "general amnesty". I brought my argument (aka the Bomb, in honor of Barry's metaphor) here because there is no place on the Web with a higher concentration of people who are determined to defend ID. For three weeks, UDers have been trying unsuccessfully to defuse the Bomb. And for good reason. The Bomb doesn't merely tilt the scales in favor of unguided evolution -- it blows ID out of all serious consideration as a contender. When one hypothesis is literally trillions of times better than another, you cannot rationally continue to believe the latter. Every IDer reading this knows that his or her continued acceptance of ID is not rational unless someone finds a refutation. UD needs to defuse the Bomb. keith s
I'm sensing that KeithS is setting the stage to go full Houdini, retreat to TSZ and claim he was banned. I hope I'm wrong. beau
Mark
Define condition X as A AND B. Therefore, to determine whether X is true, it is necessary to determine that B is true (as well as A). Therefore, to use the presence of X to detect B is superfluous, as we had to determine that B was true to find out whether X was true.
Dembski:
If A and B, then X.
If C, then B.
If D, then C.
If C, then X.

KeithS:
If A, then X.
If X, then A.

StephenB
MT: Abstract: >>The intrinsic ability of protein structures to exhibit the geometric and sequence properties required for ligand binding without evolutionary selection is shown by the coincidence of the properties of pockets in native, single domain proteins with those in computationally generated, compact homopolypeptide, artificial (ART) structures. The library of native pockets is covered by a remarkably small number of representative pockets (~400), with virtually every native pocket having a statistically significant match in the ART library, suggesting that the library is complete. When sequences are selected for ART structures based on fold stability, pocket sequence conservation is coincident to native. The fact that structurally and sequentially similar pockets occur across fold classes combined with the small number of representative pockets in native proteins implies that promiscuous interactions are inherent to proteins. Based on comparison of PDB (real, single domain protein structures found in the Protein Data Bank) and ART structures and pockets, the widespread assumption that the co-occurrence of global structure, pocket similarity, and amino acid conservation demands an evolutionary relationship between proteins is shown to significantly underestimate the random background probability. Indeed, many features of biochemical function arise from the physical properties of proteins that evolution likely fine-tunes to achieve specificity. Finally, our study suggests that a repertoire of thermodynamically (marginally) stable proteins could engage in many of the biochemical reactions needed for living systems without selection for function, a conclusion with significant implications for the origin of life.>> Computer simulation; kindly note above that the actual body of evidence is that we do not have modular bricks that can be assembled like matching Lego bricks, cf Axe as clipped above. KF kairosfocus
MT: There is considerable evidence that our universe as a universe is designed, hence the link and onward info on fine tuning. That includes the physics behind atoms. The question above in the thread is on a different subject in a different context: the world of life. Given atoms, physics and chemistry, is that all we need to explain the FSCO/I in life, and particularly proteins? Axe, as I clipped at fair length from a much longer detailed paper (cf also VJT's excerpt earlier), shows some of the reasons behind the issue of islands of function in AA sequence space vs sparse search on available atomic resources, knowing that just to get a novel body plan from the original unicellular stuff, we are talking of, dozens of times over, accounting for 10 - 100+ mn bases of genome to account for cell types, tissues, organs and systems that must be expressed in embryonic development and issue in a viable population. The cumulative search challenge is patently beyond the reach of blind watchmaker mechanisms as suggested. KF PS: So far R has indulged in trollish behaviour, beyond the limits of reasonable behaviour. I have some ideas as to who or at least what sort of circle stands behind that handle and it is not pretty; the obvious interest there is not serious discussion but to trollishly poison the atmosphere and frustrate discussion by making ill-founded accusations and the like. I suggest you do not wish to find yourself an enabler of the sort of behaviour I am alluding to. As in don't feed the trolls. kairosfocus
KF @ 38
MT, tangential, we are discussing particular classes of functional specificity.
So atoms are designed? Please refer to Reality's comments @ 43 too. Me_Think
KF @ 37
The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10^23). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.
Excerpt from Interplay of physics and evolution in the likely origin of protein biochemical function Full paper Here
The results for 1,284,577 entries extracted from the ChEMBL15 (32) and BindingDB (33) databases are reported. We note that the reported number of ligand–receptor interactions is a lower bound as many such interactions are currently uncharacterized. Even so, in more than 1,400 ligands, each binds to 40 or more nonhomologous proteins. Thus, there is considerable experimental evidence that a given ligand interacts with many proteins in a proteome; viz. such interactions are quite promiscuous. The clear implication is that the fundamental physical–chemical properties of proteins are sufficient to explain many of their structural and molecular functional properties
In simple terms, proteins can easily bind small molecules and mutations can easily find amino acid sequences that generate functional proteins. Me_Think
Andre sneeringly claimed: "Duh…. Crystals lack complexity!" I have some questions for you, Andre, and for all other IDists: Are atoms complex? Is there CSI-dFSCI-FSCO/I in atoms? Are atoms intelligently designed? Is light complex? Is there CSI-dFSCI-FSCO/I in light? Is light intelligently designed? Is gravity complex? Is there CSI-dFSCI-FSCO/I in gravity? Is gravity intelligently designed? Is the universe a 'system'? Is the entire universe intelligently designed? Reality
If you read post #40 then thank you. If you didn't read it, too bad. You've missed something important to keep in mind in many of your future discussions. Go back and read it. Still have time. :) Now you may continue your interesting discussion on stats calculations. :) Dionisio
Well, well, kairosfocus starts off with his usual "loaded with personalities" incendiary, accusatory pomposity, and Joe continues his grunting, accusatory one-liners even though Barry issued a "final warning" to him days ago. What a surprise. Not. Barry, did your final warning to Joe mean anything or was it just an empty bluff? Look at the comments so far in this thread, especially by kairosfocus, Joe, and Andre. Who's trying to start a "quarrel"? Joe grunted: "It’s as if our opponents don’t know anything about science and they think that helps them somehow." "And we know that you and your ilk do not value open discussion." "That keith s can’t get that fact demonstrates he is not into an open discussion. keith s wants to dominate discussions with his strawmen, lies and misrepresentations." "Obviously you don’t know anything about science..." Reality
#39 addendum Ok, no need to click on any link. Here it is: *********************************************************** *********************************************************** *********************************************************** Very interesting summary written by gpuccio:
Indeed, what we see in research about cell differentiation and epigenomics is a growing mass of detailed knowledge (and believe me, it is really huge and daily growing) which seems to explain almost nothing. What is really difficult to catch is how all that complexity is controlled. Please note, at this level there is almost no discussion about how the complexity arose: we have really no idea of how it is implemented, and therefore any discussion about its origin is almost impossible. Now, there must be information which controls the flux. It is a fact that cellular differentiation happens, that it happens with very good order and in different ways in different species, different tissues, and so on. That cannot happen without a source of information. And yet, the only information that we understand clearly is the protein sequence information. Even the regulation of protein transcription at the level of promoters and enhancers by the transcription factor network is of astounding complexity. Please, look at this paper: Uncovering Enhancer Functions Using the α-Globin Locus. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4199490/pdf/pgen.1004668.pdf In particular Fig. 2. And this is only to regulate the synthesis of alpha globin in red cells, a very straightforward differentiation task. So, I see that, say, 15 TFs are implied in regulating the synthesis of one protein, I want to know why, and what controls the 15 TFs, and what information guides that control. My general idea is that, unless we find some completely new model, information that guides a complex process, like differentiation, in a reliable, repetitive way must be written, in some way, somewhere. That's what I want to know: where that information is written, how it is written, how does it work, and, last but not least, how did it originate? — gpuccio
*********************************************************** *********************************************************** *********************************************************** Dionisio
Please, forget for a moment all these discussions about stats calculations and all that interesting stuff. Please, pay attention to this: read carefully this very important message gpuccio wrote in another thread: https://uncommondescent.com/evolution/a-third-way-of-evolution/#comment-528351 That's all. Thank you. Dionisio
MT, tangential: we are discussing particular classes of functional specificity. Yes, at its own scale, the fine tuning of physics and cosmos enabling C-chemistry, aqueous-medium, terrestrial-planet-in-habitable-zone life is an interesting design inference issue, but that is not our focus here. As a simple point, break a grain of parrot fish poo [yup, that's what it really is . . . but nicely dried out as rock], and you still have grains of coral sand, just a bit smaller. Break a protein AA string and generally no functional protein. KF kairosfocus
F/N: Let's kick Axe's remarks into play from his recent (2010) paper, abstract and pp 9 - 11: ______________ ABSTRACT: >> Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a miniscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a care -ful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that rela-tively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence . . . >> Pp 5 - 6: >> . . . we need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontane-ous mutations to produce any new species-wide trait, meaning a trait that is fixed within the population through natural selection (i.e., selective sweep). Bacterial species are most conducive to this because of their large effective population sizes. 3 So let us assume, generously, that an ancient bacterial species sustained an effective population size of 10 ^10 individuals [26] while passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5 × 10 ^ 23 (= 5 × 10^ 9 x 10^4 x 10 ^10 ) cells that happen (by chance) to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5 × 10 ^23 ‘lucky survivors’ are the cells available for spontaneous muta-tions to accomplish whatever will be accomplished in the species. This number, then, sets the maximum probabilistic resources that can be expended on a single adaptive step. Or, to put this another way, any adaptive step that is unlikely to appear spontaneously in that number of cells is unlikely to have evolved in the entire history of the species. In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300 [27]). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions 4 , making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^ 23 . And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure. 
Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as 20 ^300 / 5×10^ 23 ~ 10 ^366 . In other words, we are supposing that particular functions requiring a 300-residue structure are real-izable through something like 10 ^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5×10 23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suit-able on average at any given position? The answer is calculated as the 300 th root of (5×10 23 ) -1 , which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoid-ing three or so unacceptable amino acids at each position along their lengths. No study of real protein functions suggests anything like this degree of indifference to sequence. In evaluating this, keep in mind that the indifference referred to here would have to charac-terize the whole protein rather than a small fraction of it. Natural proteins commonly tolerate some sequence change without com- plete loss of function, with some sites showing more substitutional freedom than others. But this does not imply that most mutations are harmless. Rather, it merely implies that complete inactivation with a single amino acid substitution is atypical when the start-ing point is a highly functional wild-type sequence (e.g., 5% of single substitutions were completely inactivating in one study [28]). This is readily explained by the capacity of well-formed structures to sustain moderate damage without complete loss of function (a phenomenon that has been termed the buffering effect [25]). Conditional tolerance of that kind does not extend to whole proteins, though, for the simple reason that there are strict limits to the amount of damage that can be sustained. A study of the cumulative effects of conservative amino acid substitutions, where the replaced amino acids are chemically simi-lar to their replacements, has demonstrated this [23]. Two unrelat-ed bacterial enzymes, a ribonuclease and a beta-lactamase, were both found to suffer complete loss of function in vivo at or near the point of 10% substitution, despite the conservative nature of the changes. Since most substitutions would be more disruptive than these conservative ones, it is clear that these protein functions place much more stringent demands on amino acid sequences than the above supposition requires. Two experimental studies provide reliable data for estimating the proportion of protein sequences that perform specified func -tions [--> note the terms] . One study focused on the AroQ-type chorismate mutase, which is formed by the symmetrical association of two identical 93-residue chains [24]. These relatively small chains form a very simple folded structure (Figure 5A). The other study examined a 153-residue section of a 263-residue beta-lactamase [25]. That section forms a compact structural component known as a domain within the folded structure of the whole beta-lactamase (Figure 5B). 
Compared to the chorismate mutase, this beta-lactamase do-main has both larger size and a more complex fold structure. In both studies, large sets of extensively mutated genes were produced and tested. By placing suitable restrictions on the al-lowed mutations and counting the proportion of working genes that result, it was possible to estimate the expected prevalence of working sequences for the hypothetical case where those restric-tions are lifted. In that way, prevalence values far too low to be measured directly were estimated with reasonable confidence. The results allow the average fraction of sampled amino acid substitutions that are functionally acceptable at a single amino acid position to be calculated. By raising this fraction to the power ?, it is possible to estimate the overall fraction of working se-quences expected when ? positions are simultaneously substituted (see reference 25 for details). Applying this approach to the data from the chorismate mutase and the beta-lactamase experiments gives a range of values (bracketed by the two cases) for the preva-lence of protein sequences that perform a specified function. The reported range [25] is one in 10 ^77 (based on data from the more complex beta-lactamase fold; ? = 153) to one in 10 ^53 (based on the data from the simpler chorismate mutase fold, adjusted to the same length: ? = 153). As remarkable as these figures are, par-ticularly when interpreted as probabilities, they were not without precedent when reported [21, 22]. Rather, they strengthened an existing case for thinking that even very simple protein folds can place very severe constraints on sequence. Rescaling the figures to reflect a more typical chain length of 300 residues gives a prevalence range of one in 10 ^151 to one in 10 ^104 . On the one hand, this range confirms the very highly many-to-one mapping of sequences to functions. The corresponding range of m values is 10 ^239 (=20 ^300 /10 ^151 ) to 10 ^286 (=20 ^300 /10 ^104 ), meaning that vast numbers of viable sequence possibilities exist for each protein function. But on the other hand it appears that these functional sequences are nowhere near as common as they would have to be in order for the sampling problem to be dis-missed. The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10 23 ). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.>> Pp 9 - 11: >> . . . If aligned but non-matching residues are part-for-part equivalents, then we should be able to substitute freely among these equivalent pairs without impair-ment. Yet when protein sequences were even partially scrambled in this way, such that the hybrids were about 90% identical to one of the parents, none of them had detectable function. Considering the sensitivity of the functional test, this implies the hybrids had less than 0.1% of normal activity [23]. So part-for-part equiva-lence is not borne out at the level of amino acid side chains. In view of the dominant role of side chains in forming the bind-ing interfaces for higher levels of structure, it is hard to see how those levels can fare any better. 
Recognizing the non-generic [--> that is specific and context sensitive] na-ture of side chain interactions, Voigt and co-workers developed an algorithm that identifies portions of a protein structure that are most nearly self-contained in the sense of having the fewest side-chain contacts with the rest of the fold [49]. Using that algorithm, Meyer and co-workers constructed and tested 553 chimeric pro-teins that borrow carefully chosen blocks of sequence (putative modules) from any of three natural beta lactamases [50]. They found numerous functional chimeras within this set, which clearly supports their assumption that modules have to have few side chain contacts with exterior structure if they are to be transport-Able. At the same time, though, their results underscore the limita-tions of structural modularity. Most plainly, the kind of modular-ity they demonstrated is not the robust kind that would be needed to explain new protein folds. The relatively high sequence simi-larity (34–42% identity [50]) and very high structural similarity of the parent proteins (Figure 8) favors successful shuffling of modules by conserving much of the overall structural context. Such conservative transfer of modules does not establish the ro-bust transportability that would be needed to make new folds. Rather, in view of the favorable circumstances, it is striking how low the success rate was. After careful identification of splice sites that optimize modularity, four out of five tested chimeras were found to be completely non-functional, with only one in nine being comparable in activity to the parent enzymes [50]. In other words, module-like transportability is unreliable even under extraordinarily favorable circumstances [--> these are not generally speaking standard bricks that will freely fit together in any freely plug- in compatible pattern to assemble a new structure] . . . . Graziano and co-workers have tested robust modularity directly by using amino acid sequences from natural alpha helices, beta strands, and loops (which connect helices and/or strands) to con-struct a large library of gene segments that provide these basic structural elements in their natural genetic contexts [52]. For those elements to work as robust modules, their structures would have to be effectively context-independent, allowing them to be com-bined in any number of ways to form new folds. A vast number of combinations was made by random ligation of the gene segments, but a search through 10^8 variants for properties that may be in-dicative of folded structure ultimately failed to identify any folded proteins. After a definitive demonstration that the most promising candidates were not properly folded, the authors concluded that “the selected clones should therefore not be viewed as ‘native-like’ proteins but rather ‘molten-globule-like’” [52], by which they mean that secondary structure is present only transiently, flickering in and out of existence along a compact but mobile chain. This contrasts with native-like structure, where secondary structure is locked-in to form a well defined and stable tertiary Fold . . . . With no discernable shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how per-vasive this problem is . . . 
Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of sequence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype. [--> Recall, courtesy Wiki, phenotype: “the composite of an organism's observable characteristics or traits, such as its morphology, development, biochemical or physiological properties, phenology, behavior, and products of behavior (such as a bird's nest). A phenotype results from the expression of an organism's genes as well as the influence of environmental factors and the interactions between the two.”] As a rough way of gauging how many new domains are typically required for new adaptive phenotypes, the SUPERFAMILY database [54] can be used to estimate the number of different protein domains employed in individual bacterial species, and the EcoCyc database [10] can be used to estimate the number of metabolic processes served by these domains. Based on analysis of the genomes of 447 bacterial species [11], the projected number of different domain structures per species averages 991 [12]. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli [13], provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway [14]. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities [15], something the neo-Darwinian model falls short of by a very wide margin. >> _______________ Those who argue for incrementalism or exaptation and fortuitous coupling or the like need to address these and similar issues. KF kairosfocus
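(A note on the range just quoted: footnote 15 is not reproduced above, but the one in 10^159 to one in 10^308 figures appear to follow from multiplying the per-domain prevalence exponents by the number of new domains needed. A minimal Python sketch of that reading, which is my inference rather than a quotation:)

    # If each new domain has functional prevalence between 1 in 10^53 and
    # 1 in 10^77 (the length-153 range quoted earlier), and a new pathway
    # needs three to four new domains, the joint prevalence multiplies:
    easiest = 3 * 53     # 159 -> one in 10^159
    hardest = 4 * 77     # 308 -> one in 10^308
    print(easiest, hardest)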
Andre @ 30
Proteins = Complex + Specified
Coral Sand = specified
Plutonium = complex
At the atomic level (ref comment @ 27) everything is complex (think electrons, muons, taus, quarks, gauge bosons) and specific (if not specific, the element will change to some other element). Me_Think
#33 Andre
Improbable and specified are the same characteristic?? I’m lost as to what you are trying to say; please clarify…..
I wrote that something can be A and B at the same time - for example, both improbable and specified (or large and red). That is not the same as saying they are the same characteristic. I also never wrote that something that has a chance value of 1 in 10^2500 is probable - but somehow you read that into something I wrote somewhere. If someone misunderstands what I have written I try to criticise myself for not being clear enough. But really I don't see how to be clearer in these cases. markf
@ Joe #13 'LoL!@ Me Think- All of science relies on observations- proper observations.' Oh no, Joe. As Dawkins pointed out so tellingly, things only appear to be what they appear to be. Empirical science is a busted flush. Axel
Mark Frank Improbable and specified are the same characteristic?? I'm lost as to what you are trying to say; please clarify..... Andre
Andre #22
So when something has a chance value of 1 in 10^2500 you deem it probable?
No. Where on earth did you get the idea I asserted that? Do you understand my point in #11 that something can be A and B at the same time when these are characteristics? markf
F/N: I probably should note that I do reckon with Darwinian evo mechanisms by pointing out the role of incremental hill climbing and its limitations in the context of the need to account for novel body plans that exhibit massive FSCO/I. The problem is that FSCO/I locks you to islands of function on the requisites of specific interactions to gain functionality, and to get the requisite complexity incrementally you have to cross seas of non-function, landing you back in the challenge of blind chance walks with drift that is not correlated with locations of islands of function. The notion, usually implicit, of a vast continent of functionality that is incrementally accessible in a branching tree pattern, lacks empirical warrant. Just look at the discussions on the challenge to ground the tree of life empirically over the past two years and you will see copious documentation, so again I have been strawmannised. Darwinian mechanisms may explain minor changes such as loss of eyes or wings, or finch beak variations or possibly industrial melanism and insecticide or drug resistance, typically by breaking things and facing a fitness cost, but not the creative origin of body plans requiring 10 - 100+ mn bases of fresh functionally co-ordinated genetic info. If you have an answer to this, the offer to host the essay is still open after two years and more. KF kairosfocus
Me_Think.....

Proteins = Complex + Specified
Coral Sand = specified
Plutonium = complex

The key here about specified complexity is that the parts are packed in a very specific way to produce a very specific effect. Andre
MT: observation is what sets the ball rolling and keeps it in bounds in science. I used to teach students O Hi PET: observe, hypothesise, infer and predict, empirically test, where of course there was a student in some of those classes called . . . Pet (now a medical doctor; I see and chat with her dad -- a retired Police Sergeant and MBE -- every so often). We are no longer in C19 when we could vaguely say protoplasm. We know that proteins depend on specifically sequenced AA strings, folding [with chaperoning and prions with scrapie and mad cow disease or even maybe Alzheimer's lurking in the wings], and coded numerically controlled machines. That one has a false negative (here: not recognising FSCO/I because of lack of proper instruments and work due to state of the art) and is not able as yet, c. C19, to make relevant observations does nothing to sidetrack the reality that we do have observed FSCO/I to deal with in the cell. Besides, c. 1804, Paley had long since put on the table the thought exercise of the time-keeping, self replicating watch as a context that was already deeply insightful and suggestive on the issues of FSCO/I. So even macro-level observations and a careful use of the vera causa principle would have counselled caution even then. And post 1953 - 1970, we no longer have any such excuses. We can and do make observations and analysis that point clearly to the FSCO/I in the cell, and we need to reflect on the issues of getting to FSCO/I. At OOL and again at origin of body plans. KF kairosfocus
Me_Think. We also know that natural laws are capable of making crystals....... Sonnets, not so much.... Ever seen a sonnet blown by the wind? Did the water perhaps write one? Gravity perhaps? The strong nuclear force? Has anything other than intelligence ever created a sonnet? Has this been observed? So in our uniform experience, we know natural forces cannot write sonnets...... Andre
Andre, Here's my question @ 21: Let's take this further. If we observe Proteins, Coral Sand and Plutonium at atomic level, which do you think will be more complex and which will have high dFSCI? Note: Atomic structures are specific and have complexity. Me_Think
Correct: So, why would a sonnet or any sentence in any script qualify for dFSCI calculation, whereas crystals don't? Me_Think
Me_Think Duh.... Crystals lack complexity! Andre
Andre @ 21,
The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity
So, why would a sonnet qualify for dFSCI calculation, whereas crystals don't? Me_Think
Here is a cool website to learn about probabilities in math...... http://www.mathsisfun.com/data/probability.html Andre
Mark F So when something has a chance value of 1 in 10^2500 you deem it probable? Andre
Me_Think. Leslie Orgel answered that already.... Did you not get the memo? "In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity." Andre
A level playing field means using the correct tools for each object. Joe
Joe, This is a thought experiment, which is pretty routine. If your aim is to detect design, shouldn't you give a level playing field? Let's take this further. If we observe Proteins, Coral Sand and Plutonium at atomic level, which do you think will be more complex and which will have high dFSCI? Me_Think
Me Think, Obviously you don't know anything about science as science requires proper observations. We do not use a microscope to observe planets. All objects require the proper observation tools. Joe
Joe @ 16 Shouldn't both structures be observed at equal resolution if the objective is to detect design? Me_Think
Me Think:
I still want to know why coral sand should be observed with the naked eye whereas protein should be observed using an EM for dFSCI calculation.
I told you why. Joe
keith s:
I know that you personally value open discussion,
And we know that you and your ilk do not value open discussion. CSI exists regardless of how it was formed. That keith s can't get that fact demonstrates he is not into an open discussion. keith s wants to dominate discussions with his strawmen, lies and misrepresentations. Joe
LoL!@ Me Think- All of science relies on observations- proper observations. :-) I still want to know why coral sand should be observed with the naked eye whereas protein should be observed using an EM for dFSCI calculation. Me_Think
LoL!@ Me Think- All of science relies on observations- proper observations. It's as if our opponents don't know anything about science and they think that helps them somehow. Joe
KF @ 9
MF, no. Complexity is directly observable and measurable as is especially functional specificity pivoting on interaction of properly arranged and coupled parts leading to islands of function in very large config spaces. The scope of implied config spaces and the resulting sparse blind needle in haystack search challenge to find islands of function are analytical, empirically testable consequences of observed FSCO/I.
Let's say I am observing both coral sand and protein using a microscope: the apparent complexity of the coral sand will be far greater than that of the protein structure (perhaps you can just make out the globular shape of the protein, no more). Only if I observe using an electron microscope will I know the structure of the protein in detail. So dFSCI depends on the resolution at which you observe a structure. So will you add an observation-resolution component to dFSCI? Me_Think
Andre - you do misunderstand what I am saying. Something can certainly be A and B at the same time when A and B are characteristics such as improbable and specified. In fact that is exactly what StephenB is asserting in 6. markf
Mark F Perhaps I misunderstand what you're trying to say, but something cannot be both A and B at the same time; the law of the excluded middle takes care of that in logic 101..... http://en.wikipedia.org/wiki/Law_of_excluded_middle Elementary, my dear Watson....... Andre
MF, no. Complexity is directly observable and measurable as is especially functional specificity pivoting on interaction of properly arranged and coupled parts leading to islands of function in very large config spaces. The scope of implied config spaces and the resulting sparse blind needle in haystack search challenge to find islands of function are analytical, empirically testable consequences of observed FSCO/I. So, there is no circularity or redundancy. Repeat, for emphasis: it is recognition of observed FSCO/I that entails, on analysis, the needle in haystack challenge, not the other way around. KF kairosfocus
It is all well and good to give Keith a platform to air his views, and it is also a good thing to discuss those views, but we are giving somebody who readily admits his own uncertainty way too much airtime..... Andre
#6 SB Define condition X as A AND B. Therefore, to determine whether X is true it is necessary to determine whether B is true (as well as A). Therefore, using the presence of X to detect B is superfluous, as we had to determine that B was true in order to find out whether X was true. markf
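(markf's schema can be put in code form; the predicates below are hypothetical stand-ins, purely to display the logical structure:)

    def is_A(x):
        return True    # stand-in: x is specified

    def is_B(x):
        return True    # stand-in: x is too improbable to arise unguided

    def is_X(x):
        # X is *defined* as the conjunction A AND B ...
        return is_A(x) and is_B(x)

    # ... so inferring B from X is superfluous: evaluating is_X(x)
    # already required evaluating is_B(x) directly.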
Dembski's argument: 1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes..... KeithS description: 1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes..... Do you notice anything missing in the second account? StephenB
VJT: I just add that claimed mechanisms for the origin of the underlying phenomenon, functionally specific complex interactive organisation and associated information (FSCO/I), should have observational warrant for the claimed ability. In the current GP sonnet thread, I commented at 358, picked up by GP at 443:
the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question.
That is, I point to the vera causa principle promoted by Newton. Namely, that causal factors of relevance need to be observed as adequate to the effect they are held to produce or enable. Or else, we are in a thicket of fast growing, unbridled, entangling metaphysical and/or ideological speculation. Were blind chance and mechanical necessity sufficient per observation to credibly account for FSCO/I or dFSCI beyond 500 - 1,000 bits, that would be decisive against the design inference on the world of life. This is of course the context of info generation challenges, and it is the root of questions regarding origin of cell based life and body plans by claimed blind watchmaker style mechanisms. Starting with the general point that interactive function, based on correct arrangements and coupling of parts, strongly constrains a functional entity in the space of possible clumped and scattered configs. Where, as Axe emphasises in the same paper, blind search, constrained by the atomic and temporal resources available, can only carry out a rather sparse search of the haystack of possibilities. At about 10^13 to 10^15 fast chemical reaction time events per second, with 10^57 atoms in the solar system and 10^80 in the cosmos we observe, that gives some 10^87 to 10^89 and 10^110 to 10^112 events in 10^17 s. The config spaces for 500 and 1,000 bits are 3.27×10^150 and 1.07×10^301 respectively. On fair comment, to date I have not seen an adequate answer by evolutionary materialists, and too often I have seen scientism and other ideological question begging substituting for the principle Galileo insisted on: scientific ideas should be empirically grounded, tied to observational reality. There's the tower of Pisa a-leaning, here are the two candidate balls, let's drop them over the side and see if each will fall about as fast. Or, is there a frictional factor at work that makes a difference? KF kairosfocus
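(The search-resource figures in the comment above can be verified with a short Python sketch; all inputs are the comment's own round numbers:)

    import math

    atoms_sol_system = 1e57
    atoms_cosmos     = 1e80
    rxn_per_sec      = 1e14          # midpoint of the 10^13 - 10^15 range
    seconds          = 1e17

    events_sol  = atoms_sol_system * rxn_per_sec * seconds   # ~10^88
    events_cosm = atoms_cosmos * rxn_per_sec * seconds       # ~10^111

    log10_space_500  = 500 * math.log10(2)    # ~150.5 (2^500 ~ 3.27*10^150)
    log10_space_1000 = 1000 * math.log10(2)   # ~301.0 (2^1000 ~ 1.07*10^301)

    # Fraction of the 500-bit space each search could sample, in log10:
    print(math.log10(events_sol) - log10_space_500)    # ~ -62.5
    print(math.log10(events_cosm) - log10_space_500)   # ~ -39.5, ~1 in 10^40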
Vincent, One more comment before I head to bed. You write:
Why Ockham’s razor fails to support Keith S’s claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life

In an effort to further discredit Intelligent Design, Keith S appeals to Ockham’s razor.
I don't remember invoking Ockham in that context. Could you refresh my memory by providing a quote and a link? Thanks. Good night. keith s
Hi Vincent, Thank you for your thoughtful OP. With my arguments being discussed in so many threads, it will be that much harder for Barry to ban me. You're helping to secure a place for me here at UD, at least temporarily. Thank you. :-) I know that you personally value open discussion, and I hope that our example will help persuade Barry that openness is a good policy. The banning of ID critics and the censorship of their dissenting views are harmful not only to the already tarnished reputation of UD, but also to the quality of the discussions taking place here. Let me respond tonight to your objection to my circularity argument, and tomorrow or over the weekend to your other points. You write:
I’m sorry to say that KeithS has badly misconstrued Dembski’s argument: he assumes that the “could not” in premise 1 refers to absolute impossibility, whereas in fact, it simply refers to astronomical improbability.
That's actually not true. By 'could not have been produced', I am speaking of practical impossibility, not absolute impossibility. Speaking of absolute impossibility would be silly, because the CSI equation takes the logarithm of the probability. If I were speaking of absolute impossibility, the probability would be zero and the logarithm would be undefined! Here is a comment that makes this clear. I posted it eight years ago(!) at UD, commenting as 'Karl Pfluger':
Dembski’s refinement runs into trouble, though, because he admits that to determine that a system has CSI, we must estimate the probability of its production by natural means. Systems with CSI have a low probability of arising through natural means. This renders the reasoning circular:

1. Some systems in nature cannot have been produced through undirected natural means.
2. Which ones? The ones with high CSI.
3. How do you determine the CSI of a system? Measure the probability that it was produced through undirected natural means. If the probability was vanishingly small, it has CSI.
4. Ergo, the systems that could not have been produced through undirected natural means are the ones which could not have been produced through undirected natural means.
PS There are a few places in the OP where dashes show up as question marks, at least in my browser. Do you see the same problem? PPS (Hi, KF!) - I was banned by DaveScot shortly after making that comment. My crime was that I corrected Dave's misconceptions regarding the transistor-level modeling of microprocessors. keith s
VJ: Thank you for this very good summary of very good arguments (yours, not keith's! :) ). I really appreciate your clear and impartial thoughts. gpuccio
VJ – a digestible OP – thanks. One point – I have no doubt Keith S will pick up the majority:
In my opinion, however, a much fairer question to ask would be: if we received a binary signal from outer space and decoded it into (say) ASCII code, only to find that it spelt out a Shakespearian sonnet, what would the odds be that it was generated via an unguided process? …… Using my analogy, we can certainly show that the odds of a binary signal from space spelling out a sonnet of any kind are less than 1 in 2^500.
No you can’t. You can show that the odds of a specific unguided process producing the string are less than 1 in 2^500. This is a quite different calculation. To calculate the odds of a binary signal from space spelling out a sonnet you would need some prior probabilities and to apply Bayes’ theorem. That follows as a mathematical certainty. As we lack any basis for prior probabilities it is an almost impossible calculation. Before Barry jumps in along the lines of “anyone who thinks it is probable is mad”: I absolutely accept that it would be extraordinary if the signal spelled out a sonnet in ASCII, and I would conclude that the signal was almost certainly in some way related to the sonnet (maybe one caused the other, or there was a common cause – not necessarily designed). I just think we should be clear about the basis for that conclusion. markf
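(To make the structure of markf's point concrete, here is a minimal Bayes sketch; every number in it is an arbitrary placeholder, since, as he says, we lack any basis for the real priors:)

    # Posterior P(unguided | sonnet) via Bayes' theorem. The likelihoods
    # and prior below are placeholders, not estimates of anything.
    def posterior_unguided(p_sonnet_if_unguided, p_sonnet_if_guided, prior_unguided):
        prior_guided = 1.0 - prior_unguided
        numerator = p_sonnet_if_unguided * prior_unguided
        evidence = numerator + p_sonnet_if_guided * prior_guided
        return numerator / evidence

    # Even with P(sonnet | unguided) = 2^-500, the posterior is driven
    # entirely by the other two inputs:
    print(posterior_unguided(2.0 ** -500, 1e-6, prior_unguided=0.999))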
