
Keith S in a muddle over meaning, macroevolution and specified complexity


One of the more thoughtful critics of Intelligent Design is Keith S, from over at The Skeptical Zone. Recently, Keith S has launched a barrage of criticisms of Intelligent Design on Uncommon Descent, to which I have decided to reply in a single post.

Is Dembski’s design inference circular?

Keith S’s first charge is that Intelligent Design proponents have repeatedly ignored an argument he put forward two years ago in a comment on a post at TSZ (19 October 2012, at 5:28 p.m.), purporting to show that Dr. William Dembski’s design inference is circular. Here is his argument:

I’ll contribute this, from a comment of mine in the other thread. It’s based on Dembski’s argument as presented in Specification: The Pattern That Signifies Intelligence.

Here’s the circularity in Dembski’s argument:

1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it could not have been produced by unguided evolution or any other unintelligent process.

7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

I’m sorry to say that KeithS has badly misconstrued Dembski’s argument: he assumes that the “could not” in premise 1 refers to absolute impossibility, whereas in fact, it simply refers to astronomical improbability. Here is Dr. Dembski’s argument, restated without circularity:

1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes.

2. We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

5. If the SC value exceeds the threshold, we conclude that it is certain beyond reasonable doubt that unintelligent processes did not produce the object. We deem it to have CSI and we conclude that it was designed.

6. To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it is certain beyond all reasonable doubt that it was not produced by unguided evolution or any other unintelligent process.
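For concreteness, steps 3 and 4 refer to the formula which, as I understand it, Dembski gives in his Specification paper (the factor of 10^120 is his bound on the probabilistic resources of the observable universe, and φ_S(T) measures the specificational resources associated with the pattern T):

$$\chi = -\log_2\!\big[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\big]$$

The smaller P(T|H) is, the larger χ becomes; on Dembski’s account, design is inferred when χ exceeds the threshold (χ > 1 in the paper).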

I conclude that KeithS’s claim that Dr. Dembski’s design argument is circular rests upon a misunderstanding of the argument.

Keith S’s bomb, and why it falls flat

Three weeks ago, on Barry Arrington’s post, titled No Bomb After 10 Years, KeithS put forward what he considered to be a devastating argument against Intelligent Design: that unguided evolution is literally trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things.

The argument, in a nutshell, goes like this:

1. We observe objective nested hierarchies (ONH)
2. Unguided evolution explains ONH
3. A designer explains ONH, but also a trillion alternatives.
4. Both unguided evolution and a designer are capable of causing ONH.
Conclusion: Unguided evolution is a trillion times better at explaining ONH.

I responded to this argument in my post, Why KeithS’s bomb is a damp squib, which made five points in reply to Keith S. My second point was as follows:

The problem is that KeithS has conflated two hypotheses: the hypothesis of common descent (which is very well-supported by the evidence that objective nested hierarchies exist in living things), and the hypothesis of unguided evolution (which he also claims is well-supported by the evidence that objective nested hierarchies exist in living things).
The first hypothesis is indeed well-supported by the evidence, as the only known processes that specifically generate unique, nested, hierarchical patterns are branching evolutionary processes. The probability that any other process would generate such hierarchies is vanishingly low.

But if KeithS wishes to argue against intelligently guided evolution, then the two alternative hypotheses he needs to consider are not:

A: a branching evolutionary process (also known as a Markov process) generated the objective nested hierarchies we find in living things; and

~A: an Intelligent Designer generated these objective nested hierarchies, but instead:

A: an unguided process generated the objective nested hierarchies we find in living things; and

~A: an intelligently guided process generated these objective nested hierarchies.

The point KeithS makes in his essay is that on hypothesis ~A, the likelihood of B (objective nested hierarchies in living things) is very low. However, it is also true that on hypothesis A, the likelihood of B is very low, as the vast majority of unguided processes don’t generate objective nested hierarchies.

KeithS’s reply here (in comment 76):

That’s not true.
In reality, mutation rates are low enough and vertical inheritance predominates enough that we can treat unguided evolution as a Markov process.

My reply:
Here, Keith S attempts to rebut my argument that “the vast majority of unguided processes don’t generate objective nested hierarchies” by pointing out (correctly) that the unguided evolution we observe during the history of animal life on Earth – if we ignore the prokaryotes here and focus on the 30 major taxa of animals, as Theobald does in his 29 Evidences for Macroevolution – is indeed a Markov process, since vertical inheritance predominates. However, this is not germane to the mathematical argument I put forward. The question is not whether a Markov process did indeed generate the 30 taxa of animals living on Earth, but rather whether the only unguided processes in Nature that would have been capable of generating various groups of animals on some planet harboring life were Markov processes (which are the only processes known to automatically generate objective nested hierarchies).
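As a side note, here is a toy simulation (my own illustration, not Keith S’s or Theobald’s) of why a branching process with predominantly vertical inheritance automatically yields a nested pattern: each daughter lineage inherits its parent’s sequence plus a few fresh mutations, so shared mutations nest by ancestry.

```python
import random

# Toy illustration (not a biological model): a branching process with purely
# vertical inheritance, where each daughter lineage copies its parent's
# sequence and adds a few fresh mutations.
random.seed(0)

def mutate(seq, n=3):
    """Return a copy of seq with n randomly chosen positions overwritten."""
    seq = list(seq)
    for _ in range(n):
        pos = random.randrange(len(seq))
        seq[pos] = random.choice("ACGT")
    return "".join(seq)

root = "".join(random.choice("ACGT") for _ in range(40))
left, right = mutate(root), mutate(root)            # first split
taxa = {
    "A": mutate(left),  "B": mutate(left),          # A and B share 'left' ancestry
    "C": mutate(right), "D": mutate(right),         # C and D share 'right' ancestry
}

def distance(x, y):
    """Count the positions at which two equal-length sequences differ."""
    return sum(a != b for a, b in zip(x, y))

# Pairwise distances typically recover the nesting: A is closer to B than to
# C or D, and C is closer to D than to A or B.
for name in "ABCD":
    print(name, [distance(taxa[name], taxa[other]) for other in "ABCD"])
```

On a typical run, the within-pair distances (A-B and C-D) come out smaller than the between-pair distances, which is the nested signal that cladistic methods pick up.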

For instance, we might imagine a natural process X that generates various types of animals on life-bearing planet Z, where these animals do not exhibit objective nested hierarchies. This is just as fair – or just as unfair – as Keith S arguing that an Intelligent Designer might have produced various types of animals which did not exhibit objective nested hierarchies.

The only way for Keith S to refute the hypothetical scenario I proposed would be to argue that life-forms which did not exhibit objective nested hierarchies would not be viable (over the long-term), for some reason – which implies that the only life-forms we are likely to find in the cosmos are ones which do exhibit these hierarchies. But if that were the case, then the same argument would explain equally well why a Designer would refrain from making life-forms which did not exhibit objective nested hierarchies. And in that case, the Designer hypothesis explains the presence of objective nested hierarchies in living things just as well as the hypothesis of unguided evolution.

Why Ockham’s razor fails to support Keith S’s claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life

In an effort to further discredit Intelligent Design, Keith S appeals to Ockham’s razor. I’ll address that argument in a moment; for now, let’s just suppose (for the sake of argument) that Keith S is right, and that Intelligent Design is a redundant hypothesis when it comes to explaining the properties of living things. Even if that were the case, that’s not the same thing as the mathematical claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life. (We don’t say, for instance, that the hypothesis that angels push the planets round the Sun is trillions of times worse than the hypothesis that they are moved by the forces postulated in Newtonian mechanics; we just say that we have no need for the former hypothesis.) Ockham’s razor is a non-quantitative device for eliminating unnecessary explanations; hence it cannot be used to support quantitative claims regarding the superiority of one hypothesis over another.

I conclude that Keith S’s appeals to Ockham’s razor are completely beside the point. Even if he is right – and as we’ll see below, there are excellent grounds for thinking that he isn’t – the mathematical argument against Intelligent Design is invalid.

Keith S’s Fourfold Challenge and the Rain Fairy

And now, without further ado, let’s have a look at Keith S’s Fourfold Challenge (see also here):

Some more questions for the ID supporters out there:

1. Bob is walking through the desert with his friend, a geologist. They come across what appears to be a dry streambed. After some thought, Bob states that every rock, pebble, grain of sand and silt particle was deliberately placed in its exact position by a Streambed Designer. His friend says “That’s ridiculous. This streambed has exactly the features we would expect to see if it was created by flowing water. Why invoke a Streambed Designer?”

Who has the better theory, Bob or his friend?

2. Bob is invited to the scene of an investigation by a friend who is an explosive forensics expert. They observe serious damage radiating out in all directions from a central point, decreasing with distance, as if an explosion had taken place. Bob’s friend performs some tests and finds large amounts of explosive residue. Bob says, “Somebody went to a lot of trouble to make it look like there was an explosion here. They even planted explosive residue on the scene! Of course, there wasn’t really an explosion.”

Who has the better theory, Bob or his friend?

3. Bob and another friend, an astronomer, observe the positions of the planets over several years. They determine that the planets are moving in ellipses, with the sun at one of the foci. Bob says, “Isn’t that amazing? The angels pushing the planets around are following exactly the paths that the planets would have followed if gravity had been acting on them!” The astronomer gives Bob a funny look and says “Maybe gravity is working on those planets, with no angels involved at all. Doesn’t that seem more likely to you?”

Who has the better theory, Bob or his friend?

4. Bob is hanging out at the office of a friend who is an evolutionary biologist. The biologist shows Bob how the morphological and molecular data establish the phylogenetic tree of the 30 major taxa of life to an amazing accuracy of 38 decimal places. “There couldn’t be a better confirmation of unguided evolution,” the biologist says. “Don’t be ridiculous,” Bob replies. “All of those life-forms were clearly designed. It’s just that the Designer chose to imitate unguided evolution, instead of picking one of the trillions of other options available to him.”

Who has the better theory, Bob or his friend?

Share your answers with us. Did your answers to the four questions differ? If so, please explain exactly why.
And ponder this: If you are an ID supporter, then you are making exactly the same mistake as Bob does in the four examples above, using the same broken logic. Isn’t that a little embarrassing? It might be time to rethink your position.

And don’t forget the Rain Fairy.

Keith S describes the Rain Fairy hypothesis here:

The only designer hypothesis that fits the evidence is one in which the designer mimics (by desire, coincidence, or limitation) the patterns of unguided evolution. The only Rain Fairy hypothesis that fits the evidence is one in which the Rain Fairy mimics (by desire, coincidence, or limitation) the patterns of unguided meteorology. Any reasonable person will reject the Rain Fairy and Designer hypotheses in favor of their competitors, which explain the evidence far, far better.

I’d like to make two points in reply. The first is that there is an overarching natural hypothesis which explains all of the features of the non-biological phenomena which figure in KeithS’s examples: streambeds, chemical explosions, the movement of the planets and weather patterns. By contrast, in Keith S’s example relating to the tree of life, the Darwinian hypothesis of branching evolution explains only the patterns we find in the tree of life. It does not explain the other features of living things. In other words, Darwinian evolution (or mutation-driven evolution, for that matter) needs to be able to provide a comprehensive theory of living things and their properties, before we can confidently declare that we have no need for the hypothesis of Intelligent Design.

The second (and related) point I’d like to make with respect to the Rain Fairy example is that meteorological phenomena exhibit no patterns with a high degree of specified complexity – and even if they did, none of these patterns is functional. The biological world, by contrast, is rife with patterns exhibiting a high degree of functional specified complexity – proteins, for instance. Hence the Rain Fairy analogy does not hold.

Why ID supporters would not be fazed if an unguided process could be shown to have generated the objective nested hierarchy found in animals

But let us be generous, and suppose (for argument’s sake) that Keith S can come up with a good natural reason showing why (a) the only kinds of animals that are likely to be generated on a life-bearing planet by unguided processes will be ones exhibiting objective nested hierarchies, whereas (b) an Intelligent Designer would not be bound by such constraints. Even so, Keith S’s argument is still vulnerable to the third objection which I listed in my post, Why KeithS’s bomb is a damp squib:

My third point is that KeithS’s argument assumes that the genetic and morphological features on the basis of which living things are classified into objective nested hierarchies were generated by the same (unguided, Markovian) processes which generate the branches in the hierarchies. This is unlikely, even on a standard evolutionary view: features take time to evolve, and therefore would presumably have appeared at some time subsequent to the branch nodes themselves. Thus it could well be the case that while unguided processes explain the existence of objective nested hierarchies in the living world, guided processes are required to explain some or all of the features in these hierarchies. (Italics added – VJT.)

Features that might need to be explained by guided processes include new proteins appearing in animals, as well as new cell types in distinct lineages of animals and the appearance of new control hierarchies regulating body plans in animals.

Unfortunately, KeithS’s reply here (in comment 89 on my post) misses the point I was trying to make:

I’m not sure why you think this is an issue. The taxa in a cladogram are always at the ends of the branches, never at the nodes.

It isn’t enough to show that guided processes might be involved. You need to show that they must be involved, because otherwise you are still at the trillions-to-one disadvantage.

In his first sentence, Keith S makes a valuable concession, without realizing it. He concedes that the processes which generated the branches in the tree of animal life need not be the same as the processes which generated the features which distinguish the various types of animals. Hence it could be the case that the former are unguided, while the latter are guided. That was the point I wished to make. Arguing against Intelligent Design by appealing to the branching process which generated the tree of life is futile, because ID advocates don’t regard the branching process as evidence of intelligent design in the first place. In other words, even if unguided evolution is trillions of times better than Intelligent Design at explaining the objective nested hierarchies which characterize living things, ID advocates can still answer: “So what? At best, you’ve shown that the unguided branching processes are a better explanation for objective nested hierarchies in living things; but you’ve failed to demonstrate that these processes are sufficient to explain the characteristics of living things.”

Keith S goes on to point out, correctly, that “It isn’t enough to show that guided processes might be involved.” Intelligent Design proponents need to show that guided processes must be involved in generating these features. He spoils his argument somewhat by referring to the “trillions-to-one disadvantage” which the Intelligent Design hypothesis allegedly suffers from (and which I’ve discredited above). Nevertheless, Ockham’s razor alone would suffice to rule Intelligent Design out of court, unless ID advocates could demonstrate the insufficiency of unguided processes to explain the biological features of animal life. So the question we need to answer is: are there any barriers to the evolution of the 30 major groups of animals, via unguided processes?

Barriers to macroevolution – they’re real!

Keith S rightly contends that the onus is on the Intelligent Design proponent to demonstrate the existence of barriers to macroevolution. My recent post, titled Barriers to macroevolution: what the proteins say, described one such barrier: the evolution of proteins. (As any biochemist will tell you, there are many kinds of proteins which are unique to each of the 30 major taxa of animals, so this problem is quite separate from the origin-of-life problem.) I’ll quote just the first three paragraphs of my post:

KeithS has been requesting scientific evidence of a genuine barrier to macroevolution. The following is a condensed, non-technical summary of Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds. Since (i) proteins are a pervasive feature of living organisms, (ii) new proteins and new protein folds have been continually appearing throughout the four-billion-year history of life on Earth, and (iii) at least some macroevolutionary events must have involved the generation of new protein folds, it follows that if Dr. Axe’s argument is correct and neo-Darwinian processes are incapable of hitting upon new functional protein folds, then there are indeed genuine barriers to macroevolution, in at least some cases. The argument put forward by Dr. Axe is robustly quantifiable, and it is fair to say that Dr. Axe carefully considers the many objections that might be put forward against his argument. If there is a hole in his logic, then I defy KeithS to find it.

Finally I would like to thank Dr. Axe for putting his paper online and making it available for public discussion. The headings below are my own; the text is entirely taken from his paper.

Abstract

Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem – the sampling problem – was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.
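As an aside, the “sampling problem” described in the abstract has a very simple arithmetical skeleton. The sketch below is a toy calculation of my own; the two input numbers are placeholders rather than figures taken from Dr. Axe’s paper, so readers should substitute whatever estimates they regard as defensible.

```python
# Toy sketch of the "sparse search" arithmetic. The two inputs below are
# illustrative placeholders, NOT figures from Axe's paper: plug in whatever
# estimates one accepts for each quantity.
trials_available = 1e40      # assumed: total relevant mutational trials available to the search
functional_fraction = 1e-70  # assumed: fraction of sequence space yielding a new functional fold

expected_hits = trials_available * functional_fraction
print(f"Expected number of new folds found by blind search: {expected_hits:.1e}")

# If expected_hits is very much less than 1, the argument concludes that a
# blind search is too sparse to be credited with discovering the fold.
```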

I then issued a further invitation to Keith S to respond in a subsequent comment:

KeithS,
I only have a few minutes, but I’d like to say that you are welcome to post scientific criticisms of Dr. Axe’s argument on this thread, if you have any.

Another commenter on the thread invited him to do the same:

I think that you would gain much credibility with many, if you were to take that advice. Why not start with scientific responses to the issues raised in “Barriers to Macroevolution: what the proteins say”.

And what was KeithS’s response? An appeal to circular, blatantly question-begging logic!

If you’ve been following UD lately, you’ll know that I have presented an argument demonstrating that ID is literally trillions of times worse at explaining the evidence when compared to unguided evolution.

And I’ve been trying to tell Keith S that the evolution of proteins constitutes such a barrier, by appealing to the paper by Dr. Douglas Axe from which I quoted above.

To my dismay and disappointment, the rest of my thread on Barriers to macroevolution was taken up with an arcane discussion of censorship of previous posts on Uncommon Descent, which is neither here nor there.

I repeat my challenge: can Keith S kindly tell me what’s wrong with the reasoning in Dr. Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, which I summarized in a non-technical form in my recent post?

In a muddle over meaning

Not content with leaving matters there, Keith S issued a challenge of his own over at gpuccio’s post, An attempt at computing dFSCI for English language. In his post, gpuccio wrote:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB [Universal Probability Bound – VJT]. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

I don’t want to discuss the mathematics behind gpuccio’s calculation here, except to say that it erred unduly on the side of generosity, in conceding the existence of a pool of 200,000 English words (an under-estimate, by the way), and asking what percentage of 600-letter sequences made up entirely of these words would constitute a meaningful sonnet. Some commenters objected that there isn’t a clear black-and-white dividing line between meaningful poetry and meaningless strings of words which obey the rules of English syntax, as the history of the Ern Malley hoax shows. But let’s face it: if we saw a message with the words, “Colorless green ideas sleep furiously” written 100 times, we’d all conclude that it was designed, either directly (by a human being) or indirectly (by a computer programmed by a human being).
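For readers who want to see where gpuccio’s numbers come from, here is a rough sketch of the arithmetic. The 32-symbol alphabet (about 5 bits per character) is my own assumption, chosen simply to reproduce his “about 3000 bits” figure; the 2^2500 bound on the target space is the one he assumes in his post.

```python
import math

# Back-of-the-envelope reconstruction of gpuccio's dFSCI estimate for a
# 600-character sonnet. The alphabet size is an assumption; the 2^2500 upper
# bound on "meaningful" sequences is the figure gpuccio assumes in his post.
length = 600
alphabet_size = 32                                       # assumed: letters, space, punctuation
search_space_bits = length * math.log2(alphabet_size)    # 3000 bits ("about 3000")

target_space_bits = 2500                                 # assumed upper bound from the post
functional_complexity_bits = search_space_bits - target_space_bits   # 500 bits

# On these rounded figures the functional complexity reaches Dembski's 500-bit
# universal probability bound, which is what licenses the design inference in
# gpuccio's argument.
print(search_space_bits, functional_complexity_bits)
```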

In my opinion, however, a much fairer question to ask would be: if we received a binary signal from outer space and decoded it into (say) ASCII code, only to find that it spelt out a Shakespearean sonnet, what would the odds be that it was generated via an unguided process? I believe this example is a more appropriate one, as it doesn’t start with a pool of words, or even letters, but with simple binary signals which can be used to make letters, which can be arranged into English words, which can in turn be arranged into meaningful sentences. And even if the boundary between meaningful and meaningless sentences is a little blurry at times, the boundary between syntactically valid sentences and sentences with bad syntax is a lot clearer and less ambiguous. Using my analogy, we can certainly show that the odds of a binary signal from space spelling out a sonnet of any kind are less than 1 in 2^500.

And what was Keith S’s devastating reply to gpuccio? The main points that he makes can be found in comments 9, 11 and 13 on gpuccio’s post. I’ll address them one at a time.

gpuccio,

We can use your very own test procedure to show that dFSCI is useless.

Procedure 1:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.

Procedure 2:

1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.

The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing.

Even your own test procedure shows that dFSCI is useless, gpuccio.

Keith S’s argument misses the point here. What he fails to ask is: why did we choose 600 characters as a cutoff point and not six? Because we can show that unguided processes are fully capable of generating six-character strings, like “Stop it”.
If I discovered a binary signal from outer space that spelt out these characters when converted into ASCII, I certainly would not conclude that it was designed.

On the other hand, we can calculate that the probability of unguided processes coming up with a meaningful 600-character string is so low that we would not expect this event to happen even once in the history of the observable cosmos – in other words, the probability is less than 1 in 2^500, or 1 in 10^150. Since the string in question is specified (as it has a semantic meaning), a design inference is warranted.
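For the record, the two ways of stating the bound are related by a simple change of logarithm base:

$$2^{500} = 10^{\,500\log_{10}2} \approx 10^{150.5} \approx 3.27\times10^{150},$$

so an event with probability below 1 in 2^500 is also, a fortiori, rarer than 1 in 10^150.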

Keith S continues:

gpuccio,

We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless.

The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologists thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless.

There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular.

What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular.

dFSCI is a fiasco.

Gpuccio’s calculations were perfectly appropriate for the class of entities he was discussing – namely, character strings. Character strings are not alive, so they are incapable of evolving by the non-random process of natural selection.
In addition, natural selection does not select for semantic meaning; what it selects for is functionality. The latter can be refined over the course of time by evolution, whereas the former cannot, as unguided evolution is blind to it.
Of course, that leaves us with the question of whether gpuccio’s post can be used to undermine the theory of evolution by natural selection. But gpuccio never discussed that question in his post, which was simply an attempt to calculate the dFSCI in a Shakespearean sonnet.

Finally, Keith S writes:

Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it.

KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.

Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s.

All three concepts are fatally flawed and cannot be used to detect design.

I repeat: if Keith S wants a decent probabilistic calculation which takes account of “Darwinian and other material mechanisms”, then why doesn’t he respond to the probability calculations contained in the paper I cited above by Dr. Axe (see pages 10 and 11), which is titled “The Case Against a Darwinian Origin of Protein Folds”? Do that, Keith S, and then we’ll talk.

Comments
Me_Think (November 14, 2014, 03:09 AM PDT):
LoL!@ Me Think- All of science relies on observations- proper observations. :-)
I still want to know why coral sand should be observed with naked eyes where as protein should be observed using EM for dFSCI calculation.
Joe (November 14, 2014, 03:06 AM PDT):
LoL!@ Me Think- All of science relies on observations- proper observations. It's as if our opponents don't know anything about science and they think that helps them somehow.
Me_Think (November 14, 2014, 03:04 AM PDT):
KF @ 9:
MF, no. Complexity is directly observable and measurable as is especially functional specificity pivoting on interaction of properly arranged and coupled parts leading to islands of function in very large config spaces. The scope of implied config spaces and the resulting sparse blind needle in haystack search challenge to find islands of function are analytical, empirically testable consequences of observed FSCO/I.
Let's say I am observing both coral sand and protein using a microscope, the complexity of coral sand will be far more than the protein structure (perhaps you can just make out globular shape of protein -no more). Only if I observe using an Electron microscope will I know the structure of protein in detail. So dFSCI depends on what resolution you observe a structure. So will you add a observation resolution component to dFSCI ?
markf (November 14, 2014, 02:58 AM PDT):
Andre - you do misunderstand what I am saying. Something can certainly be A and B at the same time when A and B are characteristics such as improbable and specified. In fact that is exactly what StephenB is asserting in 6.
Andre (November 14, 2014, 02:33 AM PDT):
Mark F Perhaps I miss understand what you're trying to say but something can not be both A and B at the same time, the law of the excluded middle takes care of that in logic 101..... http://en.wikipedia.org/wiki/Law_of_excluded_middle Elementary my dear Watson.......
kairosfocus (November 14, 2014, 02:33 AM PDT):
MF, no. Complexity is directly observable and measurable as is especially functional specificity pivoting on interaction of properly arranged and coupled parts leading to islands of function in very large config spaces. The scope of implied config spaces and the resulting sparse blind needle in haystack search challenge to find islands of function are analytical, empirically testable consequences of observed FSCO/I. So, there is no circularity or redundancy. Repeat, for emphasis: it is recognition of observed FSCO/I that entails, on analysis, the needle in haystack challenge, not the other way around. KF
Andre (November 14, 2014, 02:30 AM PDT):
It is all good and well to give Keith a platform to air his views, it is also a good thing to discuss those views, but we are giving somebody who readily admits his own uncertainty way too much airtime.....
markf (November 14, 2014, 02:18 AM PDT):
#6 SB
Define condition X as A AND B.
Therefore to determine if X is true it is necessary to determine B is true (as well as A).
Therefore to use the presence of X to detect B is superfluous as we had to determine B was true to find out if X was true.
StephenB (November 14, 2014, 02:09 AM PDT):
Dembski's argument: 1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes.....
KeithS description: 1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.....
Do you notice anything missing in the second account?
kairosfocus (November 14, 2014, 01:44 AM PDT):
VJT: I just add, that claimed mechanisms for origin of the underlying phenomenon, functionally specific complex interactive organisation and associated information (FSCO/I) should have observational warrant for the claimed ability. In the current GP sonnet thread, I commented at 358, picked up by GP at 443:
the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question.
That is, I point to the vera causa principle promoted by Newton. Namely, that causal factors of relevance need to be observed as adequate to the effect they are held to produce or enable. Or else, we are in a thicket of fast growing unbridled entangling metaphysical and/or ideological speculation. Were blind chance and mechanical necessity sufficient per observation to credibly account for FSCO/I or dFSCI beyond 500 - 1,000 bits, that would be decisive against the design inference on the world of life. This is of course the context of info generation challenges, and it is the root of questions regarding origin of cell based life and body plans by claimed blind watchmaker style mechanisms. Starting with the general point that interactive function based on correct arrangements and coupling of parts, strongly constrains a functional entity in the space of possible clumped and scattered configs. Where, as Axe emphasises in the same paper, blind search constrained by atomic and temporal resources available, can only carry out a rather sparse search of the haystack of possibilities. At about 10^13 - 15 fast Chem rxn time events per second, with 10^57 atoms in sol system and with 10^80 in the cosmos we observe, 10^87 or 89 and 10^110 or 112 or so events in 10^17 s. The config space for 500 or 1000 bits is as 3.27*10^150 and 1.07*10^301. On fair comment, to date I have not seen an adequate answer by evolutionary materialists, and too often I have seen scientism and other ideological question begging substituting for the principle Galileo insisted on: scientific ideas should be empirically grounded, tied to observational reality. There's the tower of Pisa a-leaning, here are the two candidate balls, let's drop them over the side and see if each will fall about as fast. Or, is there a frictional factor at work that makes a difference? KF
keith s (November 14, 2014, 12:18 AM PDT):
Vincent, One more comment before I head to bed. You write:
Why Ockham’s razor fails to support Keith S’s claim that ID is trillions of times worse than unguided evolution at explaining the objective nested hierarchy of life
In an effort to further discredit Intelligent Design, Keith S appeals to Ockham’s razor.
I don't remember invoking Ockham in that context. Could you refresh my memory by providing a quote and a link? Thanks. Good night.
keith s (November 13, 2014, 11:42 PM PDT):
Hi Vincent, Thank you for your thoughtful OP. With my arguments being discussed in so many threads, it will be that much harder for Barry to ban me. You're helping to secure a place for me here at UD, at least temporarily. Thank you. :-) I know that you personally value open discussion, and I hope that our example will help persuade Barry that openness is a good policy. The banning of ID critics and the censorship of their dissenting views are harmful not only to the already tarnished reputation of UD, but also to the quality of the discussions taking place here. Let me respond tonight to your objection to my circularity argument, and tomorrow or over the weekend to your other points. You write:
I’m sorry to say that KeithS has badly misconstrued Dembski’s argument: he assumes that the “could not” in premise 1 refers to absolute impossibility, whereas in fact, it simply refers to astronomical improbability.
That's actually not true. By 'could not have been produced', I am speaking of practical impossibility, not absolute impossibility. Speaking of absolute impossibility would be silly, because the CSI equation takes the logarithm of the probability. If I were speaking of absolute impossibility, the probability would be zero and the logarithm would be undefined! Here is a comment that makes this clear. I posted it eight years ago(!) at UD, commenting as 'Karl Pfluger':
Dembski’s refinement runs into trouble, though, because he admits that to determine that a system has CSI, we must estimate the probability of its production by natural means. Systems with CSI have a low probability of arising through natural means. This renders the reasoning circular: 1. Some systems in nature cannot have been produced through undirected natural means. 2. Which ones? The ones with high CSI. 3. How do you determine the CSI of a system? Measure the probability that it was produced through undirected natural means. If the probability was vanishingly small, it has CSI. 4. Ergo, the systems that could not have been produced through undirected natural means are the ones which could not have been produced through undirected natural means.
PS There are a few places in the OP where dashes show up as question marks, at least in my browser. Do you see the same problem?
PPS (Hi, KF!) - I was banned by DaveScot shortly after making that comment. My crime was that I corrected Dave's misconceptions regarding the transistor-level modeling of microprocessors.
gpuccio (November 13, 2014, 10:53 PM PDT):
VJ: Thank you for this very good summary of very good arguments (yours, not keith's! :) ). I really appreciate your clear and impartial thoughts.
markf (November 13, 2014, 10:47 PM PDT):
VJ – a digestible OP – thanks.
One point – I have no doubt Keith S will pick up the majority:
In my opinion, however, a much fairer question to ask would be: if we received a binary signal from outer space and decoded it into (say) ASCII code, only to find that it spelt out a Shakespearian sonnet, what would the odds be that it was generated via an unguided process? …… Using my analogy, we can certainly show that the odds of a binary signal from space spelling out a sonnet of any kind are less than 1 in 2^500.
No you can’t. You can show that the odds of a specific unguided process producing the string is less than 1 in 2^500. This is a quite different calculation. To calculate the odds of a binary signal from space spelling out a sonnet you would need some prior probabilities and apply Bayes theorem. That follows as a mathematical certainty. As we lack any basis for prior probabilities it is an almost impossible calculation. Before Barry jumps in on the lines of “anyone who thinks it is probable is mad”: I absolutely accept that it would be extraordinary if the signal spelled out a sonnet in ASCII and would conclude that the signal was almost certainly in some way related to the sonnet (maybe one caused the other, or there was a common cause – not necessarily designed). I just think we should be clear about the basis for that conclusion.
