Uncommon Descent Serving The Intelligent Design Community

“Actually Observed” Means, Well, “Actually Observed”


In a comment to a recent thread I made the following challenge to the materialists:

Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism. . . .

There is no need to form any hypothesis whatsoever to meet the challenge. The provenance of the example of CSI that will meet the challenge will be ACTUALLY KNOWN. That is why I put the part about question begging in there. It is easy for a materialist to say “the DNA code easily has more than 500 bits of CSI and we know that it came about by chance/law forces.” Of course we know no such thing. Materialists infer it from the evidence, but that is not the only possible explanation.

Let me give you an example. If you watch me put 500 coins on a table and I turn all of them “heads” up, you will know that the provenance of the pattern is “intelligent design.” You do not have to form a chance hypothesis and see if it is rejected. You sat there and watched me. There is no doubt that the pattern resulted from intelligent agency.

My challenge will be met when someone shows a single example of chance/law forces having been actually observed creating 500 bits of CSI.

R0bb responded not by meeting the challenge (no surprise there) but by suggesting I erred when I said CSI can be “assessed without a chance hypothesis.” (And later keith s adopted this criticism).

I find this criticism odd to say the least. The word “hypothesis” means:

A proposition . . . set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis) or accepted as highly probable in the light of established facts.

It should be obvious from this definition that we form a hypothesis regarding a phenomenon only when the cause of the phenomenon is unknown, i.e., has not been actually observed. As I said above, in my coin example there is no need to form any sort of hypothesis to explain the cause of the coin pattern. The cause of the coin pattern is actually known.

I don’t know why this is difficult for R0bb to understand, but there you go. To meet the challenge, the materialists will have to show me where a chance/law process was “actually observed” to have created 500 bits of CSI. Efforts have been made. All have failed. The now defunct infinite monkeys program is just one example: it took 2,737,850 million billion billion billion monkey-years to get the first 24 characters from Henry IV, Part 2.

 

UPDATE:

R0bb responds at comment 11:

That’s certainly true, but we’re not trying to explain the cause of the coin pattern. We’re trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?

To which I responded:

1. Suppose you watched me arrange the coins. You see a highly improbable (500 bits) pattern conforming to a specification. Yes, it has CSI.

2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.

So you see, we’ve actually observed the cause of each pattern. The specification was achieved in scenario 1 by an intelligent agent with a few minutes’ effort. In scenario 2 the specification was never achieved from the moment of the big bang until now.
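A rough calculation makes scenario 2 concrete. The sketch below is only illustrative: it assumes one 500-coin toss per second and roughly 13.8 billion years since the big bang, and simply compares the number of trials available with the number of possible head/tail patterns.

```python
SECONDS_SINCE_BIG_BANG = 13.8e9 * 365.25 * 24 * 3600   # about 4.4e17 one-per-second trials
TOTAL_PATTERNS = 2 ** 500                               # about 3.27e150 possible H/T sequences

# The chance of ever seeing the all-heads pattern is at most (trials) x (probability per trial).
upper_bound = SECONDS_SINCE_BIG_BANG / TOTAL_PATTERNS

print(f"Trials since the big bang:   {SECONDS_SINCE_BIG_BANG:.2e}")
print(f"Possible 500-coin patterns:  {float(TOTAL_PATTERNS):.2e}")
print(f"Chance of ever seeing 500 H: {upper_bound:.2e}")
```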

The essence of the design inference is this: Chance/law forces have never been actually observed to create 500 bits of specified information. Intelligent agents do so routinely. When we see 500 bits of specified information, the best explanation (indeed, the only explanation that has actually been observed to be a vera causa) is intelligent agency.

To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.

 

Comments
Jerad #178:
I didn’t say I wouldn’t investigate. I’ve been very clear about that.
Right, but I'm arguing that you can go straight from the 500-head observation to the conclusion that something fishy is going on. In other words, even if you couldn't investigate and couldn't repeat the trial, you would still be justified in concluding that the coin and/or the flipping weren't fair.
BUT you would need the same number of coins to be sure to force any other sequence of 500 Hs and Ts. You could force any sequence you want with the same approach.
Sure. That's why I wrote this:
We know that under the fairness assumption, the probability of getting a “special” pattern is only n in 2^500, where n is the number of patterns that you would consider special.
The probability of each 500-flip sequence is the same. The "content" only matters for deciding whether the sequence falls into the "special" category or the "not special" category. The "special" category can be quite fluid. If I sit down at my computer and use it to generate a pseudorandom 500-flip sequence, and then proceed to run the actual 500-flip experiment, I will know something fishy is going on if I flip the same sequence that my computer just gave me. In other words, a completely random-looking sequence can become "special" simply because I choose to designate it as special. As long as the "special" category is small enough relative to the "not special" category, we can conclude that something fishy is going on every time we get a "special" 500-flip sequence. keith s
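A minimal sketch of the point about pre-designated sequences, using Python's pseudorandom generator in place of real coins (an assumption for illustration only): the target is fixed before the trial, so it is "special" only because it was designated in advance, yet the chance of matching it by luck is still 1 in 2^500.

```python
import random

rng = random.Random(0)  # any seed; a stand-in for "my computer generated a sequence"

target = [rng.choice("HT") for _ in range(500)]   # designated as "special" before the trial
trial = [rng.choice("HT") for _ in range(500)]    # the actual 500-flip experiment

# With n pre-designated special sequences, the chance of hitting one by luck is n / 2^500.
print("Matched the pre-designated sequence:", trial == target)
print("Chance of that happening by luck: %.2e" % (1 / 2 ** 500))
```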
Joe, Dr Dembski has made a claim. It's up to a claimant to uphold and defend their position. As Dr Dembski has abandoned his position (for the most part) and now spends most of his time, as far as I understand it, teaching theology, it would seem that the hypothesis presented in his 2005 paper is now an orphan dying a death of attrition and neglect. Regardless, biologists and mathematicians are under no obligation to provide anything to uphold or defend or explain their fields or views because of this one, defunct outpost. Dr Dembski planted a flag and then neglected to protect it. If you want to rally around it then it's up to you to find the evidence and prove its validity. Can you do that? Time to put up or shut up. Jerad
Joe #179, 180
Our position has to match Dr Dembski’s definition?
LoL! It isn’t Dembski’s definition. Your position is the chance hypothesis. Deal with it.
Dr Dembski defined an assumption, a hypothesis without running it past evolutionary biologists (or mathematicians for that matter) and now you expect the biological world to even care? Do you realise how arrogant that sounds? I've already pointed out that natural selection is not a chance/random process. Perhaps if Dr Dembski wanted to come up with a viable model of evolutionary processes he should have spoken to some people who know those processes. I shall also point out that his paper has been reviewed by biologists and mathematicians and they found it to be incorrect in many ways. Maybe you should deal with that eh?
Dembski got it from peer-reviewed evolutionary biologists.
Did he speak to them or just use his own interpretations of their work? If he mis-interpreted their work then . . .
Natural selection does not operate by chance.
Sure it does. Just because the probability of being eliminated is not equal doesn’t mean it isn’t driven by chance
Too bad the entire field of evolutionary biology disagrees with you. Maybe you should deal with that eh?
See this is why several of us are trying to figure out what kind of probability distribution you think P(T|H) is.
Then you should try to figure out what natural selection really is. It is a result that has chance components as inputs. The variation is all chance. What will be inherited is partly chance. What will be eliminated is also partly chance.
So, evolutionary biology should respond to a non-published, non-peer reviewed paper based on your mis-interpretation of natural selection?
You haven’t shown any ability at all…
Let them with eyes, see. Do you even know what a probability density function is? Without looking it up on Wikipedia. Jerad
Jerad:
Well, at least you’ve shown you can read the notation!!
You haven't shown any ability at all... Joe
Jerad:
Our position has to match Dr Dembski’s definition?
LoL! It isn't Dembski's definition. Your position is the chance hypothesis. Deal with it.
Well, I’m not agreeing that my position is spelled out in a 2005 paper written by Dr Dembski that was not peer-reviewed by evolutionary biologists.
Dembski got it from peer-reviewed evolutionary biologists.
Natural selection does not operate by chance.
Sure it does. Just because the probability of being eliminated is not equal doesn't mean it isn't driven by chance.
See this is why several of us are trying to figure out what kind of probability distribution you think P(T|H) is.
Then you should try to figure out what natural selection really is. It is a result that has chance components as inputs. The variation is all chance. What will be inherited is partly chance. What will be eliminated is also partly chance. Joe
keith s #177
Let’s put it this way: If every particle in the observable universe were a coin, you would still need about 10^70 such universes to have enough coins to get you to 500 heads. Now do you see why we can be certain, even after only one 500-flip trial, that something fishy is going on?
I didn't say I wouldn't investigate. I've been very clear about that. BUT you would need the same number of coins to be sure to force any other sequence of 500 Hs and Ts. You could force any sequence you want with the same approach. So, on a single trial of 500 flips, why wouldn't you say something fishy was going on when one of those other sequences shows up? Because some outcomes more closely match your expectation of roughly half Hs and half Ts? But a sequence of exactly alternating Hs and Ts would set off your alarm bells correct? So it's not just the proportion of Hs and Ts that might cause you consternation. It's the 'pattern' in the sequence as well. Let's say I got the following sequence of flips: HHHTHHHHTHHHHHTTTTTTTTTHHTTTTTTHHHHHTTTHHHHHTTTTTTTTHHHHHHHHHTTTTTTTHHHHHHHHHTTT and quite a lot more. Would that one make you go hmmmmmm? It sure would me. I'll tell you why if you wish. It's not because of the clumps of Hs and Ts, randomness is clumpy. How about this one? HHTTTTTTTHTTTTTTTTHHTTTTTTTTHTTTTTTTTHHTTTTTTTTHHHHTTTTTHHHHHHHHHTTTTHHHHH Also sets my pattern recognition bells a'ringing. People are good at spotting patterns but not at spotting randomness. So the patterns become suspicious when we think there shouldn't be any. Jerad
Jerad #147:
As I said before you can pretty much guarantee getting 20 Hs in a row by doing the following: Start with 2 million coins. Flip them all. Take out the ones that come out Ts. Repeat. By the time you get down to 1 coin left that coin should have 20 or more Hs in a row. If it were me I would carefully examine that coin but the mathematics is correct. If you use 2 million fair coins you can force 20 Hs in a row. (Thanks to Neil deGrasse Tyson for the idea.) 500 Hs in a row would take a lot more coins obviously. I’ll leave it up to the readers to figure out how many.
Let's put it this way: If every particle in the observable universe were a coin, you would still need about 10^70 such universes to have enough coins to get you to 500 heads. Now do you see why we can be certain, even after only one 500-flip trial, that something fishy is going on? keith s
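A quick arithmetic check of the 10^70 figure, assuming the commonly cited rough estimate of about 10^80 particles in the observable universe:

```python
PARTICLES_PER_UNIVERSE = 1e80   # commonly cited rough estimate for the observable universe
COINS_NEEDED = 2 ** 500         # coins needed so one survives 500 rounds of "keep only the heads"

universes = COINS_NEEDED / PARTICLES_PER_UNIVERSE
print(f"Universes' worth of coins required: {universes:.2e}")   # about 3.3e+70
```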
Joe #171
So P(T|H) is just a conditional probability function- that is the probability of T given H, where H are the chance hypotheses that our opponents need to but cannot provide.
Well, at least you've shown you can read the notation!! Jerad
Joe #173, 174
Jerad, It doesn’t matter that you didn’t come up with the formula. Your position is H regardless if you like it or not.
What? Our position has to match Dr Dembski's definition? Really?
You evos have no shame and no clue. How entertaining is that!?
Well, I'm not agreeing that my position is spelled out in a 2005 paper written by Dr Dembski that was not peer-reviewed by evolutionary biologists.
Give me an example of the kind of chance hypothesis Dr Dembski is referring to.
Natural selection, drift- any differing accumulations of genetic accidents
Natural selection does not operate by chance. See this is why several of us are trying to figure out what kind of probability distribution you think P(T|H) is. Jerad
Give me an example of the kind of chance hypothesis Dr Dembski is referring to.
Natural selection, drift- any differing accumulations of genetic accidents Joe
Jerad, It doesn't matter that you didn't come up with the formula. Your position is H regardless if you like it or not. You evos have no shame and no clue. How entertaining is that!? Joe
Zach #168
If by H, they mean any proposed hypothesis, including evolution, then the calculation is not only intractable, but it is entailed also in phi_S, meaning the terms are no longer independent.
I hate it when that happens. :-) Jerad
So P(T|H) is just a conditional probability function- that is the probability of T given H, where H are the chance hypotheses that our opponents need to but cannot provide. Joe
Joe #166, 167, 169
No Jerad, I am sick of discussing this with people who are obviously on some moronic agenda.
Just trying to see if you understand Dr Dembski's paper. I can see that you did eventually copy and paste a pertinent paragraph.
It is up to YOU to provide H- period. You can’t and you want to try to blame us.
We're not the ones that came up with the formulation!! Give me an example of the kind of chance hypothesis Dr Dembski is referring to.
P(T|H) is just ONE of several formulas by Dembski. It is not about calculating CSI. It is about using specification to provide a design inference.
P(T|H) is a conditional probability.
What I said: P= probability. T= the rejection region (of an event, object or structure) and H are the chance hypotheses. Your position = H And Jerad sed I was wrong…
Dr Dembski defines H in his paper. So I guess he provided it eh? And on page 18 Dr Dembski uses a different definition for T. Do you think they are equivalent? Jerad
More formally, the problem is to justify a significance level α (always a positive real number less than one) such that whenever the sample (an event we will call E) falls within the rejection region (call it T) and the probability of the rejection region given the chance hypothesis (call it H) is less than α (i.e., P(T|H) < α), then the chance hypothesis H can be rejected as the explanation of the sample.
What I said: P= probability. T= the rejection region (of an event, object or structure) and H are the chance hypotheses. Your position = H And Jerad sed I was wrong... Joe
Jerad: Find me where in Dr Dembski’s paper it says that someone has to ‘provide’ the chance hypothesis. Absent anything else, we might suppose a uniform probability distribution, which is what kairosfocus and others who mess with it seem to be doing. We would appreciate it if they were explicit. If so, we can discuss William's Victorian house. If by H, they mean any proposed hypothesis, including evolution, then the calculation is not only intractable, but it is entailed also in phi_S, meaning the terms are no longer independent. Zachriel
P(T|H) is just ONE of several formulas by Dembski. It is not about calculating CSI. It is about using specification to provide a design inference. Joe
No Jerad, I am sick of discussing this with people who are obviously on some moronic agenda. It is up to YOU to provide H- period. You can't and you want to try to blame us. Joe
Joe #163
Jerad, read the paper and stop being such a child. And you can’t provide the chance hypothesis. That is the whole point.
I have read the paper. I understood it. You don't seem to. You don't know what T stands for. You don't know what the '|' in P(T|H) stands for. Find me where in Dr Dembski's paper it says that someone has to 'provide' the chance hypothesis. Jerad
KF #160
I have now pointed out enough times that the transformation by log reduction allows us to access direct and statistical measures of information. Which, we do routinely observe, e.g. most readily in DNA and proteins.
I am specifically interested in why you chose to replace P(T|H) in Dr Dembski's formulation with another term.
At this point, I have to view the continued harping on any number of cogently answered talking points (including the continued misrepresentation of the design inference process on signs, through rain fairy talking points and whatnot) as question-begging agit-prop message dominance tactics, that have nothing to do with serious thought on the merits, an important matter.
I'm not sure you've understood and interpreted Dr Dembski's work correctly.
As for Joe’s point, he is right to point out that the chance-driven theories do need to logically justify, articulate and empirically warrant their claims.
I don't believe you've properly interpreted Dr Dembski's work on this.
Let us not forget, that starts with the root of the tree of life, origin of living cells in Darwin’s pond or the like environment driven by the physics and chemistry at work. We do know a fair amount about that, and as design thinkers since Thaxton et al have pointed out, the thermodynamics and chemical kinetics are not favourable to blind chance and mechanical necessity coming even close for putting together life forms. That is the reason why Orgel and Shapiro came to mutual ruin some years ago.
What does this have to do with P(T|H)?
The only reasonable, empirically warranted answer for the FSCO/I in the living cell, the root of the tree of life, is design. Only design — on trillions of examples — is empirically warranted as a causal source of FSCO/I. And if you want to play at the game of dismissing that reality, I again point to a world around us full of functionally organised complex entities where the functionality arises from interaction per a Wicken wiring diagram. The 6500 C3 reel is a case in point, and using the nodes-arcs form of the wiring diagram to generate 3-d meshes, the gear train is a further subset that by itself is another case.
Many, many, many people disagree with you.
If you want to push the talking point that oh life forms reproduce, chemicals don’t. And that is where you must start.
Can we not change the subject? Let's make sure we're clear about P(T|H) first.
Indeed, as part of the ongoing, two years standing unanswered darwin essay challenge, you need to address the origin of a code and algorithm using von Neumann kinematic self replicator joined to gated, encapsulated, metabolising automata, as an integral part of the claimed blind chance and mechanical necessity origin of life assertions.
And that means I can't ask you about P(T|H)?
Where, I beg to remind you that Paley in 1804, in Ch II of his Nat Theol, moved on to the time keeping self-replicating watch, and highlighted that the origin of that system is itself a major challenge and is credibly a further instance of artifice and design. This part of his argument of course has been generally not brought up in debates and assertions by darwinists over the past 150 years. But it is material, to the point that the omission looks suspiciously like a strawman tactic.
I'm not interested in debating Paley's argument actually.
So, right from the root, FSCO/I points to the viability of design as a vera causa plausible explanation of life and the tree of life insofar as a tree is a valid representation.
So, when can we get back to P(T|H)?
When we move on to chance variation plus culling by differential reproductive success leading to descent with modification held to incrementally account for the tree of life above the root, that too runs straight into major challenges.
A worthy discussion but some other time please.
Notice, something vital: the vaunted natural selection is not a creative source of information, but instead it describes the after the fact loss of information by culling out varieties that go extinct as they are sufficiently less viable in niches to die off.
Mutations give and mutations take away.
This means that the source of creative information is chance variations. This is held to incrementally, slowly, climb a tree pattern across a continent of viable forms, to get to the branching tree of life.
Sounds pretty good to me.
While that is commonly taken for granted, it is in fact poorly founded. FSCO/I naturally leads to islands of function; as the simple exercise of putting 6500 c3 reel parts in a bag and shaking to explore the space of clumped configs. Predictably it will fail to find the ones that make a viable reel. Too deeply isolated for the search resources to credibly chance upon it by a random walk or a scattered dust or a combination.
But, descent with modification does NOT lead to islands of function. We've had this discussion before.
So, it is utterly unsurprising to see how systematically, after 250k+ fossil species and millions of specimens in museums with billions in the field, we have not got the many, many, many missing links that mark the key branching points in the tree. A few circumpolar species and the like are not enough, starting from the Cambrian revolution and origin of major body plans onwards.
Good thing we've got other lines of evidence then eh?
Where, at foundation, after trillions of observed cases of origin of FSCO/I and many attempts, there is no current observational evidence that reasonably points to blind chance and mechanical necessity being an actual cause of FSCO/I.
I disagree. Anyway, that's enough of my responding to your off-topic lecture. We were specifically asking you about P(T|H). You chose to ignore those fundamental and fairly basic queries. So noted. Jerad
Jerad, read the paper and stop being such a child. And you can't provide the chance hypothesis. That is the whole point. Joe
Joe #161
P= probability. T= the rejection region (of an event, object or structure) and H are the chance hypotheses. Your position = H
Are you sure about T? What about the '|' between the T and H. How can I 'provide' the chance hypothesis? Jerad
Jerad:
I tell you what, just to be sure we’re understanding each other, why don’t you explain what the expression P(T|H) (as written by Dr Dembski) means. You’ve taken some math classes and have a high IQ so it should be easy. After that we can work on ‘providing’ H.
P= probability. T= the rejection region (of an event, object or structure) and H are the chance hypotheses. Your position = H Joe
Jerad et al: I have now pointed out enough times that the transformation by log reduction allows us to access direct and statistical measures of information. Which, we do routinely observe, e.g. most readily in DNA and proteins. At this point, I have to view the continued harping on any number of cogently answered talking points (including the continued misrepresentation of the design inference process on signs, through rain fairy talking points and whatnot) as question-begging agit-prop message dominance tactics, that have nothing to do with serious thought on the merits, an important matter. As for Joe's point, he is right to point out that the chance-driven theories do need to logically justify, articulate and empirically warrant their claims. Let us not forget, that starts with the root of the tree of life, origin of living cells in Darwin's pond or the like environment driven by the physics and chemistry at work. We do know a fair amount about that, and as design thinkers since Thaxton et al have pointed out, the thermodynamics and chemical kinetics are not favourable to blind chance and mechanical necessity coming even close for putting together life forms. That is the reason why Orgel and Shapiro came to mutual ruin some years ago. The only reasonable, empirically warranted answer for the FSCO/I in the living cell, the root of the tree of life, is design. Only design -- on trillions of examples -- is empirically warranted as a causal source of FSCO/I. And if you want to play at the game of dismissing that reality, I again point to a world around us full of functionally organised complex entities where the functionality arises from interaction per a Wicken wiring diagram. The 6500 C3 reel is a case in point, and using the nodes-arcs form of the wiring diagram to generate 3-d meshes, the gear train is a further subset that by itself is another case. If you want to push the talking point that oh life forms reproduce, chemicals don't. And that is where you must start. Indeed, as part of the ongoing, two years standing unanswered darwin essay challenge, you need to address the origin of a code and algorithm using von Neumann kinematic self replicator joined to gated, encapsulated, metabolising automata, as an integral part of the claimed blind chance and mechanical necessity origin of life assertions. Where, I beg to remind you that Paley in 1804, in Ch II of his Nat Theol, moved on to the time keeping self-replicating watch, and highlighted that the origin of that system is itself a major challenge and is credibly a further instance of artifice and design. This part of his argument of course has been generally not brought up in debates and assertions by darwinists over the past 150 years. But it is material, to the point that the omission looks suspiciously like a strawman tactic. So, right from the root, FSCO/I points to the viability of design as a vera causa plausible explanation of life and the tree of life insofar as a tree is a valid representation. When we move on to chance variation plus culling by differential reproductive success leading to descent with modification held to incrementally account for the tree of life above the root, that too runs straight into major challenges. Notice, something vital: the vaunted natural selection is not a creative source of information, but instead it describes the after the fact loss of information by culling out varieties that go extinct as they are sufficiently less viable in niches to die off.
This means that the source of creative information is chance variations. This is held to incrementally, slowly, climb a tree pattern across a continent of viable forms, to get to the branching tree of life. While that is commonly taken for granted, it is in fact poorly founded. FSCO/I naturally leads to islands of function; as the simple exercise of putting 6500 c3 reel parts in a bag and shaking to explore the space of clumped configs. Predictably it will fail to find the ones that make a viable reel. Too deeply isolated for the search resources to credibly chance upon it by a random walk or a scattered dust or a combination. So, it is utterly unsurprising to see how systematically, after 250k+ fossil species and millions of specimens in museums with billions in the field, we have not got the many, many, many missing links that mark the key branching points in the tree. A few circumpolar species and the like are not enough, starting from the Cambrian revolution and origin of major body plans onwards. Where, at foundation, after trillions of observed cases of origin of FSCO/I and many attempts, there is no current observational evidence that reasonably points to blind chance and mechanical necessity being an actual cause of FSCO/I. (That, I suspect, is a big reason why so much effort was futilely invested in trying to dismiss the validity of the concept and/or to suggest that it cannot be quantified. Both have collapsed, and for years the facts have been on the table, but were brushed aside. Now, we have it right there from 1973 and 1979, not from those despised design thinkers, but from Orgel and Wicken. In the case of Orgel, he specifically used the common method of state identification and measuring the y/n chain of q's to specify state. Much as Shannon did in his well known 1948 paper.) Blind chance and mechanical necessity cannot be shown to be an observed cause of FSCO/I. It fails the vera causa test. So, the attempt to insist that the tree of life is a proof of such blind chance and mechanical necessity producing life and its forms from microbes to man, has failed to ground its causal force claims on actual observations adequate to explain a critical feature of life forms, FSCO/I. Which precise feature is a routine product of design. All the huffing and puffing about injecting supernatural into science, and on despising theism and denigrating theists collapses. The root issue is that we have an imposition of a priori materialism on science, and it cannot pass the empirical observation, vera causa test. Question-begging, in the teeth of where inductive logic points. FSCO/I is a reliable sign of design, inductively and on analysis of the challenge of sparse needle in haystack search, given the scope of config spaces vs atomic and temporal resources. Issues that are routinely derided and brushed aside, but have not been adequately answered. Until blind chance and mechanical necessity can pass the vera causa test of being observed to cause FSCO/I, it is not a reasonable candidate to explain FSCO/I as observed in traces from the past of origins of life and of body plans. KF kairosfocus
Joe #158
Clueless until the end, eh. Your position is the chance hypothesis, duh. And as such needs to provide H or admit that it has nothing. Oops we already know that.
I tell you what, just to be sure we're understanding each other, why don't you explain what the expression P(T|H) (as written by Dr Dembski) means. You've taken some math classes and have a high IQ so it should be easy. After that we can work on 'providing' H. Jerad
Jerad:
P(T|H) is not part of any supposition of mine.
Clueless until the end, eh. Your position is the chance hypothesis, duh. And as such needs to provide H or admit that it has nothing. Oops we already know that. Joe
LoL! @ keith:
For literally years I’ve been pointing out that he can’t calculate P(T|H) correctly, because like Dembski and everyone else, he doesn’t have the required knowledge.
keith admits that unguided evolution is based on our ignorance. Nice own goal, again. Joe
keith s - 144
For literally years I’ve been pointing out that he can’t calculate P(T|H) correctly, because like Dembski and everyone else, he doesn’t have the required knowledge.
I rather suspect he doesn't really understand what P(T|H) means. I also suspect that the reason that Dr Dembski no longer pushes his notion is that he realised it was a dead end. I know that KF starts off citing Dr Dembski's original expression but generally, on this forum, he uses a modified version where P(T|H) does not appear. The blog post you linked to is one of his endless regurgitations of his reasoning for doing so. He's got the same thing up on a website somewhere and he usually just copies and pastes it. I have read through it many times. His ability to lay out a mathematical case is sadly lacking in clarity. Jerad
keith s - #153 I completely agree that one coin coming up heads 500 times is a row is highly suspicious. Clearly that deserves inspection. As I've said many times: if it happened to me I'd check things out. As you say flipping one coin 500 times is really 500 trials. I think my statement: try it again is better applied to a situation where 500 coins are flipped. That could be thought of as one trial. Sort of. Regardless, I completely agree that scrutiny is required. I am not saying, in either case, that I wouldn't be suspicious and highly skeptical. I would examine all aspects of the case as thoroughly as possible. I would not refer to that skepticism as inferring design however as that phrase has connotations which are best avoided (and unnecessary). We're not really disagreeing. We're just saying things a bit differently. And I'm trying to show the lack of rigour in KF's attempts at his kind of design inference. Material flaws or plain, old fraud are much more likely to be explanations than some undetected, undefined, non-material intelligent designer. It is interesting to think of the scenario of flipping a couple million fair coins and throwing out the ones that come up tails. Repeat. At the end you will get a fair coin that came up heads quite a few times in a row. No sleight of hand. No cheating. Nothing fishy at all. Just probabilities. Nothing to investigate. No design to infer. Jerad
Jerad,
P(T|H) is not part of any supposition of mine. It was part of a paper written by Dr Dembski. KF has eliminated that term in his restatement of Dr Dembski’s conjecture. It’s not up to us to explain why he did that or to help by ‘providing’ H. Why doesn’t KF just work with P(T|H)?
KF does use P(T|H) in his "chi_500" expression. See here, for example. For literally years I've been pointing out that he can't calculate P(T|H) correctly, because like Dembski and everyone else, he doesn't have the required knowledge. keith s
Jerad,
I haven’t got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I’m not saying I wouldn’t be suspicious. As I said my first reaction would be: do it again.
My point is that we'd have good reason to conclude that something fishy was going on even if we couldn't try it again. One trial is plenty. Here are a couple of ways of looking at it. First, the 500-flip number is arbitrary. You're thinking of it as a single trial, but you could just as easily think of it as two 250-flip trials, or four 125-flip trials. You might be falling prey to a cognitive bias that says "I can't conclude anything on the basis of a single trial", but what about four consecutive trials in which 125 consecutive flips come up heads each time? That's pretty impressive. Would you be merely "suspicious" after seeing that? The way you frame the problem could be making a difference in how conclusive the evidence seems. A second way of looking at it is to remember that we are really comparing hypotheses. The first hypothesis is "The coin and the flips are fair, and I just happened to get a special-seeming pattern by pure luck." The second hypothesis is "Something fishy is going on that caused me to get a special-seeming pattern." We're trying to determine the probability that something fishy is going on -- in other words, that the coin and the flips aren't fair. We know that under the fairness assumption, the probability of getting a "special" pattern is only n in 2^500, where n is the number of patterns that you would consider special. Under various unfairness assumptions, the probability of getting a special pattern becomes as high as 1 (in the case of a two-headed coin, for example). So unless you've fairly exhaustively eliminated these unfairness possibilities (by careful inspection and testing of the coin, the flipping apparatus, the room, etc.) the probability of unfairness remains higher than the probability of getting a special pattern by pure chance. If you sit down and watch someone flip 500 heads in a row, you can be virtually certain that something fishy is going on. No second try needed. keith s
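keith s's comparison of the "fair" and "fishy" hypotheses can be put into a toy Bayesian form. Every number below is an illustrative assumption (a generous million "special" patterns, a deliberately tiny prior on trickery), not a measurement; the point is only that the posterior is driven entirely by the 2^500 term.

```python
# Toy comparison of "fair" vs "fishy" after observing a special 500-flip pattern.
P_SPECIAL_GIVEN_FAIR = 1_000_000 / 2 ** 500   # allow a million "special" patterns under fairness
P_SPECIAL_GIVEN_FISHY = 0.5                   # assume rigged setups often force a special pattern
PRIOR_FISHY = 1e-9                            # treat trickery as wildly unlikely up front

posterior_fishy = (P_SPECIAL_GIVEN_FISHY * PRIOR_FISHY) / (
    P_SPECIAL_GIVEN_FISHY * PRIOR_FISHY + P_SPECIAL_GIVEN_FAIR * (1 - PRIOR_FISHY)
)
print(f"P(fishy | special pattern) = {posterior_fishy}")   # indistinguishable from 1.0
```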
It is rather too bad that KF has decided not to respond to the requests posted on this thread. Let's hope this is down to time commitments on his part. Jerad
#150 P(T|H) is not part of any supposition of mine. It was part of a paper written by Dr Dembski. KF has eliminated that term in his restatement of Dr Dembski's conjecture. It's not up to us to explain why he did that or to help by 'providing' H. Why doesn't KF just work with P(T|H)? Aren't you interested? I would think someone with an investigative frame of mind would want to know. Jerad
LoL! @ Jerad- Your position can't provide the "H", and it can't.
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
I am so glad that you are not an investigator Joe
#148 Keep asking KF about P(T|H). He won't answer but it's good to keep the question active. As I said, if I got 500 Hs in a row I'd investigate. But because that sequence is just as likely as any other sequence if you don't find evidence of tampering with the fairness of the trial then try it again!! Jerad
Jerad: (KF’s) restatement of Dr Dembski’s fCSI detection algorithm he did away with P(T|H) Thank you, Jerad. Yes, we understand that CSI has mutated and diversified under relaxed selection. Kairosfocus keeps talking about coin flips and combinatorials, which makes it look like a standard probability distribution. Without the extended prose, what is kairosfocus's formulation? Jerad: Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design It's sufficient to indicate some underlying cause. Whether that's design or a two-headed coin, or the effects of a large magnet, requires further investigation. Jerad: I think everyone agrees that all non-designer arguments must be examined and eliminated (if possible) before making a design inference. More accurately, you compare hypotheses by testing them to determine which is likely true. You don't have to exhaustively eliminate natural causes if you have evidence of design. They are all just competing hypotheses. Zachriel
#146
Note the 10^150? That’s the reason IDers chose 500 coins – it is close to their oft repeated Universal Probability Bound.
I am aware of their 'limit'. But I'm just arguing against the 'logic' of inferring design IF one was to flip 500 (or any number really) of coins once and getting all Hs. You can't 'magic' a designer out of an improbability/probability. Everyone admits it is possible to get all Hs on one trial. I think everyone agrees that all non-designer arguments must be examined and eliminated (if possible) before making a design inference. But it would be interesting to ask: Would it be fair to infer design if I got all Hs with 10 coins? 20? 100? Where exactly is the line? As I said before you can pretty much guarantee getting 20 Hs in a row by doing the following: Start with 2 million coins. Flip them all. Take out the ones that come out Ts. Repeat. By the time you get down to 1 coin left that coin should have 20 or more Hs in a row. If it were me I would carefully examine that coin but the mathematics is correct. If you use 2 million fair coins you can force 20 Hs in a row. (Thanks to Neil deGrasse Tyson for the idea.) 500 Hs in a row would take a lot more coins obviously. I'll leave it up to the readers to figure out how many. Jerad
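Jerad's elimination procedure is easy to simulate. A minimal sketch, using a pseudorandom generator in place of two million real coins; the final survivor typically shows a run of about 20-21 heads, i.e. roughly log2(2,000,000).

```python
import random

random.seed(1)            # any seed; pseudorandom flips stand in for real coins
coins = 2_000_000         # the starting pool of coins
rounds = 0

while coins > 1:
    # Flip every remaining coin; discard the tails, keep the heads.
    coins = sum(random.random() < 0.5 for _ in range(coins))
    rounds += 1

print(f"Rounds of flipping until at most one coin remained: {rounds}")   # typically ~20-21
```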
Jerad @ 143,
We don’t disagree about the mathematics. I don’t see a ‘paradox’. I think I’m just not that interested in the psychology of the situation. Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design @ 145 That could be interesting. I haven’t got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I’m not saying I wouldn’t be suspicious. As I said my first reaction would be: do it again.
The formula for the number of required tosses is 2*(2^N – 1), where N is the number of consecutive heads, so for 500 heads in a row you need about 6.5*10^150 tosses. Note the 10^150? That's the reason IDers chose 500 coins - it is close to their oft repeated Universal Probability Bound. Me_Think
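Me_Think's formula is the standard expected number of fair-coin flips before the first run of N consecutive heads; a quick sketch to check the quoted figure (and the 20-heads example earlier in the thread):

```python
def expected_flips(n):
    """Expected number of fair-coin flips before the first run of n consecutive heads."""
    return 2 * (2 ** n - 1)

print(expected_flips(20))                      # 2097150, i.e. about the 2 million coins above
print(f"{float(expected_flips(500)):.2e}")     # about 6.55e+150, the quoted 6.5*10^150
```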
#144
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
Sure it is, but we can talk about that tomorrow. It’s bedtime for me.
That could be interesting. I haven't got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I'm not saying I wouldn't be suspicious. As I said my first reaction would be: do it again. Jerad
Jerad,
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
Sure it is, but we can talk about that tomorrow. It's bedtime for me. Good night. keith s
#141
I address that issue in my OP. The set of ‘special’ sequences is not the same for everybody, but as long as the set is small enough, it is significant when you flip a sequence that is already special to you. Take a look at the OP and the comments. I go into quite a bit of detail about this.
We don't disagree about the mathematics. I don't see a 'paradox'. I think I'm just not that interested in the psychology of the situation. Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design. Jerad
#138 I agree that part of the real question is: is the game rigged? And a single roll or trial is not enough to determine that. Flipping 500 coins and getting 500 Hs is NOT a good enough reason to infer design. If I flipped 500 coins and got 500 Hs the first thing I would do is: DO IT AGAIN!! And again. And again. And again. #139 Looks like Eric and KF have left that particular battle ground. Not saying this applies to any one here or there but it always amuses me to listen to theologians who can analyse sacred texts down to the smallest minutiae but cannot follow a basic mathematical argument that goes against their belief structure. I can understand the reluctance of accepting that we (individuals and as a species) are not 'special' or 'determined'. That doesn't feel right because our whole experience of the world is from our individual perspective. We literally cannot see the world from another point of view without great difficulty. I suppose that's why out of body experiences can be so transforming. But I don't understand why it's so hard for some to grasp the immense power of cumulative selection. It's clear from human-directed breeding programs (of dogs and brassicas for example) that there is a lot of natural variation thrown up by natural reproduction. And when you filter that through generations of selection . . . Jerad
Jerad,
BUT I could take the sequence of 500 Hs and Ts that my girlfriend and I generated on our first date (not really, I’m not that boring) as my special, recognisable sequence and I could say that the chances of any one else coming up with that sequence is nigh well onto impossible. I could claim that it signifies a special, once-in-a-universe moment never to be repeated.
I address that issue in my OP. The set of 'special' sequences is not the same for everybody, but as long as the set is small enough, it is significant when you flip a sequence that is already special to you. Take a look at the OP and the comments. I go into quite a bit of detail about this. keith s
#138
I actually agree with the IDers on this one. While it’s true that 500 heads is no more or less probable than any other particular sequence, it is special, precisely because it belongs to a small set of sequences that we regard as special.
BUT I could take the sequence of 500 Hs and Ts that my girlfriend and I generated on our first date (not really, I'm not that boring) as my special, recognisable sequence and I could say that the chances of any one else coming up with that sequence is nigh well onto impossible. I could claim that it signifies a special, once-in-a-universe moment never to be repeated. But it doesn't make it special or different or significant except to me. 'Regarding' some sequences as 'special' is just psychology. It doesn't change the mathematics. Humans like recognisable patterns. But, in this case, the mathematics doesn't care. A pebble you find brighter and shinier is still a pebble. Jerad
Jerad #137:
In fact, has any ID proponent calculated P(T|H) for any significant example? Meaning something other than situations we know can be analysed as purely deterministic, like coin tossing.
Eric Anderson is dancing around that question right now on another thread. keith s
Jerad #133:
500 Hs is no different probabilistically or mathematically from any other sequence of 500 Hs and Ts. It only looks ‘special’ but it’s just as random as anything else.
Jerad, I actually agree with the IDers on this one. While it's true that 500 heads is no more or less probable than any other particular sequence, it is special, precisely because it belongs to a small set of sequences that we regard as special. I did an OP on this last year at TSZ: A resolution of the ‘all-heads paradox’ keith s
To be fair Zachriel and Pachyaena, I don't think KF has posted any comments in hours and hours. Not that I'm expecting him to tell you if P(T|H) is a standard probability distribution (a fairly elementary question). In his (KF's) restatement of Dr Dembski's fCSI detection algorithm he did away with P(T|H) which I take to mean he was unable to compute it for his examples. In fact, has any ID proponent calculated P(T|H) for any significant example? Meaning something other than situations we know can be analysed as purely deterministic, like coin tossing. Jerad
KF, pardon but your predictable avoidance of Zachriel's question is sadly telling. Kindly do better. Is P(T|H) a standard probability distribution? Yes or no? Pachyaena
kairosfocus: Z simply refuses to acknowledge that ... We simply asked a question. Is P(T|H) a standard probability distribution? The way you treat it certainly looks like a standard probability distribution. Please start your answer with a yes or no if possible. Zachriel
#131
Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root — OOL — to the main branches and onwards the twigs
Let us not forget that no one in the evolution camp claims to have anything other than a guess regarding the origin of life problem. Let us also not forget that the ID community also cannot be specific about what the original 'life' on earth looked like. There is a difference between the camps though: the evolutionists are trying. I don't see anyone in ID seriously trying. Perhaps because no one is yet clear what ID is saying regarding even the when of design. Answering once or many times would be a good start. Jerad
#131
500 H is patently distinguishable and separately describable, comes from a set of similar cases such as 500 T etc, and T is immensely smaller than G.
500 Hs is no different probabilistically or mathematically from any other sequence of 500 Hs and Ts. It only looks 'special' but it's just as random as anything else.
A blind chance search hoping to find something from T or just happening on T is not credible. But, as design is a known cause of high contingency, it is easily seen save to the selectively hyperskeptical that while say an outcome from G is readily explained on chance, 500 H is best explained on intelligently directed configuration.
Again, as all given sequences of Hs and Ts are equally probable there is no justification for saying that the occurrence of a particular sequence or a group of sequences is 'better' explained by design. AND, as you cannot rule out a chance occurrence you are unjustified in making a design inference.
Beyond, Z simply refuses to acknowledge that — three years ago when the issue was brought up — it was shown that simply carrying through the log reduction of the Dembski 2005 Chi metric expression, we see that it is an info beyond a threshold metric.
I am saying that you modified Dr Dembski's original derivation and I have yet to see his approval of your restatement. If he thought you wouldn't need to calculate P(T|H) then he would have left it out himself. But he didn't.
Of course, none of this will make any impression on the determined objectors. We are dealing with zero concession selective hyperskepticism and the agit-prop of polarisation and message dominance as I well recall from dealing with Marxists decades ago.
I disagree with your interpretation of some mathematical issues, that is all.
The answer is to simply stand your ground and lay out a reasonable case.
I am stating the mathematical truths as I see them.
Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root — OOL — to the main branches and onwards the twigs. If that could be done it would shatter design theory and the design inference on FSCO/I as regards the world of life.
If you recall I did make a brief attempt. Anyway, you're changing the subject. I'm now just talking about a specific point of probability and making the design inference.
Let the record stand clear: no serious attempt after two and more years. That speaks volumes on the true state of the matter.
If you didn't agree with the popular books on evolution written by Drs Dawkins, Miller, Coyne, Shermer and Carl Zimmer then I don't see how I could possibly change your mind. Jerad
PS: Remember, Mung just extended what we have from Orgel, drawing out how what he spoke of was indeed FSCO/I, with a metric for info added in. On seeing that, I did not notice any significant acknowledgement on the part of those who were so hotly contending the opposite. Likewise, when the strawman misrepresentation of the design inference in the rain fairies etc talking points was made, there was zero concession, zero responsiveness; cf the just linked. Take that pattern -- there are many similar cases, with KS's black knight tactics a particularly rich motherlode -- as a yardstick. kairosfocus
F/n: Predictably . . . 500 H is patently distinguishable and separately describable, comes from a set of similar cases such as 500 T etc, and T is immensely smaller than G. A blind chance search hoping to find something from T or just happening on T is not credible. But, as design is a known cause of high contingency, it is easily seen save to the selectively hyperskeptical that while say an outcome from G is readily explained on chance, 500 H is best explained on intelligently directed configuration. Beyond, Z simply refuses to acknowledge that -- three years ago when the issue was brought up -- it was shown that simply carrying through the log reduction of the Dembski 2005 Chi metric expression, we see that it is an info beyond a threshold metric. Taking that as a base we can note that information is readily measurable by noting say the string of Y/N q's to specify the wiring diagram config for relevant function. Or, if you wish, statistical studies can be used. That is info is readily quantified from observations. And in the case of living forms, say the variability of AA's in known functional, fold-stable key-lock fitting proteins allows us to infer to the statistical distributions for the functional state. The 20-state, one of end gives 4.32 bits per AA locus, if we go as loose as hydrophilic/hydrophobic that gives us 1 bit. (Which, on average is way too loose.) Nevertheless take that, take 100 AA's not the 300 or so that is typical, and say only 100 proteins are required for a simplistic early cell life form. That's 10 kbits of info, an order of magnitude below what reasonable genome sizes say. But it matters not, 10 kbits is a factor of ten beyond the 500 - 1,000 bit threshold for FSCO/I. Of course, none of this will make any impression on the determined objectors. We are dealing with zero concession selective hyperskepticism and the agit-prop of polarisation and message dominance as I well recall from dealing with Marxists decades ago. The answer is to simply stand your ground and lay out a reasonable case. Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root -- OOL -- to the main branches and onwards the twigs. If that could be done it would shatter design theory and the design inference on FSCO/I as regards the world of life. Let the record stand clear: no serious attempt after two and more years. That speaks volumes on the true state of the matter. KF kairosfocus
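The amino-acid arithmetic in the comment above is easy to reproduce. The per-residue figures and the counts (100 residues, 100 proteins) are kairosfocus's stated assumptions, not measurements; the sketch just redoes his multiplication.

```python
from math import log2

bits_per_residue_strict = log2(20)   # 20 possible amino acids per locus: ~4.32 bits
bits_per_residue_loose = 1.0         # the deliberately loose hydrophilic/hydrophobic figure

residues_per_protein = 100           # assumed short proteins (typical is ~300)
proteins_assumed = 100               # assumed minimal early cell

total_bits = bits_per_residue_loose * residues_per_protein * proteins_assumed
print(f"log2(20) = {bits_per_residue_strict:.2f} bits per residue")
print(f"Loose total: {total_bits:.0f} bits (vs the claimed 500-1,000 bit threshold)")
```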
kairosfocus: any single outcome of a toss of 500 coins faces odds of 1 in 3.27*10^150, as the latter is the number of possible outcomes, W. chi = – log2 [ 10^120 * phi_S(T) * P(T|H) ] Is P(T|H) a probability distribution? Zachriel
My prediction has been confirmed. KF, the "substance" that matters is whether you IDers can and will support your claims about CSI, dFSCI, FSCO/I, IC, etc., in life forms and all of the other things in nature that "you and ilk" claim contain CSI, dFSCI, FSCO/I, IC, etc. Fishing reels, Shakespearean sonnets, and the other man-made things that "you and ilk" trot out are already known to be designed. Pachyaena
#123
Yes, any single outcome of a toss of 500 coins faces odds of 1 in 3.27*10^150, as the latter is the number of possible outcomes, W.
The set of all possible outcomes, W, from tossing a coin 500 times and recording the sequence of Hs and Ts has 2^500 elements in it. Agreed.
What you (in the teeth of repeated correction for literally years to my certain knowledge) insistently leave out is clustering of patterns of outcomes; something that is a commonplace of say statistical thermodynamics used to for instance analyse why the 2nd law of thermodynamics obtains. Indeed, my favourite intro to stat thermo-d, L K Nash, discusses just the example of coins, though it goes for 1,000.
You want to assign some special status to certain classes of outcomes. The only thing that makes some groups of outcomes more likely is the number of outcomes you have put in the groups. Whether or not there is a pattern you 'see' is irrelevant.
As the binomial theorem will instantly show, the overwhelming bulk of coin toss outcomes will be 500 coins in a near 50-50 H/T distribution, in no particular pattern, i.e. gibberish, let us call this subset G. By contrast, let us define 500 H as E in a set T of simply describable, relatively rare patterns such as 500 H, 500 T, alternating H/T, and the close like. The proper comparison is E to G, or else even T to G.
Depending on what you mean by 'near'. Again, since all possible outcomes are equally likely the probability of getting an outcome in a particular group or cluster or class depends only on how many outcomes are in that group/cluster/class. It does not depend on any 'meaning'. Also, because you have admitted that it is possible that you could get, say, 500 Hs by chance then you cannot ascribe such an outcome to design. You have to exhaust all the possible non-design explanations before you make that inference.
And the odds of being in G rather than T are utterly overwhelming.
Only because of the relative sizes of G and T.
Where, on tossing a set of 500 coins, using the 10^57 atoms of the sol system as tosser-observers for as many sets of coins, 10^14 times per s for 10^17 s, one would sample as one straw to a cubical haystack comparably thick as our galaxy. Under those circumstances, zone T (and E in it . . . ) is effectively unobservable by blind chance coin toss.
Again, only because of the relative sizes of the groupings you've defined.
Which is the whole point of Dembski’s now longstanding subset T in W discussion of Complex Specified Information in NFL.
Didn't Dr Dembski also say that you have to rule out all non-design explanations? He also said you have to calculate P(T|H)?
I predict, on track record, that you will duck, dodge, twist or brush aside and/or studiously ignore this correction.
If you compare groups of outcomes and some groups have many fewer outcomes in them than others then those groups will have much lower probability. But it's you who have picked the groups and therefore affected the relative probabilities. No outcome is more or less likely than any other outcome so any grouping of those outcomes you make is purely arbitrary and no special significance can be assigned to them.
That, would be a breath of fresh air and a sign that we are finally seeing movement beyond the bigotry and dismissive contempt of the blatant no concessions to “IDiots” policy.
Do not put words in my mouth please. I disagree with you, that is all. Jerad
PS: It is already easy to see from just the genomes, that a first cell based life reasonably has 100 k - 1 mn bases, and a new body plan -- to account for cell types, tissues and organs in integrated systems -- 10 - 100+ mns. We could take the two bits per base first rule of thumb, or we could afford to be well below that (which would be implausible, AAs in proteins to achieve fold-function are not THAT flexible). It matters not, OOL and origin of body plans are well beyond what is remotely plausible for blind chance and mechanical necessity on gamut of sol system or observable cosmos on any reasonable blind search; 500 - 1,000 bits. Magically arrived at golden searches that have no observational warrant don't count. Life forms, from first cells to dozens of body plans have but one credible vera causa plausible explanation of wiring diagram, correct component in correct arrangement functionality. Design, intelligently directed configuration. Until you and ilk can provide observational evidence on OOL in a Darwin's pond, vent, comet core etc, and/or for origin of body plans that meets vera causa, that remains the undeniable reality. kairosfocus
kairosfocus @ 123 I agreed with the true statement that the probability of any sequence of coins is the same, and UD has put up posts about 500 coins time and again. I don't see how that leads you to conclude I set up a strawman, to knock it over and claim a tainted rhetorical triumph. I haven't come across even Jerad doing anything of that sort. Me_Think
Pachy, again you have failed to address substance and hope to change the subject; whilst in the above I am manifestly correct despite your dismissal. That is, it is patent that a blind sample of W of feasible scale will reliably observe G not T, for needle in haystack, sparse search reasons. Unfortunately, you have allowed hostility to design thought to blind you to what is obvious to the point of being proverbial. Thus, you inadvertently illustrate the no concession to the point of absurdity problem I highlighted. KF kairosfocus
KF, why should there be concessions to "IDiots" who are constantly wrong? I have another question for you: How much CSI, dFSCI, and FSCO/I is there in a Leptodactylus fallax? Show your work. I predict, on track record, that you will duck, dodge, twist or brush aside and/or studiously ignore the questions. Pachyaena
MT & Jerad: Again and again, you . . . I speak here to J and ilk . . . have set up a strawman, to knock it over and claim a tainted rhetorical triumph. I must speak in such stringent terms, for cause. I explain, mostly for benefit of the onlooker. Yes, any single outcome of a toss of 500 coins faces odds of 1 in 3.27*10^150, as the latter is the number of possible outcomes, W. What you (in the teeth of repeated correction for literally years to my certain knowledge) insistently leave out is clustering of patterns of outcomes; something that is a commonplace of say statistical thermodynamics used to for instance analyse why the 2nd law of thermodynamics obtains. Indeed, my favourite intro to stat thermo-d, L K Nash, discusses just the example of coins, though it goes for 1,000. As the binomial theorem will instantly show, the overwhelming bulk of coin toss outcomes will be 500 coins in a near 50-50 H/T distribution, in no particular pattern, i.e. gibberish, let us call this subset G. By contrast, let us define 500 H as E in a set T of simply describable, relatively rare patterns such as 500 H, 500 T, alternating H/T, and the close like. The proper comparison is E to G, or else even T to G. And the odds of being in G rather than T are utterly overwhelming. Where, on tossing a set of 500 coins, using the 10^57 atoms of the sol system as tosser-observers for as many sets of coins, 10^14 times per s for 10^17 s, one would sample as one straw to a cubical haystack comparably thick as our galaxy. Under those circumstances, zone T (and E in it . . . ) is effectively unobservable by blind chance coin toss. Which is the whole point of Dembski's now longstanding subset T in W discussion of Complex Specified Information in NFL. I predict, on track record, that you will duck, dodge, twist or brush aside and/or studiously ignore this correction. Please, prove me wrong. That, would be a breath of fresh air and a sign that we are finally seeing movement beyond the bigotry and dismissive contempt of the blatant no concessions to "IDiots" policy. It is high time for such a change. But, I am not holding my breath. KF kairosfocus
Jerad @ 121 Yes, the probability of any sequence of coins is the same. I found old threads about it here at UD. That doesn't stop UD from putting up more 500-coin threads. Me_Think
Regarding generating so many heads in a row when flipping a fair coin. Let's suppose you start with one million people all with a fair coin. On each 'pass' you ask everyone to flip their coin and if they get tails they sit down. Just before this process ends you expect to get, by pure chance, someone who flipped about 20 heads in a row. As has been said several times in this thread, all sequences of 500 Hs and Ts are of equal, extremely low probability. The probability of getting all Hs is the same as getting any other specified pattern. But, if you flip a coin 500 times (or flip 500 coins) you get a highly improbable sequence every time. It is only the human mind-set that assigns some special attributes to a sequence of all heads. Barry is wrong; it is possible, albeit highly improbable, to get 500 heads by chance. Just like it's highly improbable, but possible, to get any other specified sequence of Hs and Ts. Jerad
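A quick Python sketch of the million-flippers game described in the comment above (the seed and loop structure are mine; the expected longest run is roughly log2 of the number of players):

```python
import random
from math import log2

random.seed(1)
flippers = 1_000_000        # players, as in the comment above
streak = 0
while flippers > 0:
    # everyone still standing flips once; tails sit down
    flippers = sum(1 for _ in range(flippers) if random.random() < 0.5)
    if flippers > 0:
        streak += 1         # someone has now flipped this many heads in a row

print("longest run of heads seen:", streak)
print("rough expectation, log2(1e6):", round(log2(1_000_000), 1))   # ~19.9
```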
#96 error correction: missing word 'in' at the end of the third question.
Forget the coin flipping. Let’s get real. :) See the below questions, based on an interesting commentary gpuccio posted in another thread:
can we regard the constant flux of information between epigenome-genome-epigenome as the only possible way to correctly describe cell differentiation? Is that flux ever interrupted, from the zygote to the adult being to a new zygote? Which of the following levels does the information reside in?
1 – genome, both coding and non coding
2 – genome methylation
3 – histone modifications
4 – chromatin modifications
5 – transcription factors network
6 – regulatory RNAs (all the various forms)
7 – post-translational modifications
8 – asymmetric mitosis
9 – cell to cell signaling
10 – all of the above
11 – none of the above
how do those different strata interact? are they independent, parallel networks which ensure a supreme redundancy and robustness, or do they work, at least in part, in sequences?
Dionisio
wd400 @ 69 -
Whoops, I know see that Bob O’H already presented the sample with replacement example. And yes, Learned Hand, you have it.
And I took it from Peredur, the son of Evrawk. Bob O'H
HeKS:
First of all, Barry is right in saying that you did not meet the challenge.
Barry seems to have missed my response to the challenge altogether, and I don't want him to be deprived of the opportunity to explain why it fails.
Second, why not link to my actual post so people can decide for themselves whether or not they agree?
Sorry, that was sheer laziness on my part. I don't always provide links for everything.
On the matter of your response to this challenge, Ewert agreed that I was “exactly right” with respect to what would be required to meet the challenge and in my criticism of your use of him as a source (see here).
I'd be very interested to read your conversation with Ewert. Can you ask his permission to share it?
Here’s my extensive post about CSI, which also clears up much of the definitional confusion present in this discussion: https://uncommondesc.wpengine.com.....ent-518656
There certainly is definitional confusion. I invited you repeatedly to provide a reference from the ID literature to support your interpretation. That invitation still stands. R0bb
Barry:
Robb,
Without a chance hypothesis, how do I determine that it’s highly improbable?
*sigh* Never mind R0bb. If all you want to do is play definition derby in response to a straightforward challenge, that is all the answer I need. You’ve got nothing. OK.
You're right. I've got nothing. I'm a loser. But maybe you could humor me and answer the question anyway. R0bb
BA, It's hard to tell exactly what your sneering jabs at wd400 mean. Perhaps you think that by limiting your comments to sarcastic unpleasantries rather than substantive comments you won't be caught out in another error? Obviously it makes it hard to continue a discussion. (That might be a good thing from your perspective; these last few threads have hardly covered you in glory. It's never too late to take a deep breath and try, as kairosfocus would say, to do better. Bydand.) It seems to me—and please tell me if I'm wrong—that you're claiming these are opposite statements: A: "In 58 Keith knows the process that generated the sequence — flips of fair coin. Are you saying we have to know what process generated something in order to calculate CSI of that thing?" B: "I read it again. Keith says the outcome is unlikely if the coin is fair and being tossed fairly. So he concludes that specific chance hypothesis is very unlikely to explain the result." Those are compatible statements, not opposites. Keith assumes a fair coin toss. Given that well-understood universe of probabilities, he can say that if he gets 500 heads then something other than raw probability is affecting the outcome. But he can only say that if he knows the odds of a fair coin toss. If he doesn't know whether the coins are double-sided or weighted, or whether there's a selective process at work, then he can't even begin to guess at the odds of intelligent intervention. Learned Hand
keiths:
The challenge cannot be met, even in principle. It is an empty challenge.
As usual, you're confused. How does your conclusion follow? Mung
Right... so to use CSI in biology you'd have to be able to calculate the probability of a protein/sequence/organ or one like it? In fact, to assess CSI we need a "chance hypothesis" and its probability density? wd400
wd400 @ 111. Which, as I said, is exactly the opposite of what you said in 108. Good job. Barry Arrington
@R0bb #20
Contrary to Barry’s assertion that I “responded not by meeting the challenge”, I actually did point to a working example in response to his challenge. Here’s a summary of that example: 1) Ewert calculates that the pattern has 1,068,017 bits of specified complexity under the chance hypothesis of equiprobability. 2) The pattern is known to have been created by natural processes. 3) In practice, equiprobability is the only chance hypothesis that IDists (other than Ewert) ever consider. HeKS responded, but I doubt that many, if any, IDists will agree with his response.
First of all, Barry is right in saying that you did not meet the challenge. Second, why not link to my actual post so people can decide for themselves whether or not they agree? On the matter of your response to this challenge, Ewert agreed that I was "exactly right" with respect to what would be required to meet the challenge and in my criticism of your use of him as a source (see here). Here's my extensive post about CSI, which also clears up much of the definitional confusion present in this discussion: https://uncommondesc.wpengine.com/atheism/heks-strikes-gold-again-or-why-strong-evidence-of-design-is-so-often-stoutly-resisted-or-dismissed/#comment-518656 HeKS
I read it again. Keith says the outcome is unlikely if the coin is fair and being tossed fairly. So he concludes that specific chance hypothesis is very unlikely to explain the result. wd400
wd400 @ 108. Just exactly the opposite of what keiths said at 58. Read it again. Try harder this time. Barry Arrington
Actually, contrary to what Fair Witness believes, nobody is unfairly smuggling anything into the argument. ID relies on the same method of inference that Charles Darwin himself used to make his inference for evolution. i.e. presently known cause known to produce the effect in question:
Stephen Meyer - The Scientific Basis Of Intelligent Design - video https://vimeo.com/32148403
Simply put, nobody has ever seen unguided processes produce a single protein or molecular machine, whereas intelligence has done both:
Doug Axe PhD. on the Rarity and 'non-Evolvability' of Functional Proteins - video (notes in video description) https://www.youtube.com/watch?v=8ZiLsXO-dYo Can Even One Polymer Become a Protein in 13 billion Years? – Dr. Douglas Axe, Biologic Institute - June 20, 2013 - audio http://radiomaria.us/discoveringintelligentdesign/2013/06/20/june-20-2013-can-even-one-polymer-become-a-protein-in-13-billion-years-dr-douglas-axe-biologic-institute/ Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator - Fazale Rana Excerpt of Review: ‘Another interesting section of Creating Life in the Lab is one on artificial enzymes. Biological enzymes catalyze chemical reactions, often increasing the spontaneous reaction rate by a billion times or more. Scientists have set out to produce artificial enzymes that catalyze chemical reactions not used in biological organisms. Comparing the structure of biological enzymes, scientists used super-computers to calculate the sequences of amino acids in their enzymes that might catalyze the reaction they were interested in. After testing dozens of candidates,, the best ones were chosen and subjected to “in vitro evolution,” which increased the reaction rate up to 200-fold. Despite all this “intelligent design,” the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, “is it reasonable to think that undirected evolutionary processes routinely accomplished this task?” http://www.amazon.com/gp/product/0801072093
Dr. Fuz Rana, at the 41:30 minute mark of the following video, speaks on the tremendous effort that went into building the preceding protein:
Science - Fuz Rana - Unbelievable? Conference 2013 - video http://www.youtube.com/watch?v=-u34VJ8J5_c&list=PLS5E_VeVNzAstcmbIlygiEFir3tQtlWxx&index=8 Computer-designed proteins programmed to disarm variety of flu viruses - June 1, 2012 Excerpt: The research efforts, akin to docking a space station but on a molecular level, are made possible by computers that can describe the landscapes of forces involved on the submicroscopic scale.,, These maps were used to reprogram the design to achieve a more precise interaction between the inhibitor protein and the virus molecule. It also enabled the scientists, they said, "to leapfrog over bottlenecks" to improve the activity of the binder. http://phys.org/news/2012-06-computer-designed-proteins-variety-flu-viruses.html ,,,we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.’ Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205. *Professor Emeritus of Biochemistry, Colorado State University, USA
Dr. James Tour, who, in my honest opinion, currently builds the most sophisticated man-made molecular machines in the world, will buy lunch for anyone who can explain to him exactly how Darwinian evolution works:
Top Ten Most Cited Chemist in the World Knows Darwinian Evolution Does Not Work - James Tour, Phd. - video https://www.youtube.com/watch?v=_Y5-VNg-S0s “I build molecules for a living, I can’t begin to tell you how difficult that job is. I stand in awe of God because of what he has done through his creation. Only a rookie who knows nothing about science would say science takes away from faith. If you really study science, it will bring you closer to God." James Tour – one of the leading nano-tech engineers in the world - Strobel, Lee (2000), The Case For Faith, p. 111 Science & Faith — Dr. James Tour – video (At the two minute mark of the following video, you can see a nano-car that was built by Dr. James Tour’s team) https://www.youtube.com/watch?v=pR4QhNFTtyw
Verse and Music:
Psalm 104:24 How many are your works, LORD! In wisdom you made them all; the earth is full of your creatures. Glorious Day - Casting Crowns http://myktis.com/songs/glorious-day/
bornagain77
In 58 Keith knows the process that generated the sequence -- flips of fair coin. Are you saying we have to know what process generated something in order to calculate CSI of that thing? wd400
wd400, ask keiths. See his analysis @ comment 58. He is correct when he writes:
To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable.
Barry Arrington
Barry,
To answer the second question you need to know whether the phenomenon has low probability and conforms to a specification.
And how do you know if phenomenon has low probability? wd400
Why wouldn’t you think we are simply using the dictionary definition of complex...
Because Dembski (and others) have written a very large body of work using a completely different definition of "complex." Orgel might have meant "complex" in the sense that you define it, but Dembski definitely isn't using the word that way. Learned Hand
adapa@ 80 Why wouldn't you think we are simply using the dictionary definition of complex, which is NOT simply many pieces. Here: adjective 1. composed of many interconnected parts; compound; composite: a complex highway system. 2. characterized by a very complicated or involved arrangement of parts, units, etc.: complex machinery. 3. so complicated or intricate as to be hard to understand or deal with: a complex problem. Interconnected, involved arrangements, intricate, hard to understand...... So there you go. Complex. It differs from things that aren't interrelated, interconnected, involved arrangements. Pebbles on a beach are none of these things. phoodoo
"Specified Complexity" is a dishonest attempt to smuggle (as Matt Dillahunty so aptly puts it) a designer into an argument. *Specified* means someone arranged it that way. The only way to tell if someone arranged things (objects, information, DNA) is 1) You witnessed them doing it, or 2) It conforms to a pattern that you have other independent evidence for, or experience of, that someone arranged things that way. And even case #2 is not conclusive. This is the same tactic as with the basic term "Intelligent Design", and Irreducible Complexity. It is an attempt at circular logic where you bake the answer into the question from the start. I'm not even sure Complexity is a valid objective measure of anything - I am damn sure that Specified is not. And for the purposes of your "challenge" you have apparently defined Specified along the lines of "something so improbable that they will never find an example of it, so I Win !" Shame on you, Barry. Fair Witness
This is worth reposting: keiths, to Barry:
By the way, are you ever going to admit that you were wrong about the need for chance hypotheses in establishing the presence of CSI and about the non-circular use of CSI to detect design?
I love the irony. Barry K. Arrington, the self-described “President” of UD, doesn’t understand ID and requires tutoring from ID critics. keith s
Don't kid yourself, Barry. See this comment:
Barry, People infer design all the time — sometimes correctly, sometimes incorrectly. The question isn’t whether design can ever be inferred. It’s whether it can be inferred in the cases under dispute, particularly those involving biological phenomena. I await any ID proponent’s demonstration that the flagellum or any other naturally occurring biological phenomenon is designed. By the way, are you ever going to admit that you were wrong about the need for chance hypotheses in establishing the presence of CSI and about the non-circular use of CSI to detect design?
keith s
keiths, you gave the store away in 58. Thanks. Barry Arrington
Barry,
To answer the second question you need to know whether the phenomenon has low probability and conforms to a specification.
Low probability with respect to the chance hypotheses. There is an 'H' in P(T|H). Dembski put it there for a reason. You're fighting a losing battle, Barry. keith s
To answer the second question you need to know whether the phenomenon has low probability and conforms to a specification. Barry Arrington
Barry, You are missing a simple and obvious point. The question...
How did X come about?
...is distinct from the question...
Does X exhibit CSI?
If you observe the process by which X arises, you can answer the first question. To answer the second question, however, you need to know the value of P(T|H). It's right there in Dembski's equation. And as Dembski explains, H must encompass "Darwinian and other material mechanisms". Chance hypotheses, in other words. Squirm all you like. You were wrong, and Dembski and the rest of us are right. keith s
Forget the coin flipping. Let's get real. :) See the below questions, based on an interesting commentary gpuccio posted in another thread:
can we regard the constant flux of information between epigenome-genome-epigenome as the only possible way to correctly describe cell differentiation? Is that flux ever interrupted, from the zygote to the adult being to a new zygote? Which of the following levels does the information reside?
1 - genome, both coding and non coding
2 - genome methylation
3 - histone modifications
4 - chromatin modifications
5 - transcription factors network
6 - regulatory RNAs (all the various forms)
7 - post-translational modifications
8 - asymmetric mitosis
9 - cell to cell signaling
10 - all of the above
11 - none of the above
how do those different strata interact? are they independent, parallel networks which ensure a supreme redundancy and robustness, or do they work, at least in part, in sequences?
Dionisio
keiths @ 91. Just as soon as you acknowledge that one need not form a hypothesis about a cause of a phenomenon when one actually observed the cause of the phenomenon. I made the same mistake you are making above. But having had that mistake pointed out, I'm comfortable admitting I was wrong. Try it! God knows you've demanded other people do it often enough. (If the shoe were on the other foot, I think we would have seen an OP by now titled something like, "So-and-So Simply Won't Admit that Orgel and Dembski Weren't Talking About the Same Thing Even Though it's Self Evident.") One does need to have such hypotheses to calculate CSI, because CSI is not about identifying one single way in which an event occurred. You need to know all the other ways in which it might have occurred, as well. Learned Hand
keiths @ 91. Just as soon as you acknowledge that one need not form a hypothesis about a cause of a phenomenon when one actually observed the cause of the phenomenon. Barry Arrington
PeterJ, No. I mean the probability any given coin is a head or a tail is 0.5. You could achieve this by throwing coins on the floor, tossing them individually, or drawing random numbers from one of several probability distributions. Dionisio. Now. wd400
BA, That's not actually testing whether CSI can detect design without knowing (or assuming) in advance whether the subject is designed. Do you understand what P(T|H) refers to? CSI isn't detecting design when it starts with the assumption that no non-design hypotheses are viable. In any event, CSI's boosters have a huge incentive to actually test its design-detection prowess in the real world. Their utter, and in many cases indignant, refusal to consider doing so strongly suggests that they are as convinced as I am that it just doesn't detect design. (Again, I think Ewert is explicit about that, but I'm not sure everyone got the memo.) Learned Hand
Barry, Before we move on to discuss your example, will you acknowledge that Dembski and the rest of us are right that chance hypotheses are required? You insisted that they weren't, but that's incorrect, as you now know. keith s
#69 wd400
Whoops, I know see that...
know? Dionisio
LH:
I think I’ll start a clock on any ID supporter actually testing whether CSI can detect design without knowing (or assuming) in advance whether the subject is designed.
How about an ID opponent a few minutes ago? See 58 where keiths explains it for you. Barry Arrington
Learned Hand #79, That's right. "X exhibits 842 bits of CSI" is no more meaningful than "Dembski is unaware of how a natural mechanism could have plausibly produced X". The former sounds a lot more mathy and sciencey than the latter, though. It's bad science, but good marketing. keith s
Keiths, read Dembski’s paper again. Read this part especially:
Probabilistic arguments are inherently fallible in the sense that our assumptions about relevant probability distributions might always be in error. Thus, it is always a possibility that {Hi}i∈I omits some crucial chance hypothesis that might be operating in the world and account for the event E in question. But are we to take this possibility seriously in the absence of good evidence for the operation of such a chance hypothesis in the production of E? Indeed, the mere possibility that we might have missed some chance hypothesis is hardly reason to think that such a hypothesis was operating.
Dembski goes on to give an example of the kinds of hypotheses he thinks can be disregarded:
No experiments since [Miller-Urey] have shown how these building blocks could, by purely chemical means (and thus apart from design), be built up into complex biomolecular systems needed for life (like proteins and multiprotein assemblages, to say nothing of fully functioning cells).
It's a textbook argument from ignorance. "We don't know how this happened, therefore we can assume it didn't." Learned Hand
BA: In short, lawlike necessity accounts for the bulk of the effect. This is not the sort of information rich aperiodic interactively functional structures we have been pointing to over and over as examples of functionally specific complex organisation, only to be brushed aside in the haste to set up and knock over strawmen. I start, again with the Abu 6500 c3 reel. Please, take one up -- they are easy to find -- and satisfy yourself on the empirical reality and configurational constraints of FSCO/I. Understand, this is vastly simpler than protein synthesis in ribosomes, which is similarly FSCO/I. KF kairosfocus
Thanks Wd400, So, let me get this straight. I pick up a bucket of 500 coins and throw the contents up in the air; if they all happen to land heads then I could say that this had a probability = 0.5? Look, don't worry, I'm not going to keep on about this, I am working night shift in a few hours and have to go lay down for a while :) PeterJ
Especially when for two years you personally have refused to provide an essay grounding on empirical evidence the blind watchmaker tree of life from root to twigs.
I think I'll start a clock on any ID supporter actually testing whether CSI can detect design without knowing (or assuming) in advance whether the subject is designed. If we start the clock with Dembski's early work, we're coming up on nearly twenty years, right? (To be fair, Ewert suggests this isn't what CSI is meant to do. But I don't think the UD regulars agree with that in practice; I'm not even sure Dembski does.) Learned Hand
keith, I think we are talking past each other. My example assumes a fair coin. If we flip a fair coin 500 times the pattern was the result of chance. Similarly, my challenge assumes the actual working of chance/law forces. There is no need to form a hypothesis about that which is observed to be in effect (i.e., chance/law processes in action). Barry Arrington
Barry Arrington “Complex” is not measured by the number of pebbles on the beach. It is measured by the probability of their configuration, which in the example you gave is close to 1. A natural process produced a highly ordered, specified structure that meets all of your CSI definitions in the original challenge. Too late to change the definitions now. Adapa
KS, please stop setting up and knocking over strawmen. Especially when for two years you personally have refused to provide an essay grounding on empirical evidence the blind watchmaker tree of life from root to twigs. The challenge to do so is still open, BTW. When we see that contrast, we will draw our conclusions on the tree with the bear the dog would not bark at. KF kairosfocus
phoodoo I wonder who said that? I don’t think this is the definition of complex at all. Then what is the definition of "complex" used by the ID community? How many parts and with what relationship to each other does something have to have to qualify as "complex"? Adapa
Keiths, read Dembski’s paper again. Read this part especially:
Probabilistic arguments are inherently fallible in the sense that our assumptions about relevant probability distributions might always be in error. Thus, it is always a possibility that {Hi}i∈I omits some crucial chance hypothesis that might be operating in the world and account for the event E in question. But are we to take this possibility seriously in the absence of good evidence for the operation of such a chance hypothesis in the production of E? Indeed, the mere possibility that we might have missed some chance hypothesis is hardly reason to think that such a hypothesis was operating.
If Dembski really means that it's acceptable to disregard the probabilities of unknown non-design causes, then he can no longer claim that specified complexity is immune to false positives. In any case where he is ignorant of a viable non-design cause, he may correctly calculate CSI and falsely conclude that the subject exhibits specified complexity. Nor does CSI look like a very serious enterprise anymore. When calculating F=MA, one can't say, "Well, I don't know what acceleration is here... so let's just forget about that part." If CSI is calculating something in comparison to non-design alternatives, then it needs to consider what those alternatives are--otherwise, there's no basis for saying that the outcome is sufficiently improbable. "I don't know what value to use for P(T|H)" is not the same thing as "We don't need to consider P(T|H)." In other words, the impossibility of calculating the odds of unknown alternatives doesn't rescue a CSI calculation that depends upon them. Learned Hand
Adapa “Complex” is not measured by the number of pebbles on the beach. It is measured by the probability of their configuration, which in the example you gave is close to 1. Barry Arrington
Adapa, the Chesil beach example has long been answered, the matter is one of sorting action and is law plus chance, similar to settling out soil in a beaker to see layers of different particle size; the beach is not a case of FSCO/I unlike a 6500 C3 reel or text in this thread or the like, or DNA. Do you wish to argue that water sorting action or the like explains proteins based on aa strings and as needed for life by the hundreds? The DNA code? Please think afresh. KF kairosfocus
Adapa, Did someone say that "complex" means having a large volume of things? I wonder who said that? I don't think this is the definition of complex at all. I wouldn't call a cup of flour complex, even though it has many pieces of powder inside. phoodoo
Barry:
Dang, keith, I could have written those two paragraphs. Perhaps we are not so far apart as I imagined.
Are you sure about that? Did you notice that I used the phrase "chance hypothesis"? Are you now admitting that Dembski is correct, and that you must employ chance hypotheses before you can establish the presence of CSI? keith s
Learned Hand, It's supremely ironic, isn't it? We have ID proponents who understand neither evolutionary theory nor ID, yet they're adamant that the former is false and the latter is correct. It's pure faith and no reason. Barry, Have you considered starting a thread in which ID supporters can ask questions about ID, with ID critics supplying the answers? keith s
#4 Moose Dr
I must admit, I am frustrated with the term “specified”. I would rather use “function specifying”. Provide 500 bits of data which, when provided to a data to function converter (such as a computer) produces complex function.
Agree. Thank you. Dionisio
LH: I have other things on the plate for today but glanced here. I am severely disappointed to see your "shut up" mischaracterisation, which -- given what has been repeatedly explained -- verges on outright mendacity as you are far too educated not to get the basic point. It comes across as though this is too handy a rhetorical club to let the mere truth get in the way. That truth is simple. Given complexity beyond 500 - 1,000 bits, islands of function relevant to life will reliably not be found by sparse blind watchmaker searches as are feasible on sol system or observed cosmos scopes. Starting with OOL, where the reasonable hyps as to what can have been at work are physical, chemical and thermodynamic. The blind watchmaker thesis tree of life has no root. The relevant FSCO/I has one reasonable, empirically warranted explanation, design, and that also transforms estimates the rest of the way to the twigs, including ours. Mathematically, it was repeatedly pointed out to you that the log reduction of the Dembski 2005 metric model reveals an info-beyond-a-threshold metric. Where, information is readily empirically estimated on things such as structured strings of y/n qs to specify a config among possibilities, or statistical studies that bring up redundancies or the like. Info is readily estimated for D/RNA and proteins especially, as has been published. The values do not help the blind forces thesis. The distance life systems are beyond the FSCO/I threshold is so large there is no material difference between simple 2 bits per base or 4.32 per AA and more sophisticated studies. A simplistic 1st cell with 100 AA proteins at 1 bit effective info per AA and 100 proteins for life is still 10,000 bits, where the config space DOUBLES per bit beyond 1,000 bits. As for the implicitly hoped-for searches for a golden search that somehow gets you just right for OOL etc, we move to how samples are subsets, so for a set of cardinality W the set of subsets is of cardinality 2^W, an exponentially harder search yet. Any reasonable blind search plausible on the physical setting of a Darwin's pond or the like will not justify such a golden search. As for novel body plans, mutations of various types, etc do not plausibly cross the gaps between protein clusters in AA sequence space, where jumps in genome size for the dozens of body plans look like 10 - 100+ mn bases per new plan. Within resources of Earth, not even sol system. So, the talking point on probability values vs info values has been answered over and over again, especially in recent days as this point has stuck its head over the parapet. Just, you refuse to respect the principle of fairly acknowledging that, and I have to say such because several times answers were directed to you. Please do better. KF kairosfocus
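The "simplistic 1st cell" figure in the comment above is straightforward arithmetic; a minimal sketch, using only the round numbers quoted there (they are illustrative inputs, not empirical values):

```python
from math import log10

proteins, aa_per_protein, bits_per_aa = 100, 100, 1   # the round figures quoted above
total_bits = proteins * aa_per_protein * bits_per_aa  # 10,000 bits
threshold = 1_000                                      # upper end of the 500 - 1,000 bit threshold

excess = total_bits - threshold
print(total_bits, "bits; excess over threshold =", excess, "bits")
# each extra bit doubles the configuration space:
print(f"2^{excess} ~ 10^{excess * log10(2):.0f}")      # ~10^2709
```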
Keith s
To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don’t exclaim “Design!” after every 500 coin flips. The missing ingredient is the specification of the target T. Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly.
Dang, keith, I could have written those two paragraphs. Perhaps we are not so far apart as I imagined.
But for Dembski, H must encompass all “Darwinian and other material mechanisms” that might explain the phenomenon in question. So yes, if P(T|H) is extremely low, where T is a prespecified target, and H is defined as broadly as Dembski requires, then Barry’s challenge becomes effectively impossible. If someone finds a natural mechanism producing T with sufficiently high probability, then CSI disappears — by definition. Barry’s challenge is empty. By the definition of CSI, it cannot be met.
Keiths, read Dembski’s paper again. Read this part especially:
Probabilistic arguments are inherently fallible in the sense that our assumptions about relevant probability distributions might always be in error. Thus, it is always a possibility that {Hi}i∈I omits some crucial chance hypothesis that might be operating in the world and account for the event E in question. But are we to take this possibility seriously in the absence of good evidence for the operation of such a chance hypothesis in the production of E? Indeed, the mere possibility that we might have missed some chance hypothesis is hardly reason to think that such a hypothesis was operating.
Dembski would say in the context of the coin example that my challenge is not empty unless you can demonstrate some reason to suppose a chance/law process was in operation that would result in a 500 coin pattern other than 500 fair flips. Barry Arrington
Barry Arrington The challenge does not call for “plausible [to you] scenarios.” Here's a real world example. Chesil beach in England is a pebble beach 17 miles long. Over millions of years the iterative filtering action of the waves has created a very orderly sorting of the pebbles by size from fist-sized near Portland to pea-sized at West Bay. Locals can tell where they are on the beach just by observing the pebble size at their location. The beach is complex (made up of billions of parts) and has a specification (large-to-small). It also functions quite well as a beach. By pure chance the probability of the pebbles lining up by size across 17 miles is astronomically small. By definition such an unlikely yet specified arrangement of pebbles has a huge CSI value, yet it has happened through natural actions only. Adapa
Whoops, I know see that Bob O'H already presented the sample with replacement example. And yes, Learned Hand, you have it. wd400
By the definition of CSI, it cannot be met.
Trying to discuss CSI here has made two things clear: 1. The set of people who are confident that they know how CSI works does not overlap particularly well with the set of people who can actually have an informed conversation about it. 2. With a couple of notable exceptions the only really serious conversations about how CSI could work come from its critics. Its ardent supporters tend to treat it like an article of faith, with the inconvenient parts like P(T|H) (and in BA's case, the difference between how Orgel and Dembski defined their terms) taken aggressively off of the table. Learned Hand
Presumably he means that each coin has an independent probability X of being heads-up, where X is 1/2 in the first generation. In each subsequent generation, X is [number of heads in previous generation]/500. No? Learned Hand
Flip them, shake them up, draw a random binomial. You can start anywhere; it's just that random with prob=0.5 is the furthest away from the specification, so, I guess, more CSI is being created during the "experiment". wd400
WD400, Can you please explain '500 coins randomly showing heads'. What makes them 'random' to start with? Please excuse me if this is a stupid question. PeterJ
Barry, You're still neglecting the fact that the CSI equation includes P(T|H), where H stands for all "Darwinian and material mechanisms". Suppose you have some phenomenon in mind that you think is an example of CSI. You challenge me to show how that phenomenon can be plausibly produced by a natural mechanism. If I succeed, then I have demonstrated that P(T|H) is not as low as you thought it was. And since P(T|H) is relatively high, the phenomenon, by definition, does not exhibit CSI. In other words, as soon as I succeed, the definition of CSI redefines my success as a failure. The challenge cannot be met, even in principle. It is an empty challenge. keith s
So, the specificity is that the 500 coins are all heads? Here's a chance process that can create such a specific outcome: random sampling with replacement.
1. Start with 500 coins, randomly showing heads with probability 0.5 (or whatever number you want to start with).
2. Start a new "population" of coins, this time with coins randomly showing heads with probability equal to the frequency of heads in the previous "population".
3. Repeat.
Most of the time, you'll have a population that meets the specific requirements within 500 samples. This process, by the way, is pretty much analogous to genetic drift. wd400
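A small simulation of the resampling process described in the comment above, written as a Wright-Fisher-style drift toy (the function name, seed, and parameters are mine and purely illustrative):

```python
import random

def drift_to_fixation(n=500, p=0.5, seed=None):
    """Resample a 'population' of n coins until the heads frequency fixes at 0 or 1."""
    rng = random.Random(seed)
    freq, generations = p, 0
    while 0.0 < freq < 1.0:
        heads = sum(1 for _ in range(n) if rng.random() < freq)
        freq = heads / n
        generations += 1
    return freq, generations

freq, gens = drift_to_fixation(seed=42)
print("fixed at", "all heads" if freq == 1.0 else "all tails", "after", gens, "generations")
```

About half of such runs fix at all tails rather than all heads, and the number of generations needed varies widely from run to run.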
You were shown two plausible scenarios
The challenge does not call for "plausible [to you] scenarios." Barry Arrington
Adapa,
The definition of ‘bit’ only requires there be two possible states i.e 1, 0. It says nothing about the relative probabilities of the states.
That's how I would think of a binary digit, certainly. But googling for definitions of "bit" I found a reference to the definition used in information theory, which (according to Wikipedia) assumes equal probability of either state. And that's the sum total of my knowledge on the subject. Learned Hand
Barry Arrington 1. Yes. A chance process will never result in 500 heads, because, as you say, the probability is too low. Pure chance will almost certainly never do it. A process of chance with the feedback of filtering differential selection can do it easily. Guess which category evolutionary process falls into? You were shown two plausible scenarios that did result in 500 heads with no intelligent intervention, unless you consider crows to be an Intelligent Designer. Sorry if you didn't think through the question before asking it. Adapa
Keith s
That’s right. It’s the combination of low probability and specification that renders the challenge effectively impossible by definition. Both are necessary to establish the presence of CSI.
This is an interesting comment. I agree with it if you substitute "in practice" for "by definition." That's right. It's the combination of low probability and specification that renders the challenge effectively impossible in practice. Both are necessary to establish the presence of CSI. We are on an even playing field. The definitions are the same for both design and chance/law processes:
1. Any 500 coin pattern is going to have an astronomically low probability. This is true whether the pattern was created by design or by chance. A designed 500 coin pattern will have 500 bits of information. A chance pattern will also have 500 bits of information.
2. All 500 coin patterns are, as a matter of logic and physics, within the reach of both design and chance.
The definitional playing field has not been tilted one way or the other. Therefore, it is not impossible "by definition" for chance to account for the pattern "500 coins." It is impossible in practice. Barry Arrington
Learned Hand,
Not to beat a dead elephant, but would it become impossible if the challenge admitted the role of P(T|H) in a CSI calculation? If I understand the term–which I obviously might not!–it rules out non-design origins as part of the determination of CSI, making it effectively impossible to show non-design sources of CSI.
To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don't exclaim "Design!" after every 500 coin flips. The missing ingredient is the specification of the target T. Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly. But for Dembski, H must encompass all "Darwinian and other material mechanisms" that might explain the phenomenon in question. So yes, if P(T|H) is extremely low, where T is a prespecified target, and H is defined as broadly as Dembski requires, then Barry's challenge becomes effectively impossible. If someone finds a natural mechanism producing T with sufficiently high probability, then CSI disappears -- by definition. Barry's challenge is empty. By the definition of CSI, it cannot be met. keith s
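For the prespecified target T discussed above (say, 250 heads followed by 250 tails) and the chance hypothesis H of a fair coin flipped fairly, the probability and its bit measure are easy to check; a minimal sketch:

```python
from math import log2

# Any single prespecified 500-flip sequence T has the same probability under H.
p_T_given_H = 0.5 ** 500
print(f"P(T|H)        = {p_T_given_H:.2e}")              # ~3e-151
print(f"-log2 P(T|H)  = {-log2(p_T_given_H):.0f} bits")  # 500 bits
```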
niwrad Learned Hand #46 why do they have to have equal probability? For the very definition of “bit”. The definition of 'bit' only requires there be two possible states i.e 1, 0. It says nothing about the relative probabilities of the states. Adapa
Mark, you say I am asking for something that is impossible. Yes and no. 1. Yes. A chance process will never result in 500 heads, because, as you say, the probability is too low. 2. No. The pattern is most certainly possible. I’m sure you will agree that an agent is capable of arranging 500 coins into an “all heads” pattern fairly easily. What does this mean? It means that my challenge will never be met. Chance/law processes will never be shown to have produced complex information (500 bits in our example) that conforms to a specification (500 heads in our example). The challenge will never be met not because it is impossible “by definition.” It will never be met because it is impossible in practice. Here is the point of all of this. There is nothing special about the “500 coins” pattern. It is merely a stand-in for all highly improbable patterns that conform to a specification. My challenge will never be met, because no one will ever be able to show me a chance/law process that has been actually observed creating 500 bits of information that conforms to a specification. Barry Arrington
niwrad In fact 1 bit of real CSI means that two possibilities (open/close, 1/0, on/off…) have equal probability to occur. Where did that new requirement for "real" CSI come from? Looks like you just made it up. Adapa
2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.
What if you had strange coins that acted so that after you had flipped all of them, the probability any coin would come up heads on the next set of 500 flips would be the proportion of coins that were presently heads? Would you call that a "chance/law process"? Bob O'H
why do they have to have equal probability?
For the very definition of “bit”.
That confused me until I looked up "bit"--apparently in information theory it is assumed that the two states are equally probable. Thanks, I never knew that. I don't see how it applies to CSI, though, especially as you've used it. Surely Dembski would say that if my 60-40 coin returned one million heads in a row, the result has more than 500 bits of CSI? I don't think he would say the calculation is inapplicable because the underlying states aren't equally probable, would he?
That’s not the case of adapa’s examples, where the odds are even 100-0 (*all* coins on the table are heads).
That's the post hoc result, but not by definition. If you happened to return home before the process was complete (not all coins have been shaken to a resting state, not all coins have been plucked by the bird) then you'd get a partially-selected result rather than "100-0". Learned Hand
R0bb:
But for the record, I don’t agree with the argument that the low-probability requirement renders your challenge effectively impossible by definition. After all, low probability events happen all the time.
That's right. It's the combination of low probability and specification that renders the challenge effectively impossible by definition. Both are necessary to establish the presence of CSI. keith s
Learned Hand #46
why do they have to have equal probability?
For the very definition of "bit".
Let’s say the coin is unfair and the odds are 60-40...
That's not the case of adapa's examples, where the odds are even 100-0 (*all* coins on the table are heads). niwrad
Barry:
Keith s I have read Specification: The Pattern That. Signifies Intelligence where those terms are discussed on page 3. So, yes, I know what the terms mean in Dembski’s work.
Apparently not, because Dembski makes it absolutely clear that chance hypotheses are required in order to establish the presence of CSI:
In Fisher’s approach to testing the statistical significance of hypotheses, one is justified in rejecting (or eliminating) a chance hypothesis provided that a sample falls within a prespecified rejection region (also known as a critical region). For example, suppose one’s chance hypothesis is that a coin is fair. To test whether the coin is biased in favor of heads, and thus not fair, one can set a rejection region of ten heads in a row and then flip the coin ten times. In Fisher’s approach, if the coin lands ten heads in a row, then one is justified rejecting the chance hypothesis. [Emphasis added]
And:
More formally, the problem is to justify a significance level α (always a positive real number less than one) such that whenever the sample (an event we will call E) falls within the rejection region (call it T) and the probability of the rejection region given the chance hypothesis (call it H) is less than α (i.e., P(T|H) < α), then the chance hypothesis H can be rejected as the explanation of the sample... The more opportunities for an event to occur, the more possibilities for it to land in the rejection region and thus the greater the likelihood that the chance hypothesis in question will be rejected... Rejection regions eliminate chance hypotheses... [Emphasis added]
And on p. 18:
Next, define p = P(T|H) as the probability for the chance formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms. [Emphasis added]
You can't establish the presence of CSI without calculating P(T|H), where H represents the chance hypotheses. Thus you are wrong to claim that chance hypotheses are not needed. Dembski disagrees with you, and he is the person who defined CSI. Will you admit, and correct, your error? keith s
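A toy numerical version of the Fisherian rejection-region test quoted just above (the 0.01 significance level is an arbitrary illustrative choice, not a figure from Dembski's paper):

```python
alpha = 0.01                     # prespecified significance level (illustrative)
p_T_given_H = 0.5 ** 10          # P(ten heads in a row | fair coin) = 1/1024
sample_E = "HHHHHHHHHH"          # the observed flips

# E falls in the rejection region T, and P(T|H) < alpha, so H ("fair coin") is rejected
if sample_E == "H" * 10 and p_T_given_H < alpha:
    print(f"P(T|H) = {p_T_given_H:.5f} < alpha = {alpha}: reject the chance hypothesis H")
```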
But for the record, I don’t agree with the argument that the low-probability requirement renders your challenge effectively impossible by definition. After all, low probability events happen all the time.
Not to beat a dead elephant, but would it become impossible if the challenge admitted the role of P(T|H) in a CSI calculation? If I understand the term--which I obviously might not!--it rules out non-design origins as part of the determination of CSI, making it effectively impossible to show non-design sources of CSI. If we aren't considering the odds of non-design origins, then adapa's examples @ 31 seem like they fit. So would the distribution of fallen sticks in my front yard; it's extraordinarily unlikely that they'd be arranged in a gradient of big ones near my front door and smaller ones out by the curb. And yet they are, because my dog selects them and tries to drag the big ones inside after walks. (And if my dog counts as intelligence, we could look at the distribution of stones by a floodplain, sediment size at the bottom of a pond, vegetation mass in fire zones or anything else that gets sorted or filtered by nature.) I don't think Dembski would consider these to have CSI, since he'd say there's a high probability of a non-design origin. Learned Hand
Adapa, "all that does is push the problem to the definition of “complex function”" Take a car part, look at it, ignore the really simple ones like a shear pin or piece of wire. There, you have an example of a complex function. Look at proteins, especially some of those really cool ones like those found in the bacterial flagella, in ATP synthase, etc. There's gazillions of examples, each one obviously does something very far beyond simple. Adapa, "ID has always had the problem (some say strategy) of keeping its definitions so vague that ..." The NDE community always has the problem of finding some stupidly simple example and saying "here, see" -- in this case the simplest "this kinda does something" protein. Why does the NDE community want to know where the edge of simplicity is so that it can prove that to be possible? Why doesn't the NDE community show how easily chance + selection can produce the "obviously complex" case? Oh yea, because you can't! Moose Dr
Barry, the point is that low probability is a necessary condition of CSI. Everyone agrees that "it is most certainly NOT merely low probability that gives the pattern CSI" [italics mine]. Nobody is denying the need for specificity. But for the record, I don't agree with the argument that the low-probability requirement renders your challenge effectively impossible by definition. After all, low probability events happen all the time. R0bb
niwrad, In fact 1 bit of real CSI means that two possibilities (open/close, 1/0, on/off…) have equal probability to occur. I don't think I've heard this one before; why do they have to have equal probability? Isn't it enough that the final specified result is arbitrarily unlikely, even if one component possibility is more likely than the other? That is, let's say the coin is unfair and the odds are 60-40; I think Dembski would still say a string of 1,000 heads would still exhibit CSI. Is that not right? Learned Hand
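A quick check of the 60-40 case raised above, treating "bits" simply as -log2 of the probability of the run under the biased-coin hypothesis (whether that is the right reading of Dembski is exactly what is being debated here):

```python
from math import log2

p_heads = 0.6                    # biased coin, 60-40 toward heads
for n in (500, 1000):
    bits = -n * log2(p_heads)    # -log2 P(n heads in a row | 60-40 coin)
    print(f"{n} heads in a row: {bits:.0f} bits")
# 500 heads ~ 368 bits (below a 500-bit threshold); 1,000 heads ~ 737 bits (above it)
```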
#44 BA I understand it is not mere low probability. But low probability is a necessary (but not sufficient) condition. Therefore there cannot be a case of CSI without low probability. You have to know there is a low probability of meeting the specification in order to know there is CSI. markf
Markf @ 42:
You know something has CSI because there is an astronomically low probability of a chance hypothesis causing it.
You obviously read my comment 28 (you quoted it), but you don’t seem to have understood it. Again, as I have tried to explain many times, in my 500 coins example it is most certainly NOT merely low probability that gives the pattern CSI. Let me try to put it this way. If CSI resulted from mere low probability ALL 500 coin patterns would contain CSI. Why? Because all 500 coin patterns have the exact same (low) probability. Barry Arrington
adapa #31 Finally two examples... compliments. Unfortunately your examples cannot count as CSI really created by chance. In fact 1 bit of real CSI means that two possibilities (open/close, 1/0, on/off...) have equal probability to occur. In your #1 example, given "the difference in form factor", the tails (heads down) tend to walk more than the heads. In #2, "the tails side is slightly shinier than the heads side". These differences are such that in the described scenarios (train's vibration for #1 and crow picking up coins in #2) the probability of having a face of the coin (say "1") is far higher than having the other one ("0"). We don't have a real "bit" 1/0. As a consequence that is not CSI. niwrad
Barry #40 As several people have tried to explain to you - you have asked people to do something that is almost impossible by definition. You know something has CSI because there is an astronomically low probability of a chance hypothesis causing it. So of course no one can meet your challenge. It is like challenging someone to find a married bachelor. markf
This may help (or add more confusion) Barry: ANY configuration of 500 coins is improbable. This is only true if you assume the (admittedly extremely plausible) chance hypothesis that the probability of one coin being a head (or tail) is independent of all others and is not close to 1. There is always a chance hypothesis in there somewhere. It just may be so obvious you don't see it. markf
Keith s I have read Specification: The Pattern That Signifies Intelligence, where those terms are discussed on page 3. So, yes, I know what the terms mean in Dembski’s work. Are you going to take a crack at meeting the challenge in the OP? Based on your answers so far I assume not. Barry Arrington
Your comment is kind of amusing because you grasp that I’m right, but you can’t resist taking a rhetorical swing at me.
I didn't mean it to be a "rhetorical swing." I meant it to be a serious challenge: you take CSI on faith, neither understanding the details of the calculation nor willing to discuss whether your basic premises might be incorrect. When I asked gpuccio to explain how he calculates CSI, he didn't sneer or insult or complain or change the subject. He answered the question, graciously and thoughtfully. Based on your comments here, do you think you could fairly be described as "gracious"? Learned Hand
No, because P(T|H) does not depend on how T actually came about. It depends on all the ways T could have come about through non-design means. For example, suppose Barry deliberately places a single coin tails up on a table. That is a designed outcome, but it certainly doesn’t exhibit CSI, because it could easily have been produced by simply flipping the coin. This is important, because we are supposed to be able to assign CSI to things even when we haven’t witnessed their genesis.
Keith, Thanks, that makes sense. I guess I fell into the trap of oversimplifying a probability calculation to make it match my intuition! Learned Hand
keiths:
Do you understand Dembski’s CSI equation? Do you know what the P(T|H) term represents, and why? Do you know what H stands for?
Barry:
Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer?
I'll take that as a 'no' to all three questions. You've also answered my next question, which is "Do you know that Dembski's approach is Fisherian, not Bayesian?" The answer to that question is also clearly "no". keith s
Barry: "centrestream, you already know what you have actually observed. Why is that so hard to understand?" Thank you for demonstrating what I have actually observed. I ask a simple, yet serious, question, and you respond with sarcasm. Would you like to try again? What is already known? That something is complex or that something is designed? centrestream
Barry
keith s @ 27. Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer?
Nobody is trying to perform a Bayesian analysis, nor is anyone trying to find an answer that we already know (by which I assume you mean the actual cause of the pattern). Conditional probabilities do not imply Bayesian analysis. P(T|H) is a factor in Dembski's definition of specified complexity, and Dembski explicitly claims that his design detection method is non-Bayesian. And again, we're not trying to find the cause of the pattern. We're trying to determine whether it has CSI. R0bb
Adapa @ 31. I assume you are unable to meet the challenge, because you have made no attempt to do so. Maybe you don't understand the challenge. Let me help you. Speculating about how chance/law forces might result in the pattern is not the same as showing that chance/law forces actually did result in the pattern. I actually want to thank you for your comment though, because it illustrates perfectly the Darwinist mindset. Darwinism is the only scientific theory of which I am aware in which evidence-free speculations of the researcher actually count as evidence. Barry Arrington
centrestream, you already know what you have actually observed. Why is that so hard to understand? Barry Arrington
Barry #29: "Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer?" I'm curious. What is the answer that you already know? That something is complex? Or that something is designed? centrestream
If you allow selection feedback to work on the objects I can think of two plausible non-intelligent scenarios for your 500 heads.

1. Unknown to you, your roommate dumps his stash of 1000 pennies randomly on a shaky card table. They land half heads, half tails. He leaves. Your flat is next to the train tracks, and every time a train comes by the table vibrates fiercely. Because of the difference in form factor the tails (heads down) tend to "walk" and fall off the table. The heads (tails down) don't move. After enough trains have gone by the table holds nothing but 500 heads. You come home, find them and falsely conclude design.

2. Unknown to you, your roommate dumps his stash of 1000 pennies randomly on a table. They land half heads, half tails. He opens the window and leaves. A crow lands in the window and sees the coins. Because the tails side is slightly shinier than the heads side the crow picks up a "tails" and flies off with it. The process is repeated until the table holds nothing but 500 heads. You come home, find them and falsely conclude design.

I pointed the problem out before but you haven't addressed it yet: iterative processes involving selection feedback can blow right by the 500-bit threshold. Evolution is an iterative process involving selection feedback. Adapa
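The first scenario is easy to simulate. A rough sketch, assuming a hypothetical per-vibration "walk-off" probability for tails-up coins (the 5% figure is arbitrary):

```python
import random

def shaky_table(n_coins=1000, walk_off_prob=0.05, seed=0):
    """Dump n_coins fair coins on a table, then repeatedly 'vibrate' it.
    Each vibration gives every tails-up coin a chance to walk off the edge;
    heads-up coins stay put.  Returns (vibrations_needed, surviving_coins)."""
    rng = random.Random(seed)
    coins = [rng.choice("HT") for _ in range(n_coins)]
    vibrations = 0
    while "T" in coins:
        vibrations += 1
        coins = [c for c in coins if c == "H" or rng.random() > walk_off_prob]
    return vibrations, coins

vibrations, survivors = shaky_table()
# Roughly half the coins remain, every one of them heads, after on the order of
# a hundred or two vibrations.
print(vibrations, len(survivors), set(survivors))
```

Whether the surviving all-heads configuration should count as 500 bits of CSI is exactly the point in dispute, since the relevant chance hypothesis is no longer a single set of fair flips.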
Robb, Thank you for your example. The debating can begin. Anybody have any more? This is exciting! Ed Edward
keith s @ 27. Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer? Barry Arrington
keith
Something possesses CSI if a) it is specified, and b) it cannot be produced by “Darwinian and other material mechanisms”
I don’t know anyone who defines CSI this way. Therefore, your attempt to show circularity fails. Take the 500 coin example. I say the “500 heads” pattern contains complex specified information because it is complex (500 bits) and it is specified (“500 heads”). Put another way, the search space is a gigantic ocean (all patterns of 500 coins) and the specification is one small island in that ocean (“500 heads”) that is descriptively compressible. Unlike your scenario, I never defined CSI as being, by definition, “that which is beyond material mechanisms.” Indeed, if you read my post carefully, you will see that I said exactly the opposite. I stated that it is logically possible for chance to arrive at the specification. We can be practically certain, however, that it never will. The design inference is not based simply on low probability. Again, the probability of ALL 500 coin sequences is exactly the same, because all 500 coin patterns contain the exact same amount of information (500 bits). It is only the combination of the astronomically low probability with the specification (500 heads) that results in the design inference. Barry Arrington
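One informal way to cash out "descriptively compressible" is to compare how briefly each pattern can be encoded. The sketch below uses zlib compression as a crude stand-in for descriptive simplicity; it illustrates the idea, not Dembski's actual specification measure:

```python
import random
import zlib

N = 500
specified = "H" * N                                          # "all 500 heads"
typical   = "".join(random.choice("HT") for _ in range(N))   # a typical flip outcome

def compressed_size(s):
    """Length in bytes of the zlib-compressed string: a rough proxy for how
    short a description of the pattern can be."""
    return len(zlib.compress(s.encode()))

print(compressed_size(specified))  # small: the pattern has a very short description
print(compressed_size(typical))    # much larger: a typical sequence resists compression
```

The all-heads string collapses to a handful of bytes, while a typical flip sequence does not, even though both have exactly the same probability under a fair-coin hypothesis.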
Barry, Let me re-ask a question from the other thread. Do you understand Dembski's CSI equation? Do you know what the P(T|H) term represents, and why? Do you know what H stands for? keith s
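For context on the terms being asked about: as I read Dembski's 2005 "Specification" paper, the quantity at issue is chi = -log2[10^120 * phi_S(T) * P(T|H)], where H is the relevant chance hypothesis and phi_S(T) counts the patterns at least as simple to describe as T. A rough sketch for the 500-heads case (the phi_S value below is an illustrative guess, not a computed figure):

```python
import math

def specified_complexity(p_T_given_H, phi_S, replicational_resources=1e120):
    """chi = -log2(resources * phi_S * P(T|H)), following the 2005 formulation
    as I read it.  H is the chance hypothesis under consideration."""
    return -math.log2(replicational_resources * phi_S * p_T_given_H)

# 500 heads under a fair-coin chance hypothesis H:
p = 0.5 ** 500          # P(T|H)
phi_S = 2               # illustrative guess: "all heads" and "all tails"
print(specified_complexity(p, phi_S))   # roughly 100 bits; greater than 1, so design is inferred
```

The point of the P(T|H) term is that the calculation is always carried out relative to some chance hypothesis H, which is what the questions above are probing.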
Learned Hand, to R0bb:
Your point seems right, but isn’t the problem with the hypo simpler than that? If you see coins being laid out, then isn’t PTH just 0?
No, because P(T|H) does not depend on how T actually came about. It depends on all the ways T could have come about through non-design means. For example, suppose Barry deliberately places a single coin tails up on a table. That is a designed outcome, but it certainly doesn't exhibit CSI, because it could easily have been produced by simply flipping the coin. This is important, because we are supposed to be able to assign CSI to things even when we haven't witnessed their genesis. keith s
Learned @ 23: Your comment is kind of amusing because you grasp that I’m right, but you can’t resist taking a rhetorical swing at me. You are absolutely correct. A Bayesian analysis regarding the provenance of a pattern is pointless if you have actual knowledge regarding the provenance of the pattern. R0bb insists on a “chance hypothesis” when there is no need for any hypothesis. R0bb says that he can’t know whether an event is improbable unless there is a chance hypothesis. Nonsense on a stick. ANY configuration of 500 coins is improbable. This is true whether the configuration resulted from chance or design. Barry Arrington
Barry, markf and R0bb have been patiently explaining this to you, but you're brushing them off instead of thinking about what they're saying. The same bad logic is used in the following two scenarios.

Scenario I
1. Definition: Something possesses nurpitude if a) it is blue, and b) it cannot be built from toothpicks.
2. You issue a challenge: "Show me just one example of something with nurpitude being built from toothpicks. Just one!"
3. Your opponents point out that anything that can be built from toothpicks is automatically, by definition, devoid of nurpitude. No matter how powerful toothpick construction techniques are, the challenge cannot be met. If X can be built from toothpicks, it automatically, by definition, does not possess nurpitude.
4. Therefore the challenge is empty.

Scenario II
1. Definition: Something possesses CSI if a) it is specified, and b) it cannot be produced by "Darwinian and other material mechanisms".
2. You issue a challenge: "Show me just one example of something with CSI being produced by natural mechanisms. Just one!"
3. Your opponents point out that anything that can be produced by natural mechanisms is automatically, by definition, devoid of CSI. No matter how powerful natural mechanisms are, the challenge cannot be met. If X can be produced by natural mechanisms, it automatically, by definition, does not possess CSI.
4. Therefore the challenge is empty.

It's the same bad logic in both scenarios. You have fallen into the circularity trap. keith s
R0bb, the UD answer to the P(T|H) problem seems to be "shut up." It's a versatile response, both easier and safer than having a conversation. Your point seems right, but isn't the problem with the hypo simpler than that? If you see coins being laid out, then isn't P(T|H) just 0? Learned Hand
Materialism in 52 seconds. bb
Robb,
Without a chance hypothesis, how do I determine that it’s highly improbable?
*sigh* Never mind R0bb. If all you want to do is play definition derby in response to a straightforward challenge, that is all the answer I need. You've got nothing. OK. Barry Arrington
Edward:
Let’s instead post working examples (the submitted examples could even fit our own personal definitions); the esteemed panel of posters at UD will determine if it’s a worthy example.
Contrary to Barry's assertion that I "responded not by meeting the challenge", I actually did point to a working example in response to his challenge. Here's a summary of that example:
1) Ewert calculates that the pattern has 1,068,017 bits of specified complexity under the chance hypothesis of equiprobability.
2) The pattern is known to have been created by natural processes.
3) In practice, equiprobability is the only chance hypothesis that IDists (other than Ewert) ever consider.
HeKS responded, but I doubt that many, if any, IDists will agree with his response. R0bb
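For what it's worth, under an equiprobability chance hypothesis the bit count reduces to a simple logarithm. A minimal sketch (the alphabet size and sequence length below are hypothetical illustrations, not Ewert's actual figures):

```python
import math

def bits_under_equiprobability(alphabet_size, length):
    """Bit count when every sequence of the given length over the alphabet is
    assumed equally probable: -log2((1/alphabet_size)**length)."""
    return length * math.log2(alphabet_size)

# Hypothetical example: a 2,000-symbol sequence over a 4-letter alphabet.
print(bits_under_equiprobability(4, 2000))   # 4000.0 bits
```

The figure depends entirely on the chance hypothesis assumed, which is the point of item 3) above.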
Edward, I kind of like your idea about posting examples and having the jurors deliberate on them. Maybe finally we will understand 'n-D e' on actual examples! :) Thank you for the clever suggestion. P.S. as you know, there is a thread with a few of those examples already in it. :) Also, the more recent neuroscience OP has a few examples to review. Dionisio
Markf, You don't understand. Let's avoid all of the definition lamers. Let's instead post working examples (the submitted examples could even fit our own personal definitions); the esteemed panel of posters at UD will determine if it's a worthy example. This could be the most fun UD thread yet! Edward
To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.
What about the hydrogen atom, which has a single proton with a single energy level. Does it qualify? Me_Think
Barry:
You see a highly improbable (500 bits) pattern conforming to a specification.
Without a chance hypothesis, how do I determine that it's highly improbable? R0bb
R0bb @ 11
That’s certainly true, but we’re not trying to explain the cause of the coin pattern. We’re trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?
1. Suppose you watched me arrange the coins. You see a highly improbable (500 bits) pattern conforming to a specification. Yes, it has CSI.

2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.

So you see, we’ve actually observed the cause of each pattern. The specification was achieved in scenario 1 by an intelligent agent with a few minutes’ effort. In scenario 2 the specification was never achieved from the moment of the big bang until now.

The essence of the design inference is this: Chance/law forces have never been actually observed to create 500 bits of specified information. Intelligent agents do so routinely. When we see 500 bits of specified information, the best explanation (indeed, the only explanation that has actually been observed to be a vera causa) is intelligent agency.

To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information. Barry Arrington
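Scenario 2 can be put in rough numbers. A quick sketch, generously assuming a complete fresh set of 500 fair flips every second since the big bang (roughly 13.8 billion years):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7
AGE_OF_UNIVERSE_YEARS = 13.8e9          # rough current estimate

# Be generous: treat every second as one complete fresh trial of 500 fair flips.
trials = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR   # about 4.4e17 trials
p_all_heads = 0.5 ** 500                            # about 3.1e-151 per trial

expected_successes = trials * p_all_heads
print(f"{trials:.2e} trials, {expected_successes:.2e} expected all-heads outcomes")
# about 4.4e17 trials, about 1.3e-133 expected successes
```

So while "all 500 heads" is logically possible, the expected number of occurrences over the entire history of the universe is indistinguishable from zero.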
#13 Edward Without the definitions you don't know what the challenge is or what the working examples are examples of. markf
Rather than another boring debate on definitions, wouldn't it be much more fun to produce working examples? Then we could debate whether or not the examples provided answer the challenge. Yea! Edward
I think it's important to point out that things like portraits of Elvis Presley appear in the foam of lattes from Starbucks. And that natural weathering of rocks produces arches and other patterns that human beings think look like something other than rocks. Children commonly play the game of looking for bunny rabbits in drifting clouds. Etc., etc. So, there is the fact that FLEETING (10,000 years is "fleeting" in geology) unlikely events do occur. The thing that suggests any intelligence behind the phenomenon is REPEATABILITY: did the exact same unlikely thing happen TWICE? Did Adam and Eve BOTH appear in the same generation? There is some chance that a tornado blowing through a junk yard can produce ONE 747. But when you see a DOZEN 747s at the same airport, the only reasonable conclusion is an Intelligent Designer. Or for us cloud gazers, if I notice that the SAME cloud is appearing day after day, I'll probably start looking for smokestacks. mahuna
Barry:
As I said above, in my coin example there is no need to form any sort of hypothesis to explain the cause of the coin pattern.
That's certainly true, but we're not trying to explain the cause of the coin pattern. We're trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis? R0bb
The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/fbs/getfile.php?FileName=/2009/v14/af/3426/3426.pdf http://www.us.net/life/index.htm bornagain77
corrected link: Programming of Life – Dr. Donald Johnson interviewed by Casey Luskin – audio podcast http://intelligentdesign.podomatic.com/entry/2010-01-27T12_37_53-08_00 bornagain77
Moose Dr
I must admit, I am frustrated with the term "specified". I would rather use "function specifying". Provide 500 bits of data which, when provided to a data to function converter (such as a computer) produces complex function.
That's a good idea, MDr, but unfortunately all that does is push the problem to the definition of "complex function". ID has always had the problem (some say strategy) of keeping its definitions so vague that discrete value calculations (i.e. 500 bits = designed) become totally subjective. To be clear, science has plenty of vague definitions too (like "species"), but it doesn't rely on precise calculations from those definitions to make its case. Adapa
It is part of the definition of CSI that the chance hypothesis being considered is so unlikely to produce the outcome that it is effectively impossible. So of course you will never find a chance hypothesis generating 500 bits of CSI.
This was my thought as well. What's the actual calculation you'd use to count bits of CSI that doesn't consider non-design hypotheses? Learned Hand
Using Shannon information as a metric does not help you as much as you think it does adapa, since the Shannon information metric puts a severe constraint on the evolvability of codes once they are put in place (by a Mind): Shannon Information - Channel Capacity - Perry Marshall https://vimeo.com/106430965 “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible" Donald E. Johnson – Bioinformatics: The Information in Life Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, -The Cell's Design - 2008 - page 177) "A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107." (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.) Second, third, fourth… genetic codes - One spectacular case of code crowding - Edward N. Trifonov - video https://vimeo.com/81930637 In the preceding video, Trifonov elucidates codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states, "Reading only one message, one gets three more, practically GRATIS!". And please note that this was just an introductory lecture in which Trifinov just covered the very basics and left many of the other codes out of the lecture. Codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 codes that are listed on a powerpoint, although the writing was too small for me to read. "In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10]. Donald E. Johnson – Programming of Life – pg.51 - 2010 further notes: Programming of Life - Information - Shannon, Functional & Prescriptive – video https://www.youtube.com/watch?v=h3s1BXfZ-3w Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in this following Podcast: Programming of Life - Dr. 
Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29 Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - video https://vimeo.com/1775160 Kirk Durston - Functional Information In Biopolymers - video http://www.youtube.com/watch?v=QMEjF9ZH0x8 Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47 bornagain77
On the challenge itself. It hits the very circularity problem that Winston pointed out. It is part of the definition of CSI that the chance hypothesis being considered is so unlikely to produce the outcome that it is effectively impossible. So of course you will never find a chance hypothesis generating 500 bits of CSI. markf
I must admit, I am frustrated with the term "specified". I would rather use "function specifying". Provide 500 bits of data which, when provided to a data to function converter (such as a computer) produces complex function. Moose Dr
#2 Joe
Why is that so difficult to understand?
Dembski has written several books and papers about it, including quite extensive mathematical definitions, and even then it is the subject of much controversy. There is a thread well over one hundred comments long, including disputes between ID proponents, about CSI. All of this suggests it is not so easy to understand what CSI is. markf
Umm, Shannon's metric was only for measuring information. And CSI has been defined to death. It is nothing more than normal everyday information, the type that the human world could not exist without. It permeates our societies. Why is that so difficult to understand? Joe
Can you please provide your definitions of "complex" and "specified" (or specification) so there is no ambiguity about what is being asked? Unless you indicate otherwise I'll assume you mean the Shannon metric for "information". Thanks. Adapa
