
Did Mark Perakh Not Get The Dover Memo?


We keep getting told that the Dover (Kitzmiller) decision was the end of Intelligent Design. Judge Jones ruled that ID is just creationism in a cheap tuxedo. Yet physicist Mark Perakh, a regular contributor to Panda's Thumb, is still struggling to dispute Dembski's design detection math. I don't get it. Is Mark in the business of arguing with cheap tuxedos, or have rumors of ID's death been greatly exaggerated?

And just for kicks, the paper itself begins with a hugely flawed example and continues to use that flawed example through to the end. The author begins by using, as an example of specified complexity, a poker program that is observed to deal a royal flush on the very first hand. It is then put forward that most people would reasonably presume the program was flawed. No problem with that presumption; a betting man would bet that the program is flawed. The problem is in equating this with Dembski's specified complexity. A royal flush in a given suit is dealt on average about once in every 2.6 million hands (a royal flush in any suit, about once in every 650,000). That seems like long odds, but in Dembski's reasoning it's not even close to long odds. Dembski says the odds against something must be one in 10 to the 150th power before a design inference can be made. If the author changes his example to getting dealt 25 royal flushes in a row, he'll have an example of specified complexity aligned with Bill Dembski's definition. One royal flush ain't nearly enough, except to make the paper specious to the casual observer who doesn't know about Dembski's Universal Probability Bound.
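
For readers who want to check the arithmetic, here is a minimal sketch in Python (assuming Dembski's published bound of 1 in 10^150; note that the exact run length depends on whether "royal flush" means any suit or one named suit, which brackets the figure of 25 used in this post):

```python
from math import ceil, comb, log10

hands = comb(52, 5)          # C(52,5) = 2,598,960 possible five-card hands
p_any = 4 / hands            # any royal flush: about 1 in 649,740
p_named = 1 / hands          # royal flush in one named suit: about 1 in 2,598,960

BOUND = 150                  # Dembski's universal probability bound: 1 in 10^150

for label, p in [("any suit", p_any), ("named suit", p_named)]:
    per_hand = -log10(p)              # orders of magnitude each deal contributes
    runs = ceil(BOUND / per_hand)     # consecutive flushes needed to pass the bound
    print(f"{label}: 1 in {round(1 / p):,} per hand; {runs} in a row pass 10^150")
```

Running this gives 26 consecutive any-suit flushes or 24 consecutive named-suit flushes to cross the bound, so 25 is a fair round number either way.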

Comments
Salvador, thanks for your comments. I have put my responses to a number of comments about the essay here: http://mark_frank.blogspot.com/2006/07/specification-vs-likelihood-revisited.html

Mark

Mark Frank
July 11, 2006, 01:45 AM PDT
"However improbable that outcome, any other set of three or fifty hands is equally improbable"

True, but write down a specification of an exact hand. It should have the probability of a royal flush in spades:

probability = 1 / [ 52! / (5! (52-5)!) ] = 1/2,598,960
number of bits = log2[ 52! / (5! (52-5)!) ] ≈ 21.3

Have someone completely shuffle the cards, and then deal them to you. What do you think the chances are that your specification will be hit by a random shuffle? ANY specification works as long as it is detachable and improbable. Specification is important. How could one possibly not use specification in a copyright infringement suit (which is a valid instance of the EF)?

Finally, your critique obfuscated Dembski's work rather than clarifying it. Your treatment of a no-aces hand was totally off base. What Dembski said:
NFL page 78: Finally, in placing targets on a wall, it makes no sense to place targets within targets if hitting any target yields the same prize. If one target is included within another target and if all that is at issue is whether some target was hit, then all that matters is the biggest target. The small ones that sit inside bigger ones are, in that case, merely along for the ride.
For example, the spade royal flush is part of the class of royal flushes. If our target of interest is any royal flush, then hitting a spade royal flush is a sufficient but not necessary condition for getting a royal flush. Thus one does not have to explicitly calculate the odds of a spade royal flush, but merely those of the simpler class of royal flushes. If a spade royal flush appears, one still has at least the surprisal value of a royal flush, and if one's detection threshold is a royal flush, then a spade royal flush (if that's more special to you) is merely icing on the cake. The odds of the royal flush target being hit are:

probability = 4 / [ 52! / (5! (52-5)!) ] = 1/649,740
number of bits = log2[ [ 52! / (5! (52-5)!) ] / 4 ] ≈ 19.3

When you started talking about no-ace hands, your description of Dembski's ideas became Flawed Utterly Beyond All Recognition (FUBAR). What reason is there to consider no-ace hands? Their probability is huge compared to that of royal flushes. You were deliberately choosing specifications which would not be very helpful in eliminating chance explanations, and thus your treatment of the matter was flawed.

Salvador
scordova
July 10, 2006, 01:16 AM PDT
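
For what it's worth, Salvador's bit counts check out; here's a quick verification in Python (treating "bits" simply as the negative log2 of the target's probability):

```python
from math import comb, log2

hands = comb(52, 5)        # 2,598,960 equally likely five-card hands

# One exact hand, e.g. the royal flush in spades: -log2(1 / C(52,5))
print(log2(hands))         # ~21.31 bits

# The four-hand "any royal flush" target: -log2(4 / C(52,5))
print(log2(hands / 4))     # ~19.31 bits
```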
Mark, see my post where I state that information-poor stochastic processes are by definition incompatible with highly specified (information-rich) events: https://uncommondescent.com/index.php/archives/1285#comment-47298 I encourage you to ponder why this should be evidently true.

Salvador

scordova
July 9, 2006, 11:30 PM PDT

"Dembski says that the odds against something must be one in 10 to the 150th power before a design inference can be made. If the author changes his example to getting dealt 25 royal flushes in a row he’ll have an example of specified complexity aligned with Bill Dembski’s definition."

Is the difference between 1 royal flush and 25 royal flushes even significant? Either way, I don't see how you can logically preclude the possibility with certainty.

I think the problem that most critics have with ID concepts (IC and CSI), even if they don't articulate it exactly this way, is that ultimately, these concepts are examples of complexity that are indistinguishable from any other kind of complexity. So one is specified and the other is irreducible - so what? The chances of these properties appearing naturally are still expressed as 1 chance in X trials, like anything else.

For this reason, I think the question of likelihood is secondary to another question: Does nature possess the MEANS to produce these properties? We might be able to calculate that something is probable, and yet this something will not occur at all if the means to produce it are absent.

From here, we can say that intelligence is observed to possess the means to produce IC and CSI, and nature is not. I think this is a much stronger way of framing the argument. Otherwise, one could say, "Well nature produces all kinds of complexity, so why not this kind of complexity? What makes this kind of complexity so special?"

So I think the necessity of adequate means of production needs to be emphasized more.

The difference between 1 royal flush and 25 in a row is huge. The former is a rarity but it happens, about like getting hit by lightning or dying in a commercial jetliner crash. The latter is so rare that even with one hand dealt for every subatomic particle in the observable universe, the odds of seeing it even once would still be negligible. The problem with saying what nature has the means to do or not is that we can never be sure we know all the means at her disposal. Science offers best explanations, not proofs. -ds

CloseEncounters
July 7, 2006, 12:29 PM PDT
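
To put a rough number on the scale DaveScot invokes, here is a sketch (assuming the commonly cited figure of about 10^80 subatomic particles in the observable universe and 25 consecutive named-suit royal flushes; both choices are assumptions for illustration):

```python
from math import comb, log10

# 25 consecutive royal flushes in one named suit, in log10 terms
log_p_run = 25 * log10(1 / comb(52, 5))   # about -160.4

# One hand dealt per particle (~10^80 particles): the expected number of
# such runs, which for tiny values approximates the chance of seeing one.
print(log_p_run + 80)                     # about -80.4, i.e. odds near 1 in 10^80
```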

Dave, I am utterly confused as to whether you want me to post or not. If you permit it, then I will repeat here the point I made on PT and on Alan Fox's blog.

The article is about the logical justification of the concept of specification, not the design inference in general. Dembski himself uses the example of the single Royal Flush in his article to illustrate specification (look at page 19), which is one of the reasons I chose that example myself. The logic of the argument still applies with three or fifty Royal Flushes in a row. However improbable that outcome, any other set of three or fifty hands is equally improbable. So the issue is: why does a single Royal Flush, or a sequence of them, make us want to dismiss the random-deal hypothesis when another equally improbable outcome does not? Dembski argues that it is because the Royal Flushes are specified, but specification turns out to be a very complicated thing to define (41 pages!) with no underlying justification. I am simply pointing out that there is a simpler, well-recognised alternative that does have a justification: the comparison of likelihoods. The degree to which the outcome is improbable is totally irrelevant to this argument.

The UPB comes in once one has accepted the concept of specification. At that point you might argue (if you accepted specification) that some specified outcomes are so improbable they are effectively impossible.

If you really don't understand why you would conclude, beyond any reasonable doubt, that a poker program dealing 25 royal flushes in a row isn't dealing random hands, then I'm afraid there is little hope of anyone explaining it to you. Do you really not understand? Specifications aren't difficult. They're independently given patterns. A royal flush is an independently given pattern; Hoyle gave it before your poker program ever dealt its first hand. A random assortment of 5 cards that has no ranked pattern for the game of poker, while just as rare as any other 5 cards, is not specified. In biology it's helpful to consider specification as a function critical to survival. I actually prefer to use a lottery example. Maybe that will work better for you. The only real glimmer of understanding I saw in your paper was where you pointed out the difficulty of assessing all possible non-intelligent avenues by which any particular complex specified pattern might arise in a living thing. I have written on numerous occasions that this is the real problem for ID. Dembski calls these "probabilistic resources". There will always remain a possibility that an undiscovered probabilistic resource turns the highly improbable into the rather likely. However, all of science is like that. All theories are tentative pending discovery of new and contradictory evidence. Some theories are just more tentative than others. So ignorance of all possible contrary evidence is an insufficient excuse not to accept a given explanation as the best explanation available. -ds

Mark Frank
July 4, 2006, 09:13 AM PDT
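
Mark Frank's likelihood alternative is easy to sketch. The 1.0 below stands in for any rigging hypothesis under which royal flushes are likely; it is a made-up number for illustration, not anything from his paper:

```python
from math import comb

p_fair = 4 / comb(52, 5)   # P(royal flush | honest random deal), ~1 in 649,740
p_rigged = 1.0             # P(royal flush | rigged dealer) -- assumed for illustration

# The likelihood ratio measures how much more strongly the observed flush
# supports "rigged" over "fair deal"; no notion of specification is needed.
print(p_rigged / p_fair)   # ~649,740 to 1 in favour of "rigged"
```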
This article comes straight from here: https://uncommondescent.com/index.php/archives/1166 Still making the same elementary mistake...

Patrick
July 4, 2006, 08:33 AM PDT
"The Intelligent Design (ID) movement proposes that it is possible to detect whether something has been designed by inspecting it, assessing whether it might have been produced by either necessity or chance alone, and, if the answer is negative, concluding it must have been designed."

Will the critics ever deal honestly with ID arguments? Can we calculate the probabilities and see how they compare to the UPB, exclude chance, necessity, and their interaction, and infer (not conclude) that their misrepresentations are intentional/designed to be misleading?

Mung
July 4, 2006, 08:20 AM PDT

I read the PT comments, and Mark Frank said that even if he made it 2 or 3 consecutive Royal Flushes it wouldn't change anything.

No Mark, it probably wouldn't, because that STILL doesn't approach the universal probability bound. I said you'd need to make your example TWENTY-FIVE royal flushes in a row. Seeing as how you pointedly refused to even mention that number, it becomes quite apparent that you understand what it means and refuse to use it because it obliterates your argument.

I asked you to move along to another blog because you're poorly informed and refuse to be corrected. This refusal to acknowledge the number of royal flushes required to make a design inference is just one more example of it.

DaveScot
July 4, 2006, 07:54 AM PDT
Oh my goodness! Mark Frank as a contributor to Talk Reason! I don't have time to read his essay right now, but here's a question to think over: if one dismisses Dembski's concept of specified complexity (or at least one similar to it), then how can one defend his or her knowledge of intelligent agency external to him or herself?

crandaddy
July 4, 2006, 07:48 AM PDT
I thought about it and concluded that Mark Perakh, by posting Mark Frank's paper, is in effect continuing to argue Dembski's math by proxy. And what a crummy proxy it is; it's maybe even worse than what Perakh himself might have written. I could have responded to the Marks on Panda's Thumb, but I'm banned there. I mean really, physically banned there. Mark Frank's registration here is alive and he's not blacklisted. He's on the moderation list, which means his comments require an editor's approval before they appear. He didn't even try responding here.

DaveScot
July 4, 2006, 07:44 AM PDT
[[ Hi Dave - It appears it is actually (UD-banned) Mark Frank who wrote the article and NOT Mark Perakh. Maybe you could change the above to reflect the true author. He specifically says so on the PT blog, comment-110009 ]] Aside from this minor error, it sounds like Mark's paper is wearing the cheap tuxedo. (Would the real Mark Shady please stand up??!!) It seems Mark didn't read the previous UD post on "Becoming a Jedi Master in the online ID Wars", and his rebuttal is reverting to the old "Dave's comment is so self-evidently silly it is not worth refuting" trick. Mark, be man enough to own up to using stupid examples in your paper and address the SPECIFIC problem when someone critiques your work. Pray tell, what exactly is SO self-evident about a factor of 4x10E24 being irrelevant??? This just goes to show the same old tired tricks are being employed, firstly to WRITE the paper and then to BRUSH OFF the criticism. Nice one Mark, you're firmly on the road to filling Richard D's dishonest shoes. Did I detect a hint of bitterness about being banned, Mark??? Ouch...

lucID
July 4, 2006, 05:30 AM PDT
