Uncommon Descent: Serving the Intelligent Design Community

John Derbyshire: “I will not do my homework!”

[[Derbyshire continues to embarrass himself regarding ID — see his most recent remarks in The Spectator — so I thought I would remind readers of UD of a past post regarding his criticisms of ID. –WmAD]]

John Derbyshire has written some respectable books on the history of mathematics (e.g., his biography of Riemann). He has also been a snooty critic of ID. Given his snootiness, one might think that he could identify and speak intelligently on substantive problems with ID. But in fact, his knowledge of ID is shallow, as is his knowledge of the history of science and Darwin’s writings. This was brought home to me at a recent American Enterprise Institute symposium. On May 2, 2007, Derbyshire and Larry Arnhart faced off with ID proponents John West and George Gilder. The symposium was titled “Darwinism and Conservatism: Friends or Foes.” The audio and video of the conference can be found here: www.aei.org/…/event.

Early in Derbyshire’s presentation he made sure to identify ID with creationism (that’s par for the course). But I was taken aback that he would justify this identification not with an argument but simply by citing Judge Jones’s decision in Dover, saying “That’s good enough for me.” To appreciate the fatuity of this remark, imagine standing before feminists who regard abortion for any reason as a fundamental right of women and arguing against partial birth abortions merely by citing some court decision that ruled against them, saying “That’s good enough for me.” Perhaps it is good enough for YOU, but it certainly won’t be good enough for your interlocutors. In particular, the issue remains what it is about the decision, whether regarding abortion or ID, that makes it worthy of acceptance. Derbyshire had no insight to offer here.

What really drove home for me what an intellectual lightweight he is in matters of ID — even though he’s written on the topic a fair amount in the press — was his attempted refutation of my work specifically. He dismissed it as committing the fallacy of an unspecified denominator. The example he gave to illustrate this fallacy was of a golfer hitting a hole in one. Yes, it seems highly unlikely, but only because one hasn’t specified the denominator of the relevant probability. When one factors in all the other golfers playing golf, a hole in one somewhere becomes quite probable. So likewise, when one considers all the time and opportunities for life to evolve, a materialistic form of evolution is quite likely to have brought about all the complexity and diversity of life that we see (I’m not making this up — watch the video).

But a major emphasis of my work right from the start has been that to draw a design inference one must factor in all those opportunities that might render probable what would otherwise seem highly improbable. I specifically define these opportunities as probabilistic resources — indeed, I develop a whole formalism for probabilistic resources. Here is a passage from the preface of my book THE DESIGN INFERENCE (even the most casual reader of a book usually peruses the preface — apparently Derbyshire hasn’t even done this):

Although improbability is not a sufficient condition for eliminating chance, it is a necessary condition. Four heads in a row with a fair coin is sufficiently probable as not to raise an eyebrow; four hundred heads in a row is a different story. But where is the cutoff? How small a probability is small enough to eliminate chance? The answer depends on the relevant number of opportunities for patterns and events to coincide—or what I call the relevant probabilistic resources. A toy universe with only 10 elementary particles has far fewer probabilistic resources than our own universe with 10^80. What is highly improbable and not properly attributed to chance within the toy universe may be quite probable and reasonably attributed to chance within our own universe.

Here is how I put the matter in my 2004 book THE DESIGN REVOLUTION (pp. 82-83; substitute Derbyshire’s golf example for my poker example, and this passage precisely meets his objection):

Probabilistic resources refer to the number of opportunities for an event to occur or be specified. A seemingly improbable event can become quite probable once enough probabilistic resources are factored in. On the other hand, such an event may remain improbable even after all the available probabilistic resources have been factored in. Think of trying to deal yourself a royal flush. Depending on how many hands you can deal, that outcome, which by itself is quite improbable, may remain improbable or become quite probable. If you can only deal yourself a few dozen hands, then in all likelihood you won’t see a royal flush. But if you can deal yourself millions of hands, then you’ll be quite likely to see it.

Thus, whether one is entitled to eliminate or embrace chance depends on how many opportunities chance has to succeed. It’s a point I’ve made repeatedly. Yet Derbyshire not only ignores this fact, attributing to me his fallacy of the unspecified denominator, but also unthinkingly assumes that the probabilistic resources must, of course, be there for evolution to succeed. But that needs to be established as the conclusion of a scientific argument. It is not something one may simply presuppose.
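The arithmetic is easy to make concrete. Here is a minimal Python sketch of the royal-flush example (an illustrative sketch, not anything from the book; the single-deal probability 4/C(52,5) is standard poker arithmetic, and the number of hands dealt is the probabilistic resource being varied):

```python
from math import comb

# Probability of a royal flush on one five-card deal:
# 4 royal flushes out of C(52, 5) possible hands, about 1.54e-6.
p = 4 / comb(52, 5)

def at_least_one(p: float, n: int) -> float:
    """Chance of at least one success in n independent deals."""
    return 1 - (1 - p) ** n

for n in (50, 10_000, 5_000_000):
    print(f"{n:>9,} hands: P(at least one royal flush) = {at_least_one(p, n):.4f}")
# A few dozen hands: ~0.0001 (the flush stays improbable).
# Millions of hands: ~0.9995 (virtually certain).
```

Whether the royal flush remains improbable depends entirely on n, which is exactly why the probabilistic resources must be counted rather than presupposed.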

There’s a larger issue at stake here. I’ve now seen on several occasions that critics of design give no evidence of having read anything on the topic — and they’re proud of it! I recall Everett Mendelsohn from Harvard speaking at a Baylor conference I organized in 2000, decrying intelligent design but spending the whole talk going after William Paley. I recall Lee Silver so embarrassing himself through sheer ignorance of ID in a debate with me at Princeton that Wesley Elsberry chided him to “please leave debating ID advocates to the professionals” (go here for the Silver-Elsberry exchange; for the actual debate, go here). More recently, Randy Olson, of FLOCK OF DODOS fame, claimed that in making his documentary on ID he had read nothing on the topic (as a colleague at Notre Dame recently reported, privately, on a talk Randy gave there: “He then explained how he deliberately didn’t do research for his documentary, and showed some movie clips on the value of spontaneity in film making”). And then there’s Derbyshire.

These critics of ID have become so shameless that they think they can simply intuit the wrongness of ID and then criticize it based simply on those intuitions. The history of science, however, reveals that intuitions can be wrong and must themselves be held up to scrutiny. In any case, the ignorance of many of our critics is a phenomenon to be recognized and exploited. Derbyshire certainly didn’t help himself at the American Enterprise Institute by his lack of homework.

Comments
Thanks, let's try: :-)

kairosfocus, May 18, 2007, 06:20 AM PDT
Following up . . . Had to do family deliveries.

3] Jerry: "While I generally understand the concept of CSI, I have yet to see a convincing explanation of it that is not picked apart."

Of course the first problem with that is just what Trib 7 pointed out: selective hyperskepticism can easily demand an undue degree of "proof" for an inconvenient claim, whilst on other claims that are acceptable to the agenda, a much lower standard of evidence will do. But also, let us note:

a] CSI is NOT -- repeat, "NOT" -- an ID-originated concept. CSI predates ID and in fact is part of the set of emerging ideas on the challenge of the origin of life that helped trigger the emergence of the design school of thought in the early 1980s. (And contra Forrest et al, ID dates to the late 70s to early 80s.)

b] If you look up Thaxton et al's online chapters from The Mystery of Life's Origin, you will see the following in ch. 8 [I leave off the link because of the spam filter . . .]:
Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system---enzymes and nucleic acids---are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of "letters" in the book is highly ordered, but the book contains little information since the information presented---the single word or sentence---is highly redundant . . . . Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being aperiodic and in a specified sequence.5 Orgel notes: Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 . . . . Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
c] Thus, we see major non-ID names in OOL coming up with the term specified complexity, BEFORE the ID movement originated, as a natural outcome of their work at the turn of the 80s.

d] What does the idea in essence mean? Here, TBO in TMLO ch. 8 contrast three sequences:

(i) An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END. Example: nylon, or a crystal.

(ii) A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG. Example: random polymers (polypeptides).

(iii) A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.

e] Dembski therefore "only" provided a mathematical model of an observed phenomenon. He also put up a criterion: that the chain in question should store at least 500 or so bits of information for a unique specified state, or equivalently have a probability of less than 1 in 10^150 within the relevant configuration space. Life forms exceed any reasonable version of this filter by far.

f] IMHCO, the model and the associated inferential filter are appropriate and effective [insofar as I have followed his mathematics and in light of my own knowledge of thermodynamics].

g] But because of the implications we often see the classic philosophical move of reversing the implication in order to deny the antecedent: P => Q, but I reject Q, so I deny P.

h] The trap in that is the issue of selective hyperskepticism; indeed, the objectors routinely accept similar cases on in fact far less evidence. [Cf the general acceptance and common USE of Fisherian reasoning in inferential statistics. Dembski has competently addressed debates over Bayesian inference, for those who wish to make such points.]

In short, CSI is coherent, properly empirically anchored, and originated BEFORE the ID movement. Its denial and contentiousness as a concept today reflect selective hyperskepticism and debate tactics, not the actual state of the case on the merits. Typical for ID-related issues, in my observation. GEM of TKI

kairosfocus, May 18, 2007, 06:18 AM PDT
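One crude way to see the order/complexity distinction above numerically is compression: a periodic string has a short description, a random string does not, and meaningful English sits in between. A small Python sketch (purely illustrative; note that compression tracks descriptive complexity only, not specification, which requires an independent functional pattern):

```python
import random, string, zlib

def zsize(s: str) -> int:
    """Compressed length in bytes: a crude proxy for descriptive complexity."""
    return len(zlib.compress(s.encode()))

english = ("LIVING ORGANISMS ARE DISTINGUISHED BY THEIR SPECIFIED COMPLEXITY. "
           "CRYSTALS FAIL TO QUALIFY AS LIVING BECAUSE THEY LACK COMPLEXITY; "
           "MIXTURES OF RANDOM POLYMERS FAIL TO QUALIFY BECAUSE THEY LACK SPECIFICITY.")
n = len(english)
periodic = ("THE END " * n)[:n]      # ordered and repetitive, like nylon or a crystal
random.seed(1)
scrambled = "".join(random.choices(string.ascii_uppercase + " ", k=n))  # complex, unspecified

for name, s in [("periodic", periodic), ("random", scrambled), ("english", english)]:
    print(f"{name:>8}: {zsize(s):3d} bytes compressed (of {n} raw)")
# Typically: the periodic string compresses to a couple dozen bytes, the random
# string barely compresses at all, and the English text lands in between.
```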
"BTW, Trib 7, how do you get those neat smileys to post at UD?" Colon + dash + right parenthesis :-)

tribune7, May 18, 2007, 05:29 AM PDT
Hi Folks . . . OOPS: from "it won't post" to multiple posts! [Pardon . . . BTW, Trib 7, how do you get those neat smileys to post at UD?] Next, thanks for the many kind words. Atom, FYI: GEM is the acrostic formed by my initials, and TKI is my organisation. I hail from, live and work in the Caribbean -- currently volcano island, now with a suspiciously quiet ol' smokie . . . Now on a few follow-up points:

1] Jerry & Trib 7: "The real issue or lottery is that there could be an enormous number of possible starting points for life all of them of incredibly low probability and the one that happened or 'won the lottery' was just one of these starting points."

The real issue is the isolation of the islands of functionality. As John Leslie pointed out long ago now, whether or not there are many regions of the wall with flies, and even regions positively carpeted and swarming with them, when we see quite isolated regions beyond the reasonable reach of a random walk or random targeting, then the wondrousness of hitting the target emerges. [In short, LOCALLY isolated islands of function are all we need to defeat the chance hyp.] And, whatever other life technologies or architectures are possible -- maybe even non-physical [dare I say "spiritual"?!] -- the fact is that in general small random disturbances of the observed bio-functional molecules typically destroy function. Further, the required information to get to the cluster of functional molecules for life is way, way beyond the sort of 500-or-so-bit limit on chance increments we have discussed. In short, even having got to life, body-plan innovation by RM + NS is seriously problematic. Also, the OOL problem is so intractable that Shapiro recently panned the whole RNA-world hyp in Sci Am, over the issues of getting to information (which he apparently does not see equally apply to his own metabolism-first model). Of course, Leslie was in the main talking about the quasi-infinite array of sub-cosmi issue, and was in effect highlighting that since local changes in many, many key parameters make for a radically life-hostile cosmos, the fine-tuning issue does not go away so easily.

2] Eric: "Picking the sequence does not change the probability of that sequence. It does change the probability of a 'success' in your trials, but only because you changed what defined a success."

Go to the head of the class! The vital point, relevant to both the origin and the body-plan-level diversification of life, is that "success" is independently, functionally specified. Next, it is empirically highly complex and integrated, in such a way as to in many cases be credibly irreducibly complex, which strains the capacity of variations and co-opting of parts for other purposes. Third, it is so information-rich [in the Shannon sense of data-storing capacity] that it is utterly unlikely for something that meets the three observed constraints to happen by a chance-dominated process in the gamut of the observed cosmos. (The same extends in effect to the formation of a life-habitable cosmos. For details cf my always-linked, freshly updated after the exchanges with Pixie and Dave Scott.) But, routinely, agents produce such entities -- even this post is a case in point -- i.e. we KNOW a credible source for FSCI/CSI. Therefore on inference to best explanation, the cosmos we observe, the habitable planet we inhabit, and the life [including life that requires mind and morals . . . i.e. us] we see on that planet are all most credibly due to intelligent agency.

So strong is this case that only selective hyperskepticism, linked to the institutional dominance of evolutionary materialism and its associated methodological naturalism, blocks this from being the new paradigm. But, that is coming. GEM of TKI

kairosfocus, May 18, 2007, 05:00 AM PDT
"I have yet to see a convincing explanation of it that is not picked apart." Just because you can't convince someone doesn't mean you aren't right and haven't made a reasonable case. People are self-delusional. The materialist is self-delusional. You can never convince them. Just because your opponent says you haven't won the argument doesn't mean you haven't. A good sign is when they start resorting to ridicule. 30 percent of the Democratic Party believes that President Bush knew beforehand of the attacks on the World Trade Center. You can never convince most of them otherwise. Debating a materialist is like debating one of those people.

tribune7, May 18, 2007, 04:28 AM PDT
tribune7, You do not have to convince me that the process could have happened by accident, chance, law or whatever non-intelligent process someone names. But I find the ID answers to the materialists' objections/claims often vague and simplistic. While I generally understand the concept of CSI, I have yet to see a convincing explanation of it that is not picked apart. The materialist will argue that it all happened in steps over deep time, because that is the way they handle the low probabilities. They will sometimes even assert it is a certainty given enough time. They haven't any proof that any steps ever existed, but they dare you to prove it couldn't have happened this way. ID should focus much of its attention on refuting the step approach. That is the best way, I believe, to refute the lottery cop-out, instead of just quoting absurdly low probabilities. Also, I once listened to a shrill student challenging ID by asking "where is your proof," saying someone like Darwin was a real scientist who meticulously gathered empirical data to support his hypotheses. The answer given was to look at the complexity of the cell, when the answer should have emphasized that Darwin had no empirical evidence for his hypotheses and that exactly zero new species have been confirmed to have arrived on the planet by natural selection. The main argument for Darwin these days is common descent, when even if you accept common descent there is no evidence that it ever happened in a gradual fashion, nor is any other mechanism indicated.

jerry, May 17, 2007, 09:21 PM PDT
Kairosfocus, Thank you for your posts, I always appreciate your insights. Just so you know, I am not trying to undermine ID with my questions regarding low probability and specification. They are honest questions, places where I feel IDers can give clearer answers. The answers given so far go toward answering them. So I appreciate them.

Atom, May 17, 2007, 03:10 PM PDT
Just as Moshe didn't believe Aaron when he said they just tossed the gold in the fire and the golden calf "just came out" . . . the first recorded inference to design!

LOL!

Atom, May 17, 2007, 03:06 PM PDT
An addition/correction to a previous post of mine when I responded to this (in part):
But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly.
I forgot to point out that this is not technically true. Picking the sequence does not change the probability of that sequence. It does change the probability of a "success" in your trials, but only because you changed what defined a success. In the first case, your specified set was "any three-letter word" as opposed to "that" three-letter word. Again, these probabilities are only really meaningful when the set of desired outcomes is specified BEFORE the trials. To specify them AFTER the trial has run vastly confuses the discussion. Besides, it is not an analogy to how evolution works, since evolution is not completely random. Using pure probability on the end result only is not a realistic "test" of the theory.

Eric, May 17, 2007, 09:39 AM PDT
Hey kairosfocus, What does "GEM of TKI" mean? It sounds like a Hip-Hop crew shout out: "Yo, this is the G.E.M., representing TKI to the fullest. Tha Killa Instinct crew remains from 2007 until the Heat Death!" lol. Seriously though, is that just your initials and the shire you come from?

Atom, May 17, 2007, 07:27 AM PDT
kairosfocus -- GREAT POST. Or half-post, anyway :-)

tribune7, May 17, 2007, 05:47 AM PDT
"The real issue or lottery is that there could be an enormous number of possible starting points for life all of them of incredibly low probability and the one that happened or 'won the lottery' was just one of these starting points." Jerry, I appreciate your posts and demands for ever clearer explanations. They only help improve our articulation. It really comes down to what is the most reasonable explanation. It's theoretically possible for wind and rain to etch the alphabet on a rock, but if you found the alphabet etched on a rock would you really, really believe it was done by wind and rain? Materialists arguing that life came about by known natural causes are akin to someone arguing that wind and rain etched not just an alphabet but the works of Shakespeare on a rock. It gets to the point where you have to shake your head, understand they are fools, then go on to more practical things.

tribune7, May 17, 2007, 05:45 AM PDT
Thread locks up on continuation . . . maybe a new spam filter will help, folks? GEM of TKI

kairosfocus, May 17, 2007, 01:04 AM PDT
Okay . . . "Once more unto the breach, dear friends . . ." I see the issue has now "evolved" to the question of both complexity and specification. I will make a few remarks on that, but first, on the issue . . .

1] The problem of selective hyperskepticism

In Western thinking, we often meet those who imagine that by default the objection must prevail. For instance, we typically hear a quote from Sagan: "extraordinary claims require extraordinary evidence." They are wrong, and wrong based on the issue of consistency. For, properly, claims only require ADEQUATE, not extraordinary, evidence -- on pain of inconsistency between standards on what we accept and those on what we reject. And, in the empirical world of science, our evidence and arguments are provisional, so we look for which explanation is best or most credible relative to accounting for the material facts, being coherent, and being explanatorily powerful but not simplistic or ad hoc. Otherwise, we are simply begging the question, and may be guilty of the most blatant inconsistencies.

2] Fisherian hyp testing and inference to design

This immediately exposes the problem with rejecting the inference to design on observing CSI. For, routinely, we characterise estimated distributions for events, and when they are sufficiently far out into the tails [usually at 1 in 20 or 1 in 100 levels] we accept that chance -- the usual null hypothesis -- is an inadequate explanation and revert to agency or natural regularity, depending on the case in view. [Of course we run risks of errors: accepting chance when we should reject it, or rejecting it when we should accept it; but since when is the risk of error a new, or even an avoidable, thing?] As someone in that Feb thread said, as I recall, Dembski's 1 in 10^150 is just about as conservative a rejection region as he has ever seen. In short, only the most extraordinary cases will be rejected, relative to the chance null hyp! So, let us see the selective hyperskepticism at work here for what it is.

3] Now, what of [F]CSI and the explanatory filter?

Here we look at two criteria, having first insisted on contingency so that natural regularities do not determine the outcome: [1] complexity, in the sense of being beyond 500 or so bits of information-storing capacity; [2] specification, in the sense of fitting with an independently known pattern -- in the cases of interest, a FUNCTIONAL pattern. (Sure, any 500-coin sequence is equiprobable, but when someone tells you he just tossed and lo, THTH .. TH appeared, no one will believe him! For excellent reason. Just as Moshe didn't believe Aaron when he said they just tossed the gold in the fire and the golden calf "just came out" . . . the first recorded inference to design!) In short, it is like having miles of a wall, with just one fly on it in a 100-yard stretch. Then, bang, a bullet hits the fly. A lucky shot, or an aimed shot? [And why do you plunk for the "lucky shot" explanation?]

4] Relevant cases

In the case of bio-information, DNA ranges from 500k to in excess of 3 billion storage units, each capable of storing 2 bits of information. Even in bacteria, cutting down below about 360k storage units destroys bio-function. But 4^360,000 ~ 4.0 × 10^216,741 possible configurations (some 720,000 bits of storage capacity). That's many orders of magnitude beyond 500 bits! Then too, the functionality of the DNA chain's stored information is quite easily observed: do you have a viable life form that can feed, move, reproduce etc. in appropriate environments?

So, how did we hit this lonely fly on the wall, given the obvious deep isolation of the functional states in the overall configuration space? [And cf here the way that statistics is deployed in thermodynamics to ground the 2nd law of thermodynamics. My nanobots and microjets example in the always-linked through my name will help see this. In short I am pointing to selective hyperskepticism at work, again. Probability hurdles are as real as potential walls!]

--> Of course, introducing the idea that we can only infer to material entities in science simply begs the question.

--> Similarly, trying to speculate on a vastly wider unobserved cosmos than what we see is a resort to speculative metaphysics, and is ad hoc, to try to rescue a hyp that is preferred but otherwise in deep trouble on accounting for observed facts.

GEM of TKI

PS: Onlookers, look at my always-linked through my handle for updated details, esp. in the appendix on thermodynamics, after a follow-up debate with Pixie on my own blog.

kairosfocus, May 17, 2007, 12:59 AM PDT
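The configuration-space arithmetic in point 4 is easy to verify. A quick Python check (the 360,000-unit figure is taken from the comment as given; the 500-bit/10^150 equivalence is standard log arithmetic):

```python
import math

units = 360_000                          # storage units (nucleotides), per the comment
log10_configs = units * math.log10(4)    # log10 of 4^360,000 possible sequences
print(f"4^{units:,} = 10^{log10_configs:,.1f} possible sequences "
      f"({2 * units:,} bits of storage capacity)")
# Prints 10^216,741.6, i.e. about 4.0 x 10^216,741, matching the comment.

# Dembski's universal probability bound of 1 in 10^150 corresponds to ~500 bits:
print(f"10^150 = 2^{150 / math.log10(2):.0f}")   # prints 2^498, the "500 or so bits"
```

A space of 10^216,741 configurations dwarfs any 10^150-scale supply of probabilistic resources, which is the comparison the comment is pressing.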
"For example, if you randomly grab Scrabble ... But if you choose (specify) a sequence like 'CAT' BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that's just a three-letter sequence." Or better yet, what are the odds of throwing all the letters on a table and spelling "cat" with the C standing perfectly on top of the A and the A on top of the T, all on their ends? Here you have the law of gravity working against you.

Smidlee, May 16, 2007, 07:07 PM PDT
I guess an easier way to ask would be: What keeps CSI strings from forming by chance in random processes? Low probability? If low probability, why doesn't low probability stop even more unlikely unspecified events from occurring all the time? If specification, what about the act of specifying changes what is allowed by nature?

Atom, May 16, 2007, 07:04 PM PDT
I understand that ericB, thanks for the discussion. Imagine we set up a coin flipper to spit out 501 random bits. We let it run, and it spits out a 501-bit string that doesn't match any pattern we know of. Our coin flipper is fair, and we know the string was generated randomly. Obviously the universe and laws of nature allowed that particular string of low probability to occur. Now we get ready to run the flipper again, over and over until the end of the cosmos. But before we do, we write down a string of 501 bits on paper. Question: Will the coin flipper be able to produce this specific bit string, given from now until the end of the cosmos, by random chance? If yes, then we just produced a CSI string by chance, since I independently pre-specified my string. (It is meaningful to me, but to a stranger may appear as random as any other string.) If not, then something prevents this particular 501-bit-length string from occurring. Remember, our flipper can produce many 501-bit-length strings, of equally low probability. The only difference between this string and my first one was that I took the time to specify this one. So what in my specification act changes things?

Atom, May 16, 2007, 06:59 PM PDT
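The scale of Atom's question is worth putting in numbers. A minimal sketch; the 10^45 figure for available runs is a deliberately generous assumption for illustration, not anything from the thread:

```python
from math import log10

p = 2.0 ** -501            # chance that one 501-flip run matches the pre-specified string
trials = 10 ** 45          # ASSUMED number of runs "until the end of the cosmos"
expected = p * trials      # expected number of matches across all runs
print(f"p = 10^{log10(p):.1f}")                        # ~10^-150.8
print(f"expected matches = 10^{log10(expected):.1f}")  # ~10^-105.8, effectively zero
```

Any particular 501-bit string is equally unlikely; the live question in the thread is why the pre-specified one, and only it, carries evidential weight when it shows up.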
This is where I don't understand how all this probability discussion applies to ID/evolution:
For example, if you randomly grab Scrabble letters one after the other, it’s actually pretty likely that you’ll frequently generate a meaningful word like CAT or DOG. That’s not that impressive and obviously is totally explainable by reference to chance. But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that’s just a three-letter sequence.
Isn't that the critics' point about ID's attacks on evolution? You are specifying the "pattern" after it's already been observed in nature. Since evolution does not have a "goal" or a specific end point, is it fair to use probabilities calculated after the fact? Moreover, since evolution isn't acting all at once, it's not a good analogy to claim it's like throwing out letters and forming an 8-letter word. The analogy would have to account for a simple beginning and then some methodology for mimicking natural selection, et al. (or whatever term you wish to use for the evolutionary process).

Eric, May 16, 2007, 06:38 PM PDT
TRoutMac: "One thing that I think is confusing, and makes probabilities difficult to understand (for me at least), is that saying that an event has a 'one in one million' chance of happening is NOT the same (correct me if I'm wrong) as saying that it WILL happen once in one million times. Isn't that right?"

That is correct. About this question, others have already provided helpful information. I just want to point out that it is not hard to see why, and to see the difference exactly. If you want to see the probability that some unlikely event will happen at least once in some number of independent attempts, it is actually easiest to 1) figure the odds that it won't happen at all over all those attempts, and then 2) subtract this from 100%.

EXAMPLE: If you have a 10% chance of success (1 in 10) for a single try, then you have a 90% chance of failure on each try. Two attempts is 90% x 90% = 81% chance of not succeeding on any try. N attempts is (90%)^N, so ten tries is (90%)^10 = 34.9% chance of not succeeding on any of those attempts. So, even with a chance of 1 in 10, over ten attempts you still only have about 100% - 34.9% = 65.1% chance of success -- NOT 100%. Likewise, trying for heads with coin flips (1 in 2 chance), there is a 50% x 50% = 25% chance that even after two flips you will still not get any heads, so only a 100% - 25% = 75% chance you will get at least one head -- again NOT 100%.

ericB, May 16, 2007, 06:32 PM PDT
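ericB's recipe (find the chance of failing every time, subtract from 100%) is a two-liner; this sketch just reproduces the numbers from the comment:

```python
def at_least_once(p: float, n: int) -> float:
    """Probability of at least one success in n independent tries."""
    return 1 - (1 - p) ** n

print(at_least_once(0.10, 10))   # 0.6513... -> the 65.1% above, not 100%
print(at_least_once(0.50, 2))    # 0.75      -> a head within two flips, not a certainty
```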
ericB wrote: "It isn't the orderliness or symmetry or regularity of a pattern, per se. It's specifying vs. not specifying." Right. For example, if you randomly grab Scrabble letters one after the other, it's actually pretty likely that you'll frequently generate a meaningful word like CAT or DOG. That's not that impressive and obviously is totally explainable by reference to chance. But if you choose (specify) a sequence like "CAT" BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that's just a three-letter sequence.

TRoutMac, May 16, 2007, 05:05 PM PDT
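TRoutMac's point, that hitting some word or other is far likelier than hitting one named in advance, is easy to check by simulation. A sketch; the uniform 26-letter draw and the ten-word toy lexicon are simplifying assumptions, not real Scrabble tile frequencies:

```python
import random

random.seed(42)
LEXICON = {"CAT", "DOG", "BAT", "RAT", "HAT", "PIG", "COW", "HEN", "ANT", "BEE"}
trials = 200_000
any_word = exact_cat = 0
for _ in range(trials):
    draw = "".join(random.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ") for _ in range(3))
    any_word += draw in LEXICON   # success defined as "any word in the lexicon"
    exact_cat += draw == "CAT"    # success defined as the pre-specified word

print(f"some word : {any_word / trials:.5f}   (theory: 10/26^3 = {10 / 26**3:.5f})")
print(f"CAT exactly: {exact_cat / trials:.6f}  (theory: 1/26^3 = {1 / 26**3:.6f})")
```

Naming "CAT" in advance shrinks the success set tenfold even against this toy lexicon; against a full dictionary of three-letter words the gap is wider still.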
Atom: So we kept talking, and eventually came to the conclusion that we need both specification and low probability (viz. classic ID), but I couldn’t come up with a good reason why the specification should “prevent” certain outcomes. (I couldn’t say it was the low probability that prevented them, since lower probability events could happen at any time.)
It might help to think of it this way. Consider writing down any sequence of 100 heads or tails. It doesn't matter if it follows a "pattern" or not. By writing it down you have specified the sequence. No matter which sequence it was, the probability that 100 coin tosses will get that sequence is exactly the same. OR, have someone toss the coins first, and then try to guess (i.e. specify) the sequence that was tossed without looking (also excluding any ESP or other knowledgeable "help"). The probability that the sequence tossed is the one you picked is still the same low probability.

NOW, contrast this with tossing the coin 100 times and getting any old sequence of heads and tails. What is the probability that you will get something or other? Quite high! In fact, it is a certainty. If you want to say more specifically, "What is the probability that I will get some arrangement or other that is not such-and-such kind of pattern?", then just 1) define "such-and-such kind of pattern", 2) calculate the probability of getting that kind of pattern (usually very low), and 3) subtract the answer from 100% (or from one, for zero-to-one probabilities). The result is that it will usually still be quite likely that you will get some sequence that is not-that-pattern -- not a certainty, but still quite high.

BOTTOM LINE: Specification does not need to mean "regular pattern", and specification does not "prevent" anything. However, whether a regular pattern or not, a specified sequence can be extremely unlikely. The mistake is to think that any-old-result is just as unlikely as the specified result. It would be if and only if that particular sequence were specified. If you are willing to take whatever turns up without specifying independently, that is not unlikely at all! It isn't the orderliness or symmetry or regularity of a pattern, per se. It's specifying vs. not specifying.

ericB, May 16, 2007, 04:46 PM PDT
TRoutMac says:
They’re free to assert this, of course, but they shouldn’t expect to retain any credibility since they badmouth ID as being “untestable.”
It is testable, to some degree. (No, we can't tell what actually happened, I realize that, but hear me out here...) So, if the hypothesis is that there are multiple ways for life to start, we can test this by reproducing what we believe to be the initial conditions and seeing what arises. Possibly, we can do thought experiments regarding what combinations of proteins, etc. are self-replicating - but I know that many don't consider those very good evidence. The point is that, if we can reproduce initial conditions and we find that self-replicating substances can form from there, we've shown that it is possible. Here we are not predicting / testing that it did happen, only that it could. Note: these experiments may be extremely difficult to do and take a long time to get results. I know it frustrates a lot of people when scientists don't give up on a theory just because it hasn't borne fruit yet, but there are many areas of science (particle physics, for example) that are playing the same game of experimentation lagging theory by a wide margin. Oh, and for what it's worth, there are a couple of proposed tests for a multiverse concept as well - although they would be very difficult to do.

Eric, May 16, 2007, 01:32 PM PDT
jerry wrote: "For the origin of life, several researchers propose that life had more than one origin and not all may have been DNA based but the DNA based one is the one that won out. There is no evidence for this but because life exists, they say there must be an explanation or a lottery winner." Yes, some do take this tack. However, they've just fallen into a trap because, like the multiverse theory (which holds that there really is an infinite number of universes but ours just happened to land on the magic combination), that theory is untestable. They're free to assert this, of course, but they shouldn't expect to retain any credibility, since they badmouth ID as being "untestable."

TRoutMac, May 16, 2007, 12:49 PM PDT
Since my comment is stuck in the filter, I recomposed it without the link. Atom, we had a very similar discussion before, on a long thread in February, though the concept of a lottery was not really mentioned; search for "Michael Egnor Responds." The actual link seems to put a comment into the spam filter. At the end great_ape, kairosfocus, and gpuccio were trying to illuminate the problem, but it stopped when the thread essentially ran off the list of threads and got too long. No one after almost 200 comments had defined CSI to everyone's satisfaction. As far as I could see, no one has ever answered the lottery example, which is not just that someone has to win the lottery. That is really a bad choice of words, because it doesn't define what the lottery is. The real issue or lottery is that there could be an enormous number of possible starting points for life, all of them of incredibly low probability, and the one that happened or "won the lottery" was just one of these starting points. So this lottery was really a lottery of starting points, and one of the starting points is the one that we observed 3.5 billion years ago as cells fossilized in ancient rocks. If time had marched on, then some other starting point may have arisen. There are of course other lotteries along the way after the first lottery: the nature of multi-celled organisms, the various phyla of the Cambrian Explosion, flight, legs, 4-chambered hearts, etc. For the origin of life, several researchers propose that life had more than one origin, and not all may have been DNA-based, but the DNA-based one is the one that won out. There is no evidence for this, but because life exists, they say there must be an explanation, or a lottery winner. The use of low probabilities does not eliminate the possibility that one of all the potential starting points happened; it says only that the one that appeared was the one that won this "lottery" of low-probability events.

jerry, May 16, 2007, 12:03 PM PDT
I just placed a comment up and it did not appear. I assume it is caught in the filter, probably because of the link in it. Could a moderator check it out when they have time to see there is nothing wrong with it. And then remove this comment. Thank you.

jerry, May 16, 2007, 08:22 AM PDT
Atom, we had a very similar discussion before, on a long thread in February, though the concept of a lottery was not really mentioned: https://uncommondescent.com/biology/michael-egnor-responds-to-michael-lemonick-at-time-online/ At the end great_ape, kairosfocus, and gpuccio were trying to illuminate the problem, but it stopped when the thread essentially ran off the list of threads and got too long. No one after 200 comments had defined CSI to everyone's satisfaction. As far as I could see, no one could answer the lottery example, which is not just that someone has to win the lottery. That is really a bad choice of words; the real issue is that there could be an enormous number of possible starting points for life, all of incredibly low probability, and the one that happened or "won the lottery" was just one of these starting points. So this lottery was really a lottery of starting points, and one of the starting points is the one that we observe. If time had marched on, then some other starting point may have arisen. In fact several researchers propose that life had more than one origin, and not all may have been DNA-based, but the DNA-based one is the one that won out. There is no evidence for this, but because life exists, they say there must be an explanation. The use of low probabilities does not eliminate all potential starting points; the one that appeared was simply the one that won this "lottery."

jerry, May 16, 2007, 08:13 AM PDT
Hey jerry, others, Allow me to throw my thoughts into the mix. I have just had a week long email exchange with a Darwinist friend on this very topic. I posed him a question, to see if he could help me resolve it. I said, can low probabilities ever be used to rule out the "chance" occurrence of an event? We said yes, we do it in stats. We prespecify our rejection region, then reject the chance hypothesis. But then I asked how we could do that consistently, when unlikely events of arbitrarily low probability happen all the time? Take for example the quote mentioned earlier:
If an event is less likely than 1 in 10^150, therefore, we are quite justified in saying it did not result from chance but from design.
Now, let's say I flip a fair coin 501 times and write down the resulting binary string. Was that specific string the result of chance? If so, we have an example of > 500 bits which is the result of chance. So we kept talking, and eventually came to the conclusion that we need both specification and low probability (viz. classic ID), but I couldn't come up with a good reason why the specification should "prevent" certain outcomes. (I couldn't say it was the low probability that prevented them, since lower-probability events could happen at any time.) But somehow, macroscopically describable, low-description-length events of low probability do not happen by chance. I can't say why not, I just see that they don't.

Atom, May 16, 2007, 07:39 AM PDT
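The prespecified-rejection-region procedure Atom describes can be shown in miniature. A sketch of a Fisherian upper-tail test for 100 coin flips at the 1-in-100 level (standard binomial arithmetic; nothing here is specific to ID):

```python
from math import comb

def upper_tail(n: int, k: int) -> float:
    """P(at least k heads in n fair flips) -- the tail probability."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

n, alpha = 100, 0.01   # 100 flips, 1-in-100 rejection level, fixed BEFORE the data
k = next(k for k in range(n + 1) if upper_tail(n, k) <= alpha)
print(f"reject chance at >= {k} heads; tail probability = {upper_tail(n, k):.4f}")
# Prints 63 heads (tail ~ 0.006): the rejection region is specified in advance,
# which is the prespecification the thread keeps circling back to.
```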