
John Derbyshire: “I will not do my homework!”


[[Derbyshire continues to embarrass himself regarding ID — see his most recent remarks in The Spectator — so I thought I would remind readers of UD of a past post regarding his criticisms of ID. –WmAD]]

John Derbyshire has written some respectable books on the history of mathematics (e.g., his biography of Riemann). He has also been a snooty critic of ID. Given his snootiness, one might think that he could identify and speak intelligently on substantive problems with ID. But in fact, his knowledge of ID is shallow, as is his knowledge of the history of science and Darwin’s writings. This was brought home to me at a recent American Enterprise Institute symposium. On May 2, 2007, Derbyshire and Larry Arnhart faced off with ID proponents John West and George Gilder. The symposium was titled “Darwinism and Conservatism: Friends or Foes.” The audio and video of the conference can be found here: www.aei.org/…/event.

Early in Derbyshire’s presentation he made sure to identify ID with creationism (that’s par for the course). But I was taken aback that he would justify this identification not with an argument but simply by citing Judge Jones’s decision in Dover, saying “That’s good enough for me.” To appreciate the fatuity of this remark, imagine standing before feminists who regard abortion for any reason as a fundamental right of women and arguing against partial birth abortions merely by citing some court decision that ruled against them, saying “That’s good enough for me.” Perhaps it is good enough for YOU, but it certainly won’t be good enough for your interlocutors. In particular, the issue remains what it is about the decision, whether regarding abortion or ID, that makes it worthy of acceptance. Derbyshire had no insight to offer here.

What really drove home for me what an intellectual lightweight he is in matters of ID — even though he’s written on the topic a fair amount in the press — was his refutation specifically of my work. He dismissed it as committing the fallacy of an unspecified denominator. The example he gave to illustrate this fallacy was of a golfer hitting a hole in one. Yes, it seems highly unlikely, but only because one hasn’t specified the denominator of the relevant probability. When one factors in all the other golfers playing golf, a hole in one becomes quite probable. So likewise, when one considers all the time and opportunities for life to evolve, a materialistic form of evolution is quite likely to have brought about all the complexity and diversity of life that we see (I’m not making this up — watch the video).

But a major emphasis of my work right from the start has been that to draw a design inference one must factor in all those opportunities that might render probable what would otherwise seem highly improbable. I specifically define these opportunities as probabilistic resources — indeed, I develop a whole formalism for probabilistic resources. Here is a passage from the preface of my book THE DESIGN INFERENCE (even the most casual reader of a book usually peruses the preface — apparently Derbyshire hasn’t even done this):

Although improbability is not a sufficient condition for eliminating chance, it is a necessary condition. Four heads in a row with a fair coin is sufficiently probable as not to raise an eyebrow; four hundred heads in a row is a different story. But where is the cutoff? How small a probability is small enough to eliminate chance? The answer depends on the relevant number of opportunities for patterns and events to coincide—or what I call the relevant probabilistic resources. A toy universe with only 10 elementary particles has far fewer probabilistic resources than our own universe with 10^80. What is highly improbable and not properly attributed to chance within the toy universe may be quite probable and reasonably attributed to chance within our own universe.

Here is how I put the matter in my 2004 book THE DESIGN REVOLUTION (pp. 82-83; substitute Derbyshire’s golf example for my poker example, and this passage precisely meets his objection):

Probabilistic resources refer to the number of opportunities for an event to occur or be specified. A seemingly improbable event can become quite probable once enough probabilistic resources are factored in. On the other hand, such an event may remain improbable even after all the available probabilistic resources have been factored in. Think of trying to deal yourself a royal flush. Depending on how many hands you can deal, that outcome, which by itself is quite improbable, may remain improbable or become quite probable. If you can only deal yourself a few dozen hands, then in all likelihood you won’t see a royal flush. But if you can deal yourself millions of hands, then you’ll be quite likely to see it.

Thus, whether one is entitled to eliminate or embrace chance depends on how many opportunities chance has to succeed. It’s a point I’ve made repeatedly. Yet Derbyshire not only ignores this fact, attributing to me his fallacy of the unspecified denominator, but also unthinkingly assumes that the probabilistic resources must, of course, be there for evolution to succeed. But that needs to be established as the conclusion of a scientific argument. It is not something one may simply presuppose.
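
To make the royal-flush passage concrete, here is a minimal Python sketch of how probabilistic resources change the verdict; the hand counts are illustrative, not drawn from either book:

    from math import comb

    # Probability of a royal flush on a single 5-card deal:
    # 4 royal flushes out of C(52,5) = 2,598,960 possible hands.
    p = 4 / comb(52, 5)  # about 1.54e-6

    def p_at_least_one(n_hands):
        """Chance of at least one royal flush in n independent deals."""
        return 1 - (1 - p) ** n_hands

    for n in (1, 100, 10_000, 1_000_000, 10_000_000):
        print(f"{n:>10,} hands: {p_at_least_one(n):.4%}")

A few dozen hands leave the event hopelessly improbable; ten million hands make it nearly certain. The question in dispute is thus not the form of the calculation but whether the probabilistic resources actually available put a given event closer to the first case or the second.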

There’s a larger issue at stake here. I’ve now seen on several occasions where critics of design give no evidence of having read anything on the topic — and they’re proud of it! I recall Everett Mendelsohn from Harvard speaking at a Baylor conference I organized in 2000, decrying intelligent design but spending the whole talk going after William Paley. I recall Lee Silver embarrassing himself so badly, for want of knowing anything about ID, in a debate with me at Princeton that Wesley Elsberry chided him to “please leave debating ID advocates to the professionals” (go here for the Silver-Elsberry exchange; for the actual debate, go here). More recently, Randy Olson, of FLOCK OF DODOS fame, claimed that in making his documentary on ID he had read nothing on the topic (as a colleague at Notre Dame recently reported, privately, on a talk Randy gave there: “He then explained how he deliberately didn’t do research for his documentary, and showed some movie clips on the value of spontaneity in film making”). And then there’s Derbyshire.

These critics of ID have become so shameless that they think they can simply intuit the wrongness of ID and then criticize it based simply on those intuitions. The history of science, however, reveals that intuitions can be wrong and must themselves be held up to scrutiny. In any case, the ignorance of many of our critics is a phenomenon to be recognized and exploited. Derbyshire certainly didn’t help himself at the American Enterprise Institute by his lack of homework.

Comments
[...] last year I reported on this blog (go here) that John Derbyshire, despite repeatedly weighing in against intelligent design online and in [...]
John Derbyshire: EXPELLED as “Creationist Porn” | Uncommon Descent
April 28, 2008 at 07:06 PM PDT
ericB (63): "But mindless nature has no reason or motive to construct the encoding, storage/transmission, retrieval, and decoding mechanisms necessary to associate meaning with symbols taken as a code. Any part of that system is useless without the others." I agree, but of course the standard response of Darwinists would be that since this is at base a physical mechanism, "meaning" is an abstraction existing only in the conscious minds of us humans considering the matter. Since they cannot for a moment entertain teleology or intelligence in the process, they assume as a given that the genetic code and translation system arose by numerous small, successive adaptive steps. Since they can sort of imagine this (even with no specific "just so" story), it must be how it came about (including, apparently, indefinite numbers of levels of alternate frame coding). Such closed-minded ideological thinking is impervious to reason.
magnan
September 8, 2007 at 01:19 PM PDT
Rude's: It takes a mind to spot a mind -- paraphrased. Great issue . . . though in the case of functionally specified information-rich entities, rarity of function in the config space leads to derangement of function from relatively small random changes [it's hard to build up redundancy and error-correction to take in large changes!]. GP -- Thanks for the kind words. GEM of TKI
kairosfocus
September 8, 2007 at 02:48 AM PDT
I didn't get in on the hash and rehash re the "specification" in that other thread--unless I did and have absent-mindedly forgotten--but maybe the problem is that there is no algorithm, no set of mechanistic procedures that could be programmed into a robot, that could pinpoint a specification. It's a pretty mechanical procedure that would disqualify simple repetitive patterns, but it may take a designer to recognize the specification behind design. And these folks who don't recognize that there is such a thing as mind--could they admit to something that only a mind could recognize? By the way--O'Leary and Beauregard's The Spiritual Brain was waiting for me when I got home. Y'all have a great weekend too!
Rude
September 7, 2007 at 04:41 PM PDT
Sorry to join this interesting thread so late. I see that much has already been said (thanks kairosfocus, for always being so generous and pertinent) and we have discussed similar things before, so I will just try to sum up some aspects which are especially dear to my heart:

1) Consensus: Jerry seems to think, if I understand well, that in our discussions nobody has been able to give a clear definition of specification. I disagree. Dembski has done that very well, and we have tried, in our simple way, to "explain" some aspects in our discussions. The fact that some remain unconvinced is nobody's fault, but I am very happy with the concept of specification and with the general understanding of it on this blog.

2) Specification: I will try anyway to sum up my personal understanding of specification, even if I am certain that the concept is still so deep that it will be clarified further in the years to come. Specification, as I see it, is any characteristic which makes some piece of information specially recognizable by conscious and intelligent beings. Specification can be given by at least three different mechanisms: a) Pre-specification: if a specific piece of information has been defined in advance, it can be "recognized" when it occurs again. In this case, specification is not a property of the information itself, but rather of the previous occurrence of the definition. b) Compressibility (order): some (rare) pieces of information are highly compressible in terms of bits, and that makes them recognizable to conscious, intelligent beings. A sequence of 1000 identical bits is a good example of that. c) Function: some (rare) pieces of information can "accomplish" specific tasks. In other words, they have a recognizable "meaning", linked to what they can do or communicate. The classical sequence of prime numbers is a good example. A functional protein is another one. A computational algorithm is still another one. This is, perhaps, the most important kind of specification, and the most represented in nature. I believe that all languages (both natural and programming languages) fall in this category.

3) Low probability: specification alone is not enough. Low probability is necessary too, so that we can have CSI. A sequence of ten identical bits is specified, but its probability is not very low (1 in 2^10). A sequence of 10^150 random bits has an extremely low probability, but it can easily occur (unless it is pre-specified). But specification "plus" extremely low probability cannot practically occur in reality. And if we assume a very, very generous level of low probability, like Dembski's UPB, then you can be really sure that any CSI is due to design. It didn't occur by chance; it was conceived and written by an intelligent being.

4) Lottery: any discussion about lotteries is completely overcome if one really understands the previous points. No lottery in the universe could "win" a specified outcome with UPB probability (or lower). Unless, of course, you believe in the multiverse argument. But then, that's your problem...

5) The "life could have been built in many other ways" argument: Maybe that's true. But the argument is completely insignificant. Let's go back to the problem of function, and let's consider, for simplicity, computational algorithms. Let's consider ordering algorithms, that is, sequences of bits which, in a specific hardware environment, can order data. We know that the ordering process can be realized in many different ways. You have many different ordering algorithms, of different complexity, length and efficiency. All of them can perform the task, even if in different ways and times. But how many sequences, within the limit of a certain length of bits, do you think are ordering algorithms? More than one, but certainly not many. So, the probability of finding by chance an ordering algorithm in a random sequence of, say, 500 bits, will probably not be exactly 1 in 2^500. Let's suppose that ten of those sequences are functional ordering algorithms. Then, if I am not wrong, the probability will be ten times higher, that is, something less than 1 in 2^496. Do you think that makes a big difference?

6) So, unless you believe in the multiverse fantasy, or unless you believe that for some even more fantastic reason a lot, but really a lot, of the random combinations of information can give rise to life, the CSI argument is solid truth, and the "lottery" argument is completely bogus.

7) Finally, just stop and consider that, according to what we can see daily in our world, all around us (living beings of all kinds and forms, just to be clear), the supposed "lottery" should have been won billions of times, each one at a level of CSI abundantly above (or below, if you prefer) the UPB limit: first of all to have a universe capable of order and life; then to get the OOL, the DNA code, the transcription and translation system; then to get eukaryotes from prokaryotes; then to get multicellular beings, sexual reproduction, the Cambrian explosion, different body plans, each single new functional protein, each single new regulation network, the evolution of each species, the development of the nervous system, of the immune system, and so on... Not to mention the flagellum, just not to repeat ourselves.
gpuccio
September 7, 2007 at 03:16 PM PDT
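
gpuccio's arithmetic about multiple functional targets barely moving the exponent is easy to check; a quick Python sketch (his count of ten ordering algorithms is a hypothetical, as is the 1000-target line added here for contrast):

    from math import log2

    space = 2 ** 500  # configurations of a 500-bit sequence
    for targets in (1, 10, 1000):
        p = targets / space
        print(f"{targets:>5} target(s): p = 1 in 2^{-log2(p):.1f}")

    # 1 target:     1 in 2^500.0
    # 10 targets:   1 in 2^496.7
    # 1000 targets: 1 in 2^490.0

Multiplying the number of functional sequences by ten shaves only about 3.3 off an exponent of 500, which is exactly his point.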
1. If you follow social construction theory all the way through, you will find that it follows a Hegelian world view, and its special application allows for a thing to be true and false at the same time and under the same formal circumstances. That is one subtle reason why it is impossible to have a rational discussion with its advocates. 2. Not only will Derbyshire not do his homework, he will not even respond when someone else does it for him. The difference between CS and ID was explained right in front of him. Does anyone doubt, nevertheless, that he will continue to promote the big lie by conflating CS and ID even after having been instructed? How can anyone attribute ignorance to behavior that is clearly dishonest?
StephenB
September 7, 2007 at 03:16 PM PDT
[...] John Derbyshire: “I will not do my homework!” Whose knowledge of ID isn’t shallow? I will grant the ID-ists one thing: their tactics are clever. Make the public think the advanced math in Dembski’s books makes ID an esoteric subject requiring ‘lots of homework’. The air of expertise has certainly proven effective as a propaganda snow-job. But, apart from some obvious blunders that people do make in the restatement of the Dembski version of design, the question of the design argument is not complex. It seems complex because it tweaks one’s metaphysical unconscious (as does ‘natural selection’), but in fact no one has ever gained an inch of ground on the question since Kant and Hume. John Derbyshire has written some respectable books on the history of mathematics (e.g., his biography of Riemann). He has also been a snooty critic of ID. Given his snootiness, one might think that he could identify and speak intelligently on substantive problems with ID. But in fact, his knowledge of ID is shallow, as is his knowledge of the history of science and Darwin’s writings. [...]
Darwiniana » Lots of homework, even ID detention
September 7, 2007 at 02:14 PM PDT
But I was taken aback that he would justify this identification not with an argument but simply by citing Judge Jones’s decision in Dover, saying “That’s good enough for me.”

Or imagine him saying Dred Scott was "good enough for me" or Plessy v. Ferguson.
tribune7
September 7, 2007 at 01:09 PM PDT
O'Leary:
A person who accepts a court ruling as definitive in a matter of this type apparently believes in the social construction of reality.
Physics can be socially constructed too! See: http://www.jefflindsay.com/PCPhysics.shtml An excerpt:
The prohibitive, traditional "laws" of physics must be rejected in favor of new models that foster tolerance, empowerment, and social justice. Under the old order, radical conservative forces have imposed "conservative" laws restricting the use of energy, mass, momentum, and electrical charge. Rather than conserving such forces and powers, they must be increased and made available to all people, regardless of race, gender, or sexual orientation.
If anyone needs a smile today, check out the whole article.
dacook
September 7, 2007 at 12:43 PM PDT
TroutMac, "One thing that I think is confusing, and makes probabilities difficult to understand, (for me at least) is that saying that an event has a 'one in one million' chance of happening is NOT the same (correct me if I'm wrong) as saying that it WILL happen once in one million times." This "somebody has to win the lottery" argument is useless because it ignores relevant issues. For example, consider a Rubik's Cube. It has a limited number of configurations or states. To get from one state to another state requires a certain minimum number of intermediate states, or steps. No amount of random activity or "luckiness" can change this fact. There are certain states that can never occur for a Rubik's Cube, never, ever, unless, of course, you peel the stickers off and put them back on in one of these impossible states. If I gave a five-year-old kid one of these cubes in an impossible state, he probably would not be able to detect the fraud. But an adult scientist who is familiar with Rubik's Cubes would most certainly be able to. My point is, blindwatchmaker devotees merely assume that it is possible to get configurations of proteins, like a flagellum, without demonstrating that the material path is possible without a "cheat" imposed by an intelligence with insight. When they can demonstrate this - (Matzke's paper doesn't come close to giving a complete development of the assembly process) - I will begin to take them seriously. I'm neither religionist nor "anti-evolution". I'm just an engineer who demands proof of concept for claims made. If these blindwatchmaker devotees were engineers and approached reality that way, I would not hire them.
mike1962
September 7, 2007 at 10:45 AM PDT
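
mike1962's Rubik's Cube point can be made exact with the standard counting argument (a sketch; the factor-of-12 result is well known, and the code is just the arithmetic):

    from math import factorial

    # Legal (reachable) states of a 3x3x3 cube: the exponents 7 and 11
    # and the final division by 2 encode the twist, flip, and
    # permutation-parity constraints that legal turns preserve.
    reachable = (factorial(8) * 3**7 * factorial(12) * 2**11) // 2

    # Peel-and-restick states: drop all three constraints.
    restickered = factorial(8) * 3**8 * factorial(12) * 2**12

    print(f"{reachable:,}")          # 43,252,003,274,489,856,000
    print(restickered // reachable)  # 12

Eleven of every twelve sticker arrangements are "impossible" in exactly the sense mike1962 describes: no sequence of legal moves can ever reach them.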
Hi Jerry: I believe I have already said enough, by objective measures. I have shown by discussion, in light of examples tied to digital strings, what the CSI concept is, and that it emerged from the OOL research community at the turn of the 1980's as a means for understanding how life is different from the sort of ordering that, say, Prigogine investigated. [BTW, Prigogine as cited in the linked TBO material gives some interesting comments too . . .] I have also discussed the difference between concepts and definitions, and addressed the issue of the limitations of proof, in science and generally. In my judgement, I have given you and others enough, and note that your own comment is that:
the materialists have an answer for that even if it is bogus
Now, too, the insistent use of "bogus" -- i.e. objectively fallacious -- but persuasive arguments is the mark of the manipulative rhetor or even propagandist, not the serious thinker. (Onlookers, cf my always linked on what I think of the notorious God-of-the-gaps fallacy.) The proper answer to such dishonest advocacy is not to let them get away with such selective hyperskepticism, but to expose them, and point out how long since this has been exposed. That too, I have done. Cheerio :-) GEM of TKI
kairosfocus
May 22, 2007 at 04:50 AM PDT
Actually I always thought there were one-eyed, one-horned, flyin' purple people eaters up there. :-)
tribune7
May 21, 2007 at 10:50 AM PDT
tribune7, Actually I always thought there were one-eyed, one-horned, flyin' purple people eaters up there. Thank you for letting me know that I was mistaken. But maybe they both could be up there, since we are not 100% certain, and they could be symbiotic.
jerry
May 21, 2007 at 07:34 AM PDT
4] The real issue: Philosophy and institutional politics, NOT science and analysis Dittos. Jerry, something for you to mull: How do you know there really aren't green men on the moon? I mean to 100 percent certainty. As a mental exercise, imagine a scenario in which green men are living on the moon unbeknownst to us. Then ponder this: whatever you come up with will be more likely than for life to have occurred without a designer.
tribune7
May 21, 2007 at 04:56 AM PDT
kairosfocus, I have read the other thread and your comments and have yet to see a coherent discussion of specificity. All that is offered is low-probability examples, and the materialists have an answer for that even if it is bogus. It is the God of the Gaps argument, and it has been winning the day ever since Laplace made Newton look foolish. As I said, the discussion is all over the lot, and people keep using low-probability events as examples of specificity but offer no definition or reason why that word should be used. How do Mt. Rushmore, coin flips, DNA, card orders, etc. justify the use of the word specificity? The people you have to convince are the ones doing medical research, running the government agencies and universities, and those supporting them, not the people on this site. You also have to convince the typical science student in the country that you have a coherent scientific explanation for what you propose. ID has two sides to it, and one is open to a lot of criticism because it does not seem to have any lucid argument for it other than a general appeal to low-probability events. I don't think bringing up faith is useful as part of the discussion, especially when ID is primarily looked at as a conversion opportunity by many.
jerry
May 21, 2007 at 04:17 AM PDT
PS: Based on comments in another thread, Jerry may wish to look here to see more on the limitations of our thinking and reasoning. In brief, reasoning inherently embeds faith-commitments, and science in particular is no exception. Further to this, scientific reasoning is by inference to best current explanation, not demonstrative proof. Indeed, even proofs rely on faith points. [Or we end up in infinite regresses.]
kairosfocus
May 21, 2007 at 02:25 AM PDT
Continuing . . . 3] EB: The worst problem for the unguided hypothesis is not the improbability of forming symbols (though that may be unlikely). The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code. Strictly, since any one config in a set of possible outcomes is, by the Laplacian principle of indifference, just as attainable as another [absent specific reason to weight possible outcomes unevenly, which simply makes the point a bit more complicated to calculate], we can “just by coincidence” get to a code and an associated integrated information system – no logical or physical/force/energy barrier directly prevents it. Just as the oxygen molecules in the room in which you sit can all rush to one end by chance, leaving you to collapse mysteriously. But the probabilities of such happening by chance are so remote that the “lottery” to attain that target by chance is unfeasible – this is the basis for the statistical form of the second law of thermodynamics. (In short, a probabilistic-resources hurdle is as real and as effective as a direct physical barrier. Also, J, while you may indeed have studied Math, did you do statistical thermodynamics? That is the materially relevant discipline. Unfortunately, it is also one of the more abstruse and subtle provinces of physics; cf. my appendix A in my always linked for some sketched-in thoughts on what I am getting at.) What the proposed chance origin of a code with the symbols, and the associated algorithms and complex information processing system, does is to compound utterly beyond mere astronomical odds, making the point that there is a probability hurdle there plainer and plainer by reduction to absurdity. After all, there is a simpler, even obvious explanation: agents routinely use symbolic code, target purposes, create algorithms, and use physical resources to implement technologies that express the codes and algorithms, then execute them to achieve the targets. That is what we see with OOL, macro-level biodiversity and cosmogenesis, including the multidimensional Goldilocks-zone effect that leads to us on Earth. So, the probability issue I have emphasised is more general, but the symbolic-language-and-algorithms-by-chance issue makes the absurdity more directly obvious in a computer-literate age. 4] The real issue: Philosophy and institutional politics, NOT science and analysis So much is the point just above the case that the real device being used to block the inference to design is the attempted imposed redefinition of science as excluding agency on relevant matters, through so-called methodological naturalism. This is evident in court decisions as well as statements of leading institutions and spokesmen. In short, it is only by begging major definitional and historical questions on the nature of science that the illusion that evolutionary materialism is “science” can now be sustained. That is why the current Gonzalez case is so blatantly political and patently unjust in character. But unless the public wises up, then rises up, the power brokers hope to get away with injustice and oppression. BTW, that is one of the reasons why a favourite accusation of the evo mat advocates these days is that ID thinkers are doing politics and PR, not science. For, so long as they control the institutions and the media mikes, they can block hiring or promotion or tenure or publications and break careers unjustly to their heart's content.
But they know that if they face an accurately informed and justly angry public, they don't stand a chance. So, we see the classic resort to the bodyguard of lies and the turnabout false or misleading accusations to keep such an uprising at bay for as long as possible. Time to wise up and rise up, folks. Seriously . . . GEM of TKI
kairosfocus
May 21, 2007 at 01:17 AM PDT
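
The oxygen-in-the-room illustration in the comment above is worth quantifying, even in toy form; a real room holds on the order of 10^26 molecules, scaled down here so the exponents stay printable:

    from math import log10

    # Chance that all n gas molecules sit in the left half at once: (1/2)^n
    for n in (10, 100, 1000):
        print(f"n = {n:>4}: p = 10^{n * log10(0.5):.0f}")

    # n =   10: p = 10^-3
    # n =  100: p = 10^-30
    # n = 1000: p = 10^-301

At n around 10^26 the exponent is roughly -3 * 10^25: not forbidden by any law, just never to be observed, which is the statistical form of the second law in miniature.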
Hi again: I see DaveScot has started a definition-of-ID thread now that this one has slipped off the opening page. Now on key points: 1] Jerry: ID etc. There are three strands of issues on the Evo Mat view that fall generally within the ambit of science-dominated reasoning: i] Critique of the NDT thesis that RM + NV accounts for body-plan-level biodiversity, with a secondary [but it should be primary] issue on OOL. (Secondary arises because, conveniently, the origin of life is not viewed as part of NDT proper, though of course there is a close association.) This is far broader than ID or Creationism, and has quite a distinguished history actually. ii] ID, biological: that agency best explains origin and macro-level biodiversity, as opposed to chance + necessity in the NDT paradigm. iii] Creationism [Biblical form; there is a generic Creationism that is not pinned to specific texts]: asserts that the Biblical account is an accurate record of origins by credible witnesses and the Creator, which leaves sufficient evidence that is observable that we can see good reason to take it at an appropriate level of interpretation. --> The first is pretty well established, though hotly contended. There is good reason to see that for the second, also. The third is more controversial. 2] The concept of specificity seems to be all over the place and what I was looking for is a simple definition and then seeing it applied to the many examples offered. Part of the problem here is the concept of a definition. Definitions can be seen as falling under two general heads: by example, and by precising statement. The latter falls into two sub-heads: genus and difference [i.e. taxonomy], and statements of, in effect, what is necessary and sufficient to see that a putative case is really in/out of the target zone. The trick to it is to understand that we first form concepts by abstracting commonalities from experiences with examples of a pattern. (Think about how we come to understand chairs, tables, furniture, artifacts etc.) Precising verbal definitions then apply boundaries to the concept, and depend for their credibility in the first instance on the ability to include recognised examples, and exclude recognised non-examples. Then, we tend to give the precising statement a status of gatekeeper over whether or not something is in/out. But, note what has happened – the examples and concepts come first and are logically prior to the statements. In many real-world cases, we are unable to come up with such precising statements that are acceptable to every rational agent, not least because worldviews and agendas are often at stake. In other cases, we just simply cannot figure out how to do so: try to define “life.” But the concept is valid, and we can identify many clear instances and non-instances, with borderline cases that challenge our ability to precisely state terms and conditions for in/out. But, family resemblance rules. It is fair comment that above, I have cited adequate examples to give a clear enough concept, and that Dembski's model offers a reasonable filter for deciding at least on clear cases. And, the cases in view are more than clear – they are more or less plain as day: agency is by far and away the best EXPLANATION for the OOL and its macro-level diversity – in the realm of facts we are dealing with inference to best explanation, not demonstrative proofs to an arbitrary standard (which is often “conveniently” substituted when the best explanation does not sit well with one's worldview; i.e.
resort to selective hyperskepticism, as Simon Greenleaf long ago pointed out). It is also the best explanation by far for the fine-tuning of the observed cosmos for life like ours. Just, this cuts across institutionally dominant worldviews. BTW, the config space issues and associated probabilities are tied to the issue of IBE: when the raw probability of something happening by chance that just happens to come out very conveniently to fit a target zone becomes too incredible, agency makes a lot better sense. Pausing . . .
kairosfocus
May 21, 2007 at 01:06 AM PDT
jerry, regarding CSI and clear explanations for the average person: although I love math, I find most people don't, and they do not trust it, even with clear explanations. (Try getting people to accept that 0.999... repeating exactly equals 1. ;-) This is doubly so for probability. Even professional mathematicians and university professors can be convinced and yet be dead wrong. See Marilyn vos Savant's The Game Show Problem. I believe it is important for mathematicians such as Dembski to make their case, which is a genuine contribution, but I also expect this to remain a black box for most people. That said, consider for a moment swapping the coin-flipping machine for a prebiotic DNA base-pair generating machine. Given a particular assumed genetic code, one might consider probabilities for chance generation of sequences corresponding to functional proteins, etc. However, probabilities calculated based just on the sequence itself are independent of, and do not reflect, whether decoding mechanisms exist. Without an actual decoding mechanism to give associated semantic meaning to the sequence, all sequences are equivalently meaningless as well as equally unlikely. A sequence without decoding is just noise, no different than random bits. Some ID skeptics think that because random noise counts as Shannon information, nature's ability to generate random noise can solve the information problem. But analysis that only goes as far as sequence improbability has not yet touched the much deeper issue of creating associated semantic content and meaning. I believe that the fact that language requires intelligence can be made more accessible to the average person than pure probability analysis.
ericB
May 20, 2007 at 03:17 PM PDT
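
For readers who would rather test the Game Show Problem ericB mentions than argue about it, a short simulation settles the question (a sketch, not taken from the linked page):

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # Host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print("stay:  ", play(switch=False))  # ~ 0.333
    print("switch:", play(switch=True))   # ~ 0.667

Sticking wins about a third of the time, switching about two thirds: exactly the answer that so many professors refused to believe.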
jerry, regarding lotteries and responses to ID skeptics, it is not surprising to find ID skeptics suggesting that there might be many paths to "life". One path they may like is to redefine life to include other options. I wouldn't attempt to deny that "life" could be redefined such that even generations of stars might be counted as life. But none of that would tell us that unguided processes can create the language-based life we actually observe. Redefinitions of life become a dodge. IOW, the issue is not whether unguided nature could make anything that might be called life (including various unseen and unthought-of possibilities). The issue is whether the best inference concerning the language-based life we do observe and study is that it requires intelligent agency. I do not believe that lottery reasoning can defeat that inference, because I do not consider the inference to be based merely on a probability argument. In particular, I would submit that unguided nature has zero ability to cross the Language Barrier to processing of symbol sequences as coded messages, regardless of how many possible routes it tries to run up to the Language Barrier.
ericB
May 20, 2007 at 02:08 PM PDT
The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code.

Good point.
tribune7
May 20, 2007 at 02:05 PM PDT
tribune7: It really comes down to what is the most reasonable explanation. It’s theoretically possible for wind and rain to etch the alphabet on a rock, but if you found the alphabet etched on a rock would you really, really believe it was done by wind and rain?
This is a good point to keep in mind. Science makes (or should be making) the best inferences possible -- not proofs -- based on the data it can access so far. Even long-standing perspectives (e.g. Aristotle's understanding of the content of the heavens, Newtonian physics) may eventually be superseded as we learn more. About the alphabet example, there is one crucial aspect that we tend to overlook. The worst problem for the unguided hypothesis is not the improbability of forming symbols (though that may be unlikely). The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code. It comes so easily and naturally for us, we might think only of the difficulty of forming letters (visions of grade-school trauma?). But mindless nature has no reason or motive to construct the encoding, storage/transmission, retrieval, and decoding mechanisms necessary to associate meaning with symbols taken as a code. Any part of that system is useless without the others.
ericB
May 20, 2007 at 01:31 PM PDT
kairosfocus, You have to understand that I support the ID position and think Darwin's gradualistic approach is nonsense, and that OOL is probably the best case for ID there is, followed by the Cambrian Explosion. I believe the anti-gradualism information, or lack of it, is the killer for current biology's preoccupation with neo-Darwinism. At the moment I am halfway through watching an hour-and-50-minute video I got on iTunes by a Stanford professor who so far has not introduced any empirical evidence for gradualism but is just extolling what a wonderful theory it is. He has spent the last 10 minutes talking about dog and pigeon breeding, which is interesting, and I am sure he is going to use the same rhetorical approach Darwin did to convince you it works the same way in the wild. So I subscribe very readily to the half of ID that presents the information that is critical of gradualism. On the other side, I understand all the arguments from small probabilities and complexity that cannot be overcome in any likely manner by chance and law. What I have failed to see is a clear, cogent discussion of CSI in plain English. The concept of specificity seems to be all over the place, and what I was looking for is a simple definition and then seeing it applied to the many examples offered. I also did not see any good refutation of the lottery argument other than hand waving. People generally misunderstand the lottery argument and the materialists' objections to the ID use of small probabilities. It is not necessary to go into a long exchange. I do not have time to reply. Over time I will see if there is anything that I think is easy to understand that explains CSI clearly. So far I have not seen it, but there is a lot more to read. I was a math major and had many courses in statistics and probability in graduate school, but these were quite a while ago and I never used them in work, so I am familiar with the arguments but the details have long faded into the background.
jerry
May 20, 2007 at 06:55 AM PDT
PS: I discuss the technical nature and origins of the fallacy I descriptively term selective hyperskepticism here. I of course put the link in its own little post to see if that will get through the ever-watchful spam filter . . . PPS: Part 3 seems to have got itself swallowed by the ever-lurking spam filter . . . [It includes a test case and addresses the usual Genetic Algorithm type objection . . .]
kairosfocus
May 20, 2007 at 05:25 AM PDT
Concluding . . . [OOps, 1 in 1/4 * 10^211,741th fraction . . .] 6] Defining CSI Again, we have an independent control on the “specification”: the macromolecules involved must hit the fly on the wall, i.e. they must function in a viable organism, and we have good reason to believe that the configs in question are incredibly isolated in the overall config space, even at the bottom end of the range for viable life. Then, in the Cambrian, for instance, we have to get dozens of new body plans in a context where, for instance, a modern arthropod (oddly enough, a fruit fly!) has a DNA of order 180 million base pairs. If just 10% of that is functional, we need to account for, on an earth of ~ 6 * 10^24 kg and estimated lifespan ~ 4.6 BY [again being over-generous], the origin of some 17.5 mn base pairs of biofunctional information. The config space for that is ~ 7.05*10^10,536,049 “cells.” This swamps the 10^5000 overgenerous possible organisms again. We see complexity here: information storage capacity required to express the functional system, and we see specification (and associated intricate, integrated structure and life-function algorithms): an easily observable and highly specific target: either it flies or it fails. Either it hits the target or [far more likely on a random search] it hopelessly misses. And, we have no need to entangle ourselves in the project of trying to get a global precising definition that every rational agent is compelled to agree with; we have cases in point enough to have to deal with the facts as observed. So the OOL researchers were forced, over 25 years ago, to identify and accept that there is an objective, easily observed and abundantly instantiated difference between [1] order and [2] complexity and [3] complex, specified information that is characteristic of life. They provided cases in point, based on linear strings of potentially information-storing elements. [From a digital perspective the difference is simply that of how many states the “alphabets” have: English, 26 letters and a space (27 states), in the simplest version; DNA, 4 states; protein, 20. We simply exponentiate: N^i = number of accessible states.] The sudden rhetorical backtracking, and in some cases the pretence that it is ID thinkers who have the burden of proof beyond all rational dispute, is ill-informed at best or even frankly dishonest. Speaking of which: 7] how can you completely eliminate some small specified sequence happening, less than the probability limits you have set up and then having this event accepted and then why could there not be another. Provisionality is part and parcel of scientific work, which seeks to account for the currently known and credible facts through the best explanation to date. Of course it is always conceivable that we can come up with an exception. For instance, we could conceivably come up with a perpetual motion machine, and so throw all of thermodynamics into a cocked hat. But, based on what we do know and can best explain, this is not likely, so we accept that thermodynamics is a well-warranted science. So, I refuse the improper shifting of the burden of proof through selective hyperskepticism, and so should you. On the evidence, the architecture of life has both the complexity and specificity that I have noted above, which go beyond the reasonable reach of the proposed chance + necessity mechanisms.
But, on inference to best explanation, this is well within the reach of agent action – and absent certain institutionally powerful worldview commitments, would immediately be seen and accepted – so I simply point out that this is selective hyperskepticism at work. So, if you want me to accept your speculative mechanism, you must meet a simple empirical test, commonly used in the sciences: demonstrate, on a replicable basis, the creation of functionally specified complex information beyond the Dembski-type bound through mechanisms that rely only on chance plus undirected natural forces, even incrementally, starting with simple stages and cumulating. [But, TARGETED searches based on genetic algorithms are nothing but illustrations of how agents can use reasonable random searches to find targets within the reach of probabilistic resources. "Methinks it is like a weasel" etc. fail.] A simple case in point would be to take 1 billion PCs, load them with unformatted floppies and rigged drives that will spew random noise across them once every minute for a year, then test for a properly formatted disk - which can be replicated to your heart's content. After success in that exercise, further random noise will be spewed across the surviving disks [let's be generous and say that the formatting is not to be touched], and we look for a properly formatted document, image or program in any reasonable PC document format, with at least 1 k bits of information in it. How many years would we have to wait for success? GEM of TKI
kairosfocus
May 20, 2007 at 05:20 AM PDT
Continuing . . . 4] why couldn’t one of these many, many ways that God could have created life be one that could have emerged without direct agency through law and chance. Note, it is you who are introducing Gods into the discussion; I have spoken of intelligent agents and their capacity relative to undirected chance plus necessity. Inference that the principal agent involved in life as we observe it is God is beyond the proper current realms of Science, as there is no commonly accepted base of empirical observation. In philosophy – the topic thus introduced – there are good reasons to infer to God as principal agent, but that is a different subject and not on topic. [If scientific arguments can be marshalled in a phil context when they are thought to support atheism, surely they can also be regarded as properly science even though they may now lend support to theistic worldviews!] 5] Everybody then knows that what you mean is that some random process by chance with the help of the laws of physics produced more than order at one time and this particular thing is the result. And if it could do it once, then it could do it again. And if it could do it for small example, then it could do it again and add to this small example to make it a slightly bigger example. And once you are there, what is to prevent it from going even bigger. An admirable summary of a common, but fatally flawed, perception. The individual step of complexity is too big, and the islands of functionality are too isolated, by far. [Cf my always linked.] Unfortunately, it fails to see and understand just how fast the available probabilistic resources run out, i.e. 500 – 1,000 bits of functional information is more than enough to kill off such a mechanism as hopelessly beyond the reach of the observed universe [~ 10^80 atoms, and 13.7 BY]. Thus, the relevance of Dembski’s upper probability bound! And note the relevant range: if a configuration or cluster of configs is isolated in the space of all configs of such information-carrying capacity, to better than one in 10^120 – 150, it is beyond the reach of the observable universe. Remember, even a small DNA strand is 500k long and life function breaks down if knockouts go beyond about 360k. 4^360k ~ 3.95*10^216,741. I doubt there have been/can be as many as 10^5000 examples of DNA over the years from origin to heat death of the universe – I am being incredibly over-generous, as the Dembski number is the number of QUANTUM STATES possible across the lifetime and gamut of the cosmos. 10^5,000 is a ¼ * 10^215,741th fraction of the possible states for just a 360,000 base-pair DNA strand. The fly is incredibly isolated on the wall! But, we are not dealing with Mr. Spock here. So, we have to address the problem of worldview-level question-begging through selective hyperskepticism, as I have pointed out above. Pause . . .
kairosfocus
May 20, 2007 at 05:01 AM PDT
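
The huge exponents in the two kairosfocus comments above are easy to reproduce with logarithms, since the numbers themselves overflow ordinary floating point (this checks the arithmetic only, not the biological assumptions behind it):

    from math import log10

    def config_space(bases):
        """log10 of the number of 4-state sequences of the given length."""
        return bases * log10(4)

    for length in (360_000, 17_500_000):
        e = config_space(length)
        print(f"4^{length:,} ~ {10 ** (e % 1):.2f} * 10^{int(e):,}")

    # 4^360,000    ~ 3.95 * 10^216,741
    # 4^17,500,000 ~ 7.05 * 10^10,536,049

Both quoted figures check out as straight arithmetic.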
Hi Jerry: I do not really want to get into a long back-and-forth exchange, having just had 3/4 MB with Pixie over in my own blog, on closely related subjects. [You can follow up through my always linked . . .] I comment: 1] Lottery example. I addressed that further back above, and others did so; IMHCO, cogently. In the case in view, the onward point relative to Leslie's remarks is that LOCAL isolation is sufficiently wondrous to raise the issue of design as the best explanation relative to what we know about the origins of FSCI systems. You might want to look at Denton's remarks on the matter back in 1985 or so. It boils down to this: if islands of functionality [they don't have to be unique] are sufficiently isolated, random-search-dominated, non-purposive strategies cannot credibly access the first island or hop from one island to the next. In short, the OOL and body-plan-level evolutionary claims lack empirical foundation. And, so does the origin of THIS life-habitable universe. 2] Are there a large number of ways to construct life, apart from the DNA and proteins design we see around us? First, I DID raise even spiritual forms of life as a case in point considered seriously by a great many informed people across the ages. [Indeed, some argue – IMHCO, compellingly; cf my always linked – that we are spiritual-material hybrids, and that this is the root of the credibility of mind and the compelling nature of morality.] Second, I explicitly noted that the DNA-protein architecture we see manifests FSCI, and on a local basis the configurations that work are sufficiently mutually isolated that random-search-dominated strategies cannot credibly originate and diversify what we see. But agent action easily explains what we do see in light of what we do know. 3] Gambler example: Of course, the point of the tricky gambler is that this is an instance of a naïve person imagining that random forces are in play when in fact clever design is at work. FSCI is again the product of design . . . Break
kairosfocus
May 20, 2007 at 04:53 AM PDT
kairosfocus, I read all your comments above and I have two reactions. First, I do not have to be convinced of the incredibly low likelihood that life could arise by chance or even law, even if the universe was designed to lead to life, which is the theistic evolutionist's assumption. I cannot imagine any way it could happen even given an eternal universe. Second, I have seen enough rabbits pulled out of a hat to know that when a gambler bets you that he can cut the ace of spades, you better not bet against him. I will get back to the ace of spades bet at the end. Nothing in your discussion covers the lottery example. Namely, that there could be an extremely large number of ways to construct life and we happen to be the one example that emerged. Given different initial conditions, or different accidents of nature, another form may have emerged at a later time. Also, I wouldn't want to challenge God to say He could not have made life differently, or that the way He chose here was the only one, or even one of only a few. So if you believe that, then why couldn't one of these many, many ways that God could have created life be one that could have emerged without direct agency, through law and chance? Now, the term "emerge" is a hot term in evolutionary biology circles because it explains everything. You just have to say this is what emerged or evolved without giving the process or steps that took place. Everybody then knows that what you mean is that some random process by chance, with the help of the laws of physics, produced more than order at one time, and this particular thing is the result. And if it could do it once, then it could do it again. And if it could do it for a small example, then it could do it again and add to this small example to make a slightly bigger example. And once you are there, what is to prevent it from going even bigger? So it goes on and on, and deep time is your ally. And to use a Darwinian metaphor, it could be that some of these complexities were more stable than others, and that these are the ones that survived, and that the conditions that produced them no longer exist except that the complexities survived. It is all Darwinian just-so storytelling, but how are we to know it could not have happened, or even that God did not want it to happen this way? For example, at the time of the Cambrian Explosion, our solar system was in the galactic spiral arm and the amount of radiation falling on the earth was probably much higher than it is today, as it was surrounded by millions of stars much closer than today. The night sky would have looked much different then. Now, as The Privileged Planet so rightly points out, we are out on our own with very few other stars, off the galactic arm. Could this have had an effect? So what has to be addressed is the step approach, and I am not sure if CSI does this; after reading your comments, I am not sure I yet understand a general definition of CSI. Where does the specificity lie? Is it in the sequence itself or in some outside reference to the sequence? For example, in an English sentence, the specificity comes from the grammar and dictionary of the English language, not the sequence itself, or else we wouldn't know it wasn't nonsense. But in a fair coin toss of 500 heads in a row it is in the sequence itself. In a DNA sequence the specificity comes from the fact that the process produces functional proteins, not from a sequence of ACTGs having meaning of itself. If the proteins were not functional, would we say the DNA was specified?
Actually it is what produces the tRNAs and the ribosome that gives specificity to the DNA, which I also assume is somehow produced by DNA. I have only seen how proteins are assembled, not how such things as ribosomes are made. (Does anyone have a cite to explain this?) So your discussion of CSI should discuss what gives specificity to a sequence, and this is generally lacking in any discussion I have seen. Why are coin tosses and sentences both specified? Why is the same term used in each example? What specifies DNA or proteins? I prefer not some abstract information-theory approach but something in plain English that the average person could understand. The tendency is to give examples and not a definition that would help one decide if this sequence is specified or not. I tried to read Dembski's book but gave up. We all can recognize unlikely events. And I understand that DNA is different from a specific rock outcropping, which is also both rare and complex. But how can you completely eliminate some small specified sequence happening, less improbable than the probability limits you have set up, and then having this event accepted, and then why could there not be another? And then, when combined with the other low-but-acceptable event at a later time, they form an event that is outside the boundaries. There are a lot of questions to be answered. By the way, the gambler example came from an old TV show called Maverick. A gambler bet Maverick, who was also a gambler, a thousand dollars that he could cut the Ace of Spades. Maverick took the bet and examined the deck to see if it was fair. The gambler then put the deck of cards on the table, took a knife out of his pocket and thrust the knife through the deck of cards into the table. He demanded his thousand dollars, whereupon Maverick pulled out the Ace of Spades from his pocket. He had taken it out when he examined the cards. The moral is: don't bet against a gambler or someone who can do magic tricks, because you may not be able to imagine what is going to happen.
jerry
May 19, 2007 at 12:55 PM PDT
kairosfocus, Thanks for all the long posts on CSI. I just saw them and have copied them to print out and read. Hopefully, sometime this weekend I can get a chance to see how much of it I can digest, and I will respond with any questions I have. One of the good things about this site is that there are many people who care and think out what they say. So thank you again, and I will see what I can assimilate.
jerry
May 19, 2007 at 05:45 AM PDT
Hi Jerry (et al.): Did that stab at defining and warranting CSI as a coherent and useful concept help? Note the points in essence: 1] CSI emerged circa the turn of the '80s, and it emerged from the general development of OOL research. Indeed, it served to help trigger the emergence of the first identifiable modern design-school analysis, TBO's TMLO [~ 1979 - 84]. 2] In particular, the concept of CSI was identified as OOL researchers sought to distinguish the sort of polymer molecular pattern seen in life forms from the simple order of repetitive crystals on one hand, and from random chains of monomers on the other. (In short, this was a further step in the discussion triggered by Prigogine's dissipative structures and similar cases of spontaneous ordering, e.g. the formation of crystals from solution, and that of a hurricane.) 3] In doing that, the concept of an informational macromolecule emerged, and it was seen as complex, aperiodic and informational in function, thus specific and in effect storing information through physically instantiating a code, either in stored form [DNA] or in bio-functionally expressed form [proteins]. 4] Dembski's model and his upper-probability-bound estimates seek to identify whether it is credible that particular cases of such functionality could have formed through chance-dominated forces, as opposed to agency; deterministic natural forces being of only instrumental character here, as contingency dominates. To do that he in effect forms a chance-origin null hypothesis, then rejects it if its probability of occurrence, under reasonable principles of such calculation, falls below what could at all reasonably occur in the gamut of the observed cosmos over its reasonably estimated lifetime. [Cf on this the statistical thermodynamics use of phase spaces and the discussion of relative statistical weights of macrostates and associated probabilities, which undergirds the statistical form of the second law of thermodynamics. I discuss this in Appendix A in my always linked.] In short, there are two distinct levels to the issue, so to dismiss the concept as incoherent and/or irrelevant and/or factually inadequate and/or excessively ad hoc, both would have to be properly and cogently addressed. IMHCO, on long observation, that is simply not done in the attempted rebuttals I have seen, and especially in those I have engaged "live." Instead, routine resort is made to an evidentiary double standard, which I have come to descriptively term selective hyperskepticism. To that the proper counter is that we should have a criterion of consistent adequacy in addressing the realm of facts, so that we only need adequate evidence in particular cases, not "extraordinary" evidence. Trust that helps. Cheerio, GEM of TKI
kairosfocus
May 19, 2007 at 12:10 AM PDT