Uncommon Descent Serving The Intelligent Design Community

Lizzie Joins the ID Camp Without Even Knowing It!


Lizzie,

You continue to astonish.

In the first sentence of your reply to my prior post you wrote: “I know that it is possible for intelligent life-forms to send radio signals, because we do; my priors for the a radio signal to have an intelligent origin are therefore substantially above zero.”

As I demonstrated earlier, the issue is not whether nature or intelligent agents can cause radio signals. We know that both can. The issue is whether we have any warrant to distinguish this particular signal from a natural signal.

Then you write: “I know of no non-intelligent process that might generate prime numbers (presumably expressed as binary code), and so my priors on that are low.”

Upon a moment’s reflection I am certain you will agree that this is not, strictly speaking, correct. It is easy to imagine such a process. Imagine (as you suggested) a simple binary code that assigns two “dots” to the number “two” and three “dots” to the number “three” and five “dots” to the number “five” and so on, and also assigns a “dash” to delimit each number (a cumbersome code to be sure, but a conceivable one). In this code the series “dot dot dash dot dot dot dash” denotes the first two prime numbers between 1 and 100. Surely you will agree that it is well within the power of chance and mechanical necessity to produce a radio signal with such a simple sequence.
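For concreteness, here is a minimal sketch of that cumbersome code in Python (purely illustrative; the dot/dash scheme and the delimiter are exactly the ones just described):

```python
def primes_up_to(n):
    """Return the primes between 2 and n by trial division."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def encode(numbers):
    """Encode each number as that many 'dots', delimited by a 'dash'."""
    return " ".join("dot " * k + "dash" for k in numbers)

# The first two primes between 1 and 100, in the code described above:
print(encode(primes_up_to(100)[:2]))
# -> dot dot dash dot dot dot dash
```

A short burst like this is just the kind of output the paragraph above concedes that chance and mechanical necessity could produce.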

So what do we now know? We know that nature sends out radio signals. But that is not all we know. We know that it is entirely within the realm of reason to suppose that nature could send out a radio signal that denotes the first two prime numbers between 1 and 100 given a particular binary code.

From this information we must conclude that if the signal we received were only the first two prime numbers, we would have no warrant to assign a high probability to “intelligent cause.”

Nevertheless, we both know that your calculation (and it is a very good calculation, for which I commend you) is correct: the probability that this particular signal has an intelligent source is, for all practical purposes, “one.”

Nature can send out a radio signal.

Nature can embed a pattern in that signal that appears to generate prime numbers under the binary protocol we have designated.

Why, then, are we warranted to infer intelligent agency and not the work of nature as the cause of this particular signal?

The answer has nothing to do with your or my “intuition” about the signal.

The answer is that we both know that nature can do two things. (1) It can generate highly improbable patterns. Imagine ANY 500 bit long series of dots and dashes, and you will have a pattern that could not reasonably be replicated by chance before the heat death of the universe. And (2) it can generate specified patterns (for example, the two prime numbers we saw above).
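As a rough back-of-the-envelope check on point (1), here is a sketch with deliberately generous, explicitly assumed figures: the 10^57-atom count is the one used later in this thread, while the per-atom rate and the timespan are illustrative placeholders.

```python
from math import log10

BITS = 500
log_configs = BITS * log10(2)            # log10 of the number of 500-bit dot/dash strings

# Deliberately generous trial budget (illustrative, assumed figures):
# 10^57 atoms, each making 10^43 blind attempts per second (roughly one per
# Planck time), for 10^17 seconds (roughly the age of the universe).
log_trials = 57 + 43 + 17

print(f"distinct 500-bit dot/dash strings: ~10^{log_configs:.1f}")
print(f"blind trials available:            ~10^{log_trials}")
print(f"fraction of the space sampled:     ~10^{log_trials - log_configs:.1f}")
```

On these assumed numbers the trials cover only about 10^-33.5 of the space, so the chance of blindly replicating any one pre-specified 500-bit string is negligible, which is the sense of the claim above.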

We also know something about what nature cannot do. You said, “I know of no non-intelligent process that might generate prime numbers.” You were almost right. As I have already demonstrated, you should have said “I know of no non-intelligent process that might generate A COMPLEX PATTERN OF prime numbers.”

In other words, you and I know that while nature can do “specified,” and nature can do “complex,” it cannot do “specified and complex at the same time”! This is not your intuition speaking, Lizzie. Without seeming to know it, you have made an inference from the universal experience of the human race.
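To make the “complex and specified at the same time” criterion concrete, here is a deliberately crude sketch in Python. It is not Dembski's formal metric: the 500-symbol threshold and the 20-prime specification cutoff are illustrative stand-ins, and each dot or dash is counted as one bit, following the usage in this post.

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def specified(signal, min_primes=20):
    """Toy 'specification' check: does the dot/dash signal decode, under the
    delimiter scheme described above, to a long run of consecutive primes?"""
    groups = [g for g in signal.split("dash") if g.strip()]
    numbers = [g.count("dot") for g in groups]
    primes = [p for p in range(2, 10_000) if is_prime(p)]
    return len(numbers) >= min_primes and numbers == primes[:len(numbers)]

def complex_enough(signal, min_bits=500):
    """Toy 'complexity' check: is the signal past the 500-bit threshold,
    counting each dot or dash as one bit (as in the post)?"""
    return len(signal.split()) >= min_bits

def design_inferred(signal):
    """Infer design only when the signal is complex AND specified."""
    return complex_enough(signal) and specified(signal)

two_primes = "dot dot dash dot dot dot dash"
print(design_inferred(two_primes))   # False: far too short to clear either threshold
```

On this toy filter the two-prime signal clears neither check, while a long, uninterrupted run of consecutive primes clears both, which is exactly the distinction the post is drawing.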

Here’s the most important “take away” for purposes of the discussion we have been having: As much as you have bucked against the idea, you were able to make this design inference based upon nothing more than the character of the embedded signal (i.e., that it contained complex and specified information at the same time, that is to say, complex specified information).

Welcome to the ID camp, Lizzie!

Comments
A short addendum, and hopefully a summary of my point: there seem to me to be two quite separate issues here: 1) Can Darwinian "search" find solutions that are connected? 2) Are the solutions that we observe in nature connected? The answer to the first seems to me to be clearly "yes". I think it is highly likely that the answer to the second is also yes, but there are certainly gaps in our knowledge. Whether these are also gaps in the connectivity is what we are debating, I think.
Elizabeth Liddle
August 17, 2011 at 05:06 AM PDT
That is why the above remarks you have made above – after months of patient discussion and repeated explanation — come across as irresponsible, supercilious and willfully obtuse, indeed I can understand why some would see them as manifesting a passive aggressive strategy of resistance to the unwelcome.
kf, I am not "resisting" the above; I'm trying to point out that the probability argument is a straw man: of course there is an infinitesimal probability that complex biological structures will just pop into being because the right atoms or molecules happen to be next to each other at the right time. But no-one is claiming this. We all reject that hypothesis, but it tells us nothing. The question is which, of several candidate hypotheses, offers a plausible mechanism. So the probability arguments are irrelevant to any actual evolutionary argument. The argument isn't that evolutionary processes can find deeply isolated islands of function; it's that biological functions are not deeply isolated! There is no point in keeping on trying to persuade me that evolutionary search can't find deeply isolated islands of function, because I completely agree! Nobody disagrees.

What you have there is an IC argument, not a probability argument. And the trouble with IC arguments is that they are, essentially, arguments from ignorance - if we don't have a detailed account of how a given organism or function could have evolved incrementally, then it is potentially "IC". There isn't an obvious counter-argument to that in evolutionary theory; each IC candidate throws up a different set of problems, and the best evolutionary biologists can do, mostly, is to provide circumstantial evidence that points to plausible pathways. And, right now, for OOL we don't even have that, although OOL researchers seem quite excited at recent progress.

What evolutionary scientists can do, however, is point to the power of Darwinian algorithms to deliver complex solutions when the solution space is not a series of isolated islands, and also draw attention to genetic and palaeontological evidence that suggests that the evolutionary fitness landscape is similarly connected. Indeed the most important supporting evidence is the evidence Darwin himself drew attention to - that far from being "islands", the pattern of distribution of structures in organisms forms a connected tree. Sure, there are islands, too, but they are conspicuously uninhabited! So we do not have mammals with bird lungs, or six-limbed lizards, or birds with rotational symmetry.

Evolution can only, as you correctly state, find "solutions" that are connected; it can't leap. It is the view of evolutionary biologists, in general, that the data do not show leaps. I am aware that you, and many others in the ID movement, disagree, but that's the issue that needs to be debated, not how improbable a leap would be. We all agree that a leap would be improbable.
Elizabeth Liddle
August 17, 2011 at 05:00 AM PDT
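To illustrate the distinction Dr Liddle draws in the comment above, here is a Weasel-style toy in Python (after Dawkins' well-known demonstration). All of the parameters are arbitrary, and the scoring function deliberately builds in the target, which is itself the contested move: whether anything in biology supplies such a gradient is precisely what kairosfocus challenges in the replies below. The sketch only shows that the two scenarios, connected versus isolated, give wildly different answers.

```python
import random
import string

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def matches(s):
    """Count of positions agreeing with the target: the 'slope' that exists
    only if partial solutions are better than none (the connected case)."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy the string with a small per-character chance of a random change."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    brood = [current] + [mutate(current) for _ in range(100)]
    current = max(brood, key=matches)    # keep the best of each generation
    generations += 1

print(f"connected case: target reached in {generations} generations")

# The 'isolated island' case both sides agree on: a single blind draw of the
# whole 28-character string has probability (1/27)**28, about 1 in 10^40.
print(f"isolated case: one blind draw succeeds with p = {27.0**-28:.1e}")
```

That is why the disagreement turns on whether the landscape is connected, not on the improbability of a single leap, which both sides grant.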
[concluding] 22 --> Menuge is obviously correct and that is why the usual talking points about co-option are refuted by the reality of car parts stores. The part does not only have to be generically right, it has to be specifically right, and put in the right way in the right place for the machine to work again. Refuted to the point where such co-option rhetoric is plainly irresponsible and in some cases outright willfully deceptive. FRAUD, in one word. 23 --> Where the problem does not rise to the level of fraud, I have begun to get the impression that I am dealing with people who have never had to design and develop a moderately complex partly mechanical system that has to be properly integrated to work right, and/or who have never had to develop and debug a complex software program, and/or who are not open to see that there is a vast difference between a random string of gibberish and a 72+ ASCII character paragraph in contextually responsive, correctly spelled, grammatically correct English. 24 --> So, pardon me but, for serious reasons, I do not think that declarations like:
"some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don’t"
. . . are reasonable or responsible. Not after the past several months of discussions and patient, repeated explanations. 25 --> In that context where the REASONS and empirical data for identifying that body plans starting with the first will be deeply isolated in genome and proteinome space, I am also much less than amused to see a remark like:
I think it’s that people think that evos are saying that these astronomically unlikely events are in fact likely. They aren’t. They are saying that the events postulated as being astronomically unlikely are not the events being postulated by evolutionary theory.
26 --> To respond in the terms of Dawkins' Mt Improbable analogy -- and yes he is giving an argument by analogy, I am giving an argument on cutting down a phase space to a configuration or state space by leaving off momentum variables -- Mt Improbable, on much evidence as already summarised and as has been discussed for months in painful detail and/or as linked -- sits on an ISLAND of function. Until you get to the shores that island, questions about the easy back slope don't even arise. And the beyond astronomical challenge is not to move from shoreline to niches and peaks within the island of function, it is to get to the island. 27 --> Let me clip from my always linked, an apt remark by Gary Parker, via Royal Trueman:
A cell needs over 75 "helper molecules", all working together in harmony, to make one protein (R-group series) as instructed by one DNA base series. A few of these molecules are RNA (messenger, transfer, and ribosomal RNA); most are highly specific proteins. ‘When it comes to "translating" DNA’s instructions for making proteins, the real "heroes" are the activating enzymes. Enzymes are proteins with special slots for selecting and holding other molecules for speedy reaction. Each activating enzyme has five slots: two for chemical coupling, one for energy (ATP), and most importantly, two to establish a non-chemical three-base "code name" for each different amino acid R-group. You may find that awe-inspiring, and so do my cell-biology students! [Even more awe-inspiring, since the more recent discovery that some of the activating enzymes have editing machinery to remove errant products, including an ingenious "double sieve" system.[2],[3]] ‘And that’s not the end of the story. The living cell requires at least 20 of these activating enzymes I call "translases," one for each of the specific R-group/code name (amino acid/tRNA) pairs. Even so, the whole set of translases (100 specific active sites) would be (1) worthless without ribosomes (50 proteins plus rRNA) to break the base-coded message of heredity into three-letter code names; (2) destructive without a continuously renewed supply of ATP energy [as recently shown, this is produced by ATP synthase, an enzyme containing a miniature motor, F1-ATPase.[4],[5],[6],[7]] to keep the translases from tearing up the pairs they are supposed to form; and (3) vanishing if it weren’t for having translases and other specific proteins to re-make the translase proteins that are continuously and rapidly wearing out because of the destructive effects of time and chance on protein structure! [8]
28 --> To that, we can add the astonishing complexity of the ATP Synthase molecular factory that makes the steady supply of ATP molecules required to energise the cell, and many other associated nanomachines required to carry out the processes of life. To get to a viable self-replicating metabolic automaton is an exercise in the most complex and against-the-flow sort of molecular engineering and nanotechnology. 29 --> Then, to move up to the body plans level, the best thing I can do is to point you to the remarks by the now expelled Sternberg, in this video, on how to make a whale. This one, on the cichlids, will also be illuminating on built-in capacity for adaptive radiation. [Both of course are to be found in the IOSE page on body plan origins issues, which you may find useful to read, as I have suggested several times.] 30 --> Translation: the pop genetics just does not add up within any reasonable estimate of the available time and resources on earth or in our observed cosmos. _________ That is why the above remarks you have made above -- after months of patient discussion and repeated explanation -- come across as irresponsible, supercilious and willfully obtuse, indeed I can understand why some would see them as manifesting a passive aggressive strategy of resistance to the unwelcome. Please do better than the above. A lot better. GEM of TKI
kairosfocus
August 17, 2011 at 04:10 AM PDT
[oops, modded too many links, try again] 15 --> Loennig of the Max Planck Institute, adds:
examples like the horseshoe crab [supposedly, a 250 mn yr living fossil] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by 'living fossils' in the present world of organisms when applying the term more inclusively as "an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time" [85] . . . . One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if "several well-matched, interacting parts that contribute to the basic function" are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because "the removal of any one of the parts causes the system to effectively cease functioning") such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process -- or perish . . . . According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski's criterion of specified complexity . . .
16 --> It is plain that what is on the ground is a scrubland of bushes model, with roughly family level body plans adapting to environmental niches. Precisely what would happen once one moves onto an island of function then spreads out across it. The Darwinian tree of life is a dead icon. 17 --> Now, have I made a blunder of thinking that "all parts of an island have to appear simultaneously"? 18 --> Frankly, this is a highly misleading strawmannish caricature of the real and unavoidable challenge: functionally specific complex organisation often -- indeed, typically -- exhibits irreducible complexity, so that the core function (manifest in the basic body plan) has to arrive all at once based on the right sized, matching parts all put together in the right way or the function will simply not be there. 19 --> We may have variations on the basic theme, but that core has to be there in the right config or there will be no function. That is a common fact of life for writing sentences, for programming computers, for building musical instruments or houses, and so on and so forth. 20 --> In this case each body plan has to be embryologically feasible, based on early mutations that can affect the body plan -- precisely those most likely to be lethal. (Hence the problem of miscarriages.) 21 --> In the ID foundations series, no 3, I cited Angus Menuge:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met: C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function. C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time. C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed. C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant. C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly. ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)
[ . . . ]
kairosfocus
August 17, 2011 at 04:10 AM PDT
Dr Liddle: Pardon, but instead I think the above aptly shows that you do not understand the issue that CSI highlights. This, I believe, is due to prior commitments. For instance, I see a telling exchange just above:
[KF:] o: Remember, the 500 bit threshold is equivalent o having a cubical haystack 1 light month across, and picking ONE straw sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference, overwhelmingly you are going to end up with straw. {EL:} Right – if the needles are unconnected. The Darwinian contention is that they are not.
Not at all. Did you notice how I pointed out several times that a whole solar system could be lurking in the haystack and it would make but little difference to the search challenge? The problem is that you are taking so disproportionately small a sample of possibilities, due to the explosive exponentiation of possibilities vs the scope of Planck Time quantum states for the solar system since its founding, that you are overwhelmingly unlikely to pick up the UNrepresentative in any reasonable blind sample. That is why your earlier objection also fails:
[EL:] So let’s talk about those putative islands! The chance issue just isn’t in dispute. We all know that IF these functions are isolated islands, evolution can’t happen. The question is: are they isolated islands? From our side of the divide, the answer is no, and one of the reasons it is no, is that some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don’t.
This one goes to the heart of the problem, so let's take it in steps: 1 --> Take an arbitrary ASCII text string equal in length to the first 72 characters of this post. For all intents and purposes, that is 500 bits. There are many possible sense-making configs, some of which will be a few steps apart and can be imagined in aggregate to form an archipelago of islands. 2 --> One thing is certain, the number of gibberish configs vastly outnumbers these, so we can be certain that we are dealing with islands of specific function in a sea of non-functional configs. 3 --> This is demonstrated by the output of monkey at keyboard experiments, which I have excerpted on again and again for literally months, only to be brushed aside again and again. One more time, citing Wiki testifying against interest, to put the matter on the table squarely, but this time I will extend the clip slightly:
A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d... Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detects a match" (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text. More sophisticated methods are used in practice for natural language generation. If instead of simply generating random characters one restricts the generator to a meaningful vocabulary and conservatively following grammar rules, like using a context-free grammar, then a random document generated this way can even fool some humans
4 --> Saw the way that it is a processing power challenge to get to coherent text and it is a further challenge to detect it? Guess why: gibberish -- as common sense driven by the "law of averages" tells us -- is the overwhelming majority of the output. 5 --> Next, notice what improves performance: programming that sets up an algorithm that then guides random variation towards function and presumably may even improve the function by hill climbing, much as ID objector Zachriel boasts of and imagines is a fatal objection to the design inference. DESIGN out performs chance and necessity without direction. 6 --> But -- on track record of objecting to DNA exhibiting a linguistic, algorithmic 4-state digital code -- you will object, this has nothing to do with life forms. 7 --> Here, your objection (which to my recall you have never withdrawn) is astonishingly ill-informed. The code is real and is easily accessible. It functions for protein manufacture in the ribosome, specifying in step by step algorithmic sequence:
a: START (and lay down a Methionine AA);
b: elongate step by step by using tRNA taxicab molecules as position-arm machines with pre-loaded AAs based on coded assignment (recall, the CCA -- COOH link is generic; it is a recognising enzyme that loads a given tRNA with a given AA, and this can be reprogrammed, as has been demonstrated);
c: continue in a cycle until one of the STOP codons is reached;
d: release the protein, and perhaps pass it to a chaperone unit to ensure correct folding, and maybe taxicab it with kinesin to its work site.
8 --> Protein fold domains, as has been pointed out to you over and over to the point of frustration, are deeply isolated, and the capacity to fold and to function crucially depends on the programmed AA sequence. THESE ARE ISLANDS OF ISOLATED FUNCTION.
. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [[The Structure of Evolutionary Theory (2002), p. 752.] . . . . The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants." [[p. 753.] . . . . proclamations for the supposed ‘truth’ of gradualism - asserted against every working paleontologist’s knowledge of its rarity - emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]
14 --> Meyer's summary in PBSW -- a case of expulsion and blaming the victim by the reigning orthodoxy, where I have seen many irresponsible remarks that distract from the clear report on investigation that the paper "passed proper peer review by renowned scientists" -- is even more pointed:
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
[ . . . ]
kairosfocus
August 17, 2011 at 04:07 AM PDT
kairosfocus: Thank you for this summary of your argument. I appreciate that you consider that my failure to appreciate it must be due to recalcitrance on my part. I do not believe it is, but having been in the equivalent position myself, I understand how it looks. So let me have a go at dealing with it point-by-point, which your point-by-point layout facilitates nicely:
So, I comment: a: The null hyp in testing under Fisherian conditions exists in the context of an alternative. This is somewhat related to what is happening with the EF where high contingency is to be explained.
Certainly null hypothesis testing is the testing of an alternative to the null.
b: For the Explanatory filter, the first default hyp is actually necessity, leading to low contingency under similar starting conditions, e.g. the dropped heavy object falls. This has been discussed with Dr L before, fruitlessly so I am simply noting for record.
The EF is interesting, kf, in that it has two successive hypothesis testing phases. I was in fact referring to Dembski's later integrated formulation. But let's say that the first hypothesis to be tested is: do biological organisms come about by the direct, one-stage action of a natural law? Well, it certainly doesn't look like it, and no-one is claiming that, so, no! I think we can reject that without recourse to a probability distribution at all. Although some OOL researchers think that eventually life will be observed forming in a lab, nobody thinks it will be a non-contingent process - it will be contingent on a large number of variables that the scientists have yet to discover and tune.
c: Where there is high contingency under initial conditions, then chance and/or agency may be involved, e.g what side of a die comes up when it is dropped.
Right - so stochastic processes are likely to be important, and whether life emerges from given initial conditions will have a probability distribution.
d: the second default is chance, i.e a sample of the config space that is a random in some relevant sense. And chance can be seen as relevant in the same sense that it drives dice, it comes up in the inescapable noise in real world measurements, it disturbs telecomms systems and is known to appear in chemical reactions where the particular outcome is a statistical pattern, cf JM’s current discussion on pre life chemistry and what happens in the real world outside the neatly controlled lab. It even drives time’s arrow, the second law of thermodynamics. So, pardon me, it may not be neat and sweet but it is reality. Ever watched “grass” on a good old D52 CRO screen? having done that many a day, please don’t try to tell me noise and chance are not neat, easily digested cut and dry hypotheses. They are observable reality, as close as what temperature means: a measure of molecular chaos, and as significant as the resulting noise in telecomms systems, or the aging of a component vulnerable to activation processes.
Well, here is where we disagree, kf. No, I don't think "chance" is a "cut and dry hypothesis". Or rather, if you want to present "chance" as the null, you need to actually compute the pdf of the chance hypothesis you want to model. For example, if you were modeling the "chance" hypothesis of a coin, you'd need the right pdf. The "chance" hypothesis for a fair coin would have a mean of .5. The "chance" hypothesis for a biased coin might have a mean of .6. Both could be "chance" hypotheses for different questions; for example, you might want to test the hypothesis that instead of your usual biased coin with a bias of .6 for tails, you'd accidentally picked up some fellow shyster's coin with a bias of .6 for heads. That was the point I was trying to make - "chance", in itself, is not a hypothesis. The pdf of the null has to be specified in order to make Fisher testing work, and that's what I'm not seeing in EF or CSI calculations - any derivation of that pdf.
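A minimal sketch of this point, with made-up numbers: the very same observation receives a different p-value depending on which "chance" distribution is actually written down as the null.

```python
from math import comb

def upper_tail_p(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability that a 'chance'
    process with per-toss probability p gives k or more heads in n tosses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

heads, tosses = 70, 100   # the observed data (illustrative numbers)

# The same observation tested against two different 'chance' nulls:
print(f"null = fair coin (p = 0.5):   p-value = {upper_tail_p(heads, tosses, 0.5):.2e}")
print(f"null = biased coin (p = 0.6): p-value = {upper_tail_p(heads, tosses, 0.6):.3f}")
```

Without a step like this, "chance" is a label rather than a testable null, which is the gap being pointed to in the EF/CSI calculations.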
e: Where we have islands of function or specified zones in large config spaces, the logical thing is if there is something that pushes you to those zones, or you have no good reason to infer this. Absent a chemist and considerable manipulation, i.e. design, we have no reason to think a warm little pond etc is going to be pushed in the desired directions. See JM’s remarks here. Watch proteins first AND RNA first disintegrate. The tree of life at best would have a root deep in a config space well beyond he search capacity of our observed cosmos. That molecular nanotech lab several generations beyond Venter is looking better and better as a hyp.
Well, that is a separate issue, of course. Different people will have different priors regarding the plausibility of OOL hypotheses. As for the "islands of function" - perhaps there are - essentially an "island of function" is an "irreducibly complex" function, right? And I would seriously contest the claim that there are any IC functions in living things, Behe notwithstanding. As for "pushed in the desired direction" - the whole point of Darwin's theory is that there is no "desired direction" and no "push". Indeed, I'd say that "pull" is a better metaphor than "push" - populations roll into attractor basins imposed by the environment, where anything that inhibits reproduction is minimised and anything that facilitates it is maximised. And as the environment changes, as it must, the population rolls with it, unless it gets stuck in a local minimum. "Hill climbing" is another metaphor, but it does tend to imply a higher energy state. But it's essentially the same metaphor.
f: Now, in evolutionary models ever since Darwin, the discussion is on hill climbing by differential reproductive success, but that presumes being ON an island of function already. Big begged question, for OOL and for body plans.
Well, I don't think it's begged for body plans. For OOL, yes - we do not yet know how simple the first Darwinian-capable self-replicator was.
g: Until you are on an island of function hill climbing adaptation mechanisms cannot kick in, so we see the issue already highlighted: origin of life and/or of major body plans requiring complex functional info.
Well, this assumes that life is an archipelago. I do not regard this as demonstrated.
h: From observations it is credible that first cell based life required 100+ k bits of genetic info, and new body plans required 10+ m bits. WENT OVER THAT ALREADY, DOD YOU TAKE NOTE?
Well, I dispute these assertions! Yes, I take note, but disagree!
i: These are so far beyond 500 – 1,000 bits to come form lucky noise that it is a no brainer that we are looking at huge haystacks and tiny relative samples, on the gamut of accessible resources of our solar system or observed cosmos.
Well, yes, if true. That's the whole point - sure, if you are correct, life is impossible by Darwinian means. I don't think you are correct. I think your logic is fine but your premise faulty!
j: And that is exactly where the EF comes in. Once we have quantified the PTQS resources of our solar system and observed cosmos, it is reasonable to ask whether special and unrepresentative zones can reasonably be hit on by the known molecular level chaotic forces at work in warm little ponds for OOL or in triggering accidents in the chemistry of the living cell.
Well, no - sure the EF might come in at some stage, but only when you know what the potential mechanisms are, and when you are sure about the height above sea level of the lowest shoreline, as it were. That's what is in dispute. Nobody disputes that it is inconceivable that a highly complex biological object could come into being ex nihilo "by chance". What they - we - dispute is that we are talking about highly complex biological objects coming into being that way. The reason there is all this talking past each other is that one side keeps shouting "look! You can't get to these islands by chance!" and the other side is shouting "they aren't islands!".
k: The wall of 500 to 1,000 bits as the upper limit of our cosmic resources jumps out at us. Adaptation within an island of function is possible, but getting to such deeply isolated islands of function is not credible on the gamut of our observed universe, much less our solar system.
Sure. But it's the deeply isolated islands that are contested, not the probability of getting to them were they to exist.
l: That is why the test of generating coherent text in English by chance is so crucial as a test, one that evo mat advocates despise because they know the3 message. Spaces of 10^50 possibilities are searchable just barely, but spaces of 10^150 are patently not.
Right. Because those are deeply isolated islands.
m: Funny, when I was a kid, the monkeys at keyboards example was often used to persuade us that any config would eventually appear at random, but over the past few years that argument has vanished. No prizes for guessing why.
Because the proposal - Darwin's proposal - was that the islands aren't isolated - that they are contiguous, i.e. not islands, or, if islands, separated by wadeable water.
n: Monkeys at keyboards is a dead icon of evo, and one that the evo mat advocates hope would vanish into the memory hole. Turns out the monkeys have switched sides, and are now an icon of Design, along with the burning match and spinning flagellum, the walking kinesin molecule [talk about Imperial AT AT walker tanks!] pulling a vesicle along a microtubule highway, the ATP Synthase rotary motor enzyme etc etc!
Right! So let's talk about those putative islands! The chance issue just isn't in dispute. We all know that IF these functions are isolated islands, evolution can't happen. The question is: are they isolated islands? From our side of the divide, the answer is no, and one of the reasons it is no, is that some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don't.
o: Remember, the 500 bit threshold is equivalent o having a cubical haystack 1 light month across, and picking ONE straw sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference, overwhelmingly you are going to end up with straw.
Right - if the needles are unconnected. The Darwinian contention is that they are not.
p: the search resources to get to OOL and onward to major body plans just are not there without programming or other intelligent direction, period.
Only if OOL and "major body plans" are islands. If they aren't, then no, you don't need intelligent direction.
q: So, it is no surprise that when we actually test empirically — another point where the rhetoric on side issues obfuscates the actual real world result — we keep on seeing that FSCI is indeed an excellent sign of design. the text of this post is sufficient to show that what chance could not reasonably do, intelligence does routinely and quickly.
Well, what it means is that Behe and Meyer are making the right argument, and that the chance argument is only relevant once Behe is established to be right. At which point you don't even need a Fisher test. The trouble is that Behe and Meyer's arguments don't IMO actually work.
s: So blatant is this that the only explanation for why a plainly failed model prevails is that it has us in ideological captivity, to C19 positivism and its descendants.
No, I think it's that people think that evos are saying that these astronomically unlikely events are in fact likely. They aren't. They are saying that the events postulated as being astronomically unlikely are not the events being postulated by evolutionary theory. In other words, we are arguing over different things. Anyway, hope your island is behaving itself, and not too battered by storms :) Cheers, Lizzie
Elizabeth Liddle
August 17, 2011 at 01:51 AM PDT
So a hypothesis with no demonstrable plausibility should have a pr > 0?
junkdnaforlife
August 17, 2011 at 12:20 AM PDT
Nope. Not plain to me.
Timbo
August 16, 2011 at 08:28 PM PDT
Debacle?
Upright BiPed
August 16, 2011 at 06:01 PM PDT
I trust it should be plain -- yet again -- that CSI is most definitely not "useless." Good night.
kairosfocus
August 16, 2011 at 05:30 PM PDT
A few notes: By now it is clear that there will be no agreement so I simply note for record so the astute onlooker can see where Dr Liddle goes off the rails, as has been pointed out to her over and over again, but repeatedly brushed aside. I expect nothing different this time around. I am now getting tired of having to repeat this over and over again, being convinced that Dr Liddle is simply not open to hear this message. She is too locked into the failed paradigm to hear the force of anything that would break it up. Pardon if that sounds harsh, but it is meant to be frank though respectful. Dr Liddle please re-read that incident in Jn 8 again. To my mind at present, the resort to Bayesianism is simply yet another confusing distractor, as the real issue is to find islands of function and not so much to try to identify how likelihood ratios can be estimated on prior probability estimates etc. DOWN THAT ROAD LIE ALL SORTS OF CONFUSIONS, DRIVEN BY LEWONTINIAN A PRIORISM WHICH INDEED AHEAD OF ANY EVIDENCE HAS DECIDED THAT DESIGN IS VERBOTEN, MOST STRICTLY VERBOTEN. So yes my point on closed minded refusal to even entertain design as a possibility is all too patently relevant. Prof Lewontin put this on the table for all to see, and the US NAs etc show that this is not just one or a few, it is the system. Mutabaruka drumming in the head: De System, de system, de system is a FRAUD . . . Never mind the other dust-ups on deciding how to put numbers into the parameters identified under the relevant circumstances. The algebra is nice and pretty, the application is not. I come from the old school of having to deal with less than neat and sweet realities, so I am not overly impressed by pretty math exercises that run into problems on getting down to the real world. Just like in management, you will see pretty academic exercises on net present value, only to run into the real world of practical finance where Internal Rate of Return is often king, because real world managers will have less of a dust up over comparing rates of interest equivalent to a project, never mind the potential pathologies in the math. The mathematically neater may not be the better on the ground, save where the pathologies are real. Let's keep things simple, and reserve heavy artillery that is ever so hard to set up for when it is necessary. So, I comment: a: The null hyp in testing under Fisherian conditions exists in the context of an alternative. This is somewhat related to what is happening with the EF where high contingency is to be explained. b: For the Explanatory filter, the first default hyp is actually necessity, leading to low contingency under similar starting conditions, e.g. the dropped heavy object falls. This has been discussed with Dr L before, fruitlessly so I am simply noting for record. c: Where there is high contingency under initial conditions, then chance and/or agency may be involved, e.g what side of a die comes up when it is dropped. d: the second default is chance, i.e a sample of the config space that is a random in some relevant sense. And chance can be seen as relevant in the same sense that it drives dice, it comes up in the inescapable noise in real world measurements, it disturbs telecomms systems and is known to appear in chemical reactions where the particular outcome is a statistical pattern, cf JM's current discussion on pre life chemistry and what happens in the real world outside the neatly controlled lab. It even drives time's arrow, the second law of thermodynamics. 
So, pardon me, it may not be neat and sweet but it is reality. Ever watched "grass" on a good old D52 CRO screen? having done that many a day, please don't try to tell me noise and chance are not neat, easily digested cut and dry hypotheses. They are observable reality, as close as what temperature means: a measure of molecular chaos, and as significant as the resulting noise in telecomms systems, or the aging of a component vulnerable to activation processes. e: Where we have islands of function or specified zones in large config spaces, the logical thing is if there is something that pushes you to those zones, or you have no good reason to infer this. Absent a chemist and considerable manipulation, i.e. design, we have no reason to think a warm little pond etc is going to be pushed in the desired directions. See JM's remarks here. Watch proteins first AND RNA first disintegrate. The tree of life at best would have a root deep in a config space well beyond he search capacity of our observed cosmos. That molecular nanotech lab several generations beyond Venter is looking better and better as a hyp. f: Now, in evolutionary models ever since Darwin, the discussion is on hill climbing by differential reproductive success, but that presumes being ON an island of function already. Big begged question, for OOL and for body plans. g: Until you are on an island of function hill climbing adaptation mechanisms cannot kick in, so we see the issue already highlighted: origin of life and/or of major body plans requiring complex functional info. h: From observations it is credible that first cell based life required 100+ k bits of genetic info, and new body plans required 10+ m bits. WENT OVER THAT ALREADY, DOD YOU TAKE NOTE? i: These are so far beyond 500 - 1,000 bits to come form lucky noise that it is a no brainer that we are looking at huge haystacks and tiny relative samples, on the gamut of accessible resources of our solar system or observed cosmos. j: And that is exactly where the EF comes in. Once we have quantified the PTQS resources of our solar system and observed cosmos, it is reasonable to ask whether special and unrepresentative zones can reasonably be hit on by the known molecular level chaotic forces at work in warm little ponds for OOL or in triggering accidents in the chemistry of the living cell. k: The wall of 500 to 1,000 bits as the upper limit of our cosmic resources jumps out at us. Adaptation within an island of function is possible, but getting to such deeply isolated islands of function is not credible on the gamut of our observed universe, much less our solar system. l: That is why the test of generating coherent text in English by chance is so crucial as a test, one that evo mat advocates despise because they know the3 message. Spaces of 10^50 possibilities are searchable just barely, but spaces of 10^150 are patently not. m: Funny, when I was a kid, the monkeys at keyboards example was often used to persuade us that any config would eventually appear at random, but over the past few years that argument has vanished. No prizes for guessing why. n: Monkeys at keyboards is a dead icon of evo, and one that the evo mat advocates hope would vanish into the memory hole. Turns out the monkeys have switched sides, and are now an icon of Design, along with the burning match and spinning flagellum, the walking kinesin molecule [talk about Imperial AT AT walker tanks!] pulling a vesicle along a microtubule highway, the ATP Synthase rotary motor enzyme etc etc! 
o: Remember, the 500 bit threshold is equivalent to having a cubical haystack 1 light month across, and picking ONE straw sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference, overwhelmingly you are going to end up with straw. p: the search resources to get to OOL and onward to major body plans just are not there without programming or other intelligent direction, period. q: So, it is no surprise that when we actually test empirically -- another point where the rhetoric on side issues obfuscates the actual real world result -- we keep on seeing that FSCI is indeed an excellent sign of design. The text of this post is sufficient to show that what chance could not reasonably do, intelligence does routinely and quickly. s: So blatant is this that the only explanation for why a plainly failed model prevails is that it has us in ideological captivity, to C19 positivism and its descendants. Okay, am I clear enough now? (And I won't even bother to more than note that MF has long ago decided to studiously ignore instead of cogently address. That speaks volumes on what he is doing here at UD, and I have not forgotten where the one who threatened my family got his start before he ran totally out of control.) Okay, it should be plain enough, onlookers. GEM of TKI
kairosfocus
August 16, 2011 at 05:27 PM PDT
would = would be
Elizabeth Liddle
August 16, 2011 at 03:54 PM PDT
Because it would assuming the consequent.
Elizabeth Liddle
August 16, 2011 at 03:42 PM PDT
"Now, you could argue, and Dembski and Marks do, that the probability of alternative hypotheses are indeed zero, but that is the case that has to be made." Currently no remotely plausible scenario exists, as you yourselves know first hand with the Upright Biped debacle, so why is setting pr = 0 for alternative hypothesis not a valid provisional variable?junkdnaforlife
August 16, 2011 at 03:06 PM PDT
kf: I'm not sure whether you read my response to William Dembski, but your post does not address my key point, which is that "chance", in itself, is not a hypothesis. The reason it is invoked, in Fisherian statistics, is that it refers to the probability that a random ("chance") sample from the population with the probability distribution under the null hypothesis would have looked like your observed data. The problem with the CSI formulation is that the null hypothesis is not specified, and all alternative hypotheses are given an a priori probability of zero. Now, you could argue, and Dembski and Marks do, that the probability of alternative hypotheses are indeed zero, but that is the case that has to be made. You can't just reject the (unspecified) null, produce a p value, together with the cosmological alpha criterion, and say "therefore design". That's why I keep saying that CSI is useless. It doesn't pose the question we want to know the answer to.
Elizabeth Liddle
August 16, 2011 at 01:34 PM PDT
Onlookers: I am simply noting for record that the main issues raised in support of Bayesianism were addressed from here on at the top of the thread. Just reflect on the sound point WmAD made in 14 above, in light of a reduction to likelihoods, in the onward linked from the above. Before I clip, let me cite Dembski:
Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH. She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it). Liddle makes off that CSI is a fuzzy concept when her notion of prior probabilities when applied to design inferences is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That’s never the case in these discussions, where the evidence of design is purely circumstantial.
Clipping my own remarks again: having done the algebra, we come to a point where we deduce a ratio that allows relative comparison of the degree of warrant for two theories in light of the evidence, E being the observed evidence and T1 and T2 the theories in contest:

__________
>> P[T2|E]/ P[T1|E] = LAMBDA = {P[E|T2]/ P[E|T1]} * {P[T2]/ P[T1]} >>
__________

Now, here is the key trick: who assigns P(T1) or P(T2), and on what grounds, in a subjective context? That is, if I set P(T2) = 0, say [on whatever clever argument -- cf. here Lewontinian a priori materialism], then I can stoutly insist that T2 is simply not good enough; i.e., I have a suitably mathematicised excuse for the fallacy of the closed mind in this context, where there are no background epidemiology studies to set the values.

Expanding somewhat on the clip, we can see why an approach based on the fact that most reasonable samples of a large population will represent its typical values, not its atypical values, is a quite reasonable approach:

___________
>> P[T2|E]/ P[T1|E] = LAMBDA = {P[E|T2]/ P[E|T1]} * {P[T2]/ P[T1]}

Thus, the lambda measure of the degree to which the evidence supports one or the other of the competing hypotheses T2 and T1 is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the "assuming the theory" objection, as already noted), times the ratio of the probabilities of the theories themselves. [In short, if we have the relevant information we can move from probabilities of evidence given theories to, in effect, relative probabilities of theories given evidence, in light of an agreed underlying model.]

All of this is fine as a matter of algebra (and onward, calculus) applied to probability, but it confronts us with the issue that we have to find outright credible real-world probabilities for T1 and T2 (or onward, for the underlying model that generates a range of possible parameter values). In some cases we can get that; in others we cannot; but at least we have eliminated p[E]. Then, too, what is credible to one may not at all be so to another. This brings us back to the problem of selective hyperskepticism and the possible endless spinning out of -- too often specious or irrelevant but distracting -- objections [i.e. closed-minded objectionism].

Now, by contrast, the "elimination" approach rests on the well-known, easily observed principle behind the valid form of the layman's "law of averages": namely, that in a "sufficiently" and "realistically" large sample [i.e. not so large that it is unable or very unlikely to be instantiated], wide fluctuations from the "typical" values characteristic of the predominant clusters are very rarely observed. [For instance, if one tosses a "fair" coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split in no apparent order. So if the observed pattern turns out to be ASCII code for a message, or to be nearly all heads, or alternating heads and tails, or the like, then it is most likely NOT to have been produced by chance. (See, also, Joe Czapski's "Law of Chance" tables, here.)]

Elimination therefore looks at a credible chance hypothesis and the reasonable distribution across possible outcomes it would give [or, more broadly, the "space" of possible configurations and the relative frequencies of the relevant "clusters" of individual outcomes in it]; something we are often comfortable in doing. Then we look at the actual observed evidence in hand, and in certain cases -- e.g. Caputo -- we see it is simply too extreme relative to such a chance hypothesis, per probabilistic resource exhaustion.

So the material consequence follows: when we can "simply" specify a cluster of outcomes of interest in a configuration space, and that space is sufficiently large that a reasonable random search will be maximally unlikely, within the available probabilistic/search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it got there by agency. [Thus the telling force of Sir Fred Hoyle's celebrated illustration of the utter improbability of a tornado passing through a junkyard and assembling a 747 by chance. By far and away, most of the accessible configurations of the relevant parts will most emphatically be unflyable. So, if we are in a flyable configuration, that is most likely by intent and intelligently directed action, not chance.]

We therefore see why the Fisherian, eliminationist approach makes good sense even though it does not line up so neatly with the algebra and calculus of probability as would a likelihood or full Bayesian type approach. Thence, we see why the Dembski-style explanatory filter can be so effective, too. >>
____________

So, the Bayes rabbit trail is a distraction, a storm in a teacup. There is an excellent reason, tied to the valid form of the law of averages, why the explanatory filter works. And if the why of that is not obvious, think about what the solar-system-scope EF is saying: if you can take at most a straw-sized sample from a cubical hay bale a light-month across, then even if a great many needles are in it, and even if it has our solar system in it out to Pluto, by overwhelming probability you are going to pick up straw and nothing else. So much so that if someone claims to have picked up a needle at random from such a bale on a single trial -- equivalent in scope to one sample of the Planck-time quantum states of our solar system's roughly 10^57 atoms since the usually dated Big Bang -- you would be entitled to disbelieve him. Some lotteries are not credibly winnable.

GEM of TKI
kairosfocus
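To make the quoted algebra concrete, here is a minimal Python sketch (my own illustration, not part of the comment; the likelihood ratio, priors, and coin-toss numbers are made up) showing how the posterior odds swing with whatever prior odds one plugs in, and how small the tail probability of a large fluctuation in 500 fair tosses comes out:

```python
# Minimal sketch (not from the original comment; numbers are illustrative).
# Part 1: posterior odds = likelihood ratio x prior odds -- the algebra quoted
# above -- showing how a subjectively assigned prior drives the conclusion.
from scipy.stats import binom

def posterior_odds(p_e_given_t2, p_e_given_t1, prior_t2, prior_t1):
    """P(T2|E)/P(T1|E) = [P(E|T2)/P(E|T1)] * [P(T2)/P(T1)]."""
    return (p_e_given_t2 / p_e_given_t1) * (prior_t2 / prior_t1)

for prior_t2 in (0.5, 1e-6, 0.0):            # indifferent, hostile, closed-minded
    odds = posterior_odds(1000.0, 1.0, prior_t2, 1.0 - prior_t2)
    print(f"prior P(T2) = {prior_t2}: posterior odds T2:T1 = {odds:.3g}")
# With P(T2) = 0 the posterior odds are 0 no matter how strong the evidence --
# the "mathematicised excuse" complained of above.

# Part 2: the eliminationist side -- how unlikely is a large fluctuation from
# the typical 50-50 split in 500 tosses of a fair coin?
print(f"P(>= 400 heads in 500 fair tosses) = {binom.sf(399, 500, 0.5):.3g}")
```

Run as written, the zero prior forces the posterior odds to zero regardless of the evidence, while the 500-toss tail probability is vanishingly small -- the two sides of the argument above.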
August 16, 2011 at 12:23 PM PDT
Cheap-shot emptily dismissive talking points. Please, do better than that.
kairosfocus
August 16, 2011 at 11:49 AM PDT
Barry, first of all, David Wheeler and Tao Tao started from the certain knowledge that the artificial genome was designed, and that it contained a "watermark"! Not only that, but they knew it was designed by a human being who spoke a language with a Roman alphabet. And not only that, but they knew the alphabetic English shorthand for each codon! How can you possibly think this method could tell you whether a string of unknown provenance contained information or not? Seriously, my mind boggles!
Elizabeth Liddle
August 16, 2011 at 09:17 AM PDT
It's ironic (but not surprising) that the best collection of evidence of the scientific vacuity of Intelligent Design lies within the posts of a blog dedicated to its promotion.
lastyearon
August 16, 2011 at 08:56 AM PDT
Elizabeth, Markf: from someone not involved in statistical analysis on a daily basis, thank you for your clear, concise and rigorous rebuttals of the concept of CSI as a marker of intelligent design.
lastyearon
August 16, 2011 at 08:51 AM PDT
I was taught stats by a somewhat eccentric professor who would fail papers if you gave a p value! He'd return the paper with red ink all over it, saying "DO NOT DO THIS", and would withhold a pass mark until you'd deleted it. It was effect sizes or nothing. Needless to say, I have never published a research paper that does not report a p value! But it did force us to think very hard about what our p values are a probability of :)
Elizabeth Liddle
August 16, 2011 at 05:31 AM PDT
William Dembski: Thank you for responding to my posts. You write:
Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH.
Possibly, but I don’t find the discussion resolved by your more recent writing, I’m afraid.
She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it).
Well, no, I don't so characterise it. Fisherian approaches are, of course, the workhorse of statistical hypothesis testing in science, but increasingly, Bayesian approaches are used to resolve questions as to which, of several hypotheses that all give good fits to the data, is most likely to be true. The reason for this is, of course, that Bayesian methods pose the question: "what is the most likely explanation for these data?", which is the question we actually want to know the answer to, whereas Fisherian approaches ask the highly counter-intuitive (and widely misunderstood) question: "given the null, how likely are the data?"

That's fine-ish for very circumscribed null hypotheses, for example the null hypothesis that two samples of data are drawn from the same, well-defined population, or that the number of correct responses on a multiple-choice test is no more than you'd expect under the null hypothesis that the candidate is guessing. But it's absolutely useless for addressing the question, "what is the most likely explanation for these data?". And the reason it's useless becomes obvious as soon as you start to consider alternative theories. We can use Fisherian null hypothesis testing to conclude that a candidate actually knows the answers as long as we can be sure that that is the only alternative to the null hypothesis that she is guessing. But there are other possibilities -- that she was given the answer numbers in advance and memorised them, or that she is copying them from another candidate. In the Fisherian model, the probability of these alternatives is set, a priori, at zero, hence the "illusion of theory confirmation" cited in that Gill paper linked by MarkF.

And when it comes to ID, we don't have a clearly specified null. Is it that biological organisms just happen to assemble themselves from atoms that just happened to find themselves in close enough proximity to bind into the observed molecules with the observed conformations? We don't even have to do a CSI calculation to reject that null. The entire science of evolutionary theory assumes the rejection of that null and searches for alternate hypotheses. So what we want to know is: of the various postulated mechanisms by which biological organisms might have come about, which is the most likely? Which is why the Bayesian approach is the right approach.

But that takes more than just math. The beauty, and also the downside, of Bayesian approaches is that they tell you how probable an explanation is, given what you know. It isn't an absolute value, like the p value you extract from a Fisherian hypothesis test, which, although appealingly precise, is illusory, because it hides the zero probability you have implicitly assigned to anything other than your research hypothesis.

As for Barry's example (actually I think you have priority on it :)) of a radio signal consisting of prime numbers in binary code: in order to reject a non-intelligent source as an explanation, using Fisherian hypothesis testing, you'd be asking: "how likely is the signal, given the null hypothesis that it comes from a non-intelligent source?" How do you even start to compute your null? We don't even know what the population of non-intelligent radio sources is, and we have no clue as to what the distribution of signal patterns would be.
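As a concrete version of the multiple-choice example above, here is a rough Python sketch (my own illustration; the score, test length, and per-hypothesis success rates are invented) contrasting the Fisherian tail probability under "she is guessing" with likelihoods under several explicit rival explanations:

```python
# A rough sketch of the multiple-choice example (illustrative numbers, not
# from the comment). Fisherian question: how likely are >= k correct answers
# if the candidate is merely guessing? Bayesian-style question: which of
# several candidate explanations makes the observed score most probable?
from scipy.stats import binom

n_questions, n_options, k_correct = 50, 4, 42
p_guess = 1.0 / n_options

# Fisherian tail probability under the null "she is guessing"
p_value = binom.sf(k_correct - 1, n_questions, p_guess)
print(f"P(>= {k_correct} correct | guessing) = {p_value:.3g}")

# Likelihoods of the same score under explicit hypotheses. The per-question
# success rates are made-up stand-ins for "knows the material", "memorised
# leaked answers", and "copying a weaker candidate".
hypotheses = {"guessing": p_guess, "knows answers": 0.85,
              "memorised leaked answers": 0.99, "copying neighbour": 0.70}
for name, p in hypotheses.items():
    print(f"P(data | {name}) = {binom.pmf(k_correct, n_questions, p):.3g}")
# Rejecting "guessing" says nothing about which of the other explanations is
# true; comparing their likelihoods (weighted by priors) does.
```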
To be honest, if an extraterrestrial signal had any kind of unpredictable radio pattern, I'd start to wonder: "aliens?", because the only kind of non-intelligent extra-terrestrial radio signal I'm aware of is that from pulsars, and if the signal were anything other than something likely to be generated by simple harmonic motion, I'd prick up my ears. But that might be my ignorance. Which is precisely the point: Bayesian probabilities tell you what is likely given what you know. They drive you to find out more. Your Fisherian approach, in contrast, leads you to assume (fallaciously, IMO) that you have a valid conclusion (a significant p value), and to regard any further research into the nature of your hypothesised Designer as extraneous to the project.
Liddle makes out that CSI is a fuzzy concept when her notion of prior probabilities, as applied to design inferences, is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That's never the case in these discussions, where the evidence of design is purely circumstantial.
Well, no, I didn't say that CSI was a "fuzzy concept", or, if I inadvertently implied that I thought so, I must clarify that I do not. My problem with it is that it is too precise. Or rather, that it is precise but not accurate, and the trouble with measures that are precise but not accurate is that it is tempting to mistake the precision for accuracy. The lack of accuracy arises from the a priori setting of the probability of any alternative hypothesis to zero. That's as much a prior as any Bayesian prior, but its precision (exactly, and permanently, zero) is unwarranted, and, indeed, renders your whole ID inference circular. Of course Bayesian inferences are fuzzy. But better a fuzzy hit than a precise miss! There's a reason why sawn-off shotguns are the weapon of choice for certain purposes :)
I refer readers to two articles of mine relevant to this thread: (1) “Design by Elimination vs. Design by Comparison” (available at http://www.designinference.com....._Bayes.pdf), in which I clearly spell out how the Bayesian approach to design inferences is parasitic on my generalization of Fisher to CSI.
Thank you. Yes, I had already read this paper. I have two problems with it.

Firstly, you seem to set out to compare the two approaches as though they were approaches to answering the same problem. They aren't. They address different problems, and which you use depends on what problem you want to solve. If you want to know which explanation is the most likely explanation for your data, you ask a Bayesian question. If you want to know whether your data are unlikely under some null hypothesis, you ask a Fisherian question. This isn't Cavaliers versus Roundheads; it's hammers versus screwdrivers. I use both approaches in my work on a daily basis, and which I use depends very simply on the question I am asking. To give a practical example: if I want to know whether the brain activation induced by a task is different in different groups of participants, I ask a simple Fisherian question: if these participants are drawn from the same population (i.e. are not different on this measure), how likely would I be to observe the observed differences? And I get a nice low Fisherian p value, telling me it is very unlikely. However, if I now want to know, "what is the most likely explanation for the observed differences in activation between these two groups of participants?", then I have a Bayesian question, and so I use Bayesian Model Selection to tell me which of a set of possible models is the most likely explanation for each group's data. In fact, my classic response to anyone who comes up to me with a stats question in relation to data analysis is: "what question are you asking?" Once they know that, they know the right test.

But my second problem with this paper is your repeated reference to "the chance hypothesis". That's where the bodies are buried. "Chance" isn't an explanation, and so it isn't a hypothesis. When we test whether a coin is fair, and we reject the hypothesis that it is, we are not rejecting "chance" as an explanation for our data; we are rejecting the highly specific hypothesis "that the coin is fair". We can reject it because we know the probability distribution for the tosses under that null hypothesis. We refer to "chance" simply because, in order to infer from our sample to the population, we randomly sample. So when we say, loosely, that the data we observed are unlikely to have occurred "by chance", we don't mean that "chance" is unlikely to be the explanation for the data. What we mean is that if we randomly sampled from the distribution that we'd expect under our null, we would be unlikely to observe the data sample we want to test. So when you say, "When the Bayesian approach tries to adjudicate between chance and design hypotheses...", that makes no sense either, because chance is not an explanation; just as a Fisherian approach does not attempt to determine whether data are unlikely to be "due to chance", but rather whether, under some null hypothesis, a random sample would resemble our data sample. So the Bayesian approach does not try to adjudicate between any hypothesis and "chance". What it does is to adjudicate between two proposed explanations for your sample of data. And that is what is wrong with CSI -- not that it is Fisherian, or not-Bayesian, but that the null hypothesis is not specified, and so there is no way of calculating the probability distribution under the null (as there is for the null hypothesis that the coin is fair).
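For the coin example, a small sketch (again my own, with made-up counts) of the distinction being drawn here: an exact Fisherian test of the explicit null "the coin is fair" alongside a Bayes-factor comparison of that null against a rival model in which the bias is unknown (uniform prior on the heads probability):

```python
# Sketch of the coin example (my own illustration, not from the comment).
# Fisherian: is the observed count surprising under the explicit null "the
# coin is fair"? Bayesian: how does "fair" compare with a rival model in
# which the bias is unknown (uniform prior on the heads probability)?
from scipy.stats import binom, binomtest   # binomtest needs scipy >= 1.7
from scipy.integrate import quad

n_tosses, n_heads = 100, 68

# Fisherian two-sided exact test against p = 0.5
p_value = binomtest(n_heads, n_tosses, 0.5).pvalue
print(f"two-sided p-value under 'fair coin': {p_value:.3g}")

# Marginal likelihoods of the data under the two models
lik_fair = binom.pmf(n_heads, n_tosses, 0.5)
lik_biased, _ = quad(lambda q: binom.pmf(n_heads, n_tosses, q), 0.0, 1.0)
bayes_factor = lik_biased / lik_fair
print(f"Bayes factor (biased-unknown vs fair): {bayes_factor:.3g}")
# The p-value only measures surprise under the null; the Bayes factor compares
# two stated models -- the distinction drawn in the comment above.
```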
(2) “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” which answers Liddle’s vain hope that RV+NS can serve as a designer substitute. This article is available at http://evoinfo.org/publication.....ation-law/ and is also a chapter in THE NATURE OF NATURE.
Again, thank you for the link, although again, I have read it. I won’t address it here, although I consider it flawed. However, it certainly poses a more coherent question IMO.
Closing thought: If Bayes were such a boon to design inferences, then why don't we see more of them? When people in real life infer design on the basis of a small-probability event (and such events do regularly trigger design inferences), why don't they factor in the priors? Is it that they're just not properly educated in the logic of Bayes? Or perhaps it's that estimating priors in such circumstances is simply an exercise in handwaving. In any case, if design in biology is real, then Bayes should long ago have uncovered it. The fact that it has not, and that it is regularly used to insulate Darwinian evolution from probabilistic critique (Elliott Sober is the master of this), suggests that more objective probabilistic methods are called for -- such as CSI.
IANAL, but I'd have thought that Bayesian inferences were the stuff of design inferences: "did he fall or was he pushed?" Fisherian hypothesis tests won't help you with that kind of question, because it's not the question they pose. But to address your implication that Bayesian methods are "less objective" than Fisherian methods: as I hope I have made clear above, this is simply not the case. Fisherian methods, unless the null is clearly stated, simply hide their bias and deliver theory confirmation that is illusory. So we remain in disagreement :) Cheers, Lizzie
Elizabeth Liddle
August 16, 2011 at 05:20 AM PDT
This should have appeared as a response to comment 14 by William Dembski. I am sorry for any confusion.
markf
August 16, 2011 at 12:56 AM PDT
I find that I already wrote a discussion of the paper "Design by Elimination vs. Design by Comparison" back in 2006 as part of this. The discussion comes at the end, and as it is not very long I have repeated it here.
So far we have established that the use of specifications to reject a chance hypothesis has some problems of interpretation and has no justification, while comparing likelihoods seems to account for our intuitions and is justified. Dembski is well aware of the likelihood approach and has tried to refute it by raising a number of objections elsewhere, notably in chapter 33 of his book "The Design Revolution", which is reproduced on his web site (Dembski 2005b). But there is one objection that he raises which he considers the most damning of all and which he repeats virtually word for word in the more recent paper. He believes that the approach of comparing likelihoods presupposes his own account of specification.

He illustrates his objection with another well-worn example in this debate -- the case of the New Jersey election commissioner Nicholas Caputo, who was accused of rigging ballot lines. It was Caputo's task to decide which candidate comes first on a ballot paper in an election, and he is meant to do this without bias towards one party or another. Dembski does not have the actual data but assumes a hypothetical example where the party of the first candidate on the ballot paper follows this pattern for 41 consecutive elections (where D is Democrat and R is Republican):

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

This clearly conforms to a pattern that is very improbable under the hypothesis that Caputo was equally likely to make a Republican or a Democrat the first candidate. In fact it conforms to a number of such patterns for 41 elections, for example: there is only one Republican as first candidate; one party is represented only once; there are two or fewer Republicans; there is just one Republican and it falls between the 15th and 30th election; the sequence includes 40 or more Democrats; and so on.

Dembski has decided that the relevant pattern is the last one. (This is interesting in itself, as it is a single-tailed test and assumes the hypothesis that Caputo was biased towards Democrats. Another alternative might simply have been that Caputo was biased -- direction unknown -- in which case the pattern should have been "one party is represented at least 40 times".) His argument is that when comparing the likelihoods of two hypotheses (Caputo was biased towards Democrats, or Caputo was unbiased) generating this sequence, we would not compare the probability of the two hypotheses generating this specific event but the probability of the two hypotheses generating an event which conforms to the pattern. And we have to use his concept of a specification to know what the pattern is.

But this just isn't true. We can justify the choice of pattern simply by saying "this is a set of outcomes which are more probable under the alternative hypothesis (Caputo is biased towards Democrats) than under the hypothesis that Caputo is unbiased". There is no reference to specification or even patterns in this statement. This is clearer if we consider a different alternative hypothesis. Suppose that instead of suspecting Caputo of favouring one party or another we suspect him of being lazy and simply not changing the order from one election to another -- with the occasional exception. The "random" hypothesis remains the same -- he selects the party at random each time. The same outcome:

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

counts against the random hypothesis, but for a different reason -- it has only two changes of party.
The string: DDDDDDDDDDDDDDDDDDDDDDRRRRRRRRRRRRRRRRRRRR would now count even more heavily against the random hypothesis - whereas it would have been no evidence for Caputo being biased. So now we have two potential patterns that the outcome matches and could be used against the random hypothesis. How do we decide which one to use? On the basis of the alternative hypothesis that might better explain the outcomes that conform to the pattern. The comparison of likelihoods approach is so compelling that Dembski himself inadvertently uses it elsewhere in the same chapter of The Design Revolution. When trying to justify the use of specification he writes "If we can spot an independently given pattern.... in some observed outcome and if possible outcomes matching that pattern are, taken jointly, highly improbable ...., then it's more plausible that some end-directed agent or process produced the outcome by purposefully conforming it to the pattern than that it simply by chance ended up conforming to the pattern."
markf
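A short Python sketch of the likelihood comparison markf describes (the 0.9 pro-Democrat bias and 0.95 "stickiness" are arbitrary illustrative values; the 41-entry sequence stands in for the hypothetical 40 D / 1 R record):

```python
# Sketch of the likelihood comparison above (my own illustrative numbers).
seq = "D" * 22 + "R" + "D" * 18          # hypothetical 41-election record
n = len(seq)
n_D = seq.count("D")

# H0: unbiased -- each slot independently D or R with probability 1/2
lik_unbiased = 0.5 ** n

# H1: biased towards Democrats, P(D) = 0.9 on each draw
p_D = 0.9
lik_biased = p_D ** n_D * (1 - p_D) ** (n - n_D)

# H2: "lazy" -- repeats the previous party with probability 0.95,
#     with the first entry chosen at random
p_repeat = 0.95
lik_lazy = 0.5
for prev, cur in zip(seq, seq[1:]):
    lik_lazy *= p_repeat if cur == prev else (1 - p_repeat)

print(f"P(seq | unbiased)       = {lik_unbiased:.3g}")
print(f"P(seq | pro-D bias 0.9) = {lik_biased:.3g}")
print(f"P(seq | lazy, 0.95)     = {lik_lazy:.3g}")
print(f"likelihood ratio biased/unbiased: {lik_biased / lik_unbiased:.3g}")
# Which pattern "counts against" the random hypothesis depends on which
# alternative it is compared with -- the point about the two patterns above.
```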
August 16, 2011 at 12:53 AM PDT
I expect Lizzie will respond, but I must add a comment because this is so misleading. I agree that the Bayes/Fisher distinction is different from the Subjective/Frequentist distinction. As you say, you can adopt a Fisherian approach and a wide range of views of probability. Similarly you can be a Bayesian frequentist, or whatever. But, as I am sure you know, there are deep conceptual problems with Fisherian hypothesis testing. There are any number of articles pointing this out, e.g. The Insignificance of Null Hypothesis Significance Testing. In fact pure Fisherian hypothesis testing is not used overwhelmingly in the scientific community. For example, in a wide range of disciplines such as medical statistics, Fisherian approaches would not be allowed and it is required to use something like Neyman-Pearson. (Try getting a test for a drug through the authorities without calculating the power!) This tries to avoid the deep problems with the Fisherian approach by defining one more alternative hypothesis and comes closer to comparing likelihoods. However, I admit p-values are still, unfortunately, used all too often.

The biggest problem with classical hypothesis testing in all its shades is that it answers the wrong question. It doesn't try to work out the probability of the hypothesis given the data. Bayes answers the right question. This follows from the maths. As you say, there is a pragmatic problem because it may not be possible to make a reasonable estimate of prior probabilities. All other methods of hypothesis testing can be considered heuristics to overcome the difficulty of doing the correct Bayesian calculation. But it is bizarre to respond to not being able to answer a question with certainty by answering a different question instead! Especially when dealing with philosophically challenging issues such as the development of life and alien civilisations. What we should do is recognise the uncertainty in our answer and limit it as best we can. Needless to say, I think your article "Design by Elimination vs. Design by Comparison" is also mistaken, but that takes rather more space to address.
markf
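To illustrate the aside about power, here is a small sketch (illustrative effect size, alpha, and sample sizes, using a plain normal approximation rather than any particular regulatory method) of a Neyman-Pearson-style power calculation for a two-sample comparison:

```python
# Sketch of the power calculation mentioned above (effect size, alpha and
# sample sizes are made-up illustrative values). Power is the probability of
# rejecting the null when a stated alternative is true, which is exactly why
# the alternative hypothesis has to be specified explicitly.
from scipy.stats import norm

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test (standardised effect)."""
    se = (2.0 / n_per_group) ** 0.5          # SE of the mean difference in SD units
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size / se
    # probability the test statistic lands beyond either critical value
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

for n in (20, 50, 100, 200):
    print(f"n = {n:4d} per group -> power = {two_sample_power(0.4, n):.2f}")
# Regulators demand this kind of calculation precisely because a bare p-value
# says nothing about the alternative the trial is supposed to detect.
```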
August 15, 2011 at 11:44 PM PDT
Elizabeth Liddle:
And so, faced with a pattern that has CSI (and not all designed things do), the first thing I’d ask is: does it replicate with heritable variance in the ability to replicate? If so, I have a candidate for CSI generation right there.
Like humans?
If not, I look for an external designer.
Oh wait. Humans are external designers.
Mung
August 15, 2011 at 11:12 PM PDT
Elizabeth Liddle seems stuck where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH. She characterizes the Bayesian approach to probabilistic rationality as though that's what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach -- epistemic probabilities, Popper's propensities, and Bernoulli's indifference principle all work quite well with it). Liddle makes out that CSI is a fuzzy concept when her notion of prior probabilities, as applied to design inferences, is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That's never the case in these discussions, where the evidence of design is purely circumstantial.

I refer readers to two articles of mine relevant to this thread: (1) "Design by Elimination vs. Design by Comparison" (available at http://www.designinference.com/documents/2005.09.Fisher_vs_Bayes.pdf), in which I clearly spell out how the Bayesian approach to design inferences is parasitic on my generalization of Fisher to CSI. (2) "Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information," which answers Liddle's vain hope that RV+NS can serve as a designer substitute. This article is available at http://evoinfo.org/publications/lifes-conservation-law/ and is also a chapter in THE NATURE OF NATURE.

Closing thought: If Bayes were such a boon to design inferences, then why don't we see more of them? When people in real life infer design on the basis of a small-probability event (and such events do regularly trigger design inferences), why don't they factor in the priors? Is it that they're just not properly educated in the logic of Bayes? Or perhaps it's that estimating priors in such circumstances is simply an exercise in handwaving. In any case, if design in biology is real, then Bayes should long ago have uncovered it. The fact that it has not, and that it is regularly used to insulate Darwinian evolution from probabilistic critique (Elliott Sober is the master of this), suggests that more objective probabilistic methods are called for -- such as CSI.
William Dembski
August 15, 2011 at 09:25 PM PDT
Yes, actually. I do. See: http://www.wired.com/wiredscience/2008/01/venter-institut/
Barry Arrington
August 15, 2011 at 06:25 PM PDT
True. But science moves on.
Elizabeth Liddle
August 15, 2011 at 03:34 PM PDT
Just to make clear, in case this is the difficulty: when I said you did not need to know the identity or characteristics of the designer, I meant that it is perfectly possible, in many cases, to make an inference without those details. It's not a universally necessary requirement. But in the case above, I cannot see any way of doing it without at least some information about Craig Venter.
Elizabeth Liddle
August 15, 2011 at 03:31 PM PDT