
Of coin-tosses, expectation, materialistic question-begging and forfeit of credibility by materialists


Yesterday, I crossed a Rubicon, for cause, on seeing even the most genteel of the current wave of critics refuse to stop enabling denial, and refuse to correct and deal with slander.

It is time to face what we are dealing with squarely: ideologues on the attack. (Now, in the form of a wave of TSZ denizens back here at UD, hoping to swarm down on, twist into pretzels, ridicule and dismiss the basic case for design.)

[Image: Flipping a coin . . . is it fair? (Cr: Making Near Future Predictions, fair use)]

Sal C has been the most prolific recent contributor at UD, and a pivotal case he has put forth is the discovery of a box of five hundred coins, all heads.

What is the best explanation?

The talking-point gymnastics to avoid the obvious conclusion, which have been going on for a while now, have been sadly instructive about the mindset of the committed ideological materialist and that of his or her fellow travellers.

(For those who came in 30 years or so late, “fellow travellers” is a term of art that described those who made common cause with the outright Marxists, on one argument or motive or another. I will not cite Lenin’s less polite term for such.  The term, fellow traveller, is of obviously broader applicability and relevance today.)

Now, I intervened just now in a thread that further follows up on the coin tossing exercise and wish to headline the comment:

_____________

>> I observe:

NR, at 5: >> Biological organisms do not look designed >>

The above clip and the wider thread provide a good example of the sort of polarisation and refusal to examine matters squarely on the merits that too often characterises objectors to design theory.

[Image: A plot of typical patterns of coin tossing, showing the overwhelming trend for the percentage of H's to converge to the mean even as the absolute difference between H and T diverges across time. (Credit: problemgambling.ca, fair use)]

In the case of coin tossing, all heads, all tails, alternating H and T, etc. are all obvious patterns that are simply describable (i.e. without in effect quoting the strings). Such patterns can be assigned a set of special zones, Z = {z1, z2, z3 . . . zn} of one or more outcomes, in the space of possibilities, W.

Thus is effected a partition of the configuration space.

It is quite obvious that |W| >> |Z|, overwhelmingly so; let us symbolise this |W| >> . . . > |Z|.

Now, we put forth the Bernoulli-Laplace indifference assumption that is relevant here through the stipulation of a fair coin. (We can examine the symmetry etc. of a coin, or do frequency tests, to see that such will for practical purposes be close enough. It is not hard to see that unless a coin is outrageously biased, the assumption is reasonable. [BTW, this implies that it is logically possible that a fair coin tossed 500 times will come up all heads. But that is not the pivotal point.])

When we do an exercise of tossing, we are in fact sampling W, and the partition from which a given outcome si in S [the set of possible samples] comes will be dominated by relative statistical weight. S is of course such that |S| >> . . . > |W|. That is, there are far more ways to sample from W in a string of actual samples s1, s2, . . . sn than there are configs in W.

(This is where the Marks-Dembski search for a search challenge, S4S, comes in. Sampling the samplings can be a bigger task than sampling the set of possibilities.)

Where, now, we have a needle-in-haystack problem: on the gamut of the solar system [our practical universe for chemical-level atomic interactions], the number of samples that is possible as an actual exercise is overwhelmingly smaller than |S|, and indeed than |W|.

Under these circumstances, we take a sample si of 500 tosses.

The balance of the partitions is such that with all but certainty, we will find a cluster of H & T in no particular order, close to 250 H : 250 T. The farther away one gets from that balance, the less likely the outcome, through the sharp peakedness of the binomial distribution of fair coin tosses.

Under these circumstances, we have no good reason to expect to see a special pattern like 500 H, etc. Indeed, such a unique and highly noticeable config will predictably — with rather high reliability — not be observed once on the gamut of the observed solar system, even were the solar system dedicated to doing nothing but tossing coins for its lifespan.
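
For concreteness, here is a minimal sketch of these magnitudes in Python (assuming the fair-coin model just described; the 2-SD band is an illustrative choice):

```python
from math import comb

# Space of 500-toss sequences, and the all-heads "special zone" of size 1.
W = 2**500                      # |W| ~ 3.27e150 distinct H/T sequences
print(f"{W:.3e}")               # 3.273e+150
print(f"{1 / W:.3e}")           # P(all heads in one fair run) ~ 3.055e-151

# The binomial bulk: outcomes cluster near 250 H : 250 T,
# with sigma = sqrt(500 * 0.5 * 0.5) ~ 11.2 heads.
within_2sd = sum(comb(500, k) for k in range(228, 273)) / W
print(f"{within_2sd:.3f}")      # ~0.95 of all sequences lie within 250 +/- 22 H
```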

That is, chance manifest in coin tossing is not a plausible account for 500 H, or the equivalent, a line of 500 coins in a tray, all H's.

However, if we were now to come upon a tray with 500 coins, all H’s, we can very plausibly account for it on a known, empirically grounded causal pattern: intelligent designers exist and have been known to set highly contingent systems to special values suited to their purposes.

Indeed, such are the only empirically warranted sources.

Where, for instance, we ourselves are just such intelligences.

So, the reasonable person coming on a tray of 500 coins in a row, all H, will infer that per best empirically warranted explanation, design is the credible cause. (And that person will infer the same if a coin-tossing exercise presented as fair coin tossing does the equivalent. We can reliably know that design is involved without knowing the mechanism.)

Nor does this change if the discoverer did not see the event happening. That is, from a highly contingent outcome that does not fit chance very well but does fit design well, one may properly infer design as explanation.

Indeed, a specific, recognisable pattern utterly unlikely by chance, but by no means inherently unlikely on design, is a plausible sign of design as best causal explanation.

The same would obtain if instead of 500 H etc, we discovered that the coins were in a pattern that spelled out, using ASCII code, remarks in English or object code for a computer, etc. In this case, the pattern is recognised as a functionally specific, complex one.
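
A small sketch shows what such a functionally specific reading would look like (the H-is-1 convention and the two-letter message are illustrative assumptions; nothing in the example above fixes them):

```python
# Read a row of coins as 8-bit ASCII, taking H = 1 and T = 0 (an arbitrary
# convention chosen for illustration). 500 coins would carry 62 full characters.
def coins_to_text(coins: str) -> str:
    bits = "".join("1" if c == "H" else "0" for c in coins)
    usable = len(bits) - len(bits) % 8          # drop any trailing partial byte
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

print(coins_to_text("THTTHTTT" "THTTHTTH"))     # -> "HI"
```

Of the 2^500 possible rows, only a vanishingly small zone decodes to meaningful English text; that is the sense in which the pattern is specified.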

Why then, do we see such violent opposition to inferring design on FSCO/I etc in non-toy cases?

Obviously, because objectors are making or are implying the a priori stipulation (often unacknowledged, sometimes unrecognised) that it is practically certain that no designer is POSSIBLE at the point in question.

For under such a circumstance, chance is the only reasonable candidate left to account for high contingency. (Mechanical necessity does not lead to high contingency.)

So, we see why there is a strong appearance of design, and we see why there is a reluctance, or even a violently hostile refusal, to accept that that appearance can indeed be a good reason to accept that, on the inductively reliable sign FSCO/I and related analysis, design is the best causal explanation.

In short, we are back to the problem of materialist ideology dressed up in a lab coat.

I think the time has more than come to expose that, and to highlight the problems with a priori materialism as a worldview, whether it is dressed up in a lab coat or not.

We can start with Haldane’s challenge:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter. [“When I am dead,” in Possible Worlds: And Other Essays (1927), Chatto and Windus: London, 1932 reprint, p. 209.]

This and other related challenges (cf. here on in context) render evolutionary materialism so implausible as a worldview that we may safely dismiss it. Never mind how it loves to dress up in a lab coat and shake the coat at us as if to frighten us.

So, the reasonable person, in the face of such evidence, will accept the credibility of the sign — FSCO/I — and the possibility of design that such a strong and empirically grounded appearance points to.

But, notoriously, ideologues are not reasonable persons.

For further illustration, observe above the attempt to divert the discussion into definitions of what an intelligent and especially a conscious intelligent agent is.

Spoken, of course, by a conscious intelligent agent who is refusing to accept that the billions of us on the ground are examples of what intelligent designers are. Nope, until you can give a precising definition acceptable to him [i.e. inevitably, consistent with evolutionary materialism — which implicitly or even explicitly denies that such agency is possible, leading to self-referential absurdity . . . ], he is unwilling to accept the testimony of his own experience and observation.

I call that a breach of common sense and a self-referential incoherence.>>

____________

The point is, the credibility of materialist ideologues is fatally undermined by their closed-minded demand that we conform to unreasonable a prioris. Lewontin’s notorious cat-out-of-the-bag statement in NYRB, January 1997 is emblematic:

. . . . the problem is to get [the public] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . .

[T]he practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test  [[–> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [“Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis and notes added. Of course, it is a commonplace materialist talking point to dismiss such a cite as “quote mining.” I suggest that if you are tempted to believe that convenient dismissal, kindly cf. the linked, where you will see the more extensive cite and notes.]

No wonder, in November that year, ID thinker Philip Johnson rebutted:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Too often, such ideological closed-mindedness and question-begging is multiplied by a propensity to be unreasonable, ruthless and even nihilistic, or at least to be an enabler going along with such ruthlessness.

Which ends up back at the point that when — not if — such materialism lies utterly defeated and on the ash-heap of history (such an utterly incoherent system is simply not sustainable, and is so damaging to the moral stability of a society that it will inevitably self-destruct [cf. Plato’s warning here, given 2,350 years ago]), there will come a day of reckoning for the enablers, who will need to take a tour of their shame and explain, or at least ponder, why they went along with the inexcusable.

Hence the ugly but important significance of the following picture, in which, shortly after its liberation, American troops forced citizens of nearby Weimar to tour Buchenwald, so that the people of Germany who went along as enablers with what was done in the name of their nation by utterly nihilistic men they allowed to rule over them could not ever deny the truth of their shame thereafter:

[Image: A tour of shame at Buchenwald, showing the sad, shocking but iconic moment when a woman from the nearby city of Weimar could not but avert her eyes in horror and shame at what nihilistic men — enabled by the passivity of the German people in the face of the rise of an obviously destructive ideology from 1932 on — had done in the name of her now forever tainted nation]

It is time to heed Francis Schaeffer in his turn-of-the-1980s series, Whatever Happened to the Human Race?, an exposé of the implications and agendas of evolutionary materialist secular humanism that was ever so much derided and dismissed at the time, but across time has proved to be dead on target, even as we have now reached the threshold of post-birth abortion and other nihilistic horrors:

[youtube 8uoFkVroRyY]

And, likewise, we need to heed a preview of the tour of shame to come, Expelled by Ben Stein:

[youtube V5EPymcWp-g]

Finally, we need to pause and listen to Weikart’s warning from history in this lecture:

[youtube w_5EwYpLD6A]

Yes, I know, these things are shocking, painful, even offensive to the genteel; who are too often to be found in enabling denial of the patent facts, and who will be prone to blame the messenger instead of dealing with the problem.

However, as a descendant of slaves who is concerned for our civilisation’s trends in our time, I must speak. Even as the horrors of the slave ship and the plantation had to be painfully, even shockingly exposed two centuries and more past.

I must ask you: what did genteel people sipping their slave-sugared tea 200 years ago think of images like this?

[Image: An African captive about to be whipped on a slave-trade ship, revealing the depravity of ruthless men able to do as they please with those in their power (CR: Wiki)]

Sometimes, the key issues at stake in a given day are not nice and pretty, and some ugly things need to be faced, if horrors of the magnitude of the century just past are to be averted in this new Millennium. END

Comments
KF, That all sounds very much like a long and elaborate retelling of Hoyle's 747 argument veiled in a bit of pseudo-maths. If you wanted to say something meaningful you'd need to show evidence that "islands of function" are not connected by passable troughs in the fitness landscape, and you'd need to consider how many other configurations would be just as functional as those that we see now. And most importantly you'd need to include some conception of how evolution works before you could declare some result or other to be any number of standard deviations away from an expectation. (As Elizabeth noted, for one example, in a population of 500 individuals with two alleles "H" and "T", it's extremely improbable that a random draw of 500 gametes would create a new population with all "H"s or all "T"s, but in the long run that outcome is inevitable and happens thanks entirely to chance)wd400
July 1, 2013 at 04:46 PM PDT
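
wd400's parenthetical can be made concrete with a toy Wright-Fisher sketch in Python (the population size, seed and 50:50 start are illustrative assumptions):

```python
import random

# Toy Wright-Fisher drift: a population of 500 alleles, initially 250 "H" and
# 250 "T"; each generation is 500 random draws (with replacement) from the
# previous one. Although drawing all "H" in a single step is astronomically
# improbable, one allele always fixes eventually, purely by chance.
def generations_to_fixation(n=500, seed=1):
    rng = random.Random(seed)
    h = n // 2
    gens = 0
    while 0 < h < n:
        h = sum(rng.random() < h / n for _ in range(n))  # binomial resampling
        gens += 1
    return gens, "H" if h == n else "T"

print(generations_to_fixation())  # typically fixes in a few hundred generations
```
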
They can be tested, but only if they can be operationalised as a computable null distribution. That’s not the usual way of going about null hypothesis testing – we do not usually set up our study hypothesis as the null! In fact null hypothesis testing is a deeply problematic approach anyway, although it is still the workhorse of scientific methodology. You just have to be very careful about the inferences you make from a rejected null. (btw, I’m not a statistician either, but I do use statistics professionally, and consult with statisticians when necessary)
Elizabeth, Thank you very much for your response. This is related to a post I'm working on. Thank you! Salscordova
July 1, 2013 at 06:40 AM PDT
Earth to Elizabeth BS Liddle: I have supported what I say about darwinian evolution with actual references. OTOH you have never supported anything you have said wrt darwinian evolution. And it definitely is known that you are wrong and have been in the past also.Joe
July 1, 2013 at 06:27 AM PDT
Dr Liddle: The energy fed back to intermediate steps is a reward.
OK, if you want to use that word for an advantageous step, fine. But not all intermediate steps between functions were rewarded. Only those that actually performed a logic function were rewarded. These were separated by many unrewarded, and indeed "penalized" steps. Do you acknowledge this?
Notice what happened when there was none of this, only the XNOR was rewarded, it failed to work. KF
Of course. If there were NO intermediate forms that had increased fitness, EQU did not evolve. But that doesn't mean that when some were, ALL were. They weren't. As I recall, only about eight logic functions increased fitness. The vast majority of intermediate steps did not result in new rewarded functions, and some resulted in loss of function. In fact some necessary steps to EQU resulted in loss of function, although this was sometimes subsequently regained. Are you under the mistaken impression that landscapes in which EQU evolved consisted of a "reward" (increased fitness) for every single step between the starter population and EQU? If so, that might explain the problem. Re your 148: 1. I have already said that I have no more to say to you on the subject of the alleged "slander". 2. I simply cannot parse the rest of your post. I appreciate that it was written in haste, but I find it incomprehensible. I do not know what most of your abbreviations mean.
Elizabeth B Liddle
July 1, 2013 at 06:25 AM PDT
Lizzie is either lying or willfully ignorant.
Alternatively, Joe is wrong. It wouldn't be unknown.Elizabeth B Liddle
July 1, 2013 at 06:13 AM PDT
F/N: On a run and seeing I forgot to do something requiring a fair bit of writing, will have to do so fast within an hour, so I just note that the notion of non-functional sections mutating freely then coming back into play with 250 - 500+ BP of info is facing bridging gaps by pure unfiltered chance that are beyond the capacity of our solar system's resources, much less the earth's biosphere. KFkairosfocus
July 1, 2013 at 05:23 AM PDT
Important point #2: Behe's argument is that darwinian processes cannot produce IC. And AVIDA does not represent darwinian processes. Lizzie is either lying or willfully ignorant.Joe
July 1, 2013 at 05:18 AM PDT
Dr Liddle: The energy fed back to intermediate steps is a reward. Notice what happened when there was none of this, only the XNOR was rewarded, it failed to work. KFkairosfocus
July 1, 2013 at 05:17 AM PDT
Dr Liddle: First, the basic issue is on the 500H distribution, which turns out to be pivotal. To red-herring away to something else without acknowledging the point on the basic case is not reasonable behaviour. Just as to pretend that slander was not present on your blog, and then to try to justify it in the thread above, was not reasonable. Next, if you will note the discussion in response to WD400 just above, who tried the same red herring trick, you will see that the issue is that darwinian evo is a case of: CV + DRS --> DWM. But DRS subtracts info, so the issue is CV as creator of info, incrementally. That is multiplied by the issues of fixing and then further increments to pops hundreds or thousands of times over, to make body plans requiring 10 - 100+ mn base pair increments of info. Where in the case of, say, humans vs chimps, even the outdated 2% difference estimate shows that we are at tens of millions of base pairs to be explained in 6 MY or so. No go. Beyond lies the biggie: that the requisites of complex functional specificity imply a BIG jump in config space to move from one form that works to another, across an intervening sea of non-function obviously unbridgeable by incremental CV and DRS, as there is no R at all for that. At most darwinian evo is a limited micro-evo theory within islands of function. And AVIDA inadvertently shows just that: its unreasonably probable advantageous change steps, rewarded by unrealistic increments, are small. In short, we need a theory that handles 250 - 500 base pair jumps in complexity [about the size of both a reasonable protein and of a reasonable regulatory programming block], which we do not have. (This has actually been pointed out to you previously and adequately discussed, so you are here resorting to another unreasonable tactic: insistence on recirculating a point already reasonably answered, ad nauseam -- as though drumbeat repetition will convert error into truth. This sinks your credibility even further.) KFkairosfocus
July 1, 2013 at 05:15 AM PDT
Important point: AVIDA doesn't have anything to do with darwinian evolution.Joe
July 1, 2013 at 05:15 AM PDT
Important point: Incremental steps were not rewarded in AVIDA. However, the fitness landscape included some simpler functions that conferred increased reproductive success. These were ALL separated by non-advantageous "steps". I don't think any two functions were adjacent on the fitness landscape, and many were separated not only by "unrewarded" steps but by actually deleterious steps.Elizabeth B Liddle
July 1, 2013 at 05:02 AM PDT
WD400, re:
Do you want to address the substance of Elizabeth’s comment? It’s true, isn’t it, that an “x sigma” result describes how far away from a given expectation that result is? So we need to define the expectation. That’s easy for random uncorrelated throws of a fair coin. But evolution isn’t at all like that – being massively parallel and an approximately Markov process.
First, it should be obvious that the expected value of a sample of a bell curve, the typical value, is its mean, or at least its median [this being the 50th percentile]; but as the distributions are symmetric, they are equal. That is why averages are important. And, the darts and charts exercise should suffice to show what I am getting at on sampling a config space blindly.

Now, you have injected an onward issue, presumably darwinian evolution, as a challenge to the whole process. Let's clarify: [neo-]darwinian evo is supposed to be about how chance variation [CV] + differential reproductive success [DRS] --> descent with modification [DWM], presumed unlimited: CV + DRS --> DWM. DRS, however, is a subtracter of information thought to originate in CV, leading to a shift in population, DWM. That is, DRS is not a SOURCE of info, it is an exit sink for info, removing it from the environment by way of extinction of less favoured varieties [Darwin spoke in terms of races]. So, we are back to chance sampling of a space of possibilities as the proposed root of variation. CHANCE is being expected to incrementally write the software of life, filtered by DRS that subtracts some of the chance-based info. (We can use the idea of genetic mutation through random processes uncorrelated with success as a proxy for the clusters that are tossed up these days, up to 47 at last check, I noted.)

That variation and filtering obviously happens in a population that is already functional and capable of reproducing. So, OOL, the root of the tree of life, is unexplained. But no roots, no shoots or anything beyond. The physics and chemistry of warm ponds is maximally unfavourable to the formation of FSCO/I-rich structures in a network of functional organisation. The only observed source for FSCO/I is design. Design is at the table from the root of the tree of life on, save to those who lock it out through holding to or going along with a priori materialism imposed on science. That is important, as is the observation that by its very nature FSCO/I locks us down to narrow zones in a space of possible configs of a given cluster of components. Indeed, that is the underlying point of the 500H exercise, which illustrates this in a toy example.

Now, WD400, evidently you have not seriously pondered the point of the threshold of complexity used in the design filter plausibility threshold analysis, 500 bits or 1,000 bits. Otherwise, you would not have tried to emphasise the parallel nature of the search process as though it had not long since been more than reckoned with. What is envisioned by design thinkers is that the elements of a config space have at least 500 bits' worth of possibilities. To see how this is deducible, think as to how any 3-d object can be represented as a cluster of nodes and arcs, with perhaps orientation of nodes also important. AutoCAD or the like shows how this can be represented as a string of memory locations with a suitable structure. That is, analysis on bit strings is WLOG. The complexity threshold is then set on the number of possibilities for the 10^57 atoms of our solar system, EACH making a new search in the config space of a string of 500 bits every 10^-14 s, the peak rate for chemical rxns, for 10^17 s, a reasonable estimate for time since solar system formation. (Cf. the clips from Abel's paper on the universal plausibility bound above.) You don't get more generously parallel than that.
(In practice, for life forms in populations on earth, you would be more looking at something like the irradiation of cells, leading to ionisation of water molecules and interference of such activated molecules with DNA molecules or the like, a much smaller parallel sample. So sorry, the sample we are discussing is far too generously parallel if anything.)

And having made such a generous sample, the result is that we are looking at, after 10^17 s, a sample of the space of 500 bits that stands as picking one straw-sized sample blindly from a cubical haystack 1,000 light years across; about as thick as our galaxy. In short, we are tossing in effect one dart at the haystack. Sampling theory tells us the result, without need to go into elaborate expectation and probability models, simply off the sheer balance of statistical weights. To see how that happens, imagine the haystack superposed on our galactic neighbourhood, in which star systems are several LY apart on average, and stars are much smaller than 1 LY in size on average. Such a sample on such a space would with all but certainty pick up straw. For the same reason why the darts and charts exercise above would have a big problem getting a 4 SD - 5 SD event. Namely, relative statistical weight counts, heavily. Where also, it bears repeating that by the necessities of functionally specific complex organisation of components in a system, the proportion of functional configs is going to be a very small fraction of possible configs. That is why the sort of irradiation we see described is liable to lead to cell death or derangement, or in relatively low dose cases to trigger cancer. (Radiation physics is a depressing subject, and leads to gallows humour about how you can spot the physics students in the dorms: they glow in the dark, usually faintly blue.)

So, we naturally see the result that is empirically warranted: observed evolution is micro-evo, and mostly consists in loss of function that is not fatal and for some strange reason will confer a survival advantage in a deprived environment. Insects on islands losing wings so they don't blow out to sea is a cited example, but the prevalence of mosquitoes, flies, termites [which fly for a phase of their life cycle] and fire ants [an invasive species that has a flying mode] on islands in the Caribbean shows that flying insects can do well, thank you. Sickle cell is a classic. There was a recent case on tomcod in the Hudson, and the like. There is no good evidence that incremental variation within an island of function is capable of bridging islands of function, starting with moving from first life to the cluster of major multicellular body plans. OOL requires 100 k - 1 mn bits of new info, and body plans 10 - 100 mn bits. This, from patterns in genomes. Every additional bit beyond 500 DOUBLES the config space. If 500 bits is unsearchable on our solar system scope, 1,000 is much more so on the scope of the observed cosmos. 100,000 bits disposes of a config space of order 9.99 * 10^30,102 possibilities. The notion that blind chance and/or mechanical necessity assembled the components of cell-based life under any plausible scenario is ludicrous.
Even if, for argument, we assumed that there is a huge continent of possible life forms beyond the first life [for which there is much evidence to the contrary], traversible incrementally by a branching tree pattern, the pop genetics to fix the required number of mutations on the scope in question, dozens of times to hundreds of times over, would be prohibitive on the available resources. Macro-evo is an unjustified extrapolation of observed micro-evo. As for AVIDA, it is a trick, as was outlined above already. In a nutshell, EVERY logic function can be done by a NAND, including not only combinational but sequential circuits. (Indeed it used to be a system design step to convert the system designed on more intuitive and-or-not logic to NANDs, as 7400s were cheap and fast.) By rewarding incremental steps and setting a suitably low information increment threshold, with parameters set well beyond what would be empirically warranted, the appearance of evo is made to happen. A stock promoter who did the like would be sitting in gaol for fraud, not celebrated as proving his case at Dover. Effective as rhetoric and/or as confirmation bias support in an ideologically charged climate that begs questions from the definition of science on up; not so good as actually warranting what was claimed. KFkairosfocus
July 1, 2013 at 04:45 AM PDT
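
The resource count used in the comment above can be written out directly (a sketch using only the figures quoted there):

```python
# 10^57 atoms, each making one 500-bit "search" every 10^-14 s, for 10^17 s.
searches = 10**57 * 10**14 * 10**17   # 10^88 searches in total
configs = 2**500                      # ~3.27e150 possible 500-bit strings
print(f"{searches / configs:.3e}")    # fraction of the space sampled: ~3.1e-63
```
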
KF: Thank you for your response, but it simply does not address my point. I have no problem in agreeing with you that certain processes produce certain distributions of data reliably, and that therefore, if we have an observation that falls in the far tails of the distribution for a particular process, we can reject that process as a likely cause of that observation. What I am saying is that unless you can produce the expected distribution of data under the null of Darwinian evolution, you do not know whether a given observation is in the tails of it or not, and therefore you cannot reject "Darwinian evolution" as a null. And nowhere do I see you constructing such a null distribution. You talk of the probability distribution of the results of dart shots, coin-tosses, and looking for needles in haystacks, but not of the results of Darwinian evolution. That's the distribution you need to try to compute if that's the null you want to test.Elizabeth B Liddle
July 1, 2013 at 04:14 AM PDT
Dr Liddle: I find it interesting that you found it necessary to go all over the ball park to try to dismiss the problem of blind/chance sampling of a bell-type population with limited resources, and the increasing unlikelihood of OBSERVING farther and farther skirt outliers under those circumstances. This is in fact right at the heart of the darts and charts exercise I have discussed so many times. It may surprise you that this has industrial applications, in which the pop patterns of a controlled process are characterised and observations of output are actually plotted on charts to spot runs or drift or broadened scatter that would show the process getting out of control. That is, the darts and charts exercise is not just a pipe dream notion. In any case, it stands on its own merits.

Set up a bell distribution chart on a bristol board backed by a bagasse board or the like, and mark the mean, then 1/2- or 1-SD-wide stripes to either side; up to 5 or so SD to the side should be enough. (And yes, the skirts here would be awfully thin; approximate as needed -- lesson no. 1! [Onlookers, at 5 SD out the actual height of a normal curve is about 3 * 10^-6 of its peak, so if the peak is 1 m high we are looking at about the thickness of a bacterium here].) Then, drop darts from a height where the scatter would be fairly uniform. Say, 30 - 100 times. The likelihood of getting a hole will of course be proportionate to area, more or less [we cannot drop darts in a perfectly random, perfectly even pattern, but this is just to get an idea; let us idealise for argument, to make the point clear]. It is obvious that we will see far more hits in the bulk near the peak than in the far skirts. Indeed, it is highly unlikely that on that sort of sample scale we will get any hits in, say, the 4 - 5 SD band. Which was the point at issue.

And on the narrower point where you dismissed the idea of a 4-sigma event as reasonable terminology [what I intervened on], the meaning in terms of darts and charts should be obvious: 4 - 5 SD out from the mean; beyond would be a 5-sigma event, and so forth. This also illustrates by a simple example a natural zone of interest or special zone, here in the far skirt of a distribution. Precisely because it is utterly unlikely to be captured in a sample of reasonable scale, it illustrates the problem of needles in haystacks.

In the case of the 500 H coin toss, we are at the extremum of the distribution, and the "cell" of interest is 1 in 3.27 * 10^150 of the space of possibilities, 22 SD away from the mean (of a binomial distribution). This is the context in which a maximum sample on the gamut of the solar system's resources would be as 1 straw-sized sample to a cubical haystack 1,000 LY on the side. If such were superposed on our galactic neighbourhood, the overwhelming bulk would be straw and nothing else, i.e. the formally possible outcome of 500H is empirically, practically unobservable. An empirical, as opposed to a logical, impossibility, on blind sampling. It just is not credibly reachable by any process dependent on blind chance sampling. That needle-in-haystack challenge is why fixation on the formal possibility is serving as a strawman distractor on the matter. By focussing on one point, a formal possibility, the much more material point -- that we are dealing with sampling with finite resources that do not allow us to expect reasonably to catch tiny and deeply isolated zones of interest -- is being missed.

And, the design thinkers who have emphasised that empirical sampling point have been misrepresented over and over again, to the point where the misrepresentation is now culpable. If it is repeated, I have to now treat it as a willful fallacy intended to distort, distract and confuse. Don't ever forget, you have forfeited the benefit of the doubt due to harbouring, denying, and then, when denial was impossible, trying to defend an outrageous slander. The harbourer is worse than the perp. WD400, you next. KFkairosfocus
July 1, 2013 at 03:48 AM PDT
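
Two figures quoted in the comment above are easy to verify (a quick check assuming a fair-coin binomial and a standard normal curve):

```python
import math

n, p = 500, 0.5
sigma = math.sqrt(n * p * (1 - p))     # ~11.18 heads
print((n - n * p) / sigma)             # 500 heads sits ~22.4 SD above the mean
print(math.exp(-0.5 * 5 ** 2))         # normal pdf at 5 SD: ~3.7e-6 of its peak
```
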
I should have written: "They can be tested, but only falsified as null hypothesis if they can be operationalised as a computable null distribution." They can be (probabilistically) falsified in other ways, though, for instance using Bayesian methods, in which two competing hypotheses can be compared.Elizabeth B Liddle
July 1, 2013 at 03:26 AM PDT
scordova:
I’m not a statistician like you or Mark Frank, but can some (not all) of the major claims of Darwinian evolution be falsified through the means you practice? If the claims can’t be tested even in principle, that would be bothersome.
They can be tested, but only if they can be operationalised as a computable null distribution. That's not the usual way of going about null hypothesis testing - we do not usually set up our study hypothesis as the null! In fact null hypothesis testing is a deeply problematic approach anyway, although it is still the workhorse of scientific methodology. You just have to be very careful about the inferences you make from a rejected null. (btw, I'm not a statistician either, but I do use statistics professionally, and consult with statisticians when necessary)Elizabeth B Liddle
July 1, 2013 at 03:22 AM PDT
I am not sure why you keep insisting on misrepresenting Behe, notwithstanding numerous corrections issued to you on this point. Here is what Behe actually said: “Even if a system is irreducibly complex (and thus cannot have been produced directly), however, one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously. And as the number of unexplained, irreducibly complex biological systems increases, our confidence that Darwin’s criterion of failure has been met skyrockets toward the maximum that science allows.”
That is a fair point, and you are of course correct. Behe writes, a few paragraphs earlier:
"What type of biological system could not be formed by "numerous successive slight modification"? Well, for starters, a system that is irreducibly complex. by irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the renival of any of the parts causes the system to effectively cease functioning."
This is directly falsified by the evolution of all the functions in AVIDA (all of which are IC), and especially EQU, which is the most complex. However I accept that Behe does qualify this statement, in the passage you cite, a few paragraphs later, although his statement: "As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously" is mere assertion, not argument, and he does not develop this argument in Darwin's Black Box. He does, however, develop it later in a response to critics, where he gives an alternative "evolutionary" second definition of IC: “An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway." But he still does not present any argument that the more complexly interacting an IC structure is, the greater the degree of IC of the pathway required to reach it. Instead, he, in my view, shoots his own argument in the foot, by assigning the property IC to the pathway not the structure, and merely assuming that the two must be correlated. There is no reason to make this assumption, and, with double irony, your own point below demonstrates its falsity. You wrote:
Indeed, one ironic result from Avida is that the population didn’t evolve the final EQU goal when only the final goal was rewarded, confirming and underscoring Behe’s very point about irreducible complexity.
There is nothing "ironic" about this result! Nobody, least of all Darwin, thought a complex structure could evolve if no precursor offered any reproductive advantage! The whole point of Darwin's idea was that IF precursors offer reproductive advantage, complex features can evolve! What the AVIDA study shows here is that the evolvability of EQU is not an intrinsic property of EQU, but of the fitness landscape in which it sits. In other words, whether or not EQU evolved was not a function of the complexity of EQU, but of the fitness landscape in which it sits. If EQU is a single butte on a flat plain, it won't evolve. If it is on a summit in a landscape of rolling hills, it does, even if the only route to the summit must cross a fairly deep ravine.

Behe originally thought that a structure could be diagnosed as "Irreducibly Complex" if it failed to function if any part was removed. Then he said that a pathway was IC to degree N if it contained N unselected steps. AVIDA shows that IC structures readily evolve, and that structures evolve via deeply IC pathways. So we are left with the question: Is the interactive complexity of an IC structure correlated with the IC degree of the pathway required to reach it, as Behe asserts? I agree that this question is not directly addressed by AVIDA, but the fact that EQU reliably evolved in a landscape with other beneficial functions, by a very great number of different pathways, all of which were deeply IC, tells us that you can't look at a structure and say "this could not have evolved because it is highly IC". That alone is not a criterion for unevolvability. What might be, but is far more difficult, if not impossible, to determine, is whether the only possible paths to it were too deeply IC for "evolution" to cross.

It may be worth making a simple mathematical point that is often forgotten (as revealed in expressions like "parts must come together simultaneously"): Let's say that some complex function Y is disabled unless parts (X1, X2 . . . XN) are present. If a precursor to Y (say function X1) is selectable, that means that it, by definition, confers increased reproductive success on the organism in which it appears de novo. This means that many more organisms with feature X1 will be born. This in turn increases the probabilistic resources, as Dembski would say, i.e. the number of opportunities for another feature, say X2, also required by Y, to occur in an organism already possessed of X1 is increased, simply because lots of organisms now have X1. X2 might perform a different function to X1 and to Y, but still be selectable. Or it may not. But if it appears de novo in a viable X1-bearing organism, it still has a decent chance of propagating by drift. And so again, the number of X1 + X2 bearing organisms can become quite large. And so on. The fewer parts of Y (X1:XN) that confer advantage in some context earlier in the evolutionary history, the less chance Y has of evolving, but that is not a function of Y itself, but a function of its parts within the "fitness landscape". If X1:XN can never confer any reproductive advantage, Y won't evolve (as when only EQU was advantageous). But if at least some of X1:XN are sometimes part of some less complex but advantageous function, then it can. And that is what AVIDA showed. Many different forms of the EQU function evolved, by many different pathways, all quite deeply IC, and all, IIRC, involving at least one steeply deleterious step.
In other words, at least one crucial part was actually disadvantageous when added to the parts already present. It isn't a "hill climbing" algorithm, but a true Darwinian one, in which advantageous functions can be reached via deleterious steps. This is why the coin-tossing example is so misleading. Once one part is in place, and advantageous, the number of opportunities for a second part by definition grows exponentially. The steps are NOT independent, as they are in a series of coin tosses. And this is why Behe's statement: "As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously" is unsupported. And I have never seen it supported. That's probably why I forgot it!Elizabeth B Liddle
July 1, 2013 at 03:12 AM PDT
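
The reward-all versus final-goal-only contrast discussed in this exchange can be illustrated with a toy selection model (a Python sketch, emphatically not AVIDA itself; the population size, mutation rate and 10-bit target are illustrative assumptions):

```python
import random

# A 10-bit "function" evolves readily when partial matches are rewarded, and
# essentially never when only the complete target earns any fitness.
def evolve(reward_partial, target=(1,) * 10, pop=100, gens=1000, seed=0):
    rng = random.Random(seed)
    population = [[0] * len(target) for _ in range(pop)]

    def fitness(ind):
        match = sum(a == b for a, b in zip(ind, target))
        return match if reward_partial else int(tuple(ind) == target)

    for g in range(gens):
        weights = [fitness(i) + 1e-9 for i in population]  # uniform if all zero
        # offspring: pick a parent in proportion to fitness, flip each bit at 1%
        population = [
            [b ^ (rng.random() < 0.01) for b in rng.choices(population, weights)[0]]
            for _ in range(pop)
        ]
        if any(tuple(i) == target for i in population):
            return g
    return None

print(evolve(reward_partial=True))    # usually a modest number of generations
print(evolve(reward_partial=False))   # usually None: the goal is never reached
```
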
By the way, KF, I reject null hypotheses for a living, more or less. And in order to reject a null I take enormous pains to generate the expected distribution of my data under the null I am hoping to reject. This often means time-consuming Monte Carlo simulations, as often the distribution under the null is not a nice neat standard distribution from a text book. And with non-linear feedback processes, it is especially important to simulate the results under the null, because there is no way of computing them analytically. And the results are often surprising. For instance, something that looked like a low-frequency oscillator turned out to be indistinguishable from the output of a Poisson process. Inadequate modelling of the null is a major cause of spurious inferences in science.
I'm not a statistician like you or Mark Frank, but can some (not all) of the major claims of Darwinian evolution be falsified through the means you practice? If the claims can't be tested even in principle, that would be bothersome.scordova
July 1, 2013 at 12:01 AM PDT
Elizabeth @108:
We did not know, prior to Avida, that IC structures could evolve, or evolve by deeply IC pathways. Behe’s case was that such structures could not in principle evolve, not that they could not evolve in biology.
I am not sure why you keep insisting on misrepresenting Behe, notwithstanding numerous corrections issued to you on this point. Here is what Behe actually said: “Even if a system is irreducibly complex (and thus cannot have been produced directly), however, one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously. And as the number of unexplained, irreducibly complex biological systems increases, our confidence that Darwin's criterion of failure has been met skyrockets toward the maximum that science allows.” A number of ID critics have substituted their own interpretation of Behe’s point in their efforts to refute strawman versions of his argument. Your statement above is clearly one of those misinterpretations and strawmen. I would like to think it is inadvertent, but you have been corrected before, so it is unclear why you continue to misrepresent Behe’s position. Furthermore, even if Behe had said what you claim he said, he has certainly said a great deal more since then in responding to critics and clarifying his viewpoint. Thus, even those who are intent on misunderstanding him have no excuse.
AVIDA showed both that IC structures could evolve by Darwinian mechanisms, and that structures could evolve via deeply IC pathways, including quite severely deleterious precursors. Thus Behe’s principle was falsified by AVIDA.
Avida showed that if you assume all the key factors and put in place a highly simplified scenario, an indirect pathway to an end function could be achieved. Big deal. No-one disputes that. And Behe specifically stated from the outset – contra your false assertion – that we could not rule out indirect pathways. His point is that such indirect pathways become more and more unlikely as the complexity and inter-connected functionality increase. Behe was most certainly not falsified. Indeed, one ironic result from Avida is that the population didn’t evolve the final EQU goal when only the final goal was rewarded, confirming and underscoring Behe’s very point about irreducible complexity.
It may well be true that some features observed in nature could not have evolved. But what AVIDA showed is that we cannot simply look at a feature, observe that it is IC or that the precursor pathway must have been deeply IC, and conclude it could not have evolved.
Agreed. As a matter of pure possibility, sure. But we already knew that. Behe acknowledged it back in 1996 in his original book. And we didn’t need Avida to tell us this.
AVIDA demonstrated that Behe’s criteria don’t work. That doesn’t mean that there are no such criteria. It’s just that you can’t base the ID case on Behe’s IC.
Wrong, as discussed above. Avida demonstrated no such thing. If anything, it supported Behe.
[Eric] Notably, with the oft-cited Avida study (in Nature, if memory serves), the authors acknowledged that if the program required a couple of parts to come along simultaneously, their digital organisms never “evolved” the final goal. That was precisely Behe’s point. He argues there is good reason to believe that — in reality, not in silico — there are molecular machines that require multiple parts to come along at once and that such machines are not amenable to a Darwinian process.
[Elizabeth] Could you provide a direct quotation, Eric? I think you may have misread or misremembered. I agree that it was precisely Behe’s point. And it was precisely that point that I understood as being refuted. And I have looked at the paper very closely, and indeed, played with AVIDA.
I found the quote I was thinking of. The Avida authors stated: “At the other extreme, 50 populations evolved in an environment where only EQU was rewarded, and no simpler function yielded energy. We expected that EQU would evolve much less often because selection would not preserve the simpler functions that provide foundations to build more complex features. Indeed, none of these populations evolved EQU, a highly significant difference from the fraction that did so in the reward-all environment (P < 4.3 × 10^-9, Fisher’s exact test).” This occurred even though those populations tested more genotypes, on average, than the “reward-based” environments.
Of course AVIDA does not prove that certain biological features evolved.
Agreed. Indeed, it cannot in principle in its current state. However, if we could get to the point where in silico environments at least approximate with reasonable fidelity the real world, we might be able to learn (even to your satisfaction, I would hope) whether a Darwinian process has any hope of functioning in the real world. That was Behe’s original point, and it still stands.
What it does show is that the Behe’s IC argument is not a good argument against the evolution of IC features or evolution via IC pathways.
No. Again, you are misunderstanding Behe’s point. Behe’s argument is evidentiary in nature, not based on sheer logical possibility.Eric Anderson
June 30, 2013 at 07:54 PM PDT
Sorry, see beginning at about the 31:31 mark instead of the 32:32 mark indicated above. That is, if you do not want to start at the beginning of a recent debate (May of this year) between Darwinist philosopher Michael Ruse and biochemist Fazale Rana of Reasons To Believe regarding the origins of life.bpragmatic
June 30, 2013 at 07:25 PM PDT
E. Liddle said: "Modelling the expected distribution under some kind of process in which each “draw” is independent from prior “draws” is clearly not a model of Darwinian processes." I don't believe that in the OOL phase of "evolution", the laws of physics and chemistry (which darwinian processes are beholden to) would be anywhere near as charitable to the material formation requirements as "independent draws", as you seem to imply with the above statement. In fact I would propose that there is a clear-cut scientific case for asserting that some sort of guiding intelligence is required to overcome the IMPOSSIBILITY of certain component relationships developing guided purely by the laws of physics and chemical reactions. http://www.youtube.com/watch?v=2CnZ3n8I5b8 See beginning at the 32:32 mark.bpragmatic
June 30, 2013 at 07:16 PM PDT
By the way, KF, I reject null hypotheses for a living, more or less. And in order to reject a null I take enormous pains to generate the expected distribution of my data under the null I am hoping to reject. This often means time-consuming Monte Carlo simulations, as often the distribution under the null is not a nice neat standard distribution from a text book. And with non-linear feedback processes, it is especially important to simulate the results under the null, because there is no way of computing them analytically. And the results are often surprising. For instance, something that looked like a low-frequency oscillator turned out to be indistinguishable from the output of a Poisson process. Inadequate modelling of the null is a major cause of spurious inferences in science.Elizabeth B Liddle
June 30, 2013 at 05:15 PM PDT
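
For readers who have not met the approach Dr Liddle describes, a minimal sketch of a Monte Carlo null (the longest-run statistic and the 12-run observation are illustrative assumptions, not her actual data):

```python
import random

# Simulate the null ("fair coin, fairly tossed") directly, then locate an
# observed statistic in the simulated distribution. The statistic here is the
# longest run of identical outcomes in 500 tosses.
def longest_run(seq):
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
null = [longest_run([rng.random() < 0.5 for _ in range(500)])
        for _ in range(10_000)]
p_value = sum(r >= 12 for r in null) / len(null)
print(p_value)  # fraction of null samples at least as extreme as a 12-run
```
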
KF:
Dr Liddle: You are simply wrong. The terms just above make a lot of sense to anyone familiar with statistical process control. A process can have a modal or mean value and it can have a pattern of deviations within 1, 2, 3 etc sigma bands.
Correction: The results from a process can have a mean value and a pattern of deviations. This matters.
Where for bell type distributions, the frequency patterns on being in bands tell us a lot, including runs etc.
Yes, they tell us a lot about the process that produced the distribution. And if the observed data are unlikely under the postulated process, we can reject the possibility that the postulated process generated the data. But you have to be clear what process you are rejecting. Nobody claims that biological organisms are the result of the kind of process that tosses fair coins fairly. So rejecting "Darwinian" processes in favor of Design is not warranted unless you have modelled the expected distribution of the data under Darwinian processes. Modelling the expected distribution under some kind of process in which each "draw" is independent from prior "draws" is clearly not a model of Darwinian processes. And so rejecting Darwinian processes for, say, a functional protein because it is unlikely under independent random draw is a non-sequitur. It would be like concluding "car accident" as the explanation for "broken leg" just because you had rejected "fell off roof".Elizabeth B Liddle
June 30, 2013 at 04:45 PM PDT
KF, Do you want to address the substance of Elizabeth's comment? It's true, isn't it, that an "x sigma" result describes how far away from a given expectation that result is? So we need to define the expectation. That's easy for random uncorrelated throws of a fair coin. But evolution isn't at all like that - being massively parallel and an approximately Markov process.wd400
June 30, 2013 at 04:19 PM PDT
Dr Liddle: You are simply wrong. The terms just above make a lot of sense to anyone familiar with statistical process control. A process can have a modal or mean value and it can have a pattern of deviations within 1, 2, 3 etc. sigma bands. Where, for bell-type distributions, the frequency patterns of being in bands tell us a lot, including runs etc. And I could keep going, just it would make no sense; I have lost all confidence in your credibility. Which is driven by your slander-enabling behaviour and denials, as I have pointed out. Good day madam, KFkairosfocus
June 30, 2013 at 04:01 PM PDT
Jerad: At this point, I am operating on the conclusion, on evidence, that you are locked into an ideological system. I just briefly note that electrons are classic cases of invisible entities in physics, which you did not seem to know. Your methodological assumptions need considerable adjustments that I do not expect at this point. However, this I must note on:
Well, that's a concession of sorts.
Why do you come across as unable to read with comprehension what is in the OP? To date you show no sign of being able to understand the difference between the logically possible and the empirically observable on relative statistical weights. I will simply repeat that it is logically, physically possible for all the O2 molecules in the room where you are to rush to one end and stay there for some minutes. But such a spontaneous undoing of diffusion processes is not reasonably observable. Reliably, concentration will reflect the overwhelming bulk of configs, mixed. Your life literally depends on it. I don't know if that will help you begin to understand what has been pointed out all along. KFkairosfocus
June 30, 2013 at 03:50 PM PDT
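
The diffusion example above scales like the coin case; a one-line check under a toy ideal-gas assumption (each molecule independently in either half of the room):

```python
# Probability that all N molecules sit in one chosen half of the room: 2^-N.
for n in (10, 100, 1000):
    print(n, 2.0 ** -n)
# At N = 1000 this is already ~9.3e-302; a real room holds ~1e27 molecules.
```
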
A lot of loose talk is going on in this thread about "22 sigma events" and "5 sigma events" etc. There is no such thing. What those sigmas are doing is telling you how likely an event is under some null hypothesis. And so, to claim that X is a "22 sigma event" only makes sense if you also specify the null. And you'd reject that null. With coin tosses and roulette, it's very easy to compute the null, because it's intrinsic to the assumptions of the game. Computing the null for a complex non-linear process is another kettle of fish entirely. So to talk about a protein being "22 sigma", therefore design, is meaningless. It's 22 sigma under what null? Durston and Abel's null is simply random draw. That's fine, because they don't actually infer Design from their 500+ bits. As they shouldn't, because "random draw" does not equal "non-design". Nobody in evolutionary science is suggesting that proteins appeared by a process of "random draw". If you want to show that a protein was designed, it is not sufficient to show that it did not result from "random draw". And that's why null hypothesis testing doesn't work for making a Design inference. You need to use something else.Elizabeth B Liddle
June 30, 2013 at 02:02 PM PDT
William
The Darwinists are only interested in making the point that it is possible to flip 500 coins and get heads, and that it is possible for Darwinian processes to generate what we see in biology.
This is simply not true, William. You must be reading a different blog from the one I am reading. Firstly: Every "Darwinist" I have read who has claimed (perfectly accurately, and uncontentiously) that flipping 500 Heads is just as possible as flipping any other sequence has also made the point that almost any other explanation would be more likely. Secondly: Darwinian processes are not "chance", as in coin flips. To say that Darwinian evolution is a plausible explanation for the diversity of life is nothing like saying that chance is a plausible explanation for a sequence of 500 coin flips.
Yes, it is all possible. It is also possible that the road in front of your car, via massive happenstance quantum fluctuation, turns into purple taffy. It’s not impossible; there is no law of physics that would prevent it. It’s possible that your neighbor won 15 lotteries in a row by chance. It’s possible that many different parts of a finely tuned machine were generated by chance, for other reasons, for other uses, and somehow by chance became fitted together over time, each step selectably advantageous, until an entirely new machine that does something entirely different is built, functions, and provides an advantage. There is no physical law that prevents such things from occurring, and for those desperate to cling to a particular worldview, bare possibility is all that is necessary to ignore the blatantly obvious.
Darwinian evolution is not a coin flip, William. This conversation becomes increasingly bizarre. Somehow, making the perfectly uncontentious point that 500 Heads are possible has been taken as the Mark of Cain, and anyone making that point is assumed also to be saying that this means that Darwinian evolution is as plausible an explanation of the diversity of life as flipping 500 Heads. I can only think that this is because the magic number "500" has entered ID mythology as a kind of shibboleth, so that anyone who expresses the view that a 500 bit event is possible is considered terminally gullible and/or so committed to Materialism that they will say that black is white rather than consider that anything other than chance is responsible for anything. Nothing could be further from the truth. Quite apart from anything else, I've said several times, and I'm sure others would agree, that 500 bits (22 sigma) is way too conservative an alpha criterion (i.e. the threshold for rejecting a null). I'd be suspicious at 3, and highly confident at 5. The problems with CSI aren't with the 500 bit threshold. You could lower it to 5, and it would still be a flawed measure, for the blindingly simple reason that Darwinian evolution is not a chance hypothesis. You can't slide a razorblade between what "Darwinists" think about the probability of 500 Heads being tossed fairly with a fair coin and what ID proponents think. It's a phoney war.Elizabeth B Liddle
June 30, 2013 at 01:50 PM PDT
William:
If bare possibility is enough to satisfy a Darwinist or materialists that 500 heads in row is sufficiently explained by chance, then there is no evidence that can be presented that can change their minds about either the fine-tuning of the universe or about Darwinistic evolutionary “explanations”. There will always be enough chance, for them, to fill in the gaps.
Can you cite a single "Darwinist or materialist" who has claimed that 500 heads is "sufficiently explained by chance" (whatever that is supposed to mean)? Has any poster said anything other than the equivalent of: almost any other explanation is more likely? If so, please link to the post. If not, please retract your generalisation.Elizabeth B Liddle
June 30, 2013 at 01:19 PM PDT
"I hope that makes my position clearer :) Cheers Lizzie" Yes it does Lizzy. Thanks for going through the effort to respond to my post! I understand your points of view better now and will try to keep all of that in mind when following your posts. (p.s., Please do not let those dishes go too long. Hard telling what might evolve in the sink. LOL!)bpragmatic
June 30, 2013 at 12:27 PM PDT