Just what is “chance”?
This point has come up as contentious in recent UD discussions, so let me clip the very first UD Foundations post, so we can look at a paradigm example, a falling and tumbling die:

2 –>As an illustration, we may discuss a falling, tumbling die:
Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.
But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]
{We may extend this by plotting the (observed) distribution of dice . . . observing with Muelaner [here] , how the sum tends to a normal curve as the number of dice rises:}
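To see that concretely, here is a minimal simulation sketch (plain Python; the trial counts and histogram scaling are illustrative choices, not from the original post) of what the plotted distributions show: as the number of dice summed grows, the distribution of the total moves from flat toward the familiar bell curve.

```python
# Minimal sketch: distribution of the sum of n dice, printed as a rough histogram.
import random
from collections import Counter

def dice_sum_counts(n_dice, trials=10_000):
    return Counter(sum(random.randint(1, 6) for _ in range(n_dice))
                   for _ in range(trials))

for n in (1, 2, 5):
    counts = dice_sum_counts(n)
    print(f"--- sum of {n} dice ---")
    for total in sorted(counts):
        print(f"{total:3d}: {'#' * (counts[total] // 100)}")
```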

Then, from No 21 in the series, we may bring out thoughts on the two types of chance:
Chance:
TYPE I: the clash of uncorrelated trains of events, such as is seen when a dropped fair die hits a table etc. and tumbles, settling to readings in the set {1, 2, . . . 6} in a pattern that is effectively flat random. In this sort of event, we often see manifestations of sensitive dependence on initial conditions, aka chaos, intersecting with uncontrolled or uncontrollable small variations, yielding a result predictable in most cases only up to a statistical distribution which need not be flat random.
TYPE II: processes — especially quantum ones — that are evidently random, such as quantum tunnelling, which is the explanation for the phenomenon of alpha decay. This is used, for instance, in Zener noise sources that drive special counter circuits to give a random number source. Such are sometimes used in lotteries or the like, or presumably in making the one-time message pads used in cryptography.
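As a small illustrative sketch of that last point (the message below is made up, and os.urandom merely stands in for a hardware noise source such as a Zener diode circuit), a one-time pad simply XORs a message with truly random key material, and XORs again with the same pad to recover it:

```python
import os

message = b"MEET AT DAWN"
pad     = os.urandom(len(message))   # stand-in for bytes from a hardware noise source

ciphertext = bytes(m ^ k for m, k in zip(message, pad))     # encode
recovered  = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decode with the same pad

print(ciphertext.hex())
print(recovered.decode())  # -> MEET AT DAWN
```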
{Let’s add a Quincunx or Galton Board demonstration, to see the sort of contingency we are speaking of in action and its results . . . here in a normal bell-shaped curve, note how the ideal math model and the stock distribution histogram align with the beads:}
[youtube AUSKTk9ENzg]
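For readers without video access, here is a minimal simulation sketch of the same Quincunx idea (the row and bead counts are illustrative): each bead makes a 50-50 left/right choice at every pin, and the bin counts pile up into the bell-shaped binomial (approximately normal) distribution seen in the clip.

```python
import random
from collections import Counter

ROWS, BEADS = 12, 5000

def drop_bead():
    # the number of "right" bounces over ROWS pins fixes the landing bin
    return sum(random.random() < 0.5 for _ in range(ROWS))

bins = Counter(drop_bead() for _ in range(BEADS))
for k in range(ROWS + 1):
    print(f"bin {k:2d}: {'#' * (bins[k] // 20)}")
```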
Why the fuss and feathers?
Because stating a clear enough understanding of what design thinkers are talking about when we refer to “chance” is now important given some of the latest obfuscatory talking points. So, bearing the above in mind, let us look afresh at a flowchart of the design inference process:
(So, we first envision nature acting by low contingency mechanical necessity such as with F = m*a . . . think a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, (iii) high complexity or information carrying capacity, usually beyond 500 – 1,000 bits. And for more context you may go back to the same first post, on the design inference. And yes, that will now also link this for an all in one go explanation of chance, so there!)
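A schematic sketch of that per-aspect decision logic (this is only an illustration of the flowchart described above, not an official implementation; the 500-bit default threshold is taken from the figures in the text):

```python
def explanatory_filter(high_contingency, tightly_specified, complexity_bits,
                       threshold_bits=500):
    """Per-aspect defaults: law first, then chance; design only on joint criteria."""
    if not high_contingency:
        return "mechanical necessity (law)"    # first default
    if tightly_specified and complexity_bits >= threshold_bits:
        return "design"                        # chance default overturned
    return "chance"                            # second default

print(explanatory_filter(False, False, 0))     # heavy object falling -> law
print(explanatory_filter(True, False, 3))      # fair die's uppermost face -> chance
print(explanatory_filter(True, True, 1000))    # long functional code string -> design
```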
Okie, let us trust there is sufficient clarity for further discussion on the main point. Remember, whatever meanings you may wish to inject into “chance,” the above is more or less what design thinkers mean when we use it — and I daresay, it is more or less what most people (including most scientists) mean by chance in light of experience with dice-using games, flipped coins, shuffled cards, lotteries, molecular agitation, Brownian motion and the like. At least, when hair-splitting debate points are not being made. It would be appreciated if that common sense based usage by design thinkers is taken into reckoning. END
But, chance is so vague and imprecise . . . ? How can you say you are using it as a legitimate scientific concept? (And so forth)
kairosfocus, I noticed that you delineated chance into two different forms, i.e. a pair of dice and quantum processes. I would suggest a more ‘scientific’ delineation, as would be pertinent to the ID vs Darwin debate, would involve delineating chance along the entropic and quantum boundary.
Randomness (Chance) – Entropic and Quantum
I think the whole Theistic Evolution issue, in which some Theists think God somehow guides evolution through ‘random chance and/or chaotic’ processes, hinges on the misapplication of the term ‘random chance’. A ‘random chance’ event in the universe is generally regarded as something lacking predictability to its occurrence or lacking a pattern to it. i.e. Generally the cause of the event is held to be unknown, but no one in their right mind would say that ‘nothing’ caused the random event to occur! But how, in a general sense, when an atheist invokes randomness as if he has issued a statement of final causality, is that any different from a Theist saying an event was ‘miraculous’, if the atheist says an event ‘just happened’ for no particular reason at all? Indeed, it has been observed by no less than the noted physicist Wolfgang Pauli that the phrase ‘random chance’, as used by biologists, is synonymous with the word ‘miracle’:
Talbott humorously reflects on the awkward situation between Atheists and Theists here:
Also of related interest:
Basically, if the word random (chance) were left in this fuzzy, undefined, state one could very well argue as Theistic Evolutionists argue, and as even Alvin Plantinga himself has argued at the 8:15 minute mark of this following video,,
,,, that each random (chance) event that occurs in the universe could be considered a ‘miracle’ of God. And thus, I guess the Theistic Evolutionists would contend, God could guide evolution through what seem to us to be ‘random’ events. Yet because of the synonymous nature of the two words, random (chance) and miracle, in this ‘fuzzy’, undefined state, this argument that random events can be considered ‘miraculous’, while certainly true in the overall sense, would nonetheless concede the intellectual high ground to the atheists. For, by and large, the word random, as it is defined in the popular imagination, is not associated with the word miraculous at all; it is most strongly associated with unpleasant events: ‘natural’ disasters such as tornadoes, earthquakes, and other such catastrophes. These are events that many people would prefer to distance God from in their thinking, or that many people, even hardcore Christian Theists, are unable to easily associate an all-loving God with (i.e. the problem of evil). Moreover, Darwinists, as Casey Luskin and Jay Richards pointed out in a disagreement with Alvin Plantinga, have taken full advantage of the popular definition of a ‘random event’ (as in the general notion of unpredictable tragic events being separated from God’s will) in textbooks, to mislead the public that a ‘random’ event is truly separated from God’s divine actions,,,
More notes along that line:
But, because of the advance of modern science, we need not be armchair philosophers forever and endlessly wrangling over whether the word random is synonymous with the word miraculous (all the while conceding the public relations battle to the Darwinists over the word ‘random’). We can now more precisely define exactly what the word random means as to a causal chain, so as to see exactly what a Darwinist means when he claims a ‘random’ event has occurred! ,,
In this endeavor, in order to bring clarity to the word random, it is first and foremost very important to note that when computer programmers/engineers want to build a better random number generator for a program, they must find a better source of entropy in order to achieve the increased randomness they desire for that program:
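A minimal sketch of that distinction in Python (illustrative only): a seeded software PRNG is fully determined by its seed, while the secrets/os.urandom interface draws on the operating system's entropy pool, which is where the "better source of entropy" comes in.

```python
import random
import secrets

# Seeded pseudo-random generator: the same seed gives the same "random" digits every run.
prng = random.Random(42)
print([prng.randint(0, 9) for _ in range(10)])

# OS entropy source: the kernel mixes in hardware events (timings, interrupts, etc.),
# so these digits differ from run to run and are suitable for cryptographic use.
print([secrets.randbelow(10) for _ in range(10)])
```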
Along that line:
Also of interest, not that computer programmers will ever tap into it, but the maximum source for entropy (randomness) in the universe is now known to be black holes,,,
In fact, it has been argued that Gravity arises as an ‘entropic force’,,
In fact, entropy is pervasive in its explanatory power for physical events that occur in this universe,,
In fact it was, in large measure, by studying the entropic considerations of black holes that Roger Penrose was able to derive the gargantuan 1 in 10^10^123 number as to the necessary initial entropic state for the universe:
But what is devastating for the atheist (or even for the Theistic Evolutionist) who wants ‘randomness’ to be the source for all creativity in the universe, is that randomness, (i.e. the entropic processes of the universe), is now shown, scientifically, to be vastly more likely to destroy functional information within the cell than to ever build it up. Here are my notes along that line:
,,having an empirically demonstrated direct connection between the entropy of the universe and the information inherent within a cell is extremely problematic for Darwinists because of the following principle,,,
and this principle is confirmed empirically:
Thus, Darwinists are found to be postulating that the ‘random’ entropic events of the universe, which are found to be consistently destroying information in the cell, are instead what are creating information in the cell. ,,, It is the equivalent in science of someone (in this case a ‘consensus of scientists’) claiming that Gravity makes things fall up instead of down, and that is not overstating, in the least, the bizarre situation we find ourselves in with the claims of atheistic Darwinists and Theistic Evolutionists.
It is also very interesting to note that Ludwig Boltzmann, an atheist, when he linked entropy and probability, did not, as Max Planck, a Christian Theist, points out in the following link, think to look for a constant for entropy:
I hold that the primary reason why Boltzmann, an atheist, never thought to carry out, or even propose, a precise measurement for the constant on entropy is that he, as an atheist, thought he had arrived at the ultimate ‘random’ explanation for how everything in the universe operates when he linked probability with entropy. i.e. In linking entropy with probability, Boltzmann, again an atheist, thought he had explained everything that happens in the universe on a ‘random’ chance basis. To him, as an atheist, I hold that it would simply be unfathomable that the ‘random chance’ (probabilistic) entropic events of the universe should ever be constrained by a constant that would limit their effects. Whereas on the contrary, to a Christian Theist such as Planck, it is expected that even these seemingly random entropic events of the universe should be bounded by a constant. In fact modern science was born out of such thinking:
Verse and Music:
But where do we delineate ‘quantum randomness’ from entropic randomness in all this? Well let’s add some perspective shall we?
Around the 13:20 minute mark of the following video Pastor Joe Boot comments on the self-defeating nature of the atheistic worldview in regards to absolute truth:
The scientist in the following video, who works within the field of Quantum Mechanics, scientifically confirms Pastor Joe Boot’s intuition and shows how conservation of energy in the universe requires quantum non-locality to be true in order for the universe to have coherence.
The difference between Quantum and Entropic randomness is that the ‘randomness’ of quantum mechanics is, unlike bounded entropic randomness (Planck), found to be associated with the free will of the conscious observer.
In the following video, at the 37:00 minute mark, Anton Zeilinger, a leading researcher in quantum teleportation with many breakthroughs under his belt, humorously reflects on just how deeply the determinism of materialism has been undermined by quantum mechanics by musing that perhaps such a deep lack of determinism in quantum mechanics may provide some of us a loophole when we meet God on judgment day.
This ‘unbounded random’ situation found in quantum mechanics is brought out a bit more clearly in this following article:
Personally, I felt that such a deep undermining of determinism by quantum mechanics, far from providing a ‘loophole’ on judgment day as Dr. Zeilinger was musing about, actually restores free will to its rightful place in the grand scheme of things, thus making God’s final judgments on men’s souls all the more fully binding since, as far as science can tell us, man truly is a ‘free moral agent’, just as Theism has always maintained. To solidify this basic theistic ‘free will’ claim for how reality is now found to be constructed at the quantum level, the following study came along a few months after I had seen Dr. Zeilinger’s video:
Moreover,
So just as I had somewhat suspected after watching Dr. Zeilinger’s video, it is found that there is indeed a required assumption of ‘free will’ in quantum mechanics (that measurement parameters can be chosen independently), and that it is ‘free will’ that necessarily drives the completely random (non-deterministic) aspect of quantum mechanics.,,, To further differentiate the ‘spooky’ randomness of quantum mechanics, (which is directly associated with the free will of our conscious choices), from the ‘bounded entropic randomness’ of the space-time of General Relativity, it is found that,,
But why should the random entropic events of the universe care if and when I decide to observe a particle if, as Darwinists hold, I’m supposed to be the result of random entropic events in the first place?
The following experiment goes even further in differentiating the entropic randomness of the space-time of the universe from the free-will randomness found in quantum mechanics. It is also very good in highlighting just how deeply the deterministic, no-free-will, materialistic view of reality has been undermined by quantum mechanics.,, Here’s a recent variation of Wheeler’s Delayed Choice experiment, which highlights the ability of the conscious observer to effect ‘spooky action into the past’. Furthermore, in the following experiment, the claim that past material states determine future conscious choices (materialistic determinism) is directly falsified by the fact that present conscious choices are in fact affecting past material states:
In other words, if my conscious choices really are just merely the result of whatever state the material particles in my brain happened to be in in the past (deterministic), how in blue blazes are my free will choices instantaneously affecting the state of material particles in the past? The preceding experiment is simply completely impossible on a materialistic/deterministic view of reality!,,, I consider the preceding experimental evidence to be a vast improvement over the traditional ‘uncertainty’ argument for free will, from quantum mechanics, that had been used for decades to undermine the deterministic belief of materialists:
Of related note as to free will and the creation of new information (of note: neo-Darwinian processes have yet to demonstrate the origination of new information!)
Of important note as to how almighty God exercises His free will in all of this:
Here is the last power-point slide of the preceding video:
Supplemental note:
Finding ‘free will conscious observation’ to be ‘built into’ our best description of foundational reality, quantum mechanics, as a starting assumption, and finding that it is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy, which demands that a ‘non-teleological randomness’ be the driving force of creativity in Darwinian evolution! Also of interest:
I once asked an evolutionist, after showing him the preceding experiments, “Since you ultimately believe that the ‘god of random chance’ produced everything we see around us, what in the world is my mind doing pushing your god around?”
Here are some of the papers to go with the preceding video and articles;
BA77:
Fair enough to raise such issues and concerns.
Our problem is, however, that we are dealing with determined sometimes ruthless zero concession objectors and onlookers tossed into confusion by clouds of rhetorical squid ink. So we have to start with the familiar, get things clear, and build out from there.
As used by the man in the street, the common relevant meaning of chance is what you get from fair dice or coins or things somewhat like that, or else by accident, what in my native land we call “buck ups.” (One can even have children by “buck ups.” That is, unplanned and obviously uncontrolled events. Here, the talk is “drive carefully, most people are the result of accidents.”)
A good example is that it is possible to use the phone book as a random number table, based on the lack of correlation between names, area of residence and line codes. Each of these is separately not mere happenstance at all, but because they lack correlation, the resulting pattern is sufficiently random for this to work. The same obtains for the lack of correlation between the decimal place value system and the ratio of circumference to diameter for a circle, which is why one can use blocks of digits from a million-digit value of pi to give effectively random numbers. And so forth.
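As a small sketch of the pi-digits idea (this assumes the third-party mpmath package is available; the block size and digit count are arbitrary illustrative choices):

```python
from collections import Counter
from mpmath import mp

mp.dps = 1010                        # compute pi to roughly 1000 decimal places
digits = str(+mp.pi)[2:1002]         # drop the leading "3." and keep 1000 decimals

blocks = [digits[i:i + 3] for i in range(0, 999, 3)]  # 333 three-digit "random" numbers
print(blocks[:10])
print(Counter(digits))               # digit frequencies come out roughly uniform
```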
We can then take this up to the thermodynamic level and the Maxwell-Boltzmann distribution; as I do in my discussion here, app 1 my usual linked note (which is in effect the implicit context for every comment I have ever made at UD):
So, chance and randomness enter even before we get to the quantum level, and they lead to entropy as a measure of, in effect, the lack of information about the micro level state (or the degrees of freedom at micro level) consistent with gross, lab level macro conditions. (This is the context of S = k log W, and of the perception that entropy often is an index of degree of disorder, though of course there are ever so many subtleties and surprises involved, so that is over simplified.)
When we get to quantum level phenomena, stochastic distributions of unknown root crop up everywhere. The nice crisp orbits of electrons fuzz out into probabilistically distributed orbitals. Potential barriers too high for classical cases can be tunnelled [NB there is a wave optics case on frustration of total internal reflection . . . ], etc etc.
For this case, I use the alpha emission radioactivity case as it is a classic and gives rise to a pretty easily observed macro effect, as we count with Geiger Counters or watch with ZnS scintillation screens etc. The random tiny greenish flashes are unforgettable. The counter chatter too.
The relevance of these is that we then see that chance is the inferred cause of highly contingent outcomes that tend to follow what we would expect from appropriate stochastic models. And, as a result, when the outcomes are sufficiently complex and especially functionally specific, to the point where we have deeply isolated islands of function in large config spaces, we may not be able to get a sample big enough to make it reasonable to expect to hit such islands blindly.
But routinely, e.g. posts in this thread, designers using intelligence, do so.
That marks a pretty sharp distinction and a reliable sign of design.
Which gets us back to the reason the explanatory filter works.
KF
In the flowchart: what do the abbreviations “Lo” and “Hi” mean?
PS: I should note that chance variations or mutations can be triggered by radioactivity. An alpha particle ionises water molecules, triggering interference with the genes through chemical processes that are accidental, uncorrelated. Non-foresighted variation results. A gene changes in an uncontrolled way through the resulting chemical reaction. Suppose the creature is not killed by being hit by a large dose. (Radiation Physics was a gruesome course.) It has a heritable variation. That feeds into the gene pool, again with all sorts of uncontrolled factors. A new variety pops up. Somehow, in some env’t, it is slightly advantageous. Less advantaged varieties then are outcompeted, and the population is modified. Continue long enough and voila, tree of life.
Big problems.
The only actual adder of info was chance. The natural selection part is really culling out of the disadvantaged for whatever reason. Mods do happen, but with the scope of searches for new body plans etc, 10 – 100+ mn bits, with reasonable pops, mut rates, reproduction rates and the step limit warranted empirically of 6 – 7 co-ordinated muts at one go, we do not have enough time, population or maybe even atoms to sufficiently explore the space of possibilities to give a plausible chance of getting to novel islands of function.
But of course, this is hotly denied by the Darwinists.
They need to provide empirical observation backed answers, and not beg questions by imposing a priori materialism.
Starting with OOL.
(You will remember my year long tree of life challenge, which does not have a really solid attempt, even though I put something together from various remarks. And OOL is the ROOT of the Darwinist tree of life.)
KF
Box, low and high. Low (or ideally no) contingency, high contingency in context. KF
PPS: Which of these is pi digits, which sky noise, which phone numbers (admittedly the local phone directory is a bit on the scanty side), and why does the pattern stand out so clearly at D and at E:
A: 821051141354735739523
B: 733615329964125325790
C: 698312217625358227195
D: 123409876135791113151
E: 113581321345589146235
(Ans: C — sky, A – pi, B – phone, last 2 digits each of line codes.)
Thx KF,
One more question: I understand contingent in this context as “depending on unknown events / conditions”. Do you agree?
Box:
That could work in many cases — especially if you include an indefinite number of uncontrolled perturbing events, but the more effective way is the direct empirical one: set up quite similar initial circumstances and see how the results come out.
Drop a die in the “same” position from a holder at a given height and place over a surface 500 times, and see the result. (Or try a Quincunx or Galton Board machine that simulates the Normal distribution, cf video. Notice the ideal model and the stock distribution histogram.)
Chance in action.
KF
kf, I certainly don’t want to take anything away from the explanatory filter. Nor do I take lightly the concerted effort at obfuscation that Darwinists continually employ towards the word ‘chance’ (and everything else in the debate for that matter). I just thought that you would appreciate the subtle, but important, distinction that is to be found between the random entropic events of the universe and the ‘unbounded’ randomness in quantum mechanics that results as a consequence of our ‘free will’ choice as to how we choose to consciously observe an event.,, Personally, although many i’s have to be dotted and t’s crossed to further delineate the two, I found the distinction between the two types of ‘chance’, uncovered thus far, to be fascinating.
Box,
I added the vid.
BA:
There are ever so many fascinating twists and turns out there indeed. I am however here trying to nail down a shingle on a fog bank so to speak.
I am thinking cavils may need to go on that growing list of Darwinist debate tactics and fallacies here.
And of course, tricks with chance and necessity.
KF
What is chance? Chance is nothing more than happenstance, accidental, i.e. not planned.
Not planned, not controlled, sometimes not controllable (by us).
Correct answer: A thru E are pi digits
Haha, good one Cantor!
But can you actually prove mathematically that ALL numeric sequences can be found in pi (versus other types of random numbers)? Think cryptography.
Just wondering. 😉
-Q
Querius,
It is probably true that pi, and in fact most numbers, contain all possible numeric sequences. It’s not possible to prove it, though.
Hmm,
This post seems to say precisely that “chance”, left hanging by itself, is too vague and imprecise a term to describe a cause.
As you say, we can test a chance explanation for a series of dice rolls, but only if we test a specific chance hypothesis (a fair die, rolled not placed). The post that started all that wasted energy and cross talk about “chance as a cause” very specifically didn’t present a specific chance hypothesis.
More to the point, what’s the appropriate probability distribution for, say, the evolution of a particular amino acid sequence given mutation, drift and natural selection? If you don’t have that, then how can you reject the “chance” hypothesis?
WD:
The issue at stake first, is what does “chance” mean. It answers, using dice as an illustration of one type. Quantum sources are also mentioned.
The matter is then extended to an illustrative chance mutation scenario.
Then the issue of searching config spaces comes in.
I get the feeling, this is no longer a familiar topic.
The issue is NOT what distribution we can construct and “mathematicise” over. That is irrelevant when we run into FSCO/I — due to the need for the right parts in a proper config to work — forcing small zones in the space of possible configs, and the scope of the config space being such that no search based on available atoms is able to sample a fraction appreciably different from zero.
Sometimes, there is just too much haystack, and too few, too isolated search resources to have hopes of finding needles.
For 500 bits and the gamut of the solar system, we can set up each of 10^57 atoms as a searching observer and give it a string of 500 coins to watch, updating every 10^-14 s, as fast as ionic chem rxns, for 10^17 s . . . a typical lifetime estimate. Impossibly generous, but the result is that the sample of the space of 3.27 * 10^150 possibilities for 500 bits is as a one-straw sample to a cubical haystack 1,000 light years thick, about as fat as our galaxy’s central bulge. Effectively no sample of a size plausibly able to find reasonably rare clusters of configs. Superpose on our galactic neighbourhood and you can predict the result with all but certainty: straw.
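A quick arithmetic sketch of those figures (the straw volume below is an assumed illustrative value, roughly that of a drinking straw; everything else follows the numbers in the comment):

```python
observers = 1e57        # atoms of the solar system, each watching a 500-coin string
rate      = 1e14        # observations per second (one every 10^-14 s)
duration  = 1e17        # seconds, a typical solar-system lifetime estimate
samples   = observers * rate * duration     # ~1e88 observations in all

space = 2.0 ** 500                          # ~3.27e150 possible 500-coin configurations
fraction_sampled = samples / space          # ~3e-63 of the space

straw_m3    = 1e-6                          # assumed volume of one straw, in cubic metres
haystack_m3 = straw_m3 / fraction_sampled   # haystack volume implied by a one-straw sample
side_ly     = haystack_m3 ** (1 / 3) / 9.46e15   # cube side, in light years

print(f"fraction of the space sampled: {fraction_sampled:.2e}")
print(f"equivalent haystack side: about {side_ly:,.0f} light years")
```

Run as-is this gives a cube a few hundred light years on a side, the same order of magnitude as the 1,000 light-year figure above; the exact number shifts with the assumed straw size.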
Doesn’t matter the precise distribution, unless it is in effect not chance at all but a directed search or a programmed necessity. Which would point straight to a design by fine tuning.
Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped for magic out of “natural selection” — which is really a distractor as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) — is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror image molecules have the same energy dynamics.
A toy example that gives an idea of the challenge is to think of a string of 500 fair coins all H, or alternating H and T, or coins with the ASCII code for the first 72 characters of this message. No plausible blind chance process is going to get such in any trial under our observation, with all but certainty. For the overwhelming bulk cluster of outcomes of coin tossing or blindly arrived at configs will be near 50-50 H and T in no particular pattern.
All of this and more has been repeatedly pointed out, but we must not underestimate the blinding power of an a priori ideology that demands that something much harder than this MUST have happened to get the ball rolling for life, and wraps that in the lab coat and demands that the only acceptable explanations will be those that start from blind chance and mechanical necessity.
And when it comes to body plans, we should note that to get to such we are going to need jumps of 10 – 100+ million bits of just genetic info, as we can see from genome sizes and reasonable estimates alike. The notion that there is a smoothly varying incremental path from a unicellular ancestor to the branches of the tree of life is not an empirically warranted explanation based on demonstrated capacity, but an ideological a priori demand.
Just as one illustration, the incrementalist position would logically imply that transitional forms would utterly dominate life forms; and after observing billions of fossils in the ground, with millions taken as samples and over 250,000 fossil species, the gaps Darwin was embarrassed by are still there, stronger than ever.
And, there is no credible observed evidence that blind chance and mechanical necessity on the gamut of our solar system can write even 500 bits of code. For our observed cosmos, move up to 1,000 bits.
The only empirically warranted source of such a level of code as 10 – 1,000 mn bits is design. And code is an expression of symbolic language towards a purpose, all of which are strong indicators of design.
Save to those locked up in an ideological system that locks such out a priori.
And of course all of this has been pointed out over and over and over, with reasons, days and weeks at a time, again and again.
But if there is an ideological lock out, there is a lock out. No amount of evidence or reasoning will shift that, only coming to a point where there is a systemic collapse that makes it obvious this is a sinking ship.
How do I know that?
History.
The analysis that showed how Marxist central planning would fail was done in the 1920s. It was fended off and dismissed until the system collapsed in the late 1980s. But it was important for some despised few to stand their ground for 60 long years.
In an info age sooner or later enough will wake up to the source of FSCO/I to make the system collapse. Just we have to stand ground and point to the fallacies again and again until the break-point happens.
And that is the context, WD, in which I took time to point out that whatever the obfuscatory rhetorical ink clouds that may be spewed to cloud the issue, what is meant by chance in the design inference is fairly simple, and not at all a strained or dubious notion.
KF
C & Q: You are technically right, but in fact the list of sources as given was the direct source of the sets of digits. D and E were constructed: D notionally, E based on the Fibonacci series. You would probably have to get 10^22 or so digits of pi to be fairly sure that you would catch 21-digit numbers, and I think you would need to get a supercomputer to search. KF
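As a rough check of that "10^22 or so" estimate (treating pi's digits as independent uniform digits, which is an assumption, not a theorem): a specific 21-digit block starts at any given position with probability 10^-21, so the chance of finding it somewhere in the first N digits is roughly 1 - exp(-N/10^21).

```python
from math import exp

HIT_PROB = 1e-21                    # chance a given position starts the 21-digit block

for n_digits in (1e21, 1e22, 1e23):
    p_found = 1 - exp(-n_digits * HIT_PROB)
    print(f"searching {n_digits:.0e} digits: P(found) ~ {p_found:.3f}")
```

At 10^21 digits the odds are only about 63%; at 10^22 they exceed 99.99%, which matches the "fairly sure" wording.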
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn’t fall out of a “prebiotic soup” as a result of amino acids bumping into each other.
But no one (that I know of) thinks that would happen. So what’s the point? If we want to test the plausibility of a particular origin of life scenario we need to understand that particular hypothesis; this 500-bit business isn’t going to account for all “chance” hypotheses.
WD:
Did you observe the following remarks just above?
Kindly explain to me how these constitute IGNORING “natural selection.”
To highlight:
1 –> In the warm pond or the like, until you have encapsulation and gating, diffusion and cross-reactions will break up reaction sets.
2 –> want of homochirality will break up the key-lock fitting.
3 –> Until you have a metabolic automaton joined to a code based self replicating entity within the encapsulated system, you do not have cell based life. And the speculative models mutually ruin one another. That is why OOL is empty just so stories at popular level and increasing crisis at technical level.
4 –> No self replication, no reproduction, and no differential reproductive success leading to subtracting out the less fit varieties.
5 –> There is a tendency to reify “natural selection” and treat it as if it has creative powers. This is an error, that part of the Darwinian model SUBTRACTS info, it does not add it. Differential reproductive success leads to REMOVAL of the less fit.
6 –> Let’s write as an expression:
7 –> That is:
CV – LRSV –> IDWM = Micro Evo –> BTPD –> Macro Evo
8 –> As the minus sign emphasises, the ONLY adder of info and organisation is CV. And, on empirical studies the steps are up to maybe 6 – 7 bases.
9 –> Blend in reasonable generation times, mut rates, pop sizes etc, and very modest changes will require easily hundreds of millions of years. We have 500 – 600 MY or so since the Cambrian. And if fossil dates are taken, we have several MY to a few dozen MY to account for HUGE body plan changes.
10 –> And that is assuming a smooth incremental path, so that incremental transitions do the job. The evidence is missing, and there is reason to believe that body plans exist in islands of function.
11 –> If you doubt this, simply think of the many hundreds of protein fold domains that have only one or a few members and are locked away in islands in amino acid chain space. That is the first building brick to a new body plan.
So, what we plainly have is a mechanism that might explain minor adaptations forced to punch far above its weight because of the a priori materialism imposed under the guise of a modest methodological requirement.
And no, I have definitely not “ignored” natural selection.
KF
PS: And simply repeating talking points about all chance hyps is not going to make the challenge go away: your chance-based search process, no matter how powerful, is not going to beat 10^57 observers updating every 10^-14 s for 10^17 s, so it cannot account for even finding islands of function in a space of configs for 500 bits, a toy-sized space compared to that for a genome of 100,000 bases or increments of 10 – 100+ mn bases. If anyone has been “ignoring,” WD, it is you.
I’m not repeating talking points. I really don’t see how you can asses the plausibility of a hypothesis without specifically calculating the probability of that hypothesis.
I can think of many chance processes that create a 500 H or T sequence of coins (or equivalent). You seem to be saying no chance hypothesis could ever explain that?
You also, as far as I can tell from all the “->” business, fail to grasp that natural selection makes the available sequence space much smaller than the theoretical one.
Selection won’t work for OOL.
Symbolic information processing must, as a matter of principle, be decoupled from physics and chemistry, much like the symbolism of heads/tails is decoupled from mechanical considerations. That’s exactly why all heads stands out as a design pattern: it violates experimental expectation. Symbolic organization in the first life will also violate experimental expectation from a prebiotic soup, on both theoretical and empirical grounds (i.e. dead dogs stay dead dogs).
Even granting for the sake of argument Darwinian Selection actually works as advertised, it cannot solve the OOL problem, which is quite severe.
Of course, our estimates of distribution could be wrong, but what’s wrong with a falsifiable hypothesis? That’s a good thing.
@ KF
A very helpful post indeed. It’s amazing how even simple concepts are made outrageously esoteric in the defense of Darwin. Yeesh!
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that not an abstraction.
The questions for the OOL are about whether metabolic processes can start spontaneously, under what conditions self-replication arises and what reactions can give rise to “precursor” molecules. We should consider those questions (which, of course, remain largely unanswered).
I respect that you feel that way, and that highlights a conflict between the ID and non-ID camps that is not just metaphysical. ID proponents are disproportionately individuals in the IT industry. They see life as an information processing, software intensive system.
Developmental mechanisms, DNA translation, regulation, are information intensive. Yes, physics and chemistry are involved just like physics and chemistry are involved in the hardware of a computer, but the software doesn’t come from chance and law mechanisms, it comes from intelligence.
DNA software is critical to making proteins, and proteins are critical to making DNA, but that becomes a chicken-and-egg problem. The OOL problem is one of building both the hardware and software simultaneously before chemical degradation blows apart the precursors.
We could do that too, and it shows the expected evolution of the chemicals is away from a living organism, not toward one. We look at any dead creature, if it’s not devoured and decomposed by other creatures, the experimental and observational expectation is the collection of dead parts will become even more dead over time — less likely to ever become living again. If that happens with organisms that were alive, how much more will life not arise in a pre-biotic soup. Real chemical evolution is toward death.
Like 500 fair coins heads, the origin of life seems to be at variance with theoretical expectation from basic chemistry and physics. We might ascribe expectation to a chance process or whatever, but OOL seems deeply at variance with expectation. It doesn’t mean life is impossible any more than all-coins-heads is impossible, it just doesn’t seem consistent with expectation of a mindless process.
WD,
It’s 8:00 o’clock in my time zone, on New Year’s Eve. I have some obscure old jazz on the box; the house looks like an amalgamation of a family deli and a package store, and the Mrs is floating around the house to the music as guests arrive at the door. I obviously don’t have time at the moment to address your comment, but suffice it to say, with all due respect to you, you simply have not studied the issue to the level required to comment on it. Otherwise, you would never have said what you just said.
Happy New Year to you and yours.
wd400 says:
Can you assess whether or not it is plausible that the molecule configuration you are looking at right now (the configuration of the molecules that make up the pixels in your viewscreen) was not generated by an unseen, intelligent agent, but rather was generated by chance (undirected) interactions of chemical properties according to physics?
Answer: Yes, you can make such an assessment: it is not plausible.
Question 2: Can you specifically calculate the probability that this configuration of molecules could have been generated by chance interactions of chemical properties according to physics?
No?
Hmmm.
Sal,
I write software myself, and have long noted the over-representation of IT folks in ID circles (along with engineers). Biology is always beset by people from other fields who think their own field, be it IT, engineering or physics, provides the best way to do biology. Very rarely, these people learn enough biology to make a genuine contribution. I’ve yet to see an IDist ITer who falls into that category.
We won’t agree on that, which is fine, but I prefer to focus on the science rather than the abstraction.
Upright,
I hope you feel better having made this contribution,
William Murray,
I’m not someone who cares that CSI conditioned on evolutionary processes can’t be calculated. And I can’t calculate the specific probability that “just physics” would create a laptop screen. The point is, and it is hard to believe I’m still having to make it, that you need to know precisely what the chance hypothesis is. If it’s evolution by mutation, selection, drift, speciation and the like, then it’s a different question than atoms bumping into each other.
WD400
It seems to me that the root of biology is basically abstract chemistry (protein synthesis, etc.), and the resulting ecology and economy of nature that emerges is abstract. How can you do biology and avoid abstractions?
It’s certainly true that you have to find the right frame in which to study questions in biology. You’d get nowhere trying to study ecological interactions if you started with molecular biology.
The point I didn’t quite make is that very often physicists/engineers/software people coming to biology mistake the map for the territory. As I say, if you want to study the origin of life then you need to do chemistry.
wd400:
I said nothing about CSI or evolution with natural selection.
You said you don’t see how one can make a claim about the plausibility of a thing unless they can compute the probability. My example was to show that it is indeed possible to know a thing is implausible without formally computing the probability.
If you found the verbatim text of War and Peace written out in molecules in the DNA of an ancient ant embalmed in amber (tip of the hat to Dr. Liddle), is it a plausible hypothesis that it got there by any non-intelligent process? Can you compute the probabilities?
Knowing the formal probability is not necessary to reach a reasonable finding of the plausibility of a hypothesis.
Yes. My point was you don’t need to calculate a precise number but you do have to consider the specific chance hypothesis.
The probability of a given protein function is very different under evolutionary mechanisms than by atoms bumping into each other, surely?
WD:
Pardon, but the notion that you must calculate the specific probabilities of a particular hypothesis before being able to see it as implausible is both a now common objecting talking point to the design inference and one that has been a failure from the outset. Whether or no you are independently putting it up, that remains the case.
The basic fallacy in it is the underlying assumption that chance can account for anything given enough time and resources. In my student days, a common way to put it by evolutionists was to recite what seems to be an urban legend that in a debate with Bishop Wilberforce (or the like) Huxley gave the example that enough monkeys typing for enough time could come up with the text of Shakespeare’s plays, or just that of Hamlet.
This has recently been multiplied by the idea that since every individual configuration of 500 tosses of a fair coin is equiprobable, we should be no more surprised to see (i) 500 H (or the same with tails), or (ii) alternating H & T, or (iii) the ASCII code for the first 72 characters of this post, than any other. (NB: This last shows how coin tossing is informationally equivalent to the monkey typing exercise.)
Nonsense.
What happens is that we have clustering of sets of possible outcomes, so that we are interested above in a set of just five possibilities . . . or something substantially similar. Everything else in the config space of 2^500 ~ 3.27*10^150 possibilities will not be an event from our clustered zone of interest. And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/ [(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities, in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72 character string of text in reasonable English. Very large but absolutely overwhelmed by the nonsense strings in no particular order.)
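A quick exact check of that clustering claim, using integer arithmetic (the +/- 50 window is the illustrative "near 50-50" band):

```python
from math import comb

space      = 2 ** 500                                    # ~3.27e150 sequences in all
exact_5050 = comb(500, 250)                              # 500!/[(250!)^2], ~1.17e149
near_5050  = sum(comb(500, k) for k in range(200, 301))  # within +/- 50 heads of 250

print(f"exactly 250 H:         {exact_5050 / space:.4f} of the space")
print(f"within +/-50 of 250 H: {near_5050 / space:.6f} of the space")
print(f"a 5-member target set: {5 / space:.2e} of the space")
```

The exact 250-heads count alone is a few percent of the whole space, the +/- 50 band is over 99.999% of it, and the five-member target set is of order 10^-150.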
If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step.
Clearly — with the Quincunx, you can WATCH it happen — there are some net outcomes that are vastly improbable relative to others. Some zones of interest [notice the neat little columns . . . ] are much less likely to be hit by a random chance process than others.
Empirical fact, easily seen.
(And, the fact that stock returns also fit the pattern pretty well should give warning about investment schemes that effectively promise the heavens. Unless you have very good internal knowledge and warrant, such schemes should be viewed as too good to be true. But if your name is Bill Gates c. 1980, you already know the profs have nothing to teach you . . . even if very few will believe you. [Give me your address to bill you for advice you need to heed financially, even if you refuse to heed it on your worldview commitments, where, whether or no you believe it, as Pascal — a founder of probability theory — warned so long ago now, you are wagering your soul.])
The 500 coin toy challenge therefore gives us a picture that is reasonable: finding zones of interest in large config spaces. Just like, how Shakespeare’s entire corpus is in that space, 72 characters at a time. So, it should be “simple” for our imaginary monkeys flipping coins or sitting at keyboards to hit the right keys or flips, nuh?
We shouldn’t be surprised then to see the text of Hamlet by happy chance!
Rubbish.
Patent absurdity.
For the very same reason why Bill Gates and co paid programmers to intelligently design their operating systems and office software instead of buying banana plantations and running an operation based on millions of monkeys busy at keyboards or at coin flipping. The needle in haystack search challenge makes nonsense of such a notion. (Cf. here earlier in the ID Foundations series on monkeys, keyboards and search space challenges. Please read the onward linked 2009 Abel paper on the universal plausibility bound. This addresses islands of function as implied by the requisites of getting multiple components to be properly matched, fitted together and organised to achieve function. Also, protein synthesis can be seen as a clear example of an algorithmically controlled, information based process with Ribosomes as NC assembly machines, here on in context. Ask yourself, if proteins in functional clusters can so readily form and work in Darwin’s pond or the like, why then is the cell such a Rube Goldberg-ish contraption, going the looooong way around to do what should be ever so simple? Or, is it a case where this is the type of factory it takes, much as we see in a pharmaceuticals plant that makes, in much less elegant and far more resource-intensive ways, bioactive compounds. Or, in an aircraft assembly plant.)
Let’s go back, to my remarks at 21 above to see what happens when we substitute for monkeys the 10^57 atoms of our solar system working as impossibly fast 500-coin flippers [500 coins flipped every 10^-14 s, as fast as fast ionic rxns . . . organic rxns are orders of magnitude slower], for 10^17 s:
Why so much stress on a toy example? Let me cite the clip from Wiki’s Infinite Monkeys article in IDF # 11:
That is, we are dealing with that which is empirically effectively impossible of observation on the gamut of our solar system, or by extension to just 1,000 bits or coins, our observed cosmos. Our ONLY observed cosmos . . . to try to drag in a speculative multiverse at this point is to traipse into philosophy, which means we have a perfect right to demand a full bore comparative difficulties analysis across major families of worldviews, cf. here on in context.
Now, in Darwin’s little pond or the like, we will have racemic [near 50-50] mixes of various relevant molecules, and a great many others that are not so relevant, plus water molecules that will readily hydrolyse and break up chains. We are dealing with a challenge of functionally specific complex organisation and associated information [FSCO/I] that dwarfs the challenge of aircraft assembly or building a pharmaceuticals plant. That can be readily seen from the biochem flowchart for the metabolic pathways of the cell, e.g. here.
The point here is that a complex, functionally specific organised entity has an implicit wiring diagram “blueprint” that can be represented as a chain of coded bit strings. Just ask the makers of AutoCAD. And just ask anyone who has had to do the development of a microprocessor controlled system from the ground up, hard and soft ware, whether he would trust monkeys at keyboards or flipping coins to solve the organisation challenge involved . . . as one such, I readily answer: no, thank you. Worse, the key entity in view includes an additional facility, a code based von Neumann kinematic self replication function. As Paley long ago pointed out in his Ch 2 example that somehow almost always gets omitted in the rush to dismiss his point, a watch that is self replicating is even more evidently a wonderful contrivance than a “simple” time-telling watch.
That means — surprise (NOT) — the coin flipping monkeys and atoms toy example ALSO covers the origin of such organisation. And so, we begin to see the magnitude of the challenge. Which, Denton aptly summarised:
In short, it should be clear why the living cell is an information rich system, and why its claimed spontaneous origin — a hugely contingent process — falls under the coin-flipping exercise challenge.
And if you imagine that I exaggerate the degree to which the various schools of thought on OOL have deadlocked to mutual ruin [and if you imagine you can dismiss me as wanting to get to a living cell in one step . . . ], let me cite an exchange from a few years back between two major spokesmen for the genes-first and metabolism-first schools:
In short, RNA components can be hard to synthesise, and chains are vulnerable to hydrolysis through which water molecules split up the chain. Worse, to get them to functionally chain to relevant lengths and code for, then join up with functional proteins on say a clay surface then inside a suitable protective membrane, is problematic. And, it is similarly problematic to get the clusters of functional bio-molecules to carry out typical metabolic processes.
No wonder, then, that science writer Richard Robinson has noted that neither main model — despite what we may sometimes read to the contrary in more popular writings or see and hear in “science news” articles (or even, sadly, textbooks) — is “robust.”
The “simple” cell ain’t, and the “simple” pathway up from an autocatalytic reaction set or some spontaneously formed RNA or some cometary debris etc etc isn’t.
Functionally specific, complex organisation and associated information are not going to be had on the cheap. Just ask Bill Gates. There is no free lunch.
And, that is why in the book of that name, Wm A Dembski, UD’s founder, went on record:
There is no free lunch, WD, and no chance hypothesis that genuinely is a chance-driven hyp is going to do better than the 10^57 500-coin flipping atoms of our solar system at it for 10^17 s. Such an ideal case is woefully inadequate, able to sample no better than one straw to a cubical haystack 1,000 LY across of the 3.27*10^150 possibilities for just 500 coins, 72 or so ASCII characters worth of info. Which, as an experienced programmer, you know can do very little by itself. The genomes for living forms start out at 100 – 1,000 kbits, and easily go on into the billions, with major body plans requiring increments of 10 – 100+ millions, dozens of times over. The resources to do that incrementally in realistic populations just are not there, which is the message of the 10^57 coin-flipping atoms.
So, you need to squarely face the implications of Lewontin’s a priori materialism, and how it biases the ability to address the real challenge of origins of the world of life:
Philip Johnson’s retort in Nov that year was richly deserved:
WD, please, go flip 500 coins for a bit and see what happens, watch the Quincunx in action, and then ponder the thought exercise of 10^57 coin flipping atoms. Then, ponder what that is telling us about the credible — and only observed — source of FSCO/I: intelligence.
Then, reflect on the FSCO/I in the living cell and in the many complex body plans for life forms, and ask yourself whether you REALLY have empirical observational evidence that backs the claim that blind chance and mechanical necessity suffice to explain what you see, and why.
Enjoy the new year.
KF
wd400:
In case you are interested, exactly two years ago I started a long exchange with Elizabeth Liddle about how it is possible to model the neo darwinian algorithm, including both the RV and NS part, and using dFSCI.
You can find that discussion here:
http://www.uncommondescent.com.....selection/
It starts more or less at post 137, with my “happy new year” for 2012 (not relevant to the discussion 🙂 ), and goes on to post 223. The most relevant part starts at post 194.
In brief, I try to define exactly how the random part of the algorithm can be evaluated in specific hypotheses, how dFSCI can be computed for a specific protein, and how explicit paths of NS would modify the computation.
I discuss also the non relevance of genetic drift to the computation of probabilities, which is a point often misunderstood.
If you are interested in discussing any aspect of that, I am here.
A happy new year to you GP and to you all. There have been some very interesting turn-of-the-new-year discussions here at UD; I link one in the above, too.
F/N: BTW, 10^57 atoms flipping coins and having tables to flip them on, would probably just about exhaust the atomic resources of the observed cosmos. KF
KF:
Happy new year to you and to all! 🙂
And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/ [(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities, in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72 character string of text in reasonable English. Very large but absolutely overwhelmed by the nonsense strings in no particular order.)
If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step.
It’s interesting, isn’t it, that all your examples have very specific chance hypotheses associated with them. You’d get different behaviour with different chance mechanisms (skewed distributions with loaded dice, endlessly increasing variance with random walks…).
What I don’t follow is your leap from these well defined examples to all chance hypotheses. As I say, I can think of several chance mechanisms that stand a good probability of creating a string equivalent to 500H, a random walk being one such.
WD:
The constraint is not the specific probability distribution, until it becomes so biased that it effectively isn’t chance anymore but mechanical necessity or programmed necessity — loading on steroids.
The challenge is that you are blindly (needle in haystack) sampling from a very large config space . . . much worse than the toy example given. Where, as the old physicist’s joke about two drunks looking for lost contacts under a lamp puts it: after a time A says to B, are you SURE you lost your contacts here? B says, no, they were lost over there in the dark but this is where the light is. This is usually told about cases such as why we study ideal gases instead of more realistic ones. The complexity shoots way up real fast. (E.g. a rough cut calc for just the exact 50-50 split gives about 1.2 * 10^149 out of 3.27*10^150, and the nearby splits within +/- 50 of 250 heads are going to catch most of the rest. By contrast, we were looking to get one of five possibilities. Effectively, zero.)
That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation.
But at this point, you are simply turning yourself into yet another study in the art of avoiding the material point, which is that there are very good needle-in-haystack grounds for seeing why special clusters deeply isolated in vast config spaces are not going to credibly come up by blind chance. Essentially for the same reasons why systems free to move at micro level so strongly tend to entropy-maximising equilibrium clusters.
Maybe I am a glutton for punishment, as the Bajans say, but I will give a short outline of a case that brings the relevant factors to bear, via a closer-to-reality toy example. This updates an example from my note, app 1.
We take a 1 cu m vat, with some unspecified fluid, good enough that layering effects a la Boltzmann’s atmospheric distribution will not be relevant.
I trust the point is clear enough.
Organising work demands explanation in terms of forces credibly able to carry it out within the atomic and temporal gamut of the solar system or the observed cosmos. The only such empirically warranted force capable of generating FSCO/I is design.
If you wish to deny this, kindly provide empirically grounded warrant.
On my side, the very posts in this thread, which are bit-string equivalent, are cases in point. The only credible explanation for a post in English that makes sense in the context of the thread is design.
So are the computers we are using, which, as we saw, WLOG reduce informationally to strings.
And so forth.
KF
And there you go:
“That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation.”
You assume that which you need to establish. If you could show that protein functions are “isolated islands” in sequence space you wouldn’t need any of this information stuff to show evolutionary theory as it stands can’t explain the biological world. It would be bloody obvious. That’s the question you want to get at, but to do that you’d need to do some biochemistry (and not Axe’s protein croco-ducks).
Anyway, I’m pretty sure we are wasting each other’s time now. Nothing will dissuade you from the idea that you are right about this, and I see nothing of interest in your ideas.
Yes, I did. Thanks.
All symbol systems operate in material objects, regardless of their provenance or purpose. Biology is no different.
The fact remains that biology requires physical effects to be produced from the translation of recorded information. The cell cannot be organized without it (and Darwinian evolution would not exist). This phenomenon requires a unique set of material conditions, which do not occur anywhere else in the physical world except during the translation of recorded information. And the central feature of those conditions requires a local discontinuity to be instantiated within the system between the medium of the information and the effects produced by its translation. The translation of information during protein synthesis (like any other form of recorded information) is not reducible to the material that makes up the system. It can’t be, or the system would not function. Therefore, far from these facts obscuring the issues, it is pointless to talk about the origin of biology without them (i.e. the system’s material requirements) on the table.
WD: Did you actually read the part of the post in which I explained by instructive example, of a vat with diffusion vs nanobots? KF
PS: And I don’t need to show the case for proteins; just look up singleton protein fold domains and the like. As I noted already, there are many such islands of function that can be shown to exist in the biological world. I gave the example of a vat and a jet with a million 1-micron parts, so that we can have something more amenable to analysis. It may smoke your calculator to try, but you can work out the number of ways 10^6 parts can be arranged among 10^18 cells. You can then multiply by the possible arrangements of orientation etc. [at a simplistic level, a cube has six orientations in a grid, and for each of those the whole may be different . . . ], and by the types of parts of various kinds, etc. All of these will simply add to the point already made.
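(A rough Python sketch of that calculator-smoking count; the 10^6 parts and 10^18 cells come from the comment above, while reporting only logarithms and treating the parts as distinguishable are my own simplifications:)

    from math import lgamma, log

    CELLS = 10 ** 18   # 1-micron cells in the 1 cubic metre vat
    PARTS = 10 ** 6    # 1-micron parts of the micro-jet

    # ordered placements of distinguishable parts into cells: roughly CELLS^PARTS,
    # reported as a power of ten because the number itself is unmanageably large
    log10_ordered = PARTS * log(CELLS, 10)
    print(f"ordered placements: ~ 10^{log10_ordered:,.0f}")

    # just choosing WHICH cells are occupied, ignoring order: C(10^18, 10^6)
    log10_choose = (lgamma(CELLS + 1) - lgamma(PARTS + 1) - lgamma(CELLS - PARTS + 1)) / log(10)
    print(f"C(10^18, 10^6):     ~ 10^{log10_choose:,.0f}")

Either way the count comes out in the range of ten to the tens of millions, dwarfing the 500-coin example, which is the point of moving to the vat.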
UB: I think your contributions are very valuable and that in an information age more and more will see the point. I also suspect that for people who make the error on the meaning of reification we have been seeing, “abstract” is another word for not real, i.e. they are materialists of the crudest sort, or are confused by that self-refuting scheme of thought. KF
I did read the short play about nanobots. Singleton protein folds don’t prove that proteins form unreachable islands in sequence space – that’s croco-duck thinking.
wd400 pondered in 19
You missed my point. Just as all infinities are not the same, I’m wondering whether all irrational numbers are not the same as well.
If I pick a random point in space, its coordinates will (with overwhelming probability) be irrational numbers. Since my picking the point was randomized by my unsteady hand, it seems reasonable to assume that the digits are also random, and will contain every finite numeric sequence. However, numbers such as pi and the square root of 2 are computed, and might not behave the same way.
Before you dismiss this idea as silly, remember that the measured ratio of circumference to diameter varies with the curvature of space and the size of the circle. I can easily imagine a curvature and a circle of given size for which the measured value of “pi” is exactly 3.00000… and, if the circle were large enough, it could be as small as 2.00000…
Because we’re living in gravitational energy wells, the measured value of that ratio will vary depending on the direction in which the diameter is measured.
-Q
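(Querius’s point about the measured ratio can be put in a few lines; the sphere below is simply a stand-in for a positively curved space, my choice of illustration rather than anything Querius specified:)

    from math import pi, sin

    def measured_ratio(r_over_R):
        """Circumference / geodesic diameter of a circle of geodesic radius r
        drawn on a sphere of radius R (argument is r / R)."""
        return pi * sin(r_over_R) / r_over_R

    for x in (0.001, 0.5, 1.0, pi / 2):
        print(f"r/R = {x:6.3f}: C/d = {measured_ratio(x):.5f}")
    # tiny circles give ~3.14159; a circle as large as the equator (r = pi*R/2) gives exactly 2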
F/N: A first place to look on the isolation of functional forms in protein sequence space: Axe, 2004. KF
PS: A second place to look on singletons and implications. KF
wd400 @23:
Then what do they think would happen?
wd400:
Umm, natural selection doesn’t do anything, so there isn’t anything to ignore.
Joe and EA:
Of course I did not ignore NS, but rephrased it in more descriptive, accurate terms, which show that it is just another way of saying that less fit varieties die out; so NS, if anything, actually subtracts information.
This leaves chance variation as the source of info, the only source.
With the hope that there is an easy, fast, incremental path up the back slope of Mt Improbable.
The real problem is that this is a description of micro-evolution, grossly extrapolated to macro-evolution without empirical, observational warrant for a vast continent of function that can be achieved incrementally in a branching-tree pattern. Hence the significance of missing transitional fossils as a dominant feature of the record, and instead the pattern of sudden appearance, stasis, and disappearance. Not to mention molecular evidence such as singleton proteins in isolated fold domains, etc.
Then we have the problem of the pop genetics and the pacing of the process.
But all of this then comes back to: what is chance capable of, in the given time and with the given scope of atomic materials?
Where the limit of 500 bits comes from the solar system’s 10^57 atoms, each making observations of 500-coin-toss exercises every 10^-14 s, for 10^17 s. That is like sampling one straw from a cubical haystack 1,000 light years on a side: an effectively zero sample relative to the task of finding isolated, narrow zones or islands of function.
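(The 500-bit resource bound in the paragraph above reduces to a line of arithmetic; the three powers of ten come straight from the comment, and the short script below only multiplies them out:)

    ATOMS    = 10 ** 57   # atoms in the solar system (order of magnitude)
    RATE     = 10 ** 14   # observations per atom per second (one every 10^-14 s)
    DURATION = 10 ** 17   # seconds of observation time

    observations = ATOMS * RATE * DURATION   # ~10^88 samples available
    config_space = 2 ** 500                  # ~3.27 * 10^150 possible 500-coin outcomes

    print(f"max observations: {float(observations):.2e}")
    print(f"config space:     {float(config_space):.2e}")
    print(f"fraction sampled: {observations / config_space:.2e}")   # ~3 * 10^-63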
But the objectors look like they will go down with the ship of denial, never mind the implications of the thought exercise of a microjet assembled in a vat that shows why FSCO/I will be very rare in a realistic config space.
Nah, it’s just a play of words — nope, it is a gedankenexperiment — and we are not interested anyway.
Well, it seems there may have been a real demonstration with dropped musket and cannon balls at Pisa; the objectors, whose side predicted a 60-ft lag of the lighter ball behind the heavier one, then crowed that a 2-inch lag meant they could dismiss Galileo’s overall point.
H’mm . . .
KF
F/N: The wiki cite is accurate, at least to the book — went to the local library, which I knew had a copy, and read esp. pp 19 – 20. It looks like in old age Galileo told a student of the incident, and it has been passed down. There are many back-and-forths over the issue, involving inter alia problems with releasing the balls at the same time. KF
Even if the lower edges of the two balls were at identical heights at the point of release, if we’re talking about a lag of 2 inches, there is also the question of the center of gravity of the objects in question. Presumably the edge of the cannon ball is 2 or more inches from its center of gravity. I don’t know if it would have an impact over that short of a distance, but it could.
As could air resistance, updraft, crosswinds, etc. Which is why subsequent experiments with the feather are conducted in a vacuum.
UB @46:
Thanks for always keeping things focused on the heart of the matter. The desire to squirm out of the discussion of information and symbols and into vague, general assertions of things like “natural selection” is very tempting. Keep up the pressure.
—–
And KF, thanks for a valuable post.
wd400:
I for one would personally be very interested to know what mechanisms you have in mind. Can you describe one for us that would create a complex, specified string? (Keeping in mind that 500H is not complex and is most likely explained as a result of necessity, so we really need to talk about a complex, specified string.)
Eric #59,
Thank you Eric. I think the opponents of design are trying to defeat the observations by ignoring them. At least that appears to be WD’s strategy.
🙁
F/N: Notice how the objections about how vague “chance” is have suddenly vanished? Without any acknowledgement that there is a point here? As in, the zero-acknowledgements, zero-concessions, zero-apologies tactic in action. Mentally bookmark that tactic for future reference.
And of course, let us remember the triple-context:
The shoe pinches a bit on the other foot, nuh.
KF
wd400:
You may have missed my prior comment in the thread, so thought I would ask again:
If OOL theorists don’t think that functional proteins result from amino acids bumping into each other, then how do they think functional proteins came along?