Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 1a: What is “Chance”? (a rough “definition”)


Just what is “chance”?

This point has come up as contentious in recent UD discussions, so let me clip the very first UD Foundations post, so we can look at a paradigm example, a falling and tumbling die:

A pair of dice showing how 12 edges and 8 corners contribute to a flat random distribution of outcomes as they first fall under the mechanical necessity of gravity, then tumble and roll influenced by the surface they have fallen on. So, uncontrolled small differences make for maximum uncertainty as to final outcome. (Another way for chance to act is by quantum probability distributions such as tunnelling for alpha particles in a radioactive nucleus)

2 –> As an illustration, we may discuss a falling, tumbling die:

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

{We may extend this by plotting the (observed) distribution of dice . . . observing with Muelaner [here], how the sum tends to a normal curve as the number of dice rises:}

How the distribution of values varies with number of dice (HT: Muelaner)
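A quick way to see this for oneself is to simulate it; the short Python sketch below (standard library only, with an arbitrary trial count chosen purely for illustration) prints a crude text histogram in which a single die stays flat while the sum of several dice bunches up around the middle, much as the figure indicates:

```python
# Simulate sums of fair dice and print rough text histograms.
# As the number of dice summed rises, the distribution of the total
# narrows toward the bell shape predicted by the central limit theorem.
import random
from collections import Counter

def dice_sum_counts(n_dice, trials=100_000, seed=1):
    rng = random.Random(seed)
    return Counter(sum(rng.randint(1, 6) for _ in range(n_dice)) for _ in range(trials))

for n in (1, 2, 5):
    counts = dice_sum_counts(n)
    peak = max(counts.values())
    print(f"\n{n} dice:")
    for total in sorted(counts):
        print(f"  {total:3d} {'#' * (50 * counts[total] // peak)}")
```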

Then, from No 21 in the series, we may bring out thoughts on the two types of chance:

Chance:

TYPE I: the clash of uncorrelated trains of events, such as is seen when a dropped fair die hits a table etc. and tumbles, settling to readings in the set {1, 2, . . . 6} in a pattern that is effectively flat random. In this sort of event, we often see manifestations of sensitive dependence on initial conditions, aka chaos, intersecting with uncontrolled or uncontrollable small variations, yielding a result predictable in most cases only up to a statistical distribution, which need not be flat random.

TYPE II: processes — especially quantum ones — that are evidently random, such as the quantum tunnelling that explains alpha decay. This is exploited, for instance, in Zener noise sources that drive special counter circuits to give a random number source. Such sources are sometimes used in lotteries or the like, or presumably in making one-time message pads used in encryption and decryption.

{Let’s add a Quincunx or Galton Board demonstration, to see the sort of contingency we are speaking of in action and its results, here a roughly normal bell-shaped curve; note how the ideal math model and the stock distribution histogram align with the beads:}

[youtube AUSKTk9ENzg]
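For those who prefer to tinker rather than watch, the same behaviour can be sketched in a few lines of Python; the 12 rows of pins and 5,000 beads are arbitrary choices echoing the video, and the bin counts follow a binomial distribution that already looks convincingly bell-shaped:

```python
# Rough Galton-board (Quincunx) sketch: each bead takes `rows` independent
# left/right steps with probability 1/2, so the final bin counts are binomial.
import random
from collections import Counter

def galton_board(rows=12, beads=5000, seed=1):
    rng = random.Random(seed)
    return Counter(sum(rng.randint(0, 1) for _ in range(rows)) for _ in range(beads))

bins = galton_board()
for k in range(13):
    print(f"bin {k:2d}: {'#' * (bins[k] // 20)} ({bins[k]})")
```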

Why the fuss and feathers?

Because, given some of the latest obfuscatory talking points, it is now important to state clearly enough what design thinkers are talking about when we refer to “chance.” So, bearing the above in mind, let us look afresh at a flowchart of the design inference process:

[Figure: the explanatory filter flowchart]

(So, we first envision nature acting by low contingency mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 m/s^2 (equivalently, 9.8 N/kg) or so. That is the first default. By contrast, high contingency knocks out the first default: under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, (iii) high complexity or information carrying capacity, usually beyond 500 – 1,000 bits. And for more context you may go back to the same first post, on the design inference. And yes, that will now also link this for an all-in-one-go explanation of chance, so there!)
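The per-aspect logic of the filter can be sketched in a few lines of Python; the dictionary fields and predicates below are illustrative assumptions standing in for the empirical judgements just described, and only the 500-bit threshold is taken from the discussion above:

```python
def explanatory_filter(aspect, threshold_bits=500):
    """Toy sketch of the decision logic: necessity, then chance, then design."""
    if not aspect["high_contingency"]:
        return "mechanical necessity (law)"      # first default
    if aspect["tightly_specified"] and aspect["information_bits"] >= threshold_bits:
        return "design"                          # chance default knocked out
    return "chance"                              # second default

# Illustrative use: a highly contingent, tightly specified, 500-bit aspect
print(explanatory_filter({"high_contingency": True,
                          "tightly_specified": True,
                          "information_bits": 500}))   # -> design
```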

Okie, let us trust there is sufficient clarity for further discussion on the main point. Remember, whatever meanings you may wish to inject into “chance,” the above is more or less what design thinkers mean when we use it — and I daresay, it is more or less what most people (including most scientists) mean by chance in light of experience with dice-using games, flipped coins, shuffled cards, lotteries, molecular agitation, Brownian motion and the like. At least, when hair-splitting debate points are not being made.  It would be appreciated if that common sense based usage by design thinkers is taken into reckoning. END

Comments
wd400: You may have missed my prior comment in the thread, so thought I would ask again: If OOL theorists don't think that functional proteins result from amino acids bumping into each other, then how do they think functional proteins came along?Eric Anderson
January 4, 2014 at 11:14 AM PDT
F/N: Notice how the objections about how vague "chance" is have suddenly vanished? Without any acknowledgement that there is a point here? As in, the zero acknowledgements, zero concessions, zero apologies tactic in action. Mentally bookmark that tactic, for future reference. And of course, let us remember the triple-context:
1: The key darwinist claim that chance variation yielding varieties [CV], less differential reproductive success of varieties [DRS], leads to incremental descent with modification [IDWM], thence branching tree evo at micro and macro . . . body plan . . . levels [BTE, m&M], thence the Darwinist tree of life [DTOL]:
CV - DRS --> IDWM --> BTE, m&M --> DTOL
2: The explanatory filter challenge on the limits of chance variation in finding islands of function in large config spaces. For, in the intervening large non-functional spaces, there is no function to give advantage, so no attractor, no slope to draw to function without foresight. Where also, the requisites of many properly organised, matched parts to achieve function naturally lock function to rare islands in the space of possible configs. (Which is of course common experience; those imagining a vast continent of incrementally accessible function need to show us demonstrations of that extraordinary claim . . . which of course they have not.)
3: That any case of complex organisation is informationally equivalent to a coded describing string that sets out its nodes and arcs, so coin flipping is WLOG. So also, viewing our solar system's 10^57 atoms as coin flipping observers doing a flip and view exercise on 500 coins each every 10^-14 s, for 10^17 s, will dominate any realistic chance hyp in ability to search out the space for 500 coins, 3.27*10^150 configs from TTT . . . through THTH ... to HHHHH . . . but will only be able to sample as one straw to a cubical haystack as thick as our galaxy at its central bulge. Superpose the stack on our stellar neighbourhood and blindly pick a one straw sample. On needle in haystack grounds, with all but absolute certainty, you will pick nothing but straw. Straw -- non function -- just plain utterly dominates the config space, never mind that there are lots of zones of interest in it. Blind search is strictly limited in its capability, and is far surpassed by intelligent imagination and creativity. Just as, the text for this comment was not produced by a blind needle in haystack search.
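A few lines of Python reproduce the arithmetic behind this claim; the 10^57 observers, the 10^-14 s update time and the 10^17 s duration are the figures just given, and the conclusion is simply their ratio to the 2^500 possible configurations:

```python
# Back-of-envelope check of the sampling fraction quoted above.
space = 2 ** 500                      # ~3.27e150 possible 500-coin configurations
samples = 10**57 * 10**14 * 10**17    # observers x observations/s x seconds = 1e88
print(f"config space  ~ {space:.3e}")
print(f"total samples ~ {samples:.3e}")
print(f"fraction sampled ~ {samples / space:.1e}")   # about 3e-63 of the space
```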
The shoe pinches a bit on the other foot, nuh. KFkairosfocus
January 3, 2014 at 02:04 AM PDT
Eric #59, Thank you Eric. I think the opponents of design are trying to defeat the observations by ignoring them. At least that appears to be WD's strategy. :(Upright BiPed
January 3, 2014 at 12:06 AM PDT
wd400:
As I say, I can think of several chance mechanisms that stand a good probability of creating a string equivalent to 500H, a random walk being one such.
I for one would personally be very interested to know what mechanisms you have in mind. Can you describe one for us that would create a complex, specified string? (Keeping in mind, that 500H is not complex and is most likely explained as a result of necessity, so we really need to talk about a complex specified string.)Eric Anderson
January 2, 2014 at 09:28 PM PDT
UB @46: Thanks for always keeping things focused on the heart of the matter. The desire to squirm out of the discussion of information and symbols and into vague, general assertions of things like "natural selection" is very tempting. Keep up the pressure. ----- And KF, thanks for a valuable post.Eric Anderson
January 2, 2014 at 08:28 PM PDT
Even if the lower edges of the two balls were at identical heights at the point of release, if we're talking about a lag of 2 inches, there is also the question of the center of gravity of the objects in question. Presumably the edge of the cannon ball is 2 or more inches from its center of gravity. I don't know if it would have an impact over that short of a distance, but it could. As could air resistance, updraft, crosswinds, etc. Which is why subsequent experiments with the feather are conducted in a vacuum.Eric Anderson
January 2, 2014 at 08:22 PM PDT
F/N: The wiki cite is accurate, at least to the book -- went to the local library which I knew had a copy, read esp pp 19 - 20. Looks like in old age Galileo told a student of the incident, and it has been passed down. There are many back-forths over the issue involving inter alia problems on releasing the balls at the same time. KFkairosfocus
January 2, 2014 at 08:06 AM PDT
Joe and EA: Of course I did not ignore NS, but rephrased it in more descriptive, accurate terms that show that this is just another way of saying less fit varieties die out, so NS actually subtracts info, if anything. This leaves chance variation as the source of info, the only source. With the hope that there is that easy fast incremental path up the back slope of Mt Improbable. The real problem is that this is a description of micro evo, grossly extrapolated to macro evo without empirical observational warrant for a vast continent of function that can be achieved incrementally in a branching tree pattern. Hence the significance of missing fossils of transitionals as a dominant feature of the record and instead the pattern of sudden appearance, stasis, disappearance. Not to mention molecular stuff such as singleton proteins in isolated fold domains etc. Then we have the problem of the pop genetics and the pacing of the process. But all of this then comes back to: what is chance capable of in given time and scope of atomic materials? Where the limit of 500 bits comes from the solar system's 10^57 atoms, each making observations of 500 coin toss exercises every 10^-14 s. For 10^17 s. Sampling 1 straw to a 1,000 LY on the side cubical haystack. Effectively zero sample relative to the task of finding isolated narrow zones or islands of function. But the objectors look like they will go down with the ship of denial, never mind the implications of the thought exercise of a microjet assembled in a vat that shows why FSCO/I will be very rare in a realistic config space. Nah, it's just a play of words -- nope, it is a gedankenexperiment -- and we are not interested anyway. Well it seems there may have been a real demo of dropped musket and cannon balls at Pisa, and the objectors whose side predicted a 60 ft lag of the lighter behind the heavier ball then crowed as to how a 2 inch lag meant they could dismiss Galileo's overall point. H'mm . . . KFkairosfocus
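To put a number on the haystack metaphor in the preceding comment: the sketch below assumes, purely for illustration, that a straw occupies about one cubic centimetre; everything else comes from the figures already quoted.

```python
# How big a haystack would make the idealised sample (~1e88 configurations
# out of 2^500) equal to one straw?  The 1 cm^3 straw is an assumed size.
LIGHT_YEAR_CM = 9.46e17
samples = 10**57 * 10**14 * 10**17        # ~1e88 idealised observations
space = 2 ** 500                          # ~3.27e150 configurations
straws_needed = space / samples           # haystack must hold this many straws
side_cm = straws_needed ** (1 / 3)        # cube side, at 1 cm^3 per straw
print(f"straws needed ~ {straws_needed:.1e}")
print(f"cube side ~ {side_cm / LIGHT_YEAR_CM:,.0f} light years")  # order of 1,000 LY
```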
January 2, 2014 at 07:11 AM PDT
wd400:
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn’t fall out of a “prebiotic soup” as a result of amino acids bumping into each other.
Umm natural selection doesn't do anything so there isn't anything to ignore.Joe
January 2, 2014 at 06:17 AM PDT
wd400 @23:
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn’t fall out of a “prebiotic soup” as a result of amino acids bumping into each other. But no one (that I know of) thinks that would happen.
Then what do they think would happen?Eric Anderson
January 1, 2014 at 05:32 PM PDT
PS: A second place to look on singletons and implications. KFkairosfocus
January 1, 2014 at 05:30 PM PDT
F/N: A first place to look on isolation of functional forms in protein sequence space: Axe, 2004. KFkairosfocus
January 1, 2014 at 05:16 PM PDT
wd400 pondered in 19
It is probably true that pi, and in fact most numbers, contain all possible numeric sequences. It's not possible to prove it, though,
You missed my point. Just as all infinities are not the same, I'm wondering whether all irrational numbers are not the same as well. If I pick a random point in space, its coordinates will be irrational numbers (with overwhelming probability). Since my picking the point was randomized by my unsteady hand, it seems reasonable to assume that the digits are also random, and will contain all finite numeric sequences. However, numbers such as pi and the square root of 2 are computed, and might not behave the same way. Before you dismiss this idea as silly, remember that pi varies with the curvature of space and the size of the circle. I can easily imagine a curvature with a circle of given size in which the measured value of pi is exactly 3.00000... and if the circle was large enough, pi could be as small as 2.00000... Because we're living in gravitational energy wells, the measured value for pi will vary depending on the direction that the diameter is measured. -QQuerius
January 1, 2014 at 04:59 PM PDT
I did read the short play about nanobots. Singleton protein folds don't prove proteins form unreachable islands in sequence space - that's croco-duck thinking.wd400
January 1, 2014 at 02:49 PM PDT
UB: I think your contributions are very valuable and that in an information age more and more will see the point. I also suspect that for people who make the error on the meaning of reification we have been seeing, "abstract" is another word for not real, i.e. they are materialists of the crudest sort, or are confused by that self-refuting scheme of thought. KFkairosfocus
January 1, 2014 at 01:46 PM PDT
PS: And I don't need to show the case for proteins, just look up singleton protein fold domains and the like. As I noted already, there are many such islands of function that can be shown to exist in the biological world. I gave the example of a vat and a jet with a million 1 micron parts, in order that we can have something more amenable to analysis. It may smoke your calculator to try to work it out, but you can work out the number of ways 10^6 parts can be arranged among 10^18 cells. You can then multiply by possible arrangements of orientation etc. (at a simplistic level we can see a cube having six orientations in a grid . . . and for each of those the whole may be different . . .), and types of parts of various kinds etc. All of these will simply add to the point already made.kairosfocus
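For anyone who wants to attempt the calculation without smoking a calculator, here is a rough Python sketch using logarithms; it counts only placements of identical parts among cells, ignoring orientations and part types, which as noted above would only increase the number:

```python
# Rough estimate of the number of ways to place 10^6 parts among 10^18 cells,
# i.e. the binomial coefficient C(10^18, 10^6), via log-gamma.
from math import lgamma, log

N, k = 10**18, 10**6
log10_ways = (lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)) / log(10)
print(f"log10(ways to place the parts) ~ {log10_ways:,.0f}")  # about 12.4 million digits
```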
January 1, 2014 at 01:42 PM PDT
WD: Did you actually read the part of the post in which I explained by instructive example, of a vat with diffusion vs nanobots? KFkairosfocus
January 1, 2014 at 01:33 PM PDT
Upright, I hope you feel better having made this contribution.
Yes, I did. Thanks.
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that not an abstraction.
All symbol systems operate in material objects, regardless of their provenance or purpose. Biology is no different. The fact remains that biology requires physical effects to be produced from the translation of recorded information. The cell cannot be organized without it (and Darwinian evolution would not exist). This phenomenon requires a unique set of material conditions, which do not occur anywhere else in the physical world except during the translation of recorded information. And the central feature of those conditions requires a local discontinuity to be instantiated within the system between the medium of the information and the effects produced by its translation. The translation of information during protein synthesis (like any other form of recorded information) is not reducible to the material that makes up the system. It can’t be or the system would not function. Therefore, far from obscuring the issues, it is pointless to talk about the origin of biology without those facts (i.e. the systems material requirements) on the table.Upright BiPed
January 1, 2014 at 11:55 AM PDT
And there you go: "That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation." You assume that which you need to establish. If you could show that protein functions are "isolated islands" in sequence space you wouldn't need any of this information stuff to show evolutionary theory as it stands can't explain the biological world. It would be bloody obvious. That's the question you want to get at, but to do that you'd need to do some biochemistry (and not Axe's protein croco-ducks). Anyway, I'm pretty sure we are wasting each other's time now. Nothing will dissuade you from the idea that you are right about this, and I see nothing of interest in your ideas.wd400
January 1, 2014 at 10:42 AM PDT
WD: The constraint is not the specific probability distribution, until it becomes so biased that it effectively isn't chance anymore but mechanical necessity or programmed necessity -- loading on steroids. The challenge is that you are blindly (needle in haystack) sampling from a very large config space . . . much worse than the toy example given. Where, as the old physicist's joke about two drunks looking for lost contacts under a lamp puts it, after a time A says to B, are you SURE you lost your contacts here? B says, no, they were lost over there in the dark but this is where the light is. This is usually told on a case like why we study ideal gases, or the like instead of more realistic cases. The complexity shoots way up real fast. (E.g. I did a very rough cut approximate calc for just the 50-50 distribution, assuming my tired eyes calc has no material errors, about 2 * 10^147, out of 3.27*10^150, and the nearby 100 or so +/- 50 is going to catch most of the rest. By contrast, we were looking to get one of five possibilities. Effectively, zero.) That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation. But at this point, you are simply turning yourself into yet another study in the art of avoiding facing the material point. which is that there are very good needle in haystack grounds for seeing why special clusters deeply isolated in vast config spaces are not going to credibly come up by blind chance. Essentially for the same reasons why systems free to move at micro level so strongly tend to entropy maximising equilibrium clusters. Maybe I am a glutton for punishment as the Bajans say, I will give a short outline of a case that brings to bear relevant factors, via a closer to reality toy. This updates an example from my note, app 1. We take a 1 cu m vat, with some unspecified fluid, good enough that layering effects a la Boltzmann's atmospheric distribution will not be relevant.
Now, imagine a micro-jet aircraft, made up from some 10^6 one-micron cubical parts that have to be properly arranged for it to be flyable. Decant the parts into the vat, so that 10^6 of the 10^18 one-micron cells are occupied by parts. Diffusion and Brownian motion naturally occur, scattering the parts at random. (a: Why is that?) Now, let us think:
b: Would it be plausible that the parts will ever in our observation reassemble themselves in a clump by chance? ANS: No, as the forces at work are such that the number of scattered arrangements or configurations so vastly exceeds clumped ones that the clumped ones just will not be likely. Much more likely will be small clumps.
c: What if we introduced an army of nanobots that cooperatively work to clump and encapsulate parts? ANS: Much more likely to work, and we can see how the entropy of the parts would now be vastly reduced. But still, the likelihood of a flyable jet would be small, as the number of ways to arrange 1 mn parts vs ways that are flyable will again be in utter disproportion, just not as bad as before.
d: Now, pour in more nanobots, which are programmed to catalogue the clumped parts and rearrange them into a flyable jet according to a blueprint. Would this have a reasonable prospect of success? ANS: Obviously yes, showing the power of intelligent guidance in accord with a design.
e: What if we had more complex nanobots capable of doing both jobs at once? ANS: They could do the job too. And on the nature of entropy, the direct reduction would be similar to the summed reductions to clump then configure to flyable condition.
f: Does this illustrate that FSCO/I will strongly tend to be deeply isolated in the config space of possible scattered or even clumped parts? ANS: Yes, as though there may be many ways to configure parts to flyable condition, the number of ways parts not constrained by that condition could be arranged is much, much higher. Similarly, the number of ways the 10^6 parts could be scattered at random by diffusive and Brownian motion forces across the vat's 10^18 cells is even utterly higher yet.
g: So is a monkey at keyboards or needle in haystack or trying to find islands in a great ocean analogy reasonable? ANS: Obviously yes, as illustrations.
h: What about arrangements of 500 coins in a string? ANS: Any 3-d config can be described by a sufficiently long string that specifies nodes, arcs between them, and orientations, couplings if necessary, as say AutoCAD shows. So, a coded string of sufficient length is informationally equivalent to the parts in the vat. But as we are dealing with 10^6 parts amidst 10^18 possible locations, we have strings much, much larger than 500 bits to contend with. As AutoCAD drawing file sizes tell us. Thus, the 500 coin string thought exercise is a much simpler but directly relevant exercise. And, if the system is such that certain preferred clusters of configs become very probable without something that makes such probable . . . nanobots, something is fishy. In effect you are saying that allegedly random diffusive forces are not, they are carrying out what the nanobots would do based on programming.
I trust the point is clear enough. Organising work demands explanation on forces credibly able to carry such out within the atomic and temporal gamut of the solar system or the observed cosmos. The only such empirically warranted force capable of generating FSCO/I is design. If you wish to deny this, kindly provide empirically grounded warrant. On my side the very posts in this thread, which are bit string equivalent, are cases in point. The only credible explanation for a post in English that makes sense in context of the thread is design. So are the computers we are using, which as we saw WLOG reduce to strings, informationally. And so forth. KFkairosfocus
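As a quick cross-check of the rough 50-50 count estimated above, Python's exact integer arithmetic gives 500!/(250!)^2 and its share of the 2^500 possibilities; the exact value comes out larger than the rough cut, which only strengthens the point that near-50-50 outcomes dominate:

```python
# Exact count of the "exact 50-50" 500-coin outcomes and their share of 2^500.
from math import comb

exact_5050 = comb(500, 250)          # 500!/(250!)^2
space = 2 ** 500
print(f"C(500,250) ~ {exact_5050:.3e}")                        # ~1.17e149
print(f"fraction of all outcomes ~ {exact_5050 / space:.3f}")  # ~0.036
```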
January 1, 2014 at 09:59 AM PDT
And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/[(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities; in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72 character string of text in reasonable English. Very large but absolutely overwhelmed by the nonsense strings in no particular order.) If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step.
It's interesting, isn't it, that all your examples have very specific chance hypotheses associated with them. You'd get different behaviour with different chance mechanisms (skewed distributions with loaded dice, endlessly increasing variance with random walks...). What I don't follow is your leap from these well defined examples to all chance hypotheses. As I say, I can think of several chance mechanisms that stand a good probability of creating a string equivalent to 500H, a random walk being one such.wd400
January 1, 2014 at 08:55 AM PDT
KF: Happy new year to you and to all! :)gpuccio
January 1, 2014 at 02:06 AM PDT
F/N: BTW, 10^57 atoms flipping coins and having tables to flip them on, would probably just about exhaust the atomic resources of the observed cosmos. KFkairosfocus
January 1, 2014 at 02:00 AM PDT
A happy new year to you GP and to you all. There have been some very interesting turn of new year discussions here at UD, I link one in the above, too.kairosfocus
January 1, 2014 at 01:58 AM PDT
wd400: In case you are interested, exactly two years ago I started a long exchange with Elizabeth Liddle about how it is possible to model the neo darwinian algorithm, including both the RV and NS part, and using dFSCI. You can find that discussion here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ It starts more or less at post 137, with my "happy new year" for 2012 (not relevant to the discussion :) ), and goes on for post 223. The most relevant part starts at post 194. In brief, I try to define exactly how the random part of the algorithm can be evaluated in specific hypotheses, how dFSCI can be computed for a specific protein, and how explicit paths of NS would modify the computation. I discuss also the non relevance of genetic drift to the computation of probabilities, which is a point often misunderstood. If you are interested in discussing any aspect of that, I am here.gpuccio
January 1, 2014 at 01:43 AM PDT
WD: Pardon, but the notion that you must calculate the specific probabilities of a particular hypothesis before being able to see it as implausible is both a now common objecting talking point to the design inference and one that has been a failure from the outset. Whether or no you are independently putting it up, that remains the case. The basic fallacy in it is the underlying assumption that chance can account for anything given enough time and resources. In my student days, a common way to put it by evolutionists was to recite what seems to be an urban legend that in a debate with Bishop Wilberforce (or the like) Huxley gave the example that enough monkeys typing for enough time could come up with the text of Shakespeare's plays, or just that of Hamlet. This has recently been multiplied by the idea that since every individual configuration of 500 tosses of a fair coin is equiprobable, we should be no more surprised to see (i) 500 H (or the same with tails), or (ii) alternating H & T, or (iii) the ascii code for the first 72 characters of this post, as for any other. (NB: This last shows how coin tossing is informationally equivalent to the monkey typing exercise.) Nonsense. What happens is that we have clustering of sets of possible outcomes, so that we are interested above in a set of just five possibilities . . . or something substantially similar. Everything else in the config space of 2^500 ~ 3.27*10^150 possibilities will not be an event from our clustered zone of interest. And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/ [(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities, in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72 character string of text in reasonable English. Very large but absolutely overwhelmed by the nonsense strings in no particular order.) If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,0000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step. Clearly -- with the Quincunx, you can WATCH it happen -- there are some net outcomes that are vastly improbable relative to others. Some zones of interest [notice the neat little columns . . . ] are much less likely to be hit by a random chance process than others. Empirical fact, easily seen. (And, the fact that stock returns also fit the pattern pretty well, should give warning about investment schemes that effectively promise the heavens. Unless you have very good internal knowledge and warrant, such schemes should be viewed as too good to be true. But if your name is Bill Gates c. 1980, you already know the profs have nothing to teach you . . . even if very few will believe you. 
[Give me your address to bill you for advice you need to heed financially even if you refuse to heed it on your worldviews commitments, where whether or no you believe it, as Pascal -- a founder of probability theory -- warned so long ago now, you are wagering your soul.]) The 500 coin toy challenge therefore gives us a picture that is reasonable: finding zones of interest in large config spaces. Just like, how Shakespeare's entire corpus is in that space, 72 characters at a time. So, it should be "simple" for our imaginary monkeys flipping coins or sitting at keyboards to hit the right keys or flips, nuh? We shouldn't be surprised then to see the text of Hamlet by happy chance! Rubbish. Patent absurdity. For the very same reason why Bill Gates and co paid programmers to intelligently design their operating systems and office software instead of running banana plantations and running an operation based on millions of monkeys busy at keyboards or at coin flipping. The needle in haystack search challenge makes nonsense of such a notion. (Cf. here earlier in the ID Foundations series on monkeys, keyboards and search space challenges. Please read the onward linked 2009 Abel paper on the universal plausibility bound. This addresses islands of function as implied by the requisites of getting multiple components to be properly matched, fitted together and organised to achieve function. Also, protein synthesis can be seen as a clear example of an algorithmically controlled information based process with Ribosomes as NC assembly machines, here on in context. Ask yourself, if proteins in functional clusters can so readily form and work in Darwin's pond or the like, why then is the cell such a Rube Goldberg-ish contraption, going the looooong way around to do what should be ever so simple? Or, is it a case where this is the type of factory it takes, much as we see in a pharmaceuticals plant that makes in much less elegant and far more resources intensive ways, bioactive compounds. Or, in an aircraft assembly plant.) Let's go back, to my remarks at 21 above to see what happens when we substitute for monkeys the 10^57 atoms of our solar system working as impossibly fast 500-coin flippers [500 coins flipped every 10^-14 s, as fast as fast ionic rxns . . . organic rxns are orders of magnitude slower], for 10^17 s:
The issue is NOT what distribution can we construct and “mathematicise” over. That is irrelevant when we run into FSCO/I — due to the need for the right parts in a proper config to work — forcing small zones in the space of possible configs, and the scope of the config space being such that no search based on atoms being able to sample a fraction appreciably different from zero. Sometimes, there is just too much haystack, and too few, too isolated search resources to have hopes of finding needles. For 500 bits and the gamut of the solar system, we can set up each of 10^57 atoms as a searching observer and give it a string of 500 coins to watch, updating every 10^-14 s, as fast as ionic chem rxns,for 10^17 s . . . a typical lifetime estimate. Impossibly generous, but the result is that the sample to the space of 3.27 * 10^150 possibilities for 500 bits, is as a one straw sample to a cubical haystack 1,000 light years thick, about as fat as our galaxy’s central bulge. Effectively no sample of a size plausibly able to find reasonably rare clusters of configs. Superpose on our galactic neighbourhood and you can predict the result with all but certainty: straw. Doesn’t matter the precise distribution, unless it is in effect not chance at all but a directed search or a programmed necessity. Which would point straight to a design by fine tuning. Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped for magic out of “natural selection” — which is really a distractor as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) — is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror image molecules have the same energy dynamics. A toy example that gives an idea of the challenge is to think of a string of 500 fair coins all H, or alternating H and T or coins with the ASCII code for the first 72 characters of this message. No plausible blind chance process is going to get such in any trial under out observation, with all but certainty. For the overwhelming bulk cluster of outcomes of coin tossing or blindly arrived at configs will be near 50-50 H and T in no particular pattern . . .
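As a numerical illustration of the clustering claim in the toy example just quoted, the following Python sketch compares the near-50-50 bulk of the 500-coin space with the five singled-out target configurations, using the idealised search figures quoted above (10^57 observers, one observation per 10^-14 s, for 10^17 s):

```python
# Near-50-50 outcomes dominate the 500-coin space; the 5-member target zone
# remains effectively out of reach even for the idealised solar-system search.
from math import comb

space = 2 ** 500
near_5050 = sum(comb(500, k) for k in range(200, 301))   # within +/-50 heads of 250
print(f"share of outcomes within +/-50 heads of 250: {near_5050 / space:.6f}")  # ~0.99999

samples = 10**57 * 10**14 * 10**17    # ~1e88 idealised observations
target = 5                            # the five singled-out configurations
print(f"expected hits on the target zone: {samples * target / space:.1e}")      # ~1.5e-62
```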
Why so much stress on a toy example? Let me cite the clip from Wiki's Infinite Monkeys article in IDF # 11:
These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.
That is, we are dealing with that which is empirically effectively impossible of observation on the gamut of our solar system, or by extension to just 1,000 bits or coins, our observed cosmos. Our ONLY observed cosmos . . . to try to drag in a speculative multiverse at this point is to traipse into philosophy, which means we have a perfect right to demand a full bore comparative difficulties analysis across major families of worldviews, cf. here on in context. Now, in Darwin's little pond or the like, we will have racemic [near 50-50] mixes of various relevant molecules, and a great many others that are not so relevant, plus water molecules that will readily hydrolyse and break up chains. We are dealing with a challenge of functionally specific complex organisation and associated information [FSCO/I] that dwarfs the challenge of aircraft assembly or building a pharmaceuticals plant. That can be readily seen from the biochem flowchart for the metabolic pathways of the cell, e.g. here. The point here is that a complex, functionally specific organised entity has an implicit wiring diagram "blueprint" that can be represented as a chain of coded bit strings. Just ask the makers of AutoCAD. And just ask anyone who has had to do the development of a microprocessor controlled system from the ground up, hard and soft ware, whether he would trust monkeys at keyboards or flipping coins to solve the organisation challenge involved . . . as one such, I readily answer: no, thank you. Worse, the key entity in view includes an additional facility, a code based von Neuman kinematic self replication function. As Paley long ago pointed out in his Ch 2 example that somehow almost always gets omitted in the rush to dismiss his point, a watch that is self replicating is even more evidently a wonderful contrivance than a "simple" time-telling watch. That means -- surprise (NOT) -- the coin flipping monkeys and atoms toy example ALSO covers the origin of such organisation. And so, we begin to see the magnitude of the challenge. Which, Denton aptly summarised:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . . Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated . . . . [Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331.]
In short, it should be clear why the living cell is an information rich system, and why its claimed spontaneous origin -- a hugely contingent process -- falls under the coin-flipping exercise challenge. And if you imagine that I exaggerate the degree to which the various schools of thought on OOL have deadlocked to mutual ruin [and if you imagine you can dismiss me as wanting to get to a living cell in one step . . . ], let me cite an exchange from a few years back between two major spokesmen for the genes- first and the metabolism- first schools:
[Shapiro:] RNA's building blocks, nucleotides contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
In short, RNA components can be hard to synthesise, and chains are vulnerable to hydrolysis through which water molecules split up the chain. Worse, to get them to functionally chain to relevant lengths and code for, then join up with functional proteins on say a clay surface then inside a suitable protective membrane, is problematic. And, it is similarly problematic to get the clusters of functional bio-molecules to carry out typical metabolic processes. No wonder, then, that science writer Richard Robinson has noted that neither main model -- despite what we may sometimes read to the contrary in more popular writings or see and hear in "science news" articles (or even, sadly, textbooks) -- is "robust." The "simple" cell ain't, and the "simple" pathway up from an autocatalytic reaction set or some spontaneously formed RNA or some cometary debris etc., etc. isn't. Functionally specific, complex organisation and associated information are not going to be had on the cheap. Just ask Bill Gates. There is no free lunch. And, that is why in the book of that name, Wm A Dembski, UD's founder, went on record:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ”
There is no free lunch WD, and no chance hypothesis that genuinely is a chance driven hyp is going to do better than the 10^57 500-coin flipping atoms of our solar system at it for 10^17 s. Such an ideal case is woefully inadequate to sample better than one straw to a cubical haystack 1,000 LY across, of the 3.27*10^150 possibilities for just 500 coins, 72 or so ASCII characters worth of info. Which as an experienced programmer you know can do very little by itself. The genomes for living forms start out at 100 - 1,000 kbits, and easily go on into the billions, with major body plans requiring increments of 10 - 100+ billions, dozens of times over. The resources to do that incrementally in realistic populations just are not there, which is the message of the 10^57 coin-flipping atoms. So, you need to squarely face the implications of Lewontin's a priori materialism, and how it biases the ability to address the real challenge of origins of the world of life:
the problem is to get [the general public] to reject irrational and supernatural explanations [--> note the implicit bias, polarising rhetoric and refusal to address the real alternative posed by design theory, assessing natural (= chance and/or necessity) vs ART-ificial alternative causes on empirically tested reliable signs] of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997. if you think this is "quote mined," I suggest you read the fuller cite and notes here.]
Philip Johnson's retort in Nov that year was richly deserved:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
WD, please, go flip 500 coins for a bit and see what happens, watch the Quincunx in action, and then ponder the thought exercise of 10^57 coin flipping atoms. Then, ponder what that is telling us about the credible -- and only observed -- source of FSCO/I, intelligence. Then, reflect on the FSCO/I in the living cell and in the many complex body plans for life forms, and ask yourself whether you REALLY have empirical observational evidence that backs the claim that blind chance and mechanical necessity suffice to explain what you see, and why. Enjoy the new year. KFkairosfocus
January 1, 2014 at 01:38 AM PDT
Yes. My point was you don't need to calculate a precise number but you do have to consider the specific chance hypothesis. The probability of a given protein function is very different under evolutionary mechanisms than by atoms bumping into each other, surely?wd400
December 31, 2013 at 08:40 PM PDT
wd400: I said nothing about CSI or evolution with natural selection. You said you don't see how one can make a claim about the plausibility of a thing unless they can compute the probability. My example was to show that it is indeed possible to know a thing is implausible without formally computing the probability. If you found the verbatim text of War and Peace written out in molecules in the DNA of an ancient ant embalmed in amber (tip of the hat to Dr. Liddle), is it a plausible hypothesis that it got there by any non-intelligent process? Can you compute the probabilities? Knowing the formal probability is not necessary to reach a reasonable finding of the plausibility of a hypothesis.William J Murray
December 31, 2013 at 08:32 PM PDT
It's certainly true that you have to find the right frame in which to study questions in biology. You'd get nowhere trying to study ecological interactions if you started with molecular biology. The point I didn't quite make is that very often physicists/engineers/software people coming to biology mistake the map for the territory. As I say, if you want to study the origin of life then you need to do chemistry.wd400
December 31, 2013 at 08:25 PM PDT
WD400 It seems to me that the root of biology is basically abstract chemistry (protein synthesis, etc.), and the resulting ecology and economy of nature that emerges is abstract. How can you do biology and avoid abstractions?littlejohn
December 31, 2013 at 08:05 PM PDT