
ID Foundations, 1a: What is “Chance”? (a rough “definition”)


Just what is “chance”?

This point has come up as contentious in recent UD discussions, so let me clip the very first UD Foundations post and look at a paradigm example: a falling, tumbling die:

A pair of dice showing how 12 edges and 8 corners contribute to a flat random distribution of outcomes as they first fall under the mechanical necessity of gravity, then tumble and roll influenced by the surface they have fallen on. So, uncontrolled small differences make for maximum uncertainty as to final outcome. (Another way for chance to act is by quantum probability distributions such as tunnelling for alpha particles in a radioactive nucleus)

As an illustration, we may discuss a falling, tumbling die:

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

{We may extend this by plotting the (observed) distribution of dice . . . observing with Muelaner [here] how the sum tends to a normal curve as the number of dice rises:}

How the distribution of values varies with number of dice (HT: Muelaner)
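{For readers who want to check this for themselves, here is a minimal Monte Carlo sketch in Python: a single die gives a nearly flat distribution, while sums of several dice pile up toward the bell curve. The trial counts and bar scaling are illustrative choices, not anything from Muelaner's plot:}

```python
# Monte Carlo sketch: one fair die is (nearly) flat; the sum of several
# dice piles up into a bell shape, as the central limit theorem predicts.
import random
from collections import Counter

random.seed(1)  # fixed seed so the sketch is repeatable

def roll_sum(n_dice: int) -> int:
    return sum(random.randint(1, 6) for _ in range(n_dice))

for n_dice in (1, 2, 5):
    counts = Counter(roll_sum(n_dice) for _ in range(60_000))
    print(f"--- sum of {n_dice} dice ---")
    for total in sorted(counts):
        print(f"{total:3d} {'#' * (counts[total] // 400)}")
```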

Then, from No 21 in the series, we may bring out thoughts on the two types of chance:

Chance:

TYPE I: the clash of uncorrelated trains of events, such as we see when a dropped fair die hits a table and tumbles, settling to readings in the set {1, 2, . . . 6} in a pattern that is effectively flat random. In this sort of event, we often see manifestations of sensitive dependence on initial conditions, aka chaos, intersecting with uncontrolled or uncontrollable small variations, yielding a result predictable in most cases only up to a statistical distribution, which need not be flat random.

TYPE II: processes — especially quantum ones — that are evidently inherently random, such as the quantum tunnelling that explains alpha decay. This is exploited in, for instance, Zener noise sources that drive special counter circuits to give a random number source. Such sources are sometimes used in lotteries or the like, or presumably in making the one-time message pads used in cryptography.
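{As a side illustration of the "sensitive dependence" ingredient of TYPE I chance, here is the logistic map, a standard chaotic toy model (the die itself is not being modelled; the initial values are illustrative):}

```python
# Sensitive dependence on initial conditions (the "Type I" ingredient):
# two trajectories of the chaotic logistic map x -> 4x(1-x), started a
# hair apart, soon become effectively uncorrelated.
x, y = 0.400000000, 0.400000001  # initial conditions differ by 1e-9
for step in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
```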

{Let’s add a Quincunx or Galton Board demonstration, to see the sort of contingency we are speaking of in action and its results . . . here in a normal bell-shaped curve, note how the ideal math model and the stock distribution histogram align with the beads:}

[youtube AUSKTk9ENzg]
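{A quick way to imitate the Galton Board is a few lines of Python: each bead makes 12 left/right bounces, and the slot counts pile up into the familiar bell shape. A sketch, with row and bead counts chosen for illustration:}

```python
# Galton board sketch: each bead bounces left or right at each of 12
# pin rows; the final slot is the number of rightward bounces. The
# binomial pile-up approximates the normal curve seen in the video.
import random
from collections import Counter

random.seed(2)
ROWS, BEADS = 12, 20_000
slots = Counter(sum(random.choice((0, 1)) for _ in range(ROWS))
                for _ in range(BEADS))
for slot in range(ROWS + 1):
    print(f"slot {slot:2d} {'#' * (slots[slot] // 100)}")
```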

Why the fuss and feathers?

Because stating a clear enough understanding of what design thinkers are talking about when we refer to “chance” is now important given some of the latest obfuscatory talking points. So, bearing the above in mind, let us look afresh at a flowchart of the design inference process:

[Flowchart: the explanatory filter for the design inference]

(So, we first envision nature acting by low-contingency mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. For more context you may go back to the same first post, on the design inference. And yes, that post will now also link this one, for an all-in-one-go explanation of chance, so there!)
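(To make the filter's decision order concrete, here is a minimal sketch in Python. The three predicates and the default 500-bit threshold are placeholders the analyst must supply; this illustrates the order of the defaults, not a working design detector.)

```python
# Minimal sketch of the explanatory-filter decision order as described
# above. The inputs are placeholder judgments about the aspect under
# study; this is an illustration, not a working design detector.
def explanatory_filter(high_contingency: bool,
                       tightly_specified: bool,
                       info_capacity_bits: float,
                       threshold_bits: float = 500) -> str:
    if not high_contingency:
        return "necessity (law-like regularity) -- first default"
    if tightly_specified and info_capacity_bits >= threshold_bits:
        return "design (complex specified information)"
    return "chance -- second default"

print(explanatory_filter(True, True, 500))   # -> design
print(explanatory_filter(True, False, 500))  # -> chance
print(explanatory_filter(False, False, 0))   # -> necessity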

Okie, let us trust there is sufficient clarity for further discussion on the main point. Remember, whatever meanings you may wish to inject into “chance,” the above is more or less what design thinkers mean when we use it — and I daresay, it is more or less what most people (including most scientists) mean by chance in light of experience with dice-using games, flipped coins, shuffled cards, lotteries, molecular agitation, Brownian motion and the like. At least, when hair-splitting debate points are not being made.  It would be appreciated if that common sense based usage by design thinkers is taken into reckoning. END

Comments
Sal, I write software myself, and have long noted the over-representation of IT folks in ID circles (along with engineers). Biology is always beset by people from other fields who think their own field, be it IT, engineering or physics, provides the best way to do biology. Very rarely, these people learn enough biology to make a genuine contribution. I've yet to see an IDist ITer who falls into that category. We won't agree on that, which is fine, but I prefer to focus on the science rather than the abstraction. Upright, I hope you feel better having made this contribution. William Murray, I'm not someone who cares that CSI conditioned on evolutionary processes can't be calculated. And I can't calculate the specific probability that "just physics" would create a laptop screen. The point is, and it is hard to imagine I'm still trying to make this, you need to know precisely what the chance hypothesis is. If it's evolution by mutation, selection, drift, speciation and the like, then it's a different question than atoms bumping into each other.
wd400
December 31, 2013 at 07:36 PM PDT
wd400 says:
I really don’t see how you can assess the plausibility of a hypothesis without specifically calculating the probability of that hypothesis.
Can you assess whether or not it is plausible that the molecular configuration you are looking at right now (the configuration of the molecules that make up the pixels in your viewscreen) was not generated by an unseen, intelligent agent, but rather was generated by chance (undirected) interactions of chemical properties according to physics? Answer: Yes, you can make such an assessment: it is not plausible. Question 2: Can you specifically calculate the probability that this configuration of molecules could have been generated by chance interactions of chemical properties according to physics? No? Hmmm.
William J Murray
December 31, 2013 at 06:38 PM PDT
WD,
I really think talk of symbols and information almost always obscures rather than helps in these cases.
It's 8 o'clock in my time zone, on New Year's Eve. I have some obscure old jazz on the box; the house looks like an amalgamation of a family deli and a package store, and the Mrs is floating around the house to the music as guests arrive at the door. I obviously don't have time at the moment to address your comment, but suffice it to say, with all due respect to you, you simply have not studied the issue to the level required to comment on it. Had you done so, you would never have said what you just said. Happy New Year to you and yours.
Upright BiPed
December 31, 2013 at 06:17 PM PDT
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that, not an abstraction.
I respect that you feel that way, and that highlights a conflict between the ID and non-ID camps that is not just metaphysical. ID proponents are disproportionately individuals in the IT industry. They see life as an information-processing, software-intensive system. Developmental mechanisms, DNA translation, and regulation are information intensive. Yes, physics and chemistry are involved, just like physics and chemistry are involved in the hardware of a computer, but the software doesn't come from chance and law mechanisms; it comes from intelligence. DNA software is critical to making proteins, and proteins are critical to making DNA, but that becomes a chicken-and-egg problem. The OOL problem is one of building both the hardware and software simultaneously, before chemical degradation blows apart the precursors.
We are talking about biology (and chemistry), so we should focus on that, not an abstraction
We could do that too, and it shows the expected evolution of the chemicals is away from a living organism, not toward one. Look at any dead creature: if it's not devoured and decomposed by other creatures, the experimental and observational expectation is that the collection of dead parts will become even more dead over time -- less likely to ever become living again. If that happens with organisms that were alive, how much less will life arise in a pre-biotic soup. Real chemical evolution is toward death. Like 500 fair coins all heads, the origin of life seems to be at variance with theoretical expectation from basic chemistry and physics. We might ascribe expectation to a chance process or whatever, but OOL seems deeply at variance with expectation. It doesn't mean life is impossible, any more than all-coins-heads is impossible; it just doesn't seem consistent with the expectation of a mindless process.
scordova
December 31, 2013 at 06:04 PM PDT
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that, not an abstraction. The questions for the OOL are about whether metabolic processes can start spontaneously, under what conditions self-replication arises, and what reactions can give rise to "precursor" molecules. We should consider those questions (which, of course, remain largely unanswered).
wd400
December 31, 2013 at 05:45 PM PDT
@ KF A very helpful post indeed. It's amazing how even simple concepts are made outrageously esoteric in the defense of Darwin. Yeesh!
Optimus
December 31, 2013 at 05:35 PM PDT
Selection won't work for OOL. Symbolic information processing must, as a matter of principle, be decoupled from physics and chemistry, much like the symbolism of heads/tails is decoupled from mechanical considerations. That's exactly why all heads stands out as a design pattern: it violates experimental expectation. Symbolic organization in the first life will also violate experimental expectation from a prebiotic soup, on both theoretical and empirical grounds (i.e. dead dogs stay dead dogs). Even granting, for the sake of argument, that Darwinian selection actually works as advertised, it cannot solve the OOL problem, which is quite severe. Of course, our estimates of the distribution could be wrong, but what's wrong with a falsifiable hypothesis? That's a good thing.
scordova
December 31, 2013 at 05:18 PM PDT
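To put numbers on the all-heads example in the comment above, here is a quick check in Python using exact big-integer arithmetic; the ±50 window around an even split is an illustrative choice:

```python
# Exact arithmetic for the 500-coin example: all-heads is one outcome
# out of 2^500, while outcomes near 50-50 dominate overwhelmingly.
from math import comb

N = 500
total = 2 ** N
print(f"P(all heads) = 1 / 2^{N} ~ {1 / float(total):.2e}")

# Probability the head count lands within +/-50 of 250:
near_even = sum(comb(N, k) for k in range(200, 301))
print(f"P(200 <= heads <= 300) ~ {near_even / total:.6f}")
```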
I'm not repeating talking points. I really don't see how you can assess the plausibility of a hypothesis without specifically calculating the probability of that hypothesis. I can think of many chance processes that create a 500 H or T sequence of coins (or equivalent). You seem to be saying no chance hypothesis could ever explain that? You also, as far as I can tell from all the "->" business, fail to grasp that natural selection makes the available sequence space much smaller than the theoretical one.
wd400
December 31, 2013 at 04:30 PM PDT
PS: And simply repeating talking points about all chance hyps is not going to make the challenge go away: your chance-based search process, no matter how powerful, is not going to beat 10^57 observers updating every 10^-14 s for 10^17 s, so it cannot account for even finding islands of function in a space of configs for 500 bits, a toy-sized space compared to that for a genome of 100,000 bases, or increments of 10 - 100+ mn bases. If anyone has been "ignoring," WD, it is you.
kairosfocus
December 31, 2013 at 04:04 PM PDT
WD: Did you observe the following remarks just above?
Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped-for magic out of “natural selection” — which is really a distractor, as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) — is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex, functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror-image molecules have the same energy dynamics . . . . And when it comes to body plans, we should note that to get to such we are going to need jumps of 10 – 100+ million bits of just genetic info, as we can see from genome sizes and reasonable estimates alike. The notion that there is a smoothly varying incremental path from a unicellular ancestor to the branches of the tree of life is not an empirically warranted explanation based on demonstrated capacity, but an ideological a priori demand. Just as one illustration, the incrementalist position would logically imply that transitional forms would utterly dominate life forms; and after observing billions of fossils in the ground, with millions taken as samples and over 250,000 fossil species, the gaps Darwin was embarrassed by are still there, stronger than ever. And, there is no credible observed evidence that blind chance and mechanical necessity on the gamut of our solar system can write even 500 bits of code. For our observed cosmos, move up to 1,000 bits. The only empirically warranted source of such a level of code as 10 – 1000 mn bits is design. And code is an expression of symbolic language towards a purpose, all of which are strong indicators of design.
Kindly explain to me how these constitute IGNORING "natural selection." To highlight:
1 --> In the warm pond or the like, until you have encapsulation and gating, diffusion and cross-reactions will break up reaction sets.
2 --> Want of homochirality will break up the key-lock fitting.
3 --> Until you have a metabolic automaton joined to a code-based self-replicating entity within the encapsulated system, you do not have cell-based life. And the speculative models mutually ruin one another. That is why OOL is empty just-so stories at the popular level and increasing crisis at the technical level.
4 --> No self-replication, no reproduction, and no differential reproductive success leading to subtracting out the less fit varieties.
5 --> There is a tendency to reify "natural selection" and treat it as if it has creative powers. This is an error: that part of the Darwinian model SUBTRACTS info, it does not add it. Differential reproductive success leads to REMOVAL of the less fit.
6 --> Let's write as an expression:
a: chance variation (CV)
b: LESS: less reproductively successful varieties (LRSV)
c: gives incremental descent with modification (IDWM = micro evo)
d: which goes to a branching tree pattern of diversification (BTPD)
e: which accumulates as macro evo (Macro Evo)
7 --> That is: CV - LRSV --> IDWM = Micro Evo --> BTPD --> Macro Evo
8 --> As the minus sign emphasises, the ONLY adder of info and organisation is CV. And, on empirical studies, the steps are up to maybe 6 - 7 bases.
9 --> Blend in reasonable generation times, mut rates, pop sizes etc., and very modest changes will require easily hundreds of millions of years. We have 500 - 600 MY or so since the Cambrian. And if fossil dates are taken, we have several MY to a few dozen MY to account for HUGE body plan changes.
10 --> And that is assuming a smooth incremental path, so that incremental transitions do the job. The evidence is missing, and there is reason to believe that body plans exist in islands of function.
11 --> If you doubt this, simply think of the many hundreds of protein fold domains that have only one or a few members and are locked away in islands in amino acid chain space. That is the first building brick to a new body plan.
So, what we plainly have is a mechanism that might explain minor adaptations, forced to punch far above its weight because of the a priori materialism imposed under the guise of a modest methodological requirement. And no, I have definitely not "ignored" natural selection. KF
kairosfocus
December 31, 2013 at 03:59 PM PDT
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn't fall out of a "prebiotic soup" as a result of amino acids bumping into each other. But no one (that I know of) thinks that would happen. So what's the point? If we want to test the plausibility of a particular origin of life scenario, we need to understand that particular hypothesis; this 500-bit business isn't going to account for all "chance" hypotheses.
wd400
December 31, 2013 at 03:13 PM PDT
C & Q: You are technically right, but in fact the list of sources as given was the direct source of the sets of digits. D and E were constructed: D notionally, E based on the Fibonacci series. You would probably have to get 10^22 or so digits of pi to be fairly sure of catching particular 21-digit numbers, and I think you would need a supercomputer to do the search. KF
kairosfocus
December 31, 2013 at 03:06 PM PDT
WD: The issue at stake, first, is what does "chance" mean. The post answers, using dice as an illustration of one type. Quantum sources are also mentioned. The matter is then extended to an illustrative chance mutation scenario. Then the issue of searching config spaces comes in. I get the feeling this is no longer a familiar topic.

The issue is NOT what distribution we can construct and "mathematicise" over. That is irrelevant when we run into FSCO/I -- which, due to the need for the right parts in a proper config to work, forces small zones in the space of possible configs, with the scope of the config space being such that no search based on atoms can sample a fraction appreciably different from zero. Sometimes, there is just too much haystack, and too few, too isolated search resources to have hopes of finding needles. For 500 bits and the gamut of the solar system, we can set up each of 10^57 atoms as a searching observer and give it a string of 500 coins to watch, updating every 10^-14 s, as fast as ionic chem rxns, for 10^17 s . . . a typical lifetime estimate. Impossibly generous, but the result is that the sample, relative to the space of 3.27 * 10^150 possibilities for 500 bits, is as a one-straw sample to a cubical haystack 1,000 light years thick, about as fat as our galaxy's central bulge. Effectively, no sample of a size plausibly able to find reasonably rare clusters of configs. Superpose such a haystack on our galactic neighbourhood and you can predict the result with all but certainty: straw. The precise distribution doesn't matter, unless it is in effect not chance at all but a directed search or a programmed necessity. Which would point straight to design by fine tuning.

Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped-for magic out of "natural selection" -- which is really a distractor, as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) -- is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex, functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror-image molecules have the same energy dynamics.

A toy example that gives an idea of the challenge is to think of a string of 500 fair coins all H, or alternating H and T, or coins with the ASCII code for the first 72 characters of this message. No plausible blind chance process is going to get such in any trial under our observation, with all but certainty. For the overwhelming bulk cluster of outcomes of coin tossing or blindly arrived-at configs will be near 50-50 H and T, in no particular pattern. All of this and more has been repeatedly pointed out, but we must not underestimate the blinding power of an a priori ideology that demands that something much harder than this MUST have happened to get the ball rolling for life, wraps that in the lab coat, and demands that the only acceptable explanations will be those that start from blind chance and mechanical necessity.
And when it comes to body plans, we should note that to get to such we are going to need jumps of 10 - 100+ million bits of just genetic info, as we can see from genome sizes and reasonable estimates alike. The notion that there is a smoothly varying incremental path from a unicellular ancestor to the branches of the tree of life is not an empirically warranted explanation based on demonstrated capacity, but an ideological a priori demand. Just as one illustration, the incrementalist position would logically imply that transitional forms would utterly dominate life forms; and after observing billions of fossils in the ground, with millions taken as samples and over 250,000 fossil species, the gaps Darwin was embarrassed by are still there, stronger than ever. And, there is no credible observed evidence that blind chance and mechanical necessity on the gamut of our solar system can write even 500 bits of code. For our observed cosmos, move up to 1,000 bits. The only empirically warranted source of such a level of code as 10 - 1000 mn bits is design. And code is an expression of symbolic language towards a purpose, all of which are strong indicators of design. Save to those locked up in an ideological system that locks such out a priori. And of course all of this has been pointed out over and over and over, with reasons, days and weeks at a time, again and again. But if there is an ideological lockout, there is a lockout. No amount of evidence or reasoning will shift that, only coming to a point where there is a systemic collapse that makes it obvious this is a sinking ship. How do I know that? History. The analysis that showed how Marxist central planning would fail was done in the 1920's. It was fended off and dismissed until the system collapsed in the late 1980's. But it was important for some despised few to stand their ground for 60 long years. In an info age, sooner or later enough will wake up to the source of FSCO/I to make the system collapse. We just have to stand our ground and point to the fallacies again and again until the break-point happens. And that is the context, WD, in which I took time to point out that whatever the obfuscatory rhetorical ink clouds that may be spewed to cloud the issue, what is meant by chance in the design inference is fairly simple, and not at all a strained or dubious notion. KF
kairosfocus
December 31, 2013 at 03:03 PM PDT
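The sampling arithmetic in the comment above can be reproduced in a few lines of Python; the figures (10^57 observers, 10^-14 s updates, 10^17 s, 2^500 configurations) are the comment's own, only the script is new:

```python
# Back-of-envelope for the sampling argument above: 10^57 atom-level
# "observers", each sampling a 500-bit state every 10^-14 s for 10^17 s,
# versus the 2^500 possible configurations.
samples = 1e57 * 1e14 * 1e17   # observers * samples/s * seconds = 10^88
space = 2.0 ** 500             # ~3.27e150 configurations
print(f"configurations:   {space:.2e}")
print(f"samples:          {samples:.2e}")
print(f"fraction sampled: {samples / space:.2e}")  # ~3e-63
```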
Hmm. This post seems to say precisely that "chance", left hanging by itself, is too vague and imprecise a term to describe a cause. As you say, we can test a chance explanation for a series of dice rolls, but only if we test a specific chance hypothesis (a fair die, rolled not placed). The post that started all that wasted energy and cross talk about "chance as a cause" very specifically didn't present a specific chance hypothesis. More to the point, what's the appropriate probability distribution for, say, the evolution of a particular amino acid sequence given mutation, drift and natural selection? If you don't have that, then how can you reject the "chance" hypothesis?
wd400
December 31, 2013 at 01:30 PM PDT
Querius, It is probably true that pi, and in fact most numbers, contain all possible numeric sequences. It's not possible to prove it, though.
wd400
December 31, 2013 at 01:19 PM PDT
Haha, good one Cantor! But can you actually prove mathematically that ALL numeric sequences can be found in pi (versus other types of random numbers)? Think cryptography. Just wondering. ;-) -Q
Querius
December 31, 2013 at 12:15 PM PDT
Which of these is pi digits...
Correct answer: A thru E are pi digits
cantor
December 31, 2013 at 08:45 AM PDT
Not planned, not controlled, sometimes not controllable (by us).
kairosfocus
December 31, 2013 at 07:23 AM PDT
What is chance? Chance is nothing more than happenstance, accidental, i.e. not planned.
Joe
December 31, 2013 at 07:15 AM PDT
Box, I added the vid. BA: There are ever so many fascinating twists and turns out there indeed. I am, however, here trying to nail down a shingle on a fog bank, so to speak. I am thinking cavils may need to go on that growing list of Darwinist debate tactics and fallacies here. And of course, tricks with chance and necessity. KF
kairosfocus
December 31, 2013 at 06:41 AM PDT
kf, I certainly don't want to take anything away from the explanatory filter. Nor do I take lightly the concerted effort at obfuscation that Darwinists continually employ towards the word 'chance' (and everything else in the debate, for that matter). I just thought that you would appreciate the subtle, but important, distinction that is to be found between the random entropic events of the universe and the 'unbounded' randomness in quantum mechanics that results as a consequence of our 'free will' choice as to how we choose to consciously observe an event. Personally, although many i's have to be dotted and t's crossed to further delineate the two, I found the distinction between the two types of 'chance', uncovered thus far, to be fascinating.
bornagain77
December 31, 2013 at 06:31 AM PDT
Box: That could work in many cases -- especially if you include an indefinite number of uncontrolled perturbing events -- but the more effective way is the direct empirical one: set up quite similar initial circumstances and see how the results come out. Drop a die from a holder in the "same" position, at a given height and place over a surface, 500 times, and see the result. (Or try a Quincunx or Galton Board machine that simulates the Normal distribution, cf. video. Notice the ideal model and the stock distribution histogram.) Chance in action. KF
kairosfocus
December 31, 2013 at 06:18 AM PDT
Thx KF, One more question: I understand contingent in this context as "depending on unknown events / conditions". Do you agree?
Box
December 31, 2013 at 05:45 AM PDT
PPS: Which of these is pi digits, which sky noise, which phone numbers (admittedly the local phone directory is a bit on the scanty side), and why does the pattern stand out so clearly at D and at E?
A: 821051141354735739523
B: 733615329964125325790
C: 698312217625358227195
D: 123409876135791113151
E: 113581321345589146235
(Ans: A -- pi, B -- phone (last 2 digits of each line code), C -- sky.)
kairosfocus
December 31, 2013 at 05:45 AM PDT
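As an aside on the pi-digit hunt in this sub-thread, here is a sketch of how one might search the early digits of pi for a target string, assuming the third-party mpmath package is available. Sequence A is taken from the comment above; the shorter targets are illustrative (a random 21-digit target is expected to turn up only around 10^21 digits in, which is why the first search below almost surely fails):

```python
# Sketch: hunting for digit strings in the first 100,000 decimal digits
# of pi, using the third-party mpmath package for arbitrary precision.
from mpmath import mp

mp.dps = 100_010                          # working precision in digits
pi_digits = mp.nstr(mp.pi, 100_000).replace(".", "")

for target in ("821051141354735739523",  # sequence A from the comment
               "14159", "999999"):
    pos = pi_digits.find(target)
    print(f"{target}: "
          f"{'found at digit ' + str(pos) if pos >= 0 else 'not found'}")
```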
Box: low and high. Low (or ideally no) contingency, high contingency in context. KF
kairosfocus
December 31, 2013 at 05:21 AM PDT
PS: I should note that chance variations or mutations can be triggered by radioactivity. An alpha particle ionises water molecules, triggering messing with the genes by processes that are accidental, uncorrelated. Non-foresighted variation results. A gene changes in an uncontrolled way through the resulting chemical reaction. Suppose the creature is not killed by being hit with a large dose. (Radiation Physics was a gruesome course.) It has a heritable variation. That feeds into the gene pool, again with all sorts of uncontrolled factors. A new variety pops up. Somehow, in some env't, it is slightly advantageous. Less advantaged varieties are then outcompeted, and the population is modified. Continue long enough and, voila, tree of life. Big problems: the only actual adder of info was chance. The natural selection part is really culling out of the disadvantaged, for whatever reason. Mods do happen, but with the scope of searches for new body plans etc. (10 - 100+ mn bits), and with reasonable pops, mut rates, reproduction rates and the empirically warranted step limit of 6 - 7 co-ordinated muts at one go, we do not have enough time, population or maybe even atoms to sufficiently explore the space of possibilities to give a plausible chance of getting to novel islands of function. But of course, this is hotly denied by the Darwinists. They need to provide empirical-observation-backed answers, and not beg questions by imposing a priori materialism. Starting with OOL. (You will remember my year-long tree of life challenge, which does not have a really solid attempt, even though I put something together from various remarks. And OOL is the ROOT of the Darwinist tree of life.) KF
kairosfocus
December 31, 2013 at 05:20 AM PDT
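For a rough feel of the waiting-time claim in the comment above, here is a deliberately crude back-of-envelope in Python. The per-site rate, population size, and generation time are illustrative assumptions, and the toy ignores selection, recombination and standing variation entirely; it only shows how steeply the expected wait grows with the number of co-ordinated changes demanded at one go:

```python
# Toy back-of-envelope (NOT a population-genetics model): expected wait
# for k specific co-ordinated mutations to co-occur in one individual,
# under illustrative assumed parameters. Real models change this
# picture substantially; the point is only the exponential role of k.
MU = 1e-8       # assumed per-site mutation rate per generation
POP = 1e6       # assumed population size
GEN_YEARS = 20  # assumed generation time in years

for k in (1, 2, 6):
    p_individual = MU ** k                       # all k hits in one birth
    expected_generations = 1 / (POP * p_individual)
    print(f"k={k}: ~{expected_generations * GEN_YEARS:.1e} years")
```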
In the flowchart: what do the abbreviations "Lo" and "Hi" mean?
Box
December 31, 2013 at 05:20 AM PDT
BA77: Fair enough to raise such issues and concerns. Our problem is, however, that we are dealing with determined, sometimes ruthless, zero-concession objectors, and onlookers tossed into confusion by clouds of rhetorical squid ink. So we have to start with the familiar, get things clear, and build out from there. As used by the man in the street, the common relevant meaning of chance is what you get from fair dice or coins or things somewhat like that, or else by accident -- what in my native land we call "buck ups." (One can even have children by "buck ups." That is, unplanned and obviously uncontrolled events. Here, the talk is "drive carefully, most people are the result of accidents.") A good example is that it is possible to use the phone book as a random number table, based on the lack of correlation between names, area of residence and line codes. Each of these is separately not mere happenstance at all, but because they lack correlation, the resulting pattern is sufficiently random for this to work. The same obtains for the lack of correlation between the decimal place-value system and the ratio of circumference to diameter for a circle, leading to how one can use blocks of digits of pi from a million-digit value to give effectively random numbers. And so forth. We can then take this up to the thermodynamic level and the Maxwell-Boltzmann distribution, as I do in my discussion here, app. 1 of my usual linked note (which is in effect the implicit context for every comment I have ever made at UD):
___________
>> f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::::::::::::::||
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid; think of them as scattered more or less hap-hazardly, as would happen in a cake).
2: Now, let the marbles all be at rest to begin with.
3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].
4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.
5: As the glancing angles on collision will vary at random, the marbles hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.
6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.
7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue. (That is, the process is ergodic.)
8: A pressure would be exerted on the walls of the box by the average force per unit area from collisions of marbles bouncing off the walls, and this would be increased by pushing in the left or right walls (which would do work to push in against the pressure, naturally increasing the speed of the marbles, just like a ball has its speed increased when it is hit by a bat going the other way, whether cricket or baseball). Pressure rises if volume goes down due to compression. (Also, the volume of a gas body is not fixed.)
9: Temperature emerges as a measure of the average random kinetic energy of the marbles in any given direction: left, right, towards us or away from us. Compressing the model gas does work on it, so the internal energy rises, as the average random kinetic energy per degree of freedom rises. Compression will tend to raise temperature. (We could actually deduce the classical -- empirical -- P, V, T gas laws [and variants] from this sort of model.)
10: Thus, from the implications of classical, Newtonian physics, we soon see the hard little marbles moving at random, and how that randomness gives rise to gas-like behaviour. It also shows how there is a natural tendency for systems to move from more orderly to more disorderly states, i.e. we see the outlines of the second law of thermodynamics.
11: Is the motion really random? First, we define randomness in the relevant sense:
In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated.
12: This can be seen by the extension of the thought experiment of imagining a large collection of more or less identically set up boxes, each given the same push at the same time, as closely as we can make it. At first, the marbles in the boxes will behave very much alike, but soon, they will begin to diverge as to path. The same overall pattern of M-B statistics will happen, but each box will soon be going its own way. That is, the distribution pattern is the same, but the specific behaviour in each case will be dramatically different.
13: Q: Why?
14: A: This is because tiny, tiny differences between the boxes, and the differences in the vibrating atoms in the walls and pistons, as well as tiny irregularities too small to notice in the walls and pistons, will make small differences in initial and intervening states -- perfectly smooth boxes and pistons are an unattainable ideal. Since the system is extremely nonlinear, such small differences will be amplified, making the behaviour diverge as time unfolds. A chaotic system is not predictable in the long term. So, while we can deduce a probabilistic distribution, we cannot predict the behaviour in detail, across time. Laplace's demon, who hoped to predict the future of the universe from the covering laws and the initial conditions, is out of a job . . . >>
___________
So, chance and randomness enter even before we get to the quantum level, and they lead to entropy as a measure, in effect, of lack of information about the micro-level state (the degrees of freedom) consistent with gross, lab-level macro conditions. (This is the context of S = k log W, and of the perception that entropy often is an index of degree of disorder, though of course there are ever so many subtleties and surprises involved, so that is over-simplified.) When we get to quantum-level phenomena, stochastic distributions of unknown root crop up everywhere. The nice crisp orbits of electrons fuzz out into probabilistically distributed orbitals. Potential barriers too high for classical cases can be tunnelled [NB: there is a wave-optics case on frustration of total internal reflection . . . ], etc. For this case, I use the alpha-emission radioactivity example, as it is a classic and gives rise to a pretty easily observed macro effect, as we count with Geiger counters or watch with ZnS scintillation screens etc. The random tiny greenish flashes are unforgettable. The counter chatter too. The relevance of these is that we then see that chance is the inferred cause of highly contingent outcomes that tend to follow what we would expect from appropriate stochastic models. And, as a result, when the outcomes are sufficiently complex, and especially functionally specific to the point where we have deeply isolated islands of function in large config spaces, we may not be able to get a big enough sample that it is reasonable to hit such islands blindly. But routinely -- e.g. the posts in this thread -- designers using intelligence do so. That marks a pretty sharp distinction and a reliable sign of design. Which gets us back to the reason the explanatory filter works. KF
kairosfocus
December 31, 2013 at 05:01 AM PDT
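The marbles-in-a-box relaxation described in the quoted note can be imitated with a toy Monte Carlo: random pairs of equal-mass particles collide elastically at a random 2-D scattering angle, conserving energy and momentum. This is a sketch under those stated assumptions (no walls or positions are modelled), not the note's own model:

```python
# Toy Monte Carlo of the marbles-in-a-box picture: start every particle
# at the same velocity, then repeatedly let random pairs collide
# elastically (equal masses; a random 2-D scattering angle in the
# centre-of-mass frame conserves both energy and momentum). The speeds
# relax toward the bell-like Maxwell-Boltzmann shape despite the
# orderly start -- the "attractor" behaviour described above.
import math
import random
from collections import Counter

random.seed(3)
N = 2_000
vel = [(1.0, 0.0)] * N  # everyone starts moving right at speed 1

for _ in range(20 * N):  # many random pair collisions
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    (ax, ay), (bx, by) = vel[i], vel[j]
    cx, cy = (ax + bx) / 2, (ay + by) / 2   # centre-of-mass velocity
    rel = math.hypot(ax - bx, ay - by) / 2  # half the relative speed
    th = random.uniform(0, 2 * math.pi)     # random scattering angle
    dx, dy = rel * math.cos(th), rel * math.sin(th)
    vel[i], vel[j] = (cx + dx, cy + dy), (cx - dx, cy - dy)

hist = Counter(round(math.hypot(vx, vy), 1) for vx, vy in vel)
for speed in sorted(hist):
    print(f"speed ~{speed:3.1f} {'#' * (hist[speed] // 20)}")
```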
But why should the random entropic events of the universe care if and when I decide to observe a particle if, as Darwinists hold, I'm supposed to be the result of random entropic events in the first place? The following experiment goes even further in differentiating the entropic randomness of the space-time of the universe from the free-will randomness found in quantum mechanics. And it is also very good at highlighting just how deeply the deterministic, no-free-will, materialistic view of reality has been undermined by quantum mechanics. Here's a recent variation of Wheeler's Delayed Choice experiment, which highlights the ability of the conscious observer to effect 'spooky action into the past'. Furthermore, in the following experiment, the claim that past material states determine future conscious choices (materialistic determinism) is directly falsified by the fact that present conscious choices are in fact affecting past material states:
Quantum physics mimics spooky action into the past – April 23, 2012 Excerpt: The authors experimentally realized a “Gedankenexperiment” called “delayed-choice entanglement swapping”, formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice’s and Bob’s photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice’s and Bob’s photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor’s choice and measurement with respect to the measurements which Alice and Bob perform on their photons. “We found that whether Alice’s and Bob’s photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured”, explains Xiao-song Ma, lead author of the study. According to the famous words of Albert Einstein, the effects of quantum entanglement appear as “spooky action at a distance”. The recent experiment has gone one remarkable step further. “Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events”, says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are just merely the result of whatever state the material particles in my brain happened to be in in the past (deterministic), how in blue blazes are my free-will choices instantaneously affecting the state of material particles in the past? The preceding experiment is simply completely impossible on a materialistic/deterministic view of reality! I consider the preceding experimental evidence to be a vast improvement over the traditional 'uncertainty' argument for free will, from quantum mechanics, that had been used for decades to undermine the deterministic belief of materialists:
Why Quantum Physics (Uncertainty) Ends the Free Will Debate – Michio Kaku – video http://www.youtube.com/watch?v=lFLR5vNKiSw
Of related note as to free will and the creation of new information (of note: neo-Darwinian processes have yet to demonstrate the origination of new information!)
Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf
Of important note as to how almighty God exercises His free will in all of this:
BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that “nothing” is unstable, as Mr. Hawking and others maintain. Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse – where it is alleged that anything can spontaneously jump into existence without cause – produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. Universes do not “spontaneously create” on the basis of abstract mathematical descriptions, nor does the fantasy of a limitless multiverse trump the explanatory power of transcendent intelligent design. What Mr. Hawking’s contrary assertions show is that mathematical savants can sometimes be metaphysical simpletons. Caveat emptor. per Washington Times The Absurdity of Inflation, String Theory and The Multiverse – Dr. Bruce Gordon – video http://vimeo.com/34468027
Here is the last power-point slide of the preceding video:
The End Of Materialism? * In the multiverse, anything can happen for no reason at all. * In other words, the materialist is forced to believe in random miracles as a explanatory principle. * In a Theistic universe, nothing happens without a reason. Miracles are therefore intelligently directed deviations from divinely maintained regularities, and are thus expressions of rational purpose. * Scientific materialism is (therefore) epistemically self defeating: it makes scientific rationality impossible.
Supplemental note: finding 'free will conscious observation' to be 'built into' our best description of foundational reality, quantum mechanics, as a starting assumption -- 'free will observation' being indeed the driving aspect of randomness in quantum mechanics -- is VERY antithetical to the entire materialistic philosophy, which demands that a 'non-teleological randomness' be the driving force of creativity in Darwinian evolution! Also of interest:
Scientific Evidence That Mind Effects Matter – Random Number Generators – video http://www.metacafe.com/watch/4198007 Correlations of Random Binary Sequences with Pre-Stated Operator Intention: A Review of a 12-Year Program - 1997 Abstract: Strong correlations between output distribution means of a variety of random binary processes and pre-stated intentions of some 100 individual human operators have been established over a 12-year experimental program. More than 1000 experimental series, employing four different categories of random devices and several distinctive protocols, show comparable magnitudes of anomalous mean shifts from chance expectation, with similar distribution structures. Although the absolute effect sizes are quite small, of the order of 10–4 bits deviation per bit processed, over the huge databases accumulated the composite effect exceeds 7 ?( p approx.= 3.5 × 10 –13). These data display significant disparities between female and male operator performances, and consistent serial position effects in individual and collective results. Data generated by operators far removed from the machines and exerting their efforts at times other than those of machine operation show similar effect sizes and structural details to those of the local, on-time experiments. Most other secondary parameters tested are found to have little effect on the scale and character of the results, with one important exception: studies performed using fully deterministic pseudorandom sources, either hard-wired or algorithmic, yield null overall mean shifts, and display no other anomalous feature. http://www.princeton.edu/~pear/pdfs/1997-correlations-random-binary-sequences-12-year-review.pdf Mass Consciousness: Perturbed Randomness Before First Plane Struck on 911 - July 29 2012 Excerpt: The machine apparently sensed the September 11 attacks on the World Trade Centre four hours before they happened - but in the fevered mood of conspiracy theories of the time, the claims were swiftly knocked back by sceptics. But it also appeared to forewarn of the Asian tsunami just before the deep sea earthquake that precipitated the epic tragedy.,, Now, even the doubters are acknowledging that here is a small box with apparently inexplicable powers. 'It's Earth-shattering stuff,' says Dr Roger Nelson, emeritus researcher at Princeton University in the United States, who is heading the research project behind the 'black box' phenomenon. http://www.network54.com/Forum/594658/thread/1343585136/1343657830/Mass+Consciousness-+Perturbed+Randomness++Before+First+Plane+Struck+on+911
I once asked an evolutionist, after showing him the preceding experiments, “Since you ultimately believe that the ‘god of random chance’ produced everything we see around us, what in the world is my mind doing pushing your god around?” Here are some of the papers to go with the preceding video and articles:
Princeton Engineering Anomalies Research - Scientific Study of Consciousness-Related Physical Phenomena - publications http://www.princeton.edu/~pear/publications.html The Global Consciousness Project - Meaningful Correlations in Random Data http://teilhard.global-mind.org/
bornagain77
December 31, 2013 at 04:27 AM PDT
But where do we delineate 'quantum randomness' from entropic randomness in all this? Well, let's add some perspective, shall we? Around the 13:20 minute mark of the following video, Pastor Joe Boot comments on the self-defeating nature of the atheistic worldview in regards to absolute truth:
Defending the Christian Faith – Pastor Joe Boot – video http://www.youtube.com/watch?v=wqE5_ZOAnKo
"If you have no God, then you have no design plan for the universe. You have no preexisting structure to the universe. As the ancient Greeks held, like Democritus and others, the universe is flux. It's just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There's no design plan. It's like my kids' 'join the dots' puzzles. It's just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheist is without that (final picture). There is no preestablished pattern (to connect the facts, given atheism)." Pastor Joe Boot
The scientist in the following video, who works within the field of quantum mechanics, scientifically confirms Pastor Joe Boot's intuition and shows how conservation of energy in the universe requires quantum non-locality to be true in order for the universe to have coherence.
Is Richard Dawkins proving the existence of God after all? - Antoine Suarez - video http://www.youtube.com/watch?v=jIXXqv9zKEw
The difference between quantum and entropic randomness is that the 'randomness' of quantum mechanics is, unlike bounded entropic randomness (Planck), found to be associated with the free will of the conscious observer. In the following video, at the 37:00 minute mark, Anton Zeilinger, a leading researcher in quantum teleportation with many breakthroughs under his belt, humorously reflects on just how deeply the determinism of materialism has been undermined by quantum mechanics, by musing that perhaps such a deep lack of determinism in quantum mechanics may provide some of us a loophole when we meet God on judgment day.
Prof Anton Zeilinger speaks on quantum physics. at UCT – video http://www.youtube.com/watch?feature=player_detailpage&v=s3ZPWW5NOrw#t=2237s
This ‘unbounded randomness’ found in quantum mechanics is brought out a bit more clearly in the following article:
People Keep Making Einstein’s (Real) Greatest Blunder – July 2011 Excerpt: It was in these debates (with Bohr) that Einstein declared his real greatest blunder: “God does not play dice with the Universe.” As much as we all admire Einstein,, don’t keep making his (real) greatest blunder. I’ll leave the last word to Bohr, who allegedly said, “Don’t tell God what to do with his dice.” ,,, To clarify, it isn’t simply that there’s randomness; that at some level, “God plays dice.” Even local, real interpretations of quantum mechanics with hidden variables can do that. It’s that we know something about the type of dice (at the quantum level) that the Universe plays. And the dice cannot be both local and real; people claiming otherwise have experimental data to answer to. http://scienceblogs.com/startswithabang/2011/07/01/people-keep-making-einsteins-g/
Personally, I felt that such a deep undermining of determinism by quantum mechanics, far from providing a ‘loophole’ on judgment day as Dr. Zeilinger was musing, actually restores free will to its rightful place in the grand scheme of things, thus making God’s final judgments on men’s souls all the more fully binding since, as far as science can tell us, man truly is a ‘free moral agent’, just as Theism has always maintained. To solidify this basic theistic ‘free will’ claim about how reality is now found to be constructed at the quantum level, the following study came along a few months after I had seen Dr. Zeilinger’s video:
Can quantum theory be improved? – July 23, 2012 Excerpt: Building on nearly a century of investigative work on this topic, a team of physicists has recently performed an experiment whose results show that, despite its imperfections, quantum theory still seems to be the optimal way to predict measurement outcomes. However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (*conscious observation) parameters can be chosen independently (free will) of the other parameters of the theory. The experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power. http://phys.org/news/2012-07-quantum-theory.html
to clarify: What does the term “measurement” mean in quantum mechanics? - “Measurement” or “observation” in a quantum mechanics context are really just other ways of saying that the observer is interacting with the quantum system and measuring the result in toto. http://boards.straightdope.com/sdmb/showthread.php?t=597846
Henry Stapp on the Conscious Choice and the Non-Local Quantum Entangled Effects – video http://www.youtube.com/watch?v=HJN01s1gOqA
Moreover,
In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html
So just as I had somewhat suspected after watching Dr. Zeilinger’s video, it is found that there is indeed a required assumption of ‘free will’ in quantum mechanics (that measurement parameters can be chosen independently), and that it is ‘free will’ that necessarily drives the completely random (non-deterministic) aspect of quantum mechanics. To further differentiate the ‘spooky’ randomness of quantum mechanics (which is directly associated with the free will of our conscious choices) from the ‘bounded entropic randomness’ of the space-time of General Relativity, it is found that:
Quantum Zeno effect Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay. https://uncommondescent.com/intelligent-design/tonights-feature-presentation-epigenetics-the-next-evolutionary-cliff/#comment-445840
bornagain77
December 31, 2013 at 04:22 AM PDT
