
ID Foundations, 1a: What is “Chance”? (a rough “definition”)


Just what is “chance”?

This point has come up as contentious in recent UD discussions, so let me clip the very first UD Foundations post, so we can look at a paradigm example, a falling and tumbling die:

A pair of dice showing how 12 edges and 8 corners contribute to a flat random distribution of outcomes as they first fall under the mechanical necessity of gravity, then tumble and roll influenced by the surface they have fallen on. So, uncontrolled small differences make for maximum uncertainty as to final outcome. (Another way for chance to act is by quantum probability distributions, such as tunnelling for alpha particles in a radioactive nucleus.)

As an illustration, we may discuss a falling, tumbling die:

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

{We may extend this by plotting the (observed) distribution of dice . . . observing, with Muelaner [here], how the sum tends to a normal curve as the number of dice rises:}

[Figure: how the distribution of the sum varies with the number of dice (HT: Muelaner)]
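{For readers who want to see this numerically, here is a minimal simulation sketch in Python (standard library only; the trial counts are arbitrary). As more dice are summed, the empirical distribution of the total piles up around its mean in the familiar bell shape:}

```python
import random
from collections import Counter

def dice_sum_distribution(n_dice, trials=100_000):
    """Empirical distribution of the sum of n_dice fair six-sided dice."""
    totals = Counter(sum(random.randint(1, 6) for _ in range(n_dice))
                     for _ in range(trials))
    return {total: totals[total] / trials for total in sorted(totals)}

if __name__ == "__main__":
    # One die is flat; by five dice the sum is already visibly bell-shaped.
    for n in (1, 2, 5):
        dist = dice_sum_distribution(n)
        print(f"{n} dice:", {k: round(p, 3) for k, p in dist.items()})
```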

Then, from No 21 in the series, we may bring out thoughts on the two types of chance:

Chance:

TYPE I: the clash of uncorrelated trains of events, such as is seen when a dropped fair die hits a table and tumbles, settling to readings in the set {1, 2, . . . 6} in a pattern that is effectively flat random. In this sort of event, we often see manifestations of sensitive dependence on initial conditions, aka chaos, intersecting with uncontrolled or uncontrollable small variations, yielding a result predictable in most cases only up to a statistical distribution, which need not be flat random. (A toy simulation of this sensitivity appears just after the two types.)

TYPE II: processes — especially quantum ones — that are evidently random in themselves, such as the quantum tunnelling that explains alpha decay. This is exploited, for instance, in Zener noise sources that drive special counter circuits to give a random number source. Such sources are sometimes used in lotteries and the like, or presumably in making the one-time pads used in cryptography.
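{To see the Type I mechanism in miniature, here is a minimal sketch in Python using the logistic map purely as a stand-in chaotic system (it is not part of the discussion above). Two starting values differing only in the sixth decimal place soon follow completely different trajectories, the kind of sensitive dependence that scrambles a tumbling die:}

```python
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)   # differs only in the sixth decimal place
for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```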

{Let’s add a Quincunx, or Galton Board, demonstration, to see the sort of contingency we are speaking of in action, and its results . . . here, a normal bell-shaped curve; note how the ideal mathematical model and the stock-returns histogram align with the beads:}

[Video: Quincunx (Galton board) demonstration — YouTube, ID AUSKTk9ENzg]
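{And for those who would rather run the Quincunx than watch it, here is a minimal sketch (again Python, standard library, with row and bead counts chosen only for illustration). Each bead takes a 50-50 left/right step at every pin, and the resulting bin counts are binomial, i.e. approximately normal, with the far skirts nearly empty:}

```python
import random
from collections import Counter

def galton_board(rows=12, beads=5000):
    """Drop `beads` beads through `rows` pins; each pin deflects the bead
    right (1) or left (0) with equal probability. Returns bin counts."""
    bins = Counter(sum(random.choice((0, 1)) for _ in range(rows))
                   for _ in range(beads))
    return [bins.get(k, 0) for k in range(rows + 1)]

if __name__ == "__main__":
    # Crude text histogram: middle bins pile high, far skirts stay nearly empty.
    for k, count in enumerate(galton_board()):
        print(f"bin {k:2d}: {'#' * (count // 25)}")
```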

Why the fuss and feathers?

Because stating a clear enough understanding of what design thinkers are talking about when we refer to “chance” is now important given some of the latest obfuscatory talking points. So, bearing the above in mind, let us look afresh at a flowchart of the design inference process:

[Figure: the explanatory filter flowchart]

(So, we first envision nature acting by low-contingency mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth's surface falling with an initial acceleration of about 9.8 m/s^2, i.e. a gravitational field strength g of 9.8 N/kg. That is the first default. Similarly, we see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. For more context you may go back to the same first post, on the design inference. And yes, that will now also link this for an all-in-one-go explanation of chance, so there!)
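{The order of defaults just described can be written down in a few lines of code. This is only a sketch of the decision flow: the 500-bit threshold comes from the range given above, and the yes/no judgments and bit counts are inputs the investigator supplies, exactly as with the flowchart:}

```python
def explanatory_filter(high_contingency, specified, info_bits, threshold_bits=500):
    """Sketch of the per-aspect decision order described above: necessity is
    the first default, chance the second, and design is inferred only when
    high contingency, tight specificity and sufficient complexity concur."""
    if not high_contingency:
        return "mechanical necessity (law-like regularity)"
    if specified and info_bits >= threshold_bits:
        return "design"
    return "chance"

# Illustrative calls (the bit values are made-up placeholders):
print(explanatory_filter(False, False, 0))     # a dropped heavy object -> necessity
print(explanatory_filter(True, False, 0))      # which face a fair die shows -> chance
print(explanatory_filter(True, True, 2000))    # long functional text -> design
```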

Okie, let us trust there is sufficient clarity for further discussion on the main point. Remember, whatever meanings you may wish to inject into “chance,” the above is more or less what design thinkers mean when we use it — and I daresay, it is more or less what most people (including most scientists) mean by chance, in light of experience with games of dice, flipped coins, shuffled cards, lotteries, molecular agitation, Brownian motion and the like. At least, when hair-splitting debate points are not being made. It would be appreciated if that common-sense usage by design thinkers is taken into account. END

Comments
wd400: You may have missed my prior comment in the thread, so I thought I would ask again: if OOL theorists don't think that functional proteins result from amino acids bumping into each other, then how do they think functional proteins came along? Eric Anderson
F/N: Notice how the objections about how vague "chance" is have suddenly vanished? Without any acknowledgement that there is a point here? As in, the zero-acknowledgements, zero-concessions, zero-apologies tactic in action. Mentally bookmark that tactic for future reference. And of course, let us remember the triple context:
1: The key Darwinist claim that chance variation yielding varieties [CV], less differential reproductive success of varieties [DRS], leads to incremental descent with modification [IDWM], thence branching-tree evolution at micro and macro . . . body plan . . . levels [BTE, m&M], thence the Darwinist tree of life [DTOL]:
CV - DRS --> IDWM --> BTE, m&M --> DTOL
2: The explanatory filter challenge on the limits of chance variation in finding islands of function in large config spaces. For, in the intervening large non-functional spaces, there is no function to give advantage, so no attractor, no slope to draw toward function without foresight. Where also, the requisites of many properly organised, matched parts to achieve function naturally lock function to rare islands in the space of possible configs. (Which is of course common experience; those imagining a vast continent of incrementally accessible function need to show us demonstrations of that extraordinary claim . . . which of course they have not.)
3: That any case of complex organisation is informationally equivalent to a coded describing string that sets out its nodes and arcs, so coin flipping is WLOG. So also, viewing our solar system's 10^57 atoms as coin-flipping observers doing a flip-and-view exercise on 500 coins each, every 10^-14 s, for 10^17 s, will dominate any realistic chance hyp in ability to search out the space for 500 coins, 3.27*10^150 configs from TTT . . . through THTH ... to HHHHH . . . but will only be able to sample as one straw to a cubical haystack as thick as our galaxy at its central bulge. Superpose the stack on our stellar neighbourhood and blindly pick a one-straw sample. On needle-in-haystack grounds, with all but absolute certainty, you will pick nothing but straw. Straw -- non-function -- just plain utterly dominates the config space, never mind that there are lots of zones of interest in it. Blind search is strictly limited in its capability, and is far surpassed by intelligent imagination and creativity. Just as the text for this comment was not produced by a blind needle-in-haystack search.
The shoe pinches a bit on the other foot, nuh. KF kairosfocus
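{The arithmetic behind point 3 of the comment above is easy to reproduce. Here is a minimal check in Python, taking the stated figures (10^57 atoms, one 500-coin observation per atom every 10^-14 s, for 10^17 s) as given rather than independently established:}

```python
# Checking the sampling fraction claimed above, using the commenter's stated figures.
observers = 10 ** 57                      # atoms in the solar system (stated estimate)
observations_each = 10 ** 17 / 10 ** -14  # duration / time per observation = 1e31
total_samples = observers * observations_each   # about 1e88 samples
config_space = 2 ** 500                         # about 3.27e150 possible 500-coin strings

print(f"config space    : {config_space:.2e}")
print(f"total samples   : {total_samples:.2e}")
print(f"fraction sampled: {total_samples / config_space:.2e}")  # about 3e-63
```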
Eric #59, Thank you Eric. I think the opponents of design are trying to defeat the observations by ignoring them. At least that appears to be WD's strategy. :( Upright BiPed
wd400:
As I say, I can think of several chance mechanisms that stand a good probability of creating a string equivalent to 500H, a random walk being one such.
I for one would personally be very interested to know what mechanisms you have in mind. Can you describe one for us that would create a complex, specified string? (Keeping in mind, that 500H is not complex and is most likely explained as a result of necessity, so we really need to talk about a complex specified string.) Eric Anderson
UB @46: Thanks for always keeping things focused on the heart of the matter. The desire to squirm out of the discussion of information and symbols and into vague, general assertions of things like "natural selection" is very tempting. Keep up the pressure. ----- And KF, thanks for a valuable post. Eric Anderson
Even if the lower edges of the two balls were at identical heights at the point of release, if we're talking about a lag of 2 inches, there is also the question of the center of gravity of the objects in question. Presumably the edge of the cannon ball is 2 or more inches from its center of gravity. I don't know if it would have an impact over that short of a distance, but it could. As could air resistance, updraft, crosswinds, etc. Which is why subsequent experiments with the feather are conducted in a vacuum. Eric Anderson
F/N: The wiki cite is accurate, at least to the book -- went to the local library which I knew had a copy, read esp pp 19 - 20. Looks like in old age Galileo told a student of the incident, and it has been passed down. There are many back-forths over the issue involving inter alia problems on releasing the balls at the same time. KF kairosfocus
Joe and EA: Of course I did not ignore NS, but rephrased it in more descriptive, accurate terms that show that this is just another way of saying less fit varieties die out, so NS actually subtracts info, if anything. This leaves chance variation as the source of info, the only source. With the hope that there is that easy, fast incremental path up the back slope of Mt Improbable. The real problem is that this is a description of micro evo, grossly extrapolated to macro evo without empirical observational warrant for a vast continent of function that can be achieved incrementally in a branching tree pattern. Hence the significance of missing fossils of transitionals as a dominant feature of the record, and instead the pattern of sudden appearance, stasis, disappearance. Not to mention molecular matters such as singleton proteins in isolated fold domains etc. Then we have the problem of the pop genetics and the pacing of the process. But all of this then comes back to: what is chance capable of in given time and scope of atomic materials? Where the limit of 500 bits comes from the solar system's 10^57 atoms, each making observations of 500-coin-toss exercises every 10^-14 s. For 10^17 s. Sampling 1 straw to a cubical haystack 1,000 LY on the side. Effectively zero sample relative to the task of finding isolated narrow zones or islands of function. But the objectors look like they will go down with the ship of denial, never mind the implications of the thought exercise of a microjet assembled in a vat that shows why FSCO/I will be very rare in a realistic config space. Nah, it's just a play of words -- nope, it is a gedankenexperiment -- and we are not interested anyway. Well, it seems there may have been a real demo of dropped musket and cannon balls at Pisa, and the objectors whose side predicted a 60 ft lag of the lighter behind the heavier ball then crowed as to how a 2 inch lag meant they could dismiss Galileo's overall point. H'mm . . . KF kairosfocus
wd400:
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn’t fall out of a “prebiotic soup” as a result of amino acids bumping into each other.
Umm natural selection doesn't do anything so there isn't anything to ignore. Joe
wd400 @23:
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn’t fall out of a “prebiotic soup” as a result of amino acids bumping into each other. But no one (that I know of) thinks that would happen.
Then what do they think would happen? Eric Anderson
PS: A second place to look on singletons and implications. KF kairosfocus
F/N: A First place to look on isolation of functional forms in Protein sequence space: Axe, 2004. KF kairosfocus
wd400 pondered in 19
It is probably true that pi, and in fact most numbers, contain all possible numeric sequences. It's not possible to prove it, though,
You missed my point. Just as all infinities are not the same, I'm wondering whether all irrational numbers are likewise not the same. If I pick a random point in space, its coordinates will be irrational numbers (with overwhelming probability). Since my picking the point was randomized by my unsteady hand, it seems reasonable to assume that the digits are also random, and will contain all finite numeric sequences. However, numbers such as pi and the square root of 2 are computed, and might not behave the same way. Before you dismiss this idea as silly, remember that pi varies with the curvature of space and the size of the circle. I can easily imagine a curvature with a circle of given size in which the measured value of pi is exactly 3.00000... and if the circle was large enough, pi could be as small as 2.00000... Because we're living in gravitational energy wells, the measured value for pi will vary depending on the direction that the diameter is measured. -Q Querius
I did read the short play about nanobots. Singleton protein folds don't prove proteins form unreachable islands in sequence space - that's croco-duck thinking. wd400
UB: I think your contributions are very valuable and that in an information age more and more will see the point. I also suspect that for people who make the error on the meaning of reification we have been seeing, "abstract" is another word for not real, i.e. they are materialists of the crudest sort, or are confused by that self-refuting scheme of thought. KF kairosfocus
PS: And I don't need to show the case for proteins, just look up singleton protein fold domains and the like. As I noted already, there are many such islands of function that can be shown to exist in the biological world. I gave the example of a vat and a jet with a million 1-micron parts, in order that we can have something more amenable to analysis. It may smoke your calculator to try to work it out, but you can work out the number of ways 10^6 parts can be arranged among 10^18 cells. You can then multiply by possible arrangements of orientation etc. (at a simplistic level we can see a cube having six orientations in a grid . . . and for each of those the whole may be different . . .), and types of parts of various kinds etc. All of these will simply add to the point already made. kairosfocus
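{One rough way to 'work it out' without smoking a calculator is to take logarithms. A minimal sketch in Python using the log-gamma function, counting positions only and ignoring orientations and part types, already yields a configurational capacity of tens of millions of bits:}

```python
from math import lgamma, log

def log2_comb(n, k):
    """log2 of the binomial coefficient C(n, k), computed via log-gamma so
    that astronomically large n and k do not overflow."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

cells, parts = 10 ** 18, 10 ** 6
# Ways to place 10^6 identical parts among 10^18 cells, positions only:
print(f"log2 C(10^18, 10^6) is roughly {log2_comb(cells, parts):,.0f} bits")
```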
WD: Did you actually read the part of the post in which I explained by instructive example, of a vat with diffusion vs nanobots? KF kairosfocus
Upright, I hope you feel better having made this contribution.
Yes, I did. Thanks.
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that not an abstraction.
All symbol systems operate in material objects, regardless of their provenance or purpose. Biology is no different. The fact remains that biology requires physical effects to be produced from the translation of recorded information. The cell cannot be organized without it (and Darwinian evolution would not exist). This phenomenon requires a unique set of material conditions, which do not occur anywhere else in the physical world except during the translation of recorded information. And the central feature of those conditions requires a local discontinuity to be instantiated within the system between the medium of the information and the effects produced by its translation. The translation of information during protein synthesis (like any other form of recorded information) is not reducible to the material that makes up the system. It can’t be or the system would not function. Therefore, far from obscuring the issues, it is pointless to talk about the origin of biology without those facts (i.e. the systems material requirements) on the table. Upright BiPed
And there you go: "That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation." You assume that which you need to establish. If you could show that protein functions are "isolated islands" in sequence space, you wouldn't need any of this information stuff to show evolutionary theory as it stands can't explain the biological world. It would be bloody obvious. That's the question you want to get at, but to do that you'd need to do some biochemistry (and not Axe's protein croco-ducks). Anyway, I'm pretty sure we are wasting each other's time now. Nothing will dissuade you from the idea that you are right about this, and I see nothing of interest in your ideas. wd400
WD: The constraint is not the specific probability distribution, until it becomes so biased that it effectively isn't chance anymore but mechanical necessity or programmed necessity -- loading on steroids. The challenge is that you are blindly (needle in haystack) sampling from a very large config space . . . much worse than the toy example given. Where, as the old physicist's joke about two drunks looking for lost contacts under a lamp puts it, after a time A says to B, are you SURE you lost your contacts here? B says, no, they were lost over there in the dark but this is where the light is. This is usually told on a case like why we study ideal gases, or the like instead of more realistic cases. The complexity shoots way up real fast. (E.g. I did a very rough cut approximate calc for just the 50-50 distribution, assuming my tired eyes calc has no material errors, about 2 * 10^147, out of 3.27*10^150, and the nearby 100 or so +/- 50 is going to catch most of the rest. By contrast, we were looking to get one of five possibilities. Effectively, zero.) That puts you in the position where sufficiently isolated zones . . . which FSCO/I is going to be because of functional constraints, are simply too isolated to be credibly hit in a practical situation. But at this point, you are simply turning yourself into yet another study in the art of avoiding facing the material point. which is that there are very good needle in haystack grounds for seeing why special clusters deeply isolated in vast config spaces are not going to credibly come up by blind chance. Essentially for the same reasons why systems free to move at micro level so strongly tend to entropy maximising equilibrium clusters. Maybe I am a glutton for punishment as the Bajans say, I will give a short outline of a case that brings to bear relevant factors, via a closer to reality toy. This updates an example from my note, app 1. We take a 1 cu m vat, with some unspecified fluid, good enough that layering effects a la Boltzmann's atmospheric distribution will not be relevant.
Now, imagine a micro-jet aircraft, made up from some 10^6 one-micron cubical parts that have to be properly arranged for it to be flyable. Decant the parts into the vat, so that 10^6 of the 10^18 one-micron cells are occupied by parts. Diffusion and Brownian motion naturally occur, scattering the parts at random. (a: Why is that?) Now, let us think:

b: Would it be plausible that the parts will ever in our observation reassemble themselves in a clump by chance? ANS: No, as the forces at work are such that the number of scattered arrangements or configurations so vastly exceeds clumped ones that clumping just will not be likely. Much more likely will be small clumps.

c: What if we introduced an army of nanobots that cooperatively work to clump and encapsulate parts? ANS: Much more likely to work, and we can see how the entropy of the parts would now be vastly reduced. But still, the likelihood of a flyable jet would be small, as the number of ways to arrange 1 mn parts vs ways that are flyable will again be in utter disproportion, just not as bad as before.

d: Now, pour in more nanobots, which are programmed to catalogue the clumped parts and rearrange them into a flyable jet according to a blueprint. Would this have a reasonable prospect of success? ANS: Obviously yes, showing the power of intelligent guidance in accord with a design.

e: What if we had more complex nanobots capable of doing both jobs at once? ANS: They could do the job too. And on the nature of entropy, the direct reduction would be similar to the summed reductions to clump and then configure to flyable condition.

f: Does this illustrate that FSCO/I will strongly tend to be deeply isolated in the config space of possible scattered or even clumped parts? ANS: Yes, as though there may be many ways to configure parts to flyable condition, the number of ways parts not constrained by that condition could be arranged is much, much higher. Similarly, the number of ways the 10^6 parts could be scattered at random by diffusive and Brownian-motion forces across the vat's 10^18 cells is even utterly higher yet.

g: So is a monkeys-at-keyboards, or needle-in-haystack, or finding-islands-in-a-great-ocean analogy reasonable? ANS: Obviously yes, as illustrations.

h: What about arrangements of 500 coins in a string? ANS: Any 3-d config can be described by a sufficiently long string that specifies nodes, arcs between them, and orientations and couplings if necessary, as say AutoCAD shows. So, a coded string of sufficient length is informationally equivalent to the parts in the vat. But as we are dealing with 10^6 parts amidst 10^18 possible locations, we have strings much, much longer than 500 bits to contend with. As AutoCAD drawing file sizes tell us. Thus, the 500-coin string thought exercise is a much simpler but directly relevant exercise. And, if the system is such that certain preferred clusters of configs become very probable without something that makes them probable . . . nanobots . . . something is fishy. In effect you are saying that allegedly random diffusive forces are not random: they are carrying out what the nanobots would do based on programming.
I trust the point is clear enough. Organising work demands explanation in terms of forces credibly able to carry such out within the atomic and temporal gamut of the solar system or the observed cosmos. The only such empirically warranted force capable of generating FSCO/I is design. If you wish to deny this, kindly provide empirically grounded warrant. On my side, the very posts in this thread, which are bit-string equivalent, are cases in point. The only credible explanation for a post in English that makes sense in the context of the thread is design. So are the computers we are using, which as we saw WLOG reduce to strings, informationally. And so forth. KF kairosfocus
And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/[(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities; in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72-character string of text in reasonable English. Very large, but absolutely overwhelmed by the nonsense strings in no particular order.) If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step.
It's interesting, isn't it, that all your examples have very specific chance hypotheses associated with them. You'd get different behaviour with different chance mechanisms (skewed distributions with loaded dice, endlessly increasing variance with random walks...). What I don't follow is your leap from these well-defined examples to all chance hypotheses. As I say, I can think of several chance mechanisms that stand a good probability of creating a string equivalent to 500H, a random walk being one such. wd400
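{The binomial figures being traded back and forth here are easy to check exactly. A short sketch (Python 3.8+ for math.comb) shows both how heavily near-50-50 outcomes dominate the space and how small any single prespecified sequence is within it:}

```python
from math import comb

N = 500
space = 2 ** N                                        # about 3.27e150 sequences
exact_half = comb(N, 250)                             # sequences with exactly 250 heads
near_half = sum(comb(N, k) for k in range(200, 301))  # 250 +/- 50 heads

print(f"exactly 250 heads: {exact_half:.3e} sequences "
      f"= {exact_half / space:.3%} of the space")
print(f"200-300 heads    : {near_half / space:.10f} of the space")
print(f"all heads        : 1 sequence = {1 / space:.1e} of the space")
```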
KF: Happy new year to you and to all! :) gpuccio
F/N: BTW, 10^57 atoms flipping coins and having tables to flip them on, would probably just about exhaust the atomic resources of the observed cosmos. KF kairosfocus
A happy new year to you GP and to you all. There have been some very interesting turn of new year discussions here at UD, I link one in the above, too. kairosfocus
wd400: In case you are interested, exactly two years ago I started a long exchange with Elizabeth Liddle about how it is possible to model the neo darwinian algorithm, including both the RV and NS part, and using dFSCI. You can find that discussion here: https://uncommondesc.wpengine.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ It starts more or less at post 137, with my "happy new year" for 2012 (not relevant to the discussion :) ), and goes on for post 223. The most relevant part starts at post 194. In brief, I try to define exactly how the random part of the algorithm can be evaluated in specific hypotheses, how dFSCI can be computed for a specific protein, and how explicit paths of NS would modify the computation. I discuss also the non relevance of genetic drift to the computation of probabilities, which is a point often misunderstood. If you are interested in discussing any aspect of that, I am here. gpuccio
WD: Pardon, but the notion that you must calculate the specific probabilities of a particular hypothesis before being able to see it as implausible is both a now common objecting talking point to the design inference and one that has been a failure from the outset. Whether or no you are independently putting it up, that remains the case. The basic fallacy in it is the underlying assumption that chance can account for anything given enough time and resources. In my student days, a common way to put it by evolutionists was to recite what seems to be an urban legend that in a debate with Bishop Wilberforce (or the like) Huxley gave the example that enough monkeys typing for enough time could come up with the text of Shakespeare's plays, or just that of Hamlet. This has recently been multiplied by the idea that since every individual configuration of 500 tosses of a fair coin is equiprobable, we should be no more surprised to see (i) 500 H (or the same with tails), or (ii) alternating H & T, or (iii) the ascii code for the first 72 characters of this post, as for any other. (NB: This last shows how coin tossing is informationally equivalent to the monkey typing exercise.) Nonsense. What happens is that we have clustering of sets of possible outcomes, so that we are interested above in a set of just five possibilities . . . or something substantially similar. Everything else in the config space of 2^500 ~ 3.27*10^150 possibilities will not be an event from our clustered zone of interest. And as the binomial theorem will quickly tell us, the vast bulk of the set of possibilities will be outcomes near 50-50 H & T, in no particular order . . . three of the five members of our zone of interest are in fact 50-50, but in very particular orders indeed. The exact 50-50 outcomes as a set are something like 500!/ [(250!)^2], and those near 50-50 are going to be likewise very large fractions of the space of possibilities, in aggregate these dominate. (That near 50-50 subset will include essentially the code for every 72 character string of text in reasonable English. Very large but absolutely overwhelmed by the nonsense strings in no particular order.) If you look at the OP, you will see a diagram that illustrates how such peakiness emerges with dice as the number of possibilities goes up, and the video of a Quincunx machine in action shows how an effectively normal curve emerges from a probabilistic chance process with many possibilities for beads to go a step left/right in succession. Notice in this case how after 5,0000 beads, very few are in the far skirts, and yet at each pin it hits, a bead can with 50-50 odds go left or right one step. Clearly -- with the Quincunx, you can WATCH it happen -- there are some net outcomes that are vastly improbable relative to others. Some zones of interest [notice the neat little columns . . . ] are much less likely to be hit by a random chance process than others. Empirical fact, easily seen. (And, the fact that stock returns also fit the pattern pretty well, should give warning about investment schemes that effectively promise the heavens. Unless you have very good internal knowledge and warrant, such schemes should be viewed as too good to be true. But if your name is Bill Gates c. 1980, you already know the profs have nothing to teach you . . . even if very few will believe you. 
[Give me your address to bill you for advice you need to heed financially even if you refuse to heed it on your worldviews commitments, where whether or no you believe it, as Pascal -- a founder of probability theory -- warned so long ago now, you are wagering your soul.]) The 500 coin toy challenge therefore gives us a picture that is reasonable: finding zones of interest in large config spaces. Just like, how Shakespeare's entire corpus is in that space, 72 characters at a time. So, it should be "simple" for our imaginary monkeys flipping coins or sitting at keyboards to hit the right keys or flips, nuh? We shouldn't be surprised then to see the text of Hamlet by happy chance! Rubbish. Patent absurdity. For the very same reason why Bill Gates and co paid programmers to intelligently design their operating systems and office software instead of running banana plantations and running an operation based on millions of monkeys busy at keyboards or at coin flipping. The needle in haystack search challenge makes nonsense of such a notion. (Cf. here earlier in the ID Foundations series on monkeys, keyboards and search space challenges. Please read the onward linked 2009 Abel paper on the universal plausibility bound. This addresses islands of function as implied by the requisites of getting multiple components to be properly matched, fitted together and organised to achieve function. Also, protein synthesis can be seen as a clear example of an algorithmically controlled information based process with Ribosomes as NC assembly machines, here on in context. Ask yourself, if proteins in functional clusters can so readily form and work in Darwin's pond or the like, why then is the cell such a Rube Goldberg-ish contraption, going the looooong way around to do what should be ever so simple? Or, is it a case where this is the type of factory it takes, much as we see in a pharmaceuticals plant that makes in much less elegant and far more resources intensive ways, bioactive compounds. Or, in an aircraft assembly plant.) Let's go back, to my remarks at 21 above to see what happens when we substitute for monkeys the 10^57 atoms of our solar system working as impossibly fast 500-coin flippers [500 coins flipped every 10^-14 s, as fast as fast ionic rxns . . . organic rxns are orders of magnitude slower], for 10^17 s:
The issue is NOT what distribution can we construct and “mathematicise” over. That is irrelevant when we run into FSCO/I — due to the need for the right parts in a proper config to work — forcing small zones in the space of possible configs, and the scope of the config space being such that no search based on atoms being able to sample a fraction appreciably different from zero. Sometimes, there is just too much haystack, and too few, too isolated search resources to have hopes of finding needles. For 500 bits and the gamut of the solar system, we can set up each of 10^57 atoms as a searching observer and give it a string of 500 coins to watch, updating every 10^-14 s, as fast as ionic chem rxns,for 10^17 s . . . a typical lifetime estimate. Impossibly generous, but the result is that the sample to the space of 3.27 * 10^150 possibilities for 500 bits, is as a one straw sample to a cubical haystack 1,000 light years thick, about as fat as our galaxy’s central bulge. Effectively no sample of a size plausibly able to find reasonably rare clusters of configs. Superpose on our galactic neighbourhood and you can predict the result with all but certainty: straw. Doesn’t matter the precise distribution, unless it is in effect not chance at all but a directed search or a programmed necessity. Which would point straight to a design by fine tuning. Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped for magic out of “natural selection” — which is really a distractor as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) — is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror image molecules have the same energy dynamics. A toy example that gives an idea of the challenge is to think of a string of 500 fair coins all H, or alternating H and T or coins with the ASCII code for the first 72 characters of this message. No plausible blind chance process is going to get such in any trial under out observation, with all but certainty. For the overwhelming bulk cluster of outcomes of coin tossing or blindly arrived at configs will be near 50-50 H and T in no particular pattern . . .
Why so much stress on a toy example? Let me cite the clip from Wiki's Infinite Monkeys article in IDF # 11:
These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.
That is, we are dealing with that which is empirically effectively impossible of observation on the gamut of our solar system, or by extension to just 1,000 bits or coins, our observed cosmos. Our ONLY observed cosmos . . . to try to drag in a speculative multiverse at this point is to traipse into philosophy, which means we have a perfect right to demand a full bore comparative difficulties analysis across major families of worldviews, cf. here on in context. Now, in Darwin's little pond or the like, we will have racemic [near 50-50] mixes of various relevant molecules, and a great many others that are not so relevant, plus water molecules that will readily hydrolyse and break up chains. We are dealing with a challenge of functionally specific complex organisation and associated information [FSCO/I] that dwarfs the challenge of aircraft assembly or building a pharmaceuticals plant. That can be readily seen from the biochem flowchart for the metabolic pathways of the cell, e.g. here. The point here is that a complex, functionally specific organised entity has an implicit wiring diagram "blueprint" that can be represented as a chain of coded bit strings. Just ask the makers of AutoCAD. And just ask anyone who has had to do the development of a microprocessor controlled system from the ground up, hard and soft ware, whether he would trust monkeys at keyboards or flipping coins to solve the organisation challenge involved . . . as one such, I readily answer: no, thank you. Worse, the key entity in view includes an additional facility, a code based von Neuman kinematic self replication function. As Paley long ago pointed out in his Ch 2 example that somehow almost always gets omitted in the rush to dismiss his point, a watch that is self replicating is even more evidently a wonderful contrivance than a "simple" time-telling watch. That means -- surprise (NOT) -- the coin flipping monkeys and atoms toy example ALSO covers the origin of such organisation. And so, we begin to see the magnitude of the challenge. Which, Denton aptly summarised:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . . Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated . . . . [Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331.]
In short, it should be clear why the living cell is an information rich system, and why its claimed spontaneous origin -- a hugely contingent process -- falls under the coin-flipping exercise challenge. And if you imagine that I exaggerate the degree to which the various schools of thought on OOL have deadlocked to mutual ruin [and if you imagine you can dismiss me as wanting to get to a living cell in one step . . . ], let me cite an exchange from a few years back between two major spokesmen for the genes- first and the metabolism- first schools:
[Shapiro:] RNA's building blocks, nucleotides contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
In short, RNA components can be hard to synthesise, and chains are vulnerable to hydrolysis, through which water molecules split up the chain. Worse, to get them to functionally chain to relevant lengths and code for, then join up with, functional proteins on say a clay surface, then inside a suitable protective membrane, is problematic. And, it is similarly problematic to get the clusters of functional bio-molecules to carry out typical metabolic processes. No wonder, then, that science writer Richard Robinson has noted that neither main model -- despite what we may sometimes read to the contrary in more popular writings or see and hear in "science news" articles (or even, sadly, textbooks) -- is "robust." The "simple" cell ain't, and the "simple" pathway up from an autocatalytic reaction set or some spontaneously formed RNA or some cometary debris etc. isn't. Functionally specific, complex organisation and associated information are not going to be had on the cheap. Just ask Bill Gates. There is no free lunch. And, that is why in the book of that name, Wm A Dembski, UD's founder, went on record:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ”
There is no free lunch WD, and no chance hypothesis that genuinely is a chance driven hyp is going to do better than the 10^57 500-coin flipping atoms of our solar system at it for 10^17 s. Such an ideal case is woefully inadequate to sample better than one straw to a cubical haystack 1,000 LY across, of the 3.27*10^150 possibilities for just 500 coins, 72 or so ASCII characters worth of info. Which as an experienced programmer you know can do very little by itself. The genomes for living forms start out at 100 - 1,000 kbits, and easily go on into the billions, with major body plans requiring increments of 10 - 100+ billions, dozens of times over. The resources to do that incrementally in realistic populations just are not there, which is the message of the 10^57 coin-flipping atoms. So, you need to squarely face the implications of Lewontin's a priori materialism, and how it biases the ability to address the real challenge of origins of the world of life:
the problem is to get [the general public] to reject irrational and supernatural explanations [--> note the implicit bias, polarising rhetoric and refusal to address the real alternative posed by design theory, assessing natural (= chance and/or necessity) vs ART-ificial alternative causes on empirically tested reliable signs] of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997. if you think this is "quote mined," I suggest you read the fuller cite and notes here.]
Philip Johnson's retort in Nov that year was richly deserved:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
WD, please, go flip 500 coins for a bit and see what happens, watch the Quincunx in action, and then ponder the thought exercise of 10^57 coin-flipping atoms. Then, ponder what that is telling us about the credible -- and only observed -- source of FSCO/I, intelligence. Then, reflect on the FSCO/I in the living cell and in the many complex body plans for life forms, and ask yourself whether you REALLY have empirical observational evidence that backs the claim that blind chance and mechanical necessity suffice to explain what you see, and why. Enjoy the new year. KF kairosfocus
Yes. My point was you don't need to calculate a precise number but you do have to consider the specific chance hypothesis. The probability of a given protein function is very different under evolutionary mechanisms than by atoms bumping into each other, surely? wd400
wd400: I said nothing about CSI or evolution with natural selection. You said you don't see how one can make a claim about the plausibility of a thing unless they can compute the probability. My example was to show that it is indeed possible to know a thing is implausible without formally computing the probability. If you found the verbatim text of War and Peace written out in molecules in the DNA of an ancient ant embalmed in amber (tip of the hat to Dr. Liddle), is it a plausible hypothesis that it got there by any non-intelligent process? Can you compute the probabilities? Knowing the formal probability is not necessary to reach a reasonable finding of the plausibility of a hypothesis. William J Murray
It's certainly true that you have to find the right frame in which to study questions in biology. You'd get nowhere trying to study ecological interactions if you started with molecular biology. The point I didn't quite make is that very often physicists/engineers/software people coming to biology mistake the map for the territory. As I say, if you want to study the origin of life then you need to do chemistry. wd400
WD400 It seems to me that the root of biology is basically abstract chemistry (protein synthesis, etc.), and the resulting ecology and economy of nature that emerges is abstract. How can you do biology and avoid abstractions? littlejohn
Sal, I write software myself, and have long noted the over-representation of IT folks in ID circles (along with engineers). Biology is always beset by people from other fields who think their own field, be it IT, engineering or physics, provides the best way to do biology. Very rarely, these people learn enough biology to make a genuine contribution. I've yet to see an IDist ITer who falls into that category. We won't agree on that, which is fine, but I prefer to focus on the science rather than the abstraction. Upright, I hope you feel better having made this contribution. William Murray, I'm not someone who cares that CSI conditioned on evolutionary processes can't be calculated. And I can't calculate the specific probability that "just physics" would create a laptop screen. The point is, and it is hard to imagine I'm still trying to make this, you need to know precisely what the chance hypothesis is. If it's evolution by mutation, selection, drift, speciation and the like, then it's a different question than atoms bumping into each other. wd400
wd400 says:
I really don’t see how you can assess the plausibility of a hypothesis without specifically calculating the probability of that hypothesis.
Can you assess whether or not it is plausible that the molecule configuration you are looking at right now (the configuration of the molecules that make up the pixels in your viewscreen) was not generated by an unseen, intelligent agent, but rather was generated by chance (undirected) interactions of chemical properties according to physics? Answer: Yes, you can make such an assessment: it is not plausible. Question 2: Can you specifically calculate the probability that this configuration of molecules could have been generated by chance interactions of chemical properties according to physics? No? Hmmm. William J Murray
WD,
I really think talk of symbols and information almost always obscures rather than helps in these cases.
It's 8:00 o'clock in my time zone, on New Year's Eve. I have some obscure old jazz on the box; the house looks like an amalgamation of a family deli and a package store, and the Mrs is floating around the house to the music as guests arrive at the door. I obviously don't have time at the moment to address your comment, but suffice it to say, with all due respect to you, you simply have not studied the issue to the level required to comment on it. Had you done so, you would never have said what you just said. Happy New Year to you and yours. Upright BiPed
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that not an abstraction.
I respect that you feel that way, and that highlights a conflict between the ID and non-ID camps that is not just metaphysical. ID proponents are disproportionately individuals in the IT industry. They see life as an information-processing, software-intensive system. Developmental mechanisms, DNA translation, and regulation are information intensive. Yes, physics and chemistry are involved, just as physics and chemistry are involved in the hardware of a computer, but the software doesn't come from chance and law mechanisms; it comes from intelligence. DNA software is critical to making proteins, and proteins are critical to making DNA, but that becomes a chicken-and-egg problem. The OOL problem is one of building both the hardware and software simultaneously, before chemical degradation blows apart the precursors.
We are talking about biology (and chemistry), so we should focus on that not an abstraction
We could do that too, and it shows the expected evolution of the chemicals is away from a living organism, not toward one. We look at any dead creature, if it's not devoured and decomposed by other creatures, the experimental and observational expectation is the collection of dead parts will become even more dead over time -- less likely to ever become living again. If that happens with organisms that were alive, how much more will life not arise in a pre-biotic soup. Real chemical evolution is toward death. Like 500 fair coins heads, the origin of life seems to be at variance with theoretical expectation from basic chemistry and physics. We might ascribe expectation to a chance process or whatever, but OOL seems deeply at variance with expectation. It doesn't mean life is impossible any more than all-coins-heads is impossible, it just doesn't seem consistent with expectation of a mindless process. scordova
I really think talk of symbols and information almost always obscures rather than helps in these cases. We are talking about biology (and chemistry), so we should focus on that, not an abstraction. The questions for the OOL are about whether metabolic processes can start spontaneously, under what conditions self-replication arises, and what reactions can give rise to "precursor" molecules. We should consider those questions (which, of course, remain largely unanswered). wd400
@ KF A very helpful post indeed. It's amazing how even simple concepts are made outrageously esoteric in the defense of Darwin. Yeesh! Optimus
Selection won't work for OOL. Symbolic information processing must, as a matter of principle, be decoupled from physics and chemistry, much like the symbolism of heads/tails is decoupled from mechanical considerations. That's exactly why all heads stands out as a design pattern: it violates experimental expectation. Symbolic organization in the first life will also violate experimental expectation from a prebiotic soup, on both theoretical and empirical grounds (i.e. dead dogs stay dead dogs). Even granting, for the sake of argument, that Darwinian selection actually works as advertised, it cannot solve the OOL problem, which is quite severe. Of course, our estimates of distribution could be wrong, but what's wrong with a falsifiable hypothesis? That's a good thing. scordova
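To put a number on the "experimental expectation" being discussed here, the coin arithmetic can be run directly. This is a minimal sketch, not anything posted by the commenters; the only inputs are the 500 fair coins used as the thread's example:

```python
from math import comb

N = 500                        # number of fair coins, as in the thread's example
p_all_heads = 0.5 ** N         # probability of the one specific all-heads outcome
print(f"P(500 heads) = {p_all_heads:.3e}")           # ~ 3.055e-151

# Probability the head count lands within +/- 3 standard deviations of 250,
# i.e. the unremarkable "near 50-50, no particular pattern" bulk of outcomes.
sigma = (N * 0.25) ** 0.5                            # ~ 11.18
lo, hi = int(250 - 3 * sigma), int(250 + 3 * sigma)
p_bulk = sum(comb(N, k) for k in range(lo, hi + 1)) * 0.5 ** N
print(f"P({lo} <= heads <= {hi}) = {p_bulk:.4f}")    # ~ 0.997
```

Any particular 500-flip string is, of course, equally improbable; the point being illustrated is that the all-heads pattern sits in a tiny, independently specifiable target, while "roughly 50-50 in no particular order" covers almost the whole distribution.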
I'm not repeating talking points. I really don't see how you can assess the plausibility of a hypothesis without specifically calculating the probability of that hypothesis. I can think of many chance processes that create a 500 H or T sequence of coins (or equivalent). You seem to be saying no chance hypothesis could ever explain that? You also, as far as I can tell from all the "->" business, fail to grasp that natural selection makes the available sequence space much smaller than the theoretical one. wd400
PS: And simply repeating talking points about all chance hyps is not going to make the challenge go away: your chance-based search process, no matter how powerful, is not going to beat 10^57 observers updating every 10^-14 s for 10^17 s, so it cannot account for even the finding of islands of function in a space of configs for 500 bits, a toy-sized space compared to that for a genome of 100,000 bases or increments of 10 - 100+ mn bases. If anyone has been "ignoring," WD, it is you. kairosfocus
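The arithmetic behind the figures KF cites can be laid out in a few lines. This is only a sketch of the numbers as stated in the thread; the one-cubic-centimetre straw used for the haystack comparison is my own illustrative assumption:

```python
from math import log10

atoms        = 1e57          # observer count used for the solar system
update_rate  = 1e14          # observations per second (one every 10^-14 s)
duration_s   = 1e17          # the lifetime estimate used, in seconds
observations = atoms * update_rate * duration_s      # 1e88 samples in total

space = 2 ** 500                                     # ~ 3.27e150 configurations of 500 bits
fraction = observations / space
print(f"samples          : 1e{log10(observations):.0f}")
print(f"config space     : ~{space:.3e}")
print(f"fraction sampled : {fraction:.2e}")          # ~ 3e-63

# If that fraction corresponds to one straw out of a haystack, how big is the stack?
straw_m3 = 1e-6                                      # assume a straw occupies ~1 cm^3
haystack_m3 = straw_m3 / fraction
side_m = haystack_m3 ** (1 / 3)
print(f"haystack side    : {side_m / 9.46e15:.0f} light years")
```

With that straw assumption the cube works out to roughly 700 light years on a side, the same order of magnitude as the 1,000-light-year haystack KF describes in his longer comment below.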
WD: Did you observe the following remarks just above?
Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped-for magic out of “natural selection” — which is really a distractor, as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) — is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror image molecules have the same energy dynamics . . . . And when it comes to body plans, we should note that to get to such we are going to need jumps of 10 – 100+ million bits of just genetic info, as we can see from genome sizes and reasonable estimates alike. The notion that there is a smoothly varying incremental path from a unicellular ancestor to the branches of the tree of life is not an empirically warranted explanation based on demonstrated capacity, but an ideological a priori demand. Just as one illustration, the incrementalist position would logically imply that transitional forms would utterly dominate life forms, and after observing billions of fossils in the ground, with millions taken as samples and over 250,000 fossil species, the gaps Darwin was embarrassed by are still there, stronger than ever. And, there is no credible observed evidence that blind chance and mechanical necessity on the gamut of our solar system can write even 500 bits of code. For our observed cosmos, move up to 1,000 bits. The only empirically warranted source of such a level of code as 10 – 1,000 mn bits is design. And code is an expression of symbolic language towards a purpose, all of which are strong indicators of design.
Kindly explain to me how these constitute IGNORING "natural selection." To highlight:
1 --> In the warm pond or the like, until you have encapsulation and gating, diffusion and cross-reactions will break up reaction sets.
2 --> Want of homochirality will break up the key-lock fitting.
3 --> Until you have a metabolic automaton joined to a code-based self-replicating entity within the encapsulated system, you do not have cell-based life. And the speculative models mutually ruin one another. That is why OOL is a matter of empty just-so stories at the popular level and deepening crisis at the technical level.
4 --> No self-replication, no reproduction, and so no differential reproductive success leading to subtracting out the less fit varieties.
5 --> There is a tendency to reify "natural selection" and treat it as if it has creative powers. This is an error: that part of the Darwinian model SUBTRACTS info, it does not add it. Differential reproductive success leads to REMOVAL of the less fit.
6 --> Let's write this as an expression:
a: chance variation (CV)
b: LESS the less reproductively successful varieties (LRSV)
c: Gives incremental descent with modification (IDWM = micro evo)
d: Which goes to a branching tree pattern of diversification (BTPD)
e: Which accumulates as macro evo (Macro Evo)
7 --> That is: CV - LRSV --> IDWM = Micro Evo --> BTPD --> Macro Evo
8 --> As the minus sign emphasises, the ONLY adder of info and organisation is CV. And, on empirical studies, the steps are up to maybe 6 - 7 bases.
9 --> Blend in reasonable generation times, mutation rates, population sizes etc., and very modest changes will easily require hundreds of millions of years. We have 500 - 600 MY or so since the Cambrian. And if fossil dates are taken, we have several MY to a few dozen MY to account for HUGE body plan changes.
10 --> And that is assuming a smooth incremental path, so that incremental transitions do the job. The evidence is missing, and there is reason to believe that body plans exist in islands of function.
11 --> If you doubt this, simply think of the many hundreds of protein fold domains that have only one or a few members and are locked away in islands in amino acid chain space. That is the first building brick towards a new body plan.
So, what we plainly have is a mechanism that might explain minor adaptations, forced to punch far above its weight because of the a priori materialism imposed under the guise of a modest methodological requirement. And no, I have definitely not "ignored" natural selection. KF kairosfocus
Well, you seem to be ignoring natural selection, focusing on the origin of life and saying that functional proteins wouldn't fall out of a "prebiotic soup" as a result of amino acids bumping into each other. But no one (that I know of) thinks that would happen. So what's the point? If we want to test the plausibility of a particular origin-of-life scenario we need to understand that particular hypothesis; this 500-bit business isn't going to account for all "chance" hypotheses. wd400
C & Q: You are technically right, but in fact the list of sources as given was the direct source of the sets of digits. D and E were constructed: D notionally, E based on the Fibonacci series. You would probably have to get 10^22 or so digits of pi to be fairly sure that you would catch a given 21-digit number, and I think you would need a supercomputer to do the search. KF kairosfocus
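That "10^22 or so digits" estimate can be sanity-checked with a back-of-envelope waiting-time calculation. A sketch only: it treats digit positions as independent trials, which is good enough for an order-of-magnitude check:

```python
from math import expm1

k = 21                  # length of the target digit string
p = 10.0 ** -k          # chance a given position starts a match
for n_digits in (1e21, 1e22, 1e23):
    # P(at least one match among n_digits positions) ~ 1 - exp(-n * p)
    p_hit = -expm1(-n_digits * p)
    print(f"{n_digits:.0e} digits -> P(specific 21-digit string appears) ~ {p_hit:.4f}")
```

At 10^21 digits a specific 21-digit block has only about a 63% chance of having shown up, while 10^22 digits pushes that above 99.99%, which matches the "fairly sure" estimate.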
WD: The issue at stake first is what "chance" means. The OP answers, using dice as an illustration of one type. Quantum sources are also mentioned. The matter is then extended to an illustrative chance mutation scenario. Then the issue of searching config spaces comes in. I get the feeling this is no longer a familiar topic.

The issue is NOT what distribution we can construct and "mathematicise" over. That is irrelevant when we run into FSCO/I -- due to the need for the right parts in a proper config to work -- forcing functional configs into small zones in the space of possible configs, with the scope of that space being such that no search based on atoms is able to sample a fraction appreciably different from zero. Sometimes there is just too much haystack, and too few, too isolated search resources, to have hopes of finding needles.

For 500 bits and the gamut of the solar system, we can set up each of 10^57 atoms as a searching observer and give it a string of 500 coins to watch, updating every 10^-14 s, as fast as ionic chem rxns, for 10^17 s . . . a typical lifetime estimate. Impossibly generous, but the result is that the sample, relative to the space of 3.27 * 10^150 possibilities for 500 bits, is as a one-straw sample to a cubical haystack 1,000 light years thick, about as fat as our galaxy's central bulge. Effectively, no sample of a size plausibly able to find reasonably rare clusters of configs. Superpose such a haystack on our galactic neighbourhood and you can predict the result with all but certainty: straw. The precise distribution doesn't matter, unless it is in effect not chance at all but a directed search or a programmed necessity. Which would point straight to design by fine tuning.

Remember, the first context for this is a warm pond with some organic precursors in it or the like, operating on known forces of thermodynamics (esp. diffusion and Brownian motion), and known chemistry and physics. No, the hoped-for magic out of "natural selection" -- which is really a distractor, as chance is the only actual candidate to write genetic code (differential reproductive success REMOVES info, the less successful varieties) -- is off the table. For, one of the things to be accounted for is exactly the self-replicating facility to be joined to a gated encapsulation and a metabolic automaton based on complex functionally specific molecular nanomachines. Hundreds of them, and in a context of key-lock fitting that needs homochirality. Which thermodynamics is not going to give us easily: mirror image molecules have the same energy dynamics.

A toy example that gives an idea of the challenge is to think of a string of 500 fair coins all H, or alternating H and T, or coins with the ASCII code for the first 72 characters of this message. No plausible blind chance process is going to get such in any trial under our observation, with all but certainty. For the overwhelming bulk cluster of outcomes of coin tossing or blindly arrived at configs will be near 50-50 H and T in no particular pattern.

All of this and more has been repeatedly pointed out, but we must not underestimate the blinding power of an a priori ideology that demands that something much harder than this MUST have happened to get the ball rolling for life, wraps that in the lab coat, and demands that the only acceptable explanations will be those that start from blind chance and mechanical necessity.
And when it comes to body plans, we should note that to get to such we are going to need jumps of 10 - 100+ million bits of just genetic info, as we can see from genome sizes and reasonable estimates alike. The notion that there is a smoothly varying incremental path from a unicellular ancestor to the branches of the tree of life is not an empirically warranted explanation based on demonstrated capacity, but an ideological a priori demand. Just as one illustration, the incrementalist position would logically imply that transitional forms would utterly dominate life forms; yet after observing billions of fossils in the ground, with millions taken as samples and over 250,000 fossil species, the gaps Darwin was embarrassed by are still there, stronger than ever.

And, there is no credible observed evidence that blind chance and mechanical necessity on the gamut of our solar system can write even 500 bits of code. For our observed cosmos, move up to 1,000 bits. The only empirically warranted source of such a level of code as 10 - 1,000 mn bits is design. And code is an expression of symbolic language towards a purpose, all of which are strong indicators of design. Save to those locked up in an ideological system that locks such out a priori.

And of course all of this has been pointed out over and over and over, with reasons, days and weeks at a time, again and again. But if there is an ideological lockout, there is a lockout. No amount of evidence or reasoning will shift that, only coming to a point where there is a systemic collapse that makes it obvious this is a sinking ship. How do I know that? History. The analysis that showed how Marxist central planning would fail was done in the 1920s. It was fended off and dismissed until the system collapsed in the late 1980s. But it was important for some despised few to stand their ground for 60 long years. In an info age, sooner or later enough will wake up to the source of FSCO/I to make the system collapse. We just have to stand our ground and point to the fallacies again and again until the break-point happens.

And that is the context, WD, in which I took time to point out that, whatever the obfuscatory rhetorical ink clouds that may be spewed to cloud the issue, what is meant by chance in the design inference is fairly simple, and not at all a strained or dubious notion. KF kairosfocus
Hmm. This post seems to say precisely that "chance", left hanging by itself, is too vague and imprecise a term to describe a cause. As you say, we can test a chance explanation for a series of dice rolls, but only if we test a specific chance hypothesis (a fair die, rolled not placed). The post that started all that wasted energy and cross talk about "chance as a cause" very specifically didn't present a specific chance hypothesis. More to the point, what's the appropriate probability distribution for, say, the evolution of a particular amino acid sequence given mutation, drift and natural selection? If you don't have that, then how can you reject the "chance" hypothesis? wd400
Querius, It is probably true that pi, and in fact most numbers, contain all possible numeric sequences. It's not possible to prove it, though. wd400
Haha, good one Cantor! But can you actually prove mathematically that ALL numeric sequences can be found in pi (versus other types of random numbers)? Think cryptography. Just wondering. ;-) -Q Querius
Which of these is pi digits...
Correct answer: A thru E are pi digits cantor
Not planned, not controlled, sometimes not controllable (by us). kairosfocus
What is chance? Chance is nothing more than happenstance, accidental, i.e., not planned. Joe
Box, I added the vid. BA: There are ever so many fascinating twists and turns out there indeed. I am however here trying to nail down a shingle on a fog bank so to speak. I am thinking cavils may need to go on that growing list of Darwinist debate tactics and fallacies here. And of course, tricks with chance and necessity. KF kairosfocus
kf, I certainly don't want to take anything away from the explanatory filter. Nor do I take lightly the concerted effort at obfuscation that Darwinists continually employ towards the word 'chance' (and everything else in the debate for that matter). I just thought that you would appreciate the subtle, but important, distinction that is to be found between the random entropic events of the universe and the 'unbounded' randomness in quantum mechanics that results as a consequence of our 'free will' choice as to how we choose to consciously observe an event.,, Personally, although many i's have to be dotted and t's crossed to further delineate the two, I found the distinction between the two types of 'chance', uncovered thus far, to be fascinating. bornagain77
Box: That could work in many cases -- especially if you include an indefinite number of uncontrolled perturbing events, but the more effective way is the direct empirical one: set up quite similar initial circumstances and see how the results come out. Drop a die in the "same" position from a holder at a given height and place over a surface 500 times, and see the result. (Or try a Quincunx or Galton Board machine that simulates the Normal distribution, cf video. Notice the ideal model and the stock distribution histogram.) Chance in action. KF kairosfocus
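For readers without access to the physical machine, the same kind of contingency can be seen in a few lines of simulation. This is a minimal sketch of an idealised Galton board: each bead makes a fair left/right "decision" at every pin, which real boards only approximate:

```python
import random
from collections import Counter

ROWS, BEADS = 12, 2000           # pins per path and number of beads dropped
random.seed(1)                   # fixed seed so the run is repeatable

# A bead's final bin is simply the number of rightward bounces out of ROWS.
bins = Counter(sum(random.random() < 0.5 for _ in range(ROWS)) for _ in range(BEADS))

for slot in range(ROWS + 1):
    print(f"{slot:2d} | {'#' * (bins.get(slot, 0) // 10)}")
```

The text histogram piles up around the middle bins in the familiar bell shape: each individual bead's path is unpredictable, but the distribution is highly stable, which is exactly the point being made about chance in action.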
Thx KF, One more question: I understand contingent in this context as "depending on unknown events / conditions". Do you agree? Box
PPS: Which of these is pi digits, which sky noise, which phone numbers (admittedly the local phone directory is a bit on the scanty side), and why does the pattern stand out so clearly at D and at E: A: 821051141354735739523 B: 733615329964125325790 C: 698312217625358227195 D: 123409876135791113151 E: 113581321345589146235 (Ans: C -- sky, A - pi, B - phone, last 2 digits each of line codes.) kairosfocus
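The pi-as-random-number-table idea behind this puzzle (and spelled out in KF's longer comment further down) is easy to try for oneself. Below is a sketch that generates the digits locally with Gibbons' unbounded spigot algorithm, chops them into two-digit blocks, and tallies the blocks by decade; the near-flat tally is the practical sense in which such blocks serve as workable random numbers, even though pi itself is anything but arbitrary:

```python
from collections import Counter
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

digits = list(islice(pi_digits(), 2000))     # 3, 1, 4, 1, 5, 9, ...
blocks = [10 * a + b for a, b in zip(digits[0::2], digits[1::2])]   # 1000 two-digit blocks
decades = Counter(b // 10 for b in blocks)   # tally blocks falling in 00-09, 10-19, ...

for d in range(10):
    print(f"{10 * d:02d}-{10 * d + 9:02d}: {decades.get(d, 0):3d}")
```

Each decade comes out close to the expected 100 blocks; the same exercise on phone-book line codes or sky noise would give a similarly flat tally, which is why the strings above are so hard to tell apart by eye.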
Box, low and high. Low (or ideally no) contingency, high contingency in context. KF kairosfocus
PS: I should note that chance variations or mutations can be triggered by radioactivity. An alpha particle ionises water molecules, triggering interference with the genes by processes that are accidental and uncorrelated. Non-foresighted variation results. A gene changes in an uncontrolled way through the resulting chemical reaction. Suppose the creature is not killed by being hit by a large dose. (Radiation Physics was a gruesome course.) It has a hereditable variation. That feeds into the gene pool, again with all sorts of uncontrolled factors. A new variety pops up. Somehow, in some env't, it is slightly advantageous. Less advantaged varieties are then outcompeted, and the population is modified. Continue long enough and voila, tree of life. Big problems. The only actual adder of info was chance. The natural selection part is really culling out of the disadvantaged for whatever reason. Mods do happen, but with the scope of searches for new body plans etc, 10 - 100+ mn bits, with reasonable pops, mut rates, reproduction rates and the step limit warranted empirically of 6 - 7 co-ordinated muts at one go, we do not have enough time, population or maybe even atoms to sufficiently explore the space of possibilities to give a plausible chance of getting to novel islands of function. But of course, this is hotly denied by the Darwinists. They need to provide empirical, observation-backed answers, and not beg questions by imposing a priori materialism. Starting with OOL. (You will remember my year-long tree of life challenge, which does not have a really solid attempt, even though I put something together from various remarks. And OOL is the ROOT of the Darwinist tree of life.) KF kairosfocus
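The "step limit" point can be put in rough numbers. This is only a sketch under loudly labelled assumptions of mine, not KF's: a per-site, per-generation point mutation rate of about 1e-8 (a commonly quoted order of magnitude), and the simplifying premise that all k specific changes must arise together in one offspring before any of them is selectable:

```python
mu = 1e-8                       # assumed per-site, per-generation mutation rate
for k in (2, 4, 6, 7):
    p = mu ** k                 # chance one offspring carries all k specific changes at once
    print(f"k = {k}: p ~ {p:.0e}  ->  ~{1/p:.0e} births expected per occurrence")
```

Even for k = 6 the expected waiting population is around 10^48 births; that is the kind of arithmetic behind the claim that only a handful of coordinated changes are reachable in one go, and relaxing the "all at once" premise is, of course, exactly what selection-based replies to this argument do.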
In the flowchart: what do the abbreviations "Lo" and "Hi" mean? Box
BA77: Fair enough to raise such issues and concerns. Our problem is, however, that we are dealing with determined, sometimes ruthless, zero-concession objectors and onlookers tossed into confusion by clouds of rhetorical squid ink. So we have to start with the familiar, get things clear, and build out from there. As used by the man in the street, the common relevant meaning of chance is what you get from fair dice or coins or things somewhat like that, or else by accident, what in my native land we call "buck ups." (One can even have children by "buck ups." That is, unplanned and obviously uncontrolled events. Here, the talk is "drive carefully, most people are the result of accidents.") A good example is that it is possible to use the phone book as a random number table, based on the lack of correlation between names, area of residence and line codes. Each of these is separately not mere happenstance at all, but because they lack correlation, the resulting pattern is sufficiently random for this to work. The same obtains for the lack of correlation between the decimal place value system and the ratio of circumference to diameter for a circle, leading to how one can use blocks of digits of pi from a million-digit value to give effectively random numbers. And so forth. We can then take this up to the thermodynamic level and the Maxwell-Boltzmann distribution, as I do in my discussion here, app 1 of my usual linked note (which is in effect the implicit context for every comment I have ever made at UD):
___________ >> f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::::::::::::::||
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid, think of them as scattered more or less hap-hazardly as would happen in a cake).
2: Now, let the marbles all be at rest to begin with.
3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].
4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.
5: As the glancing angles on collision will vary at random, the marbles hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.
6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.
7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue. (That is, the process is ergodic.)
8: A pressure would be exerted on the walls of the box by the average force per unit area from collisions of marbles bouncing off the walls, and this would be increased by pushing in the left or right walls (which would do work to push in against the pressure, naturally increasing the speed of the marbles just like a ball has its speed increased when it is hit by a bat going the other way, whether cricket or baseball). Pressure rises if volume goes down due to compression. (Also, the volume of a gas body is not fixed.)
9: Temperature emerges as a measure of the average random kinetic energy of the marbles in any given direction, left, right, to us or away from us. Compressing the model gas does work on it, so the internal energy rises, as the average random kinetic energy per degree of freedom rises. Compression will tend to raise temperature. (We could actually deduce the classical — empirical — P, V, T gas laws [and variants] from this sort of model.)
10: Thus, from the implications of classical, Newtonian physics, we soon see the hard little marbles moving at random, and how that randomness gives rise to gas-like behaviour. It also shows how there is a natural tendency for systems to move from more orderly to more disorderly states, i.e. we see the outlines of the second law of thermodynamics.
11: Is the motion really random? First, we define randomness in the relevant sense:
In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated.
12: This can be seen by the extension of the thought experiment of imagining a large collection of more or less identically set up boxes, each given the same push at the same time, as closely as we can make it. At first, the marbles in the boxes will behave very much alike, but soon, they will begin to diverge as to path. The same overall pattern of M-B statistics will happen, but each box will soon be going its own way. That is, the distribution pattern is the same but the specific behaviour in each case will be dramatically different.
13: Q: Why?
14: A: This is because tiny, tiny differences between the boxes, and the differences in the vibrating atoms in the walls and pistons, as well as tiny irregularities too small to notice in the walls and pistons, will make small differences in initial and intervening states -- perfectly smooth boxes and pistons are an unattainable ideal. Since the system is extremely nonlinear, such small differences will be amplified, making the behaviour diverge as time unfolds. A chaotic system is not predictable in the long term. So, while we can deduce a probabilistic distribution, we cannot predict the behaviour in detail, across time. Laplace's demon, who hoped to predict the future of the universe from the covering laws and the initial conditions, is out of a job . . . >> ___________
So, chance and randomness enter even before we get to the quantum level, and they lead to entropy as a measure of, in effect, the lack of information about the micro state (equivalently, the number of micro-level degrees of freedom) consistent with gross, lab-level macro conditions. (This is the context of S = k log W, and of the perception that entropy often is an index of degree of disorder, though of course there are ever so many subtleties and surprises involved, so that is over-simplified.)

When we get to quantum level phenomena, stochastic distributions of unknown root crop up everywhere. The nice crisp orbits of electrons fuzz out into probabilistically distributed orbitals. Potential barriers too high for classical cases can be tunnelled [NB there is a wave optics case on frustration of total internal reflection . . . ], etc. For this case, I use the alpha emission radioactivity case, as it is a classic and gives rise to a pretty easily observed macro effect, as we count with Geiger counters or watch with ZnS scintillation screens etc. The random tiny greenish flashes are unforgettable. The counter chatter too.

The relevance of these is that we then see that chance is the inferred cause of highly contingent outcomes that tend to follow what we would expect from appropriate stochastic models. And, as a result, when the outcomes are sufficiently complex and especially functionally specific, to the point where we have deeply isolated islands of function in large config spaces, we may not be able to get a big enough sample to make it reasonable to expect to hit such islands blindly. But routinely -- e.g. posts in this thread -- designers using intelligence do so. That marks a pretty sharp distinction and a reliable sign of design. Which gets us back to the reason the explanatory filter works. KF kairosfocus
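The "attractor" behaviour in the marble-box model quoted above can be imitated with a much simpler stand-in. The sketch below is not a hard-sphere gas simulation; it is the random pairwise kinetic-exchange toy model, which relaxes to an exponential, Boltzmann-like energy distribution from essentially any starting condition, which is the one point being illustrated here:

```python
import random

random.seed(0)
N, STEPS = 5000, 200_000
energy = [1.0] * N                      # start perfectly "ordered": everyone identical

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pool = energy[i] + energy[j]        # "collide": pool the two energies...
    split = random.random()
    energy[i], energy[j] = split * pool, (1 - split) * pool   # ...and re-divide at random

# Histogram of energies: a decaying, exponential-like shape, whatever the start was.
bins = [0] * 10
for e in energy:
    bins[min(int(e / 0.5), 9)] += 1
for b, count in enumerate(bins):
    print(f"{b * 0.5:3.1f}-{(b + 1) * 0.5:3.1f}: {'#' * (count // 50)}")
```

Starting instead from random energies, or with all the energy concentrated in a few agents, gives much the same decaying histogram after enough exchanges: the individual trajectories are unpredictable, but the distribution they settle into is not, which is the sense in which the distribution rather than the detail is the predictable thing.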
But why should the random entropic events of the universe care if and when I decide to observe a particle if, as Darwinists hold, I'm supposed to be the result of random entropic events in the first place? The following experiment goes even further in differentiating the entropic randomness of the space-time of the universe from the free will randomness found in quantum mechanics. It is also very good at highlighting just how deeply the deterministic, no-free-will, materialistic view of reality has been undermined by quantum mechanics.,, Here’s a recent variation of Wheeler’s Delayed Choice experiment, which highlights the ability of the conscious observer to effect ‘spooky action into the past’. Furthermore, in the following experiment, the claim that past material states determine future conscious choices (materialistic determinism) is directly falsified by the fact that present conscious choices are in fact affecting past material states:
Quantum physics mimics spooky action into the past – April 23, 2012 Excerpt: The authors experimentally realized a “Gedankenexperiment” called “delayed-choice entanglement swapping”, formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice’s and Bob’s photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice’s and Bob’s photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor’s choice and measurement with respect to the measurements which Alice and Bob perform on their photons. “We found that whether Alice’s and Bob’s photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured”, explains Xiao-song Ma, lead author of the study. According to the famous words of Albert Einstein, the effects of quantum entanglement appear as “spooky action at a distance”. The recent experiment has gone one remarkable step further. “Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events”, says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happen to be in in the past (deterministic), how in blue blazes are my free will choices instantaneously affecting the state of material particles into the past? The preceding experiment is simply impossible on a materialistic/deterministic view of reality!,,, I consider the preceding experimental evidence to be a vast improvement over the traditional ‘uncertainty’ argument for free will, from quantum mechanics, that had been used for decades to undermine the deterministic belief of materialists:
Why Quantum Physics (Uncertainty) Ends the Free Will Debate – Michio Kaku – video http://www.youtube.com/watch?v=lFLR5vNKiSw
Of related note as to free will and the creation of new information (of note: neo-Darwinian processes have yet to demonstrate the origination of new information!)
Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf
Of important note as to how almighty God exercises His free will in all of this:
BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that “nothing” is unstable, as Mr. Hawking and others maintain. Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse – where it is alleged that anything can spontaneously jump into existence without cause – produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. Universes do not “spontaneously create” on the basis of abstract mathematical descriptions, nor does the fantasy of a limitless multiverse trump the explanatory power of transcendent intelligent design. What Mr. Hawking’s contrary assertions show is that mathematical savants can sometimes be metaphysical simpletons. Caveat emptor. per Washington Times The Absurdity of Inflation, String Theory and The Multiverse – Dr. Bruce Gordon – video http://vimeo.com/34468027
Here is the last power-point slide of the preceding video:
The End Of Materialism? * In the multiverse, anything can happen for no reason at all. * In other words, the materialist is forced to believe in random miracles as a explanatory principle. * In a Theistic universe, nothing happens without a reason. Miracles are therefore intelligently directed deviations from divinely maintained regularities, and are thus expressions of rational purpose. * Scientific materialism is (therefore) epistemically self defeating: it makes scientific rationality impossible.
Supplemental note: finding ‘free will conscious observation’ to be ‘built into’ our best description of foundational reality, quantum mechanics, as a starting assumption, ‘free will observation’ being indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy, which demands that a ‘non-teleological randomness’ be the driving force of creativity in Darwinian evolution! Also of interest:
Scientific Evidence That Mind Effects Matter – Random Number Generators – video http://www.metacafe.com/watch/4198007

Correlations of Random Binary Sequences with Pre-Stated Operator Intention: A Review of a 12-Year Program - 1997 Abstract: Strong correlations between output distribution means of a variety of random binary processes and pre-stated intentions of some 100 individual human operators have been established over a 12-year experimental program. More than 1000 experimental series, employing four different categories of random devices and several distinctive protocols, show comparable magnitudes of anomalous mean shifts from chance expectation, with similar distribution structures. Although the absolute effect sizes are quite small, of the order of 10^–4 bits deviation per bit processed, over the huge databases accumulated the composite effect exceeds 7 σ (p ≈ 3.5 × 10^–13). These data display significant disparities between female and male operator performances, and consistent serial position effects in individual and collective results. Data generated by operators far removed from the machines and exerting their efforts at times other than those of machine operation show similar effect sizes and structural details to those of the local, on-time experiments. Most other secondary parameters tested are found to have little effect on the scale and character of the results, with one important exception: studies performed using fully deterministic pseudorandom sources, either hard-wired or algorithmic, yield null overall mean shifts, and display no other anomalous feature. http://www.princeton.edu/~pear/pdfs/1997-correlations-random-binary-sequences-12-year-review.pdf

Mass Consciousness: Perturbed Randomness Before First Plane Struck on 911 - July 29 2012 Excerpt: The machine apparently sensed the September 11 attacks on the World Trade Centre four hours before they happened - but in the fevered mood of conspiracy theories of the time, the claims were swiftly knocked back by sceptics. But it also appeared to forewarn of the Asian tsunami just before the deep sea earthquake that precipitated the epic tragedy.,, Now, even the doubters are acknowledging that here is a small box with apparently inexplicable powers. 'It's Earth-shattering stuff,' says Dr Roger Nelson, emeritus researcher at Princeton University in the United States, who is heading the research project behind the 'black box' phenomenon. http://www.network54.com/Forum/594658/thread/1343585136/1343657830/Mass+Consciousness-+Perturbed+Randomness++Before+First+Plane+Struck+on+911
I once asked an evolutionist, after showing him the preceding experiments, “Since you ultimately believe that the ‘god of random chance’ produced everything we see around us, what in the world is my mind doing pushing your god around?” Here are some of the papers to go with the preceding video and articles:
Princeton Engineering Anomalies Research - Scientific Study of Consciousness-Related Physical Phenomena - publications http://www.princeton.edu/~pear/publications.html The Global Consciousness Project - Meaningful Correlations in Random Data http://teilhard.global-mind.org/
bornagain77
But where do we delineate 'quantum randomness' from entropic randomness in all this? Well, let's add some perspective, shall we? Around the 13:20 minute mark of the following video, Pastor Joe Boot comments on the self-defeating nature of the atheistic worldview with regard to absolute truth:
Defending the Christian Faith – Pastor Joe Boot – video http://www.youtube.com/watch?v=wqE5_ZOAnKo "If you have no God, then you have no design plan for the universe. You have no preexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is flux. It's just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There's no design plan. It's like my kids do 'join the dots' puzzles. It's just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheist is without that (final picture). There is no preestablished pattern (to connect the facts given atheism)." Pastor Joe Boot
The scientist in the following video, who works within the field of quantum mechanics, scientifically confirms Pastor Joe Boot's intuition and shows how conservation of energy in the universe requires quantum non-locality to be true in order for the universe to have coherence.
Is Richard Dawkins proving the existence of God after all? - Antoine Suarez - video http://www.youtube.com/watch?v=jIXXqv9zKEw
The difference between Quantum and Entropic randomness is that the ‘randomness’ of quantum mechanics is, unlike bounded entropic randomness (Planck), found to be associated with the free will of the conscious observer. In the following video, at the 37:00 minute mark, Anton Zeilinger, a leading researcher in quantum teleportation with many breakthroughs under his belt, humorously reflects on just how deeply the determinism of materialism has been undermined by quantum mechanics by musing that perhaps such a deep lack of determinism in quantum mechanics may provide some of us a loophole when we meet God on judgment day.
Prof Anton Zeilinger speaks on quantum physics. at UCT – video http://www.youtube.com/watch?feature=player_detailpage&v=s3ZPWW5NOrw#t=2237s
This ‘unbounded random’ situation found in quantum mechanics is brought out a bit more clearly in the following article:
People Keep Making Einstein’s (Real) Greatest Blunder – July 2011 Excerpt: It was in these debates (with Bohr) that Einstein declared his real greatest blunder: “God does not play dice with the Universe.” As much as we all admire Einstein,, don’t keep making his (real) greatest blunder. I’ll leave the last word to Bohr, who allegedly said, “Don’t tell God what to do with his dice.” ,,, To clarify, it isn’t simply that there’s randomness; that at some level, “God plays dice.” Even local, real interpretations of quantum mechanics with hidden variables can do that. It’s that we know something about the type of dice (at the quantum level) that the Universe plays. And the dice cannot be both local and real; people claiming otherwise have experimental data to answer to. http://scienceblogs.com/startswithabang/2011/07/01/people-keep-making-einsteins-g/
Personally, I felt that such a deep undermining of determinism by quantum mechanics, far from providing a ‘loophole’ on judgment day as Dr. Zeilinger was musing about, actually restores free will to its rightful place in the grand scheme of things, thus making God’s final judgments on men’s souls all the more fully binding since, as far as science can tell us, man truly is a ‘free moral agent’, just as Theism has always maintained. To solidify this basic theistic ‘free will’ claim for how reality is now found to be constructed on the quantum level, the following study came along a few months after I had seen Dr. Zeilinger’s video:
Can quantum theory be improved? – July 23, 2012 Excerpt: Building on nearly a century of investigative work on this topic, a team of physicists has recently performed an experiment whose results show that, despite its imperfections, quantum theory still seems to be the optimal way to predict measurement outcomes., However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (*conscious observation) parameters can be chosen independently (free will) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power,,, http://phys.org/news/2012-07-quantum-theory.html to clarify: What does the term “measurement” mean in quantum mechanics? - “Measurement” or “observation” in a quantum mechanics context are really just other ways of saying that the observer is interacting with the quantum system and measuring the result in toto. http://boards.straightdope.com/sdmb/showthread.php?t=597846 Henry Stapp on the Conscious Choice and the Non-Local Quantum Entangled Effects – video http://www.youtube.com/watch?v=HJN01s1gOqA
Moreover,
In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html
So just as I had somewhat suspected after watching Dr. Zeilinger’s video, it is found that there is indeed a required assumption of ‘free will’ in quantum mechanics (that measurement parameters can be chosen independently), and that it is ‘free will’ that necessarily drives the completely random (non-deterministic) aspect of quantum mechanics.,,, To further differentiate the ‘spooky’ randomness of quantum mechanics, (which is directly associated with the free will of our conscious choices), from that of the ‘bounded entropic randomness’ of the space-time of General Relativity, it is found that,,
Quantum Zeno effect Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay. https://uncommondesc.wpengine.com/intelligent-design/tonights-feature-presentation-epigenetics-the-next-evolutionary-cliff/#comment-445840
bornagain77
In fact, it has been argued that Gravity arises as an ‘entropic force’,,
Evolution is a Fact, Just Like Gravity is a Fact! UhOh! – January 2010 Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged. https://uncommondesc.wpengine.com/intelligent-design/evolution-is-a-fact-just-like-gravity-is-a-fact-uhoh/
In fact, entropy is pervasive in its explanatory power for physical events that occur in this universe,,
Shining Light on Dark Energy – October 21, 2012 Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,, Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy. ,,, The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,, http://crev.info/2012/10/shining-light-on-dark-energy/
In fact it was, in large measure, by studying the entropic considerations of black holes that Roger Penrose was able to derive the gargantuan 1 in 10^10^123 number as to the necessary initial entropic state for the universe:
Roger Penrose – How Special Was The Big Bang? “But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.” How special was the big bang? – Roger Penrose Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123. (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989) Roger Penrose discusses initial entropy of the universe. – video http://www.youtube.com/watch?v=WhGdVMBk6Zo The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose Excerpt: “The time-asymmetry is fundamentally connected to with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).” http://www.pul.it/irafs/CD%20IRAFS%2702/texts/Penrose.pdf
But what is devastating for the atheist (or even for the Theistic Evolutionist) who wants ‘randomness’ to be the source of all creativity in the universe is that randomness, (i.e. the entropic processes of the universe), is now shown, scientifically, to be vastly more likely to destroy functional information within the cell than ever to build it up. Here are my notes along that line:
“Is there a real connection between entropy in physics and the entropy of information? …. The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…” Tom Siegfried, Dallas Morning News, 5/14/90 – Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article Demonic device converts information to energy – 2010 Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform
,,having an empirically demonstrated direct connection between the entropy of the universe and the information inherent within a cell is extremely problematic for Darwinists because of the following principle,,,
“Gain in entropy always means loss of information, and nothing more.” Gilbert Newton Lewis – preeminent Chemist of the first half of last century “Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.” Charles J. Smith – Biosystems, Vol.1, p259.
and this principle is confirmed empirically:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
Thus, Darwinists are found to be postulating that the ‘random’ entropic events of the universe, which are found to be consistently destroying information in the cell, are instead what is creating information in the cell. ,,, It is the equivalent in science of someone (in this case a ‘consensus of scientists’) claiming that gravity makes things fall up instead of down, and that is not in the least overstating the bizarre situation we find ourselves in with the claims of atheistic Darwinists and Theistic Evolutionists. It is also very interesting to note that Ludwig Boltzmann, an atheist, when he linked entropy and probability, did not, as Max Planck, a Christian Theist, points out in the following link, think to look for a constant for entropy:
The Austrian physicist Ludwig Boltzmann first linked entropy and probability in 1877. However, the equation as shown, involving a specific constant, was first written down by Max Planck, the father of quantum mechanics in 1900. In his 1918 Nobel Prize lecture, Planck said: “This constant is often referred to as Boltzmann’s constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant.” http://www.daviddarling.info/encyclopedia/B/Boltzmann_equation.html
I hold that the primary reason why Boltzmann, an atheist, never thought to carry out, or even propose, a precise measurement of the constant on entropy is that he, as an atheist, thought he had arrived at the ultimate ‘random’ explanation for how everything in the universe operates when he linked probability with entropy. i.e. In linking entropy with probability, Boltzmann, again an atheist, thought he had explained everything that happens in the universe on a ‘random’ chance basis. To him, as an atheist, I hold that it would simply be unfathomable to conceive that the ‘random chance’ (probabilistic) events of entropy in the universe should ever be constrained by a constant that would limit the effects of the ‘random’ entropic events of the universe. Whereas on the contrary, for a Christian Theist such as Planck, it is expected that even these seemingly random entropic events of the universe should be bounded by a constant. In fact modern science was born out of such thinking:
‘Men became scientific because they expected Law in Nature, and they expected Law in Nature because they believed in a Legislator. In most modern scientists this belief has died: it will be interesting to see how long their confidence in uniformity survives it. Two significant developments have already appeared—the hypothesis of a lawless sub-nature, and the surrender of the claim that science is true.’ Lewis, C.S., Miracles: a preliminary study, Collins, London, p. 110, 1947.
Verse and Music:
Romans 8:20-21 For the creation was subjected to frustration, not by its own choice, but by the will of the one who subjected it, in hope that the creation itself will be liberated from its bondage to decay and brought into the glorious freedom of the children of God. Phillips, Craig & Dean – When The Stars Burn Down – Worship Video with lyrics http://www.youtube.com/watch?v=rPuxnQ_vZqY
bornagain77
kairosfocus, I noticed that you delineated chance into two different forms, i.e. a pair of dice and quantum. I would suggest a more 'scientific' delineation, as would be pertinent to the ID vs Darwin debate, would involve delineating chance along the entropic and quantum boundary. Randomness (Chance) - Entropic and Quantum. I think the whole Theistic Evolution issue, in which some Theists think God somehow guides evolution through ‘random chance and/or chaotic’ processes, hinges on the misapplication of the term ‘random chance’. A ‘random chance’ event in the universe is generally regarded as something lacking predictability in its occurrence or lacking a pattern. i.e. Generally the cause of the event is held to be unknown, but no one in their right mind would say that ‘nothing’ caused the random event to occur! But how, in a general sense, when an atheist invokes randomness, as if he has issued a statement of final causality, is that any different from a Theist saying an event was ‘miraculous’, if the atheist says an event ‘just happened’ for no particular reason at all? Indeed, it has been observed by no less than the noted physicist Wolfgang Pauli that the word ‘random chance’, as used by Biologists, is synonymous with the word ‘miracle’:
Nobel Prize-Winning Physicist Wolfgang Pauli on the Empirical Problems with Neo-Darwinism – Casey Luskin – February 27, 2012 Excerpt: While they (Darwinian Biologists) pretend to stay in this way completely ‘scientific’ and ‘rational,’ they become actually very irrational, particularly because they use the word ‘chance’, not any longer combined with estimations of a mathematically defined probability, in its application to very rare single events more or less synonymous with the old word ‘miracle.’” Wolfgang Pauli (pp. 27-28) - http://www.evolutionnews.org/2012/02/nobel_prize-win056771.html
Talbott humorously reflects on the awkward situation between Atheists and Theists here:
Evolution and the Illusion of Randomness – Talbott – Fall 2011 Excerpt: In the case of evolution, I picture Dennett and Dawkins filling the blackboard with their vivid descriptions of living, highly regulated, coordinated, integrated, and intensely meaningful biological processes, and then inserting a small, mysterious gap in the middle, along with the words, “Here something random occurs.” This “something random” looks every bit as wishful as the appeal to a miracle. It is the central miracle in a gospel of meaninglessness, a “Randomness of the gaps,” demanding an extraordinarily blind faith. At the very least, we have a right to ask, “Can you be a little more explicit here?” http://www.thenewatlantis.com/publications/evolution-and-the-illusion-of-randomness
Also of related interest:
Scientific American: Evolution "To some extent, it just happens" - July 2013
Excerpt: "Complexity, they say, is not purely the result of millions of years of fine-tuning through natural selection—the process that Richard Dawkins famously dubbed “the blind watchmaker.” To some extent, it just happens. Biologists and philosophers have pondered the evolution of complexity for decades, but according to Daniel W. McShea, a paleobiologist at Duke University, they have been hobbled by vague definitions. “It’s not just that they don’t know how to put a number on it. They don’t know what they mean by the word,” McShea says."
https://uncommondesc.wpengine.com/evolution/scientific-american-studying-how-organisms-evolve-elaborate-structures-without-darwinian-selection/

“It is our contention that if ‘random’ is given a serious and crucial interpretation from a probabilistic point of view, the randomness postulate is highly implausible and that an adequate scientific theory of evolution must await the discovery and elucidation of new natural laws—physical, physico-chemical, and biological.” Murray Eden, “Inadequacies of Neo-Darwinian Evolution as a Scientific Theory,” in Mathematical Challenges to the Neo-Darwinian Interpretation of Evolution, eds. Paul S. Moorhead and Martin M. Kaplan, June 1967, p. 109.
Basically, if the word random (chance) were left in this fuzzy, undefined state, one could very well argue, as Theistic Evolutionists argue, and as even Alvin Plantinga himself has argued at the 8:15 minute mark of the following video,
How can an Immaterial God Interact with the Physical Universe? (Alvin Plantinga) – video http://www.youtube.com/watch?v=2kfzD3ofUb4
that each random (chance) event that occurs in the universe could be considered a ‘miracle’ of God. And thus, the Theistic Evolutionists would contend, God could guide evolution through what seem to us to be ‘random’ events.

Yet because random (chance) and miracle are synonymous in this fuzzy, undefined state, the argument that random events can be considered ‘miraculous’, while certainly true in the overall sense, nonetheless concedes the intellectual high ground to the atheists. In the popular imagination the word random is not associated with the word miraculous at all; it is most strongly associated with unpleasant ‘random’ events: ‘natural’ disasters such as tornadoes, earthquakes, and other catastrophes. These are events that many people would prefer to distance God from in their thinking, or that many people, even hardcore Christian Theists, find hard to associate with an all-loving God (i.e. the problem of evil). Moreover, as Casey Luskin and Jay Richards pointed out in a disagreement with Alvin Plantinga, Darwinists have taken full advantage of this popular definition of a ‘random event’ (the general notion of unpredictable tragic events being separated from God’s will) in textbooks, to mislead the public into thinking that a ‘random’ event is truly separated from God’s divine actions:
Unguided or Not? How Do Darwinian Evolutionists Define Their Theory? – Casey Luskin – August 11, 2012 Excerpt: While many new atheists undoubtedly make poor philosophers, the “unguided” nature of Darwinian evolution is not a mere metaphysical “add on.” Rather, it’s a core part of how the theory of Darwinian evolution has been defined by its leading proponents. Unfortunately, even some eminent theistic and intelligent design-friendly philosophers appear unaware of the history and scientific development of neo-Darwinian theory. http://www.evolutionnews.org/2012/08/unguided_or_not_1063191.html
More notes along that line:
The term “chance” can be defined several ways. One is a mathematical probability, such as the chance involved in flipping a coin; however, when scientists use this term, it is generally substituting for a more precise word such as “cause,” especially when the cause is not known. “To personify ‘chance’ as if we were talking about a causal agent,” notes biophysicist Donald M. MacKay, “is to make an illegitimate switch from a scientific to a quasi-religious mythological concept.” Similarly, Robert C. Sproul points out: “By calling the unknown cause ‘chance’ for so long, people begin to forget that a substitution was made. . . . The assumption that ‘chance equals an unknown cause’ has come to mean for many that ‘chance equals cause.’”

Nobel laureate Jacques L. Monod, for one, used this chance-equals-cause line of reasoning. “Pure chance, absolutely free but blind, [is] at the very root of the stupendous edifice of evolution,” he wrote. “Man knows at last that he is alone in the universe’s unfeeling immensity, out of which he emerged only by chance.” Note he says: ‘BY chance.’ Monod does what many others do—he elevates chance to a creative principle. Chance is offered as the means by which life came to be on earth. In fact, dictionaries show that “chance” is “the assumed impersonal purposeless determiner of unaccountable happenings.” Thus, if one speaks about life coming about by chance, he is saying that it came about by a causal power that is not known. (per UD blogger Barbara)
But because of the advance of modern science, we need not remain armchair philosophers endlessly wrangling over whether the word random is synonymous with the word miraculous (all the while conceding the public relations battle over the word ‘random’ to the Darwinists). We can now define more precisely what the word random means in terms of a causal chain, so as to see exactly what a Darwinist means when he claims a ‘random’ event has occurred. In this endeavor, it is first and foremost important to note that when computer programmers/engineers want to build a better random number generator for a program, they must find a better source of entropy in order to achieve the increased randomness they desire:
Cryptographically secure pseudorandom number generator
Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available.
http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator

By the way, if you need some really good random numbers, go here: http://www.random.org/bytes/ These are truly random (not pseudo-random) and are generated from atmospheric noise. (per Gil Dodgen)
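To make the point about entropy sources concrete, here is a minimal Python sketch (my own illustration, not drawn from the Wikipedia article or from Gil Dodgen): a seeded pseudo-random generator is completely deterministic and reproducible, while a generator backed by the operating system's entropy pool is not. The variable names are arbitrary; only the standard-library modules random and secrets are used.

import random   # deterministic pseudo-random generator (Mersenne Twister)
import secrets  # draws on the operating system's entropy pool (os.urandom)

# A seeded PRNG: the "random" stream is completely reproducible.
rng = random.Random(42)
print([rng.randint(1, 6) for _ in range(5)])   # the same five "dice rolls" on every run

# An entropy-backed source: unpredictable, not reproducible from any seed.
print([secrets.randbelow(6) + 1 for _ in range(5)])

The point is the one made in the excerpt above: the pseudo-random stream only looks random; any genuine unpredictability has to be imported from an entropy source outside the algorithm itself.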
Along that line:
Entropy
Excerpt: It is often said that entropy is an expression of the disorder, or randomness, of a system, or of our lack of information about it (which on some views of probability amounts to the same thing as randomness).
http://en.wikipedia.org/wiki/Entropy#Order_and_disorder
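As a rough illustration of entropy as "lack of information," here is a minimal Python sketch (my own, not from the Wikipedia entry) computing Shannon entropy, H = -sum(p * log2(p)), in bits per byte; the function name is merely illustrative.

from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum(p * log2(p))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy(b"aaaaaaaa"))        # zero bits: a fully ordered string carries no surprise
print(shannon_entropy(bytes(range(256))))  # 8.0 bits: all 256 byte values equally likely

A perfectly ordered string carries no surprise; a maximally disordered one carries the most, which is the sense in which entropy and randomness travel together.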
Also of interest (not that computer programmers will ever tap into it): the largest source of entropy (randomness) in the universe is now known to be black holes:
Entropy of the Universe – Hugh Ross – May 2010 Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated. http://www.reasons.org/entropy-universe
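For a sense of why supermassive black holes dominate: the standard Bekenstein-Hawking formula, S = 4*pi*k_B*G*M^2 / (hbar*c), makes a black hole's entropy grow with the square of its mass. Below is a minimal Python sketch (my own back-of-the-envelope check, not taken from the Ross article); the constants are rounded SI values and the function name is illustrative.

from math import pi

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # rounded SI constants
M_sun = 1.989e30  # solar mass in kg

def bekenstein_hawking_entropy(mass_kg: float) -> float:
    """S = 4*pi*k_B*G*M^2 / (hbar*c), in J/K, for a Schwarzschild black hole."""
    return 4 * pi * k_B * G * mass_kg ** 2 / (hbar * c)

print(bekenstein_hawking_entropy(M_sun))        # roughly 1.5e54 J/K for a solar-mass hole
print(bekenstein_hawking_entropy(1e9 * M_sun))  # roughly 1.5e72 J/K for a billion-solar-mass hole

The eighteen-orders-of-magnitude jump that comes from squaring the mass is why a relative handful of supermassive black holes can dominate the observable universe's entropy budget.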
bornagain77
But, chance is so vague and imprecise . . . ? How can you say you are using it as a legitimate scientific concept? (And so forth) kairosfocus
