Uncommon Descent | Serving The Intelligent Design Community

The paradox of almost definite knowledge in the face of maximum uncertainty — the basis of ID


When facing maximum uncertainty, it seems paradoxical that one can have great assurance about certain things. This has enormous relevance to ID because Darwinists will argue, “How can you be so certain of something when it is apparent there is great uncertainty in the system?” I will respond by saying, “when we have maximum uncertainty about what specific configuration 500 fair coins is in (by randomizing the coins in some vigorous fashion), we simultaneously have almost near certainty about which configurations it cannot be in — such as all-coins heads or a pre-specified sequence….”

When a process like a biotic soup maximizes uncertainty about the possible polymer sequences that can evolve, it gives us near certainty that life will not evolve by chance. When a process like random mutation and real selection (as opposed to DFFM) maximizes uncertainty about the direction of evolution, we then know that complex functions will not evolve via RM + RS (where RS stands for real selection in the wild, not Darwin’s Falsified Fantasy Mechanism, DFFM).

If I apply a stochastic process that maximizes the uncertainty in a system (i.e., a chance process), then, depending on the situation, we can often attain high certainty about which configurations the system cannot be in, even in principle. Thus, at least in principle, given a distribution, we can reject chance as the hypothesis for a particular configuration, and thereby identify design.

Darwinists will complain that we know so little about how the system will evolve via chance, especially since stochastic mechanisms are in play. An ID proponent should then respond by saying, “Great! And even better, suppose we have chance maximize the uncertainty of a system’s configuration even more as it evolves over time. In that scenario, we can be quite sure chance won’t make a living organism or complex integrated functions, almost as surely as we know that when we randomize a large set of fair coins laid out on a table, they will not be 100% heads but will instead approach the expectation value of 50% heads.”

This paradox is related to the Law of Large Numbers, a law which I regard as The Fundamental Law of Intelligent Design. Some have expressed concern that the Law of Large Numbers can’t be applied to specific sequences. It can be; you just have to be a little creative. For examples, see: Coordinated Complexity, the key to refuting single target and postdiction objections.
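For readers who want to see the Law of Large Numbers at work on the coin example, here is a minimal simulation sketch in Python (the trial count is an arbitrary choice; nothing in the argument depends on it):

import random

N_COINS = 500
N_TRIALS = 10_000  # arbitrary; more trials only sharpens the picture

all_heads = 0
fractions = []
for _ in range(N_TRIALS):
    heads = sum(random.randint(0, 1) for _ in range(N_COINS))  # 1 = heads
    fractions.append(heads / N_COINS)
    if heads == N_COINS:
        all_heads += 1

print("mean fraction of heads:", sum(fractions) / N_TRIALS)   # ~0.5
print("lowest / highest fraction seen:", min(fractions), max(fractions))
print("trials that came up all heads:", all_heads)             # 0 in practice

Each individual run is maximally uncertain as to its exact sequence, yet the aggregate behaviour is highly predictable; that is the sense in which maximum uncertainty about the specific configuration coexists with near certainty about what will not be observed.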

NOTES

1. I’ve tried to define “chance” as a stochastic process that maximizes the uncertainty in a system relative to the degrees of freedom of the relevant symbols. Sorry for being so meticulous about what “chance” means, but given that there were some 111 comments in Statistics Question for Nick Matzke going off on all sorts of tangents about the meaning of chance, I provide a tentative definition to immunize this discussion from such clutter.

2. A designed system is defined as a system that is not the result of law and chance. Whether, in the ultimate sense, such a system is created by a conscious intelligence is a separate question. But the reason defining design in this way is important is that it takes away complaints like “seeing the structure of such systems as special is merely a figment of our postdictive imagination, like seeing faces in clouds.” As for not defining design with respect to intelligence, see: It is useful to separate design from theories of intelligence and intelligent agency.

3. We have an illustration of this principle in thermodynamics. As a system approaches thermal equilibrium and uniform temperature, the uncertainty of the specific energy configuration is maximized, but in the process of maximizing uncertainty about the specific energy configuration (and uncertainty is also entropy, S = k log W), we gain high certainty about the uniformity of temperature. All I’m doing is importing this illustration from thermodynamics into the realm of ID: as an evolutionary process maximizes the uncertainty of a particular configuration, it also, as a matter of principle and to a high level of certainty, precludes the evolution of highly specific outcomes.
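The coin example can be put in the same multiplicity language. A minimal sketch (it merely restates the standard binomial counts; the numbers are order-of-magnitude companions to the 2^500 figure used throughout this discussion):

import math

N = 500
W_mid = math.comb(N, N // 2)   # multiplicity of the ~250-heads "macrostate"
W_all = math.comb(N, N)        # multiplicity of the all-heads "macrostate" (= 1)

print(f"W(250 heads) is about {W_mid:.2e}")                         # ~1.2e149
print(f"W(all heads) / W(250 heads) is about {W_all / W_mid:.1e}")  # ~8.6e-150

In S = k log W terms, the all-heads macrostate sits roughly 149 orders of magnitude in multiplicity below the equilibrium-like macrostate near 250 heads, which is why randomizing the coins maximizes uncertainty about the exact sequence while making all-heads a practical impossibility.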

4. This is not an argument from ignorance against evolutionism; it is a proof by contradiction.

UPDATE:

5. There is a distinction between uncertainty that arises because we were unable to observe data and uncertainty that a process (like randomizing a set of fair coins) induces. This essay deals with the latter sense of uncertainty.

Comments
Ho-De-Ho, Hey, nice to see you again. Long time. Thanks for the kind words and insights. Sal
scordova
December 20, 2013 at 10:40 AM PDT
HDH @32: Well said, particularly with respect to the false charge of ID being a "science stopper." There are so many aspects of the evolutionary storyline that are "explained" away by just saying, in effect, "Well, stuff happens." This is not just with coins; it has actually happened more than once over the decades in scientific research. The materialist creation myth is much more of a science stopper than any possible reference to intelligent intervention has ever been.
Eric Anderson
December 20, 2013 at 8:00 AM PDT
NetResearchMan, Good thoughts, and welcome. I'm still trying to find anyone who can point me to the alleged "self-replicating molecule." I have to laugh every time I hear Dawkins' quote about a "self-replicating molecule" that kicks evolution off. I suppose in principle a self-replicating molecule could be created, but it would be an astoundingly complicated task, and thus far no-one has ever found such an elusive beast nor been able to point to one, despite decades of strenuous research. So the whole idea of a "self-replicating molecule" that kicks things off is, like so much of the rest of the materialistic creation myth, just a figment of the imagination.
Eric Anderson
December 20, 2013 at 7:56 AM PDT
Thank you KF, if I may call you that. Pleasure to make your acquaintance. I have enjoyed very many of your posts. Ho-De-Ho
Ho-De-Ho
December 20, 2013 at 4:48 AM PDT
HDH: Interesting, and welcome. KF
kairosfocus
December 20, 2013 at 4:05 AM PDT
NRM: really interesting post. Ta very much.
Ho-De-Ho
December 20, 2013 at 2:25 AM PDT
Whatho everybody. This coin flipping sequence of threads has been deuced interesting to follow. I have often heard it said by some of the ID detractors that ID does not make any predictions. From following the statements of all of the good people here at UD who have been casting 500 coins about hither and thither, I think a case for a positive prediction can clearly be seen. I shall state it now. If a person were to be sauntering through the Roman districts one sunny afternoon and happened upon the Trevi fountain, they may plod over to have a look into its waters at all of the coins thrown into it by the many visiting lovers and wish-makers. Now what would our coin spotting adventurer expect to see? From the "500 coin" comments, I would say that if our adventurer toed the ID line, they would expect to see a pretty even distribution of heads and tails displayed amidst the booty. Why? Because that kind of configuration does not need an intelligence or physical law to explain it and is observed repeatedly on a regular basis. On the other hand, from following this debate, one would conclude that our Roman sojourner, if he were anti-ID, would not predict anything whatsoever. If the coins were all Heads, well there you are. All Tails? Not a concern. A mixture of the two? So what. After all, all configurations are possible, are they not? On top of this, what would be the response of the ID advocate if all coins were heads up? They would no doubt say to themselves: "Gosh that is odd. Why are they all like that? Very rummy. Perhaps some child is diving down and turning them all Heads up. Or is there some particularly localised force of nature at work here? Let me see..." - or thoughts to that effect. If the urge took them, they could set up a camcorder or whatever, in order to see if there is a diving boy or girl interfering with the coins' orientation. If not, let's start doing some more detailed research on the fountain. This is weird stuff, what? By contrast, the anti-ID figure (as posited on these threads), on finding all coins glimmering their Heads to him, would merely say: Nothing interesting here. No need for inquiry. This is of note since I have often heard it said that ID stultifies curiosity in science. However, that is clearly not the case in the science of coin tossing at least. The opposite view, in this instance, would appear to be waving that flag. By-the-by, I am sure nobody opposing ID would actually hold these singular views, but merely from the comments here at UD these views are what an impartial bystander could conclude. Thanks for reading. Ho-De-Ho
Ho-De-Ho
December 20, 2013 at 2:24 AM PDT
NRM: Well said, and welcome. KF
kairosfocus
December 20, 2013 at 12:17 AM PDT
PS: Maybe it will help to say this. The number of possibilities for 500 coins is so large that by a very long shot, we could not lay out a display of all the 3.27 * 10^150 possibilities using the entire atomic resources of the observable cosmos.
kairosfocus
December 20, 2013 at 12:15 AM PDT
NetResearchMan, I don't believe we've met. Greetings. Welcome to Uncommon Descent. Thank you for your insightful comment. Sal
scordova
December 20, 2013 at 12:08 AM PDT
SC: My onward thought is that the key point is to understand the scale of "spaces" of possible outcomes, what I have called configuration spaces. For 500 coins we have 2^500 = 3.27 * 10^150. Huge: there are only about 10^80 atoms in the observed cosmos. We then must reckon with sampling from that space. Our solar system is our effective universe for interactions between atoms other than by transmission of light. Mechanical, chemical, etc. The fastest chem rxns are about 10^-14 s, and organic ones tend to be much slower. 10^17 s is an upper limit of plausible age. Treating each atom as a search machine is overly generous. Running through more or less these numbers, we cannot ever exhaustively search all configs for 500 coins. The resources do not permit. Indeed, we max out at roughly a ratio of 1 straw to a cubical haystack 1,000 light years across. As thick as our galaxy at its central bulge. I have commonly suggested, superpose the haystack on our galactic neighbourhood and do a one straw sized sample. With all but certainty, we will pick up straw, though many, many thousands of star systems would be in the notional haystack. (Don't forget, stars tend to be several light years apart.) Too much stack and too few needles. Which is, of course, your point in your original post. Shifting slightly, here is Wiki on the monkeys at keyboards challenge:
The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. In this context, "almost surely" is a mathematical term with a precise meaning, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the first instances of the monkey metaphor being used comes from French mathematician Émile Borel in 1913[1] but it may go back further than that. The relevance of the theorem is questionable—the probability of a universe full of monkeys typing a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero). Variants of the theorem include multiple and even infinitely many typists, and the target text varies between an entire library and a single sentence. The history of these statements can be traced back to Aristotle's On Generation and Corruption and Cicero's De natura deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, and finally to modern statements with their iconic typewriters. In the early 20th century, Émile Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.
In short, speaking against interest, Wiki acknowledges the point. A finite search constrained by cosmic resources will run into search limitations that make it all but certain that something past the FSCO/I threshold is not going to be discovered by blind chance dominated attempts. And that is the big problem for evolutionary materialist accounts of the origin of life and of major body plans. In Darwin's pond or the like, there are only forces of physics and chemistry at work. Until a self replication facility that also creates an encapsulated metabolic automaton is generated, one cannot appeal to differential reproductive success. There is no reproduction. Until there is encapsulation, diffusive etc. forces will scatter or allow interference with processes. Until there is a metabolic automaton, input materials and energy cannot be effectively processed. Where also just the code is credibly 100,000 to 1 Mn bits. Similarly, to transform to create novel body plans we need 10 - 100+ mn bits each, dozens of times. And with essentially zero evidence of a smooth incremental slope uphill to the body plans. Where also, population genetics on feature fixing is not temporally favourable. The cumulative effect of such considerations points away from blind chance and/or mechanical necessity and toward the only empirically warranted cause of FSCO/I: design. But, this is too often not ideologically acceptable. Hence the deadlock. Though, I think there are signs of desperation on the other side. KF
kairosfocus
December 20, 2013 at 12:07 AM PDT
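The arithmetic behind the figures in the comment above can be checked in a few lines; a minimal sketch in Python, using only the numbers quoted there (10^57 atoms, 10^-14 s reaction times, 10^17 s of available time), not independently sourced here:

config_space = 2 ** 500          # possible configurations of 500 coins
atoms = 10 ** 57                 # atoms in the solar system (figure quoted above)
obs_per_second = 10 ** 14        # one "observation" per fastest chemical reaction time
seconds = 10 ** 17               # rough upper bound on available time

max_samples = atoms * obs_per_second * seconds   # ~1e88 samples
print(f"configurations: {config_space:.2e}")                               # ~3.27e150
print(f"maximum samples: {max_samples:.1e}")                               # ~1.0e88
print(f"fraction of the space searchable: {max_samples / config_space:.1e}")  # ~3e-63

Even on this very generous accounting, the available samples cover only about one part in 3 * 10^62 of the configuration space, which is the point of the haystack illustration.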
F/N: Related, clipping Wikipedia as speaking against known ideological inclinations: RANDOM VARIABLE: >> In probability and statistics, a random variable or stochastic variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense).[1]:391 As opposed to other mathematical variables, a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability. >> RANDOMNESS: >> Applied usage in science, mathematics and statistics recognizes a lack of predictability when referring to randomness, but admits regularities in the occurrences of events whose outcomes are not certain. For example, when throwing two dice and counting the total, we can say that a sum of 7 will randomly occur twice as often as 4. This view, where randomness simply refers to situations where the certainty of the outcome is at issue, applies to concepts of chance, probability, and information entropy. In these situations, randomness implies a measure of uncertainty, and notions of haphazardness are irrelevant. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. A random process is a sequence of random variables describing a process whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory. >> PROBABILITY: >> Probability is a measure or estimation of likelihood of occurrence of an event.[1] Probabilities are given a value between 0 (0% chance or will not happen) and 1 (100% chance or will happen).[2] The higher the degree of probability, the more likely the event is to happen, or, in a longer series of samples, the greater the number of times such event is expected to happen. A simple example is a coin toss that has 0.5 or 50% chance of landing with the "head" side facing up. These concepts have been given an axiomatic mathematical derivation in probability theory . . . >> Significantly, Wiki has no article specifically on chance, but one on indeterminism that begins:
Indeterminism is the concept that events (certain events, or events of certain types) are not caused, or not caused deterministically (cf. causality) by prior events. It is the opposite of determinism and related to chance. It is highly relevant to the philosophical problem of free will, particularly in the form of metaphysical libertarianism. In science, most specifically quantum theory in physics, indeterminism is the belief that no event is certain and the entire outcome of anything is a probability. The Heisenberg uncertainty relations and the “Born rule”, proposed by Max Born, are often starting points in support of the indeterministic nature of the universe. Indeterminism is also asserted by Sir Arthur Eddington, and Murray Gell-Mann. Indeterminism has been promoted by the French biologist Jacques Monod's essay "Chance and Necessity". The physicist-chemist Ilya Prigogine argued for indeterminism in complex systems . . .
Britannica, 2001: >> Chances, probabilities, and odds. Events or outcomes that are equally probable have an equal chance of occurring in each instance. In games of pure chance, each instance is a completely independent one; that is, each play has the same probability as each of the others of producing a given outcome. Probability statements apply in practice to a long series of events but not to individual ones. The "law of large numbers" is an expression of the fact that the ratios predicted by probability statements are increasingly accurate as the number of events increases; but the absolute number of outcomes of a particular type departs from expectation with increasing frequency as the number of repetitions increases. It is the ratios that are accurately predictable, not the individual events or precise totals. The probability of a favourable outcome among all possibilities can be expressed: probability (p) equals the total number of favourable outcomes (f ) divided by the total number of possibilities (t), or p = f/t. But this holds only in situations governed by chance alone . . . >> In short, there is a usage that builds on the flat random case, but also there is room in the chance concept for variables that are not so evenly distributed. The fundamental point being that chance is an influence when there is significant contingency and the outcome is credibly not being controlled by an agent. Which -- after all the huffing and puffing by objectors -- brings us full circle to the approach long since on the table. Let me clip from my longstanding always linked note, noting too that the three categories go all the way back to Plato in The Laws, Bk X:
. . . the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: "necessity"); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation . . . . Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a "real" explanation. This often confusing issue is best initially approached/understood through a concrete example . . .
A CASE STUDY ON CAUSAL FORCES/FACTORS -- A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.
But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert.
Note the case study. Where also, the odds of any side of a fair die being uppermost will be the same, but if we take two dice and sum the pips, there is a peak at 7. The sum variable, s, is driven by a chance process and is a case of chance, but is definitely not going to be equiprobable from 2 to 12. Pardon if this is a bit longish; it is in response to ever so many arguments and side tracks that I think have clouded what should not even be seriously contentious. KF
kairosfocus
December 19, 2013 at 11:37 PM PDT
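The two-dice remark is easy to verify by brute enumeration; a trivial Python sketch (nothing in it is specific to this discussion):

from collections import Counter

# All 36 equally likely outcomes of two fair dice, tallied by their sum.
sums = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

for total in range(2, 13):
    print(f"sum {total:2d}: {sums[total]}/36")  # peaks at 6/36 for a sum of 7

Each throw is a matter of chance, yet the distribution over sums is anything but flat, which is exactly the distinction being drawn between "chance" and "equiprobable."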
Jerad #18: But, as stated, this has very little to do with evolution. There was no pre-specified sequence that was a target.
The point of the 500 coins argument is merely to get anti-IDers to acknowledge that if the probability of a configuration of matter occurring as a result of random processes is below some threshold, chance can be ruled out as an explanation. It's true that evolution doesn't have an exact pre-specified sequence as a target. However, it does specify that a configuration of matter belonging to a certain 'class' came about through random processes, namely a configuration of matter which can implement RM+NS (random mutation + natural selection) to allow evolution to start. Darwinists typically argue that "any" self-replicating molecule, no matter how simple, by definition implements RM+NS, but I don't think this is true. For example, consider Dawkins' comment that a self-replicating molecule "only had to happen once" for evolution to start, offered without any quantification of the complexity such a molecule would need in order to implement RM+NS, not just self-replicate. I've read a half dozen research papers on self-replicators, and none of them implement RM+NS. They catalyze formation of random tiny polymers, or catalyze formation of copies of themselves (if you've seen an experiment that has done better, let me know). They don't randomly mutate, and they don't differentially utilize resources to be able to compete (no natural selection). And that description of self-replicator research is EXTREMELY charitable, because all of these experiments only work in chemically pure environments, with the organic building blocks, chemical energy, and complex catalysts provided by the experimenter. The experimental results are laughable, yet if you go to any evolution-friendly forum on the internet, lay believers in evolution claim based on references to such research that OOL is a solved problem! I wonder if any of them have actually read any one of these papers? While it's true that there are infinitely many ways to imagine how a self-replicator capable of RM+NS might work, the number of configurations of matter with that property is clearly an infinitesimal fraction of all possible configurations of matter. If you don't believe this, consider placing a living creature in a blender, turning it on for a minute, then stopping. What percentage of the time do you imagine the creature will still be living at the end of this process? Clearly never. Is the problem helped significantly if it's random organic molecules swirling around in a soup trying to form a cell instead of a macroscopic creature? Is it helped at all if some of those molecules are simple self replicators of the best kind researchers have come up with? The macroscopic creature in a blender has a bunch of complex self replicators left in it (still living cells), yet surely no evolutionist would argue that *occasionally* those self replicators would band together and restore the shredded animal to life. Why, then, would you suppose that a random collection of infinitely more primitive self replicators than any given cell in that creature could band together to form something capable of RM+NS? One interesting example of a "self replicator" is the Spiegelman Monster. It's a family of RNA chains capable of self replication. Never mind that this "self" replicator requires an RNA replication enzyme from a biological source as a starting point (which it can't itself produce), plus a steady supply of nucleotides and energy provided by the experimenter.
Even if you accept all that as reasonable, the Spiegelman Monster shows that simple self replicators have a tendency to *devolve* toward lesser complexity, not increasing complexity. The original formation of the Monster involved starting with a long string of RNA from a virus capable of replication, and over time, it devolves into a shorter and shorter replicating strand, as the replicator selects against sections of RNA that don't immediately contribute to replication. This proves that even were a self replicator possible without a biological enzyme and carefully controlled environment as a starting point, such a self replicator would select *against* the increasing complexity necessary to bring that replicator to a point where it could implement RM+NS. Game over, evolution...
NetResearchMan
December 19, 2013 at 11:31 PM PDT
KF, Your criticism is a good one. I suspect there is a way to fix the complication you identified. I have to give it some thought. Thank you. I hope you liked the essay nonetheless, and I hope you and your family are well. Merry Christmas, Sal
scordova
December 19, 2013 at 11:01 PM PDT
SC: I repeat my concern as given previously, with slight adjustments. Namely:
Pardon but I am a little uncomfortable with a maximising uncertainty definition for chance phenomena [which points to flat randomness], as random variables can also show bias by central tendency or inclination to one end or another of a range of possibilities or more. That is why I favour a more physical approach that starts with a paradigm case such as fair dice then uses it to introduce the concept of highly contingent outcomes for similar initial conditions that have no credible intelligent direction. As you know I spoke of:
(i) clashing uncorrelated chains of events that yield a result that varies seemingly at random across a range due to chaos effects (such as a die falling, tumbling and settling . . . Newtonian forces & factors, but 8 corners and 12 edges amplify uncontrollable small differences) and also (ii) of the sort of hard core written in randomness that we find in quantum phenomena and statistical mechanics etc.
For statistical mechanics phenomena I think the model of a box of marbles with pistons at the ends that can give a hard push and set in train movements and collisions culminating in Maxwell Boltzmann statistics is useful and points to thermodynamics:
=======================================
||::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::||
=======================================
Brownian motion gives us an observable and sufficiently close case that we may observe "giant molecules" taking part in the resulting "dance." One that played a role in the award of a Nobel prize. That of course then raises the issue of when do we see from results that intelligence is a likely cause? That raises the issue that at some reasonable threshold of complexity measured by scope of configuration space [500 coins . . . 3.27 * 10^150 possibilities], and available search resources [a solar system of 10^57 atoms, with fastest chem rxns 10^-14 s, and 10^17 s], a blind process such as chance becomes maximally implausible as an explanation of unusual configs such as 500 H, or alternating H & T, or the first 72 characters of this message in ASCII code. (Let's just say that generously treating each solar system atom as an observer of a 500 coin array every 10^-14 s for a "reasonable" lifespan comes up to 10^88 samples from the space of over 10^150. Far too small to be confident of catching something that is rare.) It is not hard to see — save for those with a will problem — that something that is functionally specific and complex beyond 500 bits worth of possible configs will not plausibly result from blind chance and/or mechanical necessity searching on the gamut of our solar system. As the 500 H coins in a row case will aptly illustrate, and as would a similar row of coins spelling out the ASCII code for the first 72 or so characters of this message. The only empirically grounded explanation for such is design.
So, while obviously a flat random pattern like ideal dice is a case of chance, the number of H's in a tossed set of 500 fair coins laid out in a single row is also a matter of chance. But the latter -- being binomially distributed -- will sharply peak around 250 H. Likewise, Boltzmann's distribution is a reverse-J and yet is held to reflect the chance-dominated distribution of energy in collections of molecules or the like. So also, while I agree with the maximum uncertainty case as a very good example of a random variable reflecting chance, some chance processes will reflect biases leading to bells, reverse J's, U's [often with unequal peaks], etc. Maybe you want to suggest the flat random case as an ideal, with some systematic influence or cumulative process generating distributions with the other shapes? I hope that helps. KF
kairosfocus
December 19, 2013 at 10:54 PM PDT
Piltdown2 @16:
But I agree with Sal, natural selection as imagined by Darwinists is not based on real-world observations, where the incredibly rare beneficial mutations are much more likely to be wiped out by subsequent mutations or other natural causes than to be spread throughout a population and lead to new species.
Oh, the problem is much more fundamental than that. Natural selection is just a shorthand term for a process that is essentially never identified nor properly understood. And if we really understood what was going on in any particular case, we could explain it quite fully without ever using the term "natural selection." Natural selection is not a force; it doesn't do anything. It is just an after-the-fact label applied to the observation that this-or-that change has taken place in a population. It is remarkable how many people reverentially refer to natural selection as though it were some actual process, or some actual force, or had some actual causal explanatory power.
Eric Anderson
December 19, 2013 at 10:36 PM PDT
Graham2: Yes, the old "Natural Selection changes the equation" bluff. Tell us, please: what precisely does natural selection do to eliminate chance? What does it drive towards? (And, no, don't say something like "survivability," because that is just circular.)
Eric Anderson
December 19, 2013 at 10:17 PM PDT
This may (or may not) be of interest to your post: Quantum Computation Basics and Reflections On Quantum Cognitive Science - video https://www.youtube.com/watch?v=L-qkQ86fXIs Of note: Prisoner's dilemma is solved by quantum computation
bornagain77
December 19, 2013 at 2:50 PM PDT
Jerad: Obviously it’s more likely that there will be a roughly half and half mix of heads and tails but that’s not specifying a particular sequence before hand.
I didn't say it was; you're attributing to me an argument I didn't make. I said:
we simultaneously have almost near certainty about which configurations it cannot be in — such as all-coins heads or a pre-specified sequence….”
A pre-specified sequence is a sequence specified independently before observing the actual sequence after the randomizing process. You said:
Please mathematically define or state what you mean by ‘almost near certainty’.
How about 99.999999......%, or whatever a 22-sigma deviation from expectation will give you? You apparently learned nothing from the calculations that were provided in the links above (and those were discussions you participated in). :roll:
scordova
December 19, 2013 at 12:56 PM PDT
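The 22-sigma figure can be checked directly from the binomial distribution for 500 fair coins; a minimal sketch that only reproduces the mean, standard deviation, and all-heads probability already discussed in these threads:

import math

n, p = 500, 0.5
mean = n * p                      # 250 heads expected
sd = math.sqrt(n * p * (1 - p))   # ~11.18

z_all_heads = (n - mean) / sd     # ~22.4 standard deviations above expectation
print(f"mean = {mean:.0f}, sd = {sd:.2f}, all-heads z-score = {z_all_heads:.1f}")

# Exact probability of the single all-heads configuration, for comparison:
print(f"P(500 heads) = {0.5 ** 500:.2e}")   # ~3.05e-151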
=>Graham2 #15, What are the components of 'Natural Selection'? In trying to understand it, I have listed some components. Can you add more? 1) Predators 2) Genetic mutation 3) Reproduction 4) Natural calamities
coldcoffee
December 19, 2013 at 2:25 AM PDT
I will respond by saying, “when we have maximum uncertainty about what specific configuration 500 fair coins is in (by randomizing the coins in some vigorous fashion), we simultaneously have almost near certainty about which configurations it cannot be in — such as all-coins heads or a pre-specified sequence….”
Please mathematically define or state what you mean by 'almost near certainty'. As has been stated many, many times: Any given, pre-specified sequence of heads and tails is just as likely/unlikely as any other sequence given a purely random generation process. Obviously it's more likely that there will be a roughly half and half mix of heads and tails but that's not specifying a particular sequence before hand. There are many, many sequences of heads and tails which are 'roughly' half and half, so the chances of something from that 'class' arising are greater. But each individual sequence has the same probability of being generated. Again: ANY pre-specified sequence of heads and tails has a 1/2^500 chance of being generated in a random generation process. The 'meaning' you assign to some sequences has nothing to do with their chances of arising in a purely random process. But, as stated, this has very little to do with evolution. There was no pre-specified sequence that was a target.
Jerad
December 18, 2013 at 11:42 PM PDT
Sal, You’ve been tossing out some excellent posts in the past few months. Hats off, and keep ‘em coming. Thanks! Happy Holidays
Thank you for the kind words. Happy Holidays as well. My goal is to make ID material accessible and understandable by being succinct and clear and stating my case in a way that will hopefully be unassailable. I've tried to distill the essentials of 500 pages of Bill Dembski's writings in a way the public can more easily understand. To my knowledge, I'm the only ID proponent who emphasizes the use of Expected Values and the Law of Large Numbers. The fact that the concepts seem easier to grasp when phrased in terms of expectation and the LLN encourages me to pursue this as an alternate way of framing the ID arguments at the college level. I think I have some empirical evidence of the effectiveness of this new line of argumentation since it is already reducing one famous critic to ashes. :-)
scordova
December 18, 2013 at 6:44 PM PDT
I first heard the term "theory of natural selection" used by a Darwinist in reference to evolution on a recent Medved show. Got me thinking that since this is only half of the RM+NS equation, it must be easier to defend than the whole theory. But I agree with Sal: natural selection as imagined by Darwinists is not based on real-world observations, where the incredibly rare beneficial mutations are much more likely to be wiped out by subsequent mutations or other natural causes than to be spread throughout a population and lead to new species.
Piltdown2
December 18, 2013 at 6:37 PM PDT
ow: Natural selection. Now you have heard about it. Predators etc. do the selecting. If this isn't 'teleological' then I'm sorry, but that's how the real world works.
Graham2
December 18, 2013 at 5:12 PM PDT
Graham2: NS is a filter, not an agent; a non-random process of selection, i.e. agency, demands teleology, which Darwinists/evolutionists and practically the entire scientific and philosophical establishment for at least the past century and a half steadfastly refuse to let inside the tent. If you can explain a non-teleological version of non-random NS then I would truly love to hear about it.
owendw
December 18, 2013 at 5:05 PM PDT
Sal at #10: I'm not disputing the coin stuff; yes, the universe is random, no one is denying that, but it's selection that imposes a non-random direction on biological systems. You have rejected this, so the coin stuff is all unnecessary. You don't need to mention coins at all. Without selection, evolution is not possible. See 'straw man'.
Graham2
December 18, 2013 at 2:46 PM PDT
Graham2, This time I will thank you for bringing up why you think evolution works, namely Darwinian selection. Lots of people hold to this view. Dawkins swears by it. But it is a mistaken view. Your appearance in this discussion raises some questions that some of our readers have in the back of their minds. So thank you for participating.
scordova
December 18, 2013 at 2:41 PM PDT
Graham2: If evolution were driven by chance alone, then we would never expect organized life forms to appear, no one is suggesting that. It is selection that makes it all possible, but you have discarded selection.
How about irreducible complexity? It is for evolutionists to argue that there is a gradual array of functional intermediates for selection to select upon for, e.g., the bacterial flagellum.
Box
December 18, 2013 at 2:36 PM PDT
A Darwinist might object and say, "systems are so complex, we don't know a lot about nature, how can you make such assertions and reduce it to simple statistics?" I'll respond, "conversely, how can you insist evolutionism is as certain as gravity when you aren't even certain what the correct statistics should be? Further, with respect to certain distributions, more knowledge about the details of the system won't necessarily give you a better distribution. If I have fair coins, would it really help if I could even in principle know the position and momentum of every atom in the coins? No. Coins are only a proxy for the inherent randomizing forces of nature and the uncertain boundary conditions that can act on the coins over time. Since we can't determine the position and momentum of every particle in the universe, the randomness of coins, the randomness of DNA strings in a pre-biotic soup, and the randomness of amino acid strings in a pre-biotic soup are only reflective of the underlying chance mechanism already embedded in nature. When the proxies (coins, DNA) for this improbability embedded deep in nature can be reduced to approximately independent trials, the assumed distributions can be viewed as quite legitimate, assuming no outside influence like a machine or an intelligence."
scordova
December 18, 2013 at 2:35 PM PDT
The point is that selection renders all the stuff about coins moot. Yes, 500 coins is an outcome not expected by chance alone, but so what? If evolution were driven by chance alone, then we would never expect organized life forms to appear, no one is suggesting that. It is selection that makes it all possible, but you have discarded selection. It's sort of like rejecting physics or something, then everything else: gods, angels etc. are all on the table.
So you accept the definitions of chance, and the coin thought experiment in and of itself. But your objection is one of false analogy? That while one may be able to reject "chance" in the coin example, it cannot apply accurately enough to the biological discussion? No snark intended; I'm honestly curious.
TSErik
December 18, 2013 at 2:07 PM PDT
