Uncommon Descent: Serving The Intelligent Design Community

Jerad and Neil Rickert Double Down

In the combox to my last post, Jerad and Neil join forces to give us a truly pristine example of Darwinist Derangement Syndrome in action.  Like a person suffering from Tourette’s, they just don’t seem to be able to help themselves.

Here are the money quotes:

Barry:  “The probability of [500 heads in a row] actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.”

Sal to Neil:  “But to be clear, do you think 500 fair coins heads violates the chance hypothesis?”

Neil:  “If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.”

Jerad chimes in:  “There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.” And “But if 500 Hs did happen it’s not an indication of design.”

I do not believe Jerad and Neil are invincibly stupid.  They must know that what they are saying is blithering nonsense.  They are, of course, being piggishly obstinate, and I will not argue with them.  But who needs to argue?  When one’s opponents say such outlandish things one wins by default.
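
For concreteness, the number at issue is easy to compute. Here is a minimal sketch in Python (standard library only; the cosmic-scale figures are round assumptions, not measurements):

```python
# Probability of a prespecified 500-flip sequence (e.g., all heads)
# from a fair coin, computed exactly.
from fractions import Fraction

p = Fraction(1, 2) ** 500
print(float(p))  # ~3.05e-151

# Scale check: even if ~1e80 atoms (a round figure for the observable
# universe) each flipped 500 coins a trillion times per second for
# ~4e17 seconds (roughly the age of the universe), the expected number
# of all-heads runs would still be negligible.
attempts = 10**80 * 10**12 * 4 * 10**17
print(float(attempts * p))  # ~1.2e-41 expected successes
```
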

And I can’t resist adding this one last example of DDS:

Barry to Jerad:  “Is there ANY number of heads in a row that would satisfy you? Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?”

Jerad:  “A moral certainty? What does that mean?”

It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls him on it, the response is something like “me no speaka the English.”

Jerad, let me help you out:  http://en.wikipedia.org/wiki/Moral_certainty

Comments
I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of “moral certainty.” If you had, you would have learned something. You would have learned that “moral certainty” has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick itself.
You're right, I hadn't read the link. I have now. I've already said that if I got 500 heads I would be very suspicious and check everything out, but if nothing was wrong I'd conclude a fluke result. That's an explanation that depends on existing causes without the need to invoke a designer. There is no need to fall back on 'beyond reasonable doubt'. We have a perfectly good explanation for getting any prespecified sequence on the first try: it just happened.
FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue.
That's my only issue. You want to escalate the discussion so that it traipses into other realms.
(And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.)
Yup, I did address two points from different people in one post. Once again: if anyone can point out something mathematical that I've got wrong then I'll change my stance. Please restrict your criticisms to things I've actually said and addressed.

Jerad
June 24, 2013, 06:25 AM PDT
MF: There you go, dragging red herrings away to ad hominem-laced strawmen -- here the subtext of my imagined ignorance and/or stupidity, such that you want GP to help you correct me. Kindly see what I have clipped just above from Fisher's mouth, and ponder how a "natural" special zone, like a far tail of a bell curve that is hard to hit by dropping darts scattered at random (relative to the ease of hitting the bulk), aptly illustrates the problem of catching the needle in the haystack with a small blind sample. KF

kairosfocus
June 24, 2013, 06:12 AM PDT
MF: Are you familiar with what Fisher actually did, which pivoted on areas under the curve beyond a given point, relativised into p-values? [Probability being turned into the likelihood of an evenly scattered sample hitting a given fraction of the whole area. As in, exactly what the dart-dropping exercise puts in more intuitive terms.] As in, further, a reasonable blind sample will reliably gravitate to the bulk rather than the far tails? Hence, if, contrary to reasonable expectation on sampling, we are where we should not expect to be, Fisher said: "either an exceptionally rare chance has occurred or the theory [--> he here means the model that would scatter results on the relevant bell curve] is not true." The NP discussion on type I/II errors etc. comes after the relevant point. Kindly cf. the linked review article. KF

kairosfocus
June 24, 2013, 06:06 AM PDT
Gpuccio @ 18: Well, glad you recognise that the Fisher/NP/Likelihood/Bayes issue is not a strawman. Maybe you can explain that to KF? As I said, Fisherian techniques work because in a wide range of situations they lead to the same decision as a Bayesian approach and they are easier to use. However, the conceptual problems are rather severe, and they become very relevant when you are trying to tackle more philosophical subjects like ID. Here are a few of the problems:

* No justification for one rejection region over another. Clearly illustrated when you justify one-tailed as opposed to two-tailed testing, but it actually applies more generally.
* No justification for any particular significance level. Why 95% or 99% or 99.9%?
* No proof that the same significance level represents the same level of evidence in any two situations -- so there is no reason to suppose that 95% significance in one situation is a higher level of evidence than 90% significance in another.
* You can get two different significance levels from the same experiment with the same results depending on the experimenter's intentions! (See http://www.indiana.edu/~kruschke/articles/Kruschke2010TiCS.pdf)

But perhaps most important of all -- it measures the wrong thing! We want to know how probable the hypothesis is given the data. Fisher's method tells us only how probable it is that the data would have fallen into certain categories given the hypothesis. Bayesian approaches avoid all these problems -- which seem to me to be worth avoiding, and rather more substantial than an excuse to introduce my worldview. The cost is: they can be hard to calculate, and sometimes (not always) they require subjective estimates of the priors.

Mark Frank
June 24, 2013, 06:00 AM PDT
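
Mark Frank's fourth bullet (the experimenter's-intentions point, via the linked Kruschke paper) can be made concrete with the textbook stopping-rule example. The dataset below is an illustrative assumption, not one from the thread; the sketch assumes SciPy is available:

```python
# Same data -- 3 heads in 12 tosses of a putatively fair coin --
# but two different intentions give two different one-sided p-values.
from scipy.stats import binom

# Intention 1: "toss exactly 12 times." p-value = P(3 or fewer heads).
p_fixed_n = binom.cdf(3, 12, 0.5)
print(p_fixed_n)      # ~0.073: not significant at the 5% level

# Intention 2: "toss until the 3rd head appears" (it took 12 tosses).
# p-value = P(12 or more tosses needed) = P(at most 2 heads in first 11).
p_fixed_heads = binom.cdf(2, 11, 0.5)
print(p_fixed_heads)  # ~0.033: significant at the 5% level
```
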
KF @ 13:
You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances;
I was only pointing out problems with Fisherian hypothesis testing. If Fisherian hypothesis testing is not relevant then I apologise -- but then I have to wonder why you raised it?

Mark Frank
June 24, 2013, 05:38 AM PDT
BA: It seems Jerad et al need to make the acquaintance of the ordinary unprejudiced man at the Clapham bus stop. Or of the following from Simon Greenleaf, in Evidence, vol. I ch. 1, on the same basic point. KF

kairosfocus
June 24, 2013, 05:32 AM PDT
Jerad: See what I mean about tilting at strawmen -- as in, there you go again? FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue. Let me put it in somewhat symbolised terms, as saying the equivalent in English seems to make no impression:
1: Config spaces of possibilities, W, are partitioned into zones of interest that are naturally significant -- far tails, text strings in English (not repetitions or typical random gibberish), etc. -- which we can symbolise z1, z2, . . . zn, where

2: SUM on i (zi) is much, much, much less than W, putting us in the needle-in-the-haystack context.

3: Also, search resources leading to a credible blind and unguided sample size s are also incredibly less than W.

4: So, it is highly predictable/reliable -- in cases where W = 2^500 to 2^1,000 or more, all but certain -- that a blind search of W of scope s [10^84 to 10^111 samples] will come from the overwhelming bulk of W, not the special zones in aggregate.

5: That is, for relevantly large W, the overwhelming likelihood is that blind searches will come from W - {SUM on i (zi)}, not from SUM on i (zi).

6: And so, if instead we see the opposite, the BEST, EMPIRICALLY WARRANTED EXPLANATION is that such arose by choice contingency [for relevant cases where this is the reasonable alternative], not chance.

7: Which is a design inference.

8: Where also, in relevant cases, requisites of specific function, e.g. as text in English, sharply constrain acceptable possible strings from W.

9: That is, that SUM on i (zi) is much, much less than W is not an unreasonable criterion.
(And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.) KF

kairosfocus
June 24, 2013, 05:22 AM PDT
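
The numbered argument above is straightforward to put in numbers. A minimal sketch, where the zone size (a million "special" configurations) is an illustrative assumption and the search-resources figure is taken from the comment:

```python
# Union bound on s blind samples ever hitting a sparse zone of a
# 500-bit config space W.
from fractions import Fraction

W = 2**500            # all 500-coin configurations, ~3.27e150
zone = 10**6          # assumed count of "special" configs (illustrative)
s = 10**111           # upper search-resources figure from the comment

p_single = Fraction(zone, W)   # one blind sample lands in the zone
p_any = s * p_single           # P(any of s samples hits) <= s * p_single
print(float(p_any))            # ~3.1e-34
```
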
Jerad @ 5. I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of "moral certainty." If you had, you would have learned something. You would have learned that "moral certainty" has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick itself.

Barry Arrington
June 24, 2013, 05:06 AM PDT
I must say that I am really surprised by Neil Rickert. I did not expect such a position from him. From Jerad, on the other hand...

gpuccio
June 24, 2013, 05:02 AM PDT
Mark: I would say that Fisherian hypothesis testing works perfectly in the empirical sciences, provided that it is applied with a correct methodology. The hypothesis testing procedure is perfectly correct and sound, but the methodology must be correct too. We have to ask reasonable questions, and the answers must be pertinent. Frankly, the only reason that I can see for your (and others') insistence on a Bayesian approach is that you use it only to introduce your personal worldview commitments (under the "noble" word of priors), computing irrational and improbable probabilities for all that you don't want to accept (such as the existence of non-physical conscious beings). If that is the only addition that a Bayesian approach can offer us in this context, I gladly leave it to you. I am happy to discuss my and others' worldviews, but I will certainly not do that in terms of "probabilities".

gpuccio
June 24, 2013, 05:00 AM PDT
Don’t you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. Or, don’t you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are “fair” in debate — “fair” on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make ‘right.’ Do you really want to go there? KF
I started by responding to a post from Saturday discussing the probability of getting a result 22 standard deviations from the mean of a binomial distribution. That's all I'm doing, and defending what I've said when others have brought it up again in other threads. I'm happy to address partitioned configuration spaces if you want. As far as I can see, no one has actually been able to show that my mathematics is wrong. Some have attacked a strawman of what I've said. And there's been a certain amount of abuse (Jerad's DDS . . . ) which I'm doing my best to ignore. Perhaps you'd like to caution some of the other commenters about their tone and correct their mathematical errors.
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions:
Since you can't prove that negative, it's just an assertion on your part, a hypothesis that science has no need of. I look at the universe and see chaos, destruction and waste and, yes, some beauty and order. But, from a naturalistic point of view, if there were no order then I wouldn't be here to discover it. That does not mean that I can look backwards and anthropomorphically say things were/are designed. We live on this planet in this solar system in this galaxy because it happens to be one (of probably billions) that has the right combination of conditions to foster the beginning of life. But there are many, many, many other planets and solar systems where the conditions are completely hostile. If that meteor hadn't helped doom the dinosaurs, the human race might never have existed at all. Stuff happens, all the time, every day. Sometimes there's an amazing coincidence or synchronicity that makes you stop in awe. Happens to me all the time. There's no magic director back in the studio bending events so certain things happen. You are going to get coincidences and really, really improbable things happening.

Jerad
June 24, 2013, 04:54 AM PDT
Jerad repeats the oft-repeated false mantra of materialists/atheists:
Supposing we live in a Theistic universe is not science though.
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions. A few quick notes to that effect:
John Lennox - Science Is Impossible Without God - Quotes - video remix
http://www.metacafe.com/watch/6287271/

Not the God of the Gaps, But the Whole Show - John Lennox - April 2012
Excerpt: God is not a "God of the gaps", he is God of the whole show.
http://www.christianpost.com/news/the-god-particle-not-the-god-of-the-gaps-but-the-whole-show-80307/

Philosopher Sticks Up for God
Excerpt: Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, "is vastly more hospitable to science than naturalism," with its random process of natural selection, he (Plantinga) writes. "Indeed, it is theism, not naturalism, that deserves to be called 'the scientific worldview.'"
http://www.nytimes.com/2011/12/14/books/alvin-plantingas-new-book-on-god-and-science.html?_r=1&pagewanted=all

"You find it strange that I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way... the kind of order created by Newton's theory of gravitation, for example, is wholly different. Even if a man proposes the axioms of the theory, the success of such a project presupposes a high degree of ordering of the objective world, and this could not be expected a priori. That is the 'miracle' which is constantly reinforced as our knowledge expands." Albert Einstein - Goldman - Letters to Solovine, p. 131

Comprehensibility of the world - April 4, 2013
Excerpt: ...So, for materialism, Einstein's question remains unanswered. Logic and math (that is fully based on logic), to be so effective, must be universal truths. If they are only states of the brain of one or more individuals - as materialists maintain - they cannot be universal at all. Universal truths must be objective and absolute, not just subjective and relative. Only in this way can they be shared among all intelligent beings... Bottom line: without an absolute Truth, (there would be) no logic, no mathematics, no beings, no knowledge by beings, no science, no comprehensibility of the world whatsoever.
https://uncommondescent.com/mathematics/comprehensibility-of-the-world/

The Great Debate: Does God Exist? - Justin Holcomb - audio of the 1985 debate available on the site
Excerpt: The transcendental proof for God's existence is that without Him it is impossible to prove anything. The atheist worldview is irrational and cannot consistently provide the preconditions of intelligible experience, science, logic, or morality. The atheist worldview cannot allow for laws of logic, the uniformity of nature, the ability for the mind to understand the world, and moral absolutes. In that sense the atheist worldview cannot account for our debate tonight...
http://theresurgence.com/2012/01/17/the-great-debate-does-god-exist

Random Chaos vs. Uniformity Of Nature - Presuppositional Apologetics - video
http://www.metacafe.com/w/6853139

"Clearly then no scientific cosmology, which of necessity must be highly mathematical, can have its proof of consistency within itself as far as mathematics go. In absence of such consistency, all mathematical models, all theories of elementary particles, including the theory of quarks and gluons... fall inherently short of being that theory which shows in virtue of its a priori truth that the world can only be what it is and nothing else. This is true even if the theory happened to account with perfect accuracy for all phenomena of the physical world known at a particular time." Stanley Jaki - Cosmos and Creator - 1980, p. 49

Taking God Out of the Equation - Biblical Worldview - by Ron Tagliapietra - January 1, 2012
Excerpt: Kurt Gödel (1906-1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties. 1. Validity . . . all conclusions are reached by valid reasoning. 2. Consistency . . . no conclusions contradict any other conclusions. 3. Completeness . . . all statements made in the system are either true or false. The details filled a book, but the basic concept was simple and elegant. He summed it up this way: "Anything you can draw a circle around cannot explain itself without referring to something outside the circle -- something you have to assume but cannot prove." For this reason, his proof is also called the Incompleteness Theorem. Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation. Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is "all in all" (1 Corinthians 15:28), "the beginning and the end" (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3).
http://www.answersingenesis.org/articles/am/v7/n1/equation#
etc., etc.

bornagain77
June 24, 2013, 04:25 AM PDT
PPS: Those interested in following up the rabbit-trail discussion may wish to go here for a review. I am highlighting that, unlike arbitrarily chosen, not naturally evident target zones, far tails of bell distributions are naturally evident special zones that illustrate the effect of partitioning a config space into clusters of drastically different statistical weight, and then searching blindly with restricted resources.

kairosfocus
June 24, 2013, 03:27 AM PDT
PS: Remember, target zones of interest are not merely arbitrarily chosen groups of outcomes -- another fallacy in the strawman arguments above. E.g. functional configs such as 72+ ASCII-character text in English are readily recognisable and distinct from either (i) repeating short patterns: THETHE. . . . THE, and (ii) typical, expected-at-random outcomes: GHJDXTOU%&OUHYER&KLJGUD . . . HTUI. There seems to be a willful refusal to accept the reality of functionally specific configs showing functional sequence complexity, FSC, that are readily observable as distinct from RSC or OSC.

kairosfocus
June 24, 2013, 03:13 AM PDT
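
A crude numerical illustration of the PS's first two categories, using compressed size as a proxy (an assumption of this sketch; compression captures order versus randomness, not function, which is exactly why FSC is defined by observed function rather than by compressibility):

```python
# zlib compressed size separates a repeating pattern (OSC-like) from
# random gibberish (RSC-like). It does NOT detect function: English
# text lands near the random end by this measure.
import zlib
import random
import string

random.seed(1)  # reproducible gibberish
samples = {
    "ordered (THETHE...)": "THE" * 24,
    "random gibberish   ": "".join(random.choices(string.ascii_uppercase, k=72)),
    "English text       ": "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG NEAR THE OLD RIVER BANKS NOW",
}
for label, s in samples.items():
    print(label, "len:", len(s), "compressed:", len(zlib.compress(s.encode())))
```
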
MF: You are demonstrably wrong, and your snipping out of context allowed you to set up a strawman and knock it over. You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances; that is an easily observed fact, as doing the darts and charts exercise will rapidly show EMPIRICALLY -- a 4 - 5 SD tail (as discussed) will be very thin indeed. Discussions of NP etc. and the shaving off of a slice from the bulk -- which has no natural special significance here -- serve only as a red herring distraction from a point that is quite plain and easily shown empirically. Thence, you seem to have used the red herring led out to a strawman to duck the more direct issue on the table, where this applies to the sort of beyond-astronomical config spaces, relatively tiny special, known attractive target zones and small blind samples we are dealing with. The suspect pattern continues. Do better next time, please. KF

kairosfocus
June 24, 2013, 02:52 AM PDT
Jerad: Don't you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. Or, don't you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are "fair" in debate -- "fair" on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make 'right.' Do you really want to go there? KF

kairosfocus
June 24, 2013, 02:44 AM PDT
It probably does not help that old-fashioned Fisherian hypothesis testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes, it is not as cool as Bayesian statistics etc., but there is a reason why it works well in practice.
Fisherian hypothesis testing has fallen out of fashion because it has become widely recognised that it is wrong. It only worked for all those years because in a wide range of circumstances it leads to much the same decisions as a Bayesian approach, and it was much easier to use. With the advent of computers and clearer thinking about the foundations of statistics this is less and less necessary. In fact, in many contexts pure Fisherian hypothesis testing fell out of favour several decades ago and was superseded by the Neyman-Pearson approach, which requires an alternative hypothesis to be clearly articulated (and is thus moving in the direction of Bayes). Without the NP approach you cannot calculate such vital parameters as the power of the test. Whether you use a pure Fisherian or NP approach, there are deep conceptual problems. Take your example of throwing darts at a Gaussian distribution. What that shows is that you are more likely to get a result between 0 and 1 SD than between 1 and 2 SD, and so on. However, this does not in itself provide a justification for placing the rejection region at the extremities. Fisherian thinking justifies the rejection region on the basis of the probability of hitting it being less than the significance level. But you can draw such a region anywhere on your Gaussian distribution. Near the middle it would be a much narrower region than near the tails, but it would still fall below the significance level. The only reason why using the tails of the distribution as a rejection region usually works is that the alternative hypothesis almost always gives a greater likelihood to this area than it does to the centre. But there has to be an alternative hypothesis. Indeed, in classical hypothesis testing it is common to decide that the rejection region is just one tail and not both -- single-tailed hypothesis testing. How is this decision made? By deciding that the only plausible alternative hypotheses lie on one side of the distribution and not the other. I hope you are not going to ignore this corrective :-)

Mark Frank
June 24, 2013, 02:39 AM PDT
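
Mark Frank's central claim here, that a sub-5% region can be drawn anywhere on the curve and not just in the tails, checks out numerically. A sketch assuming SciPy is available:

```python
# Two regions of a standard Gaussian, each with ~5% probability:
# the conventional two-tailed rejection region, and a thin sliver
# straddling the mean. Improbability alone doesn't pick the tails.
from scipy.stats import norm

alpha = 0.05
tail_mass = 2 * norm.sf(1.96)              # beyond +/-1.96 SD: ~0.05

w = norm.ppf(0.5 + alpha / 2)              # half-width ~0.0627 SD
central_mass = norm.cdf(w) - norm.cdf(-w)  # central sliver: exactly 0.05

print(tail_mass, central_mass)
```
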
To be able to have a ‘fair coin flip’ in the first place presupposes that we live in a Theistic universe where what we perceive to be random events are bounded within overriding constraints that prevent complete chaos from happening. Chaos such as the infamous Boltzmann’s brain that would result in a universe where infinite randomness was allowed to rule supreme with no constraint.
Supposing we live in a Theistic universe is not science though.
FYI, BA spoke of moral certainty in a PROBABILISTIC mathematical context, where it is relevant to the application of the calcs. KF
That makes no sense to me whatsoever. If something is possible and it happens, and it looks like there was no intervention or bias, then what do morals have to do with it?

Jerad
June 24, 2013, 02:34 AM PDT
F/N: Let me re-post a clip from comment 48 in the previous thread, which was studiously ignored by Jerad, KeithS, Neil Rickert et al. in their haste to make their favourite talking points.
_______________________
[Clipping 48 in the DDS mendacity thread, for record:]

>>It seems people have a major problem appreciating: (a) configuration spaces clustered into partitions of vastly unequal statistical weight, and (b) BLIND sampling/searching of populations under these circumstances.

It probably does not help that old-fashioned Fisherian hypothesis testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes, it is not as cool as Bayesian statistics etc., but there is a reason why it works well in practice.

It is all about needles and haystacks. Let's start with a version of an example I have used previously: a large plot of a Gaussian distribution on a sheet of bristol board or the like, backed by a sheet of bagasse board or the like. Mark it into 1-SD-wide stripes, say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board. Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION. As a result, as we build up the set of dart-drops, we will see a pattern where the likelihood of getting hit is proportionate to area, as should be obvious. That immediately means that the bulk of the distribution, near the mean-value peak, is far more likely to be hit than the far tails. For exactly the same reason, if one blindly reaches into a haystack and pulls a handful, one is going to have a hard time finding a needle in it. The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial. In short, a small sample of a very large space that is blindly taken will, by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones. (BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.)

The point of Fisherian testing is that skirts are special zones and take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence of a suspicious sample not being by chance, based on its tendency to go for the far skirt.

How does this tie into the design inference? By virtue of the analysis of config spaces — populations of possibilities for configurations — which can have W states, within which we then look at small, special, specific zones T. Those zones T are at the same time the sort of things that designers may want to target: clusters of configs that do interesting things, like spell out strings of at least 72 - 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 to 1,000] or the like. 500 bits takes up 2^500 possibilities, or 3.27*10^150. 1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities. To give an idea of just how large these numbers are, I took up the former limit, and said: now, our solar system's 10^57 atoms (by far and away mostly H and He in the sun, but never mind) can, over its lifespan, go through a certain number of ionic chemical reaction time states, each taking 10^-14 s.
Where our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. (Now you know why warp drives, space folding, etc. are so prominent in sci-fi literature.) Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s, for a reasonable estimate of the solar system's lifespan. Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy. Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it, etc. Notice: there is no particular shortage of special zones here, just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw.

Now, your task, should you choose to accept it, is to take a one-straw-sized blind sample of the whole. Intuition, backed up by sampling theory — without need to worry over making debatable probability calculations — will tell us the result straight off. By overwhelming likelihood, we would sample only straw. That is why the instinct that getting 500 H's in a row, or 500 T's, or alternating H's and T's, or ASCII code for a 72-letter sequence in English, etc., is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound. And this is a simple, toy-example case of a design inference on FSCO/I as sign. A very reliable inference indeed, as is backed up by literally billions of cases in point.

Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors. Over and over and over again, in fact. And in fact, here is Wm A Dembski in NFL:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
(And Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.)

Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread? First, it cannot be because of lack of capacity to access and understand the actual argument; we are dealing with those with training in relevant disciplines. Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years. Nor is such a consistent error explicable by blind chance; chance would make them get it right some of the time, by any reasonable finding, given their background. So, we are left with ideological blindness, multiplied by willful neglect of the duty of care to do due diligence to get facts straight before making adverse comment, and possibly willful, knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it. Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion. Mendacity, in one word. If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake. The alignment is too perfect. Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that. Sad, but not surprising. This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating. Where, enough is enough.>>
______________

Now, just prove me wrong by addressing the merits with seriousness. But I predict that we will see yet more of the all-too-commonly-seen willful ignoring or evasive side-tracking. Please, please, please, prove me wrong. KF

kairosfocus
June 24, 2013, 01:32 AM PDT
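
The solar-system arithmetic in the clip can be checked directly. A sketch; the 10^17-second lifespan is a round assumption standing in for the clip's "reasonable estimate":

```python
# Observations possible: 10^57 atoms, one per 10^-14 s, for ~10^17 s,
# versus the 2^500 configurations of 500 coins.
atoms = 10**57
obs_per_atom = 10**14 * 10**17       # observations per atom over the lifespan
observations = atoms * obs_per_atom  # 10^88 total

states = 2**500                      # ~3.27e150
print(f"{observations:.2e} observations")              # 1.00e+88
print(f"{states:.2e} states")                          # 3.27e+150
print(f"fraction sampled: {observations / states:.2e}")  # ~3.06e-63
```
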
Jerad: FYI, BA spoke of moral certainty in a PROBABILISTIC mathematical context, where it is relevant to the application of the calcs. KF

kairosfocus
June 24, 2013, 01:29 AM PDT
To be able to have a 'fair coin flip' in the first place presupposes that we live in a Theistic universe where what we perceive to be random events are bounded within overriding constraints that prevent complete chaos from happening. Chaos such as the infamous Boltzmann's brain that would result in a universe where infinite randomness was allowed to rule supreme with no constraint.

Proverbs 16:33 The lot is cast into the lap, but its every decision is from the LORD.

Evolution and the Illusion of Randomness - Talbott - Fall 2011
Excerpt: In the case of evolution, I picture Dennett and Dawkins filling the blackboard with their vivid descriptions of living, highly regulated, coordinated, integrated, and intensely meaningful biological processes, and then inserting a small, mysterious gap in the middle, along with the words, "Here something random occurs." This "something random" looks every bit as wishful as the appeal to a miracle. It is the central miracle in a gospel of meaninglessness, a "Randomness of the gaps," demanding an extraordinarily blind faith. At the very least, we have a right to ask, "Can you be a little more explicit here?"
http://www.thenewatlantis.com/publications/evolution-and-the-illusion-of-randomness

Randomness - Entropic and Quantum
https://docs.google.com/document/d/1St4Rl5__iKFraUBfSZCeRV6sNcW5xy6lgcqqKifO9c8/edit

See also presuppositional apologetics.

bornagain77
June 23, 2013, 11:21 PM PDT
If you get 500 (or 50 or 5000) heads in a row you should dismiss the hypothesis that this is a fair coin toss. But not because it is so improbable. That cannot be the reason because, as we all accept, all sequences are equally improbable. The reason is that there are so many other possible hypotheses which give that outcome a vastly greater likelihood -- some of which involve some element of design, some of which do not. For example, the tossing method (whatever it is) might have got stuck, or it might be a coin with two heads, or it might be some trick of Derren Brown's.

Mark Frank
June 23, 2013, 11:09 PM PDT
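
Mark Frank's likelihood point can be put in numbers. A sketch; the one-in-a-billion prior for trickery is an assumption chosen for illustration, not a figure from the thread:

```python
# Likelihood ratio for 500 heads: two-headed coin vs fair coin,
# and the posterior odds under a deliberately tiny prior for trickery.
from fractions import Fraction

lik_fair = Fraction(1, 2) ** 500    # P(500 heads | fair coin)
lik_rigged = Fraction(1)            # P(500 heads | two-headed coin)
bayes_factor = lik_rigged / lik_fair
print(float(bayes_factor))          # ~3.27e150 in favour of "rigged"

prior_odds = Fraction(1, 10**9)     # assumed 1-in-a-billion odds of trickery
posterior_odds = bayes_factor * prior_odds
print(float(posterior_odds))        # ~3.27e141: "fair coin" is untenable
```
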
Barry, you asked me about a moral certainty in a mathematical context.

Is there ANY number of heads in a row that would satisfy you? Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?

That is what I was questioning. My morals are fine, but I don't tend to apply them in mathematical situations.

Jerad
June 23, 2013, 10:35 PM PDT
I commend the candor of all liars. How else would we know they are liars?

Mung
June 23, 2013, 08:58 PM PDT
I commend Neil and Jerad's candor. Of course chance could be an explanation, but if one will admit odds that remote, one could also admit the possibility of God's existence with comparably remote odds. Jerry Coyne rates himself a 6.9 on a scale of 7 for the certainty of God's non-existence. So he says God has roughly a 1.4% chance of existence, based on his own estimation. So Jerry Coyne would sooner believe God exists than that a set of 500 coins all heads was the result of chance.

scordova
June 23, 2013, 08:45 PM PDT
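
Sal's comparison, in numbers (treating Coyne's "6.9 out of 7" as a linear probability is the comment's reading, kept here as an assumption):

```python
# Coyne's self-rated certainty vs the chance of 500 fair-coin heads.
p_god = (7 - 6.9) / 7        # ~0.014, per the comment's reading
p_500_heads = 0.5 ** 500     # ~3.05e-151

print(p_god / p_500_heads)   # ~4.7e148: the coin run is far less likely
```
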
I hope they do :) Shall I post my local casino?

Mung
June 23, 2013, 08:00 PM PDT
I hope Jerad and Neil don't play poker for money.

Blue_Savannah
June 23, 2013, 06:30 PM PDT