Uncommon Descent Serving The Intelligent Design Community

Jerad and Neil Rickert Double Down


In the combox to my last post, Jerad and Neil join to give us a truly pristine example of Darwinist Derangement Syndrome in action.  Like a person suffering from Tourette’s, they just don’t seem to be able to help themselves.

Here are the money quotes:

Barry:  “The probability of [500 heads in a row] actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.”

Sal to Neil:  “But to be clear, do you think 500 fair coins heads violates the chance hypothesis?”

Neil:  “If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.”

Jerad chimes in:  “There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.” And “But if 500 Hs did happen it’s not an indication of design.”

I do not believe Jerad and Neil are invincibly stupid.  They must know that what they are saying is blithering nonsense.  They are, of course, being piggishly obstinate, and I will not argue with them.  But who needs to argue?  When one’s opponents say such outlandish things one wins by default.
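For concreteness, the numbers both sides keep invoking are easy to compute. The short Python sketch below is purely illustrative (the 220-280 "unremarkable mix" window is an arbitrary choice made here, not anything from the discussion): any one exact 500-toss sequence has probability 2^-500, but the class "all heads or all tails" contains only 2 of the 2^500 sequences, while the broad middling class contains nearly all of them.

```python
from math import comb

# Probability of any one specific 500-toss sequence under a fair coin.
p_specific = 0.5 ** 500                      # ~3.05e-151, the same for every sequence

# Probability of landing in the 2-member class "all heads or all tails".
p_all_same = 2 * 0.5 ** 500

# Probability of landing in the broad class "between 220 and 280 heads",
# which contains the overwhelming majority of the 2^500 sequences.
p_typical = sum(comb(500, k) for k in range(220, 281)) / 2 ** 500

print(f"any one exact sequence : {p_specific:.3e}")
print(f"all heads or all tails : {p_all_same:.3e}")
print(f"220-280 heads          : {p_typical:.4f}")   # roughly 0.99
```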

And I can’t resist adding this one last example of DDS:

Barry to Jerad:  “Is there ANY number of heads in a row that would satisfy you? Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?”

Jerad:  “A moral certainty? What does that mean?”

It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls them on it, their response is something like “me no speaka the English.”

Jerad, let me help you out:  http://en.wikipedia.org/wiki/Moral_certainty

Comments
But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing).
Well, I guess it's a good thing no one is making that claim!! As far as evolution is concerned, only the genetic mutations and some environmental conditions and culls are random. Jerad
gpuccio:
It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don’t like that, but really I don’t believe that the ID folks can be considered responsible for that.
First: nobody has refuted ID. Second: I think that ID folks are at the very least partially responsible. Consider, for example, the notorious Wedge document. Third: Creationist "science" journals, at least, have a long history of requiring that any contributors sign up to a statement of faith, and at least some prominent ID proponents belong to academic institutions that require such a commitment. The same is not true of what you call "the academy". Even Dembski was made to retract a statement he made about the Flood by his employer. Behe, on the other hand, remains employed at an "academy" institution. Having said all that: it is time the war ended. That is why I started my own site - so that we could try to get past the tribalism and down to what really divides (and often, to our surprise, unites) us. I'm not always successful in suppressing the skirmishes, but I think we do pretty well. I'd be honoured if you would occasionally drop by. Elizabeth B Liddle
Mark:
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits.
This. And I'd add that it's what ID proponents do all the time - particularly when they express astonishment that materialists should believe something so unlikely! There are always far more unrejected models from a Fisherian test than rejected models. How we decide between them depends on how much weight we give those unrejected alternatives. The great thing about using a Bayesian approach is that it forces you to make your priors explicit. The result is of course less conclusive, but so it should be. Bayes forces us to confront what we still do not know. It stops us making "of the gaps" arguments, whether for materialist or non-materialist explanations, and, above all, tells us the probability that we are interested in - that our hypothesis is correct - whereas a Fisher p value simply tells us the probability of our data, given the null. Not very informative, unless we have an extremely restricted relevant null! (such as "fair coin") Elizabeth B Liddle
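As a concrete illustration of the point about explicit priors, here is a minimal Bayesian comparison in Python. The prior values and the assumption that a rigged setup always yields heads are invented for the sketch; the point is only that, once the priors are on the table, the posterior is driven by the ratio of the likelihoods.

```python
from fractions import Fraction

# Invented priors: we start out almost certain the setup is fair.
prior_fair   = Fraction(999999, 1000000)
prior_rigged = Fraction(1, 1000000)          # "someone fiddled it" seems very unlikely a priori

# Likelihood of observing 500 heads under each hypothesis.
lik_fair   = Fraction(1, 2) ** 500           # P(500 heads | fair coin, fair tossing)
lik_rigged = Fraction(1, 1)                  # assume, for the sketch, a rigged setup gives all heads

posterior_rigged = (prior_rigged * lik_rigged) / (
    prior_rigged * lik_rigged + prior_fair * lik_fair)

print(float(posterior_rigged))               # ~1.0
print(float(1 - posterior_rigged))           # ~3e-145: what is left for "fair" after the data
```

Even with a one-in-a-million prior on skulduggery, the data swamp the prior, which is the "relative probability" argument being made in this thread.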
From a purely statistical point of view, 1 instance of 500 heads is practically required. The same as 1 instance of 500 (consecutive) tails. But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing). During the Vietnam War, an American infantryman who was aiming at a Viet Cong guerrilla had the odd experience of "catching" a bullet from the guerilla straight down the barrel of his M16. Since the 7.62mm round is larger than the 5.56mm barrel, it plugged the end. I've seen the photograph. Considering the small size of both the bullet and the gun barrel and the very precise angular alignments required, the probability of this happening is infinitesimally small. But billions of bullets were fired over a period of many years. So, odd things happen every day by chance. But it's been a long time since we stopped believing that weather occurs randomly, or death from infection or the alignment of the Sun and Moon to produce an eclipse. mahuna
gpuccio,
I want to thank you for your contribution to this discussion, which has been constructive and stimulating.
I think we showed what can be accomplished: a greater understanding. And I'm pleased to have talked with you and, hopefully, helped some others understand both our positions. Jerad
Gpuccio, I am delighted to agree to disagree on so many points. And you are remarkable in being one of the few IDists who is prepared to examine what the ID hypothesis entails. But I am disappointed in this:
I believe that, when I use Fisherian reasonings here, I know what I am doing. I will accept any valid objection to my specific reasonings, while I am not interested in a generic refusal of Fisherian reasoning in itself.
It seems that as long as you know how to use the Fisher process, and it seems to you to be working in practice, you are not interested in why it is successful. This means you are always at risk of coming to wrong conclusions (and in a stochastic world you may not know they are wrong). As I said in #67, most published research findings are wrong and the use of Fisher processes is behind a lot of it. Luckily most published research findings are also ignored. You write:
There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don’t.
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits. Mark Frank
Jerad: I want to thank you for your contribution to this discussion, which has been constructive and stimulating. My final summary just wanted to stress the essential difference between our positions, not deny the many things we have agreed upon.

Yes, if you want to put it that way, I am absolutely "biased" against accepting order, and especially function, that is completely improbable as a "fluke". That is, IMO, against any intuition of truth and any common sense. I will not do it. My epistemology is obviously different from yours. Only, I would not call that a "bias", but simply an explicit cognitive choice. If in doubt about the terminology, we can always turn Bayesian and call it a "prior" :) .

My alternative hypothesis, for order and function, has always been design: the intervention of consciousness. I have detailed the many positive reasons why that is perfectly reasonable, IMO. However, for simple "order" many other alternative hypotheses are certainly viable, and must be investigated thoroughly. It is my firm conviction that for complex function, instead, any non design explanation will be found to be utterly lacking. The neo darwinian theory is a good example of that failure. I am really sure, in my heart and mind, that only consciousness can generate dFSCI.

Finally, I would not be so disappointed that we have been, in a way, "left alone" here. It is a general, and perfectly acceptable, fact that as a discussion becomes more precise and technical (and therefore, IMO, much more interesting and valid) the "general public" becomes less interested. No harm in that. That's why I am, always have been, and always will be, a "minority guy".

This is a blog. While I personally refrain from discussing here topics that are not directly or indirectly pertinent to the ID theory (especially religious topics), it's perfectly fine with me that others love to do that. But the ID discussion is another thing. It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don't like that, but really I don't believe that the ID folks can be considered responsible for that.

ID is a very powerful scientific paradigm. It will never be "brushed away". Either the academy accepts to seriously give it the intellectual role it deserves, or the war will go on, and it will ever more become a war against the academy. That is the final result of dogmatism and intellectual intolerance. gpuccio
gpuccio,
Just one thing: to reject the null, you need not necessarily one alternative hypothesis. In general, the alternative hypothesis is simply "not H0", that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don't. So for me, once rejected the null, the duty remains to choose the best "non random" explanation. As I have tried to show in my example. Unfair coins, a man in the room, or some trick from the child at the input, or some strange physical force, are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no more useful at this level.
We agree on much. I'm still not sure what your bottom line alternate hypothesis is, when all other explanations have been ruled out. But this has been one of my criticisms for a long time. And it's odd that you choose to pick the best non-random explanation. Sounds like you have a bias!!
“You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy.” still summarizes well our differences.
I find that a bit disappointing after our informative and insightful conversation as it brushes aside the huge amount that we agree on, that we'd both do our utmost to try and root out any detectable bias. And I find it disappointing that you cannot state a clear final conclusion. "There must be another explanation" is pretty wishy-washy but that's your call. What I find very disappointing is that most of the commentators at UD have lost interest in the whole discussion and are now off chasing other perceived slurs against ID or imagined examples of stupid science. There was lots of shouting and finger pointing and then off they go, not willing to stick around for some substantive conversation. You seem actually interested in learning but I'm not so sure about many of your fellows. Jerad
Mark: I find your last post(s) very reasonable. I can agree almost about all. To be more clear:

Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking – although it also goes badly wrong under a wide range of conditions as well.

I believe it is essentially a problem of correct methodology, whatever statistical approach one uses. I believe that, when I use Fisherian reasonings here, I know what I am doing. I will accept any valid objection to my specific reasonings, while I am not interested in a generic refusal of Fisherian reasoning in itself.

The most important point about Bayesian thinking is that it is comparative. It requires you not only to think about the hypothesis you are disproving, but also about the alternative you propose instead. IDists don’t like this approach because it entails exploring the details of the design hypothesis. But as a competent statistician you will know that it is poor practice to dismiss H0 without articulating the alternative hypothesis. You don’t have to adopt a completely Bayesian approach (although that would be ideal). Even Neyman-Pearson requires articulating the alternative hypothesis.

OK, I am happy that I do not have to adopt a Bayesian approach (except when computing specificity and sensitivity). And I perfectly agree on providing one or more alternative hypotheses. But, for me, the null of a random effect is rejected on statistical grounds, while the "positive" explanation must be compared with a reasoning that goes well beyond any statistical consideration, Bayesian or not. I believe this is a fundamental difference in our approach.

As you can see, if you have read my example of the dark room, I have proposed many possible necessity explanations for the 500 heads sequence in that scenario, and choosing between them requires a lot of scientific reasoning, and will in some way be subjective in the end. That's why, in my epistemology, scientific theories are chosen by each one of us according to their being "the best explanation" for the person who chooses it. That's why many different scientific explanations of the same facts can happily live together, for shorter or longer spans of time, passionately "defended" by different groups of followers. That's exactly my idea of science.
The answer is simple and probably acceptable to you. The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet. Therefore, it is plausible that someone might want to make the string come out that way for their purposes. It may not be very likely that such a person exists and that they could fiddle the results – but it only has to be marginally likely to overwhelm the hypothesis that it was a fair coin. But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined but seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated – but if it had then I suspect it would be absurdly implausible.
Yes, I can accept most of that, except obviously the last two statements. In particular, I like very much your reference to consciousness as the origin of specification: "The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet." I think you know my views, but as you give me an occasion to summarize them, I will do that as an answer to your last two statements:

1) I would definitely say that the RV part of the neo darwinian hypothesis is perfectly comparable to the fair coin hypothesis (excluding the effect of NS). It is a random walk, and not a coin toss, but the distribution of probabilities for unrelated states is grossly uniform, as I have tried to show many times. That part, therefore, can be accepted or rejected as a null, but we need a metric to do that. I believe that dFSCI is a valid metric for that. Please, consider that such an "evaluation of the null" can be done not only for a whole transition to a new functional protein, but also for the steps of that transition, if and when those steps are explicitly offered (I mean, obviously, the naturally selectable intermediaries). IOWs, the dFSCI can be applied to any section of the algorithm which implies a purely random variation, and that's why it is a very valuable concept. I have many times expressed many reasons to reject NS as a valid component of the process, at least for basic protein domains, most recently in the discussion with Elizabeth on another thread here. I hope you have read those posts.

2) I hope you can admit that I have always tried to detail my alternative hypothesis as much as possible here. I believe that the empirical observation that only humans seem to be able to generate dFSCI, and in great quantity, while the rest of the universe seems incapable of that (always suspending our judgement about biological information) has a simple fundamental explanation: dFSCI originates only in subjective conscious processes, including the recognition of meaning, the feeling of purpose, and the ability to output free actions. That's why I always relate ID to the problem of consciousness, and that's why my "faith" in ID is so related to my convinced rejection of the whole strong AI theory. The relation of the design process to those subjective experiences is certainly subjectively confirmed by what we can observe in ourselves when we design things. It is objectively confirmed, although only indirectly, by the unique relationship between conscious design processes and the objective property of dFSCI in objects.

So, my alternative hypothesis is simple: if an object exhibits dFSCI, the null hypothesis of a random generation of that information can be safely rejected. If reasonable necessity explanations of other kinds are not available, the best explanation is that some conscious being outputted that specific functional form to the object.

I have gone to greater detail, many times, stating:

a) That IMO for biological information humans are not a viable answer, and the existing data suggest that the designer(s) could be some non physical conscious being. Aliens are an alternative, but I am not a fan of that theory.

b) That the existence of non physical conscious beings has been believed by most human beings for most recorded time. It still is, today. It does not seem such a ridiculous "prior", unless you decide that you and the minority of others who so fiercely reject it today are a superior elite, appointed by God to hold the truth (ehm, no, here something did not work :) ).
c) That such non physical conscious beings could have many forms, not necessarily that of a monotheistic creator. Indeed, the act of design of biological information is not in any way necessarily a "creation". It is rather a modeling, more similar, in form, to what we humans do every day. You may have noticed that I strictly avoid, here, religious arguments, of any kind.

d) That I have repeatedly admitted that, while it is perfectly true that a design inference does not require any knowledge about the designer, except for the hypothesis that he is a conscious intelligent being and can manipulate matter, it is equally true that, once a design inference is made, and even momentarily accepted, we have a duty to ask all possible questions about the designer and the design process, and verify if answers, even partial, can be derived from the existing data.

e) That I have many times admitted that some of those answers can certainly be given. For example, we can certainly try to answer, as our accumulation of biological knowledge increases, the following questions:

e1) When and where does design appear in natural history? dFSCI offers a simple method to look for those answers: the emergence of new dFSCI will be a clue to a design process. OOL, the transition from prokaryotes to eukaryotes, the Cambrian explosion are very good candidates for a localization, both in time and space, of the design process.

e2) How does the designer model the information into matter? I have suggested different possible mechanisms, all of them with different, recognizable consequences in the genome and proteome history. Guided mutation and intelligent selection are the most obvious alternatives. Each of them can come in different forms, and both can work together.

e3) Finally, it is legitimate, although not easy, to ask questions about the purposes of the designer. That can include both wide range purposes and local purposes. I have argued many times that, from what we observe, a desire to express ever new different functions explains the variety we observe in biology much better than the darwinian concept of "reproductive fitness".

So, that's all, or most of it, in a nutshell. Thank you for the kind attention :) gpuccio
Jerad: OK, your point of view is clear enough. I maintain mine, which I hope is clear too. Just one thing: to reject the null, you need not necessarily one alternative hypothesis. In general, the alternative hypothesis is simply "not H0", that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don't. So for me, once rejected the null, the duty remains to choose the best "non random" explanation. As I have tried to show in my example. Unfair coins, a man in the room, or some trick from the child at the input, or some strange physical force, are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no more useful at this level. But these are trivial points. I believe that the following: "You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy." still summarizes well our differences. gpuccio
Perhaps I should be sure my views are clear. If the null hypothesis is: the coin flipping process is fair, i.e. truly fair. And the alternate hypothesis is: the coin flipping process is not fair. Then I'd most likely reject the null hypothesis if we got a string of 500 heads, depending on the confidence interval you specified.

It all depends on what your alternate hypothesis is. I sound like Bill Clinton now. Sigh. If your alternate hypothesis is: the system is biased, then I'd most likely reject the null hypothesis, again depending on the confidence interval. If your alternate hypothesis is: there's a guy in Moscow who is psychically affecting the coin tosses, then . . . I think you'd better use a Bayesian approach where other factors are introduced. What is the plausibility that psychic powers can do such a thing? Could the man in Moscow be getting the signal the coin was being flipped in time to affect its outcome?

If you're going to make statistical arguments then be precise and follow the procedures. Give me a clear and testable alternate hypothesis. And, ideally, a confidence interval you'd like to use. But, remember, there is no such thing as a 100% confidence interval. And remember what a confidence interval tells you: that your rejection of the null hypothesis is blah% sure to not be down to a chance result. And that is based on the distribution of the variable being tested.

You see confidence intervals all the time in poll results. Mostly you don't see the confidence percentage reported, which is just sloppy journalism. Fairly obviously, the higher the confidence the bigger the sample size has to be. So I'd really like to get that nailed down as well. Jerad
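For what it's worth, here is what the classical test described above looks like in code (a sketch only; the two-sided p-value construction and the 1% significance level are choices made here for illustration). Note that it only licenses rejecting "the process is fair"; it says nothing by itself about which alternative is correct.

```python
from math import comb

def binomial_two_sided_p(heads: int, n: int) -> float:
    """Two-sided p-value for `heads` successes in `n` tosses under H0: fair process.
    Sums the probability of every outcome whose head count is at least as lopsided."""
    k_extreme = max(heads, n - heads)
    tail = sum(comb(n, k) for k in range(k_extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

alpha = 0.01                                  # significance level chosen in advance
p_value = binomial_two_sided_p(500, 500)      # ~6.1e-151
print(p_value)
print("reject H0: fair process" if p_value < alpha else "fail to reject H0")
```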
a) We seem to agree that a series of 500 heads is not something we “expect” from a random system, even if it has the same individual probability of any other sequence of that length.
Agreed.
b) We agree that the reason for that is that we are not comparing the probabilities of each single sequence, but rather the probabilities of two very different subsets of the search space. Do we agree on that?
Um . . . not really. Since every possible sequence of Hs and Ts is equally likely, it is only our pattern seeking mental processes that trump our statistical reasoning powers most of the time. I'm just like you, 500 Hs would be a real WTF moment for me. And I'd probably spend days or months or even years trying to be sure there was no bias before I accepted an explanation of chance. But really, 500 Hs is just as likely as any other particular sequence. But, clearly, a vast majority of the time we'll get a jumbled sequence of Hs and Ts and won't find those outcomes surprising in the least.
And, if I couldn’t find one, if I was very sure the whole procedure was ‘fair’ then I’d say the result was a fluke. You decide whether I’m being ‘empirical’. Maybe you are easily satisfied. I would look further for an explanation.
Oh no, I'd have to be very, very, very, VERY sure there was no bias before I accepted a chance explanation.
Who is the more empirical here? Well, I am happy there is still something we don’t agree about. You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy. Maybe I am becoming a skeptic, after all.
Maybe. :-)
That’s what I mean by “empirically impossible”: something that is not logically impossible, but rather so extremely improbable that I will never accept it as a random outcome, and will always look for a different explanation.
I'd just stick with extremely improbable which is less confusing.
c) We definitely don’t agree about Hamlet. Ah! I feel better, after all.
A rose by any other name?
For the first time: you must be mad, at best. So, you would be “extremely suspicious” of the random emergence of Hamlet’s text, but in the end you can accept it? Good luck, my friend…
After more scrutiny than even I can imagine.
I would not be “extremely suspicious”: I would be absolutely sure that the outcome is not the product of a random system. And I would never, never even entertain the idea of a fluke. Well, anyone can choose his own position on that. All are free to comment on that.
Fair enough. There are things in this world that cannot be explained by your philosophy.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance. That is a right request. So, I will propose a scenario, maybe a little complicated, but just to have the right components at their place.
Good.
Let’s say that there is a big closed room, and we know nothing of what is in it. On one wall there is an “input” coin slot. On another wall there is an “output” coin slot, where a coin can come out and rest quietly on a frame. . . . . The null hypothesis is very simple: each coin is taken randomly from the bag, randomly inserted into the input coin slot, and it comes out from the output coin slot in the same position it had when it was inserted into the input slot. We can also suppose that something happens within the dark room, but if so, that “something” is again a random procedure, where each side of the coin still has 0.5 probability to be the upward side in the end. For example, each coin could be randomly tossed in the dark room, and then outputted to the output slot as it is. IOWs, the null hypothesis, as usual, is that what we observe as an outcome is the result of random variation.
That's not quite the normal way of stating it. I'd just say the null hypothesis is that the coin and procedure are fair, i.e. random. But that's just quibbling.
Now, to be simple, we are sure that all the coins are fair, and that there is no other “interference” out of the dark room. So, our whole interest is focused on the main question: What happens in the dark room?
So, what is your alternate hypothesis? The thing you're testing?
You ask for a level of significance. There is really no reason that I give you one, you can choose for yourself. With a search space of 500 bits, and a subset of outcomes with “only heads or tails” whose numerosity is 2, we are in the order of magnitude of 1E-150 for the probability of the outcome we observe. What level do you like? 1E-10? 1E-20? You choose.
Uh, that's not how it's done. The level of significance is used to set up a confidence interval say 90% or 95%. Sometimes this is referred to as picking the p-value. Well . . . they're related. The point being if you're going to reject the null hypothesis in favour of the alternate hypothesis you want to be 90 or 95% sure that the outcome you observed was not down to chance. You cannot have a 100% confidence interval which is why I'd never be 100% sure the outcome wasn't due to chance.
Do you reject the null (H0)?
At what level of significance? I'll save you the effort. By common statistical analysis you probably would. But in favour of an alternate hypothesis which would NOT be "there was design" but rather one along the lines of "the coin and/or process is not fair".
Our explanations (H1s) can be many. ID is one of them. The ID explanation is that there is one person in the dark room, that he takes the coin that has been inputted, checks its condition, and simply outputs it through the output slot with head upwards. Very simple indeed.
But you didn't give an alternate hypothesis so I don't know what you're testing. And if you're trying to test something complicated then a Bayesian approach would be more pertinent. How big is the dark room? Is there a system of air circulation? Etc.
But other explanations are possible. In the room, there could be a mechanism that can read the position of the coin and invert it only when tail is upward. That would probably still be an ID explanation, because whence did that mechanism come? But yes, we must be thorough, and investigate the possibility that such a mechanism spontaneously arose in the dark room.
Like I said, I'd be extremely diligent in checking for all possible testable plausible causes of bias.
An interesting aspect of this explanation is that it teaches us something about the nature of ordered strings and the concept of Kolmogorov complexity. Indeed, if the mechanism is responsible for the order of the final string, then the complexity of the mechanism should be taken in place of the complexity of the string, if it is lower.
I'm not an expert on such matters.
The important point is that, once you have the mechanism working, you can increase the brute complexity of the output as much as you like: you can have an output of 500 heads, or of 5000, or of 5 billion heads. While the apparent complexity of the outcome increases exponentially, its true Kolmogorov complexity remains the same: the complexity of the mechanism.
Again, I'm no expert.
Finally, let’s say that from the output slot you get a binary sequence that corresponds to the full text of Hamlet. Again, do you reject the null? I suppose you do.
Again, depending on what your alternate hypothesis is. If it's just: the system isn't random then certainly I would. Easily and gladly. But you haven't told me what your alternate hypothesis is so I don't know what I'm rejecting the null hypothesis for.
Here, the situation is definitely different. Not only because Hamlet in binary form is certainly much longer than 500 bits. But because the type of specific information here is completely different. A drama in English is not “an ordered sequence”, like the all heads sequence. It can never be outputted by any algorithm, however complex, unless the algorithm already knows the text.
As extremely unlikely as it is it could be the result of a random generating process.
Hamlet is certainly the output of a conscious being. I will have no hesitation in inferring design (well, not necessarily Shakespeare himself in the dark room, but certainly Shakespeare at the beginning of the transcriptions of all kinds that have brought the text to our dark room).
I believe there was man called William Shakespeare who wrote the play Hamlet. That's much more plausible than it was arrived at by some chance event. But Shakespeare was a man who was known by other men for whom we have documentary evidence and whose abilities are not beyond what we've seen other men at that time do.
Do you agree on that? Probably not. You will probably insist with the “fluke” theory.
I hope my responses clarify my views. Jerad
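As a side note on the remark that Hamlet in binary form is much longer than 500 bits, a rough back-of-the-envelope calculation gives the scale (the character count below is an assumed round figure, not a measured one):

```python
from math import log10

hamlet_chars = 170_000            # assumed rough length of the play's text in characters
bits = hamlet_chars * 8           # 8-bit ASCII encoding
log10_prob = -bits * log10(2)     # log10 of the chance of tossing that exact bit string fairly
print(f"about {bits:,} bits; probability on the order of 10^{log10_prob:,.0f}")
```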
Mark: It would be your turn, but I am very tired. Later, I hope. Bye... gpuccio
keiths: You say: You’re battling a strawman. If I flipped a coin and got the exact text of Hamlet, then I would be almost certain that the outcome was NOT due to chance.

Well, that's certainly progress. I still beg to differ about the "almost". For me, there is no almost at all. But we cannot all be the same...

However, that just means that the non-chance explanation is far, far likelier to be correct. It doesn’t mean that the chance explanation is impossible.

Not logically impossible, as I have always said. I agree with you completely. The chance explanation is not "logically impossible", but certainly "empirically impossible": it will never, never be accepted as a credible empirical explanation by anyone in his right mind.

You say: I don’t know if this is an Italian/English issue or a conceptual issue, but your statement doesn’t make sense. To call a sequence “random” just means that it was produced by a random process. It doesn’t tell you about its content.

Thank you for being understanding. You are right, that is badly worded. I was writing at an early hour in the morning, and I am human (had you ever inferred that? :) ). I should have said: "A sequence with no apparent special order (let's call it "apparently random") is extremely more “probable” than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget." I apologize for the imprecision, your criticism is correct.

The all-heads sequence is just as random as a mixed sequence if both are produced by random processes.

That's perfectly correct.

Likewise, a random-looking sequence isn’t random if it is produced by a deterministic process.

Correct, again.

Your statement is correct only if you meant to say something like “if we generate a sequence at random, it is more likely to be a mixed sequence of heads and tails than it is to be all heads or all tails.”

That was the idea.

But all fixed sequences, whether they look random or not, are equally probable, as you said yourself:

Sure. Individually, the probability is the same. As members of partitions, however, everything changes.

Finally, you say: Again, you’re misunderstanding me if you think that I’m claiming that partitions don’t matter. They do matter, but it’s the sizes of the sets that matter for purposes of calculating probabilities, not their actual content (with the proviso that the distribution is flat, as it is for coin flip sequences).

True, for calculating probabilities the actual content has no importance. But as I said, scientific inferences are not simply a matter of probabilities. They are, first of all, a question of methodology. And, for our methodology to be correct, and our inferences valid, the actual content of our partitions in our model is very, very important. Computing probabilities is only an intermediate step of the scientific procedure. Before that, and after that, we have to reason correctly. Define the model, define the question, verify the possible answers. For all that, the content of our concepts is extremely important. Do you agree on that? If you agree on the fundamental importance of the nature and content of our partitions, I am satisfied. gpuccio
Jerad: First, our possible points of "agreement":

a) We seem to agree that a series of 500 heads is not something we "expect" from a random system, even if it has the same individual probability of any other sequence of that length. I quote you: I've said, MANY MANY times now that if I flipped a coin 500 times I'd be very, very, very suspicious that something funny was going on and I'd do my best to try and find an explanation for that.

b) We agree that the reason for that is that we are not comparing the probabilities of each single sequence, but rather the probabilities of two very different subsets of the search space. Do we agree on that?

But you add: And, if I couldn't find one, if I was very sure the whole procedure was 'fair' then I'd say the result was a fluke. You decide whether I'm being 'empirical'. Maybe you are easily satisfied. I would look further for an explanation. Who is the more empirical here? Well, I am happy there is still something we don't agree about. You, the "fluke 500 heads" guy. I, the "there must be another explanation" guy. Maybe I am becoming a skeptic, after all. That's what I mean by "empirically impossible": something that is not logically impossible, but rather so extremely improbable that I will never accept it as a random outcome, and will always look for a different explanation.

c) We definitely don't agree about Hamlet. Ah! I feel better, after all. You say: FOR THE NTH TIME! I'd be extremely suspicious of any such result and would exhaustively check to see if there was any detectable bias in the process. But, if none could be found then I'd say it was a fluke result. For the first time: you must be mad, at best. So, you would be "extremely suspicious" of the random emergence of Hamlet's text, but in the end you can accept it? Good luck, my friend... I would not be "extremely suspicious": I would be absolutely sure that the outcome is not the product of a random system. And I would never, never even entertain the idea of a fluke. Well, anyone can choose his own position on that. All are free to comment on that.

You say: Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance. That is a right request. So, I will propose a scenario, maybe a little complicated, but just to have the right components at their place.

Let's say that there is a big closed room, and we know nothing of what is in it. On one wall there is an "input" coin slot. On another wall there is an "output" coin slot, where a coin can come out and rest quietly on a frame. At the input wall, there is a bag with 500 coins. They are thoroughly mixed by an automatic system. A child, blinded, takes one coin at a time from the bag and inputs it, always blindly, into the input coin slot. We have reasonable certainty that the child is completely blinded, and that he cannot have any information about the orientation of the coin, not even by touching it (let's say he wears thick gloves). We have very good control of all those parts. Let's say that we are at the output coin slot. Our duty is to register correctly, for each coin that comes out and rests on the frame, whether its visible side is head or tail. We write that down. Let's say that, at the end of the procedure, we have a sequence of all heads. This is the testing procedure.
The null hypothesis is very simple: each coin is taken randomly from the bag, randomly inserted into the input coin slot, and it comes out from the output coin slot in the same position it had when it was inserted into the input slot. We can also suppose that something happens within the dark room, but if so, that "something" is again a random procedure, where each side of the coin still has 0.5 probability to be the upward side in the end. For example, each coin could be randomly tossed in the dark room, and then outputted to the output slot as it is. IOWs, the null hypothesis, as usual, is that what we observe as an outcome is the result of random variation.

Now, to be simple, we are sure that all the coins are fair, and that there is no other "interference" out of the dark room. So, our whole interest is focused on the main question: What happens in the dark room?

You ask for a level of significance. There is really no reason that I give you one, you can choose for yourself. With a search space of 500 bits, and a subset of outcomes with "only heads or tails" whose numerosity is 2, we are in the order of magnitude of 1E-150 for the probability of the outcome we observe. What level do you like? 1E-10? 1E-20? You choose.

So, my first question is: Do you reject the null (H0)? Definitely, I would.

Our explanations (H1s) can be many. ID is one of them. The ID explanation is that there is one person in the dark room, that he takes the coin that has been inputted, checks its condition, and simply outputs it through the output slot with head upwards. Very simple indeed.

But other explanations are possible. In the room, there could be a mechanism that can read the position of the coin and invert it only when tail is upward. That would probably still be an ID explanation, because whence did that mechanism come? But yes, we must be thorough, and investigate the possibility that such a mechanism spontaneously arose in the dark room.

An interesting aspect of this explanation is that it teaches us something about the nature of ordered strings and the concept of Kolmogorov complexity. Indeed, if the mechanism is responsible for the order of the final string, then the complexity of the mechanism should be taken in place of the complexity of the string, if it is lower. The important point is that, once you have the mechanism working, you can increase the brute complexity of the output as much as you like: you can have an output of 500 heads, or of 5000, or of 5 billion heads. While the apparent complexity of the outcome increases exponentially, its true Kolmogorov complexity remains the same: the complexity of the mechanism.

Finally, let's say that from the output slot you get a binary sequence that corresponds to the full text of Hamlet. Again, do you reject the null? I suppose you do. Again, what are our H1s?

Here, the situation is definitely different. Not only because Hamlet in binary form is certainly much longer than 500 bits. But because the type of specific information here is completely different. A drama in English is not "an ordered sequence", like the all heads sequence. It can never be outputted by any algorithm, however complex, unless the algorithm already knows the text.

Hamlet is certainly the output of a conscious being. I will have no hesitation in inferring design (well, not necessarily Shakespeare himself in the dark room, but certainly Shakespeare at the beginning of the transcriptions of all kinds that have brought the text to our dark room). Do you agree on that? Probably not.
You will probably insist with the "fluke" theory. In a sense, that is reassuring... gpuccio
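The point about ordered strings and Kolmogorov complexity can be illustrated, very crudely, with a general-purpose compressor. The sketch below uses zlib as a stand-in for algorithmic complexity (it is only a rough proxy, and the lengths tried here are arbitrary): the compressed size of an all-heads string grows far more slowly than the string itself, while a random heads/tails string keeps needing space roughly in proportion to its length.

```python
import random
import zlib

random.seed(0)  # fixed seed so the sketch is reproducible

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed ASCII encoding of s."""
    return len(zlib.compress(s.encode("ascii"), 9))

for n in (500, 5_000, 500_000):
    ordered = "H" * n                                        # "all heads", highly ordered
    mixed = "".join(random.choice("HT") for _ in range(n))   # simulated fair tosses
    print(f"n={n:7,d}  all heads -> {compressed_size(ordered):6,d} bytes   "
          f"random tosses -> {compressed_size(mixed):6,d} bytes")
```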
1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis.
What is this alleged evolutionary theory hypothesis? Joe
To Jerad, keiths, Mark Frank: Wow, guys! It seems that I did stir some reaction... Well, just an initial collective comment, and then I will go on answering you individually. If I have misunderstood something that you have said, I apologize. I have no interest in demonstrating that you are wrong. I am only interested in the final results of a discussion. So, if you agree with me, I can only be happy :) But as things are never as good as they seem, let's see if we really agree, and on what. I think it is better to go on one by one. Jerad first! gpuccio
***** accidentally hit enter before finishing 69 (hence typos) ***** .... But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don't think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined but seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated - but if it had then I suspect it would be absurdly implausible. Mark Frank
Gpuccio, I am afraid my response will come out bit by bit this morning - too many other things going on. I realise that we are largely talking at cross-purposes as Lizzie has pointed out. I agree that in any real situation if a coin was tossed 500 times and they were all heads then there is overwhelming evidence that it was not a fair coin toss (the coin might have been fair - it might have been the way it was being tossed that was not fair). My interest is purely in why it is overwhelming evidence and what exactly it is overwhelming evidence for - because I think that is relevant when applying the same logic to the ID inference.

The most important point about Bayesian thinking is that it is comparative. It requires you not only to think about the hypothesis you are disproving, but also about the alternative you propose instead. IDists don't like this approach because it entails exploring the details of the design hypothesis. But as a competent statistician you will know that it is poor practice to dismiss H0 without articulating the alternative hypothesis. You don't have to adopt a completely Bayesian approach (although that would be ideal). Even Neyman-Pearson requires articulating the alternative hypothesis.

You may think it does not apply to the coin tosses - but it does. Why do we all reject a fair coin when it is 500 heads but not when it is a meaningless string of heads and tails? The explanation cannot be:

* Because the 500 heads are so vastly improbable. The meaningless string is equally improbable.

* Because the 500 heads are at the extremity of the statistic - number of heads. We would also reject a string of 250 heads followed by 250 tails which falls bang in the middle of that statistic.

* Because the string is compressible. We would also reject strings that are incompressible but happened to spell out the opening lines of Hamlet in ASCII.

The answer is simple and probably acceptable to you. The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet. Therefore, it is plausible that someone might want to make the string come out that way for their purposes. It may not be very likely that such a person exists and that they could fiddle the results - but it only has to be marginally likely to overwhelm the hypothesis that it was a fair coin. But it does req Mark Frank
Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking – although it also goes badly wrong under a wide range of conditions as well.
Agreed!! If you're talking about some ideal, hypothetical 'fair' coin being tossed 500 times then classical hypothesis testing is fine. It seems to me that the real conflict here is why some of us won't accept some kind of design behind a highly improbable result. It's just my opinion but I'd stake my claim to faith on impossible events occurring not improbable ones. Jerad
Gpuccio, I will address your other points later, but this one is the most important.
I simply want to do an easy and correct calculation which can be the basis for a sound empirical inference, like I do every day in my medical practice. According to your position, the whole empirical knowledge of the last decades should be discarded.
Not the whole of empirical knowledge, but the majority of scientific research findings are false - see Ioannidis, Why Most Published Research Findings Are False. He doesn't specifically refer to Bayes but he uses Bayesian thinking, and many of the problems he identifies would have been avoided with a Bayesian approach. I highly recommend Nate Silver's The Signal and the Noise for an easy-to-read explanation of the importance of Bayesian thinking (among other things). Of course an enormous amount of science is true. The technology on which it is based works. There are several reasons for this.

1) A lot of science is not probabilistic. Newton didn't calculate any significance levels.

2) Scientists use Bayesian thinking without realising - you do every time you think about sensitivity and specificity. Even Dembski uses it without realising.

3) Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking - although it also goes badly wrong under a wide range of conditions as well. Mark Frank
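The aside about sensitivity and specificity is easy to make concrete. The figures below are invented purely for illustration, but they show the Bayesian calculation that clinicians do implicitly: with a rare condition, even a test with excellent sensitivity and specificity yields a surprisingly modest probability of disease given a positive result.

```python
# Hypothetical figures, chosen only to illustrate the point.
sensitivity = 0.99      # P(test positive | disease)
specificity = 0.95      # P(test negative | no disease)
prevalence  = 0.01      # P(disease), i.e. the prior

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive    # P(disease | positive test), by Bayes' theorem
print(f"P(disease | positive test) = {ppv:.3f}")  # ~0.167, despite the "99% sensitive" test
```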
gpuccio,
And possibly explain why a partition “Hamlet” versus “non Hamlet” is of no importance in making inferences?
Again, you're misunderstanding me if you think that I'm claiming that partitions don't matter. They do matter, but it's the sizes of the sets that matter for purposes of calculating probabilities, not their actual content (with the proviso that the distribution is flat, as it is for coin flip sequences). Have you read my new post at TSZ that I mentioned above? It's only about 650 words long, and it answers many of the questions you are asking. keiths
Hi gpuccio, You're battling a strawman. If I flipped a coin and got the exact text of Hamlet, then I would be almost certain that the outcome was NOT due to chance. However, that just means that the non-chance explanation is far, far likelier to be correct. It doesn't mean that the chance explanation is impossible. Regarding your comment to Lizzie:
A random sequence is extremely more “probable” than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget.
I don't know if this is an Italian/English issue or a conceptual issue, but your statement doesn't make sense. To call a sequence "random" just means that it was produced by a random process. It doesn't tell you about its content. The all-heads sequence is just as random as a mixed sequence if both are produced by random processes. Likewise, a random-looking sequence isn't random if it is produced by a deterministic process. Your statement is correct only if you meant to say something like "if we generate a sequence at random, it is more likely to be a mixed sequence of heads and tails than it is to be all heads or all tails." But all fixed sequences, whether they look random or not, are equally probable, as you said yourself:
a) Any individual sequence of 500 coin tosses has obviously the same probability to be the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that.
keiths
It seems that you really don’t understand statistics and probability. If an outcome happens 2, 3 or 100 times in a row, that simply gives another calculation of the probabilities of the whole series, considered as a single outcome.
Which is why I've already asked: what is a data point in this scenario, one coin toss or 500? I have yet to have anyone actually write down a null hypothesis, an alternate hypothesis, a testing protocol and a level of significance required. Lay out what you want to test and then ask. Just to save you the trouble . . . no, I won't. You tell me what it is you're testing by laying it all out clearly and properly.
IOWs, the probability of having 3 heads in ten tosses is rather high. The probability of having 500 heads in a row is laughable, and it is in reality the global probability of having the same result in 500 events, where the probability of that outcome for each event is 0.5.
Do you really think I don't understand all that? I also said I would NEVER bet on the possibility of getting 10 heads in a row. NEVER. Is anyone actually reading what I've said?
The phrase for you was rather: “will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?” Do you think you are too insulted, or maybe you can try to answer?
As I've said MANY TIMES if all manner and possibility of bias was ruled out then I would say 500 heads in a row was a fluke result.
So, why don’t you answer?
It seems to me I've been answering the same question over and over again. It seems like people aren't really reading what I've written. Or they are only reading the responses directed at themselves.
No, it is not impossible. It IS very highly improbable.
That’s why I said “empirically” impossible, and not “logically” impossible. I am happy you agree with me on that, although maybe you didn’t realize it.
I just wish you'd use standard statistical terms rather than making stuff up. I've said, MANY MANY times now that if I flipped a coin 500 times I'd be very, very, very suspicious that something funny was going on and I'd do my best to try and find an explanation for that. And, if I couldn't find one, if I was very sure the whole procedure was 'fair' then I'd say the result was a fluke. You decide whether I'm being 'empirical'.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
Yes, I have. You can find them in my #41. And by the way, a “selection process” is not a “random process”, as usually even darwinists can understand.
We're talking mathematics here, some terms may not mean what you think they mean. I find your vocabulary idiosyncratic and confusing at times. I had read post 41 and responded.
The probabilities that should be compared are not the probability of having 500 heads and of having a single specific random sequence. We must compare the probability of having an outcome from a subset of two sequences (500 heads or 500 tails), or if you prefer from any well specified and recognizable ordered subset, rather than from the vast subset of random non ordered sequences, which comprise almost all the sequences in the search space.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance.
Ah, you read that too. While I can maybe accept that it is “clever”, I cannot in any reasonable way conceive why it should be a “restatement of your views”. And “incorrect”, just to add! Will you clarify that point, or is it destined to remain a mystery forever, like many other statements of yours?
I've been saying the same thing over and over and over again. Your saying that we have to think about the problem differently is fine but you haven't laid out exactly what you want your testing criteria to be. After you've done that then I can respond to that DIFFERENT issue.
I really don’t know how I could be more specific than this. I have been specific almost to the point of discourtesy. What can I do more?
Lay out your procedure, and give a null and an alternate hypothesis. And PLEASE try and use commonly accepted statistical terms.
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?
FOR THE NTH TIME! I'd be extremely suspicious of any such result and would exhaustively check to see if there was any detectable bias in the process. But, if none could be found then I'd say it was a fluke result. Why do you keep asking me the same basic question over and over and over again? Jerad
Elizabeth: But that doesn’t mean that 500 Heads isn’t just as possible as any other sequence, under the Law of Large Numbers or anything else! It is "possible" (not "as possible", because how do you measure "possibility"? Possibility is a binary category), but certainly not as "probable" as anything else. A random sequence is extremely more "probable" than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget. gpuccio
keiths: The math doesn’t distinguish between partitions that are important versus partitions that are arbitrary or trivial -- and it shouldn’t. Whether some outcome is important has no necessary bearing on whether it is probable. The math doesn't distinguish because the math is not us. We are the thinkers of the math. And we do distinguish. Statistics is nothing, if correct thinking and methodology do not use it correctly and usefully. Scientific "explanations" are not mere statistical effects: they are judgements. A judgement happens in the consciousness of an intelligent being, not in numbers. All empirical science has been built on the principles that you and your friends seem to doubt. Our whole understanding of the objective world is based mostly on inferences based on partitions that are "important". By the way, why don't you try to answer the question that I have repeatedly offered here? I type it again for your convenience: "Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" And possibly explain why a partition "Hamlet" versus "non Hamlet" is of no importance in making inferences? gpuccio
gpuccio @47, thanks for the kind words. :) Chance Ratcliff
KF,
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else.
The math doesn't distinguish between partitions that are important versus partitions that are arbitrary or trivial -- and it shouldn't. Whether some outcome is important has no necessary bearing on whether it is probable. For example, the odds of rolling a particular class of 9-digit number remain the same whether a) somebody's life depends on it, or b) I'm just using the number to decide where to have lunch. If you haven't read the rest of my post, keep reading. keiths
Sorry that last sentence should read: But that doesn’t mean that 500 Heads isn't just as possible as any other sequence, under the Law of Large Numbers or anything else! oops :o Elizabeth B Liddle
Point taken, KF. But they strike me as having a family resemblance at the descriptive level, possibly at the formal level, I don't know. But I do think that the key point here is not about the physics of coin toss sequences, but about what the alternative hypotheses are. When something is vanishingly unlikely, almost any other hypothesis, however unlikely (Sal? Cheating?), becomes a near certainty. My Bayesian output gives the right answer, and that's because it enables us to weigh alternative explanations. That's why I keep saying that almost everyone is basically correct, here, even those who disagree. The people saying that all sequences are equally probable are correct (and Barry agrees). Almost everyone also agrees that a "special" sequence would raise serious eyebrows. The only real disagreement seems to me to be over why we raise our eyebrows. Common sense says: because skulduggery is much more likely. Bayes says: because skulduggery is much more likely. IBE says: because skulduggery is much more likely. But that doesn't mean that 500 Heads is just as possible as any other sequence, under the Law of Large Numbers or anything else! Elizabeth B Liddle
Dr Liddle, IBE is not generally a Bayesian probability inference with weighting on probabilities, or even a likelihood one. Scoring superiority on factual adequacy, coherence and explanatory power in light of empirical observation is not generally Bayesian. Though, in limited cases it can be. KF kairosfocus
Yep. That would seem to be "inference to best explanation", KF! Glad we can agree on something for once :) Cheers Lizzie Elizabeth B Liddle
KS, re:
It still seems arbitrary and subjective to divide the 9-digit numbers into two categories, “significant to me” and “not significant to me”.
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else. In another, text in English is sharply distinct from repetitive short blocks or typical at random gibberish, and the three do not function the same. The above is little more than a case of wishing away a very important and vital phenomenon that is inconvenient. KF kairosfocus
Biasing coins? Easy -- get a double-head coin. KF kairosfocus
Sal: I am not disputing what you said, or what you meant. My point is much more hypothetical, but very important, and it is that the reason we can conclude from observing a "special" sequence that something weird happened (it doesn't matter whether it was that the coin had two heads, or that it wasn't actually tossed, either scenario will do) isn't that such a sequence is "impossible" or "empirically impossible" or "against the Laws of Physics" or anything else about the probability of the sequence. It's because we know that Something Weird is much MORE probable than tossing one of those rare sequences. As I said, if we knew, with certainty, that the coin was fair, and the tossing fair, then we would simply have to conclude that, well, the coin was fair and the tossing fair! We could not conclude "design" because we would know, a priori, that design was not the cause! In other words, our confidence that the sequence was designed stems from the relative probability that it was, compared with the probability that it was thrown by chance. Even if we are extremely confident that the coin was fair, and tossed fairly, it is still much more likely that the coin was not as fair as we thought it was, or that the tossing was somehow a conjuring trick, than that the sequence was tossed by chance. That is because we are less certain of non-design than we are of not tossing such a rare kind of sequence. Bayes is a GOOD tool for ID, not a bad one. It's exactly what IDists here (including gpuccio, though he thinks he doesn't!) use, although usually it's called "inference to the best explanation" or some such (for some reason Bayes is a bad word in ID circles I think). But I have to say, I think all this back-biting about other people's probability smarts is completely unjustified. There are very few errors being made on these threads, but boy, is there a lot of misunderstanding of each other's meaning! As I said above, most people are mostly right. Where you guys are disagreeing is over the meaning of words, not the math. *growl* Elizabeth B Liddle
A resolution of the 'all-heads paradox' keiths
Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal's confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.
I clarified this point in other discussions. But I'll repeat it. The coin is presumed fair based on physics. It is presumed reasonably symmetric; if you like you can even hypothetically test it. Even if it is slightly unfair, for a sufficiently large number of coins the chance hypothesis can be rejected for all heads. For example, the binomial probability of all heads for a coin that has a 75% propensity for heads is still remote. The probability is (.75)^500 = 3.39 x 10^-63 and this is confirmed by the stat trek calculator: http://stattrek.com/online-cal.....omial.aspx By way of contrast, 500 fair coins being all heads has a probability of: (.5)^500 = 3.1 x 10^-151 Given this, even unfair coins are not a good explanation for all coins being observed to be heads. It's a better explanation, but a 3.39 x 10^-63 probability isn't anything I'd wager on. We can reject the fair coin hypothesis and accept that the coin is unfair within reasonable limits just to be generous (like say a 75% propensity for heads). All coins heads for a sufficiently large set of coins would still reasonably (not absolutely) suggest a non-random process was the driver for the configuration. All heads for approximately 1205 unfair coins (each at a 75% propensity for heads) is about as unlikely as all heads for 500 fair coins. Next, I never said the coins were tossed randomly. I said they were observed to be in the all-heads state. This could mean, for example, you open a box and find all the coins in the all-heads state. The issue of fair tosses was mentioned only for considering whether the configuration of all heads of fair (or even slightly unfair) coins is consistent with random tosses. I never said the coins were actually tossed. The original statement, and the post where all this began, was: Siding with Mathgrrl. Where I said:
consider if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument that says if coins are all heads they are sufficiently inconsistent with the Binomial Distribution for randomly tossed coins, hence we can reject the chance hypothesis.
I never said the coins were randomly tossed. I only said we can compare the configuration of the coins against the hypothesis that they were randomly tossed. Given these considerations, and given that we know humans are capable of making all coins heads, a reasonable (but not absolute) inference is that the configuration arrived by design. And finally, severely biased coins are considered rare: You can load dice, you can't bias coins scordova
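(For anyone who wants to check Sal's figures, here is a minimal Python sketch of the arithmetic he describes; the values are just his quoted numbers, rounded by floating point:)

# Sketch: the probabilities quoted above, and the rough 1,205-coin equivalence.
import math

p_fair = 0.5 ** 500      # all heads from 500 fair coins: about 3.05e-151
p_biased = 0.75 ** 500   # all heads from 500 coins biased 75% toward heads: about 3.39e-63
print(p_fair, p_biased)

# How many 75%-biased coins make "all heads" as unlikely as 500 fair coins?
# Solve 0.75^n = 0.5^500 for n.
n = 500 * math.log(0.5) / math.log(0.75)
print(n)                 # about 1204.7, i.e. roughly 1,205 coins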
Incidentally, the mistake Neil Rickert makes is extremely common among anti-design people. Sometimes it is not articulated in a mathematical sense, but in a more everyday-life-experiences sense. I remember listening to an evolution/design debate on a talk show and the anti-design person argued that improbable things happen all the time by saying, in essence, "What are the odds that you and I would be here together the same day on the same show at the same time? If anyone had asked either of us a year ago we would have said it was extremely unlikely, and yet here we are. Improbable things happen all the time." There are several problems with this kind of thinking, but one that perhaps doesn't get enough play is the intervention of the intelligent agent, so I'll highlight that here. Specifically, there were many decision points that were crossed toward making that particular talk show happen. And at each step of the way, it became more and more probable that it would occur. For instance, once the invitations had been sent and accepted, once the date had been selected and the time slot determined, there was a very high likelihood that it would take place. Then once the planes had already been caught, the taxis grabbed, and the individuals had shown up at the studio's address, it was practically a certainty that the show would take place. So the answer to the anti-design person's cute question "What are the odds we would both be here today on this show?" is: "Given the preparations, the planning, and the decisions made by the parties involved, the odds were near certain. Now, having dispensed with your ridiculous example, tell us again why you think it is likely that a bunch of amino acids would happen to bump into each other and form life?" Eric Anderson
MF: All I will say for the moment is that if you were to drop 30 - 100 darts in the case envisioned, it is reasonably certain that the one sigma bands will pick up a proportion of hits that is linked to relative area. Tails, being small, will tend to be hit less often, and if our far tails are involved, we are unlikely to see any hits at all. But the bulk will pick up most of the hits. Now, you can pick an arbitrarily narrow stripe near the peak and it will have the same pattern of being a low proportion, less likely to be hit. That simply underscores the point that such special zones are unlikely to be found on a reasonably limited blind search. Which is one of the points I was highlighting. You do understand the first point, on trying to blindly catch needles in haystacks with limited searches. Now, the further point you tried to divert attention from is not strictly central to where I am going, but let's note it. The far tails of a bell are natural examples of narrow zones T in a much larger distribution of possibilities W. Now that the first hurdle is behind us, look next at relevant cases where W = 2^500 to 2^1,000 or more. The search capacity of the solar system's 10^57 atoms, acting for a plausible lifespan ~ 10^17 s, could not sample more than 1 straw-sized pluck from a cubical haystack 1,000 light years on the side. About as thick as our galaxy. Since stars are on average several LY apart in our neighbourhood, if such a stack were superposed on our galaxy, such a sample -- and we have just one shot -- will all but certainly pick straw. At 1,000 bits worth of configs, the conceptual haystack would swallow up the observable cosmos worse than a haystack swallows up a needle. In short, with all but certainty, when we have config spaces at least that big, cosmic scale search resources are going to be vastly inadequate to find anything but the bulk, configs in no particular pattern, much less a linguistically or computationally relevant one. Where also functional specificity and complexity get us into needing very tightly specified, atypical configs. Where also, as AutoCAD shows us, 3-d machines and systems can be represented by strings, so an analysis on strings is WLOG. KF PS: The simplest case of fluctuations I can think of for the moment is how for small particles in a fluid we see brownian motion, but as size goes up, impacts on the various sides average off and the effect vanishes. Likewise, it is abstractly possible that the molecules of oxygen in the room you sit in could spontaneously rush off to one end and leave you gasping. It can be shown that we are unlikely to observe this even once in the lifespan of the observed cosmos. And yet such is a valid distribution. It is just that its statistical weight is so overwhelmed by the scattered-at-random ones -- the overwhelming bulk -- that it is maximally improbable and practically unobservable. There is a difference between abstract possibility and empirical observability without deliberate intervention to set up simply describable but extremely atypical configs of the space of possibilities W. kairosfocus
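(A rough back-of-envelope check of the scale KF is gesturing at, as a Python sketch. The figure of 10^14 inspections per atom per second is an assumed, deliberately generous sampling rate, not something KF states:)

# Sketch: what fraction of a 2^500 config space could the solar system ever sample?
W = 2 ** 500                      # size of the configuration space
atoms = 10 ** 57                  # rough atom count of the solar system
seconds = 10 ** 17                # rough plausible lifespan in seconds
rate = 10 ** 14                   # assumed inspections per atom per second (generous)
samples = atoms * seconds * rate  # about 10^88 possible samples
print(samples / W)                # about 3e-63: a vanishingly small fraction of W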
groovamos @35: Well said. A similar point has been made many times, to those willing to listen, but I like the way you articulated it. I'm going to shamelessly steal your thinking. Eric Anderson
Chance Ratcliff: Thank you for your #45. It's really good to read intelligent and reasonable words, once in a while! :) gpuccio
Jerad: Where is my mathematical fallacy? I have explained it very clearly in my #41.
Sigh. As I’ve said several times already . . .if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred to such a single outcome. What you really should be asking is: what if it happened two times in a row. Or 3 out of 5 times.
It seems that you really don't understand statistics and probability. If an outcome happens 2, 3 or 100 times in a row, that simply gives another calculation of the probabilities of the whole series, considered as a single outcome. IOWs, the probability of having 3 heads in ten tosses is rather high. The probability of having 500 heads in a row is laughable, and it is in reality the global probability of having the same result in 500 events, where the probability of that outcome for each event is 0.5. So, as you can see, your observations about something "happening two times in a row" are completely pointless. The probability of having 500 heads in a row is so low that it certainly is much less acceptable than the probability of having less rare, empirically possible events, two or three times in a row. You must always consider the total probability of the event or set of events you are analyzing. Your comment about whether I would wonder if Shakespeare ever existed is pretty insulting really. Don't worry. The comment about Bayesian arguments for Shakespeare's existence was rather meant for Mark. Maybe he can feel insulted instead of you, although I hope not. The phrase for you was rather: "will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" Do you think you are too insulted, or can you maybe try to answer? Gee thanks. So, why don't you answer? No, it is not impossible. It IS very highly improbable. That's why I said "empirically" impossible, and not "logically" impossible. I am happy you agree with me on that, although maybe you didn't realize it. Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them. Yes, I have. You can find them in my #41. And by the way, a "selection process" is not a "random process", as usually even darwinists can understand. Then please be very specific and state your claims cogently. And, if I've made a mathematical error then please find it. Ehm, I see that you have already read my #41, and maybe not understood it. Must I say the same things again? OK, I will do it. The probabilities that should be compared are not the probability of having 500 heads and the probability of having a single specific random sequence. We must compare the probability of having an outcome from a subset of two sequences (500 heads or 500 tails), or if you prefer from any well specified and recognizable ordered subset, rather than from the vast subset of random non-ordered sequences, which comprises almost all the sequences in the search space. Please, read carefully my example about gas mechanics, and maybe you will understand. That is a clever but incorrect restatement of my views. Ah, you read that too. While I can maybe accept that it is "clever", I cannot in any reasonable way conceive why it should be a "restatement of your views". And "incorrect", just to add! Will you clarify that point, or is it destined to remain a mystery forever, like many other statements of yours? I'm tired of being misinterpreted and having words put in my mouth. Find something wrong with what I've said, be specific please. I really don't know how I could be more specific than this. I have been specific almost to the point of discourtesy. What more can I do? Just to begin, why don't you answer the question that was, definitely, meant for you?
"Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" gpuccio
groovamos' comment @35 echoes my thinking on this. A similar subject was brought up a few months ago by Phinehas and I replied in kind. To say that any outcome is equiprobable and hence just as unlikely as any other is to tacitly define an event in the sample space that is equal to the sample space: S = {a1, a2, ... an}, E = S, hence P(E) = 1. With regard to coin tosses, a specification in this sense would be an E for which 0 < P(E) < 1, and it forces a partition onto the sample space, such that S is equal to the union of E and not E. Specifying an outcome of all heads defines a specific sequence in the sample space. For 500 tosses, this sequence has a probability of P(E) = 2^-500, and P(~E) = 1 - P(E). There is no equiprobability with this partition, and we should never expect to see E occur. As gpuccio points out, this is empirical. The sequence is not logically impossible, and this was never at issue. We can be near-absolutely certain, that for any sequence of 500 coin tosses, there has never been one that comes up all heads, since the first coin was tossed by the first monetarily-aware person. The implication for a sample space of 500 bits is that any sequence that one can specify by any means whatsoever has likely never occurred at random, nor will it likely ever occur. Ever. Chance Ratcliff
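(A minimal numerical sketch of the partition Chance Ratcliff describes, in Python, taking "all heads" as the specified event E:)

# Sketch: partitioning the sample space of 500 tosses into E and ~E.
from fractions import Fraction

n = 500
space = 2 ** n                # |S|: all 2^500 equiprobable sequences
p_E = Fraction(1, space)      # E = the single specified sequence (all heads)
p_notE = 1 - p_E              # ~E = every other sequence
print(float(p_E))             # about 3.05e-151
print(float(p_notE))          # indistinguishable from 1.0 in floating point

As he says, the two cells of this partition are nowhere near equiprobable, even though the individual sequences inside them are.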
Mark: I simply want to do an easy and correct calculation which can be the basis for a sound empirical inference, like I do every day in my medical practice. According to your position, the whole empirical knowledge of the last decades should be discarded. Moreover, I do deny that you can involve a calculation of "priors" where worldviews are concerned. Probability can never say, either in a Fisherian or Bayesian way, if it is reasonable to accept the idea that consciousness can exist in other, non physical forms, or if a materialist reductionist point of view is better. Such choices are the fruit of a global commitment of one's cognition, feeling, intuition and free will. By the way, have you answered my explicit questions in #40 and 41? Regarding your objections to the Fisherian method in dFSCI, I think I have already commented, but I will do it in more detail here. No justification for one rejection region over another. Clearly illustrated when you justify one-tail as opposed to 2-tail testing but actually applies more generally. As I have said, here we have not a normal distribution of a continuous variable. We just have a simple ratio between two discrete sets. The problem is very simple, and I don't see how the "tail" question applies. No justification for any particular significance level. Why 95% or 99% or 99.9%? The only justification is that it is appropriate for the inference you have to make. When I proposed 150 bits as a dFSCI threshold for a biological system, I considered about 120 bits for the maximal probabilistic resources in the planet earth system in 5 billion years. I added 30 bits to get to my proposed threshold of 150 bits. That would be an alpha level of 9.31323E-10. Such a value would be considered absolutely safe in any context, including all the inferences that are routinely made in the darwinian field about homologies. Do you suggest that it is not enough? Do you think that there are levels of probability, whatever the context, Fisherian or Bayesian, that give us absolute knowledge of truth? That would be a strange concept. No proof that the same significance level represents the same level of evidence in any two situations – so there is no reason to suppose that 95% significance is a higher level of evidence than 90% significance in two different situations. I never consider a p value merely under 0.05 as evidence of anything. I am not stupid. But I can assure you that, when I get a p value of 9.31323E-10, or even lower, I am absolutely sure, empirically, that what I am observing is real. R, the statistical software that I routinely use, does not even report p values under 2.2e-16, probably because at that level it's completely pointless to have a definite numeric value: you are already absolutely safe in rejecting the null hypothesis whatever the context. gpuccio
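(To make the arithmetic behind that threshold explicit, here is a tiny sketch of the bits-to-alpha conversion gpuccio describes; this is only the conversion step, not his full dFSCI procedure:)

# Sketch: converting a bit margin into an alpha level.
resources_bits = 120                     # stated probabilistic resources (earth, 5 billion years)
threshold_bits = 150                     # proposed dFSCI threshold
alpha = 2.0 ** -(threshold_bits - resources_bits)
print(alpha)                             # 2^-30, about 9.31e-10, the alpha level quoted above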
Neil and Jerad have stated the absurd. Repeating an infamous fallacy that has been very popular in the worst darwinist propaganda.
Where is my mathematical fallacy?
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed?
Sigh. As I've said several times already . . . if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not, then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred to such a single outcome. What you really should be asking is: what if it happened two times in a row. Or 3 out of 5 times. Your comment about whether I would wonder if Shakespeare ever existed is pretty insulting really. As is Barry's Tourette's dig.
By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies.
Gee thanks.
e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500 bit search space, that's empirically impossible.
No, it is not impossible. It IS very highly improbable.
f) This is the correct reasoning why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes.
Then please be very specific and state your claims cogently. And, if I've made a mathematical error, then please find it.
If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space!
That is a clever but incorrect restatement of my views. I'm tired of being misinterpreted and having words put in my mouth. Find something wrong with what I've said, be specific please. Jerad
Gpuccio Unfortunately you don't address the problems with Fisherian inference - you just declare that they are irrelevant (including the major objection that it answers the wrong question). Meanwhile you seem to be content to dismiss Bayesian inference on the grounds that it is hard to do the sums (even though it answers the right question). Do you want to do an easy calculation to answer the wrong question or a hard calculation to answer the right question? Mark Frank
To all: A few comments to try to clarify this important point. First of all, my compliments to groovamos (#35), who has very correctly stated the fundamental point. I would only add, for clarity, the following: a) Any individual sequence of 500 coin tosses obviously has the same probability of being the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that. b) As groovamos very correctly states, the probability that some sequence, one of the 2^500 possible ones, will be the outcome of a single experiment is very easy to compute: it is 1 (necessity). c) The problem here is that, among the 2^500 sequences, there are specific subsets that have some recognizable formal property. The subset "sequences where only one value is obtained 500 times" is made of two sequences: 500 heads and 500 tails. d) While there are certainly many other "subsets" more or less ordered or recognizable, the vast, vast majority, virtually the totality, of the 2^500 sequences will be of the random kind, with no special recognizable order. e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500 bit search space, that's empirically impossible. f) This is the correct reasoning why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept. IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes. If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space! By the way, Mark, the fallacy so well outlined by groovamos is also the fallacy that, certainly in good faith, but with not so good statistical and methodological clarity, you tried on me at the time of the famous dFSCI challenge. You may remember your "argument" about the random sequence that pointed to a set of papers in a database. A set defined by the numbers randomly obtained. As I hope you can see, the probability of getting a sequence, say, of 5 numbers from 1 to 1000 pointing to 5 items in a database where the items are numbered from 1 to 1000 is exactly 1. So, you may be clever in statistics, but being clever does not save us from error, when a cognitive bias is our strong motivator. gpuccio
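(A small simulation sketch of the subset comparison in point e), assuming plain pseudo-random tosses; the "ordered" subset here is just the two-member set from point c):)

# Sketch: every trial hits *some* exact 2^-500 sequence, yet the ordered
# subset {all heads, all tails} is never hit.
import random

n = 500
trials = 10_000
ordered_hits = 0
for _ in range(trials):
    heads = sum(random.randint(0, 1) for _ in range(n))
    if heads in (0, n):        # all tails or all heads
        ordered_hits += 1

print(ordered_hits)            # expected 0: the subset has probability 2/2^500 per trial
# Meanwhile, a sequence from the unordered bulk turns up on every single trial.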
Mark (#24): Please, compare this statement of yours: However, the conceptual problems are rather severe and they become very relevant when you are trying to tackle more philosophical subjects like ID. with this other one: The cost is – they can be hard to calculate and sometimes (not always) they require subjective estimates of the priors. That's exactly your problem when you use such Bayesian arguments to refute ID. In what you declare to be a "philosophical subject" (and I don't agree!), you propose to replace a method which is simple and vastly used in all empirical sciences with a method that requires "subjective estimates of the priors". That seems folly to me. Look at my treatment of dFSCI. It's simple, it's Fisherian, it's valid. You cannot accept it because of your priors, and so you shift to Bayesian objections. There is nothing good in that. Look at the absurd position of Neil and Jerad: they deny what is empirically evident, through a philosophical misunderstanding of probability. If these are the results of being Bayesian, I am very happy that I am a Fisherian. Your objections to the Fisherian method have really no relevance to correctly argued Fisherian testing in a real empirical context, such as the problem of protein information. In my dFSCI procedure, I compute the probabilistic resources of a system to reject the null that some specific amount of functional protein information could arise by chance in that system. Once the probabilistic resources are taken into account, it's enough to add enough bits to reach an extremely low alpha level (certainly not 0.05 or 0.01!) to be empirically sure that such an amount of functional protein information could not arise by chance in that system. There is nothing philosophical in that. Here we are dealing with definite discrete states (the protein sequences). The probability of reaching a specific subset is well defined by the ratio of the subset to the search space. Your objections do not apply. Neil and Jerad have stated the absurd. Repeating an infamous fallacy that has been very popular in the worst darwinist propaganda. Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed? Just to know. By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies. gpuccio
Barry:
Like the person suffering from Tourette’s they just don’t seem to be able to help themselves.
Just to pick up a point you might be interested in: we have good evidence to demonstrate that far from people with Tourette's being unable to help themselves, they do such a fantastic job of learning to control their tics that they perform better than the rest of us at tasks that involve suppressing instinctive responses, e.g. on the Stroop task, or on an anti-saccade task (where you have to look in the opposite direction to a visual cue). Compensatory Neural Reorganization in Tourette Syndrome Neuroscience isn't all bunk :) Elizabeth B Liddle
I think what KF is saying, Mark, is that the nearer a class of pattern is to the tails of a distribution, the less likely we are to draw one at random, and so if we do find one, it demands an explanation in the way that finding a pattern from the middle of the distribution would not. This means that if we only have a few trials, we are very unlikely to sample from the tails, and that if something is so unlikely as to require 2^500 trials to have any decent chance of finding it, then we aren't going to find it by blind search before we exhaust the number of possible trials in the universe. The more familiar way of saying the same thing would be to say that if your random sample has a mean and distribution that is very different from the mean and distribution of the population you postulated under your null, you can reject that null that your sample was randomly drawn from that population. So if we find a sample of functional sequences out of a vast population of sequences, the overwhelming majority of which are non-functional, we can reject the null that it is a random sample from that population. Elizabeth B Liddle
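(A quick sketch of that sampling intuition in Python: small random samples essentially never contain far-tail values, so finding one there counts against the "random draw from this population" null. The 4-sigma cutoff is just an illustrative choice:)

# Sketch: how often does a small random sample from a standard normal
# population contain a value beyond 4 standard deviations?
import random

experiments = 10_000
tail_hits = 0
for _ in range(experiments):
    sample = [random.gauss(0, 1) for _ in range(30)]   # a small sample of 30
    if any(abs(x) > 4 for x in sample):
        tail_hits += 1

print(tail_hits / experiments)   # roughly 30 * 6.3e-5, i.e. about 0.002: very rare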
KF re 36. I am quite confused by the point you are making but I will try my best.
Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics],
Sorry no - I am struggling to understand the point you are making.
and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor,
Yes - no problem with that.
thus also why the far tails would be unlikely to be captured in relatively small samples?
No less likely than any other equally small area on the chart e.g. a very thin strip in the middle.
(Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle in haystack effect?
I struggle to make head or tail of this sentence :-)
You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.)
Well no - because I am not sure what it is your are highlighting. Maybe you could write out your argument as a series of short simple sentences with no jargon and no abbreviations? That would really help me understand your point. Mark Frank
MF: Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics], and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor, thus also why the far tails would be unlikely to be captured in relatively small samples? (Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle in haystack effect? You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.) KF kairosfocus
Neil Rickert: Flip a coin 500 times. Write down the exact sequence that you got. We can say of that sequence, that it had a probability of (1/2)^500. It is a sequence that we would not expect to see even once. Yet we saw it. This is a common fallacy about probabilistic thinking. You are making one particular sequence as especially improbable, when all sequences are equally improbable. And since what you wrote down came from an actual sequence, you can see that highly improbable things can happen. Although it is highly improbable for any particular person to win the lottery, we regularly see people winning. Why the above is meaningful -- not: a coin toss of 500 trials will select from an outcome set of 2^500 members. The probability that a member of the set is selected is 1.0. What you are really saying (even though the words can be construed otherwise) is a masquerade of what is needed, by saying ANY PARTICULAR member of the set being selected is unexpected, or has a probability of (.5)^500. If you remove the word PARTICULAR from the previous, then the quirky English language we use prods (though not forcing) us to a drastically different interpretation, the one that has any meaning for the discussion. Worth repeating: the only interpretation having didactic meaning here for the discussion. And there is no "common fallacy" involved. Your statement then is only a rehashing of the statement: "The probability that a member of the set is selected is 1.0." Since this statement contains no new information, it is information-free, or in the context of our discussion, meaningless. BTW Neil: We have had a cooler than normal early June, cool nights, hot in the late afternoon. I tried to get you and Dr. Tour together at Rice U. and we have fabulous hotels in a city clearly emerging on the international scene. What happened? groovamos
(It took me so long to write my comment re Bayes, that the conversation has moved on to this thread, and I see that finally the Bayes story has emerged! Here is the comment I posted on the other thread:) I don't think I've ever seen a thread generate so much heat with so little actual fundamental disagreement! Almost everyone (including Sal, Eigenstate, Neil, Shallit, Jerad, and Barry) is correct. It's just that massive and inadvertent equivocation is going on regarding the word "probability". The compressibility thing is irrelevant. Where we all agree is that "special" sequences are vastly outnumbered by "non-special" sequences, however we define "special", whether it's the sequence I just generated yesterday in Excel, or highly compressible sequences, or sequences with extreme ratios of H:T, or whatever. It doesn't matter in what way a sequence is "special" as long as it was either deemed special before you started, or is in a clear class of "special" numbers that anyone would agree was cool. The definition of "special" (the Specification) is not the problem. The problem is that "probability" under a frequentist interpretation means something different than under a Bayesian interpretation, and we are sliding from the frequentist interpretation ("how likely is this event?"), which we start with, to a Bayesian interpretation ("what caused this event?"), which is what we want, but without noticing that we are doing so. Under the frequentist interpretation of probability, a probability distribution is simply a normalised frequency distribution - if you toss enough sequences, you can plot the frequency of each sequence, and get a nice histogram which you then normalise by dividing by the total number of observations to generate a "probability distribution". You can also compute it theoretically, but it still just gives you a normalised frequency distribution, albeit a theoretical one. In other words, a frequentist probability distribution, when applied to future events, simply tells you how frequently you can expect to observe that event. It therefore tells you how confident you can be (how probable it is) that the event will happen on your next try. The problem arises when we try to turn frequentist probabilities about future events into a measure of confidence about the cause of a past event. We are asking a frequency probability distribution to do a job it isn't built for. We are trying to turn a normalised frequency, which tells us how much confidence we can have in a future event, given some hypothesis, into a measure of confidence in some hypothesis concerning a past event. These are NOT THE SAME THING. So how do we convert our confidence about whether a future event will occur into a measure of confidence that a past event had a particular cause? To do so, we have to look beyond the reported event itself (the tossing of 500 heads), and include more data. Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal's confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.
So, let's say I set the prior probability that Sal is not honest, at something really very low (after all, in my experience, he seems to be a decent guy): let's say, p=.0001. And I put the probability of getting a "special" sequence at something fairly generous – let's say there are 1000 sequences of 500 coin tosses that I would seriously blink at, making the probability of getting one of them 1000/2^500. I'll call the observed sequence of heads S, and the hypothesis that Sal was dishonest, D. From Bayes theorem we have: P(D|S) = [P(S|D)*P(D)] / [P(S|D)*P(D) + P(S|~D)*P(~D)] where P(D|S) is what we actually want to know, which is the probability of Sal being Dishonest, given the observed Sequence. We can set the probability P(S|D) (i.e. the probability of a Special sequence given the hypothesis that Sal was Dishonest) as 1 (there's a tiny possibility he meant to be Dishonest, but forgot, and tossed honestly by mistake, but we can discount that for simplicity). We have already set the probability of D (Sal being Dishonest) as .0001. So we have: P(D|S) = [1*.0001] / [1*.0001 + 1000/2^500*(1-.0001)] Which is, as near as dammit, 1. In other words, despite the very low prior probability of Sal being dishonest, now that we have observed him claiming that he tossed 500 heads with a fair coin, the probability that he was being Dishonest is now a virtual certainty, even though throwing 500 Heads honestly is perfectly possible, entirely consistent with the Laws of Physics, and, indeed, the Laws of Statistics. Because the parameter P(S|~D) (the probability of the Sequence given not-Dishonesty) is so tiny, any realistic evaluation of P(~D) (the probability that Sal was not Dishonest), however great, is still going to make the second term in the denominator, P(S|~D)*P(~D), negligible, and the denominator always only very slightly larger than the numerator. Only if our confidence in Sal's integrity exceeds 500 bits will we be forced to conclude that the sequence could just as easily or more easily have been Just One Of Those Crazy Things that occasionally happen when a person tosses 500 fair coins honestly. In other words, the reason we know with near certainty that if we see 500 Heads tossed, the Tosser must have been Dishonest, is simply that Dishonest people are more common (frequent!) than tossing 500 Heads. It's so obvious, a child can see it, as indeed we all could. It's just that we don't notice the intuitive Bayesian reasoning we do to get there – which involves not only computing the prior probability of 500 Heads under the null of Fair Coin, Fairly Tossed, but also the prior probability of Honest Sal. Both of which we can do using Frequentist statistics, because they tell us about the future (hence "prior"). But to get the Posterior (the probability that a past event had one cause rather than another) we need to plug them into Bayes. The possibly unwelcome implication of this, for any inference about past events, is that when we try to estimate our confidence that a particular past event had a particular cause (whether it is a bacterial flagellum or a sequence of coin-tosses), we cannot simply estimate it from the observed frequency distribution of the data. We also need to factor in our degree of confidence in various causal hypotheses.
And that degree of confidence will depend on all kinds of things, including our personal experience, for example, of an unseen Designer altering our lives in apparently meaningful and physical ways (increasing our priors for the existence of Unseen Designers), our confidence in expertise, our confidence in witness reports, our experience of running phylogenetic analyses, or writing evolutionary algorithms. In other words, it’s subjective. That doesn’t mean it isn’t valid, but it does mean that we should be wary (on all sides!) of making over confident claims based on voodoo statistics in which frequentist predictions are transmogrified into Bayesian inferences without visible priors. Elizabeth B Liddle
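(Her arithmetic, written out as a minimal Python sketch with the same assumed inputs: a prior P(D) of 0.0001 and 1000 "special" sequences out of 2^500:)

# Sketch: Bayes' theorem with the numbers used above.
p_D = 0.0001                       # prior probability that Sal was Dishonest
p_S_given_D = 1.0                  # a Special sequence is certain if he cheated
p_S_given_notD = 1000 / 2 ** 500   # 1000 "special" sequences among 2^500

posterior = (p_S_given_D * p_D) / (p_S_given_D * p_D + p_S_given_notD * (1 - p_D))
print(posterior)                   # 1.0 to within floating-point precision

The second term of the denominator is about 3e-148, so the posterior is driven almost entirely by the prior on dishonesty, which is the point being made.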
Better link to Dimitrov e-book: 50 Nobel Laureates and other great scientists who believed in God by Tihomir Dimitrov http://www.nobelists.net/ bornagain77
It’s funny how often when one catches a Darwinist in really painful-to-watch idiocy, and call them on it, their response is something like “me no speaka the English.”
LOL! Eric Anderson
Jerad @ 27. It appears that you have no shame. Barry Arrington
corrected link: Founders of Modern Science Who Believe in GOD – Tihomir Dimitrov (pg. 222) http://www.academia.edu/2739607/Scientific_GOD_Journal bornagain77
KF 25 Yes thanks - I am familiar with Fisher and NP. I have a diploma in statistics and have had a strong interest in the foundations of hypothesis testing for many years. The article you pointed me to appears to give a nice introduction to both but I didn't have time to read it all in detail. Before taking this discussion any further let's check we are both talking about the same thing. I am debating the validity of Fisherian hypothesis testing as opposed to a Bayesian approach. Do you agree that is the issue and that it is relevant? If not, we should drop it immediately. Mark Frank
Contrary to what Einstein found to be miraculous, Jerad maintains that he should not be surprised at all that he is able to comprehend the universe. But alas, contrary to Jerad's complacency, Jerad's own atheistic/materialistic worldview, whether he wants to admit it or not, results in the epistemological failure of the entire enterprise of modern science to which he has paid such empty lip service:
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description) http://vimeo.com/32145998 BRUCE GORDON: Hawking's irrational arguments - October 2010 Excerpt: What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ The Absurdity of Inflation, String Theory and The Multiverse - Dr. Bruce Gordon - video http://vimeo.com/34468027
This 'lack of a guarantee' that our perceptions and reasoning in science are trustworthy in the first place even extends into evolutionary naturalism itself:
Scientific Peer Review is in Trouble: From Medical Science to Darwinism - Mike Keas - October 10, 2012 Excerpt: Survival is all that matters on evolutionary naturalism. Our evolving brains are more likely to give us useful fictions that promote survival rather than the truth about reality. Thus evolutionary naturalism undermines all rationality (including confidence in science itself). Renown philosopher Alvin Plantinga has argued against naturalism in this way (summary of that argument is linked on the site:). Or, if your short on time and patience to grasp Plantinga's nuanced argument, see if you can digest this thought from evolutionary cognitive psychologist Steve Pinker, who baldly states: "Our brains are shaped for fitness, not for truth; sometimes the truth is adaptive, sometimes it is not." Steven Pinker, evolutionary cognitive psychologist, How the Mind Works (W.W. Norton, 1997), p. 305. http://blogs.christianpost.com/science-and-faith/scientific-peer-review-is-in-trouble-from-medical-science-to-darwinism-12421/ Why No One (Can) Believe Atheism/Naturalism to be True - video Excerpt: "Since we are creatures of natural selection, we cannot totally trust our senses. Evolution only passes on traits that help a species survive, and not concerned with preserving traits that tell a species what is actually true about life." Richard Dawkins - quoted from "The God Delusion" http://www.youtube.com/watch?v=N4QFsKevTXs
The following interview is sadly comical, as an evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth that he is purporting to give in the interview (which begs the question of how he was able to come to that particular truthful realization in the first place, if neo-Darwinian evolution were actually true):
Evolutionary guru: Don't believe everything you think - October 2011 Interviewer: You could be deceiving yourself about that.(?) Evolutionary Psychologist: Absolutely. http://www.newscientist.com/article/mg21128335.300-evolutionary-guru-dont-believe-everything-you-think.html "But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?" - Charles Darwin - Letter To William Graham - July 3, 1881
also of note:
The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a patheist. About the former Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them failed to make a breakthrough. The latter came in medieval Christian context and just about within a hundred years from the availability of Aristotle's works in Latin.. As we will see below, the break-through that began science was a Christian commentary on Aristotle's De Caelo (On the Heavens).,, Modern experimental science was rendered possible, Jaki has shown, as a result of the Christian philosophical atmosphere of the Middle Ages. Although a talent for science was certainly present in the ancient world (for example in the design and construction of the Egyptian pyramids), nevertheless the philosophical and psychological climate was hostile to a self-sustaining scientific process. Thus science suffered still-births in the cultures of ancient China, India, Egypt and Babylonia. It also failed to come to fruition among the Maya, Incas and Aztecs of the Americas. Even though ancient Greece came closer to achieving a continuous scientific enterprise than any other ancient culture, science was not born there either. Science did not come to birth among the medieval Muslim heirs to Aristotle. …. The psychological climate of such ancient cultures, with their belief that the universe was infinite and time an endless repetition of historical cycles, was often either hopelessness or complacency (hardly what is needed to spur and sustain scientific progress); and in either case there was a failure to arrive at a belief in the existence of God the Creator and of creation itself as therefore rational and intelligible. Thus their inability to produce a self-sustaining scientific enterprise. If science suffered only stillbirths in ancient cultures, how did it come to its unique viable birth? The beginning of science as a fully fledged enterprise took place in relation to two important definitions of the Magisterium of the Church. The first was the definition at the Fourth Lateran Council in the year 1215, that the universe was created out of nothing at the beginning of time. The second magisterial statement was at the local level, enunciated by Bishop Stephen Tempier of Paris who, on March 7, 1277, condemned 219 Aristotelian propositions, so outlawing the deterministic and necessitarian views of creation. These statements of the teaching authority of the Church expressed an atmosphere in which faith in God had penetrated the medieval culture and given rise to philosophical consequences. The cosmos was seen as contingent in its existence and thus dependent on a divine choice which called it into being; the universe is also contingent in its nature and so God was free to create this particular form of world among an infinity of other possibilities. Thus the cosmos cannot be a necessary form of existence; and so it has to be approached by a posteriori investigation. The universe is also rational and so a coherent discourse can be made about it. 
Indeed the contingency and rationality of the cosmos are like two pillars supporting the Christian vision of the cosmos. http://www.columbia.edu/cu/augustine/a/science_origin.html Founders of Modern Science Who Believe in GOD - Tihomir Dimitrov http://www.scigod.com/index.php/sgj/article/viewFile/18/18
bornagain77
I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of "moral certainty." If you had, you would have learned something. You would have learned that "moral certainty" has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick itself.
You're right, I hadn't read the link. I have now. I've already said that if I got 500 heads I would be very suspicious and check everything out but if nothing was wrong I'd conclude a fluke result. That's an explanation that depends on existing causes without the need to invoke a designer. There is no need to fall back on 'beyond reasonable doubt'. We have a perfectly good explanation for getting any prespecified sequence on the first try: it just happened.
FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue.
That's my only issue. You want to escalate the discussion so that it traipses into other realms.
And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.)
Yup, I did address two points from different people in one post. Once again: if anyone can point out something mathematical that I've got wrong then I'll change my stance. Please restrict your criticisms to things I've actually said and addressed. Jerad
MF: There you go, dragging red herrings away to ad hominem-laced strawmen -- here the subtext of my imagined ignorance and/or stupidity such that you want GP to help you correct me. Kindly, see what I have clipped just above from Fisher's mouth, and ponder how a "natural" special zone like a far tail of a bell that is hard to hit by dropping darts scattering at random (relative to the ease of hitting the bulk) aptly illustrates the problem of catching the needle in the haystack with a small blind sample. KF kairosfocus
MF: Are you familiar with what Fisher actually did, which pivoted on areas under the curve beyond a given point, relativised into p-values? [Probability being turned into likelihood of an evenly scattered sample hitting a given fraction of the whole area. As in, exactly what the dart-dropping exercise puts in more intuitive terms?] As in, further, a reasonable blind sample will reliably gravitate to the bulk rather than the far tails? Hence, if, contrary to reasonable expectation on sampling we are where we should not expect to be, Fisher said: "either an exceptionally rare chance has occurred or the theory [--> he here means the model that would scatter results on the relevant bell curve] is not true." The NP discussion on type I/II errors etc. is post the relevant point. Kindly cf. the linked review article. KF kairosfocus
Gpuccio 18 Well glad you recognise that the Fisher/NP/Likelihood/Bayes issue is not a strawman. Maybe you can explain that to KF? As I said Fisherian techniques work because in a wide range of situations they lead to the same decision as a Bayesian approach and they are easier to use. However, the conceptual problems are rather severe and they become very relevant when you are trying to tackle more philosophical subjects like ID. Here are a few of the problems: * No justification for one rejection region over another. Clearly illustrated when you justify one-tail as opposed to 2-tail testing but actually applies more generally. * No justification for any particular significance level. Why 95% or 99% or 99.9%? * No proof that the same significance level represents the same level of evidence in any two situations - so there is no reason to suppose that 95% significance is a higher level of evidence than 90% significance in two different situations. * Can get two different significance levels from the same experiment with the same results depending on the experimenter's intentions! (See http://www.indiana.edu/~kruschke/articles/Kruschke2010TiCS.pdf) But perhaps most important of all - it measures the wrong thing! We want to know how probable the hypothesis is given the data. Fisher's method tells us only how probable it is that the data would have fallen into certain categories given the hypothesis. Bayesian approaches avoid all these problems - which seem to me to be worth avoiding and rather more substantial than an excuse to introduce my worldview. The cost is - they can be hard to calculate and sometimes (not always) they require subjective estimates of the priors. Mark Frank
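(Mark's fourth bullet is the one that tends to surprise people. Here is a standard illustration along the lines of the Kruschke paper he links, offered as a sketch: assume the data are 9 heads in 12 tosses and a one-tailed test of a fair coin, with scipy used for the tail sums:)

# Sketch: same data (9 heads out of 12 tosses), two different p-values,
# depending only on the experimenter's stopping intention.
from scipy.stats import binom, nbinom

# Intention 1: "toss exactly 12 times" -> binomial tail
p_fixed_n = binom.sf(8, 12, 0.5)       # P(at least 9 heads) ~ 0.073: not "significant"

# Intention 2: "toss until the 3rd tail" -> negative binomial tail
p_fixed_tails = nbinom.sf(8, 3, 0.5)   # P(at least 9 heads before the 3rd tail) ~ 0.033: "significant"

print(p_fixed_n, p_fixed_tails)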
KF 13
You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances;
I was only pointing out problems with Fisherian hypothesis testing. If Fisherian hypothesis testing is not relevant then I apologise - but then I have to wonder why you raised it. Mark Frank
BA: It seems Jerad et al need to make the acquaintance of the ordinary unprejudiced man at the Clapham bus stop. Or, of the following from Simon Greenleaf, in Evidence, vol I Ch 1, on the same basic point. KF kairosfocus
Jerad: See what I mean about tilting at strawmen -- as in, there you go again? FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue. Let me put it in somewhat symbolised terms, as saying the equivalent in English seems to make no impression:
1: Config spaces of possibilities, W, are partitioned into zones of interest that are naturally significant -- far tails, text strings in English rather than repetitions or typical random gibberish, etc -- which we can symbolise z1, z2, . . . zn, where
2: SUM on i (zi) is much, much, much less than W, putting us in the needle-in-the-haystack context.
3: Also, search resources lead to a credible blind and unguided sample size s that is likewise incredibly smaller than W.
4: So, it is highly predictable/reliable -- in cases where W = 2^500 to 2^1,000 or more, all but certain -- that a blind search of W of scope s [10^84 - 10^111 samples] will come from the overwhelming bulk of W, not the special zones in aggregate.
5: That is, for W relevantly large relative to s, the overwhelming likelihood is that blind searches will come from W - {SUM on i (zi)}, not from SUM on i (zi).
6: And so, if instead we see the opposite, the BEST, EMPIRICALLY WARRANTED EXPLANATION is that such arose by choice contingency [for relevant cases where this is the reasonable alternative], not chance.
7: Which is a design inference.
8: Where also, in relevant cases, requisites of specific function, e.g. as text in English, sharply constrain acceptable possible strings from W.
9: That is, requiring that SUM on i (zi) be much, much less than W is not an unreasonable criterion.
(And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.) KF kairosfocus
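For what it is worth, the arithmetic behind points 2-5 above can be checked directly; in the Python sketch below the aggregate zone size of 2^100 configurations is an illustrative assumption standing in for "sparse", and the sample size is the upper figure quoted above:

```python
# A quick numerical check of points 2-5 above.  The aggregate zone size is an
# illustrative assumption; the sample size is the generous upper figure quoted.
W      = 2 ** 500      # possible configurations of 500 bits
zones  = 2 ** 100      # assumed total size of all special zones z1 .. zn
sample = 10 ** 111     # assumed number of blind samples available

# Union bound: P(at least one blind sample lands in any zone) <= sample * zones / W
bound = sample * zones / W
print(bound)           # on the order of 4e-10, even with these generous figures
```

Shrinking the assumed zones, or the number of samples, only drives the bound further towards zero; the conclusion does not hinge on the particular figures chosen.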
Jerad @ 5. I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of "moral certainty." If you had, you would have learned something. You would have learned that "moral certainty" has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick in itself. Barry Arrington
I must say that I am really surprised by Neil Rickert. I did not expect such a position from him. From Jerad, on the other hand... gpuccio
Mark: I would say that Fisherian hypothesis testing works perfectly well in the empirical sciences, provided that it is applied with a correct methodology. The hypothesis testing procedure is perfectly correct and sound, but the methodology must be correct too. We have to ask reasonable questions, and the answers must be pertinent. Frankly, the only reason that I can see for your personal insistence (and that of others) on a Bayesian approach is that you use it only to introduce your personal worldview commitments (under the "noble" word of priors), computing arbitrary and implausible probabilities for all that you don't want to accept (such as the existence of non-physical conscious beings). If that is the only addition that a Bayesian approach can offer us in this context, I gladly leave it to you. I am happy to discuss my and others' worldviews, but I will certainly not do that in terms of "probabilities". gpuccio
Don’t you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. (Or, don’t you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are “fair” in debate — “fair” on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make ‘right.’ Do you really want to go there?) KF
I started by responding to a post from Saturday discussing the probability of getting a result 22 standard deviations from the mean of a binomial distribution. That's all I'm doing. And defending what I've said when others have brought it up again in other threads. I'm happy to address partitioned configuration spaces if you want. As far as I can see no one has actually been able to show that my mathematics is wrong. Some have attacked a strawman of what I've said. And there's been a certain amount of abuse (Jerad's DDS . . . ) which I'm doing my best to ignore. Perhaps you'd like to caution some of the other commenters about their tone and correct their mathematical errors.
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions:
Since you can't prove that negative it's just an assertion on your part, a hypothesis that science has no need of. I look at the universe and see chaos, destruction and waste and, yes, some beauty and order. But, from a naturalistic point of view, if there were no order then I wouldn't be here to discover it. That does not mean that I can look backwards and anthropomorphically say things were/are designed. We live on this planet in this solar system in this galaxy because it happens to be one (of probably billions) that has the right combination of conditions to foster the beginning of life. But there are many, many, many other planets and solar systems where the conditions are completely hostile. If that meteor hadn't helped doom the dinosaurs, the human race might never have existed at all. Stuff happens, all the time, every day. Sometimes there's an amazing coincidence or synchronicity that makes you stop in awe. Happens to me all the time. There's no magic director back in the studio bending events so certain things happen. You are going to get coincidences and really, really improbable things happening. Jerad
Jerad repeats the oft repeated false mantra of materialists/atheists:
Supposing we live in a Theistic universe is not science though.
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions. A few quick notes to that effect:
John Lennox - Science Is Impossible Without God - Quotes - video remix
http://www.metacafe.com/watch/6287271/

Not the God of the Gaps, But the Whole Show - John Lennox - April 2012
Excerpt: God is not a "God of the gaps", he is God of the whole show.
http://www.christianpost.com/news/the-god-particle-not-the-god-of-the-gaps-but-the-whole-show-80307/

Philosopher Sticks Up for God
Excerpt: Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, “is vastly more hospitable to science than naturalism,” with its random process of natural selection, he (Plantinga) writes. “Indeed, it is theism, not naturalism, that deserves to be called ‘the scientific worldview.’”
http://www.nytimes.com/2011/12/14/books/alvin-plantingas-new-book-on-god-and-science.html?_r=1&pagewanted=all

"You find it strange that I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way.. the kind of order created by Newton's theory of gravitation, for example, is wholly different. Even if a man proposes the axioms of the theory, the success of such a project presupposes a high degree of ordering of the objective world, and this could not be expected a priori. That is the 'miracle' which is constantly reinforced as our knowledge expands." Albert Einstein - Goldman - Letters to Solovine p 131.

Comprehensibility of the world - April 4, 2013
Excerpt:,,,So, for materialism, the Einstein’s question remains unanswered. Logic and math (that is fully based on logic), to be so effective, must be universal truths. If they are only states of the brain of one or more individuals – as materialists maintain – they cannot be universal at all. Universal truths must be objective and absolute, not just subjective and relative. Only in this way can they be shared among all intelligent beings.,,, ,,,Bottom line: without an absolute Truth, (there would be) no logic, no mathematics, no beings, no knowledge by beings, no science, no comprehensibility of the world whatsoever.
https://uncommondesc.wpengine.com/mathematics/comprehensibility-of-the-world/

The Great Debate: Does God Exist? - Justin Holcomb - audio of the 1985 debate available on the site
Excerpt: The transcendental proof for God’s existence is that without Him it is impossible to prove anything. The atheist worldview is irrational and cannot consistently provide the preconditions of intelligible experience, science, logic, or morality. The atheist worldview cannot allow for laws of logic, the uniformity of nature, the ability for the mind to understand the world, and moral absolutes. In that sense the atheist worldview cannot account for our debate tonight.,,,
http://theresurgence.com/2012/01/17/the-great-debate-does-god-exist

Random Chaos vs. Uniformity Of Nature - Presuppositional Apologetics - video
http://www.metacafe.com/w/6853139

"Clearly then no scientific cosmology, which of necessity must be highly mathematical, can have its proof of consistency within itself as far as mathematics go. In absence of such consistency, all mathematical models, all theories of elementary particles, including the theory of quarks and gluons...fall inherently short of being that theory which shows in virtue of its a priori truth that the world can only be what it is and nothing else. This is true even if the theory happened to account for perfect accuracy for all phenomena of the physical world known at a particular time." Stanley Jaki - Cosmos and Creator - 1980, pg. 49

Taking God Out of the Equation - Biblical Worldview - by Ron Tagliapietra - January 1, 2012
Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties.
1. Validity . . . all conclusions are reached by valid reasoning.
2. Consistency . . . no conclusions contradict any other conclusions.
3. Completeness . . . all statements made in the system are either true or false.
The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem. Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation. Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3).
http://www.answersingenesis.org/articles/am/v7/n1/equation#
etc., etc. bornagain77
PPS: Those interested in following up the rabbit-trail discussion may wish to go here for a review. I am highlighting that, unlike arbitrarily chosen target zones with no natural significance, the far tails of a bell distribution are naturally evident special zones; they illustrate the effect of partitioning a config space into clusters of drastically different statistical weight and then searching blindly with restricted resources. kairosfocus
PS: Remember, target zones of interest are not merely arbitrarily chosen groups of outcomes, another fallacy in the strawman arguments above. E.g. functional configs such as 72+ ASCII character text in English are readily recognisable and distinct from either (i) repeating short patterns: THETHE. . . . THE, or (ii) typical, expected at random outcomes: GHJDXTOU%&OUHYER&KLJGUD . . . HTUI. There seems to be a willful refusal to accept the reality of functionally specific configs showing functional sequence complexity, FSC, that are readily observable as distinct from RSC or OSC. kairosfocus
MF: You are demonstrably wrong, and your snipping out of context allowed you to set up a strawman and knock it over. You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances; that is an easily observed fact, as doing the darts and charts exercise will rapidly show EMPIRICALLY -- a 4 - 5 SD tail (as discussed) will be very thin indeed. Discussions of NP etc and the shaving off of a slice from the bulk -- which has no natural special significance here -- serve only as a red herring distraction from a point that is quite plain and easily shown empirically. Thence, you seem to have used the red herring led out to a strawman to duck the more direct issue on the table, where this applies to the sort of beyond astronomical config spaces and relatively tiny special, known attractive target zones and small blind samples we are dealing with. The suspect pattern continues. Do better next time, please. KF kairosfocus
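For reference, the thinness of a 4 - 5 SD tail is easy to put a number on; this one-liner-style Python check is included only to quantify "very thin indeed" (scipy is assumed):

```python
from scipy.stats import norm

# Upper-tail areas of a standard normal beyond 4 and 5 standard deviations
print(norm.sf(4))   # ~3.2e-05
print(norm.sf(5))   # ~2.9e-07
```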
Jerad: Don't you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. (Or, don't you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are "fair" in debate -- "fair" on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make 'right.' Do you really want to go there?) KF kairosfocus
It probably does not help, that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice.
Fisherian hypothesis testing has fallen out of fashion because it has become widely recognised that it is wrong. It only worked for all those years because in a wide range of circumstances it leads to much the same decisions as a Bayesian approach and it was much easier to use. With the advent of computers and clearer thinking about the foundations of statistics this is less and less necessary. In fact in many contexts pure Fisherian hypothesis testing fell out of favour several decades ago and was superseded by the Neyman-Pearson approach, which requires an alternative hypothesis to be clearly articulated (and is thus moving in the direction of Bayes). Without the NP approach you cannot calculate such vital parameters as the power of the test.

Whether you use a pure Fisherian or NP approach there are deep conceptual problems. To take your example of throwing darts at a Gaussian distribution. What that shows is that you are more likely to get a result between 0 and 1 SD than between 1 and 2 SD and so on. However, this does not in itself provide a justification for the rejection region being at the extremities. Fisherian thinking justifies the rejection region on the basis of the probability of hitting it being less than the significance level. You can draw such a region anywhere on your Gaussian distribution. Near the middle it would be a much narrower region than it would be near the tails, but it would still fall below the significance level.

The only reason why using the tails of the distribution as a rejection region usually works is because the alternative hypothesis almost always gives a greater likelihood to this area than it does to the centre. But there has to be an alternative hypothesis. Indeed in classical hypothesis testing it is common to decide that the rejection region is just one tail and not both - single-tailed hypothesis testing. How is this decision made? By deciding that the only plausible alternative hypotheses lie on one side of the distribution and not the other.

I hope you are not going to ignore this corrective :-) Mark Frank
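The point about rejection regions is easy to demonstrate numerically; the thin central band in the Python sketch below is a deliberately contrived construction for illustration, and scipy is assumed:

```python
from scipy.stats import norm

alpha = 0.05

# Conventional two-tailed rejection region for a standard normal: |z| > 1.96
two_tail_cut = norm.ppf(1 - alpha / 2)

# A contrived alternative: a thin band around zero that also has probability
# alpha under the null hypothesis (purely for illustration)
half_width = norm.ppf(0.5 + alpha / 2)                 # P(-h < Z < h) = alpha
p_band = norm.cdf(half_width) - norm.cdf(-half_width)

print(two_tail_cut)   # ~1.96
print(half_width)     # ~0.063, so the band is (-0.063, 0.063)
print(p_band)         # ~0.05, the same "significance" as the usual tails
```

Under the null hypothesis both regions are hit five percent of the time; only when an alternative is brought in (one that puts far more likelihood on the tails than on the thin central band) does the conventional choice of region become justified, which is the point being made above.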
To be able to have a ‘fair coin flip’ in the first place presupposes that we live in a Theistic universe where what we perceive to be random events are bounded within overriding constraints that prevent complete chaos from happening. Chaos such as the infamous Boltzmann’s brain that would result in a universe where infinite randomness was allowed to rule supreme with no constraint.
Supposing we live in a Theistic universe is not science though.
FYI, BA spoke of moral certainty in a PROBABILISTIC mathematical context, where it is relevant on the application of the calcs. KF
That makes no sense to me whatsoever. If something is possible and it happens and it looks like there was no intervention or bias then what do morals have to do with it? Jerad
F/N: let me re-post a clip from comment 48 in the previous thread, which was studiously ignored by Jerad, KeithS, Neil Rickert et al, in haste to make their favourite talking points.

_______________________

[Clipping 48 in the DDS mendacity thread, for record:]

>>It seems people have a major problem appreciating: (i) configuration spaces clustered into partitions of vastly unequal statistical weight, and (ii) BLIND sampling/searching of populations under these circumstances.

It probably does not help that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes, it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice.

It is all about needles and haystacks.

Let’s start with a version of an example I have used previously, a large plot of a Gaussian distribution using a sheet of bristol board or the like, backed by a sheet of bagasse board or the like. Mark it into 1-SD wide stripes, say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board.

Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION.

As a result, as we build up the set of dart-drops, we will see a pattern, where the likelihood of getting hit is proportionate to area, as should be obvious. That immediately means that the bulk of the distribution, near the mean value peak, is far more likely to be hit than the far tails.

For exactly the same reason, if one blindly reaches into a haystack and pulls a handful, one is going to have a hard time finding a needle in it. The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial.

In short, a small sample of a very large space that is blindly taken will, by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones. (BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.)

The point of Fisherian testing is that skirts are special zones and take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence that a suspicious sample is not by chance, based on its tendency to go for the far skirt.

How does this tie into the design inference? By virtue of the analysis of config spaces -- populations of possibilities for configurations -- which can have W states, within which we then look at small, special, specific zones T. Those zones T are at the same time the sort of things that designers may want to target, clusters of configs that do interesting things, like spell out strings of at least 72 - 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 - 1,000] or the like.

500 bits takes up 2^500 possibilities, or 3.27*10^150. 1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities. To give an idea of just how large these numbers are, I took up the former limit, and noted that our solar system’s 10^57 atoms (by far and away mostly H and He in the sun, but never mind) can, over the solar system's lifespan, go through a certain number of ionic chemical reaction time states, each taking about 10^-14 s.
Where our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. (Now you know why warp drives or space folding etc are so prominent in Sci Fi literature.)

Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s, for a reasonable estimate of the solar system’s lifespan. Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy. Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it etc.

Notice, there is no particular shortage of special zones here, just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw. Now, your task, should you choose to accept it, is to take a one-straw sized blind sample of the whole.

Intuition, backed up by sampling theory -- without need to worry over making debatable probability calculations -- will tell us the result, straight off. By overwhelming likelihood, we would sample only straw.

That is why the instinct that getting 500 H’s in a row, or 500 T’s, or alternating H’s and T’s, or ASCII code for a 72-letter sequence in English, etc, is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound. And this is a simple, toy example case of a design inference on FSCO/I as sign. A very reliable inference indeed, as is backed up by literally billions of cases in point.

Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors. Over and over and over again in fact. And in fact, here is Wm A Dembski in NFL:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.)

Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread? First, it cannot be because of lack of capacity to access and understand the actual argument; we are dealing with those with training in relevant disciplines. Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years. Nor is such a consistent error explicable by blind chance; chance would make them get it right some of the time, by any reasonable finding, given their background.

So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it.

Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion. Mendacity in one word. If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake. The alignment is too perfect. Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that. Sad, but not surprising. This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating. Where, enough is enough.>>

______________

Now, just prove me wrong by addressing the merits with seriousness. But I predict that we will see yet more of the all too commonly seen willful ignoring or evasive side tracking. Please, please, please, prove me wrong. KF kairosfocus
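The headline numbers in the clip are easy to reproduce; in the Python sketch below the 10^17-second solar-system lifespan is my own rounded assumption, while the atom count and the 10^-14 s per state follow the text:

```python
from math import log10

# Reproducing the headline figures from the clip above.  The 1e17-second
# solar-system lifespan is a rounded assumption; the other inputs follow the text.
print(float(2 ** 500))           # ~3.27e150 configurations of 500 bits
print(float(2 ** 1000))          # ~1.07e301 configurations of 1000 bits

atoms      = 10 ** 57            # atoms in the solar system (as stated above)
rate       = 10 ** 14            # states observed per atom per second (10^-14 s each)
lifespan_s = 10 ** 17            # assumed lifespan, roughly a few billion years

observations = atoms * rate * lifespan_s
print(log10(observations))       # 88.0: about 1e88 observations in total
print(observations / 2 ** 500)   # ~3e-63: fraction of the space that could ever be sampled
```

This is the arithmetic behind the one-straw-against-a-galaxy-sized-haystack comparison: roughly 10^88 observations set against roughly 3*10^150 possibilities.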
Jerad: FYI, BA spoke of moral certainty in a PROBABILISTIC mathematical context, where it is relevant on the application of the calcs. KF kairosfocus
To be able to have a 'fair coin flip' in the first place presupposes that we live in a Theistic universe where what we perceive to be random events are bounded within overriding constraints that prevent complete chaos from happening. Chaos such as the infamous Boltzmann's brain that would result in a universe where infinite randomness was allowed to rule supreme with no constraint.

Proverbs 16:33 The lot is cast into the lap, but its every decision is from the LORD.

Evolution and the Illusion of Randomness – Talbott – Fall 2011
Excerpt: In the case of evolution, I picture Dennett and Dawkins filling the blackboard with their vivid descriptions of living, highly regulated, coordinated, integrated, and intensely meaningful biological processes, and then inserting a small, mysterious gap in the middle, along with the words, “Here something random occurs.” This “something random” looks every bit as wishful as the appeal to a miracle. It is the central miracle in a gospel of meaninglessness, a “Randomness of the gaps,” demanding an extraordinarily blind faith. At the very least, we have a right to ask, “Can you be a little more explicit here?”
http://www.thenewatlantis.com/publications/evolution-and-the-illusion-of-randomness

Randomness - Entropic and Quantum
https://docs.google.com/document/d/1St4Rl5__iKFraUBfSZCeRV6sNcW5xy6lgcqqKifO9c8/edit

see also presuppositional apologetics bornagain77
If you get 500 (or 50 or 5000) heads in a row you should dismiss the hypothesis that this is a fair coin toss. But not because it is so improbable. That cannot be the case because, as we all accept, all sequences are equally improbable. The reason is that there are so many other possible hypotheses which give that outcome a vastly greater likelihood - some of which involve some element of design, some of which do not. For example, the tossing method (whatever it is) might have got stuck, or it might be a coin with two heads, or it might be some trick of Derren Brown. Mark Frank
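That comparison can be made numeric with a likelihood ratio; the Python sketch below uses the two-headed-coin alternative named above (the choice of alternative is illustrative, not exhaustive):

```python
from math import log10

# Likelihood ratio for the observation "500 heads in 500 tosses" under two of
# the hypotheses mentioned above.
lik_fair       = 0.5 ** 500    # P(500 heads | fair coin, fairly tossed)
lik_two_headed = 1.0           # P(500 heads | two-headed coin)

ratio = lik_two_headed / lik_fair
print(log10(ratio))            # ~150.5: the alternative is ~10^150 times better supported
```

Every particular sequence is equally improbable under the fair-coin hypothesis; what singles out all-heads is that there are rival hypotheses on the table which make that particular outcome enormously more likely.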
Barry, you asked me about a moral certainty in a mathematical context.
Is there ANY number of heads in a row that would satisfy you. Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?
That is what I was questioning. My morals are fine but I don't tend to apply them in mathematical situations. Jerad
I commend the candor of all liars. How else would we know they are liars? Mung
I commend Neil and Jerad's candor. Of course chance could be an explanation, but if one will admit odds that remote, one could also admit the possibility of God's existence with odds comparably remote. Jerry Coyne rates himself a 6.9 on a scale of 7 for the certainty of God's non-existence, so by his own estimation God has roughly a 1.47% chance of existing. So Jerry Coyne would sooner believe that God exists than that a set of 500 coins coming up all heads was the result of chance. scordova
I hope they do :) Shall I post the address of my local casino? Mung
I hope Jerad and Neil don't play poker for money. Blue_Savannah
