Uncommon Descent Serving The Intelligent Design Community

Jerad and Neil Rickert Double Down


In the combox to my last post, Jerad and Neil join to give us a truly pristine example of Darwinist Derangement Syndrome in action.  Like a person suffering from Tourette’s, they just don’t seem to be able to help themselves.

Here are the money quotes:

Barry:  “The probability of [500 heads in a row] actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.”

Sal to Neil:  “But to be clear, do you think 500 fair coins heads violates the chance hypothesis?”

Neil:  “If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.”

Jerad chimes in:  “There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.” And “But if 500 Hs did happen it’s not an indication of design.”

I do not believe Jerad and Neil are invincibly stupid.  They must know that what they are saying is blithering nonsense.  They are, of course, being piggishly obstinate, and I will not argue with them.  But who needs to argue?  When one’s opponents say such outlandish things one wins by default.
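The scale of the number at issue is easy to check directly. A minimal Python sketch (mine, not from any of the participants), computing the probability of one specific 500-flip sequence from a fair coin:

```python
from fractions import Fraction

# Probability of one particular sequence of 500 fair-coin flips,
# e.g. all heads: (1/2)^500.
p_all_heads = Fraction(1, 2) ** 500

# Roughly 3.05e-151, far below any conventional significance threshold.
print(float(p_all_heads))
```

Note that, as Neil says, every *specific* sequence has this same probability; the dispute is over whether the all-heads outcome nevertheless warrants rejecting the fair-coin hypothesis.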

And I can’t resist adding this one last example of DDS:

Barry to Jerad:  “Is there ANY number of heads in a row that would satisfy you? Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?”

Jerad:  “A moral certainty? What does that mean?”

It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls them on it, the response is something like “me no speaka the English.”

Jerad, let me help you out:  http://en.wikipedia.org/wiki/Moral_certainty

Comments
But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing).
Well, I guess it's a good thing no one is making that claim!! As far as evolution is concerned, only the genetic mutations and some environmental conditions and culls are random.

Jerad
June 26, 2013, 10:33 PM PDT
gpuccio:
It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don’t like that, but really I don’t believe that the ID folks can be considered responsible for that.
First: nobody has refuted ID. Second: I think that ID folks are at the very least partially responsible. Consider, for example, the notorious Wedge document. Third: Creationist "science" journals, at least, have a long history of requiring that contributors sign up to a statement of faith, and at least some prominent ID proponents belong to academic institutions that require such a commitment. The same is not true of what you call "the academy". Even Dembski was made to retract a statement he made about the Flood by his employer. Behe, on the other hand, remains employed at an "academy" institution. Having said all that: it is time the war ended. That is why I started my own site - so that we could try to get past the tribalism and down to what really divides (and often, to our surprise, unites) us. I'm not always successful in suppressing the skirmishes, but I think we do pretty well. I'd be honoured if you would occasionally drop by.

Elizabeth B Liddle
June 26, 2013, 8:27 AM PDT
Mark:
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits.
This. And I'd add that it's what ID proponents do all the time, particularly when they express astonishment that materialists should believe something so unlikely! There are always far more unrejected models from a Fisherian test than rejected models. How we decide between them depends on how much weight we give those unrejected alternatives. The great thing about using a Bayesian approach is that it forces you to make your priors explicit. The result is of course less conclusive, but so it should be. Bayes forces us to confront what we still do not know. It stops us making "of the gaps" arguments, whether for materialist or non-materialist explanations, and, above all, tells us the probability that we are interested in: that our hypothesis is correct. A Fisher p value, by contrast, simply tells us the probability of our data, given the null. Not very informative, unless we have an extremely restricted relevant null (such as "fair coin")!

Elizabeth B Liddle
June 26, 2013, 8:17 AM PDT
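Elizabeth's point about explicit priors can be made concrete with Bayes' rule. In the sketch below, the prior for a rigged coin (1 in a million) is an assumed, illustrative number, not anything from the thread; the point is that even a tiny prior for rigging swamps the fair-coin hypothesis once 500 heads are observed.

```python
from fractions import Fraction

# H0: fair coin.  H1: coin/process rigged to always produce heads.
prior_rigged = Fraction(1, 10**6)   # assumed prior: rigging considered very unlikely
prior_fair = 1 - prior_rigged

# Likelihood of observing 500 heads under each hypothesis.
lik_fair = Fraction(1, 2) ** 500    # about 3e-151
lik_rigged = Fraction(1)            # a rigged coin always shows heads

# Posterior odds in favour of rigging.
posterior_odds = (prior_rigged * lik_rigged) / (prior_fair * lik_fair)
print(posterior_odds > 10**100)  # True: rigging is astronomically favoured
```

This is just the comparative reasoning described in the thread: the data are weighed under both hypotheses, not only under the null.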
From a purely statistical point of view, 1 instance of 500 heads is practically required, the same as 1 instance of 500 (consecutive) tails. But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing). During the Vietnam War, an American infantryman who was aiming at a Viet Cong guerrilla had the odd experience of "catching" a bullet from the guerrilla straight down the barrel of his M16. Since the 7.62mm round is larger than the 5.56mm barrel, it plugged the end. I've seen the photograph. Considering the small size of both the bullet and the gun barrel and the very precise angular alignments required, the probability of this happening is infinitesimally small. But billions of bullets were fired over a period of many years. So, odd things happen every day by chance. But it's been a long time since we stopped believing that weather occurs randomly, or death from infection, or the alignment of the Sun and Moon to produce an eclipse.

mahuna
June 26, 2013, 8:08 AM PDT
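mahuna's "billions of bullets" observation is the standard many-trials effect: an event with a tiny per-trial probability becomes likely given enough trials. The figures below are made up purely for illustration.

```python
# Probability of at least one occurrence in n independent trials:
# P(at least one) = 1 - (1 - p)^n
p = 1e-9            # assumed per-shot probability of the barrel-plugging collision
n = 5_000_000_000   # assumed number of rounds fired over the war

p_at_least_once = 1 - (1 - p) ** n
print(p_at_least_once)  # close to 1: the "impossible" event becomes near-certain
```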
gpuccio,
I want to thank you for your contribution to this discussion, which has been constructive and stimulating.
I think we showed what can be accomplished: a greater understanding. And I'm pleased to have talked with you and, hopefully, to have helped some others understand both our positions.

Jerad
June 26, 2013, 12:32 AM PDT
Gpuccio, I am delighted to agree to disagree on so many points. And you are remarkable in being one of the few IDists who is prepared to examine what the ID hypothesis entails. But I am disappointed in this:
I believe that, when I use Fisherian reasonings here, I know what I do. I will accept any valid objection to my specific reasonings, while I am not interested in a generic refusal of Fisherian reasoning in itself.
It seems that as long as you know how to use the Fisher process, and it seems to you to be working in practice, you are not interested in why it is successful. This means you are always at risk of coming to wrong conclusions (and in a stochastic world you may not know they are wrong). As I said in #67, most published research findings are wrong, and the use of Fisher processes is behind a lot of it. Luckily, most published research findings are also ignored. You write:
There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don’t.
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits.

Mark Frank
June 25, 2013, 11:41 PM PDT
Jerad: I want to thank you for your contribution to this discussion, which has been constructive and stimulating. My final summary just wanted to stress the essential difference between our positions, not to deny the many things we have agreed upon. Yes, if you want to put it that way, I am absolutely "biased" against accepting order, and especially function, that is completely improbable, as a "fluke". That is, IMO, against any intuition of truth and any common sense. I will not do it. My epistemology is obviously different from yours. Only, I would not call that a "bias", but simply an explicit cognitive choice. If in doubt about the terminology, we can always turn Bayesian and call it a "prior" :) .

My alternative hypothesis, for order and function, has always been design: the intervention of consciousness. I have detailed the many positive reasons why that is perfectly reasonable, IMO. However, for simple "order" many other alternative hypotheses are certainly viable, and must be investigated thoroughly. It is my firm conviction that for complex function, instead, any non-design explanation will be found to be utterly lacking. The neo-darwinian theory is a good example of that failure. I am really sure, in my heart and mind, that only consciousness can generate dFSCI.

Finally, I would not be so disappointed that we have been, in a way, "left alone" here. It is a general, and perfectly acceptable, fact that as a discussion becomes more precise and technical (and therefore, IMO, much more interesting and valid) the "general public" becomes less interested. No harm in that. That's why I am, always have been, and always will be, a "minority guy". This is a blog. While I personally refrain from discussing here topics that are not directly or indirectly pertinent to the ID theory (especially religious topics), it's perfectly fine with me that others love to do that. But the ID discussion is another thing.
It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don't like that, but really I don't believe that the ID folks can be considered responsible for that. ID is a very powerful scientific paradigm. It will never be "brushed away". Either the academy accepts to seriously give it the intellectual role it deserves, or the war will go on, and it will ever more become a war against the academy. That is the final result of dogmatism and intellectual intolerance.

gpuccio
June 25, 2013, 11:12 PM PDT
gpuccio,
Just one thing: to reject the null, you do not necessarily need one alternative hypothesis. In general, the alternative hypothesis is simply "not H0", that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don't. So for me, once the null is rejected, the duty remains to choose the best "non-random" explanation, as I have tried to show in my example. Unfair coins, a man in the room, some trick from the child at the input, or some strange physical force are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no longer useful at this level.
We agree on much. I'm still not sure what your bottom line alternate hypothesis is, when all other explanations have been ruled out. But this has been one of my criticisms for a long time. And it's odd that you choose to pick the best non-random explanation. Sounds like you have a bias!!
“You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy.” still summarizes well our differences.
I find that a bit disappointing after our informative and insightful conversation, as it brushes aside the huge amount that we agree on: that we'd both do our utmost to try and root out any detectable bias. And I find it disappointing that you cannot state a clear final conclusion. "There must be another explanation" is pretty wishy-washy, but that's your call. What I find very disappointing is that most of the commentators at UD have lost interest in the whole discussion and are now off chasing other perceived slurs against ID or imagined examples of stupid science. There was lots of shouting and finger pointing and then off they go, not willing to stick around for some substantive conversation. You seem actually interested in learning, but I'm not so sure about many of your fellows.

Jerad
June 25, 2013, 10:31 PM PDT
Mark: I find your last post(s) very reasonable. I can agree about almost all of it. To be more clear: Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking – although it also goes badly wrong under a wide range of conditions as well. I believe it is essentially a problem of correct methodology, whatever statistical approach one uses. I believe that, when I use Fisherian reasonings here, I know what I do. I will accept any valid objection to my specific reasonings, while I am not interested in a generic refusal of Fisherian reasoning in itself. The most important point about Bayesian thinking is that it is comparative. It requires you to think not only about the hypothesis you are disproving, but also about the alternative you propose instead. IDists don’t like this approach because it entails exploring the details of the design hypothesis. But as a competent statistician you will know that it is poor practice to dismiss H0 without articulating the alternative hypothesis. You don’t have to adopt a completely Bayesian approach (although that would be ideal). Even Neyman-Pearson requires articulating the alternative hypothesis. OK, I am happy that I do not have to adopt a Bayesian approach (except when computing specificity and sensitivity). And I perfectly agree on providing one or more alternative hypotheses. But, for me, the null of a random effect is rejected on statistical grounds, while the "positive" explanation must be compared with a reasoning that goes well beyond any statistical consideration, Bayesian or not. I believe this is a fundamental difference in our approach. As you can see, if you have read my example of the dark room, I have proposed many possible necessity explanations for the 500 heads sequence in that scenario, and choosing between them requires a lot of scientific reasoning, and will in some way be subjective in the end. 
That's why, in my epistemology, scientific theories are chosen by each one of us according to their being "the best explanation" for the person who chooses it. That's why many different scientific explanations of the same facts can happily live together, for shorter or longer spans of time, passionately "defended" by different groups of followers. That's exactly my idea of science.
The answer is simple and probably acceptable to you. The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet. Therefore, it is plausible that someone might want to make the string come out that way for their purposes. It may not be very likely that such a person exists and that they could fiddle the results – but it only has to be marginally likely to overwhelm the hypothesis that it was a fair coin. But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined but seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated – but if it had then I suspect it would be absurdly implausible.
Yes, I can accept most of that, except obviously the last two statements. In particular, I like very much your reference to consciousness as the origin of specification: "The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet." I think you know my views, but as you give me an occasion to summarize them, I will do that as an answer to your last two statements: 1) I would definitely say that the RV part of the neo-darwinian hypothesis is perfectly comparable to the fair coin hypothesis (excluding the effect of NS). It is a random walk, and not a coin toss, but the distribution of probabilities for unrelated states is grossly uniform, as I have tried to show many times. That part, therefore, can be accepted or rejected as a null, but we need a metric to do that. I believe that dFSCI is a valid metric for that. Please, consider that such an "evaluation of the null" can be done not only for a whole transition to a new functional protein, but also for the steps of that transition, if and when those steps are explicitly offered (I mean, obviously, the naturally selectable intermediaries). IOWs, the dFSCI can be applied to any section of the algorithm which implies a purely random variation, and that's why it is a very valuable concept. I have many times expressed many reasons to reject NS as a valid component of the process, at least for basic protein domains, most recently in the discussion with Elizabeth on another thread here. I hope you have read those posts. 2) I hope you can admit that I have always tried to detail my alternative hypothesis as much as possible here. 
I believe that the empirical observation that only humans seem to be able to generate dFSCI, and in great quantity, while the rest of the universe seems incapable of that (always suspending our judgement about biological information) has a simple fundamental explanation: dFSCI originates only in subjective conscious processes, including the recognition of meaning, the feeling of purpose, and the ability to output free actions. That's why I always relate ID to the problem of consciousness, and that's why my "faith" in ID is so related to my convinced rejection of the whole strong AI theory. The relation of the design process to those subjective experiences is certainly subjectively confirmed by what we can observe in ourselves when we design things. It is objectively confirmed, although only indirectly, by the unique relationship between conscious design processes and the objective property of dFSCI in objects. So, my alternative hypothesis is simple: if an object exhibits dFSCI, the null hypothesis of a random generation of that information can be safely rejected. If reasonable necessity explanations of other kinds are not available, the best explanation is that some conscious being outputted that specific functional form to the object. I have gone to greater detail, many times, stating: a) That IMO for biological information humans are not a viable answer, and the existing data suggest that the designer(s) could be some non-physical conscious being. Aliens are an alternative, but I am not a fan of that theory. b) That the existence of non-physical conscious beings has been believed by most human beings for most of recorded time. It still is, today. It does not seem such a ridiculous "prior", unless you decide that you and the minority of others who so fiercely reject it today are a superior elite, appointed by God to hold the truth (ehm, no, here something did not work :) ). 
c) That such non-physical conscious beings could have many forms, not necessarily that of a monotheistic creator. Indeed, the act of design of biological information is not in any way necessarily a "creation". It is rather a modeling, more similar, in form, to what we humans do every day. You may have noticed that I strictly avoid, here, religious arguments, of any kind. d) That I have repeatedly admitted that, while it is perfectly true that a design inference does not require any knowledge about the designer, except for the hypothesis that he is a conscious intelligent being and can manipulate matter, it is equally true that, once a design inference is made, and even momentarily accepted, we have a duty to ask all possible questions about the designer and the design process, and verify if answers, even partial, can be derived from the existing data. e) That I have many times admitted that some of those answers can certainly be given. For example, we can certainly try to answer, as our accumulation of biological knowledge increases, the following questions: e1) When and where does design appear in natural history? dFSCI offers a simple method to look for those answers: the emergence of new dFSCI will be a clue to a design process. OOL, the transition from prokaryotes to eukaryotes, and the Cambrian explosion are very good candidates for a localization, both in time and space, of the design process. e2) How does the designer model the information into matter? I have suggested different possible mechanisms, all of them with different, recognizable consequences in the genome and proteome history. Guided mutation and intelligent selection are the most obvious alternatives. Each of them can come in different forms, and both can work together. e3) Finally, it is legitimate, although not easy, to ask questions about the purposes of the designer. That can include both wide-range purposes and local purposes. 
I have argued many times that, from what we observe, a desire to express ever new different functions explains the variety we observe in biology much better than the darwinian concept of "reproductive fitness". So, that's all, or most of it, in a nutshell. Thank you for the kind attention :)

gpuccio
June 25, 2013, 3:11 PM PDT
Jerad: OK, your point of view is clear enough. I maintain mine, which I hope is clear too. Just one thing: to reject the null, you do not necessarily need one alternative hypothesis. In general, the alternative hypothesis is simply "not H0", that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don't. So for me, once the null is rejected, the duty remains to choose the best "non-random" explanation, as I have tried to show in my example. Unfair coins, a man in the room, some trick from the child at the input, or some strange physical force are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no longer useful at this level. But these are trivial points. I believe that the following: "You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy." still summarizes well our differences.

gpuccio
June 25, 2013, 2:18 PM PDT
Perhaps I should be sure my views are clear. If the null hypothesis is: the coin flipping process is fair, i.e. truly fair, and the alternate hypothesis is: the coin flipping process is not fair, then I'd most likely reject the null hypothesis if we got a string of 500 heads, depending on the confidence interval you specified. It all depends on what your alternate hypothesis is. I sound like Bill Clinton now. Sigh.

If your alternate hypothesis is: the system is biased, then I'd most likely reject the null hypothesis, again depending on the confidence interval. If your alternate hypothesis is: there's a guy in Moscow who is psychically affecting the coin tosses, then . . . I think you'd better use a Bayesian approach where other factors are introduced. What is the plausibility that psychic powers can do such a thing? Could the man in Moscow be getting the signal that the coin was being flipped in time to affect its outcome?

If you're going to make statistical arguments then be precise and follow the procedures. Give me a clear and testable alternate hypothesis. And, ideally, a confidence interval you'd like to use. But, remember, there is no such thing as a 100% confidence interval. And remember what a confidence interval tells you: that your rejection of the null hypothesis is blah% sure to not be down to a chance result. And that is based on the distribution of the variable being tested. You see confidence intervals all the time in poll results. Mostly you don't see the confidence percentage reported, which is just sloppy journalism. Fairly obviously, the higher the confidence, the bigger the sample size has to be. So I'd really like to get that nailed down as well.

Jerad
June 25, 2013, 9:51 AM PDT
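Jerad's description of the classical test can be sketched exactly, since the null here is so simple. Under a fair coin, the one-sided p-value for 500 heads in 500 flips is just (1/2)^500 (this is my sketch, not Jerad's):

```python
from fractions import Fraction

n_flips = 500

# Under H0 (fair coin), P(all 500 flips are heads) = (1/2)^500.
# Since 500 heads is the most extreme possible outcome, this single
# term is also the one-sided p-value for "biased towards heads".
p_value = Fraction(1, 2) ** n_flips

# Any conventional significance level rejects H0 -- but, as Jerad notes,
# the rejection only supports "not fair", not any particular alternative.
print(float(p_value) < 0.05)  # True
```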
a) We seem to agree that a series of 500 heads is not something we “expect” from a random system, even if it has the same individual probability of any other sequence of that length.
Agreed.
b) We agree that the reason for that is that we are not comparing the probabilities of each single sequence, but rather the probabilities of two very different subsets of the search space. Do we agree on that?
Um . . . not really. Since every possible sequence of Hs and Ts is equally likely, it is only our pattern-seeking mental processes that trump our statistical reasoning powers most of the time. I'm just like you: 500 Hs would be a real WTF moment for me. And I'd probably spend days or months or even years trying to be sure there was no bias before I accepted an explanation of chance. But really, 500 Hs is just as likely as any other particular sequence. And, clearly, a vast majority of the time we'll get a jumbled sequence of Hs and Ts and won't find those outcomes surprising in the least.
And, if I couldn’t find one, if I was very sure the whole procedure was ‘fair’ then I’d say the result was a fluke. You decide whether I’m being ‘empirical’. Maybe you are easily satisfied. I would look further for an explanation.
Oh no, I'd have to be very, very, very, VERY sure there was no bias before I accepted a chance explanation.
Who is the more empirical here? Well, I am happy there is still something we don’t agree about. You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy. Maybe I am becoming a skeptic, after all.
Maybe. :-)
That’s what I mean by “empirically impossible”: something that is not logically impossible, but rather so extremely improbable that I will never accept it as a random outcome, and will always look for a different explanation.
I'd just stick with extremely improbable which is less confusing.
c) We definitely don’t agree about Hamlet. Ah! I feel better, after all.
A rose by any other name?
For the first time: you must be mad, at best. So, you would be “extremely suspicious” of the random emergence of Hamlet’s text, but in the end you can accept it? Good luck, my friend…
After more scrutiny than even I can imagine.
I would not be “extremely suspicious”: I would be absolutely sure that the outcome is not the product of a random system. And I would never, never even entertain the idea of a fluke. Well, anyone can choose his own position on that. All are free to comment on that.
Fair enough. There are things in this world that cannot be explained by your philosophy.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance. That is a right request. So, I will propose a scenario, maybe a little complicated, but just to have the right components at their place.
Good.
Let’s say that there is a big closed room, and we know nothing of what is in it. On one wall there is an “input” coin slot. On another wall there is an “output” coin slot, where a coin can come out and rest quietly on a frame. . . . . The null hypothesis is very simple: each coin is taken randomly from the bag, randomly inserted into the input coin slot, and it comes out from the output coin slot in the same position it had when it was inserted into the input slot. We can also suppose that something happens within the dark room, but if so, that “something” is again a random procedure, where each side of the coin still has 0.5 probability to be the upward side in the end. For example, each coin could be randomly tossed in the dark room, and then outputted to the output slot as it is. IOWs, the null hypothesis, as usual, is that what we observe as an outcome is the result of random variation.
That's not quite the normal way of stating it. I'd just say the null hypothesis is that the coin and procedure are fair, i.e. random. But that's just quibbling.
Now, to be simple, we are sure that all the coins are fair, and that there is no other “interference” out of the dark room. So, our whole interest is focused on the main question: What happens in the dark room?
So, what is your alternate hypothesis? The thing you're testing?
You ask for a level of significance. There is really no reason that I give you one; you can choose for yourself. With a search space of 500 bits, and a subset of outcomes with “only heads or tails” whose numerosity is 2, we are in the order of magnitude of 1E-150 for the probability of the outcome we observe. What level do you like? 1E-10? 1E-20? You choose.
Uh, that's not how it's done. The level of significance is used to set up a confidence interval say 90% or 95%. Sometimes this is referred to as picking the p-value. Well . . . they're related. The point being if you're going to reject the null hypothesis in favour of the alternate hypothesis you want to be 90 or 95% sure that the outcome you observed was not down to chance. You cannot have a 100% confidence interval which is why I'd never be 100% sure the outcome wasn't due to chance.
Do you reject the null (H0)?
At what level of significance? I'll save you the effort. By common statistical analysis you probably would. But in favour of an alternate hypothesis which would NOT be "there was design", but rather one along the lines of "the coin and/or process is not fair".
Our explanations (H1s) can be many. ID is one of them. The ID explanation is that there is one person in the dark room, that he takes the coin that has been inputted, checks its condition, and simply outputs it through the output slot with heads upward. Very simple indeed.
But you didn't give an alternate hypothesis so I don't know what you're testing. And if you're trying to test something complicated then a Bayesian approach would be more pertinent. How big is the dark room? Is there a system of air circulation? Etc.
But other explanations are possible. In the room, there could be a mechanism that can read the position of the coin and invert it only when tail is upward. That would probably still be an ID explanation, because whence did that mechanism come? But yes, we must be thorough, and investigate the possibility that such a mechanism spontaneously arose in the dark room.
Like I said, I'd be extremely diligent in checking for all possible testable plausible causes of bias.
An interesting aspect of this explanation is that it teaches us something about the nature of ordered strings and the concept of Kolmogorov complexity. Indeed, if the mechanism is responsible for the order of the final string, then the complexity of the mechanism should be taken in place of the complexity of the string, if it is lower.
I'm not an expert on such matters.
The important point is that, once you have the mechanism working, you can increase the brute complexity of the output as much as you like: you can have an output of 500 heads, or of 5000, or of 5 billion heads. While the apparent complexity of the outcome increases exponentially, its true Kolmogorov complexity remains the same: the complexity of the mechanism.
Again, I'm no expert.
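gpuccio's point can be illustrated with off-the-shelf compression as a crude stand-in for Kolmogorov complexity (which is itself uncomputable); the choice of zlib and the sequence lengths below are mine, purely for illustration:

```python
import random
import zlib

random.seed(1)  # reproducible "mixed" sequences

def compressed_size(s):
    """Length in bytes of the zlib-compressed string: a rough upper-bound
    proxy for the Kolmogorov complexity of s."""
    return len(zlib.compress(s.encode()))

for n in (500, 5000, 50000):
    ordered = "H" * n                                      # all heads
    mixed = "".join(random.choice("HT") for _ in range(n)) # random H/T
    print(n, compressed_size(ordered), compressed_size(mixed))
```

The compressed size of the all-heads string stays tiny as n grows (the "mechanism" is a short run-length description), while the mixed string's compressed size grows roughly in proportion to its length.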
Finally, let’s say that from the output slot you get a binary sequence that corresponds to the full text of Hamlet. Again, do you reject the null? I suppose you do.
Again, depending on what your alternate hypothesis is. If it's just "the system isn't random", then certainly I would. Easily and gladly. But you haven't told me what your alternate hypothesis is, so I don't know what I'm rejecting the null hypothesis for.
Here, the situation is definitely different. Not only because Hamlet in binary form is certainly much longer than 500 bits, but because the type of specific information here is completely different. A drama in English is not "an ordered sequence", like the all-heads sequence. It can never be outputted by any algorithm, however complex, unless the algorithm already knows the text.
As extremely unlikely as it is, it could be the result of a random generating process.
Hamlet is certainly the output of a conscious being. I will have no hesitation in inferring design (well, not necessarily Shakespeare himself in the dark room, but certainly Shakespeare at the beginning of the transcriptions of all kinds that have brought the text to our dark room).
I believe there was a man called William Shakespeare who wrote the play Hamlet. That's much more plausible than that it was arrived at by some chance event. But Shakespeare was a man who was known by other men for whom we have documentary evidence, and whose abilities are not beyond what we've seen other men at that time do.
Do you agree on that? Probably not. You will probably insist with the “fluke” theory.
I hope my responses clarify my views.

Jerad
June 25, 2013 at 09:18 AM PDT
Mark: It would be your turn, but I am very tired. Later, I hope. Bye...

gpuccio
June 25, 2013 at 07:06 AM PDT
keiths: You say: "You're battling a strawman. If I flipped a coin and got the exact text of Hamlet, then I would be almost certain that the outcome was NOT due to chance."

Well, that's certainly progress. I still beg to differ about the "almost". For me, there is no almost at all. But we cannot all be the same...

"However, that just means that the non-chance explanation is far, far likelier to be correct. It doesn't mean that the chance explanation is impossible."

Not logically impossible, as I have always said. I agree with you completely. The chance explanation is not "logically impossible", but certainly "empirically impossible": it will never, never be accepted as a credible empirical explanation by anyone in his right mind.

You say: "I don't know if this is an Italian/English issue or a conceptual issue, but your statement doesn't make sense. To call a sequence 'random' just means that it was produced by a random process. It doesn't tell you about its content."

Thank you for being understanding. You are right, that is badly worded. I was writing at an early hour in the morning, and I am human (had you ever inferred that? :) ). I should have said: "A sequence with no apparent special order (let's call it 'apparently random') is extremely more 'probable' than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget." I apologize for the imprecision; your criticism is correct.

"The all-heads sequence is just as random as a mixed sequence if both are produced by random processes."

That's perfectly correct.

"Likewise, a random-looking sequence isn't random if it is produced by a deterministic process."

Correct, again.

"Your statement is correct only if you meant to say something like 'if we generate a sequence at random, it is more likely to be a mixed sequence of heads and tails than it is to be all heads or all tails.'"

That was the idea.

"But all fixed sequences, whether they look random or not, are equally probable, as you said yourself."

Sure. Individually, the probability is the same. As members of partitions, however, everything changes.

Finally, you say: "Again, you're misunderstanding me if you think that I'm claiming that partitions don't matter. They do matter, but it's the sizes of the sets that matter for purposes of calculating probabilities, not their actual content (with the proviso that the distribution is flat, as it is for coin flip sequences)."

True, for calculating probabilities the actual content has no importance. But as I said, scientific inferences are not simply a matter of probabilities. They are, first of all, a question of methodology. And, for our methodology to be correct, and our inferences valid, the actual content of our partitions in our model is very, very important. Computing probabilities is only an intermediate step of the scientific procedure. Before that, and after that, we have to reason correctly. Define the model, define the question, verify the possible answers. For all that, the content of our concepts is extremely important. Do you agree on that? If you agree on the fundamental importance of the nature and content of our partitions, I am satisfied.

gpuccio
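The distinction being argued here, between the probability of an individual sequence and the probability of a partition, can be checked exactly; the 500-toss figure is from the thread, and the code is an illustrative sketch of my own:

```python
from fractions import Fraction

N = 500
p_one_sequence = Fraction(1, 2 ** N)    # any single fixed sequence, ordered or mixed

# The partition "all heads or all tails" contains exactly 2 of the 2^500
# equally likely sequences; its complement contains every other sequence.
p_ordered_partition = 2 * p_one_sequence
p_everything_else = 1 - p_ordered_partition

# Individual sequences are equiprobable, but the partitions are wildly unequal:
print(float(p_ordered_partition))    # about 6e-151
print(p_everything_else > Fraction(999999, 1000000))   # True
```

Both sides of the thread agree on the arithmetic; the dispute is over what inference the partition sizes license.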
June 25, 2013 at 07:05 AM PDT
Jerad: First, our possible points of "agreement":

a) We seem to agree that a series of 500 heads is not something we "expect" from a random system, even if it has the same individual probability as any other sequence of that length. I quote you: "I've said, MANY MANY times now that if I flipped a coin 500 times I'd be very, very, very suspicious that something funny was going on and I'd do my best to try and find an explanation for that."

b) We agree that the reason for that is that we are not comparing the probabilities of each single sequence, but rather the probabilities of two very different subsets of the search space. Do we agree on that?

But you add: "And, if I couldn't find one, if I was very sure the whole procedure was 'fair' then I'd say the result was a fluke. You decide whether I'm being 'empirical'."

Maybe you are easily satisfied. I would look further for an explanation. Who is the more empirical here? Well, I am happy there is still something we don't agree about. You, the "fluke 500 heads" guy. I, the "there must be another explanation" guy. Maybe I am becoming a skeptic, after all. That's what I mean by "empirically impossible": something that is not logically impossible, but rather so extremely improbable that I will never accept it as a random outcome, and will always look for a different explanation.

c) We definitely don't agree about Hamlet. Ah! I feel better, after all. You say: "FOR THE NTH TIME! I'd be extremely suspicious of any such result and would exhaustively check to see if there was any detectable bias in the process. But, if none could be found then I'd say it was a fluke result."

For the first time: you must be mad, at best. So, you would be "extremely suspicious" of the random emergence of Hamlet's text, but in the end you can accept it? Good luck, my friend... I would not be "extremely suspicious": I would be absolutely sure that the outcome is not the product of a random system. And I would never, never even entertain the idea of a fluke. Well, anyone can choose his own position on that. All are free to comment on that.

You say: "Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance." That is a right request. So, I will propose a scenario, maybe a little complicated, but just to have the right components in their place.

Let's say that there is a big closed room, and we know nothing of what is in it. On one wall there is an "input" coin slot. On another wall there is an "output" coin slot, where a coin can come out and rest quietly on a frame. At the input wall, there is a bag with 500 coins. They are thoroughly mixed by an automatic system. A child, blinded, takes one coin at a time from the bag and inputs it, always blindly, into the input coin slot. We have reasonable certainty that the child is completely blinded, and that he cannot have any information about the orientation of the coin, not even by touching it (let's say he wears thick gloves). We have very good control of all those parts.

Let's say that we are at the output coin slot. Our duty is to register correctly, for each coin that comes out and rests on the frame, whether its visible side is head or tail. We write that down. Let's say that, at the end of the procedure, we have a sequence of all heads. This is the testing procedure.

The null hypothesis is very simple: each coin is taken randomly from the bag, randomly inserted into the input coin slot, and it comes out from the output coin slot in the same position it had when it was inserted into the input slot. We can also suppose that something happens within the dark room, but if so, that "something" is again a random procedure, where each side of the coin still has 0.5 probability of being the upward side in the end. For example, each coin could be randomly tossed in the dark room, and then outputted to the output slot as it is. IOWs, the null hypothesis, as usual, is that what we observe as an outcome is the result of random variation.

Now, to be simple, we are sure that all the coins are fair, and that there is no other "interference" outside the dark room. So, our whole interest is focused on the main question: what happens in the dark room?

You ask for a level of significance. There is really no reason that I give you one; you can choose for yourself. With a search space of 500 bits, and a subset of outcomes with "only heads or tails" whose numerosity is 2, we are in the order of magnitude of 1E-150 for the probability of the outcome we observe. What level do you like? 1E-10? 1E-20? You choose.

So, my first question is: do you reject the null (H0)? Definitely, I would.

Our explanations (H1s) can be many. ID is one of them. The ID explanation is that there is one person in the dark room, who takes the coin that has been inputted, checks its condition, and simply outputs it through the output slot with head upwards. Very simple indeed.

But other explanations are possible. In the room, there could be a mechanism that can read the position of the coin and invert it only when tail is upward. That would probably still be an ID explanation, because whence did that mechanism come? But yes, we must be thorough, and investigate the possibility that such a mechanism spontaneously arose in the dark room.

An interesting aspect of this explanation is that it teaches us something about the nature of ordered strings and the concept of Kolmogorov complexity. Indeed, if the mechanism is responsible for the order of the final string, then the complexity of the mechanism should be taken in place of the complexity of the string, if it is lower. The important point is that, once you have the mechanism working, you can increase the brute complexity of the output as much as you like: you can have an output of 500 heads, or of 5000, or of 5 billion heads. While the apparent complexity of the outcome increases exponentially, its true Kolmogorov complexity remains the same: the complexity of the mechanism.

Finally, let's say that from the output slot you get a binary sequence that corresponds to the full text of Hamlet. Again, do you reject the null? I suppose you do. Again, what are our H1s? Here, the situation is definitely different. Not only because Hamlet in binary form is certainly much longer than 500 bits, but because the type of specific information here is completely different. A drama in English is not "an ordered sequence", like the all-heads sequence. It can never be outputted by any algorithm, however complex, unless the algorithm already knows the text. Hamlet is certainly the output of a conscious being. I will have no hesitation in inferring design (well, not necessarily Shakespeare himself in the dark room, but certainly Shakespeare at the beginning of the transcriptions of all kinds that have brought the text to our dark room).

Do you agree on that? Probably not. You will probably insist with the "fluke" theory. In a sense, that is reassuring...

gpuccio
June 25, 2013 at 06:46 AM PDT
1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis.
What is this alleged evolutionary theory hypothesis?

Joe
June 25, 2013 at 06:42 AM PDT
To Jerad, keiths, Mark Frank: Wow, guys! It seems that I did stir some reaction... Well, just an initial collective comment, and then I will go on answering you individually. If I have misunderstood something that you have said, I apologize. I have no interest in demonstrating that you are wrong. I am only interested in the final results of a discussion. So, if you agree with me, I can only be happy :) But as things are never as good as they seem, let's see if we really agree, and on what. I think it is better to go on one by one. Jerad first!

gpuccio
June 25, 2013 at 05:55 AM PDT
***** accidentally hit enter before finishing 69 (hence typos) ***** .... But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don't think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined, but it seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated - but if it had been, then I suspect it would be absurdly implausible.

Mark Frank
June 25, 2013 at 01:40 AM PDT
Gpuccio

I am afraid my response will come out bit by bit this morning - too many other things going on. I realise that we are largely talking at cross-purposes, as Lizzie has pointed out. I agree that in any real situation, if a coin was tossed 500 times and they were all heads, then there is overwhelming evidence that it was not a fair coin toss (the coin might have been fair - it might have been the way it was being tossed that was not fair). My interest is purely in why it is overwhelming evidence, and what exactly it is overwhelming evidence for - because I think that is relevant when applying the same logic to the ID inference.

The most important point about Bayesian thinking is that it is comparative. It requires you to think not only about the hypothesis you are disproving, but also about the alternative you propose instead. IDists don't like this approach because it entails exploring the details of the design hypothesis. But as a competent statistician you will know that it is poor practice to dismiss H0 without articulating the alternative hypothesis. You don't have to adopt a completely Bayesian approach (although that would be ideal). Even Neyman-Pearson requires articulating the alternative hypothesis. You may think it does not apply to the coin tosses - but it does.

Why do we all reject a fair coin when it is 500 heads, but not when it is a meaningless string of heads and tails? The explanation cannot be:

* Because the 500 heads are so vastly improbable. The meaningless string is equally improbable.
* Because the 500 heads are at the extremity of the statistic - number of heads. We would also reject a string of 250 heads followed by 250 tails, which falls bang in the middle of that statistic.
* Because the string is compressible. We would also reject strings that are incompressible but happened to spell out the opening lines of Hamlet in ASCII.

The answer is simple and probably acceptable to you. The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet. Therefore, it is plausible that someone might want to make the string come out that way for their purposes. It may not be very likely that such a person exists and that they could fiddle the results - but it only has to be marginally likely to overwhelm the hypothesis that it was a fair coin. But it does req

Mark Frank
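Mark Frank's "marginally likely" point can be made quantitative with a back-of-the-envelope Bayes calculation; the 1-in-10^12 prior odds for skulduggery below are an invented, deliberately pessimistic figure, used only to show how completely the likelihood ratio swamps it:

```python
from math import log10

# Hypothetical prior odds that someone rigged the tosses: 1 in 10^12
# (an invented figure, not from the thread).
log10_prior_odds = -12.0

# Likelihood ratio: P(500 heads | rigged) is taken as ~1,
# versus P(500 heads | fair coin) = 2^-500.
log10_likelihood_ratio = 500 * log10(2)    # about 150.5

log10_posterior_odds = log10_prior_odds + log10_likelihood_ratio
print(round(log10_posterior_odds, 1))      # 138.5: odds of ~10^138 for "rigged"
```

Even an absurdly skeptical prior about cheaters leaves the cheating hypothesis favoured by well over a hundred orders of magnitude, which is the comparative point being made.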
June 25, 2013 at 01:33 AM PDT
Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking – although it also goes badly wrong under a wide range of conditions as well.
Agreed!! If you're talking about some ideal, hypothetical 'fair' coin being tossed 500 times, then classical hypothesis testing is fine. It seems to me that the real conflict here is why some of us won't accept some kind of design behind a highly improbable result. It's just my opinion, but I'd stake my claim to faith on impossible events occurring, not improbable ones.

Jerad
June 24, 2013 at 11:30 PM PDT
Gpuccio

I will address your other points later, but this one is the most important.
I simply want to do an easy and correct calculation which can be the basis for a sound empirical inference, like I do every day in my medical practice. According to your position, the whole empirical knowledge of the last decades should be discarded.
Not the whole of empirical knowledge, but the majority of scientific research findings are false - see Ioannidis, "Why Most Published Research Findings Are False". He doesn't specifically refer to Bayes, but he uses Bayesian thinking, and many of the problems he identifies would have been avoided with a Bayesian approach. I highly recommend Nate Silver, "The Signal and the Noise", for an easy-to-read explanation of the importance of Bayesian thinking (among other things).

Of course, an enormous amount of science is true. The technology on which it is based works. There are several reasons for this. 1) A lot of science is not probabilistic. Newton didn't calculate any significance levels. 2) Scientists use Bayesian thinking without realising it - you do every time you think about sensitivity and specificity. Even Dembski uses it without realising. 3) Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking - although it also goes badly wrong under a wide range of conditions as well.

Mark Frank
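The sensitivity/specificity remark is easy to make concrete; the prevalence and test characteristics below are invented for illustration, not taken from the thread:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(condition | positive test) by Bayes' theorem."""
    true_pos = prevalence * sensitivity            # P(positive and has condition)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# A rare condition (0.1% prevalence) with a seemingly good test
# (99% sensitivity, 95% specificity) still yields mostly false positives:
# the posterior probability of disease given a positive result is only ~2%.
print(posterior_given_positive(0.001, 0.99, 0.95))
```

This is exactly the "Bayesian thinking without realising it" case: the prior (prevalence) dominates unless the test is extraordinarily specific.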
June 24, 2013 at 11:23 PM PDT
gpuccio,
And possibly explain why a partition “Hamlet” versus “non Hamlet” is of no importance in making inferences?
Again, you're misunderstanding me if you think that I'm claiming that partitions don't matter. They do matter, but it's the sizes of the sets that matter for purposes of calculating probabilities, not their actual content (with the proviso that the distribution is flat, as it is for coin flip sequences). Have you read my new post at TSZ that I mentioned above? It's only about 650 words long, and it answers many of the questions you are asking.

keiths
June 24, 2013 at 11:19 PM PDT
Hi gpuccio, You're battling a strawman. If I flipped a coin and got the exact text of Hamlet, then I would be almost certain that the outcome was NOT due to chance. However, that just means that the non-chance explanation is far, far likelier to be correct. It doesn't mean that the chance explanation is impossible. Regarding your comment to Lizzie:
A random sequence is extremely more “probable” than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget.
I don't know if this is an Italian/English issue or a conceptual issue, but your statement doesn't make sense. To call a sequence "random" just means that it was produced by a random process. It doesn't tell you about its content. The all-heads sequence is just as random as a mixed sequence if both are produced by random processes. Likewise, a random-looking sequence isn't random if it is produced by a deterministic process. Your statement is correct only if you meant to say something like "if we generate a sequence at random, it is more likely to be a mixed sequence of heads and tails than it is to be all heads or all tails." But all fixed sequences, whether they look random or not, are equally probable, as you said yourself:
a) Any individual sequence of 500 coin tosses has obviously the same probability to be the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that.
keiths
June 24, 2013 at 11:10 PM PDT
It seems that you really don't understand statistics and probability. If an outcome happens 2, 3 or 100 times in a row, that simply gives another calculation of the probabilities of the whole series, considered as a single outcome.
Which is why I've already asked: what is a data point in this scenario, one coin toss or 500? I have yet to have anyone actually write down a null hypothesis, an alternate hypothesis, a testing protocol and a level of significance required. Lay out what you want to test and then ask. Just to save you the trouble . . . no, I won't. You tell me what it is you're testing by laying it all out clearly and properly.
IOWs, the probability of having 3 heads in ten tosses is rather high. The probability of having 500 heads in a row is laughable, and it is in reality the global probability of having the same result in 500 events, where the probability of that outcome for each event is 0.5.
Do you really think I don't understand all that? I also said I would NEVER bet on the possibility of getting 10 heads in a row. NEVER. Is anyone actually reading what I've said?
The phrase for you was rather: “will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?” Do you think you are too insulted, or maybe you can try to answer?
As I've said MANY TIMES if all manner and possibility of bias was ruled out then I would say 500 heads in a row was a fluke result.
So, why don’t you answer?
It seems to me I've been answering the same question over and over again. It seems like people aren't really reading what I've written. Or they are only reading the responses directed at themselves.
No, it is not impossible. It IS very highly improbable.
That’s why I said “empirically” impossible, and not “logically” impossible. I am happy you agree with me on that, although maybe you did’n realize it.
I just wish you'd use standard statistical terms rather than making stuff up. I've said, MANY MANY times now, that if I flipped a coin 500 times and got all heads I'd be very, very, very suspicious that something funny was going on, and I'd do my best to try and find an explanation for that. And, if I couldn't find one, if I was very sure the whole procedure was 'fair', then I'd say the result was a fluke. You decide whether I'm being 'empirical'.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
Yes, I have. You can find them in my #41. And by the way, a "selection process" is not a "random process", as usually even darwinists can understand.
We're talking mathematics here, some terms may not mean what you think they mean. I find your vocabulary idiosyncratic and confusing at times. I had read post 41 and responded.
The probabilities that should be compared are not the probability of having 500 heads and of having a single specific random sequence. We must compare the probability of having an outcome from a subset of two sequences (500 heads or 500 tails), or if you prefer from any well specified and recognizable ordered subset, rather than from the vast subset of random non-ordered sequences, which comprises almost all the sequences in the search space.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance.
Ah, you read that too. While I can maybe accept that it is “clever”, I cannot in any reasonable way conceive why it should be a “restatement of your views”. And “incorrect”, just to add! Will you clarify that point, or is it destined to remain a mystery forever, like many other statements of yours?
I've been saying the same thing over and over and over again. Your saying that we have to think about the problem differently is fine but you haven't laid out exactly what you want your testing criteria to be. After you've done that then I can respond to that DIFFERENT issue.
I really don't know how I could be more specific than this. I have been specific almost to the point of discourtesy. What can I do more?
Lay out your procedure, and give a null and an alternate hypothesis. And PLEASE try and use commonly accepted statistical terms.
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?
FOR THE NTH TIME! I'd be extremely suspicious of any such result and would exhaustively check to see if there was any detectable bias in the process. But, if none could be found, then I'd say it was a fluke result. Why do you keep asking me the same basic question over and over and over again?

Jerad
June 24, 2013 at 10:55 PM PDT
Elizabeth: "But that doesn't mean that 500 Heads isn't just as possible as any other sequence, under the Law of Large Numbers or anything else!"

It is "possible" (not "as possible", because how do you measure "possibility"? Possibility is a binary category), but certainly not as "probable" as anything else. A random sequence is extremely more "probable" than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget.

gpuccio
June 24, 2013 at 09:03 PM PDT
keiths: "The math doesn't distinguish between partitions that are important versus partitions that are arbitrary or trivial -- and it shouldn't. Whether some outcome is important has no necessary bearing on whether it is probable."

The math doesn't distinguish because the math is not us. We are the thinkers of the math. And we do distinguish. Statistics is nothing if correct thinking and methodology do not use it correctly and usefully. Scientific "explanations" are not mere statistical effects: they are judgements. A judgement happens in the consciousness of an intelligent being, not in numbers. All empirical science has been built on the principles that you and your friends seem to doubt. Our whole understanding of the objective world is based mostly on inferences based on partitions that are "important".

By the way, why don't you try to answer the question that I have repeatedly offered here? I type it again for your convenience: "Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" And possibly explain why a partition "Hamlet" versus "non-Hamlet" is of no importance in making inferences?

gpuccio
June 24, 2013 at 08:57 PM PDT
gpuccio @47, thanks for the kind words. :)

Chance Ratcliff
June 24, 2013 at 05:47 PM PDT
KF,
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else.
The math doesn't distinguish between partitions that are important versus partitions that are arbitrary or trivial -- and it shouldn't. Whether some outcome is important has no necessary bearing on whether it is probable. For example, the odds of rolling a particular class of 9-digit number remain the same whether a) somebody's life depends on it, or b) I'm just using the number to decide where to have lunch. If you haven't read the rest of my post, keep reading.

keiths
June 24, 2013 at 05:01 PM PDT
Sorry, that last sentence should read: "But that doesn't mean that 500 Heads isn't just as possible as any other sequence, under the Law of Large Numbers or anything else!" oops :o

Elizabeth B Liddle
June 24, 2013 at 04:58 PM PDT
Point taken, KF. But they strike me as having a family resemblance at the descriptive level, possibly at the formal level, I don't know. But I do think that the key point here is not about the physics of coin toss sequences, but about what the alternative hypotheses are. When something is vanishingly unlikely, almost any other hypothesis, however unlikely (Sal? Cheating?), becomes a near certainty. My Bayesian output gives the right answer, and that's because it enables us to weigh alternative explanations.

That's why I keep saying that almost everyone is basically correct here, even those who disagree. The people saying that all sequences are equally probable are correct (and Barry agrees). Almost everyone also agrees that a "special" sequence would raise serious eyebrows. The only real disagreement seems to me to be over why we raise our eyebrows. Common sense says: because skulduggery is much more likely. Bayes says: because skulduggery is much more likely. IBE says: because skulduggery is much more likely. But that doesn't mean that 500 Heads is just as possible as any other sequence, under the Law of Large Numbers or anything else!

Elizabeth B Liddle
June 24, 2013 at 04:55 PM PDT