Uncommon Descent Serving The Intelligent Design Community

# Jerad and Neil Rickert Double Down


In the combox to my last post, Jerad and Neil join forces to give us a truly pristine example of Darwinist Derangement Syndrome in action.  Like a person suffering from Tourette’s, they just don’t seem to be able to help themselves.

Here are the money quotes:

Barry:  “The probability of [500 heads in a row] actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.”

Sal to Neil:  “But to be clear, do you think 500 fair coins heads violates the chance hypothesis?”

Neil:  “If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.”

Jerad chimes in:  “There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.” And “But if 500 Hs did happen it’s not an indication of design.”

I do not believe Jerad and Neil are invincibly stupid.  They must know that what they are saying is blithering nonsense.  They are, of course, being piggishly obstinate, and I will not argue with them.  But who needs to argue?  When one’s opponents say such outlandish things one wins by default.

And I can’t resist adding this one last example of DDS:

Barry to Jerad:  “Is there ANY number of heads in a row that would satisfy you?  Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?”

Jerad:  “A moral certainty? What does that mean?”

It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls them on it, the response is something like “me no speaka the English.”

But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing).
Well, I guess it's a good thing no one is making that claim!! As far as evolution is concerned, only the genetic mutations and some environmental conditions and culls are random. Jerad
gpuccio:
It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don’t like that, but really I don’t believe that the ID folks can be considered responsible for that.
First: nobody has refuted ID. Second: I think that ID folks are at the very least partially responsible. Consider, for example, the notorious Wedge document. Third: Creationist "science" journals, at least, have a long history of requiring that any contributors sign up to a statement of faith, and at least some prominent ID proponents belong to academic institutions that require such a commitment. The same is not true of what you call "the academy". Even Dembski was made to retract a statement he made about the Flood by his employer. Behe, on the other hand, remains employed at an "academy" institution. Having said all that: it is time the war ended. That is why I started my own site - so that we could try to get past the tribalism and down to what really divides (and often, to our surprise, unites) us. I'm not always successful in suppressing the skirmishes, but I think we do pretty well. I'd be honoured if you would occasionally drop by. Elizabeth B Liddle
Mark:
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits.
this. And I'd add that it's what ID proponents do all the time - particularly when they express astonishment that materialists should believe something so unlikely! There are always far more unrejected models from a Fisherian test than rejected models. How we decide between them depends on how much weight we give those unrejected alternatives. The great thing about using a Bayesian approach is that it forces you to make your priors explicit. The result is of course less conclusive, but so it should be. Bayes forces us to confront what we still do not know. It stops us making "of the gaps" arguments, whether for materialist or non-materialist explanations, and, above all, tells us the probability that we are interested in - that our hypothesis is correct, whereas a Fisher p value simply tells us the probability of our data, given the null. Not very informative, unless we have an extremely restricted relevant null! (such as "fair coin") Elizabeth B Liddle
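An aside on the point about explicit priors: it can be made concrete with a small sketch. The hypotheses and the one-in-a-million prior for a "rigged" coin below are illustrative assumptions, not anyone's actual estimate; the point is only that Bayes makes the prior visible and then lets the data overwhelm it.

```python
from fractions import Fraction

# Bayesian update for two simple hypotheses about a coin:
#   "fair"   -> P(n heads in a row) = (1/2)**n
#   "rigged" -> always lands heads,  P(n heads in a row) = 1
# The prior for "rigged" (1 in a million) is an illustrative assumption.

def posterior_rigged(n_heads, prior_rigged=Fraction(1, 10**6)):
    like_fair = Fraction(1, 2) ** n_heads
    like_rigged = Fraction(1)
    prior_fair = 1 - prior_rigged
    num = like_rigged * prior_rigged
    return num / (num + like_fair * prior_fair)

print(float(posterior_rigged(10)))  # rigging still very unlikely
print(float(posterior_rigged(50)))  # rigging already a near-certainty
```

Even with a heavily sceptical prior, a few dozen consecutive heads drive the posterior for "rigged" arbitrarily close to 1, which is the sense in which the result is "less conclusive, but so it should be".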
From a purely statistical point of view, 1 instance of 500 heads is practically required. The same as 1 instance of 500 (consecutive) tails. But the argument against evolution by random chance alone, without the insertion of improved designs, is that we have MANY examples of wildly improbable events in natural systems, and this becomes the same as arguing that a tornado MIGHT assemble a 747 whilst passing through a junk yard (or the spare parts warehouse at Boeing). During the Vietnam War, an American infantryman who was aiming at a Viet Cong guerrilla had the odd experience of "catching" a bullet from the guerrilla straight down the barrel of his M16. Since the 7.62mm round is larger than the 5.56mm barrel, it plugged the end. I've seen the photograph. Considering the small size of both the bullet and the gun barrel and the very precise angular alignments required, the probability of this happening is infinitesimally small. But billions of bullets were fired over a period of many years. So, odd things happen every day by chance. But it's been a long time since we stopped believing that weather occurs randomly, or death from infection or the alignment of the Sun and Moon to produce an eclipse. mahuna
gpuccio,
I want to thank you for your contribution to this discussion, which has been constructive and stimulating.
I think we showed what can be accomplished: a greater understanding. And I'm pleased to have talked with you and, hopefully, helped some others understand both our positions. Jerad
Gpuccio, I am delighted to agree to disagree on so many points. And you are remarkable in being one of the few IDists who is prepared to examine what the ID hypothesis entails. But I am disappointed in this:
I believe that, when I use Fisherian reasonings here, I know what I do. I will accept any valid objection to my specific reasonings, while I am not interested in a generic refusal of Fisherian reasoning in itself.
It seems that as long as you know how to use the Fisher process, and it seems to you to be working in practice, you are not interested in why it is successful. This means you are always at risk of coming to wrong conclusions (and in a stochastic world you may not know they are wrong). As I said in #67, most published research findings are wrong and the use of Fisher processes is behind a lot of it. Luckily most published research findings are also ignored. You write:
There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don’t.
A Bayesian approach is to judge an alternative hypothesis on its merits. It takes into account how likely the hypothesis is to be true without the data and how likely the data is given the hypothesis. What other merits would be relevant? All Bayes formula does is link all the merits in a systematic and mathematically justified way. It is the weakness of other approaches that they do not give sufficient weight to all the merits. Mark Frank
Jerad: I want to thank you for your contribution to this discussion, which has been constructive and stimulating. My final summary just wanted to stress the essential difference between our positions, not deny the many things we have agreed upon. Yes, if you want to put it that way, I am absolutely "biased" against accepting order, and especially function, that is completely improbable as a "fluke". That is, IMO, against any intuition of truth and any common sense. I will not do it. My epistemology is obviously different from yours. Only, I would not call that a "bias", but simply an explicit cognitive choice. If in doubt about the terminology, we can always turn Bayesian and call it a "prior" :) . My alternative hypothesis, for order and function, has always been design: the intervention of consciousness. I have detailed the many positive reasons why that is perfectly reasonable, IMO. However, for simple "order" many other alternative hypotheses are certainly viable, and must be investigated thoroughly. It is my firm conviction that for complex function, instead, any non-design explanation will be found to be utterly lacking. The neo-darwinian theory is a good example of that failure. I am really sure, in my heart and mind, that only consciousness can generate dFSCI. Finally, I would not be so disappointed that we have been, in a way, "left alone" here. It is a general, and perfectly acceptable, fact that as a discussion becomes more precise and technical (and therefore, IMO, much more interesting and valid) the "general public" becomes less interested. No harm in that. That's why I am, always have been, and always will be, a "minority guy". This is a blog. While I personally refrain from discussing here topics that are not directly or indirectly pertinent to the ID theory (especially religious topics), it's perfectly fine with me that others love to do that. But the ID discussion is another thing. 
It is, IMO, a very serious fault of the current academy to have refuted ID as a scientific theory, to have fought it with all possible means, to have transformed what could have been a serious and stimulating scientific and philosophical discussion into a war. I don't like that, but really I don't believe that the ID folks can be considered responsible for that. ID is a very powerful scientific paradigm. It will never be "brushed away". Either the academy accepts to seriously give it the intellectual role it deserves, or the war will go on, and it will ever more become a war against the academy. That is the final result of dogmatism and intellectual intolerance. gpuccio
gpuccio,
Just one thing: to reject the null, you need not necessarily one alternative hypothesis. In general, the alternative hypothesis is simply “not H0”, that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don’t. So for me, once rejected the null, the duty remains to choose the best “non random” explanation. As I have tried to show in my example. Unfair coins, a man in the room, or some trick from the child at the input, or some strange physical force, are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no more useful at this level.
We agree on much. I'm still not sure what your bottom line alternate hypothesis is, when all other explanations have been ruled out. But this has been one of my criticisms for a long time. And it's odd that you choose to pick the best non-random explanation. Sounds like you have a bias!!
“You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy.” still summarizes well our differences.
I find that a bit disappointing after our informative and insightful conversation as it brushes aside the huge amount that we agree on, that we'd both do our utmost to try and root out any detectable bias. And I find it disappointing that you cannot state a clear final conclusion. "There must be another explanation" is pretty wishy-washy but that's your call. What I find very disappointing is that most of the commentators at UD have lost interest in the whole discussion and are now off chasing other perceived slurs against ID or imagined examples of stupid science. There was lots of shouting and finger pointing and then off they go, not willing to stick around for some substantive conversation. You seem actually interested in learning but I'm not so sure about many of your fellows. Jerad
The answer is simple and probably acceptable to you. The 500 heads mean something to lots of people, as does the 250/250 string and the opening lines of Hamlet. Therefore, it is plausible that someone might want to make the string come out that way for their purposes. It may not be very likely that such a person exists and that they could fiddle the results – but it only has to be marginally likely to overwhelm the hypothesis that it was a fair coin. But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined but seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated – but if it had then I suspect it would be absurdly implausible.
Jerad: OK, your point of view is clear enough. I maintain mine, which I hope is clear too. Just one thing: to reject the null, you need not necessarily one alternative hypothesis. In general, the alternative hypothesis is simply "not H0", that is, what we observe is extremely improbable as a random effect. There can be more than one alternative hypothesis, and they must be judged on their merits, not on a probability, unless you use a Bayesian approach, which I don't. So for me, once rejected the null, the duty remains to choose the best "non random" explanation. As I have tried to show in my example. Unfair coins, a man in the room, or some trick from the child at the input, or some strange physical force, are all possible candidates. Each hypothesis will be evaluated according to its explanatory merits, or to its consistency, or falsifiability. Statistics is no more useful at this level. But these are trivial points. I believe that the following: "You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy." still summarizes well our differences. gpuccio
Perhaps I should be sure my views are clear. If the null hypothesis is: the coin flipping process is fair, i.e. truly fair. And the alternate hypothesis is: the coin flipping process is not fair. Then I'd most likely reject the null hypothesis if we got a string of 500 heads, depending on the confidence interval you specified. It all depends on what your alternate hypothesis is. I sound like Bill Clinton now. Sigh. If your alternate hypothesis is: the system is biased, then I'd most likely reject the null hypothesis, again depending on the confidence interval. If your alternate hypothesis is: there's a guy in Moscow who is psychically affecting the coin tosses, then . . . I think you'd better use a Bayesian approach where other factors are introduced. What is the plausibility that psychic powers can do such a thing? Could the man in Moscow be getting the signal the coin was being flipped in time to affect its outcome? If you're going to make statistical arguments then be precise and follow the procedures. Give me a clear and testable alternate hypothesis. And, ideally, a confidence interval you'd like to use. But, remember, there is no such thing as a 100% confidence interval. And remember what a confidence interval tells you: that your rejection of the null hypothesis is blah% sure to not be down to a chance result. And that is based on the distribution of the variable being tested. You see confidence intervals all the time in poll results. Mostly you don't see the confidence percentage reported, which is just sloppy journalism. Fairly obviously, the higher the confidence the bigger the sample size has to be. So I'd really like to get that nailed down as well. Jerad
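An aside: the classical test being described here can be sketched in a few lines. The choice of rejection region (the two all-same outcomes) and the 0.05 significance level are illustrative assumptions for the 500-heads example, not a general recipe.

```python
# Classical (Fisherian) test sketch.
# H0: the coin/process is fair (p = 0.5).
# Illustrative rejection region: the two most extreme outcomes,
# all heads or all tails in n flips, with probability 2 * (1/2)**n under H0.

def p_value_all_same(n):
    return 2 * 0.5 ** n

alpha = 0.05              # a conventional significance level
p = p_value_all_same(500)
print(p)                  # on the order of 1e-150
print(p < alpha)          # True: reject H0 at any usual alpha
```

Note this only licenses rejecting "fair"; as the comment says, it is silent on *which* alternative (biased coin, trickery, a man in Moscow) should replace it.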
a) We seem to agree that a series of 500 heads is not something we “expect” from a random system, even if it has the same individual probability of any other sequence of that length.
Agreed.
b) We agree that the reason for that is that we are not comparing the probabilities of each single sequence, but rather the probabilities of two very different subsets of the search space. Do we agree on that?
Um . . . not really. Since every possible sequence of Hs and Ts is equally likely, it is only our pattern seeking mental processes that trump our statistical reasoning powers most of the time. I'm just like you, 500 Hs would be a real WTF moment for me. And I'd probably spend days or months or even years trying to be sure there was no bias before I accepted an explanation of chance. But really, 500 Hs is just as likely as any other particular sequence. But, clearly, a vast majority of the time we'll get a jumbled sequence of Hs and Ts and won't find those outcomes surprising in the least.
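An aside: both halves of this exchange can be checked numerically. Each specific sequence is equally improbable, yet the near-balanced subset carries almost all of the probability mass. The 230-to-270-heads band below is an arbitrary illustrative choice.

```python
import math

n = 500
one_sequence = 0.5 ** n            # any single specific sequence of 500 flips
all_same = 2 * one_sequence        # the subset {all heads, all tails}
# probability of landing anywhere in the 230..270-heads subset,
# summed from binomial coefficients
near_balanced = sum(math.comb(n, k) for k in range(230, 271)) * one_sequence

print(all_same)       # astronomically small, ~6e-151
print(near_balanced)  # roughly 0.93 of all outcomes
```

So "500 Hs is just as likely as any other particular sequence" and "500 Hs would be a real WTF moment" are both correct: the first compares individual sequences, the second compares subsets.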
And, if I couldn’t find one, if I was very sure the whole procedure was ‘fair’ then I’d say the result was a fluke. You decide whether I’m being ‘empirical’. Maybe you are easily satisfied. I would look further for an explanation.
Oh no, I'd have to be very, very, very, VERY sure there was no bias before I accepted a chance explanation.
Who is the more empirical here? Well, I am happy there is still something we don’t agree about. You, the “fluke 500 heads” guy. I, the “there must be another explanation” guy. Maybe I am becoming a skeptic, after all.
Maybe. :-)
That’s what I mean by “empirically impossible”: something that is not logically impossible, but rather so extremely improbable that I will never accept it as a random outcome, and will always look for a different explanation.
I'd just stick with extremely improbable which is less confusing.
c) We definitely don’t agree about Hamlet. Ah! I feel better, after all.
A rose by any other name?
For the first time: you must be mad, at best. So, you would be “extremely suspicious” of the random emergence of Hamlet’s text, but in the end you can accept it? Good luck, my friend…
After more scrutiny than even I can imagine.
I would not be “extremely suspicious”: I would be absolutely sure that the outcome is not the product of a random system. And I would never, never even entertain the idea of a fluke. Well, anyone can choose his own position on that. All are free to comment on that.
Fair enough. There are things in this world that cannot be explained by your philosophy.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance. That is a fair request. So, I will propose a scenario, maybe a little complicated, but just to have the right components in their place.
Good.
Let’s say that there is a big closed room, and we know nothing of what is in it. On one wall there is an “input” coin slot. On another wall there is an “output” coin slot, where a coin can come out and rest quietly on a frame. . . . . The null hypothesis is very simple: each coin is taken randomly from the bag, randomly inserted into the input coin slot, and it comes out from the output coin slot in the same position it had when it was inserted into the input slot. We can also suppose that something happens within the dark room, but if it so, that “something” is again a random procedure, where each side of the coin still has 0.5 probability to be the upward side in the end. For example, each coin could be randomly tossed in the dark room, and then outputted to the output slot as it is. IOWs, the null hypothesis, as usual, is that what we observe as an outcome is the result of random variation.
That's not quite the normal way of stating it. I'd just say the null hypothesis is that the coin and procedure are fair, i.e. random. But that's just quibbling.
Now, to be simple, we are sure that all the coins are fair, and that there is no other “interference” out of the dark room. So, our whole interest is focused on the main question: What happens in the dark room?
So, what is your alternate hypothesis? The thing you're testing?
You ask for a level of significance. There is really no reason that I give you one, you can choose for yourself. With a search space of 500 bits, and a subset of outcomes with “only heads or tails” whose numerosity is 2, we are in the order of magnitude of 1E-150 for the probability of the outcome we observe. What level do you like? 1E-10? 1E-20? You choose.
Uh, that's not how it's done. The level of significance is used to set up a confidence interval say 90% or 95%. Sometimes this is referred to as picking the p-value. Well . . . they're related. The point being if you're going to reject the null hypothesis in favour of the alternate hypothesis you want to be 90 or 95% sure that the outcome you observed was not down to chance. You cannot have a 100% confidence interval which is why I'd never be 100% sure the outcome wasn't due to chance.
Do you reject the null (H0)?
At what level of significance? I'll save you the effort. By common statistical analysis you probably would. But in favour of an alternate hypothesis which would NOT be that there was design, but rather one along the lines of "the coin and/or process is not fair".
Our explanations (H1s) can be many. ID is one of them. The ID explanation is that there is one person in the dark room, that he takes the coin that has been inputted, checks its condition, and simply outputs it through the output slot with head upwards. Very simple indeed.
But you didn't give an alternate hypothesis so I don't know what you're testing. And if you're trying to test something complicated then a Bayesian approach would be more pertinent. How big is the dark room? Is there a system of air circulation? Etc.
But other explanations are possible. In the room, there could be a mechanism that can read the position of the coin and invert it only when tail is upward. That would probably still be an ID explanation, because whence did that mechanism come? But yes, we must be thorough, and investigate the possibility that such a mechanism spontaneously arose in the dark room.
Like I said, I'd be extremely diligent in checking for all possible testable plausible causes of bias.
An interesting aspect of this explanation is that it teaches us something about the nature of ordered strings and the concept of Kolmogorov complexity. Indeed, if the mechanism is responsible for the order of the final string, then the complexity of the mechanism should be taken in place of the complexity of the string, if it is lower.
I'm not an expert on such matters.
The important point is that, once you have the mechanism working, you can increase the brute complexity of the output as much as you like: you can have an output of 500 heads, or of 5000, or of 5 billion heads. While the apparent complexity of the outcome increases exponentially, its true, Kolmogorov complexity remains the same: the complexity of the mechanism.
Again, I'm no expert.
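An aside for readers who, like Jerad, are not experts on this: true Kolmogorov complexity is uncomputable, but compressed size gives a crude upper-bound proxy for the idea gpuccio describes. In the sketch below, the description length of an all-heads string barely grows as the string gets longer, while a mixed random string's grows roughly linearly.

```python
import random
import zlib

# Compressed length as a rough proxy for description length.
def compressed_len(s: bytes) -> int:
    return len(zlib.compress(s))

random.seed(0)
for n in (500, 5000, 50000):
    ordered = b"H" * n                                    # "all heads"
    mixed = bytes(random.choice(b"HT") for _ in range(n)) # random H/T string
    print(n, compressed_len(ordered), compressed_len(mixed))
```

The ordered string compresses to a few dozen bytes regardless of length ("print 'H' n times"), which is the sense in which its complexity stays at the complexity of the mechanism.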
Finally, let’s say that from the output slot you get a binary sequence that corresponds to the full text of Hamlet. Again, do you reject the null? I suppose you do.
Again, depending on what your alternate hypothesis is. If it's just: the system isn't random then certainly I would. Easily and gladly. But you haven't told me what your alternate hypothesis is so I don't know what I'm rejecting the null hypothesis for.
Here, the situation is definitely different. Not only because Hamlet in binary form is certainly much longer than 500 bits. But because the type of specific information here is completely different. A drama in English is not “an ordered sequence”, like the all heads sequence. It can never be outputted by any algorithm, however complex, unless the algorithm already knows the text.
As extremely unlikely as it is, it could be the result of a random generating process.
Hamlet is certainly the output of a conscious being. I will have no hesitation in inferring design (well, not necessarily Shakespeare himself in the dark room, but certainly Shakespeare at the beginning of the transcriptions of all kinds that have brought the text to our dark room).
I believe there was a man called William Shakespeare who wrote the play Hamlet. That's much more plausible than it was arrived at by some chance event. But Shakespeare was a man who was known by other men for whom we have documentary evidence and whose abilities are not beyond what we've seen other men at that time do.
Do you agree on that? Probably not. You will probably insist with the “fluke” theory.
I hope my responses clarify my views. Jerad
Mark: It would be your turn, but I am very tired. Later, I hope. Bye... gpuccio
1) I don’t think the evolutionary theory hypothesis is comparable to the fair coin hypothesis.
What is this alleged evolutionary theory hypothesis? Joe
To Jerad, keiths, Mark Frank: Wow, guys! It seems that I did stir some reaction... Well, just an initial collective comment, and then I will go on answering you individually. If I have misunderstood something that you have said, I apologize. I have no interest in demonstrating that you are wrong. I am only interested in the final results of a discussion. So, if you agree with me, I can only be happy :) But as things are never as good as they seem, let's see if we really agree, and on what. I think it is better to go on one by one. Jerad first! gpuccio
***** accidentally hit enter before finishing 69 (hence typos) ***** .... But it does require the alternative hypothesis to be considered. The reason we differ so much on ID is twofold: 1) I don't think the evolutionary theory hypothesis is comparable to the fair coin hypothesis. It is less well defined but seems to me that the outcome is plausible. 2) The alternative hypothesis has not been articulated - but if it had then I suspect it would be absurdly implausible. Mark Frank
Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking – although it also goes badly wrong under a wide range of conditions as well.
Agreed!! If you're talking about some ideal, hypothetical 'fair' coin being tossed 500 times then classical hypothesis testing is fine. It seems to me that the real conflict here is why some of us won't accept some kind of design behind a highly improbable result. It's just my opinion but I'd stake my claim to faith on impossible events occurring not improbable ones. Jerad
Gpuccio, I will address your other points later, but this one is the most important.
I simply want to do an easy and correct calculation which can be the basis for a sound empirical inference, like I do every day in my medical practice. According to your position, the whole empirical knowledge of the last decades should be discarded.
Not the whole of empirical knowledge, but the majority of scientific research findings are false - see Ioannidis, Why Most Published Research Findings Are False. He doesn't specifically refer to Bayes but he uses Bayesian thinking and many of the problems he identifies would have been avoided with a Bayesian approach. I highly recommend Nate Silver, The Signal and the Noise, for an easy to read explanation of the importance of Bayesian thinking (among other things). Of course an enormous amount of science is true. The technology on which it is based works. There are several reasons for this. 1) A lot of science is not probabilistic. Newton didn't calculate any significance levels. 2) Scientists use Bayesian thinking without realising - you do every time you think about sensitivity and specificity. Even Dembski uses it without realising. 3) Under a fairly wide range of conditions classical hypothesis testing leads to the same conclusion as Bayesian thinking - although it also goes badly wrong under a wide range of conditions as well. Mark Frank
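An aside: the Ioannidis-style argument cited here reduces to a short calculation of the positive predictive value (PPV) of a "significant" finding. The alpha, power, and prior figures below are illustrative assumptions, not Ioannidis's own numbers.

```python
# PPV: the fraction of statistically significant findings that are true,
# given the prior probability that a tested hypothesis is true.

def ppv(prior_true, alpha=0.05, power=0.8):
    true_pos = prior_true * power          # true hypotheses detected
    false_pos = (1 - prior_true) * alpha   # false hypotheses "detected"
    return true_pos / (true_pos + false_pos)

print(ppv(0.05))  # ~0.46: if only 1 in 20 tested hypotheses is true,
                  # under half of the "discoveries" are real
print(ppv(0.5))   # ~0.94 when the priors are favourable
```

This is the sense in which "most published research findings are false" even when every individual test is run correctly at the 0.05 level: the prior odds of the hypotheses being tested do the damage.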
gpuccio,
And possibly explain why a partition “Hamlet” versus “non Hamlet” is of no importance in making inferences?
Again, you're misunderstanding me if you think that I'm claiming that partitions don't matter. They do matter, but it's the sizes of the sets that matter for purposes of calculating probabilities, not their actual content (with the proviso that the distribution is flat, as it is for coin flip sequences). Have you read my new post at TSZ that I mentioned above? It's only about 650 words long, and it answers many of the questions you are asking. keiths
Hi gpuccio, You're battling a strawman. If I flipped a coin and got the exact text of Hamlet, then I would be almost certain that the outcome was NOT due to chance. However, that just means that the non-chance explanation is far, far likelier to be correct. It doesn't mean that the chance explanation is impossible. Regarding your comment to Lizzie:
A random sequence is extremely more “probable” than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget.
I don't know if this is an Italian/English issue or a conceptual issue, but your statement doesn't make sense. To call a sequence "random" just means that it was produced by a random process. It doesn't tell you about its content. The all-heads sequence is just as random as a mixed sequence if both are produced by random processes. Likewise, a random-looking sequence isn't random if it is produced by a deterministic process. Your statement is correct only if you meant to say something like "if we generate a sequence at random, it is more likely to be a mixed sequence of heads and tails than it is to be all heads or all tails." But all fixed sequences, whether they look random or not, are equally probable, as you said yourself:
a) Any individual sequence of 500 coin tosses has obviously the same probability to be the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that.
keiths
It seems that you really don’t understand statistics and probability. If an outcome happens 2, 3 or 100 times in a row, that simply gives another calculation of the probabilities of the whole series, considered as a single outcome.
Which is why I've already asked: what is a data point in this scenario, one coin toss or 500? I have yet to have anyone actually write down a null hypothesis, an alternate hypothesis, a testing protocol and a level of significance required. Lay out what you want to test and then ask. Just to save you the trouble . . . no, I won't. You tell me what it is you're testing by laying it all out clearly and properly.
IOWs, the probability of having 3 heads in ten tosses is rather high. The probability of having 500 heads in a row is laughable, and it is in reality the global probability of having the same result in 500 events, where the probability of that outcome for each event is 0.5.
Do you really think I don't understand all that? I also said I would NEVER bet on the possibility of getting 10 heads in a row. NEVER. Is anyone actually reading what I've said?
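An aside: the binomial arithmetic both parties take for granted here is quick to verify directly.

```python
import math

# P(exactly k heads in n fair flips) = C(n, k) * (1/2)**n

def prob_heads(k, n):
    return math.comb(n, k) * 0.5 ** n

print(prob_heads(3, 10))     # ~0.117: quite ordinary
print(prob_heads(500, 500))  # ~3e-151: gpuccio's "laughable" case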
The phrase for you was rather: “will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?” Do you think you are too insulted, or maybe you can try to answer?
As I've said MANY TIMES if all manner and possibility of bias was ruled out then I would say 500 heads in a row was a fluke result.
It seems to me I've been answering the same question over and over again. It seems like people aren't really reading what I've written. Or they are only reading the responses directed at themselves.
No, it is not impossible. It IS very highly improbable.
That’s why I said “empirically” impossible, and not “logically” impossible. I am happy you agree with me on that, although maybe you didn’t realize it.
I just wish you'd use standard statistical terms rather than making stuff up. I've said, MANY MANY times now, that if I flipped a coin 500 times and got heads every time I'd be very, very, very suspicious that something funny was going on and I'd do my best to try and find an explanation for that. And, if I couldn't find one, if I was very sure the whole procedure was 'fair', then I'd say the result was a fluke. You decide whether I'm being 'empirical'.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
Yes, I have. You can find them in my #41. And by the way, a “selection process” is not a “random process”, as usually even darwinists can understand.
We're talking mathematics here, some terms may not mean what you think they mean. I find your vocabulary idiosyncratic and confusing at times. I had read post 41 and responded.
The probabilities that should be compared are not the probability of having 500 heads and of having a single specific random sequence. We must compare the probability of having an outcome from a subset of two sequences (500 heads or 500 tails), or if you prefer from any well specified and recognizable ordered subset, rather than from the vast subset of random, non-ordered sequences, which comprise almost all the sequences in the search space.
Give me a null hypothesis and an alternate hypothesis, a testing procedure and a level of significance.
Ah, you read that too. While I can maybe accept that it is “clever”, I cannot in any reasonable way conceive why it should be a “restatement of your views”. And “incorrect”, just to add! Will you clarify that point, or is it destined to remain a mystery forever, like many other statements of yours?
I've been saying the same thing over and over and over again. Your saying that we have to think about the problem differently is fine but you haven't laid out exactly what you want your testing criteria to be. After you've done that then I can respond to that DIFFERENT issue.
I really don’t know how I could be more specific than this. I have been specific almost to the point of discourtesy. What more can I do?
Lay out your procedure, and give a null and an alternate hypothesis. And PLEASE try and use commonly accepted statistical terms.
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?
FOR THE NTH TIME! I'd be extremely suspicious of any such result and would exhaustively check to see if there was any detectable bias in the process. But, if none could be found, then I'd say it was a fluke result. Why do you keep asking me the same basic question over and over and over again? Jerad
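For what it's worth, the formal write-up Jerad keeps requesting (null, alternative, procedure, significance level) might look like the following sketch. The two-sided form and the alpha value are my illustrative choices, not anything specified in the thread.

```python
# Hypothetical exact binomial test for the 500-heads scenario.
# H0: the coin is fair (p = 0.5).  H1: the coin is not fair (p != 0.5).
# Data: one run of n = 500 tosses, all heads.  Significance: alpha = 1e-10.
n = 500
k = 500   # observed heads

# Exact two-sided p-value: probability under H0 of an outcome at least as
# extreme as the one observed (here, 500 heads or 500 tails).
p_value = 2 * (0.5 ** n)

alpha = 1e-10
print(p_value < alpha)   # True: reject H0 at any conventional alpha
```

With a p-value near 10^-150, the conclusion is insensitive to the exact alpha chosen.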
Elizabeth: But that doesn’t mean that 500 Heads isn’t just as possible as any other sequence, under the Law of Large Numbers or anything else! It is "possible" (not "as possible", because how do you measure "possibility"? Possibility is a binary category), but certainly not as "probable" as anything else. A random-looking sequence is vastly more "probable" than a highly ordered sequence. That is the simple point that many people here, in their passion for pseudo-statistics, seem to forget. gpuccio
keiths: The math doesn’t distinguish between partitions that are important versus partitions that are arbitrary or trivial — and it shouldn’t. Whether some outcome is important has no necessary bearing on whether it is probable. The math doesn't distinguish because the math is not us. We are the thinkers of the math. And we do distinguish. Statistics is nothing if correct thinking and methodology do not use it correctly and usefully. Scientific "explanations" are not mere statistical effects: they are judgements. A judgement happens in the consciousness of an intelligent being, not in numbers. All empirical science has been built on the principles that you and your friends seem to doubt. Our whole understanding of the objective world rests mostly on inferences from partitions that are "important". By the way, why don't you try to answer the question that I have repeatedly offered here? I type it again for your convenience: "Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" And possibly explain why a partition "Hamlet" versus "non Hamlet" is of no importance in making inferences? gpuccio
gpuccio @47, thanks for the kind words. :) Chance Ratcliff
KF,
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else.
The math doesn't distinguish between partitions that are important versus partitions that are arbitrary or trivial -- and it shouldn't. Whether some outcome is important has no necessary bearing on whether it is probable. For example, the odds of rolling a particular class of 9-digit number remain the same whether a) somebody's life depends on it, or b) I'm just using the number to decide where to have lunch. If you haven't read the rest of my post, keep reading. keiths
Sorry that last sentence should read: But that doesn’t mean that 500 Heads isn't just as possible as any other sequence, under the Law of Large Numbers or anything else! oops :o Elizabeth B Liddle
Point taken, KF. But they strike me as having a family resemblance at the descriptive level, possibly at the formal level, I don't know. But I do think that the key point here is not about the physics of coin toss sequences, but about what the alternative hypotheses are. When something is vanishingly unlikely, almost any other hypothesis, however unlikely (Sal? Cheating?) becomes a near certainty. My Bayesian approach gives the right answer because it enables us to weigh alternative explanations. That's why I keep saying that almost everyone is basically correct, here, even those who disagree. The people saying that all sequences are equally probable are correct (and Barry agrees). Almost everyone also agrees that a "special" sequence would raise serious eyebrows. The only real disagreement seems to be over why we raise our eyebrows. Common sense says: because skulduggery is much more likely. Bayes says: because skulduggery is much more likely. IBE says: because skulduggery is much more likely. But that doesn't mean that 500 Heads is just as possible as any other sequence, under the Law of Large Numbers or anything else! Elizabeth B Liddle
Dr Liddle, IBE is not generally a Bayesian probability inference with weighting on probabilities, or even a likelihood one. Scoring superiority on factual adequacy, coherence and explanatory power in light of empirical observation is not generally Bayesian. Though, in limited cases it can be. KF kairosfocus
Yep. That would seem to be "inference to best explanation", KF! Glad we can agree on something for once :) Cheers Lizzie Elizabeth B Liddle
KS, re:
It still seems arbitrary and subjective to divide the 9-digit numbers into two categories, “significant to me” and “not significant to me”.
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else. In another, text in English is sharply distinct from repetitive short blocks or typical at random gibberish, and the three do not function the same. The above is little more than a case of wishing away a very important and vital phenomenon that is inconvenient. KF kairosfocus
Biasing coins? Easy -- get a double-head coin. KF kairosfocus
Sal: I am not disputing what you said, or what you meant. My point is much more hypothetical, but very important, and it is that the reason we can conclude from observing a "special" sequence that something weird happened (it doesn't matter whether it was that the coin had two heads, or that it wasn't actually tossed, either scenario will do) isn't that such a sequence is "impossible" or "empirically impossible" or "against the Laws of Physics" or anything else about probability of the sequence. It's because we know that Something Weird is much MORE probable than tossing one of those rare sequences. As I said, if we knew, with certainty, that the coin was fair, and the tossing fair, then we would simply have to conclude that, well, that the coin was fair and the tossing fair! We could not conclude "design" because we would know, a priori, that design was not the cause! In other words, our confidence that the sequence was designed stems from the relative probability that it was, compared with the probability that it was thrown by chance. Even if we are extremely confident that the coin was fair, and tossed fairly, it is still much more likely that the coin was not as fair as we thought it was, or that the tossing was somehow a conjuring trick, than that the sequence was tossed by chance. That is because we are less certain of non-design than we are of not tossing such a rare kind of sequence. Bayes is a GOOD tool for ID, not a bad one. It's exactly what IDists here (including gpuccio, though he thinks he doesn't!) use, although usually it's called "inference to the best explanation" or some such (for some reason Bayes is a bad word in ID circles I think). But I have to say, I think all this back-biting about other people's probability smarts is completely unjustified. There are very few errors being made on these threads, but boy, is there a lot of misunderstanding of each other's meaning! As I said above, most people are mostly right. 
Where you guys are disagreeing is over the meaning of words, not the math. *growl* Elizabeth B Liddle
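Elizabeth's Bayesian argument can be made numerically concrete. The prior of 10^-12 for "something weird" (a biased coin, a conjuring trick) and the likelihood of 1/2 are my illustrative assumptions, not values from the thread; the point is that even an absurdly small prior for skulduggery is overwhelmed by the likelihood ratio.

```python
# Bayes sketch: posterior probability of "something weird" after observing
# 500 heads, starting from an extremely confident prior in fairness.
from fractions import Fraction

p_weird = Fraction(1, 10**12)      # assumed prior: one-in-a-trillion chance of foul play
p_fair = 1 - p_weird

lik_fair = Fraction(1, 2**500)     # P(500 heads | fair coin, fair toss)
lik_weird = Fraction(1, 2)         # P(500 heads | something weird): a rough guess

posterior_weird = (lik_weird * p_weird) / (lik_weird * p_weird + lik_fair * p_fair)
print(float(posterior_weird))      # ≈ 1.0: skulduggery wins despite the tiny prior
```

Exact rational arithmetic avoids any floating-point underflow in the 2^-500 likelihood.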
Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal’s confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.
consider if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument that says if coins are all heads they are sufficiently inconsistent with the Binomial Distribution for randomly tossed coins, hence we can reject the chance hypothesis.
I never said the coins were randomly tossed. I only said we can compare the configuration of the coins against the hypothesis that they were randomly tossed. Given these considerations, and given that we know humans are capable of making all coins heads, a reasonable (but not absolute) inference is that the configuration arrived by design. And finally, severely biased coins are considered rare: You can load dice, you can't bias coins scordova
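Sal's binomial-distribution argument can be sketched numerically: 500 heads in 500 tosses lies absurdly far from the mean of Binomial(500, 0.5), which is the usual sense in which the configuration is "inconsistent with" the random-toss hypothesis.

```python
# How many standard deviations from the binomial mean is the all-heads outcome?
from math import sqrt

n, p = 500, 0.5
mean = n * p                   # 250 expected heads
sd = sqrt(n * p * (1 - p))     # ≈ 11.18

z = (500 - mean) / sd
print(round(z, 2))             # ≈ 22.36 standard deviations above the mean
```

For comparison, a "five sigma" result is already considered decisive in particle physics.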
MF: All I will say for the moment is that if you were to drop 30 - 100 darts in the case envisioned, it is reasonably certain that the one sigma bands will pick up a proportion of hits that is linked to relative area. Tails being small, will tend to be hit less often, and if our far tails are involved, we are unlikely to see any hits at all. But the bulk will pick up most of the hits. Now, you can pick an arbitrarily narrow stripe near the peak and it will have the same pattern of being of low proportion less likely to be hit. That simply underscores the point that such special zones are unlikely to be found on a reasonably limited blind search. Which is one of the points I was highlighting. You do understand the first point, on trying to blindly catch needles in haystacks with limited searches. Now, the further point you tried to divert attention from is not strictly central to where I am going, but let's note it. The far tails of a bell are natural examples of narrow zones T in a much larger distribution of possibilities W. Now that the first hurdle is behind us, look next at relevant cases where W = 2^500 to 2^1,000 or more. The search capacity of the solar system's 10^57 atoms, acting for a plausible lifespan ~ 10^17 s, could not sample more than 1 straw sized pluck from a cubical haystack 1,000 light years on a side. About as thick as our galaxy. Since stars are on average several LY apart in our neighbourhood, if such a stack were superposed on our galaxy, such a sample -- and we have just one shot -- will all but certainly pick straw. At 1,000 bits worth of configs, the conceptual haystack would swallow up the observable cosmos worse than a haystack swallows up a needle. In short, with all but certainty, when we have config spaces at least that big, cosmic scale search resources are going to be vastly inadequate to find anything but the bulk, configs in no particular pattern, much less a linguistically or computationally relevant one. 
Where also functional specificity and complexity get us into needing very tightly specified, atypical configs. Where also, as AutoCAD shows us, 3-d machines and systems can be represented by strings, so an analysis on strings is WLOG. KF PS: The simplest case of fluctuations I can think of for the moment is how for small particles in a fluid we see brownian motion, but as size goes up, impacts on the various sides average off and the effect vanishes. Likewise, it is abstractly possible that the molecules of oxygen in the room you sit in could spontaneously rush off to one end and leave you gasping; but it can be shown that we are unlikely to observe this even once in the lifespan of the observed cosmos. And yet such is a valid distribution. Just, its statistical weight is so overwhelmed by the scattered at random ones -- the overwhelming bulk -- that it is maximally improbable and practically unobservable. There is a difference between abstract possibility and empirical observability without deliberate intervention to set up simply describable but extremely atypical configs of the space of possibilities W. kairosfocus
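KF's needle-in-a-haystack numbers are easy to check on the back of an envelope. The 10^57 atoms and ~10^17 s figures are his; the per-atom event rate of 10^14 per second is my illustrative assumption for "fast chemical-scale events".

```python
# Back-of-envelope: total plausible blind-search trials vs. a 2^500 config space.
atoms = 10**57          # atoms in the solar system (KF's figure)
seconds = 10**17        # plausible lifespan in seconds (KF's figure)
rate = 10**14           # assumed fast events per atom per second (my assumption)

max_trials = atoms * seconds * rate     # 10^88 total trials
space = 2**500                          # ≈ 3.27e150 configurations

print(max_trials / space < 1e-60)       # True: the space dwarfs the search by >60 orders
```

Even granting every atom a trial every 10^-14 seconds for the whole span, the sampled fraction of the space is below 10^-60.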
groovamos @35: Well said. A similar point has been made many times, to those willing to listen, but I like the way you articulated it. I'm going to shamelessly steal your thinking. Eric Anderson
Chance Ratcliff: Thank you for you #45. It's really good to read intelligent and reasonable words, once in a while! :) gpuccio
Jerad: Where is my mathematical fallacy? I have explained it very clearly in my #41.
Sigh. As I’ve said several times already . . . if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred from such a single outcome. What you really should be asking is: what if it happened two times in a row. Or 3 out of 5 times.
groovamos' comment @35 echoes my thinking on this. A similar subject was brought up a few months ago by Phinehas and I replied in kind. To say that any outcome is equiprobable and hence just as unlikely as any other is to tacitly define an event in the sample space that is equal to the sample space: S = {a1, a2, ... an}, E = S, hence P(E) = 1. With regard to coin tosses, a specification in this sense would be an E for which 0 < P(E) < 1, and it forces a partition onto the sample space, such that S is equal to the union of E and not E. Specifying an outcome of all heads defines a specific sequence in the sample space. For 500 tosses, this sequence has a probability of P(E) = 2^-500, and P(~E) = 1 - P(E). There is no equiprobability with this partition, and we should never expect to see E occur. As gpuccio points out, this is empirical. The sequence is not logically impossible, and this was never at issue. We can be near-absolutely certain that, for any sequence of 500 coin tosses, there has never been one that comes up all heads, since the first coin was tossed by the first monetarily-aware person. The implication for a sample space of 500 bits is that any sequence that one can specify by any means whatsoever has likely never occurred at random, nor will it likely ever occur. Ever. Chance Ratcliff
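Chance Ratcliff's partition can be verified with exact rational arithmetic: specifying any single 500-toss sequence splits the sample space into E and ~E with wildly unequal probabilities, while the two together still exhaust the space.

```python
# Exact check of the E / ~E partition for a specified 500-toss sequence.
from fractions import Fraction

p_E = Fraction(1, 2**500)    # the specified sequence (e.g. all heads)
p_notE = 1 - p_E             # everything else

print(p_E + p_notE == 1)                    # True: E and ~E partition the space
print(p_notE > Fraction(999999, 1000000))   # True: ~E is a near-certainty
```

Fractions are used so the 2^-500 term is represented exactly rather than as a rounded float.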
Neil and Jerad have stated the absurd. Repeating an infamous fallacy that has been very popular in the worst darwinist propaganda.
Where is my mathematical fallacy?
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed?
Sigh. As I've said several times already . . . if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred from such a single outcome. What you really should be asking is: what if it happened two times in a row. Or 3 out of 5 times. Your comment about whether I would wonder if Shakespeare ever existed is pretty insulting really. As is Barry's Tourette's dig.
By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies.
Gee thanks.
e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500 bit search space, that’s empirically impossible.
No, it is not impossible. It IS very highly improbable.
f) This is the correct reasoning why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes.
If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space!
That is a clever but incorrect restatement of my views. I'm tired of being misinterpreted and having words put in my mouth. Find something wrong with what I've said, be specific please. Jerad
Gpuccio: Unfortunately you don't address the problems with Fisherian inference - you just declare that they are irrelevant (including the major objection that it answers the wrong question). Meanwhile you seem to be content to dismiss Bayesian inference on the grounds that it is hard to do the sums (even though it answers the right question). Do you want to do an easy calculation to answer the wrong question or a hard calculation to answer the right question? Mark Frank
To all: A few comments to try to clarify this important point. First of all, my compliments to groovamos (#35), who has very correctly stated the fundamental point. I would only add, for clarity, the following: a) Any individual sequence of 500 coin tosses has obviously the same probability to be the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that. b) As groovamos very correctly states, the probability that a sequence, one of the 2^500 possible ones, will be the outcome of a single experiment is very easy to compute: it is 1 (necessity). c) The problem here is that, among the 2^500 sequences, there are specific subsets that have some recognizable formal property. The subset "sequences where only one value is obtained 500 times" is made of two sequences: 500 heads and 500 tails. d) While there are certainly many other "subsets" more or less ordered or recognizable, the vast vast majority, virtually all of the 2^500 sequences, will be of the random kind, with no special recognizable order. e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500 bit search space, that's empirically impossible. f) This is the correct reasoning why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept. IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes. If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space! 
By the way, Mark, the fallacy so well outlined by groovamos is also the fallacy that, certainly in good faith, but not so good statistical and methodological clarity, you tried on me at the time of the famous dFSCI challenge. You may remember your "argument" about the random sequence that pointed to a set of papers in a database. A set defined by the numbers randomly obtained. As I hope you can see, the probability of getting a sequence, say, of 5 numbers from 1 to 1000 pointing to 5 items in a database where the items are numbered from 1 to 1000 is exactly 1. So, you may be clever in statistics, but being clever does not save us from error, when a cognitive bias is our strong motivator. gpuccio
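gpuccio's subset comparison in points (c) through (f) can be made numerically concrete. The "balanced" window of 250 ± 22 heads (roughly two standard deviations) is my illustrative choice for a loosely "random-looking" subset, not a definition from the thread.

```python
# Comparing the measure of the "only one value" subset against a loosely
# "balanced" subset of 500-toss outcomes.
from math import comb

n = 500
total = 2 ** n

p_uniform = 2 / total    # {all heads, all tails}: ≈ 6.1e-151
# Head counts within ~2 standard deviations of the mean (250 ± 22):
p_balanced = sum(comb(n, k) for k in range(228, 273)) / total

print(p_uniform)
print(p_balanced)        # ≈ 0.95: almost all the probability mass
```

The contrast between the two subset probabilities, not the equiprobability of individual sequences, is what carries the argument.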
Mark (#24): Please, compare this statement of yours: However, the conceptual problems are rather severe and they become very relevant when you are trying to tackle more philosophical subjects like ID. with this other one: The cost is – they can be hard to calculate and sometimes (not always) they require subjective estimates of the priors. That's exactly your problem when you use such Bayesian arguments to refute ID. In what you declare as a "philosophical subject" (and I don't agree!), you propose to replace a method which is simple and vastly used in all empirical sciences with a method that requires "subjective estimates of the priors". That seems folly, to me. Look at my treatment of dFSCI. It's simple, it's Fisherian, it's valid. You cannot accept it because of your priors, and so you shift to Bayesian objections. There is nothing good in that. Look at the absurd position of Neil and Jerad: they refute what is empirically evident, through a philosophical misunderstanding of probability. If these are the results of being Bayesian, I am very happy that I am a Fisherian. Your objections to the Fisherian method have really no relevance to a correctly argued Fisherian testing in a real empirical context, such as the problem of protein information. In my dFSCI procedure, I compute the probabilistic resources of a system to reject the null that some specific amount of functional protein information could arise by chance in that system. Once the probabilistic resources are taken into account, it's enough to add enough bits to be sure of an extremely low alpha level (certainly not 0.05 or 0.01!) to be empirically sure that such an amount of functional protein information could not arise by chance in that system. There is nothing philosophical in that. Here we are dealing with definite discrete states (the protein sequences). The probability of reaching a specific subset is well defined by the ratio of the subset to the search space. Your objections do not apply. 
Neil and Jerad have stated the absurd. Repeating an infamous fallacy that has been very popular in the worst darwinist propaganda. Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed? Just to know. By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies. gpuccio
Barry:
Like the person suffering from Tourette’s they just don’t seem to be able to help themselves.
Just to pick up a point you might be interested in: we have good evidence to demonstrate that far from people with Tourette's being unable to help themselves, they do such a fantastic job of learning to control their tics that they perform better than the rest of us at tasks that involve suppressing instinctive responses, e.g. on the Stroop task, or on an anti-saccade task (where you have to look in the opposite direction to a visual cue). Compensatory Neural Reorganization in Tourette Syndrome Neuroscience isn't all bunk :) Elizabeth B Liddle
I think what KF is saying, Mark, is that the nearer a class of pattern is to the tails of a distribution, the less likely we are to draw one at random, and so if we do find one, it demands an explanation in the way that finding a pattern from the middle of the distribution would not. This means that if we only have a few trials, we are very unlikely to sample from the tails, and that if something is so unlikely as to require 2^500 trials to have any decent chance of finding it, then we aren't going to find it by blind search before we exhaust the number of possible trials in the universe. The more familiar way of saying the same thing would be to say that if your random sample has a mean and distribution that is very different from the mean and distribution of the population you postulated under your null, you can reject that null that your sample was randomly drawn from that population. So if we find a sample of functional sequences out of a vast population of sequences, the overwhelming majority of which are non-functional, we can reject the null that it is a random sample from that population. Elizabeth B Liddle
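Elizabeth's sampling point (and KF's darts) can be simulated in a few lines: small random samples almost never land in the far tails of a distribution, which is why finding a tail value in a handful of draws is evidence against the "random draw" null. The 100-draw sample size and the 4-sigma cutoff are my illustrative choices.

```python
# Simulation: how often do 100 "darts" land beyond 4 sigma of a standard normal?
import random

random.seed(1)                 # fixed seed for reproducibility
trials = 100                   # a few darts
draws = [random.gauss(0, 1) for _ in range(trials)]

far_tail = sum(1 for x in draws if abs(x) > 4)   # hits beyond 4 sigma
print(far_tail)                # expected to be 0 for nearly any seed
```

The theoretical expectation is about 0.006 tail hits per 100 draws, so the bulk soaks up essentially all of them.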
KF re 36. I am quite confused by the point you are making but I will try my best.
Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics],
Sorry no - I am struggling to understand the point you are making.
and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor,
Yes - no problem with that.
thus also why the far tails would be unlikely to be captured in relatively small samples?
No less likely than any other equally small area on the chart e.g. a very thin strip in the middle.
(Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle in haystack effect?
I struggle to make head or tail of this sentence :-)
You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.)
Well no - because I am not sure what it is your are highlighting. Maybe you could write out your argument as a series of short simple sentences with no jargon and no abbreviations? That would really help me understand your point. Mark Frank
MF: Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics], and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor, thus also why the far tails would be unlikely to be captured in relatively small samples? (Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle in haystack effect? You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.) KF kairosfocus
Neil Rickert: Flip a coin 500 times. Write down the exact sequence that you got. We can say of that sequence, that it had a probability of (1/2)^500. It is a sequence that we would not expect to see even once. Yet we saw it. This is a common fallacy about probabilistic thinking. You are treating one particular sequence as especially improbable, when all sequences are equally improbable. And since what you wrote down came from an actual sequence, you can see that highly improbable things can happen. Although it is highly improbable for any particular person to win the lottery, we regularly see people winning. Why the above is NOT meaningful: a coin-toss experiment of 500 trials will select from an outcome set of (0.5)^-500 = 2^500 members. The probability that a member of the set is selected is 1.0. What you are really saying (even though the words can be construed otherwise) is a masquerade of what is needed: that ANY PARTICULAR member of the set being selected is unexpected, or has a probability of (0.5)^500. If you remove the word PARTICULAR from the previous, then the quirky English language we use prods (not forces) us to a drastically different interpretation, the one that has any meaning for the discussion. Worth repeating: the only interpretation having didactic meaning here for the discussion. And there is no "common fallacy" involved. Your statement then is only a rehashing of the statement: "The probability that a member of the set is selected is 1.0." Since this statement contains no new information, it is information-free, or in the context of our discussion, meaningless. BTW Neil: We have had a cooler than normal early June, cool nights, hot in the late afternoon. I tried to get you and Dr. Tour together at Rice U. and we have fabulous hotels in a city clearly emerging on the international scene. What happened? groovamos
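groovamos's distinction can be verified exhaustively for a small n: the probability that SOME member of the outcome set occurs is 1, while the probability of any PARTICULAR member is (1/2)^n. This is an illustrative sketch; n = 10 is an arbitrary tractable choice.

```python
# "Some outcome occurs" vs. "this particular outcome occurs", for n = 10 tosses.
from itertools import product
from fractions import Fraction

n = 10
p_particular = Fraction(1, 2**n)    # probability of any one named sequence

# Summing the same per-sequence probability over all 2^10 sequences:
total = sum(p_particular for _ in product("HT", repeat=n))

print(p_particular)    # 1/1024 for any particular sequence
print(total == 1)      # True: SOME outcome is certain
```

Conflating the two statements is exactly the slide groovamos is objecting to.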
Better link to Dimitrov e-book: 50 Nobel Laureates and other great scientists who believed in God by Tihomir Dimitrov http://www.nobelists.net/ bornagain77
It’s funny how often when one catches a Darwinist in really painful-to-watch idiocy, and call them on it, their response is something like “me no speaka the English.”
LOL! Eric Anderson
Jerad @ 27. It appears that you have no shame. Barry Arrington
corrected link: Founders of Modern Science Who Believe in GOD – Tihomir Dimitrov (pg. 222) http://www.academia.edu/2739607/Scientific_GOD_Journal bornagain77
KF 25 Yes, thanks - I am familiar with Fisher and NP. I have a diploma in statistics and have had a strong interest in the foundations of hypothesis testing for many years. The article you pointed me to appears to give a nice introduction to both, but I didn't have time to read it all in detail. Before taking this discussion any further, let's check that we are both talking about the same thing. I am debating the validity of Fisherian hypothesis testing as opposed to a Bayesian approach. Do you agree that is the issue and that it is relevant? If not, we should drop it immediately. Mark Frank
Contrary to what Einstein found to be miraculous, Jerad maintains that he should not be surprised at all that he is able to comprehend the universe. But alas, contrary to Jerad's complacency, Jerad's own atheistic/materialistic worldview, whether he wants to admit it or not, results in the epistemological failure of the entire enterprise of modern science to which he has paid such empty lip service:
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description) http://vimeo.com/32145998 BRUCE GORDON: Hawking's irrational arguments - October 2010 Excerpt: What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ The Absurdity of Inflation, String Theory and The Multiverse - Dr. Bruce Gordon - video http://vimeo.com/34468027
This lack of a guarantee that our perceptions and reasoning in science are trustworthy in the first place even extends into evolutionary naturalism itself:
Scientific Peer Review is in Trouble: From Medical Science to Darwinism - Mike Keas - October 10, 2012 Excerpt: Survival is all that matters on evolutionary naturalism. Our evolving brains are more likely to give us useful fictions that promote survival rather than the truth about reality. Thus evolutionary naturalism undermines all rationality (including confidence in science itself). Renowned philosopher Alvin Plantinga has argued against naturalism in this way (a summary of that argument is linked on the site). Or, if you're short on time and patience to grasp Plantinga's nuanced argument, see if you can digest this thought from evolutionary cognitive psychologist Steve Pinker, who baldly states: "Our brains are shaped for fitness, not for truth; sometimes the truth is adaptive, sometimes it is not." Steven Pinker, evolutionary cognitive psychologist, How the Mind Works (W.W. Norton, 1997), p. 305. http://blogs.christianpost.com/science-and-faith/scientific-peer-review-is-in-trouble-from-medical-science-to-darwinism-12421/ Why No One (Can) Believe Atheism/Naturalism to be True - video Excerpt: "Since we are creatures of natural selection, we cannot totally trust our senses. Evolution only passes on traits that help a species survive, and is not concerned with preserving traits that tell a species what is actually true about life." Richard Dawkins - quoted from "The God Delusion" http://www.youtube.com/watch?v=N4QFsKevTXs
The following interview is sadly comical, as an evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth that he is purporting to give in the interview (which raises the question of how he was able to come to that particular truthful realization in the first place, if neo-Darwinian evolution were actually true):
Evolutionary guru: Don't believe everything you think - October 2011 Interviewer: You could be deceiving yourself about that.(?) Evolutionary Psychologist: Absolutely. http://www.newscientist.com/article/mg21128335.300-evolutionary-guru-dont-believe-everything-you-think.html "But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?" - Charles Darwin - Letter To William Graham - July 3, 1881
also of note:
The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a pantheist. Of the former, Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them, failed to make a breakthrough. The latter came in a medieval Christian context, and just about within a hundred years from the availability of Aristotle's works in Latin. As we will see below, the breakthrough that began science was a Christian commentary on Aristotle's De Caelo (On the Heavens). Modern experimental science was rendered possible, Jaki has shown, as a result of the Christian philosophical atmosphere of the Middle Ages. Although a talent for science was certainly present in the ancient world (for example in the design and construction of the Egyptian pyramids), nevertheless the philosophical and psychological climate was hostile to a self-sustaining scientific process. Thus science suffered still-births in the cultures of ancient China, India, Egypt and Babylonia. It also failed to come to fruition among the Maya, Incas and Aztecs of the Americas. Even though ancient Greece came closer to achieving a continuous scientific enterprise than any other ancient culture, science was not born there either. Science did not come to birth among the medieval Muslim heirs to Aristotle. …. 
The psychological climate of such ancient cultures, with their belief that the universe was infinite and time an endless repetition of historical cycles, was often either hopelessness or complacency (hardly what is needed to spur and sustain scientific progress); and in either case there was a failure to arrive at a belief in the existence of God the Creator and of creation itself as therefore rational and intelligible. Thus their inability to produce a self-sustaining scientific enterprise. If science suffered only stillbirths in ancient cultures, how did it come to its unique viable birth? The beginning of science as a fully fledged enterprise took place in relation to two important definitions of the Magisterium of the Church. The first was the definition at the Fourth Lateran Council in the year 1215, that the universe was created out of nothing at the beginning of time. The second magisterial statement was at the local level, enunciated by Bishop Stephen Tempier of Paris who, on March 7, 1277, condemned 219 Aristotelian propositions, so outlawing the deterministic and necessitarian views of creation. These statements of the teaching authority of the Church expressed an atmosphere in which faith in God had penetrated the medieval culture and given rise to philosophical consequences. The cosmos was seen as contingent in its existence and thus dependent on a divine choice which called it into being; the universe is also contingent in its nature and so God was free to create this particular form of world among an infinity of other possibilities. Thus the cosmos cannot be a necessary form of existence; and so it has to be approached by a posteriori investigation. The universe is also rational and so a coherent discourse can be made about it. Indeed the contingency and rationality of the cosmos are like two pillars supporting the Christian vision of the cosmos. 
http://www.columbia.edu/cu/augustine/a/science_origin.html Founders of Modern Science Who Believe in GOD - Tihomir Dimitrov http://www.scigod.com/index.php/sgj/article/viewFile/18/18
bornagain77
I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of “moral certainty.” If you had, you would have learned something. You would have learned that “moral certainty” has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick in itself.
You're right, I hadn't read the link. I have now. I've already said that if I got 500 heads I would be very suspicious and check everything out but if nothing was wrong I'd conclude a fluke result. That's an explanation that depends on existing causes without the need to invoke a designer. There is no need to fall back on 'beyond reasonable doubt'. We have a perfectly good explanation for getting any prespecified sequence on the first try: it just happened.
FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue.
That's my only issue. You want to escalate the discussion so that it traipses into other realms.
And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.)
Yup, I did address two points from different people in one post. Once again: if anyone can point out something mathematical that I've got wrong then I'll change my stance. Please restrict your criticisms to things I've actually said and addressed. Jerad
MF: There you go, dragging red herrings away to ad hominem-laced strawmen -- here the subtext of my imagined ignorance and/or stupidity such that you want GP to help you correct me. Kindly, see what I have clipped just above from Fisher's mouth, and ponder on how a "natural" special zone like a far tail to a bell that is hard to hit by dropping darts scattering at random (relative to the ease of hitting the bulk) illustrate aptly the catching the needle in the haystack on a small blind sample problem. KF kairosfocus
MF: Are you familiar with what Fisher actually did, which pivoted on areas under the curve beyond a given point, relativised into p-values? [Probability being turned into likelihood of an evenly scattered sample hitting a given fraction of the whole area. As in, exactly what the dart-dropping exercise puts in more intuitive terms?] As in, further, a reasonable blind sample will reliably gravitate to the bulk rather than the far tails? Hence, if, contrary to reasonable expectation on sampling we are where we should not expect to be, Fisher said: "either an exceptionally rare chance has occurred or the theory [--> he here means the model that would scatter results on the relevant bell curve] is not true." The NP discussion on type I/II errors etc. is post the relevant point. Kindly cf. the linked review article. KF kairosfocus
Gpuccio 18 Well, glad you recognise that the Fisher/NP/Likelihood/Bayes issue is not a strawman. Maybe you can explain that to KF? As I said, Fisherian techniques work because in a wide range of situations they lead to the same decision as a Bayesian approach and they are easier to use. However, the conceptual problems are rather severe, and they become very relevant when you are trying to tackle more philosophical subjects like ID. Here are a few of the problems:
* No justification for one rejection region over another. Clearly illustrated when you justify one-tailed as opposed to two-tailed testing, but it actually applies more generally.
* No justification for any particular significance level. Why 95% or 99% or 99.9%?
* No proof that the same significance level represents the same level of evidence in any two situations - so there is no reason to suppose that 95% significance in one situation is a higher level of evidence than 90% significance in another.
* You can get two different significance levels from the same experiment with the same results, depending on the experimenter's intentions! (See http://www.indiana.edu/~kruschke/articles/Kruschke2010TiCS.pdf)
But perhaps most important of all - it measures the wrong thing! We want to know how probable the hypothesis is given the data. Fisher's method tells us only how probable it is that the data would have fallen into certain categories given the hypothesis. Bayesian approaches avoid all these problems - which seem to me to be worth avoiding, and rather more substantial than an excuse to introduce my worldview. The cost is that they can be hard to calculate, and sometimes (not always) they require subjective estimates of the priors. Mark Frank
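[Editor's note: the Fisherian-versus-Bayesian contrast for the 500-heads case can be sketched numerically. The "biased coin with heads probability 0.99" alternative below is an arbitrary illustrative choice, not one proposed by any commenter:]

```python
import math

n, k = 500, 500  # 500 heads observed in 500 flips

# Fisherian view: p-value under the fair-coin null. With all heads,
# the one-tailed p-value is exactly P(500 heads) = 0.5^500.
log10_p = n * math.log10(0.5)
print(f"p-value ~ 10^{log10_p:.1f}")   # astronomically small

# Bayesian-style view: compare likelihoods of the SAME data under the
# null and an explicit alternative (assumed here: heads-prob = 0.99).
log10_L_fair = n * math.log10(0.5)
log10_L_biased = k * math.log10(0.99)
log10_bayes_factor = log10_L_biased - log10_L_fair
print(f"log10 Bayes factor (biased vs fair): {log10_bayes_factor:.1f}")
```

The Fisherian number says only how surprising the data are under the null; the likelihood comparison additionally requires an articulated alternative, which is the conceptual point being debated.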
KF 13
You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances;
I was only pointing out problems with Fisherian hypothesis testing. If Fisherian hypothesis testing is not relevant then I apologise - but then I have to wonder why you raised it? Mark Frank
BA: It seems Jerad et al need to make the acquaintance of the ordinary unprejudiced man on the Clapham omnibus. Or, with the following from Simon Greenleaf, in Evidence, vol. I, ch. 1, on the same basic point. KF kairosfocus
Jerad: See what I mean about tilting at strawmen -- as in, there you go again? FYI, it is readily shown that in theory any member of a pop can be accessed by a sample, but that simply distracts from the material issue. Let me put it in somewhat symbolised terms, as saying the equivalent in English seems to make no impression:
1: Config spaces of possibilities, W, are partitioned into zones of interest that are naturally significant -- far tails, text strings in English (not repetitions or typical random gibberish), etc. -- which we can symbolise z1, z2, . . . zn, where
2: SUM on i (zi) is much, much, much less than W, putting us in the needle-in-the-haystack context.
3: Also, the search resources lead to a credible blind and unguided sample size s that is likewise incredibly less than W.
4: So, it is highly predictable/reliable -- in cases where W = 2^500 to 2^1,000 or more, all but certain -- that a blind search of W of scope s [10^84 to 10^111 samples] will come from the overwhelming bulk of W, not the special zones in aggregate.
5: That is, for relevantly large s, the overwhelming likelihood is that blind searches will come from W - {SUM on i (zi)}, not from SUM on i (zi).
6: And so, if instead we see the opposite, the BEST, EMPIRICALLY WARRANTED EXPLANATION is that such arose by choice contingency [for relevant cases where this is the reasonable alternative], not chance.
7: Which is a design inference.
8: Where also, in relevant cases, requisites of specific function, e.g. as text in English, sharply constrain acceptable possible strings from W.
9: That is, that SUM on i (zi) is much, much less than W is not an unreasonable criterion.
(And, on the same point, you clustered a clip from someone else about presuppositions with what I said, about the issue of strictly limited and relatively tiny searches of vast config spaces W, that have in them zones of interest that are sparse indeed.) KF kairosfocus
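[Editor's note: the bound implicit in KF's numbered argument can be sketched with a union bound. The space size, zone size, and sample count below are illustrative assumptions chosen to match the orders of magnitude mentioned in the thread, not figures asserted by any commenter:]

```python
import math

# Assumed sizes (illustration only): a config space of 2^1000 members,
# special zones totalling 2^500 members, and 10^111 blind samples
# (the upper bound on search resources used in the thread).
log2_W = 1000           # |W| = 2^1000
log2_z = 500            # SUM(zi) = 2^500
s = 10 ** 111           # number of blind samples

# Union bound: P(at least one sample lands in a zone) <= s * (z / W).
log10_P_hit = math.log10(s) + (log2_z - log2_W) * math.log10(2)
print(f"P(hit a special zone) <= 10^{log10_P_hit:.1f}")
```

With these assumed sizes the hit probability is bounded near 10^-40, which is the "blind samples come from the bulk" claim in quantitative form; the conclusion is only as strong as the assumed ratio of zone size to space size.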
Jerad @ 5. I could not fail to notice that you are still dodging the question. I see that you did not follow the link I provided to you explaining the concept of "moral certainty." If you had, you would have learned something. You would have learned that "moral certainty" has nothing to do with morals in the sense of ethics (which is the dodge you are using). So, your dodge does not work. It only makes you look more obstinate, which is some trick in itself. Barry Arrington
I must say that I am really surprised by Neil Rickert. I did not expect such a position from him. From Jerad, on the other hand... gpuccio
Mark: I would say that Fisherian hypothesis testing works perfectly in the empirical sciences, provided that it is applied with a correct methodology. The hypothesis testing procedure is perfectly correct and sound, but the methodology must be correct too. We have to ask reasonable questions, and the answers must be pertinent. Frankly, the only reason that I can see for your (and others') insistence on a Bayesian approach is that you use it only to introduce your personal worldview commitments (under the "noble" word of priors), assigning irrationally improbable probabilities to all that you don't want to accept (such as the existence of non-physical conscious beings). If that is the only addition that a Bayesian approach can offer us in this context, I gladly leave it to you. I am happy to discuss my and others' worldviews, but I will certainly not do that in terms of "probabilities". gpuccio
Don’t you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. Or, don’t you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are “fair” in debate — “fair” on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make ‘right.’ Do you really want to go there?) KF
I started by responding to a post from Saturday discussing the probability of getting a result 22 standard deviations from the mean of a binomial distribution. That's all I'm doing. And defending what I've said when others have brought it up again in other threads. I'm happy to address partitioned configuration spaces if you want. As far as I can see, no one has actually been able to show that my mathematics is wrong. Some have attacked a strawman of what I've said. And there's been a certain amount of abuse (Jerad's DDS . . . ) which I'm doing my best to ignore. Perhaps you'd like to caution some of the other commenters about their tone and correct their mathematical errors.
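[Editor's note: the "22 standard deviations" figure referenced above follows from the normal approximation to the binomial; a minimal check of the arithmetic:]

```python
import math

n, p = 500, 0.5
mean = n * p                          # expected heads: 250
sd = math.sqrt(n * p * (1 - p))       # binomial standard deviation: sqrt(125) ~ 11.18

# How many standard deviations above the mean is the all-heads outcome?
z = (500 - mean) / sd
print(f"z = {z:.2f} standard deviations")   # ~22.36
```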
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions:
Since you can't prove that negative, it's just an assertion on your part, a hypothesis that science has no need of. I look at the universe and see chaos, destruction and waste and, yes, some beauty and order. But, from a naturalistic point of view, if there were no order then I wouldn't be here to discover it. That does not mean that I can look backwards and anthropomorphically say things were/are designed. We live on this planet in this solar system in this galaxy because it happens to be one (of probably billions) that has the right combination of conditions to foster the beginning of life. But there are many, many, many other planets and solar systems where the conditions are completely hostile. If that meteor hadn't helped doom the dinosaurs, the human race might not ever have existed at all. Stuff happens, all the time, every day. Sometimes there's an amazing coincidence or synchronicity that makes you stop in awe. Happens to me all the time. There's no magic director back in the studio bending events so certain things happen. You are going to get coincidences and really, really improbable things happening. Jerad
Jerad repeats the oft-repeated false mantra of materialists/atheists:
Supposing we live in a Theistic universe is not science though.
Yet apparently Jerad and other materialists/atheists are blind, either willingly or otherwise, to the fact that science would be impossible without Theistic presuppositions: A few quick notes to that effect:
John Lennox - Science Is Impossible Without God - Quotes - video remix http://www.metacafe.com/watch/6287271/ Not the God of the Gaps, But the Whole Show - John Lennox - April 2012 Excerpt: God is not a "God of the gaps", he is God of the whole show. http://www.christianpost.com/news/the-god-particle-not-the-god-of-the-gaps-but-the-whole-show-80307/ Philosopher Sticks Up for God Excerpt: Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, “is vastly more hospitable to science than naturalism,” with its random process of natural selection, he (Plantinga) writes. “Indeed, it is theism, not naturalism, that deserves to be called ‘the scientific worldview.’” http://www.nytimes.com/2011/12/14/books/alvin-plantingas-new-book-on-god-and-science.html?_r=1&pagewanted=all "You find it strange that I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way.. the kind of order created by Newton's theory of gravitation, for example, is wholly different. Even if a man proposes the axioms of the theory, the success of such a project presupposes a high degree of ordering of the objective world, and this could not be expected a priori. That is the 'miracle' which is constantly reinforced as our knowledge expands." Albert Einstein - Goldman - Letters to Solovine p 131. Comprehensibility of the world - April 4, 2013 Excerpt:,,,So, for materialism, the Einstein’s question remains unanswered. Logic and math (that is fully based on logic), to be so effective, must be universal truths. If they are only states of the brain of one or more individuals – as materialists maintain – they cannot be universal at all. Universal truths must be objective and absolute, not just subjective and relative. 
Only in this way can they be shared among all intelligent beings.,,, ,,,Bottom line: without an absolute Truth, (there would be) no logic, no mathematics, no beings, no knowledge by beings, no science, no comprehensibility of the world whatsoever. https://uncommondesc.wpengine.com/mathematics/comprehensibility-of-the-world/ The Great Debate: Does God Exist? - Justin Holcomb - audio of the 1985 debate available on the site Excerpt: The transcendental proof for God’s existence is that without Him it is impossible to prove anything. The atheist worldview is irrational and cannot consistently provide the preconditions of intelligible experience, science, logic, or morality. The atheist worldview cannot allow for laws of logic, the uniformity of nature, the ability for the mind to understand the world, and moral absolutes. In that sense the atheist worldview cannot account for our debate tonight.,,, http://theresurgence.com/2012/01/17/the-great-debate-does-god-exist Random Chaos vs. Uniformity Of Nature - Presuppositional Apologetics - video http://www.metacafe.com/w/6853139 "Clearly then no scientific cosmology, which of necessity must be highly mathematical, can have its proof of consistency within itself as far as mathematics go. In absence of such consistency, all mathematical models, all theories of elementary particles, including the theory of quarks and gluons...fall inherently short of being that theory which shows in virtue of its a priori truth that the world can only be what it is and nothing else. This is true even if the theory happened to account for perfect accuracy for all phenomena of the physical world known at a particular time." Stanley Jaki - Cosmos and Creator - 1980, pg. 49 Taking God Out of the Equation - Biblical Worldview - by Ron Tagliapietra - January 1, 2012 Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties. 1. Validity . . . 
all conclusions are reached by valid reasoning. 2. Consistency . . . no conclusions contradict any other conclusions. 3. Completeness . . . all statements made in the system are either true or false. The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem. Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation. Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3). http://www.answersingenesis.org/articles/am/v7/n1/equation#
etc.. etc.. bornagain77
PPS: Those interested in following up the rabbit trail discussion may wish to go here for a review. I am highlighting that unlike arbitrarily chosen not naturally evident target zones, far tails of bell distributions are naturally evident special zones that illustrate the effect of partitioning a config space into drastically different statistical weight clusters, and then searching blindly with restricted resources. kairosfocus
PS: Remember, target zones of interest are not merely arbitrarily chosen groups of outcomes, another fallacy in the strawman arguments above. E.g. functional configs such as 72+ ASCII character text in English are readily recognisable and distinct from either (i) repeating short patterns: THETHE. . . . THE, and (ii) typical, expected at random outcomes: GHJDXTOU%&OUHYER&KLJGUD . . . HTUI. There seems to be a willful refusal to accept the reality of functionally specific configs showing functional sequence complexity, FSC, that are readily observable as distinct from RSC or OSC. kairosfocus
MF: You are demonstrably wrong, and your snipping out of context allowed you to set up a strawman and knock it over. You know or should know that the material issue at stake is partitioned config spaces and relative statistical weights of clusters of possible outcomes, leading to the dominance of the bulk of a bell distribution under relevant sampling circumstances; that is an easily observed fact, as doing the darts and charts exercise will rapidly show EMPIRICALLY -- a 4 - 5 SD tail (as discussed) will be very thin indeed. Discussions of NP etc., and the shaving off of a slice from the bulk -- which has no natural special significance here -- serve only as a red herring distraction from a point that is quite plain and easily shown empirically. Thence, you seem to have used the red herring led out to a strawman to duck the more direct issue on the table, where this applies to the sort of beyond-astronomical config spaces, relatively tiny special, known attractive target zones, and small blind samples we are dealing with. The suspect pattern continues. Do better next time, please. KF kairosfocus
Jerad: Don't you see that you are tilting at a strawman at this point? Cf 9 just above on partitioned config spaces and relative statistical weights of resulting clusters. Or, don't you see that you are inviting the conclusion that you are revealing by actions speaking louder than words, that you think any and all tactics are "fair" in debate -- "fair" on such a view being a mere social construct after all so that the honourable thing is one thing by nature and another by laws made for the moment by men through power struggles on the principle that might and manipulation make 'right.' Do you really want to go there?) KF kairosfocus
It probably does not help, that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice.
Fisherian hypothesis testing has fallen out of fashion because it has become widely recognised that it is wrong. It only worked for all those years because in a wide range of circumstances it leads to much the same decisions as a Bayesian approach, and it was much easier to use. With the advent of computers and clearer thinking about the foundations of statistics, this is less and less necessary. In fact, in many contexts pure Fisherian hypothesis testing fell out of favour several decades ago and was superseded by the Neyman-Pearson approach, which requires an alternative hypothesis to be clearly articulated (and is thus moving in the direction of Bayes). Without the NP approach you cannot calculate such vital parameters as the power of the test. Whether you use a pure Fisherian or NP approach, there are deep conceptual problems. To take your example of throwing darts at a Gaussian distribution: what that shows is that you are more likely to get a result between 0 and 1 SD than between 1 and 2 SD, and so on. However, this does not in itself provide a justification for putting the rejection region at the extremities. Fisherian thinking justifies the rejection region on the basis of the probability of hitting it being less than the significance level. You can draw such a region anywhere on your Gaussian distribution. Near the middle it would be a much narrower region than near the tails, but it would still fall below the significance level. The only reason why using the tails of the distribution as a rejection region usually works is because the alternative hypothesis almost always gives a greater likelihood to this area than it does to the centre. But there has to be an alternative hypothesis. Indeed, in classical hypothesis testing it is common to decide that the rejection region is just one tail and not both - single-tailed hypothesis testing. How is this decision made? By deciding that the only plausible alternative hypothesis lies on one side of the distribution and not the other. I hope you are not going to ignore this corrective :-) Mark Frank
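[Editor's note: Mark Frank's point that a region of any given probability can be drawn anywhere on the curve can be illustrated with the standard normal CDF. The slice width 0.1257 below is a computed illustration, not a value from the thread:]

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# A conventional 5% far-tail rejection region: everything beyond z = 1.6449.
tail_mass = 1 - Phi(1.6449)

# A narrow band just above the mean, [0, 0.1257], carries the SAME 5% mass,
# because the density is highest near the centre.
centre_mass = Phi(0.1257) - Phi(0)

print(f"tail mass   ~ {tail_mass:.3f}")     # ~0.050
print(f"centre mass ~ {centre_mass:.3f}")   # ~0.050
```

Both regions would be hit by chance 5% of the time under the null, so the probability of the region alone cannot be what singles out the tails; an alternative hypothesis that favours the tails is doing that work.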
To be able to have a ‘fair coin flip’ in the first place presupposes that we live in a Theistic universe where what we perceive to be random events are bounded within overriding constraints that prevent complete chaos from happening. Chaos such as the infamous Boltzmann’s brain that would result in a universe where infinite randomness was allowed to rule supreme with no constraint.
Supposing we live in a Theistic universe is not science though.
FYI, BA spoke of moral certainty in a PROBABILISTIC mathematical context, where it is relevant on the application of the calcs. KF
That makes no sense to me whatsoever. If something is possible and it happens and it looks like there was no intervention or bias then what do morals have to do with it? Jerad