
Jerad and Neil Rickert Double Down


In the combox to my last post, Jerad and Neil join to give us a truly pristine example of Darwinist Derangement Syndrome in action. Like the person suffering from Tourette’s, they just don’t seem to be able to help themselves.

Here are the money quotes:

Barry:  “The probability of [500 heads in a row] actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.”

Sal to Neil:  “But to be clear, do you think 500 fair coins heads violates the chance hypothesis?”

Neil:  “If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.”

Jerad chimes in:  “There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.” And “But if 500 Hs did happen it’s not an indication of design.”

I do not believe Jerad and Neil are invincibly stupid.  They must know that what they are saying is blithering nonsense.  They are, of course, being piggishly obstinate, and I will not argue with them.  But who needs to argue?  When one’s opponents say such outlandish things one wins by default.

And I can’t resist adding this one last example of DDS:

Barry to Jerad:  “Is there ANY number of heads in a row that would satisfy you? Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?”

Jerad:  “A moral certainty? What does that mean?”

It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls them on it, their response is something like “me no speaka the English.”

Jerad, let me help you out:  http://en.wikipedia.org/wiki/Moral_certainty

Comments
Dr Liddle, IBE is not generally a Bayesian probability inference with weighting on probabilities, or even a likelihood one. Scoring superiority on factual adequacy, coherence and explanatory power in light of empirical observation is not generally Bayesian. Though, in limited cases it can be. KF
kairosfocus, June 24, 2013 at 4:44 PM PDT
Yep. That would seem to be "inference to best explanation", KF! Glad we can agree on something for once :) Cheers, Lizzie
Elizabeth B Liddle, June 24, 2013 at 4:42 PM PDT
KS, re:
It still seems arbitrary and subjective to divide the 9-digit numbers into two categories, “significant to me” and “not significant to me”.
There are a LOT of situations where the sort of partitioning of a config space we are talking about is real and important. Start with that scene in the Da Vinci Code where a bank vault must be accessed first shot or else. In another case, text in English is sharply distinct from repetitive short blocks and from typical at-random gibberish, and the three do not function the same. The above is little more than a case of wishing away a very important and vital phenomenon that is inconvenient. KF
kairosfocus, June 24, 2013 at 4:41 PM PDT
Biasing coins? Easy -- get a double-headed coin. KF
kairosfocus, June 24, 2013 at 4:35 PM PDT
Sal: I am not disputing what you said, or what you meant. My point is much more hypothetical, but very important, and it is that the reason we can conclude from observing a "special" sequence that something weird happened (it doesn't matter whether it was that the coin had two heads, or that it wasn't actually tossed; either scenario will do) isn't that such a sequence is "impossible" or "empirically impossible" or "against the Laws of Physics" or anything else about the probability of the sequence. It's because we know that Something Weird is much MORE probable than tossing one of those rare sequences. As I said, if we knew, with certainty, that the coin was fair, and the tossing fair, then we would simply have to conclude that, well, the coin was fair and the tossing fair! We could not conclude "design" because we would know, a priori, that design was not the cause! In other words, our confidence that the sequence was designed stems from the relative probability that it was, compared with the probability that it was thrown by chance. Even if we are extremely confident that the coin was fair, and tossed fairly, it is still much more likely that the coin was not as fair as we thought it was, or that the tossing was somehow a conjuring trick, than that the sequence was tossed by chance. That is because we are less certain of non-design than we are of not tossing such a rare kind of sequence. Bayes is a GOOD tool for ID, not a bad one. It's exactly what IDists here (including gpuccio, though he thinks he doesn't!) use, although usually it's called "inference to the best explanation" or some such (for some reason Bayes is a bad word in ID circles, I think). But I have to say, I think all this back-biting about other people's probability smarts is completely unjustified. There are very few errors being made on these threads, but boy, is there a lot of misunderstanding of each other's meaning! As I said above, most people are mostly right. Where you guys are disagreeing is over the meaning of words, not the math. *growl*
Elizabeth B Liddle, June 24, 2013 at 4:08 PM PDT
A resolution of the 'all-heads paradox'
keiths, June 24, 2013 at 3:52 PM PDT
Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal’s confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.
I clarified this point in other discussions, but I'll repeat it. The coin is presumed fair based on physics. It is presumed reasonably symmetric; if you like, you can even hypothetically test it. Even if it is slightly unfair, for a sufficiently large number of coins the chance hypothesis can be rejected for all heads. For example, the all-heads probability for a coin that has a 75% propensity for heads is still remote: (.75)^500 = 3.39 x 10^-63, and this is confirmed by the Stat Trek calculator: http://stattrek.com/online-cal.....omial.aspx By way of contrast, a fair coin being all heads has a probability of (.5)^500 = 3.1 x 10^-151. Given this, even unfair coins are not a good explanation for all coins being observed to be heads. It's a better explanation, but a 3.39 x 10^-63 probability isn't anything I'd wager on. We can reject the fair-coin hypothesis and accept that the coin is unfair within reasonable limits, just to be generous (say, a 75% propensity for heads). All coins heads for a sufficiently large set of coins would still reasonably (not absolutely) suggest a non-random process was the driver for the configuration. All heads for approximately 1205 unfair coins (at 75% propensity for heads) will be as unlikely as all heads for 500 fair coins. Next, I never said the coins were tossed randomly. I said they were observed to be in the all-heads state. This could mean, for example, that you open a box and find all the coins in the all-heads state. The issue of fair tosses was mentioned only for considering whether the configuration of all heads of fair (or even slightly unfair) coins is consistent with random tosses. I never said the coins were actually tossed. The original statement and post where all this began was Siding with Mathgrrl, where I said:
consider if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument that says if coins are all heads they are sufficiently inconsistent with the Binomial Distribution for randomly tossed coins, hence we can reject the chance hypothesis.
I never said the coins were randomly tossed. I only said we can compare the configuration of the coins against the hypothesis that they were randomly tossed. Given these considerations, and given that we know humans are capable of making all coins heads, a reasonable (but not absolute) inference is that the configuration arrived by design. And finally, severely biased coins are considered rare: You can load dice, you can't bias coins
scordova, June 24, 2013 at 3:13 PM PDT
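Sal's arithmetic here is easy to check. Below is a minimal Python sketch (mine, not part of the original exchange) that reproduces the three figures he cites: the all-heads probability for a fair coin, the same for a 75%-biased coin, and the roughly 1205 biased coins whose all-heads probability matches 500 fair coins.

```python
from math import log

def p_all_heads(n, p):
    """Probability of n heads in n tosses with per-toss heads probability p."""
    return p ** n

print(p_all_heads(500, 0.5))   # ~3.05e-151, rounded to 3.1 x 10^-151 above
print(p_all_heads(500, 0.75))  # ~3.39e-63, matching the biased-coin figure

# Solve 0.75^n = 0.5^500 for n: the number of 75%-biased coins whose
# all-heads outcome is as unlikely as 500 fair coins all heads.
n = 500 * log(0.5) / log(0.75)
print(n)  # ~1204.7, i.e. approximately 1205 coins
```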
Incidentally, the mistake Neil Rickert makes is extremely common among anti-design people. Sometimes it is not articulated in a mathematical sense, but in a more everyday-life-experiences sense. I remember listening to an evolution/design debate on a talk show, and the anti-design person argued that improbable things happen all the time by saying, in essence, "What are the odds that you and I would be here together the same day on the same show at the same time? If anyone had asked either of us a year ago we would have said it was extremely unlikely, and yet here we are. Improbable things happen all the time." There are several problems with this kind of thinking, but one that perhaps doesn't get enough play is the intervention of the intelligent agent, so I'll highlight that here. Specifically, there were many decision points that were crossed toward making that particular talk show happen. And at each step of the way, it became more and more probable that it would occur. For instance, once the invitations had been sent and accepted, and once the date had been selected and the time slot determined, there was a very high likelihood that it would take place. Then once the planes had been caught, the taxis grabbed, and the individuals had shown up at the studio's address, it was practically a certainty that the show would take place. So the answer to the anti-design person's cute question "What are the odds we would both be here today on this show?" is: "Given the preparations, the planning, and the decisions made by the parties involved, the odds were near certain. Now, having dispensed with your ridiculous example, tell us again why you think it is likely that a bunch of amino acids would happen to bump into each other and form life?"
Eric Anderson, June 24, 2013 at 3:00 PM PDT
MF: All I will say for the moment is that if you were to drop 30 - 100 darts in the case envisioned, it is reasonably certain that the one-sigma bands will pick up a proportion of hits that is linked to relative area. The tails, being small, will tend to be hit less often, and if our far tails are involved, we are unlikely to see any hits at all. But the bulk will pick up most of the hits. Now, you can pick an arbitrarily narrow stripe near the peak, and it will show the same pattern: being a low proportion of the area, it is less likely to be hit. That simply underscores the point that such special zones are unlikely to be found on a reasonably limited blind search, which is one of the points I was highlighting. You do understand the first point, on trying to blindly catch needles in haystacks with limited searches. Now, the further point you tried to divert attention from is not strictly central to where I am going, but let's note it. The far tails of a bell curve are natural examples of narrow zones T in a much larger distribution of possibilities W. Now that the first hurdle is behind us, look next at relevant cases where W = 2^500 to 2^1,000 or more. The search capacity of the solar system's 10^57 atoms, acting for a plausible lifespan ~ 10^17 s, could not sample more than one straw-sized pluck from a cubical haystack 1,000 light years on a side -- about as thick as our galaxy. Since stars are on average several LY apart in our neighbourhood, if such a stack were superposed on our galaxy, such a sample -- and we have just one shot -- will all but certainly pick straw. At 1,000 bits worth of configs, the conceptual haystack would swallow up the observable cosmos worse than a haystack swallows up a needle. In short, with all but complete certainty, when we have config spaces at least that big, cosmic-scale search resources are going to be vastly inadequate to find anything but the bulk: configs in no particular pattern, much less a linguistically or computationally relevant one. Where also functional specificity and complexity get us into needing very tightly specified, atypical configs. And since, as AutoCAD shows us, 3-d machines and systems can be represented by strings, an analysis on strings is WLOG. KF PS: The simplest case of fluctuations I can think of for the moment is how, for small particles in a fluid, we see Brownian motion, but as size goes up, impacts on the various sides average off and the effect vanishes. Likewise, it is abstractly possible that the molecules of oxygen in the room you sit in would spontaneously rush off to one end and leave you gasping; but it can be shown that we are unlikely to observe this once in the lifespan of the observed cosmos. And yet such is a valid distribution. Its statistical weight is so overwhelmed by the scattered-at-random ones -- the overwhelming bulk -- that it is maximally improbable and practically unobservable. There is a difference between abstract possibility and empirical observability without deliberate intervention to set up simply describable but extremely atypical configs of the space of possibilities W.
kairosfocus, June 24, 2013 at 2:58 PM PDT
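The scale comparison KF gestures at can be made concrete. The sketch below is mine: the atom count and timespan come from his comment, while the per-atom sampling rate of 10^14 events per second is an assumption I am adding purely for illustration.

```python
# Rough "needle in a haystack" scale comparison.
atoms = 1e57      # atoms in the solar system (figure from the comment)
seconds = 1e17    # plausible lifespan in seconds (figure from the comment)
rate = 1e14       # ASSUMED samples per atom per second, illustration only

samples = atoms * seconds * rate   # ~1e88 total samples
configs = 2.0 ** 500               # ~3.27e150 possible configurations

print(samples / configs)  # ~3e-63: the searchable fraction of the space
```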
groovamos @35: Well said. A similar point has been made many times, to those willing to listen, but I like the way you articulated it. I'm going to shamelessly steal your thinking.
Eric Anderson, June 24, 2013 at 2:49 PM PDT
Chance Ratcliff: Thank you for your #45. It's really good to read intelligent and reasonable words once in a while! :)
gpuccio, June 24, 2013 at 2:41 PM PDT
Jerad: "Where is my mathematical fallacy?" I have explained it very clearly in my #41.
Sigh. As I've said several times already . . . if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not, then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred from such a single outcome. What you really should be asking is: what if it happened two times in a row? Or 3 out of 5 times?
It seems that you really don't understand statistics and probability. If an outcome happens 2, 3 or 100 times in a row, that simply gives another calculation of the probabilities of the whole series, considered as a single outcome. IOWs, the probability of having 3 heads in ten tosses is rather high. The probability of having 500 heads in a row is laughable, and it is in reality the global probability of having the same result in 500 events, where the probability of that outcome for each event is 0.5. So, as you can see, your observations about something "happening two times in a row" are completely pointless. The probability of having 500 heads in a row is so low that it certainly is much less acceptable than the probability of having less rare, empirically possible events two or three times in a row. You must always consider the total probability of the event or set of events you are analyzing.

"Your comment about whether I would wonder if Shakespeare ever existed is pretty insulting really."

Don't worry. The comment about Bayesian arguments for Shakespeare's existence was rather meant for Mark. Maybe he can feel insulted instead of you, although I hope not. The phrase for you was rather: "will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?" Are you too insulted, or can you try to answer?

"Gee thanks."

So, why don't you answer?

"No, it is not impossible. It IS very highly improbable."

That's why I said "empirically" impossible, and not "logically" impossible. I am happy you agree with me on that, although maybe you didn't realize it.

"Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them."

Yes, I have. You can find them in my #41. And by the way, a "selection process" is not a "random process", as usually even darwinists can understand.

"Then please be very specific and state your claims cogently. And, if I've made a mathematical error then please find it."

Ehm, I see that you have already read my #41, and maybe not understood it. Must I say the same things again? OK, I will do it. The probabilities that should be compared are not the probability of having 500 heads and of having a single specific random sequence. We must compare the probability of having an outcome from a subset of two sequences (500 heads or 500 tails), or if you prefer from any well specified and recognizable ordered subset, rather than from the vast subset of random, non-ordered sequences, which comprise almost all the sequences in the search space. Please, read carefully my example about gas mechanics, and maybe you will understand.

"That is a clever but incorrect restatement of my views."

Ah, you read that too. While I can maybe accept that it is "clever", I cannot in any reasonable way conceive why it should be a "restatement of your views". And "incorrect", just to add! Will you clarify that point, or is it destined to remain a mystery forever, like many other statements of yours?

"I'm tired of being misinterpreted and having words put in my mouth. Find something wrong with what I've said, be specific please."

I really don't know how I could be more specific than this. I have been specific almost to the point of discourtesy. What can I do more? Just to begin, why don't you answer the question that was, definitely, meant for you?
"Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length?"gpuccio
gpuccio, June 24, 2013 at 2:38 PM PDT
groovamos' comment @35 echoes my thinking on this. A similar subject was brought up a few months ago by Phinehas, and I replied in kind. To say that any outcome is equiprobable, and hence just as unlikely as any other, is to tacitly define an event in the sample space that is equal to the sample space: S = {a1, a2, ..., an}, E = S, hence P(E) = 1. With regard to coin tosses, a specification in this sense would be an E for which 0 < P(E) < 1, and it forces a partition onto the sample space, such that S is equal to the union of E and not-E. Specifying an outcome of all heads defines a specific sequence in the sample space. For 500 tosses, this sequence has a probability of P(E) = 2^-500, and P(~E) = 1 - P(E). There is no equiprobability with this partition, and we should never expect to see E occur. As gpuccio points out, this is empirical. The sequence is not logically impossible, and this was never at issue. We can be near-absolutely certain that, for any sequence of 500 coin tosses, there has never been one that came up all heads, since the first coin was tossed by the first monetarily-aware person. The implication for a sample space of 500 bits is that any sequence that one can specify by any means whatsoever has likely never occurred at random, nor will it likely ever occur. Ever.
Chance Ratcliff, June 24, 2013 at 2:35 PM PDT
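Chance Ratcliff's partition is trivial to compute exactly. A minimal Python sketch (mine, using exact rational arithmetic) of the E versus ~E split he describes:

```python
from fractions import Fraction

total = 2 ** 500                 # size of the sample space for 500 tosses
P_E = Fraction(1, total)         # E: the single all-heads sequence
P_notE = 1 - P_E                 # ~E: every other sequence

print(float(P_E))                # ~3.05e-151
print(P_notE == Fraction(total - 1, total))  # True: the partition is wildly lopsided
```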
Mark: I simply want to do an easy and correct calculation which can be the basis for a sound empirical inference, like I do every day in my medical practice. According to your position, the whole empirical knowledge of the last decades should be discarded. Moreover, I do deny that you can involve a calculation of "priors" where worldviews are concerned. Probability can never say, either in a Fisherian or a Bayesian way, whether it is reasonable to accept the idea that consciousness can exist in other, non-physical forms, or whether a materialist reductionist point of view is better. Such choices are the fruit of a global commitment of one's cognition, feeling, intuition and free will. By the way, have you answered my explicit questions in #40 and #41? Regarding your objections to the Fisherian method in dFSCI, I think I have already commented, but I will do it in more detail here.

"No justification for one rejection region over another. Clearly illustrated when you justify one-tail as opposed to 2-tail testing but actually applies more generally."

As I have said, here we have not a normal distribution of a continuous variable. We just have a simple ratio of two discrete sets. The problem is very simple, and I don't see how the "tail" question applies.

"No justification for any particular significance level. Why 95% or 99% or 99.9%?"

The only justification is that it is appropriate for the inference you have to make. When I proposed 150 bits as a dFSCI threshold for a biological system, I considered about 120 bits for the maximal probabilistic resources in the planet-earth system in 5 billion years. I added 30 bits to get to my proposed threshold of 150 bits. That would be an alpha level of 9.31323E-10. Such a value would be considered absolutely safe in any context, including all the inferences that are routinely made in the darwinian field about homologies. Do you suggest that it is not enough? Do you think that there are levels of probability, whatever the context, Fisherian or Bayesian, that give us absolute knowledge of truth? That would be a strange concept.

"No proof that the same significance level represents the same level of evidence in any two situations – so there is no reason to suppose that 95% significance is a higher level of evidence than 90% significance in two different situations."

I never consider a p value merely under 0.05 as evidence of anything. I am not stupid. But I can assure you that, when I get a p value of 9.31323E-10, or even lower, I am absolutely sure, empirically, that what I am observing is real. R, the statistical software that I routinely use, does not even report p values under 2.2e-16, probably because at that level it's completely pointless to have a definite numeric value: you are already absolutely safe in rejecting the null hypothesis whatever the context.
gpuccio, June 24, 2013 at 2:11 PM PDT
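gpuccio's 150-bit threshold arithmetic is straightforward to reproduce. A one-liner sketch (mine), with his 120-bit resource figure taken as given:

```python
resources_bits = 120   # gpuccio's estimate of earth's probabilistic resources
margin_bits = 30       # the safety margin he adds on top
threshold = resources_bits + margin_bits   # his 150-bit dFSCI threshold

alpha = 2.0 ** -margin_bits
print(threshold, alpha)  # 150 9.313225746154785e-10, the 9.31323E-10 he quotes
```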
Neil and Jerad have stated the absurd, repeating an infamous fallacy that has been very popular in the worst darwinist propaganda.
Where is my mathematical fallacy?
Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed?
Sigh. As I've said several times already . . . if some specific specified sequence is randomly generated on the first trial then I would be very, very, very careful to check and see if there was any kind of bias in the system. And, if I was very, very, very sure there was not, then I would say such a result was a fluke, a lucky result. There is no reason that design should be inferred from such a single outcome. What you really should be asking is: what if it happened two times in a row? Or 3 out of 5 times? Your comment about whether I would wonder if Shakespeare ever existed is pretty insulting really. As is Barry's Tourette's dig.
By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies.
Gee thanks.
e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500-bit search space, that's empirically impossible.
No, it is not impossible. It IS very highly improbable.
f) This is the correct reasoning for why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept.
Getting 500 heads is just as likely from a purely random selection process as is any other sequence of 500 Hs and Ts. If you have any mathematical arguments against that then please provide them.
IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes.
Then please be very specific and state your claims cogently. And, if I've made a mathematical error then please find it.
If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space!
That is a clever but incorrect restatement of my views. I'm tired of being misinterpreted and having words put in my mouth. Find something wrong with what I've said, be specific please.
Jerad, June 24, 2013 at 2:03 PM PDT
Gpuccio, unfortunately you don't address the problems with Fisherian inference - you just declare that they are irrelevant (including the major objection that it answers the wrong question). Meanwhile you seem to be content to dismiss Bayesian inference on the grounds that it is hard to do the sums (even though it answers the right question). Do you want to do an easy calculation to answer the wrong question or a hard calculation to answer the right question?
Mark Frank, June 24, 2013 at 1:43 PM PDT
To all: A few comments to try to clarify this important point. First of all, my compliments to groovamos (#35), who has very correctly stated the fundamental point. I would only add, for clarity, the following: a) Any individual sequence of 500 coin tosses obviously has the same probability of being the outcome of a single experiment of coin tossing. A very, very low probability. I hope we all agree on that. b) As groovamos very correctly states, the probability that some sequence, one of the 2^500 possible ones, will be the outcome of a single experiment is very easy to compute: it is 1 (necessity). c) The problem here is that, among the 2^500 sequences, there are specific subsets that have some recognizable formal property. The subset "sequences where only one value is obtained 500 times" is made of two sequences: 500 heads and 500 tails. d) While there are certainly many other "subsets" more or less ordered or recognizable, the vast majority, practically the totality, of the 2^500 sequences will be of the random kind, with no special recognizable order. e) So, for those who understand probability, the only rational question that applies here is: how likely is it to have an outcome from the extremely small subset of two sequences with only one value, or even from some of the other highly ordered subsets in the search space? The answer is very simple: with a 500-bit search space, that's empirically impossible. f) This is the correct reasoning for why a sequence of 500 heads is totally unexpected, while a random sequence is completely expected. Maybe Neil and Jerad would like to comment on this simple concept. IOWs, we are not comparing the probability of single outcomes, but the probability of different subsets of outcomes. If we reasoned like Neil and Jerad, we would not at all be surprised by any strange behaviour of a natural gas, such as it filling only one half of the available space! By the way, Mark, the fallacy so well outlined by groovamos is also the fallacy that, certainly in good faith, but with not so good statistical and methodological clarity, you tried on me at the time of the famous dFSCI challenge. You may remember your "argument" about the random sequence that pointed to a set of papers in a database, a set defined by the numbers randomly obtained. As I hope you can see, the probability of getting a sequence, say, of 5 numbers from 1 to 1000 pointing to 5 items in a database where the items are numbered from 1 to 1000 is exactly 1. So, you may be clever in statistics, but being clever does not save us from error when a cognitive bias is our strong motivator.
gpuccio, June 24, 2013 at 12:14 PM PDT
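The subset comparison gpuccio describes in points c) through e) can be made numerically explicit. A small Python sketch (mine) contrasting his two-member ordered subset with the rest of the space:

```python
from fractions import Fraction

total = 2 ** 500               # all possible 500-toss sequences
ordered = Fraction(2, total)   # the subset {all heads, all tails}
everything_else = 1 - ordered  # the "random kind", nearly the whole space

print(float(ordered))          # ~6.1e-151: an outcome here is empirically unexpected
print(float(everything_else))  # 1.0 to machine precision: a generic outcome is near certain
```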
Mark (#24): Please, compare this statement of yours: "However, the conceptual problems are rather severe and they become very relevant when you are trying to tackle more philosophical subjects like ID." with this other one: "The cost is – they can be hard to calculate and sometimes (not always) they require subjective estimates of the priors." That's exactly your problem when you use such Bayesian arguments to refute ID. In what you declare to be a "philosophical subject" (and I don't agree!), you propose to replace a method which is simple and vastly used in all empirical sciences with a method that requires "subjective estimates of the priors". That seems folly to me. Look at my treatment of dFSCI. It's simple, it's Fisherian, it's valid. You cannot accept it because of your priors, and so you shift to Bayesian objections. There is nothing good in that. Look at the absurd position of Neil and Jerad: they deny what is empirically evident, through a philosophical misunderstanding of probability. If these are the results of being Bayesian, I am very happy that I am a Fisherian. Your objections to the Fisherian method have really no relevance to a correctly argued Fisherian test in a real empirical context, such as the problem of protein information. In my dFSCI procedure, I compute the probabilistic resources of a system to reject the null that some specific amount of functional protein information could arise by chance in that system. Once the probabilistic resources are taken into account, it's enough to add enough bits to reach an extremely low alpha level (certainly not 0.05 or 0.01!) to be empirically sure that such an amount of functional protein information could not arise by chance in that system. There is nothing philosophical in that. Here we are dealing with definite discrete states (the protein sequences). The probability of reaching a specific subset is well defined by the ratio of the subset to the search space. Your objections do not apply. Neil and Jerad have stated the absurd, repeating an infamous fallacy that has been very popular in the worst darwinist propaganda. Just a simple question: if you get a binary sequence that, in ascii interpretation, is the exact text of Hamlet, and you are told that the sequence arose as a random result of fair coin tossing, will you simply accept that observation as a perfectly expected result, given that its probability is the same as the probability of any other sequence of the same length? Or will you recur to Bayesian arguments to evaluate the probability that Shakespeare ever existed? Just to know. By the way, Neil and Jerad are cordially invited to express their opinion too, illuminating us a little bit more about our logical fallacies.
gpuccio, June 24, 2013 at 11:45 AM PDT
Barry:
Like the person suffering from Tourette’s they just don’t seem to be able to help themselves.
Just to pick up a point you might be interested in: we have good evidence that, far from people with Tourette's being unable to help themselves, they do such a fantastic job of learning to control their tics that they perform better than the rest of us at tasks that involve suppressing instinctive responses, e.g. on the Stroop task, or on an anti-saccade task (where you have to look in the opposite direction to a visual cue). See "Compensatory Neural Reorganization in Tourette Syndrome". Neuroscience isn't all bunk :)
Elizabeth B Liddle, June 24, 2013 at 10:53 AM PDT
I think what KF is saying, Mark, is that the nearer a class of pattern is to the tails of a distribution, the less likely we are to draw one at random, and so if we do find one, it demands an explanation in a way that finding a pattern from the middle of the distribution would not. This means that if we only have a few trials, we are very unlikely to sample from the tails, and that if something is so unlikely as to require 2^500 trials to have any decent chance of finding it, then we aren't going to find it by blind search before we exhaust the number of possible trials in the universe. The more familiar way of saying the same thing would be to say that if your random sample has a mean and distribution that is very different from the mean and distribution of the population you postulated under your null, you can reject the null that your sample was randomly drawn from that population. So if we find a sample of functional sequences out of a vast population of sequences, the overwhelming majority of which are non-functional, we can reject the null that it is a random sample from that population.
Elizabeth B Liddle, June 24, 2013 at 9:19 AM PDT
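One way to cash out Lizzie's null-rejection logic numerically is a one-sided binomial tail test. The sketch below is mine, using an arbitrary 50-toss example rather than anything from the thread:

```python
from math import comb

def upper_tail(n, k):
    """P(at least k heads in n tosses of a fair coin)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 40 or more heads in 50 tosses is already so unlikely under the
# fair-coin null that the null would be rejected at any common alpha.
print(upper_tail(50, 40))  # ~1.19e-5
```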
KF re 36. I am quite confused by the point you are making but I will try my best.
Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics],
Sorry no - I am struggling to understand the point you are making.
and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor,
Yes - no problem with that.
thus also why the far tails would be unlikely to be captured in relatively small samples?
No less likely than any other equally small area on the chart e.g. a very thin strip in the middle.
(Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle in haystack effect?
I struggle to make head or tail of this sentence :-)
You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.)
Well no - because I am not sure what it is you are highlighting. Maybe you could write out your argument as a series of short simple sentences with no jargon and no abbreviations? That would really help me understand your point.
Mark Frank, June 24, 2013 at 8:53 AM PDT
MF: Do you hear the point I have made by citing Fisher on what we would call fluctuations in stat mech [we are here close to the basis for the second law of thermodynamics], and do you see the reason why the darts would dot themselves in proportion to the areas of the strips on the chart on the floor, thus also why the far tails would be unlikely to be captured in relatively small samples? (Do you see why I point out that far tails are natural zones of interest and low probability, in a context of partitioning a space of possibilities in ways that bring out the needle-in-haystack effect? You will notice that from the beginning that is what I highlighted [also, it is what the clip from Fisher points to], and that the side-debate you have provoked is at best tangential.) KF
kairosfocus, June 24, 2013 at 8:23 AM PDT
Neil Rickert: "Flip a coin 500 times. Write down the exact sequence that you got. We can say of that sequence, that it had a probability of (1/2)^500. It is a sequence that we would not expect to see even once. Yet we saw it. This is a common fallacy about probabilistic thinking. You are making one particular sequence as especially improbable, when all sequences are equally improbable. And since what you wrote down came from an actual sequence, you can see that highly improbable things can happen. Although it is highly improbable for any particular person to win the lottery, we regularly see people winning." Why the above is *not* meaningful: a coin toss of 500 trials will select from an outcome set of (.5)^-500, i.e. 2^500, members. The probability that a member of the set is selected is 1.0. What you are really saying (even though the words can be construed otherwise) is a masquerade of what is needed, by saying ANY PARTICULAR member of the set being selected is unexpected, or has a probability of (.5)^500. If you remove the word PARTICULAR from the previous, then the quirky English language we use prods (but does not force) us toward a drastically different interpretation, the one that has any meaning for the discussion. Worth repeating: the only interpretation having didactic meaning here for the discussion. And there is no "common fallacy" involved. Your statement then is only a rehashing of the statement "The probability that a member of the set is selected is 1.0." Since this statement contains no new information, it is information-free, or in the context of our discussion, meaningless. BTW Neil: We have had a cooler than normal early June, cool nights, hot in the late afternoon. I tried to get you and Dr. Tour together at Rice U., and we have fabulous hotels in a city clearly emerging on the international scene. What happened?
groovamos, June 24, 2013 at 8:04 AM PDT
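groovamos's distinction between "some member is selected" (probability 1) and "this particular member is selected" (probability (.5)^n) is easy to demonstrate by simulation. A short Python sketch of mine, using 20 flips so the numbers stay printable:

```python
import random

random.seed(1)        # arbitrary seed, so the run is repeatable
target = 'H' * 20     # one PARTICULAR pre-specified sequence
trials = 100_000
hits = 0
for _ in range(trials):
    seq = ''.join(random.choice('HT') for _ in range(20))
    hits += (seq == target)  # every trial yields SOME sequence (prob 1)...

print(hits / trials)  # ...but the pre-specified target essentially never appears
print(2 ** -20)       # ~9.5e-7, the probability of any particular sequence
```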
(It took me so long to write my comment re Bayes that the conversation has moved on to this thread, and I see that finally the Bayes story has emerged! Here is the comment I posted on the other thread:) I don't think I've ever seen a thread generate so much heat with so little actual fundamental disagreement! Almost everyone (including Sal, Eigenstate, Neil, Shallit, Jerad, and Barry) is correct. It's just that massive and inadvertent equivocation is going on regarding the word "probability". The compressibility thing is irrelevant. Where we all agree is that "special" sequences are vastly outnumbered by "non-special" sequences, however we define "special", whether it's the sequence I just generated yesterday in Excel, or highly compressible sequences, or sequences with extreme ratios of H:T, or whatever. It doesn't matter in what way a sequence is "special" as long as it was either deemed special before you started, or is in a clear class of "special" numbers that anyone would agree was cool. The definition of "special" (the Specification) is not the problem. The problem is that "probability" under a frequentist interpretation means something different than under a Bayesian interpretation, and we are sliding from the frequentist interpretation ("how likely is this event?"), which we start with, to a Bayesian interpretation ("what caused this event?"), which is what we want, but without noticing that we are doing so. Under the frequentist interpretation of probability, a probability distribution is simply a normalised frequency distribution - if you toss enough sequences, you can plot the frequency of each sequence, and get a nice histogram which you then normalise by dividing by the total number of observations to generate a "probability distribution". You can also compute it theoretically, but it still just gives you a normalised frequency distribution, albeit a theoretical one. In other words, a frequentist probability distribution, when applied to future events, simply tells you how frequently you can expect to observe that event. It therefore tells you how confident you can be (how probable it is) that the event will happen on your next try. The problem arises when we try to turn frequentist probabilities about future events into a measure of confidence about the cause of a past event. We are asking a frequency probability distribution to do a job it isn't built for. We are trying to turn a normalised frequency, which tells us how much confidence we can have in a future event, given some hypothesis, into a measure of confidence in some hypothesis concerning a past event. These are NOT THE SAME THING. So how do we convert our confidence about whether a future event will occur into a measure of confidence that a past event had a particular cause? To do so, we have to look beyond the reported event itself (the tossing of 500 heads), and include more data. Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal's confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.
So, let's say I set the prior probability that Sal is not honest at something really very low (after all, in my experience, he seems to be a decent guy): let's say, p = .0001. And I put the probability of getting a "special" sequence at something fairly generous – let's say there are 1000 sequences of 500 coin tosses that I would seriously blink at, making the probability of getting one of them 1000/2^500. I'll call the observed sequence of heads S, and the hypothesis that Sal was dishonest, D. From Bayes' theorem we have: P(D|S) = [P(S|D)*P(D)] / [P(S|D)*P(D) + P(S|~D)*P(~D)], where P(D|S) is what we actually want to know, which is the probability of Sal being Dishonest, given the observed Sequence. We can set the probability P(S|D) (i.e. the probability of a Special sequence given the hypothesis that Sal was Dishonest) as 1 (there's a tiny possibility he meant to be Dishonest, but forgot, and tossed honestly by mistake, but we can discount that for simplicity). We have already set the probability of D (Sal being Dishonest) as .0001. So we have: P(D|S) = [1*.0001] / [1*.0001 + 1000/2^500*(1-.0001)], which is, as near as dammit, 1. In other words, despite the very low prior probability of Sal being dishonest, now that we have observed him claiming that he tossed 500 heads with a fair coin, the probability that he was being Dishonest is now a virtual certainty, even though throwing 500 Heads honestly is perfectly possible, entirely consistent with the Laws of Physics, and, indeed, the Laws of Statistics. Because the parameter P(S|~D) (the probability of the Special sequence given not-Dishonesty) is so tiny, any realistic evaluation of P(~D) (the probability that Sal was not Dishonest), however great, is still going to make the second term in the denominator, P(S|~D)*P(~D), negligible, and the denominator always only very slightly larger than the numerator. Only if our confidence in Sal's integrity exceeds 500 bits will we be forced to conclude that the sequence could just as or more easily have been Just One Of Those Crazy Things that occasionally happen when a person tosses 500 fair coins honestly. In other words, the reason we know with near certainty that if we see 500 Heads tossed, the Tosser must have been Dishonest, is simply that Dishonest people are more common (frequent!) than tosses of 500 Heads. It's so obvious, a child can see it, as indeed we all could. It's just that we don't notice the intuitive Bayesian reasoning we do to get there – which involves not only computing the prior probability of 500 Heads under the null of Fair Coin, Fairly Tossed, but also the prior probability of Honest Sal. Both of which we can do using frequentist statistics, because they tell us about the future (hence "prior"). But to get the Posterior (the probability that a past event had one cause rather than another) we need to plug them into Bayes. The possibly unwelcome implication of this, for any inference about past events, is that when we try to estimate our confidence that a particular past event had a particular cause (whether it is a bacterial flagellum or a sequence of coin-tosses), we cannot simply estimate it from the observed frequency distribution of the data. We also need to factor in our degree of confidence in various causal hypotheses.
And that degree of confidence will depend on all kinds of things, including our personal experience, for example, of an unseen Designer altering our lives in apparently meaningful and physical ways (increasing our priors for the existence of Unseen Designers), our confidence in expertise, our confidence in witness reports, our experience of running phylogenetic analyses, or writing evolutionary algorithms. In other words, it's subjective. That doesn't mean it isn't valid, but it does mean that we should be wary (on all sides!) of making overconfident claims based on voodoo statistics in which frequentist predictions are transmogrified into Bayesian inferences without visible priors.
Elizabeth B Liddle, June 24, 2013 at 7:12 AM PDT
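Lizzie's posterior can be computed directly from the numbers she gives. A minimal Python sketch of mine, plugging her stated prior and likelihoods into the formula above:

```python
P_D = 1e-4                       # her prior that Sal was Dishonest
P_S_given_D = 1.0                # P(Special sequence | Dishonest), set to 1
P_S_given_notD = 1000 / 2**500   # 1000 "special" sequences out of 2^500

posterior = (P_S_given_D * P_D) / (
    P_S_given_D * P_D + P_S_given_notD * (1 - P_D))

print(posterior)  # 0.999999... : "as near as dammit, 1"
```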
Better link to Dimitrov e-book: 50 Nobel Laureates and other great scientists who believed in God by Tihomir Dimitrov http://www.nobelists.net/
bornagain77, June 24, 2013 at 7:07 AM PDT
It’s funny how often, when one catches a Darwinist in really painful-to-watch idiocy and calls them on it, their response is something like “me no speaka the English.”
LOL!
Eric Anderson, June 24, 2013 at 7:06 AM PDT
Jerad @ 27. It appears that you have no shame.
Barry Arrington, June 24, 2013 at 7:01 AM PDT
Corrected link: Founders of Modern Science Who Believe in GOD – Tihomir Dimitrov (pg. 222) http://www.academia.edu/2739607/Scientific_GOD_Journal
bornagain77, June 24, 2013 at 6:39 AM PDT
KF 25: Yes thanks - I am familiar with Fisher and NP. I have a diploma in statistics and have had a strong interest in the foundations of hypothesis testing for many years. The article you pointed me to appears to give a nice introduction to both, but I didn't have time to read it all in detail. Before taking this discussion any further, let's check we are both talking about the same thing. I am debating the validity of Fisherian hypothesis testing as opposed to a Bayesian approach. Do you agree that that is the issue and that it is relevant? If not, we should drop it immediately.
Mark Frank, June 24, 2013 at 6:34 AM PDT
Contrary to what Einstein found to be miraculous, Jerad maintains that he should not be surprised at all that he is able to comprehend the universe. But alas, contrary to Jerad's complacency, Jerad's own atheistic/materialistic worldview, whether he wants to admit it or not, results in the epistemological failure of the entire enterprise of modern science, to which he has paid such empty lip service:
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description) http://vimeo.com/32145998 BRUCE GORDON: Hawking's irrational arguments - October 2010 Excerpt: What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ The Absurdity of Inflation, String Theory and The Multiverse - Dr. Bruce Gordon - video http://vimeo.com/34468027
This 'lack of a guarantee' that our perceptions and reasoning in science are trustworthy in the first place even extends into evolutionary naturalism itself:
Scientific Peer Review is in Trouble: From Medical Science to Darwinism - Mike Keas - October 10, 2012 Excerpt: Survival is all that matters on evolutionary naturalism. Our evolving brains are more likely to give us useful fictions that promote survival rather than the truth about reality. Thus evolutionary naturalism undermines all rationality (including confidence in science itself). Renowned philosopher Alvin Plantinga has argued against naturalism in this way (a summary of that argument is linked on the site). Or, if you're short on time and patience to grasp Plantinga's nuanced argument, see if you can digest this thought from evolutionary cognitive psychologist Steve Pinker, who baldly states: "Our brains are shaped for fitness, not for truth; sometimes the truth is adaptive, sometimes it is not." Steven Pinker, evolutionary cognitive psychologist, How the Mind Works (W.W. Norton, 1997), p. 305. http://blogs.christianpost.com/science-and-faith/scientific-peer-review-is-in-trouble-from-medical-science-to-darwinism-12421/ Why No One (Can) Believe Atheism/Naturalism to be True - video Excerpt: "Since we are creatures of natural selection, we cannot totally trust our senses. Evolution only passes on traits that help a species survive, and not concerned with preserving traits that tell a species what is actually true about life." Richard Dawkins - quoted from "The God Delusion" http://www.youtube.com/watch?v=N4QFsKevTXs
The following interview is sadly comical, as an evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth that he is purporting to give in the interview (which begs the question of how he was able to come to that particular truthful realization in the first place, if neo-Darwinian evolution were actually true):
Evolutionary guru: Don't believe everything you think - October 2011 Interviewer: You could be deceiving yourself about that.(?) Evolutionary Psychologist: Absolutely. http://www.newscientist.com/article/mg21128335.300-evolutionary-guru-dont-believe-everything-you-think.html "But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?" - Charles Darwin - Letter To William Graham - July 3, 1881
also of note:
The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a pantheist. Of the former, Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them, failed to make a breakthrough. The latter came in a medieval Christian context and just about within a hundred years from the availability of Aristotle's works in Latin. As we will see below, the breakthrough that began science was a Christian commentary on Aristotle's De Caelo (On the Heavens). Modern experimental science was rendered possible, Jaki has shown, as a result of the Christian philosophical atmosphere of the Middle Ages. Although a talent for science was certainly present in the ancient world (for example in the design and construction of the Egyptian pyramids), nevertheless the philosophical and psychological climate was hostile to a self-sustaining scientific process. Thus science suffered still-births in the cultures of ancient China, India, Egypt and Babylonia. It also failed to come to fruition among the Maya, Incas and Aztecs of the Americas. Even though ancient Greece came closer to achieving a continuous scientific enterprise than any other ancient culture, science was not born there either. Science did not come to birth among the medieval Muslim heirs to Aristotle. The psychological climate of such ancient cultures, with their belief that the universe was infinite and time an endless repetition of historical cycles, was often either hopelessness or complacency (hardly what is needed to spur and sustain scientific progress); and in either case there was a failure to arrive at a belief in the existence of God the Creator and of creation itself as therefore rational and intelligible. Thus their inability to produce a self-sustaining scientific enterprise. If science suffered only stillbirths in ancient cultures, how did it come to its unique viable birth? The beginning of science as a fully fledged enterprise took place in relation to two important definitions of the Magisterium of the Church. The first was the definition at the Fourth Lateran Council in the year 1215, that the universe was created out of nothing at the beginning of time. The second magisterial statement was at the local level, enunciated by Bishop Stephen Tempier of Paris who, on March 7, 1277, condemned 219 Aristotelian propositions, so outlawing the deterministic and necessitarian views of creation. These statements of the teaching authority of the Church expressed an atmosphere in which faith in God had penetrated the medieval culture and given rise to philosophical consequences. The cosmos was seen as contingent in its existence and thus dependent on a divine choice which called it into being; the universe is also contingent in its nature and so God was free to create this particular form of world among an infinity of other possibilities. Thus the cosmos cannot be a necessary form of existence; and so it has to be approached by a posteriori investigation. The universe is also rational and so a coherent discourse can be made about it.
Indeed the contingency and rationality of the cosmos are like two pillars supporting the Christian vision of the cosmos. http://www.columbia.edu/cu/augustine/a/science_origin.html Founders of Modern Science Who Believe in GOD - Tihomir Dimitrov http://www.scigod.com/index.php/sgj/article/viewFile/18/18
bornagain77, June 24, 2013 at 6:32 AM PDT