Uncommon Descent | Serving The Intelligent Design Community

Darwinism: Why its failed predictions don’t matter

Categories: Culture, Darwinism, Intelligent Design, Science

From Wayne Rossiter, author of Shadow of Oz: Theistic Evolution and the Absent God, at his book blog:

It's an odd pattern. It was this problem that came to mind as I recently revisited Living with Darwin: Evolution, Design and the Future of Faith, by Philip Kitcher. Kitcher is a philosopher at Columbia University who specializes in the philosophy of biology. His book was published by Oxford University Press and was the recipient of the 2008 Lannan Notable Book Award. We should take his views seriously.

His book begins with a forceful assertion: “From the perspective of almost the entire community of natural scientists world-wide, this continued resistance to Darwin is absurd. Biologists confidently proclaim that Darwin’s theory of evolution by natural selection is as well established as any of the major theories in contemporary science.”

This is not really a prediction. But it is a statement that was wrong even before it was penned. More.

People who are committed to intellectual integrity in their own work often miss this central point: once a bully pulpit like Darwinism has been established, the occupant does not need to be correct, accurate, or even useful. He can be a drag on the system. He can lead the march into degenerate science. He can, incidentally, fix you good if you try to offer an alternative view, however well grounded.

Bullies are not dislodged by being shown to be wrong, only by being successfully opposed. Efforts so far have been commendable, but quite honestly, more is needed.

See also: Biologist Wayne Rossiter on Joshua Swamidass’ claim that entropy = information

Follow UD News at Twitter!

Comments
Jdk: So the interesting issue here is to understand better why C has significance to us, to use the word I am suggesting, and A doesn't. I think the key issue here is, as you point out, that human beings are good at and inclined towards pattern recognition. When we recognize a pattern, we then attach a significance to that throw that we don't attach to a throw such as A above, which has no clear pattern at all. Pattern recognition is a very important part of our cognitive ability to understand the world, and one of the tools we use to build a base of knowledge about the world. ... This is the "human" part of the situation that goes beyond the pure probability: the part that adds the "dual" aspect you speak of.
I would like to note that something can have significance because we understand that it requires knowledge. A sequence of prime numbers is an obvious example and my sequence in post #194 is another.
THE UNIVERSAL DESIGN INTUITION Tasks that we would need knowledge to accomplish can be accomplished only by someone who has that knowledge. In other words, whenever we think we would be unable to achieve a particular useful result without first learning how, we judge that result to be unattainable by accident. ... I use the term universal design intuition—or simply design intuition—to refer to this common human faculty by which we intuit design. ... I intend to show that the universal design intuition is reliable when properly used ... The design intuition is utterly simple. Can you make an omelet? Can you button a shirt? Can you wrap a present? Can you put sheets on a bed? Tasks like these are so ordinary that we give them little thought, and yet we weren’t born with the ability to do them. Most of the training we received occurred so early in life that we may struggle to recall it, but we have only to look at a young person still in the training years to be reminded that all of us had to be taught. Whether we taught ourselves these skills or were taught by others, the point is that knowledge had to be acquired in the form of practical know-how. Everyday experience consistently shows us that even simple tasks like these never accomplish themselves. If no one makes breakfast, then breakfast goes unmade. Likewise for cleaning up after breakfast, for making the bed, and so on. [Douglas Axe, ‘Undeniable’, Ch.2]
Origenes
May 30, 2017, 1:56 AM PDT
PaV: What Perakh has done, perhaps unwittingly, is to say that any group of ten rolls of a die ALL have a probability of 1 in 10^6. But that’s not so. It’s only so when the “mind” decides to “group together” individual events into a “dependent” whole. … So, contra Perakh, what the “mind” does, or doesn’t do, is essential: and is not simply a “psychological” reaction. It is knowledge at work using information that the “mind” has acquired over the lifetime of the individual.
I would like to note that in life we are confronted with real wholes which cannot be arbitrarily cut up and turned into newly arranged wholes at will. Also with numbers we can suddenly recognize a pattern and infer design as the best explanation:
2, 1, 3, 4, 2, 6, 5, 1, 6, 2, 3, 5, 3, 1, 4, 1, 1, 2, 4, 2, 6
Is this sequence best explained by design or the throw of a die?Origenes
May 30, 2017, 1:28 AM PDT
jdk: You'd make a lousy evolutionary biologist if you think the probabilities are linked and not added. That's all we get from them. And that includes Perakh, who wants to divide the sequence of 100 4's into 10 sequences of 10 4's.PaV
May 30, 2017, 1:26 AM PDT
Hi PaV. This is really long, but I have been comprehensive about some things, and tried to illustrate with examples to make everything really clear. I have no idea if anyone will really read all this with the intent of digesting it, but I'm interested, so I did it anyway! :-) Way back at 148, you wrote,
jdk, [you wrote],
I think what is confusing is that there is a difference between the event, the specification, and the significance of the event. The significance lies outside the realm of the probability situation itself, but rather in some broader context.
I think that you’re onto something here. This is, in a way, what we’re wrangling about. But, my hunch is that there is something deeper lurking here. For example, all of this discussion is bringing me to the point of view that probability acts like a ‘dual space,’ or, at the very least, there’s some kind of duality associated with it. And that this duality, then, can give rise to misunderstandings. Hopefully some clarity will emerge.
I agree that this part of the subject is worth exploring. Our human interpretation of events adds a layer of significance and meaning that goes beyond the purely theoretical probability considerations. It is also closely associated with the idea of specifications, at least from one point of view. This is the subject Perakh was addressing in the section you quote, which he titled Psychological Aspects of Probability. I'm willing to use his example, but I'd like to use my formulation about there being some significance to us, from a broader context than just the pure probability. I think this approach is consistent with your use of the idea of "duality": that we compare the probabilistic event with other things we know about the world to determine the significance of an event. I'm not very interested in responding to Perakh's thoughts on the matter, though: I would rather make my own points, and hear yours and others here, than worry much about exactly what Perakh said.

Let me first be a little formal here: The situation is that we roll a fair die 10 times, and record the numbers rolled in order. There are thus 6^10 (60,466,176) possible events that can occur. Call each of these events a "throw." The sample space is this entire set of possible throws. Call the sample space S. The number of elements in the sample space is 60,466,176. Call this number n. All the events are equiprobable, so the probability of any particular throw is 1/n.

So consider the following two throws, per Perakh: A = (3, 5, 6, 2, 6, 5, 6, 4, 1, and 1) and C = (4, 4, 4, 4, 4, 4, 4, 4, 4, and 4). It is a true fact that both of these have a probability of 1/n. From the point of view of pure probability, these are equivalent events: C is just one of n possible events. However, obviously, we would respond to C very differently than A. To A our response would be, "Well that's one of the many results I could get - nothing special here." But to C our response would be, "That's hard to believe. I'm inclined to think something else is going on here besides pure chance." (I might actually have stronger feelings than that, and feel pretty sure some kind of cheating has gone on.)

So the interesting issue here is to understand better why C has significance to us, to use the word I am suggesting, and A doesn't. I think the key issue here is, as you point out, that human beings are good at and inclined towards pattern recognition. When we recognize a pattern, we then attach a significance to that throw that we don't attach to a throw such as A above, which has no clear pattern at all. Pattern recognition is a very important part of our cognitive ability to understand the world, and one of the tools we use to build a base of knowledge about the world. (There is a whole subset of child psychology that studies the development of this skill in children.) This is the "human" part of the situation that goes beyond the pure probability: the part that adds the "dual" aspect you speak of.

However, merely seeing a pattern is not enough to make us question chance. As I have mentioned, if the sample space is small, where nothing is improbable, even if we see a pattern, we might very well not question chance. So, here is a fairly extensive analysis of a simpler situation: throwing three dice. Once we understand the issues well, we can expand to the much larger sample space arising from throwing 10 dice.

If I throw three dice, and they come up all 6's, that has a 1/216 chance (not 1/108 as you mention). If we've played enough dice games with three dice, we've seen that happen, and will not be unduly surprised. (And note, the probability of all three dice being the same number, like 1 1 1, 2 2 2, etc., is only 1/36, so that is pretty common.)

I think the main issue here is the question of what proportion of the events exhibit a pattern to which we attach a significance. For instance, we could list all the possible events when we throw three dice: there are only 216 of them. Suppose we decided to mark which of those 216 were significant because of having a pattern. For instance, the six events where all three dice are the same would qualify as having a pattern. What about 1 2 3? This is just as likely as 6 6 6, and it has a pattern. Would we consider it as significant? What about 2 4 6? What about 1 6 1? What about 4 2 6 (all even numbers)? I have two points to make about these examples:

1. There is not a clearcut dividing line between throws that exhibit a pattern and those that don't. We might even be tempted to devise a "significance scale", with 5 = very significant pattern (6 6 6), 3 = somewhat significant pattern (1 6 1, perhaps, or at least 2 4 6) and 1 = no significant pattern (3 5 2).

2. Suppose we did categorize all 216 possibilities, picking some criteria to separate all the significant patterns from those throws lacking a significant pattern. What percent of the sample space is events that are significant? Let the set SP = all the throws which have a significant pattern. Let m = the number of events in SP. The big question is: what is the ratio m/n? What percentage of the possible throws are in SP? That is, when we throw three dice, what is the probability m/n that we will get a throw that exhibits a significant pattern?

My intuitive guess is that the percentage is pretty high. If you count all the throws where the dice are the same, all the straights going forward and backward, and all the "skippy straights" such as 2 4 6, we already have almost 10% of all possibilities as significant. Therefore, when we throw 6 6 6 we are a bit surprised, but not amazed, not solely because the odds are 1/216, but because we have a pretty high probability of throwing some significant throw. For instance, if m/n = 10% (which includes just the throws listed above), the odds of throwing a significant throw vs throwing a non-significant throw is 1 to 9, and those are reasonable odds. The 10% is really what is important, not the 1/216 ratio, if all we are trying to explain is getting a throw with a significant pattern as opposed to getting some specific, particular pattern.

Two more notes to add to this explanation. Since there is no clearcut distinction between significant and non-significant throws, we might, as I suggested above, apply a scale from 5 to 1. This is then a place where we could calculate an expected significance value: take 5 times the percentage of throws that are labeled 5's, plus 4 times the percentage of throws that are labeled 4's, etc. Then we would have an expected significance value EV for throwing three dice. I have very little idea what EV might be: the very first step of assigning a value to each throw would involve some judgments upon which people would disagree. However, if all throws were insignificant (no one saw any patterns at all), EV would equal 1. If half the throws were insignificant, and the other 1/2 average about 3, for instance, EV would equal 2. If every throw was very significant (which isn't the case at all), EV would equal 5.

So we see that the lower EV is, the less likely you will get a significant throw, so the more likely that you will be surprised if you do get one.

Last point: an important one. The dice already contain a pattern built in: the numbers from 1 to 6. Most of the patterns we see arise from the relationship of the numbers themselves: they are ordered, and they group into known sets of evens and odds. Suppose instead we had just six symbols that had no order and could not be categorized into any groups. This would reduce the number of significant hands because there would be fewer patterns to recognize. All three dice with the same symbol, such as # # #, would stay the same, a 5. But there would be nothing to compare to straights or evens in the above example. We might recognize $ % $ as a pattern, but that would be no different than 5 1 5 in the regular dice, and that would be no more than a 2, I think. Therefore, the expected significance value EV for this situation would be lower than that of the regular dice, because there would be fewer significant hands to compare to. That is, the probability of getting & & & would be the same as the probability of 6 6 6, but & & & would possibly surprise us more because the dice with symbols have fewer significant throws.

To summarize, before going on to ten dice: We definitely see patterns, and we intuitively (even if we are not very sophisticated about theoretical probability) make an estimate of the ratio of significant events in comparison to the total number of possible events. If that ratio is relatively large (say 10%) we are unlikely to infer something other than chance no matter what throw we get. And another, more precise way to measure significance would be to assign a significance value to every throw, and then create an expected significance value EV. The higher the EV, the more likely we will get significant hands, and thus the less likely we are to be surprised when we get one.

Now, to wrap this up, let's jump to 10 dice. Without going through any more analysis, I am virtually certain that the ratio of throws that exhibit a pattern in comparison to the total 60 million possible throws would be much, much smaller than with three dice. That is, there would be a much larger percentage of total hands that did not exhibit much, if any, pattern: they would be 1's on our significance scale. Because of this, when we do get a significant hand such as all 4's, or 5 5 5 5 5 6 6 6 6 6, it is a much more unlikely event than getting a non-significant hand.

So to return to the example at the start of this post. A = (3, 5, 6, 2, 6, 5, 6, 4, 1, and 1) and C = (4, 4, 4, 4, 4, 4, 4, 4, 4, and 4) both are equally improbable from a pure probability point of view. But A is a member of a very large number of non-significant throws and C is a member of a much, much smaller set of significant throws. It is this ratio that we are thinking about, intuitively, when we declare that C could not have happened by chance but A did. We are looking for patterns to which we attach significance vs those we don't. If I specified a particular throw 3 1 4 3 1 6 4 5 2 4, I would have a 1 out of 60 million chance of getting that throw. But if I just specified that I was going to throw a non-significant throw, I would have a very good chance of matching that specification. Conversely, therefore, the chances of throwing a significant throw are very small.

It's not the absolute probability that we are interested in when we look for significance, it is the ratio of events that are and aren't significant that is ... well, significant. Way too much typing. I'm done, and shot the whole evening!
jdk
May 29, 2017, 8:42 PM PDT
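As a rough numerical check of the m/n ratio jdk estimates above, here is a minimal Python sketch (an editorial illustration, not part of the original thread). The pattern criteria are assumptions chosen to mirror the throws jdk lists: all faces equal, straights in either direction, and "skippy straights"; with those criteria the patterned fraction comes out near the "almost 10%" he guesses.

```python
from itertools import product

def looks_patterned(throw):
    """Illustrative (assumed) criteria for a 'significant' pattern."""
    a, b, c = throw
    all_same = a == b == c
    straight_up = (b == a + 1 and c == b + 1)      # e.g. (1, 2, 3)
    straight_down = (b == a - 1 and c == b - 1)    # e.g. (6, 5, 4)
    skippy_up = (b == a + 2 and c == b + 2)        # e.g. (2, 4, 6)
    skippy_down = (b == a - 2 and c == b - 2)      # e.g. (6, 4, 2)
    return any([all_same, straight_up, straight_down, skippy_up, skippy_down])

throws = list(product(range(1, 7), repeat=3))      # all 6^3 = 216 ordered throws
m = sum(looks_patterned(t) for t in throws)
n = len(throws)
print(f"m = {m}, n = {n}, m/n = {m / n:.1%}")      # 18 / 216, about 8.3%
# Any single throw still has probability 1/216; what separates a 'surprising'
# outcome from an 'unsurprising' one is the size of the matching set m, not 1/216.
```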
PaV @186 @189 Connecting the dots. Very interesting.Origenes
May 29, 2017, 6:16 PM PDT
PaV, you write,
The basic idea is that when you see a pattern, then the pattern is produced by a series, or sequence, of events, each of which has a probability, which probabilities must be multiplied together.
This is true whether you see a pattern or not. This is just how probability works: when successive independent events occur, you multiply the probabilities. As I said before, there is nothing in our current discussion where adding probabilities applies. You write,
It’s like the example I gave before: if you throw one die, a hundred times in a row, we know the probability of each throw/event is low; if, however, you throw 100 dice ‘all at once, then each throw/event is highly improbable. It’s the probability associated with the first die, plus that of the second, plus that of the third, etc. So, the probabilities are multiplied.
This is true (with the exception noted below). It is exactly the principle we have been using in all our examples. However, as a clarification, since we are considering the order of the throw, we need to either throw a die 100 times successively, or number the 100 dice so we know which is dice 1, which is dice 2, etc. If we don't care about order, we can throw all 100 at the same time. However, we have been talking about order in all our examples, so I think we need to talk about throwing a die many times, one throw at a time, not throwing 100 at the same time. By the way, I'm working on a longer post that may shed some light on our topic. And I must repeat, I am not at all even thinking about how this might apply to "cellular realities" or any other aspect of biology. But I am interested in why we see some throws as significantly improbable, and others not, and that is what I'm working on trying to explain.jdk
May 29, 2017, 5:56 PM PDT
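A small editorial sketch of the multiplication rule for successive independent rolls discussed above: the probability of any one fully specified, ordered sequence of ten rolls of a fair die is (1/6)^10, one chance in the 6^10 = 60,466,176 figure used throughout this thread.

```python
from fractions import Fraction

p_single = Fraction(1, 6)      # probability of one specified face on one fair roll
p_sequence = p_single ** 10    # ten successive independent rolls, order specified
print(p_sequence)              # 1/60466176
print(6 ** 10)                 # 60466176 equally likely ordered sequences
```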
jdk: I'm fully capable of getting backwards: it's a sort dyslexia in remembering things. But that said, the basic idea is that when you see a pattern, then the pattern is produced by a series, or sequence, of events, each of which has a probability, which probabilities must be multiplied together. The basic way of "climbing Mt. Improbable" is to take 100 steps, with each step having nothing to do with the other steps. Or, you might say, one mutation at a time. The whole idea of irreducible complexity is to say that this isn't going to happen; that to arrive at the complexity of proteins, single steps won't get you there. So, it's all about: do you "add," or do "multiply." My insight is that when a "pattern" is glimpsed, then those single steps become fused together. It's an "probabilistic ensemble," to borrow from thermodynamics. It's like the example I gave before: if you throw one die, a hundred times in a row, we know the probability of each throw/event is low; if, however, you throw 100 dice 'all at once, then each throw/event is highly improbable. It's the probability associated with the first die, plus that of the second, plus that of the third, etc. So, the probabilities are multiplied. I think common sense should tell you that when it comes to cellular realities, the improbabilities are way too high. Perakh, and his acolytes, simply tells us to put common sense to the one side. BTW: most of what you posted is available online; but, it is a helpful summary, having all those various elements in one condensed setting.PaV
May 29, 2017, 5:35 PM PDT
Here is a link to a notesheet I used in my Probability and Stats chapter when I taught high school Pre-Calculus. It might make a good reference for anyone interested. Probability Notesheet

It is, I think, a succinct summary of all the concepts we have been using in this discussion, and more. Google will calculate factorials and combinatorial numbers for you: type 5! or 5 choose 2. I don't think it computes permutations, so to get 52 pick 13, you have to use the formula 52!/39! And here it all is, although some of the formatting, such as indents, doesn't translate well to the screen.

Pre-Calculus: Chapter 12 Notes on Counting and Probability

1) Definition of variables: E and F are events. An event is a particular occurrence that can happen in a given situation. n = N(E) is the number of ways that event E can happen. p = P(E) is the probability that event E will happen. S = the sample space: the total set of possible events that can happen in a given situation. Therefore, N(S) is the total number of possible events in a given situation.

2) The AND rule (the multiplication principle): N(E and F) = N(E) • N(F); P(E and F) = P(E) • P(F). To find the total number of ways (or probability) that event E will happen and then event F will happen, you multiply the number of ways (or probabilities) that each will happen.

3) The OR rule (the addition principle): N(E or F) = N(E) + N(F); P(E or F) = P(E) + P(F). To find the total number of ways (or probability) that event E will happen or event F will happen, you add the number of ways (or probabilities) that each will happen. (This applies only if the events are mutually exclusive.)

4) The NOT rule (the complement principle): P(not E) = 1 – P(E). Either an event happens (E) or it doesn't happen (not E). The probability that the event won't happen is 100% minus the probability that E will happen. The event is called not E, and the probability is called the complement of P(E). For example, if the probability of throwing a 5 on a die is 17%, then the probability of not throwing a 5 is 83%.

Counting Principles

5) Counting with Replacement: N(E repeated k times) = n^k. Assume event E can happen in n ways. Assume also that if E is repeated, it can again happen in n ways (this is called with replacement). If event E is repeated k times, then the total number of possible events is n^k.

6) Counting without Replacement: If the number of ways event E can happen is reduced by one (1) each time it repeats, then we say that the event happens without replacement. There are three situations.
a) n Factorial (n!): n! is defined as n (n – 1) (n – 2) • ..... • 1. For example, 3! = 3•2•1 = 6. This is the number of ways that a set of n objects can be arranged in order.
b) Permutations (nPk) (when order counts): nPk = n!/(n – k)! This is read "n Pick k". This is the number of ways that a subset of k objects can be picked from a set of n objects, in order.
c) Combinations (nCk) (when order does not count): nCk = n!/(k!•(n – k)!) This is read "n Choose k". This is the number of ways that a subset of k objects can be picked from a set of n objects, without regard to order. The combinatorial numbers nCk can be summarized in Pascal's Triangle.
nC0 = nCn = 1 (only 1 way to choose none or all)
nC1 = nCn–1 = n (n ways to choose 1, or all but 1)
nCk = nCn–k (choosing some (k) is the same as not choosing the rest (n – k))

7) Definition of Probability: Probability = number of ways the desired event can happen / total number of ways all possible events can happen. P(E) = N(E)/N(S). (S = Sample space: all the possible events.)

Two Types of Complex Probability Situations

8) Binomial Probability: BP(n, k) = nCk • p^k • q^(n–k). Variables: p = probability of a success on one trial; q = probability of a failure on one trial (q = 1 – p); n = number of trials; k = number of successes. Example: A baseball player gets a hit 30% of the time. What is the probability that she will get 3 hits in 5 at-bats? p = 30% chance of getting a hit in one at-bat; q = 70% chance of not getting a hit in one at-bat; n = 5 at-bats; k = 3 hits. P(3 hits) = 5C3 • 30%^3 • 70%^2 = 13%

9) Combinatorial Probability: Assume a set of n objects contains a subset of m objects. Assume k objects are chosen from the set of n objects. What is the probability that all of the objects will be chosen from the subset of m objects? The combinatorial probability formula is CP(n, m, k) = mCk/nCk. Example: A group of 20 people contains 12 women. Three people are chosen at random. What is the probability that they will all be women? P(3 women) = 12C3/20C3 = 220/1140 = 19%

Maybe someone will find all this useful.
jdk
May 29, 2017, 2:33 PM PDT
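For readers who want to check the notesheet, here is a minimal Python sketch (an editorial addition, not part of the handout) implementing rules 8 and 9 and reproducing the two worked examples; the function names are invented for illustration.

```python
from math import comb, perm

def binomial_probability(n, k, p):
    """Notesheet rule 8: probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def combinatorial_probability(n, m, k):
    """Notesheet rule 9: probability that all k objects drawn without replacement
    from a set of n come from a favoured subset of size m."""
    return comb(m, k) / comb(n, k)

# Baseball example from the notesheet: a 30% hitter getting 3 hits in 5 at-bats.
print(f"{binomial_probability(5, 3, 0.30):.0%}")       # about 13%

# Committee example from the notesheet: 3 chosen from 20 people, 12 of whom are women.
print(f"{combinatorial_probability(20, 12, 3):.0%}")   # 220/1140, about 19%

# Ordered deals of 13 cards from 52 ("52 pick 13"), mentioned in the comment above.
print(perm(52, 13))                                    # 52!/39!, roughly 3.95e21
```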
Hi PaV. I appreciate your lengthy post, and have some relevant things to say, I think, which I hope to get to later today. But one sentence jumped out at me:
When a “pattern” is detected, then “independent events” become “dependent events,” and the probabilities are no longer added but MULTIPLIED!
I don't believe this is correct. The basic rule is that if you want the probability of A or B you add the probabilities, but if you want the probability of A and B, as in successive independent events, you multiply the probabilities. The probability of throwing a 1 or a 6 is 1/6 + 1/6 = 1/3. The probability of throwing a 1 and a 6 (either successively, or with two dice which are distinct from each other) is 1/6 • 1/6 = 1/36. These are basic rules for computing probabilities.
jdk
May 29, 2017, 2:08 PM PDT
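A tiny enumeration (editorial illustration) confirming the two figures jdk gives above: probabilities add for mutually exclusive outcomes of a single roll, and multiply for successive independent rolls.

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))      # 36 equally likely ordered pairs

# OR rule: one roll comes up 1 or 6 (mutually exclusive outcomes of a single roll).
p_or = sum(1 for face in range(1, 7) if face in (1, 6)) / 6
print(p_or)                                       # 1/6 + 1/6 = 1/3

# AND rule: first roll is 1 and second roll is 6 (independent successive rolls).
p_and = sum(1 for a, b in rolls if a == 1 and b == 6) / len(rolls)
print(p_and)                                      # 1/6 * 1/6 = 1/36
```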
jdk: I looked on my computer to see if I had written anything about Perakh's treatment of ID. And, I had. Looking over it, from a "probabilistic" point of view, I imagine we could easily say that Perakh had made no errors. [I do remember, however, that in his book he criticized how Behe used probabilities, and, on one occasion, did so in a way that contradicted what he had written in an appendix. But, no need to go into that. I haven't the time, nor disposition. And, again, it wasn't that he got his probability theory wrong, but that he was being dishonest (it's also possible that he simply made a mistake)] However, along the lines of what I had said earlier about there being a kind of "duality" when it comes to probabilities (remember, e.g., the cards written in invisible ink, and then the markings appearing. When there are no markings the actual 'event' of dealing the cards tells us nothing; it is only when the markings appear that we can, and do, make distinctions. This points to how the mind is involved in probabilities. We'll run into that in what I'm going to post from my Word file). And it is here that I believe Perakh misses the mark: From Chapter 13, Unintelligent Design
Consider an experiment with a die, where events are sets of ten trials each. We assume an honest die as well as independence of outcomes. If we toss the die once, each of the six possible outcomes has the same chance of happening, 1/6. Assume that in the first trial the outcome was, say, 3. Then we toss the die the second time. It is the same die, tossed in the same way, with the same six equally probable outcomes. To get an outcome of 3 is as probable as any of the five other outcomes. The tests are independent, so the outcome of each subsequent trial does not depend on the outcomes of any of the preceding trials. Now toss the die in sets of ten trials each. Assume that the first event is as follows: A (3, 5, 6, 2, 6, 5, 6, 4, 1, and 1). We are not surprised in the least since we know that there are 6^10 (that is 60,466,176) possible, equally probable events. Even A is just one of them and does not stand alone in any respect among those over sixty million events, so it could have happened in any set of ten trials as well as any other of those sixty million variations of numbers. Let us assume that in the second set of ten trials the event is B (6, 5, 5, 2, 6, 3, 4, 1, and 6). Again, we have no reason to be surprised by such a result since it is just another of those millions of possible events, and there is no reason for it not to happen. So far the probability theory seems to agree with common sense. Assume now that in the third set of ten trials the event is C (4, 4, 4, 4, 4, 4, 4, 4, 4, and 4---the "all 4s" event described earlier in the chapter). I am confident that in such a case everybody would be amazed, and the immediate explanation of that seemingly "improbable" event would be the suspicion that either the die has been tampered with or that it was tossed using some sleight of hand. While cheating cannot be excluded, this event does not necessarily require such an assumption. Indeed, what was the probability of event A? It was one in over sixty million. Despite the exceedingly small probability of A its occurrence did not surprise anybody. What was the probability of event B? Again, only one in over sixty million but we were not amazed at all. What was the probability of event C? The same one in over sixty million, but this time we are amazed. Why does "all 4s" seem amazing? Only for psychological reasons. It seems easier to assume cheating on the part of the dice-tossing player than the never-before-seen occurrence of all 4s in ten trials. What is not realized is that the overwhelming majority of events other than this one were never seen, either. There are so many possible combinations of ten numbers, composed of six different unique numbers, that each of them occurs extremely rarely. The set of ten identical numbers seems psychologically to be "special" among combinations of different numbers, but for probability theory this set is not special. To view an event as special means abolishing the premise of the probabilistic estimate---the postulate of a fair die. . . . There is no doubt that the viewpoint of probability theory is correct, even in the face of contradictory common sense. Such a human psychological reaction to an improbable event such as ten identical outcomes in a set is as wrong as the suggestion to a pilot of a spacecraft lagging behind to increase her speed if she wishes to overcome a craft ahead of hers in orbit.
I suppose Perakh means to say by the 'spacecraft' analogy, that by "increasing" one's speed, you won't "overcome" the other craft, but will, in fact, simply enter some different orbit. IOW, reality is "counter-intuitive." This is, of course, the evolutionist's argument, as though intuition was left behind with the discovery of Copernicus. If this is what Perakh means, it's almost a non sequitur.

Again, I hearken back to the notion of "duality." That notion points out that the human mind is very much a part of probability. And it is a part of probability when the ideas of probability begin to give us "knowledge," or "information." Here, Perakh uses the word "psychological." His use of this word betrays his deliberate attempt to sidestep the issue of "information." The reason that 10 4's in a row catches our eyes is that we humans, familiar with the game of dice, "know" the improbability of such an ACTUAL event happening. In the world of probability, they would all be the same; but in the real world we live in, we know that this is highly improbable. Thus, TWO worlds: duality.

Now, as I say, Perakh dismisses this. But let's take a closer look. On the first roll of a single die, any number, 1 to 6, is expected; on the second roll, any number is again expected, but the roll of TWO 4's in a row is noticed; three 4's in a row is NOT expected; nor the 4th, nor the 5th, and even less so the remaining five 4's that will be rolled. So, in the 'world' of Perakh, and 'pure' probability, 10 4's is just like any other roll of the die ten times. But, in the "real" world of intelligent beings, we "know" that something's up. If you were at a craps table and rolled 10 sevens in a row, the pit boss would come looking for you.

So, in the "real" world the following occurs (in our minds, beds of intelligence and knowledge): when a die is rolled and it comes up with a number, we think that this has happened independently of the first roll; this makes EACH roll of the die a 1 in 6 event. They're all "independent" events. However, when we see 4 after 4 being rolled, we begin to link these 'seemingly' independent events together. So, now, the likelihood of a 4 in the first roll is 1 in 6; but now the probability of the second roll is 1 in 36; and the third, 1 in 108, with the tenth 4 being rolled having a probability of 1 in 10^6.

What Perakh has done, perhaps unwittingly, is to say that any group of ten rolls of a die ALL have a probability of 1 in 10^6. But that's not so. It's only so when the "mind" decides to "group together" individual events into a "dependent" whole. (And the "mind" doesn't normally do this when viewing the individual roll of a die.) Notice that it is the "mind" doing this, and deciding this. It's just like the shuffling of cards printed in invisible ink. The mind cannot make distinctions in that case; and, probability theory cannot apply. So, contra Perakh, what the "mind" does, or doesn't do, is essential: and is not simply a "psychological" reaction. It is knowledge at work using information that the "mind" has acquired over the lifetime of the individual.

++++++++++++++++++++++++++++++

To the UD community, and any on-lookers: I actually think I've stumbled upon a very important point in ID theory, and it has to do with the whole notion of "pattern," or "specification." When a "pattern" is detected, then "independent events" become "dependent events," and the probabilities are no longer added but MULTIPLIED! It is the "mind" that connects individual events into a greater whole.
We do this all the time, in all sorts of ways, every day. It's called "pattern recognition." I recognize someone's face from a distance. I recognize my car in the middle of a parking lot. I recognize the face of Abraham Lincoln on the penny I have in my hand. Sometimes it happens, though, where we recognize a pattern--notice that the 'pattern' has to be linked to our base of knowledge (as Dembski includes in his paper on Specification)--and, if this pattern involves some underlying series of events involving probability, our "minds" then connect seemingly "independent events" and combine them into one "inter-dependent" event. So, e.g., a series of 1's and 0's numbering 1,000 comes to be "recognized" as representing the first 10 prime numbers using ASCII code (Dembski's example), and the string of numbers goes from simply being a SERIES of events, each with a probability of 1 in 2, to ONE event, with a probability of 1 in 2^1000.

Are we justified in saying that a protein is not just the sum of events, each of which is 1 in 4 (the 4 nucleotides that form the DNA code)? Is this simply some "psychological" trick being played on us? Are we simply failing to be "counter-intuitive"? Well, the fact that machinery inside living cells can convert this sequence of nucleotides into codons, and codons into amino acids, and amino acids into proteins strongly suggests that the CELL recognizes a pattern, and the entire sequence as a "combined event": i.e., the cell machinery does not see a single nucleotide in isolation from the remaining ones, but views the entire sequence as a "combined event." This is the nub of the difference that holds apart those who hold to ID, and those, like Dawkins, who want to climb Mt. Improbable.

jdk: I'm interested in your reaction, even though I've strayed away from 'pure' probability theory.
PaV
May 29, 2017, 11:43 AM PDT
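Whatever one makes of the "dependent whole" argument above, the arithmetic of a run of identical faces on a fair die is simple to state: the probability of k specified faces in a row is (1/6)^k. A minimal editorial sketch follows (the figures come from that formula under the fair-die assumption; note that the comment above quotes slightly different intermediate numbers):

```python
from fractions import Fraction

p = Fraction(1, 6)
for k in range(1, 11):
    run = p ** k                      # probability of k 4's in a row on a fair die
    print(f"{k:2d} fours in a row: 1 in {run.denominator:,}")
# The last line prints 1 in 60,466,176 -- the 6^10 figure used elsewhere in the thread.
```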
I'm with Dave. I'm not interested at all in Perahk's anti-ID position.jdk
May 29, 2017, 8:28 AM PDT
EugeneS, If I were to attempt to read Perakh's mind, I would guess he is probably interested in correcting some misconceptions as well. Everyone likes to get their 2 cents in. But I'm sure no one is interested in my opinions about Perakh's motives. Does Perakh make any incorrect statements about probability in this book? That's what I really care about here.daveS
May 29, 2017, 7:48 AM PDT
Let's put it plainly. All the likes of Shallit, Perakh, Dawkins, Brian Cox, etc. want is to exclude God from their lives. For this purpose they abuse their own rational faculty, science and whatever else it takes to abuse. Fine, their own lives, their own decisions. But there invariably comes a time to pay the bill... Eccl. 11:9.
EugeneS
May 29, 2017, 7:35 AM PDT
to PaV re 180. You write,
Do you disagree with the notion that the expectation value of receiving a bridge hand is 1.0?
First, as Dave and I have pointed out, I believe you are using the term "expectation value" when you mean probability. See post 117. But yes, I agree with that. In fact, I thought we had agreed on that long ago. Here is a summary of two main points: 1. The probability of being dealt a hand is 1, because we are assuming a hand gets dealt. More formally, if we have a sample space S with n equiprobable events, then when an event happens it has a probability = 1 of being an element of the sample space. That is by definition, as the sample space is the set of all events that can happen in the situation. 2. The probability of getting a particular hand is 1 out of 4 x 10^21 (assuming we are taking order into account.) More formally, the probability of any particular event is 1/n. Does this agree with what you are trying to say? I thought that we had all agreed on this quite a few posts ago.
jdk
May 29, 2017, 5:46 AM PDT
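For reference, a quick editorial computation of the "1 out of 4 x 10^21" figure cited above for a particular ordered 13-card hand:

```python
from math import perm, comb

ordered_hands = perm(52, 13)          # 52!/39!: ordered deals of 13 cards from 52
unordered_hands = comb(52, 13)        # if order is ignored
print(f"{ordered_hands:.3e}")         # about 3.954e+21, i.e. roughly 4 x 10^21
print(f"{unordered_hands:.3e}")       # about 6.350e+11
```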
to PaV re 178: Perakh specifically says at the start of his example that "all n tickets are distributed." He, and I, are obviously using the lottery as a model for a situation where you have 1,000,000 equiprobable events, and one of them is selected. So I don't think your objections in 178 are relevant to, nor accurately depict, what Perakh actually wrote in the example I summarized. Perhaps you could read that section again and refresh your memory, starting on page 385. It's only about three pages long.
jdk
May 29, 2017, 5:23 AM PDT
jdk: Do you disagree with the notion that the expectation value of receiving a bridge hand is 1.0? You mention a weighted average. Well, the odds of ANY bridge hand are 1 in 4 x 10^21. And there is only ONE way of getting that hand. So, if you have an expectation value defined as the number of ways of getting a hand x the odds of getting that hand, this product summed over all possible hands, and then divided by the odds of any ONE bridge hand summed over all possible hands, you get 1.0. Am I missing something?
PaV
May 28, 2017, 10:10 PM PDT
Origenes: Dembski has modified his approach to information in a way that involves searches and probability distributions. In it he finds that successful searches involve information, and that that information can be traced back to some kind of input of information, and this can be linked to a designer. In his new approach, the upper probability bound is not needed. That's my summation. But, I'm guessing you got that much, too. So, I'm not sure what's making you scratch your head.
PaV
May 28, 2017, 10:04 PM PDT
jdk:
In a lottery with a million numbers, there is a probability of 1 that someone will win, and a probability of 1 out of 1,000,000 that a particular person (let us say, Joe) will win.
Perakh says this, and you say that his analysis is right. This statement is wrong. There is not a probability of 1 that someone will win. There is a probability of 1 only upon the condition that you will keep selecting numbers until someone wins. And this involves intentionality on the part of an "intelligent" being. The whole argument of ID is that "intelligence" is the only way of overcoming extreme improbabilities. Again, the probability, outside human agency, is not 1 that someone will win. For example, we know of lotteries where no one has the winning ticket. It is then extended, and you have super lotteries. I'm not exactly sure how this works, but I'm guessing it's along these lines. Everything rolls over to the next week's lottery. Eventually, someone wins.

Now, where Perakh is being 'slick' is by not mentioning the word "eventually," and by not specifically pointing out that this probability is 1.0 only if there have been an appropriate number of lottery tickets sold. These are important, and I think deceptive, omissions. And it is done so as to say, "Well, look, in the lottery your chance of winning is only 1 in 100 million. And yet someone wins all the time." The deception is that no mention is made that this is simply how a lottery is "designed" to work (interesting choice of a word, eh?). It cannot work otherwise. And, that lotteries work by selling a sufficient number of tickets. Naturally, if the odds are 1 in 100 million, and you give out $60 million, but only sold 2,000 tickets at a dollar apiece, there will be one, and only one, lottery. So, this gets into how many lottery tickets are sold. And the probability is one simply because enough, generally, are sold.

Bringing this across to biology, if the odds of a particular protein sequence are 1 in 10^75, this means that for this to "happen," in the same way that a "winning ticket" happens, there would have to be something like 10^75 cell divisions/replications, more or less. And it is this impossibility that Perakh wants to avoid, and does, by failing to bring in the reason for his saying the probability is very high, nearly 1, of there being a winning ticket. So, it is said: "Your chance of winning the lottery is 1 in 100 million; yet someone always wins. See, highly improbable things happen all the time." And, with this 'wave of the hand,' evolution becomes probable--or even 'expected.' This is disingenuous.
PaV
May 28, 2017, 9:43 PM PDT
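PaV's point that "someone always wins" only because enough tickets are sold can be put in numbers. A minimal editorial sketch follows; the 1-in-100-million odds and the 2,000-ticket figure come from the comment above, and the formula assumes each ticket number is drawn independently and uniformly (a lottery that distributes every distinct number reaches probability 1 exactly once all numbers are sold):

```python
def p_some_winner(odds_per_ticket, tickets_sold):
    """Probability that at least one of the tickets sold hits the winning number,
    assuming ticket numbers are chosen independently and uniformly at random."""
    return 1 - (1 - odds_per_ticket) ** tickets_sold

odds = 1 / 100_000_000
for sold in (2_000, 1_000_000, 100_000_000, 300_000_000):
    print(f"{sold:>11,} tickets sold -> P(someone wins) = {p_some_winner(odds, sold):.2%}")
```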
Thanks, hnorman5. Perakh wrote his book back in 2003 and Kitcher's book was in 2008, so Perakh wouldn't have been responding specifically to Kitcher. But I've never understood what argument of Kitcher's people were concerned about here anyway. A phrase of Kitcher's triggered this discussion, and it seems to me we've reached quite a bit of agreement as we've tried to clarify what the issues are. Also, I've paid no attention to any arguments, by Perakh or anyone else, about how all this does, or might, apply to the universe, or life: I've been interested in the pure mathematics.jdk
May 28, 2017, 6:56 PM PDT
jdk at 170 I think your comments on Perakh's take on multiple lottery wins are correct. What's odd is that he seems to be arguing on the wrong side. He clearly explains the role of specification and refutes Kitcher's argument. I'm not sure if this represents a concession on his part that he's declaring irrelevant or what. He does go on to say that the universe could have been created by high probability events but I don't know if that's the main thrust of his argument.
hnorman5
May 28, 2017, 6:27 PM PDT
Origenes, I now understand the difference between the 1/S and 1/S^2 situation, I think. We'll use your lottery example with 1,000,000 numbers (but I'm also thinking of your earlier example with just heads and tails, in 154). Let S = the number of equiprobable events in the sample space. S = 1,000,000 in the lottery example and S = 2 with a coin. The probability of any one particular event happening is 1/S. Now, specify one of the events, such as E = 672,483 in the lottery (or E = T with a coin). The probability of a particular event (Joe's ticket, or the flip of the coin) matching the specification is 1/S. We have agreed on this, and you describe it in 171 when you write,
Joe buys a lottery ticket — nr. 672483 (1) The Kitcher question: "What is the probability that you get exactly those numbers in exactly that order?" Answer: 1 in 1,000,000 (assuming that all tickets are available). Suppose that there is one winning ticket and it is Joe's ticket. (2) What is the chance that Joe's ticket is the winning ticket? Put another way: what is the chance that a random lottery number generator produces the number sequence that matches Joe's specification? Answer: 1 in 1,000,000.
So far so good. Now you introduce a different question.
(3) What is the chance for anyone to get exactly the 672483 lottery ticket AND this ticket to be the winning ticket? Answer: 1 in 1,000,000^2
What you have done here is introduce a third element in the situation: essentially an additional specification for the winning lottery ticket number, which is already the specification for Joe to be a winner. If you decide beforehand that 672,483 is the number in question, you are asking what is the probability Joe has this number (1/S) and what is the probability that this number is the winning number (also 1/S). Joe may have that number but it might not be the winning number: probability of that is 1/S. The winning number might be that number, but Joe doesn't have it: the probability of that is 1/S. The probability for Joe "to get exactly the 672483 lottery ticket AND this ticket to be the winning ticket" is thus 1/S^2. What I don't understand is why this is significant. The more basic situation is you call heads or tails and then I flip the coin. There is an event and a specification, and a 1/S probability they match. What would be a meaningful situation where you would have a specification for the specification? The situation you have here is like this: I am going to flip the coin and you are going to call it heads or tails. Your chance of success is 1/2. However, another person is standing off to the side and quietly says to his friend, "I think he's going to call tails." The third person has added a specification concerning your choice of heads or tails. Therefore there is a 1/4 chance that you will call tails (1/2) and the coin will actually land tails (1/2). This is the situation you described in 154.
jdk
May 28, 2017, 6:19 PM PDT
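The closing coin example above can be checked with a short simulation (an editorial sketch; it assumes both the call and the flip are 50/50, which is what the 1/4 figure relies on):

```python
import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    flip = random.choice("HT")   # the coin flip (1/2 chance of tails)
    call = random.choice("HT")   # your call, assumed 50/50 (1/2 chance of calling tails)
    # The bystander's fixed specification: "he will call tails, and it will land tails."
    if call == "T" and flip == "T":
        hits += 1
print(hits / trials)             # about 0.25, i.e. the 1/S^2 = 1/4 figure
```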
Thanks. I appreciate knowing that we agree. Also, I really know nothing about Dembski's conservation of information approach, and the article you linked to looks long and not straightforward, so I'll pass on thinking about that unless you can show an example of what he is talking about.
jdk
May 28, 2017, 4:40 PM PDT
Jdk: If I have a very large sample space but also a very large number of events which match a specification, then am I justified in inferring chance and not design if the ratio of potential matching events to sample space is large enough?
Jdk, you are obviously correct IMO. Meanwhile I am trying to understand Dembski. Here he is saying that 'conservation of information' renders his universal probability bound of 1 in 10^150 irrelevant:
Dembski: .... The animating impulse behind Shallit’s email, and one that Felsenstein seems to have taken to heart, is that having seen my earlier work on conservation of information, they need only deal with it (meanwhile misrepresenting it) and can ignore anything I subsequently say or write on the topic. Moreover, if others use my work in this area, Shallit et al. can pretend that they are using my earlier work and can critique them as though that’s what they did. Shallit’s 2003 paper that Felsenstein cites never got into my newer work on conservation of information with Robert Marks, nor did Felsenstein’s 2007 paper for which he desires a response. Both papers key off my 2002 book No Free Lunch along with popular spinoffs from that book a year or two later. Nothing else. So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998). In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).
Origenes
May 28, 2017, 4:19 PM PDT
Hi Origenes. I think I understand your 1/s^2 example now, and we'll respond shortly. But I'm also interested in hearing what you think about the middle part of 165 above, where I responded to you by writing,
I agree with the good summary of issues (1) and (2). I disagree with (2) in that it only considers the situation where the sample space is large and the specification space is small. In that situation, yes we would conclude design, such as in the situation I mentioned above as “ludicrously improbable” However, I think it’s important to realize that it is the relative size of the sample space and the specification space that determines the probability of the match between the event and the specification. An obvious situation is throwing three coins. The sample space is so small that there is no specification for which a match would be considered too improbable to be due to chance. The other situation is one I’ve mentioned several times, but Origenes has not acknowledged as relevant: the situation where the specification is broad enough to include a large number of matching events. The example I’ve used above is if the specification is “deal 13 cards in order without any red face cards”, the specification space is so large that this will happen about 16% of the time. Therefore, if it happens, I’m sure we would believe that chance is a sufficient explanation, and that design would not be a warranted conclusion.
So what do you think? If I have a very large sample space but also a very large number of events which match a specification, then am I justified in inferring chance and not design if the ratio of potential matching events to sample space is large enough? Is my question clear?
jdk
May 28, 2017, 3:47 PM PDT
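The 16% figure in the quoted specification ("deal 13 cards without any red face cards") can be reproduced directly. A minimal editorial sketch, assuming "red face cards" means the jack, queen, and king of hearts and diamonds, six cards in all:

```python
from math import comb

deck, red_face_cards = 52, 6          # assumption: J, Q, K of hearts and diamonds
safe_cards = deck - red_face_cards    # 46 cards that are not red face cards
hand = 13

p = comb(safe_cards, hand) / comb(deck, hand)
print(f"{p:.1%}")                     # about 16% (order does not change this ratio)
```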
A lottery is a good example. Suppose a lottery with a million numbers. Joe buys a lottery ticket — nr. 672483

(1) The Kitcher question: "What is the probability that you get exactly those numbers in exactly that order?" Answer: 1 in 1,000,000 (assuming that all tickets are available).

Suppose that there is one winning ticket and it is Joe's ticket.

(2) What is the chance that Joe's ticket is the winning ticket? Put another way: what is the chance that a random lottery number generator produces the number sequence that matches Joe's specification? Answer: 1 in 1,000,000. Note that, although probabilities are the same, it is much easier buying a ticket than winning the lottery.

(3) What is the chance for anyone to get exactly the 672483 lottery ticket AND this ticket to be the winning ticket? Answer: 1 in 1,000,000^2
Origenes
May 28, 2017, 3:11 PM PDT
Here is what Perakh is saying about the lottery: In a lottery with a million numbers, there is a probability of 1 that someone will win, and a probability of 1 out of 1,000,000 that a particular person (let us say, Joe) will win. [I'll add: So when Joe wins, we, the outside observers, are not surprised, because we had no prior specification about who would win, and someone had to win. Joe, however, did have a prior specification about the winner, namely himself, so he is amazed, and considers himself lucky that he hit the 1 in a million jackpot.]

In a lottery with only 100 people, there is a probability of (1/100)^3 that Joe will win three times in a row, which is 1 in a million, which is the same probability as Joe winning the bigger lottery. Why are we not surprised when Joe wins the big lottery, not concerned at all about the chance occurrence of a 1 in a million event, and yet we are surprised, and might even suspect cheating, if Joe wins the smaller lottery three times in a row?

Perakh explains, and I now think he is right, that even though we may not be aware of it, we are not comparing Joe's chances of winning, which are the same in both cases, but rather are comparing the probability of someone winning. In the first case (a million people) the probability of someone winning = 1. In the second case, there are 100 people who might win three times in a row, so the probability of someone winning three times in a row is 100/1,000,000, which is 1/10,000. So it is 10,000 times less likely that someone will win the little lottery three times in a row than it is that someone will win the big lottery, which is certain, even though the odds of a particular person (Joe) winning the big lottery or three times in a row in the small lottery are the same.

So we are justified in being surprised, and perhaps even suspecting cheating, if Joe wins the little lottery three times in a row, but not justified in suspecting cheating if he wins the big lottery, because we aren't really thinking about Joe in particular, but rather about the generic "someone" winning. I think Perakh has a good point, and I don't see anything wrong with this analysis.
jdk
May 28, 2017, 1:45 PM PDT
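The arithmetic behind the lottery comparison above can be laid out explicitly (an editorial sketch; it assumes each weekly draw is independent and each of the 100 players holds one ticket per week):

```python
players = 100

# Probability that a *particular* person (Joe) wins three weekly draws in a row:
p_joe_three = (1 / players) ** 3
print(p_joe_three)                    # 1e-06, same as Joe winning a 1-in-a-million lottery

# Probability that *someone* among the 100 players wins three weeks in a row
# (equivalently, the week-1 winner must simply repeat twice more):
p_someone_three = players * (1 / players) ** 3
print(p_someone_three)                # 1e-04, i.e. 1 in 10,000

# In the big lottery the probability that *someone* wins is 1 by construction,
# which is why a single jackpot winner never surprises us.
```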
Deleted - I understand now something I said was wrong. Here is what Perakh is saying about the lottery:
jdk
May 28, 2017, 1:00 PM PDT
jdk:
Maybe this is the distinction PaV meant to highlight.
Yes, you're right. That is what I meant about equivocation. I'm not sure I want to spend a lot of time re-reading Perakh, and digging into it again. But, that said, the comments you made reflect my disappointment with him. You seem to take a more benign view of his treatment of the lottery. Just from memory, that is where I thought he was way off, and at points made little sense. It is here where I think he is particularly wrong. I'm happy you took the time to look. Realize that Perakh is looked to by many people who disagree with ID as an authority on these matters.PaV
May 28, 2017, 11:42 AM PDT
jdk,
The hard subject, which he doesn’t handle very well and which we are struggling with also, is the issue of some specifications having meaning or significance to us for various reasons, and thus seeming more improbable when they occur than when a non-meaningful event of equal probability occurs. He discusses cognitive and psychological issues in addressing low probability matches between specifications and events, but I don’t think he is very clear. But again, I think we are struggling with understanding these issues also. I did find his example of multiple lottery wins interesting, and think there may be an important point lurking there but would have to study it a bit to explain.
I only have access to a few pages in chapter 13 (and elsewhere where he describes the multiple lottery wins), but I found it less than 100% convincing (or perhaps it's a good explanation, given that he is writing for those of us with limited knowledge of stats) Obviously we all would intuitively conclude there is some cheating going on if some particular person won a large lottery three weeks in a row. But how to justify this rigorously? I can only think back to a mostly forgotten class where likelihood ratio tests were discussed, where you compare two specific competing hypotheses proposed to account for the data. The null hypothesis is that the lottery is fair. What precisely is the alternative hypothesis? One option is that the person who is cheating sets up the lottery so he wins every week. Clearly that would beat the null hypothesis. But there are innumerably many ways the lottery could be rigged. For example, maybe it's set up so that whoever wins during the first week will win every subsequent week (so it's fair in week 1, but rigged thereafter). I don't know how to "objectively" choose an alternative hypothesis in this case.daveS
May 28, 2017, 8:02 AM PDT
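One way to make the likelihood-ratio idea concrete (an editorial sketch; the two alternative hypotheses are the ones daveS names, and the 100-player lottery is carried over from other comments in the thread):

```python
players, weeks = 100, 3
# Observation: one particular person (call him Joe) wins all three weekly draws.

# H0: the lottery is fair each week.
p_fair = (1 / players) ** weeks                    # 1e-06

# H1a: the lottery is rigged so that Joe wins every week.
p_rigged_for_joe = 1.0

# H1b: fair in week 1, then rigged so the week-1 winner keeps winning.
p_rigged_after_week1 = 1 / players                 # Joe must still win week 1

print(f"LR(H1a vs H0) = {p_rigged_for_joe / p_fair:.0e}")        # 1e+06
print(f"LR(H1b vs H0) = {p_rigged_after_week1 / p_fair:.0e}")    # 1e+04
# Either alternative makes the observation vastly more probable than the
# fair-lottery hypothesis, which is the sense in which cheating 'beats the null'.
```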
I read Chapter 13 of Perakh's book. First, I didn't think it was very good. As someone who has written math materials explaining probability to good high school students, I found his explanation of basic probability principles correct, but not very well done. (And I did disagree with some of his statements, but they don't relate to the subjects we are discussing.) Second, I only paid attention to the part about pure probability. I paid no attention to all the asides about origin of life or Bible codes. Third, I think he pretty much said things we have said, such as the probability of getting some hand is 1, but the probability of getting a particular hand is 1/S. (He uses n for the number of events in S.) The hard subject, which he doesn't handle very well and which we are struggling with also, is the issue of some specifications having meaning or significance to us for various reasons, and thus seeming more improbable when they occur than when a non-meaningful event of equal probability occurs. He discusses cognitive and psychological issues in addressing low probability matches between specifications and events, but I don't think he is very clear. But again, I think we are struggling with understanding these issues also. I did find his example of multiple lottery wins interesting, and think there may be an important point lurking there but would have to study it a bit to explain. With all that said, PaV, can you point to some statements (or even page numbers and paragraphs) that you most take issue with?
jdk
May 28, 2017, 7:05 AM PDT
