Uncommon Descent Serving The Intelligent Design Community

Some Thanks for Professor Olofsson II

The original Professor Olofsson post now has 340 comments on it, and is loading very slowly.  Further comments to that post should be made here.

Comments
JT #8: The issue is, "How reasonable is it to assume a uniform pdf when calculating the probability of an outcome arising by chance?"

You ask how the complete cause for something can be more probable than the thing itself, since once you have the complete cause for the thing, the thing itself occurs. Of course the result can't be less probable than the cause if it always follows the cause. But that just goes to show that the result (the 100-bit string) was not the product of a uniform pdf. The presence of the cause means the supposed pdf was wrong.

Let's make it more concrete. Suppose the outcome is 100 1's. The probability of this outcome, assuming a uniform pdf, is 2^-100. However, if a computer programme which amounts to "write 1 a hundred times" is only 10 bits long (one can imagine such a mechanism arising naturally), then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1's is actually 2^-10, and the assumption of a uniform pdf was very misleading. I think I must have missed something. This all seems so trivially obvious?

Mark Frank
December 8, 2008, 05:31 AM PDT
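A minimal sketch of the arithmetic Mark Frank describes above, assuming independent, equiprobable bits; the 10-bit program length is his hypothetical figure, not a measured one:

    from fractions import Fraction

    # Probability of one particular 100-bit string when each bit is
    # independently 1 or 0 with probability 1/2 (a uniform pdf).
    p_string = Fraction(1, 2) ** 100

    # Probability of one particular 10-bit program under the same uniform pdf.
    p_program = Fraction(1, 2) ** 10

    print(p_string)              # 1/2**100
    print(p_program)             # 1/1024
    print(p_program / p_string)  # 2**90: the short program is vastly more probable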
Barry, thank you for the thread; Sal Gal, thank you for the link. "There has been confusion over Dembski's point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event." I agree.

tribune7
December 8, 2008, 05:11 AM PDT
JT: I am always thinking in terms of the real thing: biological information. In the cell, as far as we know, there is no computer which can calculate a compressible sequence and output it. First of all, the protein gene sequences are not compressible; and second, there is no computer there. Even in the abstract example, you don't need only the input program: you need some Turing machine too, into which the input program can be fed. So the real probability of the compressible string arising by chance in a purely random environment, through a compressed input program, has to take into account the probability of both the Turing machine and the program arising by chance and working to produce the required output. I am not sure that probability is higher than the probability of the whole string arising by random variation alone. Perhaps, for very long strings, that would be the case. But again, that has no relevance to the biological issue.

gpuccio
December 8, 2008, 02:47 AM PDT
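A rough sketch of the bookkeeping gpuccio describes above; the bit-lengths assigned to the string, program, and Turing machine are placeholder assumptions, since the real values are unknown:

    from fractions import Fraction

    def p_uniform(bits):
        """Probability of one specific bit string of the given length
        under a uniform, independent-bit distribution."""
        return Fraction(1, 2) ** bits

    string_bits  = 100   # length of the target (compressible) string
    program_bits = 10    # hypothetical length of the generating program
    machine_bits = 200   # hypothetical description length of the Turing machine

    # Direct route: the whole string arises by chance, bit by bit.
    p_direct = p_uniform(string_bits)

    # Indirect route: both the machine and the program must arise by chance.
    p_indirect = p_uniform(machine_bits) * p_uniform(program_bits)

    print(p_direct > p_indirect)  # True for these assumed sizes: the "shortcut"
                                  # is less probable than the string itself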
gpuccio - It's strange, because we start out saying the same thing and end up saying the exact opposite. We both agree the probabilities will be equal; you say they're determined by the output length, I say it's the program-input length.

JT
December 8, 2008, 01:11 AM PDT
Mark Frank wrote: "Let's assume that in the case of the bit string it means each bit is equally likely to be 1 or 0 and the probabilities are independent. Then the probability of a particular bit string of length 100 is 2^-100. If that bit string can be generated by a program of length 10, then the probability of that generating bit string is 2^-10 - which is much greater."

How can the complete cause for something be more probable than the thing itself? Once you have the complete cause for the thing, the thing itself occurs. (Not to inundate you with technical jargon.) I would suppose that the probability of the output bit string would be 2^-10, except you have to consider the length of the input to the program as well. So it's the length of the smallest program-input that's relevant. So assume you have some active process f that came into existence by chance, or has always existed for no apparent reason. This process acts on something else that also came into existence by chance; call it x. The output of f acting on x is y. How can the probability of y be less than the probability of f(x)? So no matter how long y is in bits, its probability can't be less than the probability of f(x).

I did just read the following in the Dembski paper: "define p = P(T|H) as the probability for the chance formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure)." So maybe what I'm saying is obvious to him and everyone else (though I'm not sure). Sal Gal is the one apparently expert in Algorithmic Info Theory.

JT
December 8, 2008, 12:50 AM PDT
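A small illustration of the idea JT appeals to above, using zlib compression as a crude stand-in for the "smallest program" of algorithmic information theory; true Kolmogorov complexity is uncomputable, so the helper below (an assumed name, not a standard API) only gives an upper bound:

    import os
    import zlib

    def description_bits(data: bytes) -> int:
        """Bits in a zlib-compressed description of the data - an upper
        bound on (a rough proxy for) its algorithmic complexity."""
        return 8 * len(zlib.compress(data, 9))

    regular = b"1" * 100       # a highly regular 100-byte string
    random_ = os.urandom(100)  # 100 bytes from the OS random source

    # Under the 2**-K view, a shorter description means a larger
    # "algorithmic probability": the regular string compresses far below
    # its raw 800 bits, while the random one typically does not compress.
    print(description_bits(regular))
    print(description_bits(random_))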
Mark Frank: The "uniform probability distribution function" refers to the distribution over all the possible forms of the whole sequence. So, for instance, for a 100-bit sequence, the probability distribution always refers to a search space of 2^100 different sequences: if we assume a uniform distribution, each sequence will have the same probability (1:2^100). If the distribution is not uniform, some sequences will be more likely, others less likely. Let's remember that the total probability is always 1, and that the number of sequences is always 2^100. Obviously, some sequences could have 0 probability, depending on the constraints of the system.

Your observation about the compressible sequence is not correct: the fact that the sequence has a lower compressed informational content does not mean that it has a higher probability in a random system. It remains equally improbable. It just means that, if you have a computer and the correct algorithm, which is certainly shorter than the string itself, then you can generate the string without detailing each single bit. But in a random system, the string will always have a probability of 1:2^100, if the distribution is uniform.

Let's bring all that to the biological field. In DNA, you have a four-letter alphabet (the four nucleotides). In proteins, you have a 20-letter alphabet, which is linked to the DNA alphabet by the genetic code. Is the distribution of, say, a 100-nucleotide sequence uniform? Is the probability of each sequence 1:4^100? I believe it practically is. If you build a DNA strand randomly, the distribution will be uniform or quasi-uniform. As we are dealing with a complex biological system of synthesis, the true empirical distribution can obviously vary, according to the specific system. For instance, if there is different availability of the four nucleotides, some sequences will be more likely, others less. And there can be other factors which favor some nucleotides in a real biological system. So yes, in a real system the empirical distribution will not be perfectly uniform. For proteins, I have already noted that, since the genetic code is asymmetric, some amino acids have a higher probability of being represented in a random protein than others. And amino acids are present in different concentrations in the cell. So, again, the distribution is certainly not completely uniform.

Has all that any relevance to our problem, which is the nature of biological information? Practically not. Why? Because what we are interested in here is the probability distribution of functional protein (or DNA) sequences vs. all the non-functional sequences of the search space. It is obvious that any asymmetries in the theoretical uniform distribution can have no correlation with the general space of functional proteins. That should be evident, because there is no relationship between the constraints which make a protein functional (folding, active site, etc.) and the constraints which may influence the distribution of random sequences (availability of the elementary components, biological characteristics of the environment, etc.). They are obviously totally unrelated, unless you are a theistic evolutionist of the most desperate kind... So, a specific non-uniform probability distribution can certainly favor one specific functional protein, by mere chance, but it will have completely different effects on other functional proteins.

And we have a huge number of functional proteins in nature, very different from one another, organized in very different families and superfamilies, and whose primary sequences and tertiary structures are completely different. Therefore, it is obvious that any deviation from the uniform distribution will have totally random effects on the space of functional proteins with respect to the general space of all possible proteins. Indeed, it is rather obvious that if the restraints imposed on the system, which may cause a non-uniform distribution, are too strong, the system will no longer be flexible enough to express all functional sequences, even in a design context. In other words, suppose the designer needs, for functional reasons, a specific protein where 50% tryptophan is required to achieve the functional sequence, and that the physical constraints of the system make that kind of sequence not only improbable, but impossible. Then even the designer cannot achieve that specific result. And if the designer is using, as a tool in engineering his proteins, some partially random variation (as modern protein engineers do, as well as the immune system), then if the probability of the target sequence is too low, even if not 0, that result will just the same be out of the designer's power. In other words, a random system must be flexible enough (in other words, it must behave according to a sufficiently uniform probability distribution) to be used as an instrument to generate functional information, even in the hands of an intelligent designer.

gpuccio
December 8, 2008, 12:24 AM PDT
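A small sketch of the uniform vs. skewed comparison gpuccio describes for nucleotide sequences; the skewed frequencies and the helper name below are illustrative assumptions only, not measured biological values:

    from functools import reduce

    def seq_probability(seq, freqs):
        """Probability of one specific DNA sequence when each position is
        drawn independently with the given nucleotide frequencies."""
        return reduce(lambda p, base: p * freqs[base], seq, 1.0)

    uniform = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
    skewed  = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}  # hypothetical GC-rich pool

    seq = "GC" * 50  # one specific 100-nucleotide sequence, all G and C

    print(seq_probability(seq, uniform))  # 0.25**100, i.e. 1/4**100
    print(seq_probability(seq, skewed))   # 0.35**100: larger, yet still astronomically small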
Sal Gal: I appreciate your perfectly balanced comments on Dembski's post. He is obviously not refuting the EF, but rather "dispensing" with it in favor of a more advanced approach to the matter. Obviously, CSI remains for him the most important concept, as can be clearly seen in his points 2 to 5. I am in whole agreement with him on all those points, and am looking forward to any new development of his thought.

gpuccio
December 7, 2008, 11:46 PM PDT
JT wrote this (which I repeat to avoid switching back to the other thread):

"The objection is that Dembski's calculations establish the probability of a bacterial flagellum being thrown together at a point in time, such that molecules randomly floating around just by happenstance one day converged into the configuration of a bacterial flagellum, where no type of organism existed before. But what if preexisting conditions in physical reality favored the formation of at least certain key attributes of a bacterial flagellum at a rate that was much higher than blind chance? The argument goes that Dembski's arguments do not address this, and the probability he calculated could be too low as a result. Well, what I was saying was, suppose those preexisting conditions are such that they directly account for the formation of every key attribute of a bacterial flagellum. IOW, let's just take it for granted that some identifiable physical process alone, sans ID, can completely account for the production of a bacterial flagellum from nothing. So the probability of getting a bacterial flagellum is equal to 1, and we have this physical process that preceded it to account for it, but now we can't account for the origin of that physical process. Well, what I'm saying is that the probability of getting that physical process by uniform chance cannot be greater than the probability of getting a bacterial flagellum by uniform chance. Even if this physical process itself was directly caused by something that preceded it, you will eventually have to hit a point of origin, where nothing preceded it by blind chance or something else, and the probability of that point of origin for bacterial flagellums occurring by uniform chance cannot be greater than the probability of a bacterial flagellum itself occurring by uniform chance. It should be obvious why the cause for a bacterial flagellum cannot be more likely to occur than the flagellum itself. Just thinking about this statement for a few seconds should explain why. But to expand on this, in algorithmic information theory, the probability of a particular binary string C (e.g. 100011001…) is equal to the probability of the smallest program-input that will generate C as output. So that's why the probability of a cause for a bacterial flagellum is equal to the probability of a flagellum itself."

It is an interesting idea, but I don't think it works. A "uniform probability distribution function" is not fully defined until you specify what it is uniform across. Let's assume that in the case of the bit string it means each bit is equally likely to be 1 or 0 and the probabilities are independent. Then the probability of a particular bit string of length 100 is 0.5^100. If that bit string can be generated by a program of length 10, then the probability of that generating bit string is 0.5^10 - which is much greater. What am I missing?

Mark Frank
December 7, 2008, 11:41 PM PDT
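One way to state the inequality JT is relying on in the quoted passage, as a sketch: treat the process, its input, and the outcome as random variables F, X, Y (these symbols are a gloss, not from the original comments). If f acting on x yields y with certainty, the event that this particular cause occurs is contained in the event that the outcome occurs:

    % If f(x) = y with certainty, then {F = f, X = x} is a subset of {Y = y}, so:
    \[
      P(Y = y) \;=\; \sum_{(f',\,x')\,:\;f'(x') = y} P(F = f',\, X = x')
              \;\ge\; P(F = f,\, X = x).
    \]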
There has been confusion over Dembski's point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event.

Sal Gal
December 7, 2008, 10:45 PM PDT
There was an extraordinary clarification by Bill Dembski in the other thread. I'd like to start by thanking him for setting matters straight. Here is his comment, for easy reference.

"I wish I had time to respond adequately to this thread, but I've got a book to deliver to my publisher January 1 — so I don't. Briefly:
(1) I've pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.
(2) The challenge for determining whether a biological structure exhibits CSI is to find one that's simple enough on which the probability calculation can be convincingly performed but complex enough so that it does indeed exhibit CSI. The example in NFL ch. 5 doesn't fit the bill. The example from Doug Axe in ch. 7 of THE DESIGN OF LIFE (www.thedesignoflife.net) is much stronger.
(3) As for the applicability of CSI to biology, see the chapter on "assertibility" in my book THE DESIGN REVOLUTION.
(4) For my most up-to-date treatment of CSI, see "Specification: The Pattern That Signifies Intelligence" at http://www.designinference.com.
(5) There's a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in)."
Sal Gal
December 7, 2008, 10:42 PM PDT
