Uncommon Descent Serving The Intelligent Design Community

Mathematically Defining Functional Information In Biology


Lecture by Kirk Durston, Biophysics PhD candidate, University of Guelph

[youtube XWi9TMwPthE nolink]

Click here to read the Szostak paper referred to in the video.

HT to UD subscriber bornagain77 for the video and the link to the paper.

Comments
Prof_P.Olofsson [27]: "If there is a 1-in-a-million chance that something happens by chance versus some other explanation, you cannot say that 'chance is a million times less likely.' You'd put a lot of innocent people in jail with such a logic!"

You are confused. The use of DNA in court cases is always based on the probability of a match between a sample and the defendant. Lawyers will tell you that the probability of a match is, say, 1 in 50 million (P1). In a case where there are only two possible outcomes, the second outcome must have a probability of 1 - P1. On this basis people go to jail. Also, a DNA match is not circumstantial; it is the most respected form of evidence, and it has been used to exonerate many wrongfully convicted people.

Peter | January 30, 2009, 09:25 AM PDT

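A minimal sketch of the two probabilities at issue in this exchange, using the 1-in-50-million match figure quoted above. The prior and the likelihood under guilt are purely illustrative assumptions, and nothing in the sketch settles which quantity a juror ought to use.

```python
# P1 as quoted in the comment above, and its complement.
p_match_if_innocent = 1 / 50_000_000        # P1: random-match probability
p_no_match_if_innocent = 1 - p_match_if_innocent

# A Bayesian reading also needs a prior; these values are purely hypothetical.
prior_guilty = 1 / 10_000        # assumed chance of guilt before the DNA evidence
p_match_if_guilty = 1.0          # assume a true match if guilty

posterior_guilty = (p_match_if_guilty * prior_guilty) / (
    p_match_if_guilty * prior_guilty
    + p_match_if_innocent * (1 - prior_guilty)
)
print(posterior_guilty)          # roughly 0.9998 with these made-up numbers
```
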
bornagain[75], I don't think you're getting my point, which is that there is no empirical meat with which we can cook up priors for "design" or "chance."

Prof_P.Olofsson | January 30, 2009, 09:23 AM PDT

StephenB [77]: "If you are so sure about what is wrong, it seems reasonable that you should be able to affirm what is right."

That isn't always true: you can be certain that an explanation for a particular event is wrong without having to postulate a correct explanation. Take, for example, something like gravity and motion. You could come up with a mathematical description of the way gravity affects a mass that produces results inconsistent with observation; someone else could therefore be certain that your hypothesis is wrong without ever having to propose a corrected alternative.

Laminar | January 30, 2009, 08:20 AM PDT

Professor Olofsson, as a mathematician, you must have some notion about the soundness of the Darwinian model and, in a reciprocal sense, about providing a sound mathematical model for intelligent design. So, first, what is your estimate of the probability that incrementalism can do what Darwinists say it can do? There would seem to be only two possible alternatives: [A] Darwinism is not really a scientific theory because its paradigm is too vaguely defined to be measured, or [B] it really is a valid scientific theory, meaning that there is a way to measure the probability that incrementalism can do what Darwinists say it can do. Are you willing to either acknowledge [A] or provide a mathematical answer for [B]? Second, since you think Dembski's formulations are not supported by sound mathematics, you must have a better idea. What is it? It is one thing to kibitz from the sidelines; it is something else to actually come up with something. If you are so sure about what is wrong, it seems reasonable that you should be able to affirm what is right.

StephenB | January 30, 2009, 07:56 AM PDT

CJYman @ 67: "it first measures P(T|H) which implies measuring chance hypothesis against ID hypothesis [Bayesian]"

Actually, the ID hypothesis plays no role in Dembski's method. Dembski in TDI: "Because the design inference is eliminative, there is no 'design hypothesis' against which the relevant chance hypotheses compete, and which must be compared within a Bayesian confirmation scheme."

R0b | January 30, 2009, 07:47 AM PDT

Bayesian@63, I'm having a hard time seeing how the hypothesis that "someone or something with sufficient intelligence sets out to create the stuff" constitutes "all possible hypotheses that involve intelligence." How about the hypothesis that "someone with possibly insufficient intelligence sets out to create the stuff," or "someone intelligent set out to create something better, with the possibility of coming up short," or "someone intelligent chose what to create and created it"? In each case, the conditional probability of the outcome would be less than 1. If your stated design hypothesis is all we need, couldn't we also define the chance hypothesis as "an unintelligent cause that is sufficient to the data was operating," in which case P(data|chance) = 1? Interestingly, Dembski says that design hypotheses don't confer probabilities and, in fact, that there is no design hypothesis, at least not one that can be compared to a chance hypothesis in a Bayesian analysis.

R0b | January 30, 2009, 07:42 AM PDT

I'm sorry, then, for thinking you thought too highly of yourself, but my challenge about your lack of empirical substance stands, as I have seen nothing of empirical merit on your part to back up your claims, which show extreme favoritism to the Darwinian perspective as far as interpreting the probability mathematics is concerned.

bornagain77 | January 30, 2009, 07:37 AM PDT

Professor O -- I enjoy your comments, as always, and I'm glad you are posting. Are you saying probability is never something we should base a decision upon?

tribune7 | January 30, 2009, 07:36 AM PDT

CJYman[67], Specification is Dembski's concept and it is entirely frequentist. It attempts to generalize the concept of rejection region. I have to rush to class now but we can talk more later, if you wish.

Prof_P.Olofsson | January 30, 2009, 07:33 AM PDT

bornagain[70], Thanks for the kind words. My comment was for Bayesian though. In [63] he asked if he should "abandon the argument solely on his authority" (referring to me). My answer to him is "no."

Prof_P.Olofsson | January 30, 2009, 07:22 AM PDT

Prof, you state: "You should not base anything on my authority but you might also refrain from accusing me of 'bluffing.'"

Buddy, you haven't even earned my respect for your "authority," and I damn sure ain't anybody special as you seem to think you are. Furthermore, until you put some actual empirical meat on all your high-sounding chalkboard rhetoric, you ain't gonna earn my respect.

bornagain77 | January 30, 2009, 07:14 AM PDT

Bayesian[63], General comment to keep in mind: Even if "chance" as such is well-defined, there are uncountably many chance hypotheses depending on what probability distribution one uses.

Prof_P.Olofsson | January 30, 2009, 06:59 AM PDT

Bayesian[63], On this issue I agree with Dembski. There is no way we can make "reasonable assumptions" to come up with a prior for ID in examples relating to evolutionary biology. At any rate, such a prior is only a thought construction unless we want to claim that there was an initial random experiment that decided "ID" or "chance." As you are aware, there are those who are opposed to Bayesian methods in general, and this might be a case where they have a point. Let us now wait for Mr. Durston to present his argument. You should not base anything on my authority but you might also refrain from accusing me of "bluffing."

Prof_P.Olofsson | January 30, 2009, 06:57 AM PDT

Can someone give me their two cents on a question I have? Here it is: Doesn't the measurement for a specification (and the above measurement for functional information -- the two equations being extremely similar) seem to combine both Bayesian and frequentist analysis, since it first measures P(T|H) which implies measuring chance hypothesis against ID hypothesis [Bayesian], and then measures this against all probabilistic resources (M*N) as a "cut-off point" [frequentist]? As such, doesn't this provide an even stronger measure of functional information and detection of intelligence than either a solely frequentist or a solely Bayesian analysis, since a frequentist analysis merely creates an arbitrary cut-off point if it does not utilize probabilistic resources, and a Bayesian probability of chance vs. intelligence rests on an arbitrary definition of specification, so that the probability of intelligence is also vague? Is this correct, or how far off am I? I hope I've made sense here, and I would much appreciate it if someone with a deeper knowledge of these fields could put in their two cents regarding this question.

CJYman | January 30, 2009, 06:55 AM PDT

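A simplified sketch of the kind of quantity described in the question above: a chance-hypothesis probability P(T|H) discounted by the probabilistic resources M*N and expressed in bits. All of the numbers are made up, and this shows only the structure of the measure, not Dembski's exact formulation.

```python
import math

# Hypothetical inputs, chosen only to show the shape of the calculation.
P_T_given_H = 1e-60   # probability of hitting the target T under the chance hypothesis H
M = 1e30              # hypothetical count of opportunities (e.g., replication events)
N = 1e15              # hypothetical count of trials per opportunity

# Discount P(T|H) by the probabilistic resources and express the result in bits;
# a positive score means the target stays improbable even after all those trials.
score_bits = -math.log2(M * N * P_T_given_H)
print(round(score_bits, 1))   # about 49.8 bits with these made-up inputs
```
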
WeaselSpotting, Money is additive, biological fitness is multiplicative.

Prof_P.Olofsson | January 30, 2009, 06:47 AM PDT

Thanks for the link, benkeshet. OK, please, someone who can decipher all the math in the paper: bottom line, will they/we finally be able to mathematically demonstrate the empirically established principle of Genetic Entropy, and unseat the current evolutionary speculation at the level of molecular biology, with the math they demonstrate in the paper? From what I can gather, in my very limited understanding, this prospect looks very promising. For instance, in this excerpt the paper delineates the functional information of a small and a large protein:

"Although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY."

Thus, from what I can gather, this looks like it may be sufficient to establish Genetic Entropy, i.e.:

"But in all the reading I've done in the life-sciences literature, I've never found a mutation that added information... All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it." (Lee Spetner, Ph.D. Physics, MIT)

And, commenting on a "fitness" test which compared the 30-million-year-old, amber-sealed bacteria to their modern descendants, Dr. Cano stated:

"We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative 'ancient' B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate." (RJ Cano and MK Borucki)

Thus: a loss in functionality and information, as well as conformance to Genetic Entropy.

bornagain77 | January 30, 2009, 05:59 AM PDT

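The Fit-density comparison in the excerpt quoted above, worked through. The FSC values and residue counts are the ones given in the quote; only the division is added here.

```python
secy_fits, secy_residues = 688, 342   # SecY: 688 Fits over 342 residues
reca_fits, reca_residues = 832, 240   # RecA: 832 Fits over 240 residues

print(round(secy_fits / secy_residues, 2))   # ~2.01 Fits per amino acid for SecY
print(round(reca_fits / reca_residues, 2))   # ~3.47 Fits per amino acid for RecA
```
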
This looks interesting: "Measuring the functional sequence complexity of proteins," by Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors.

Background: "Abel and Trevors have delineated three aspects of sequence complexity, Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC), observed in biosequences such as proteins. In this paper, we provide a method to measure functional sequence complexity."

benkeshet | January 30, 2009, 03:55 AM PDT

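A rough sketch of the general idea behind measuring functional sequence complexity in bits ("Fits"): the drop in Shannon uncertainty, per aligned site, from a null state in which all 20 amino acids are equally likely to the distribution actually seen in a set of functional sequences. The toy alignment is made up, and this is only an illustration of the concept, not the paper's exact procedure.

```python
import math
from collections import Counter

# Made-up toy alignment of four short "functional" sequences (one column per site).
alignment = [
    "MKLV",
    "MKIV",
    "MRLV",
    "MKLV",
]

H_null = math.log2(20)   # uncertainty per site if every amino acid were equally likely

fits = 0.0
for column in zip(*alignment):                     # iterate over alignment columns
    counts = Counter(column)
    n = len(column)
    H_site = -sum((c / n) * math.log2(c / n) for c in counts.values())
    fits += H_null - H_site                        # per-site reduction in uncertainty

print(round(fits, 2))   # about 15.67 bits for this toy alignment
```
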
@Mark and PO, PO could be the incarnation of Thomas Bayes himself, but if he doesn't provide us with an example that shows that, under reasonable assumptions, the probabilistic argument is nullified, then what can I do? Abandon the argument solely on his authority? Now, Mark has tried that, which is commendable, and it gives me something to work with.

First, PO: P(ID) = P(chance) = 0.5 simply states that ID is as likely as materialistic Darwinism. Personally, I agree that, because of Occam's razor, it is entirely reasonable to be biased in favor of a materialistic origin of life. So taking a very low P(ID) is very reasonable, perhaps P(ID) = 10^-9 or even P(ID) = 10^-150, which is Dembski's universal probability bound. And, as far as I'm concerned, you may assume P(ID) as small as you like, provided you can give a reasonable explanation for your choice. For instance, "Because I want P(data|chance) to be larger than 10^-150" does not seem reasonable to me. And taking P(ID) = 0 is entirely unreasonable, because then we are excluding ID a priori.

@Mark: Durston said that intelligence, e.g. human intelligence, is capable of producing proteins. In effect this means P(data|ID) = 1. In other words: if someone or something with sufficient intelligence sets out to create the stuff (proteins/primitive organism), it will succeed. I find this an entirely reasonable assumption. It has often been stated, on this blog and elsewhere, that "ID includes all possible hypotheses that involve intelligence." And we all know what that means: somewhere there is or was an intelligent person, alien, god or force that somehow put information in living things. That force could even be a programmed culling force, such as in Dawkins's weasel program. ID is well defined, and you are right that "chance" isn't, but the easy solution to that is to define chance as anything that does not involve intelligence. Then it is well defined.

Bayesian | January 30, 2009, 03:13 AM PDT

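A minimal sketch of the posterior calculation being argued about in the comment above, under Bayes' rule. The likelihood under chance is a purely hypothetical assumption, the priors are the ones mentioned in the comment, and nothing here settles which inputs are reasonable; it only shows how strongly the answer depends on the prior.

```python
p_data_given_id = 1.0        # the comment's assumption: a sufficient intelligence succeeds
p_data_given_chance = 1e-40  # hypothetical likelihood under a chance hypothesis

def posterior_id(prior_id):
    """Posterior probability of ID given the data, for a chosen prior."""
    prior_chance = 1 - prior_id
    numerator = p_data_given_id * prior_id
    return numerator / (numerator + p_data_given_chance * prior_chance)

for prior in (0.5, 1e-9, 1e-150):   # the priors discussed in the comment
    print(prior, posterior_id(prior))
# With this made-up likelihood the posterior is essentially 1 for the first two
# priors but essentially 0 for the 10^-150 prior: the prior does the real work.
```
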
I've never really enjoyed the lottery analogy for evolution. Firstly, nobody/nothing knows that they have entered the lottery; secondly, supposedly you don't win but your offspring does (and they don't share!); when you win, you don't know ... yet you've already collected; and lastly, it doesn't really mean anything until you have reproduced and passed on your winnings. Is this a fair assessment? Great to also see the author on board!

AussieID | January 30, 2009, 02:29 AM PDT

Paul, Thanks for the 'peer review' :) I was quite a zombie yesterday evening, as illustrated. At least I have evidence that someone read what I wrote. Later I realized that I should have written 4^2 in the first place and referred to a 2-nucleotide-long RNA; then it would have been fine, but never mind.

Alex73 | January 30, 2009, 01:26 AM PDT

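The sequence-space sizes being corrected in this exchange, spelled out: a strand of length n over 4 nucleotides has 4^n possible sequences.

```python
for n in (2, 4):
    print(n, 4 ** n)   # 2 -> 16, 4 -> 256 (the figure Paul Giem points out below)
```
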
I want to apologise to Kirk for the strength of my language in #44 and elsewhere. It makes quite a difference when you find the author is reading what you write! And it is a lesson for me that I should always assume that will happen. I do stand by my criticisms of what I saw in the video and look forward to a more detailed explanation.

Mark Frank | January 29, 2009, 11:41 PM PDT

"The payback is 1 million in both cases."

Only if you wager your entire initial $1,000 in winnings. But that's not how evolution is supposed to work, is it?

WeaselSpotting | January 29, 2009, 10:55 PM PDT

Guys, The lottery example is not helpful. What evolutionary theory actually posits is that the $1,000 winners "earn" enough to produce thousands of babies, who then grow up to play the lottery again, and then on the second (or is it 20th?) generation all of them can play again, making the second step virtually certain. This is all fine if the first step is in fact advantageous. But if it is not, Prof_P.Olofsson is right: there is really no advantage, and if anyone winning only one lottery is killed, then, if two lottery tickets are not bought at the same time, the two-step method is worse off than the single try. (Actually, to be very technical, in the example given, if the $1,000 "invested" in the original lottery is "reinvested," the chance of winning $1,000,000 on the second round is less, because it depends on winning $1,000 a thousand times in a row, which is roughly 1 in 10^3000. But if the example is of a "fair" lottery, the probability of winning 1 million dollars in one fell swoop is the same as that of winning $1,000 twice in a row.)

The real question is not the mathematics. That is rather straightforward (although sometimes this seems to be a challenge also, as when someone describes 4^4 as first 16, then 64, instead of 256 :) ). The real question is which mathematics corresponds to the usual biologic situation. That is no longer in the Prof's field of expertise, although I'm sure that, like the rest of us, he is an intelligent observer. I don't know of any 3-mutation payoffs that have been observed. The most I have seen are 2-mutation payoffs, such as chloroquine resistance, and now apparently citrate transport and nylonase. What we need is not statistics. It is biochemical/genetic data.

Paul Giem | January 29, 2009, 10:15 PM PDT

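A quick check of the lottery arithmetic in the last few comments, using the thread's own numbers and exact fractions to avoid rounding quibbles.

```python
from fractions import Fraction

p_small = Fraction(1, 1000)        # one 1-in-1,000 lottery
p_two_small = p_small ** 2         # winning it twice in a row
p_big = Fraction(1, 1_000_000)     # one 1-in-1,000,000 lottery
print(p_two_small == p_big)        # True: the two routes are equally improbable

# Paul Giem's "reinvestment" aside: winning the 1-in-1,000 game a thousand times
# in a row has probability (1/1000)**1000, i.e. 1 in 10**3000.
print(p_small ** 1000 == Fraction(1, 10 ** 3000))   # True
```
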
WeaselSpotting[56], I don't think you read my post [52] carefully enough. The payback is 1 million in both cases.

Prof_P.Olofsson | January 29, 2009, 09:32 PM PDT

Excellent point, Gil. Using Prof Olofsson's example, the odds of winning two $1,000 lotteries (in a row) are the same as winning a million-dollar lottery once. The odds of the two events occurring are the same, but the payback is hugely different... in one case you get $2,000, in the other $1,000,000.

WeaselSpotting | January 29, 2009, 09:09 PM PDT

KD, I want to hear your input. Please come back soon. :-)

Domoman | January 29, 2009, 08:54 PM PDT

Professor Olofsson, what is your estimate of the probability that incrementalism can do what Darwinists say it can do?

StephenB | January 29, 2009, 08:45 PM PDT

bornagain[52], You say

"My respect for your debating style is plummeting the longer I see you misleading the presentation of facts"

which I think you ought to back up with some examples. When did I "mislead the presentation"? I have blogged on UD a few times before, so I am used to such sweeping allegations, but once in a while it would be nice to learn if there is any substance to them. I would like to point out that my comment about learning "last week" was a joke (Mr. Dodgen sounded a bit angry, so I thought I would lighten the mood). Of course I've known it for a long time, ever since my second year of graduate school. I would also like to assure you that my comment [38] about Mr. Bayesian was a joke. I am sure that Mr. Bayesian is a handsome man who smells nicely of lavender and pomegranate. My comments about probabilities are serious, factual and accurate to the best of my knowledge.

Prof_P.Olofsson | January 29, 2009, 08:21 PM PDT

bornagain[51], My point was that I don't see how the kind of additivity that Mr. Dodgen assumes is relevant in biology. What is it that adds up the way money does? Far more relevant, it seems to me, is multiplicativity. So instead of buying fixed-price, fixed-payout lottery tickets, think of games where you win in proportion to your wager. As you rightly point out, the chance of winning twice in a 1/1000-probability game is the same as winning once in a 1/1000000-probability game, and if you wager your 1000-dollar win, you have your million. In this sense, the two scenarios are probabilistically identical. In evolutionary biology (about which I know far, far less than about probability), you can only multiply probabilities for neutral mutations, those that remain in a roughly fixed proportion in the population. Favorable mutations will appear in an increasing proportion, so in that case you have a better chance to do it in two small steps than in one large step. I suppose we could do some computations here with reproducing individuals, mutation rates, etc.

Prof_P.Olofsson | January 29, 2009, 08:10 PM PDT

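A toy illustration, with entirely made-up parameters, of the distinction drawn in the comment above: the expected number of opportunities for a second mutation is proportional to how many carriers of the first mutation exist, so a first step that spreads under selection offers many more chances than a neutral one that stays rare. This is only a sketch, not a population-genetics model.

```python
pop_size = 1_000_000
mu = 1e-8                  # per-carrier chance of the second mutation, per generation
generations = 200
s = 0.05                   # selective advantage of the first mutation (favorable case)

carriers_neutral = 100.0   # a neutral first mutation, hovering at a roughly fixed count
carriers_favorable = 100.0 # a favorable first mutation, starting at the same count

expected_neutral = 0.0
expected_favorable = 0.0
for _ in range(generations):
    expected_neutral += carriers_neutral * mu       # opportunities this generation
    expected_favorable += carriers_favorable * mu
    # neutral: stays put on average; favorable: grows by (1 + s), capped at the population size
    carriers_favorable = min(carriers_favorable * (1 + s), pop_size)

print(expected_neutral, expected_favorable)
# roughly 2e-4 expected second-step mutations in the neutral case versus ~0.3 here
```
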
So, professor, did you also learn that winning two 1,000-to-1 lotteries carries the same odds as winning a single 1,000,000-to-1 lottery, hence Gil's rightful stress that the odds get far worse? Please tell me you aren't deliberately being deceptive (then again, if you were being deliberate, would you tell me?). Others express respect for you, and I'm sure in some areas you may merit it, yet my respect for your debating style is plummeting the longer I see you misleading the presentation of facts.

Slightly off-topic video I just uploaded: A Few Hundred Thousand Computers vs. A Single Protein Molecule http://www.youtube.com/watch?v=G-6mVr6vJJQ

bornagain77 | January 29, 2009, 07:57 PM PDT
