Uncommon Descent Serving The Intelligent Design Community

Mathematically Defining Functional Information In Biology


Lecture by Kirk Durston,  Biophysics PhD candidate, University of Guelph

[youtube XWi9TMwPthE nolink]

Click here to read the Szostak paper referred to in the video.

 HT to UD subscriber bornagain77 for the video and the link to the paper.

Comments
Prof O @192 You say we can never rule out all chance hypotheses. That is true but, happily, we don't need to. Proofs are for math. Science doesn't require proof; it requires hypotheses that are falsifiable, at least in principle. Thus we can state the ID hypothesis for the flagellum as: the ex nihilo creation of a flagellum cannot occur absent intelligent agency. The hypothesis can be falsified by a single observation of a flagellum forming ex nihilo by law and chance alone. The fine tuning problem isn't the same, because making it falsifiable would require observing a universe being created by law and chance alone. That's not an observation that can be made even in principle in any known fashion. Thus the problem remains in theoretical physics, and ID is one of three possible explanations, along with the discovery of a law that demands just the right amount of mass in a universe, to within a single grain of sand, so that stars and galaxies can form, or the possibility that there is an absurdly large number of universes (such as the 10^500 solutions to string theory), all with different mass/energy totals, ours being one of that set, and we're in it because we couldn't exist in any of the others. This is a very real problem for physicists. Fine tuning isn't something that cdesign proponentists made up for apologetic use. It's something that emerges from the laws of physics. Einstein, IIRC, was the first to find it; he called it the cosmological constant, which was required for a flat universe. He later thought it was a mistake, that it should have been zero and dropped from the general relativity field equations. Today, however, we believe it isn't quite zero but rather a number on the order of 10^-60 which, back in Einstein's time, was not distinguishable from zero. Its value today comes from observation, not theory. Another huge problem in cosmology is that quantum field theory predicts the cosmological constant should be 10^120 times larger than the observed value.
There is no quantum theory of gravity, and that's a huge gap in our understanding of nature. The holy grail of theoretical physics is a theory of gravity that encompasses both the quantum and macro scales. Classical and quantum mechanics are reconciled for all forces of nature except gravity.DaveScot
February 2, 2009 06:23 AM PDT
StephenB[224]:
It doesn’t help that your paragraph about what you do believe is followed by a clarification described in terms of what you don’t believe. Why not give it another try and put it in the form of an affirmation.
The first paragraph is what I'm affirming. You dispute it as follows:
We do not assume that humans create specified complexity, we know it to be a fact.
Who is "we"? The ID community or scientists in general? Facts in science are based on data. Where are the specified complexity data published?
That is why I raised the example of written paragraphs and sand castles. I trust that there is no need to provide a trillion other examples of humans creating design and no known examples of natural processes creating design. If you are disputing this point, please let me know in the most explicit terms possible so that we can discuss it.
You seem to be conflating design with specified complexity. Are the terms synonymous in your mind? In the most explicit terms possible, I'm saying that "specified complexity" has not been accepted as a legitimate scientific concept by the scientific community.R0b
February 2, 2009 06:15 AM PDT
Can we now say that it is a million times more probable that she is guilty than innocent? I wouldn't. I would say, however, that it is a million to one that I'd hire her as a babysitter. Actually, it's closer to 10^150 to 1.tribune7
February 2, 2009 05:02 AM PDT
BA 77: I have little time or space to engage all the rabbit trails on this thread [cf. my always linked . . . including on Bayes, Fisher and Caputo], though I note SB and GP have raised some very useful points. That Weak Arguments FAQ (and glossary . . .) revision from the existing one will prove useful I believe . . . I will however respond on your, @ 197:
Are they actually getting a pure measure of functionality in information here? i.e., is it an across-the-board approximation of 3-D functionality to information?
First, a "no-brainer" footnote, that bio-functional, algorithm-driving information is precisely that: information. The 4-state G/C/A/T digital patterns in DNA strands make a meaningful -- functional -- difference to the implementing cellular machinery that uses it to step by step assemble proteins, whose function is in key sections [e.g. for folding and/or key-lock fitting and/or bringing to bear the right chemical functional groups in the right slots in an enzyme . . . ] very sensitive to composition. KD cites Axe et al (in a peer-reviewed paper . . . FYI, Judge Jones!) on how that sometimes at least works out: ~ 1 in 10^65 - 70 or so in the peptide sequence space. [And that is an empirical -- observationally based -- probability estimate, for those who don't know what such is.] Since Hazen et al specify -- one assumes observed or at least calculable in light of observations [e.g. on folding] -- function in the Fits eqn (p. 2), they are measuring just that: functional info in bits. I suspect the issue over length vs fits has to do with degree of isolation: a more hard to find island of function has more info in it -- it gives us a bigger "surprise" to see it, i.e. more info. And yes, surprise is an info metric too, leading up to the - log[prob] type metric. (Brillouin negentropy info is a related metric and may tie into the genetic entropy of your concern. Cf my note.) Hope that helps GEM of TKI PS: Our friend to the S celebrated his NY in fine style with a 12 km high blast. No hope for better futebol but Lionel Baker made the WI cricket team, which is -- [linked?] on the mend it seems. (At least, that's my hope. How are the formerly mighty fallen! Sigh.)kairosfocus
February 2, 2009 03:22 AM PDT
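The Hazen/Durston measure kairosfocus refers to puts a number on "surprise": functional information is I = -log2(M/N), where M is the count of configurations that meet the functional threshold and N is the total number of configurations. A minimal sketch of that formula, using the Axe-style 1-in-10^65 figure quoted above purely as an illustrative input:

```python
import math

def functional_bits(n_functional, n_total):
    """Functional information in bits: I = -log2(M/N), where M is the
    number of configurations meeting the functional threshold and N is
    the total size of the configuration space."""
    if n_functional <= 0 or n_functional > n_total:
        raise ValueError("need 0 < n_functional <= n_total")
    return -math.log2(n_functional / n_total)

# One functional sequence in 10^65: about 216 bits of functional info.
print(functional_bits(1, 10**65))
```

The rarer the function within the sequence space, the larger the bit value, which is the "bigger surprise" intuition made quantitative.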
Dave Scot [217] If you want to continue participating in this thread I suggest you drop the pedantics. I apologise. I will refrain from pointing out any minor or careless errors that you make in the future. If the odds of something happening are given as 9:1 then, by definition, the reciprocal, the odds of it not happening, are 1:9. That is no error. I don't mind when real errors are pointed out. -dsMark Frank
February 2, 2009 02:42 AM PDT
StephenB (#224): Very well said. I have not had the time to follow this thread in detail, but I think you have pointed to very fundamental issues. So, just to add the strength of repetition: R0b says: "The point is this: If ID proponents want ID to be a part of mainstream science, with all of the benefits that entails, then they must start the discussion with assumptions that are already established." That really astonishes me, R0b. I think you are confounding facts with assumptions. In science, one has to start the discussion with "facts" that are already established, and then anyone can make any reasonable assumption about those facts. That's how science works. I can't understand why basic epistemology is so often violated in the discussions here. If we "had to start the discussion with assumptions that are already established", in order to please darwinists and "be a part of mainstream science", then any false assumption in mainstream science could never be challenged! Is that what you support? As for me, I do prefer not to "be a part of mainstream science" which is so seriously biased in many fundamental issues. So, I will stick to facts, and not to "established assumptions". R0b says: "The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science." An assumption is not a fact, nor does it become one. At best, an assumption is "supported" by facts. So, let's put things a little bit in order: a) The assumption that human design activity does not reduce to law+chance: that is as much an assumption as its opposite, that human design activity reduces to law+chance. Being two logically exclusive affirmations, one must be true, and the other false. I don't think we know for certain, at present.
Therefore, anybody is free to choose either assumption, and to argue in its favour, trying to show which assumption is at present the best explanation for the facts we can observe. That is called scientific debate. Your suggestion, that we should "start" with the "established assumption" that "human design activity reduces to law+chance" (it is, indeed, the preferred assumption of most scientists today), just because it is the prevailing assumption in a social circle, is called scientific tyranny. b) That specified complexity is a coherent concept: that is not an assumption: specified complexity is something we "define". Like all definitions of something observable, it can be done in different ways, and darwinists speculate on those differences. But that is simply not correct. If one gives a definition, it can be coherent or not. If it is not coherent, just show why. If two definitions are slightly different, that is not incoherence: two different people are just defining two slightly different concepts, which can both be coherent, and bear no contradiction. Many times, even with you, I have suggested that, for operational reasons, we stick in our discussions of a very generic nature about ID to some very simple and unequivocal definition of CSI. As you know, my favourite one is "any string of digital information which is functionally specified (that is, for which a function can be explicitly described in a specific context) and which has a complexity (improbability of the target space) which certainly is lower than a conventional threshold (which for a generic discussion we can well assume at 1:10^150)". Now, that is a very simple and unequivocal definition of CSI. We can discuss what we mean by function, or how the complexity can be calculated in specific cases, but that does not make the definition incoherent. A definition must only define something which we can observe. And CSI is an observable property, not a theory.
c) and that humans create it: well, given a coherent definition of CSI, such as the one I gave in the previous point, I think it is very easy to show that this point is an observable fact (and a very easily observable one): just take this (rather long) post. It is CSI in that sense, without any doubt. Are you doubting that? Or are you doubting that I am human? Or are you just doubting that what I write has some meaning? So, are you doubting that humans can routinely output strings of digital information which have some definite function and exceed the complexity threshold I have indicated? Please, be very clear on that. d) and nature does not: well, there is no assumption here. We are just affirming that nature, "as far as we know", and with the only exception of the subset of biological information, which is the object of the ID discussion, shows no example of spontaneous CSI. Again, we take here for simplicity the CSI definition I gave. And I am not saying that tomorrow we cannot find an example of spontaneous CSI in nature. I am not making a logical statement here. I am just making an objective statement about what we have so far observed (an empirical statement). If my statement is wrong, then you can easily show why: just show us a known example of spontaneous CSI in nature. So, to sum up, we have: in a), two competing, and mutually exclusive, assumptions about human design, none of which can be established as better by simple authority or conformism. In b), simple definitions of CSI which can individually be discussed for their coherence. In c) and d), two very objective statements about observable properties, provided that we use a definition whose coherence we have verified. Nowhere here do I see anything like "established assumptions". Nowhere do I see any indication to adopt a conformism which requires absolute betrayals of epistemology and logic just to be defined.gpuccio
February 2, 2009 01:15 AM PDT
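gpuccio's operational CSI threshold of 1:10^150 is about 498 bits. A minimal sketch of the arithmetic, under a uniform-chance model that is my own simplifying assumption (the hard part in practice, which gpuccio acknowledges, is estimating the target-space size, supplied here as a parameter):

```python
import math

THRESHOLD_BITS = math.log2(10**150)  # the 1-in-10^150 bound, ~498 bits

def complexity_bits(alphabet_size, length, target_space_size=1):
    """Bits of improbability for a functional string under a uniform-chance
    model: length*log2(alphabet) minus log2 of the number of sequences
    that would perform the function (the target space)."""
    return length * math.log2(alphabet_size) - math.log2(target_space_size)

def exceeds_threshold(alphabet_size, length, target_space_size=1):
    return complexity_bits(alphabet_size, length, target_space_size) > THRESHOLD_BITS

# A 300-character text over a 27-symbol alphabet, assuming (hypothetically)
# a unique target, carries ~1426 bits, well beyond the bound; a 100-base
# DNA string (4 symbols) carries only 200 bits and does not clear it.
print(exceeds_threshold(27, 300), exceeds_threshold(4, 100))
```

The sketch only formalizes the bookkeeping in the definition; whether a given biological target space is really that small is exactly the point under dispute in the thread.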
Wow, thanks for posting this video. Outstanding. And it has PZ running scared. Great stuff.William Wallace
February 1, 2009 11:06 PM PDT
----Rob: "What I have said is that some of the fundamental assumptions on which specified complexity arguments are based have not gained acceptance in mainstream science, so it seems that the ID community might want to establish those assumptions before arguing from them." Perhaps I read something into your comments that wasn't there. Which assumptions are you alluding to? For the record, here is your comment that I was responding to: ----"The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science." And now your more recent comment: ----"Just to be clear, here are some things that I haven't said in this thread: - Human design activity reduces to law+chance. - "Specified complexity" is an incoherent concept. - Humans can't generate specified complexity. - Nature can generate specified complexity." It doesn't help that your paragraph about what you do believe is followed by a clarification described in terms of what you don't believe. Why not give it another try and put it in the form of an affirmation. Meanwhile, the critical point is this: We do not assume that humans create specified complexity, we know it to be a fact. That is why I raised the example of written paragraphs and sand castles. I trust that there is no need to provide a trillion other examples of humans creating design and no known examples of natural processes creating design. If you are disputing this point, please let me know in the most explicit terms possible so that we can discuss it.StephenB
February 1, 2009 10:57 PM PDT
StephenB[220], Just to be perfectly clear: I agree with everything you have said about spears and sandcastles. Now let us go back to the discussion we had when you joined. Tell me how to do the analysis for the flagellum. Don't just avoid answering by saying "like Kirk does." As it is unclear whether he intends to do a likelihood comparison or a Bayesian inference, I'll let you choose. Let's go.Prof_P.Olofsson
February 1, 2009 10:47 PM PDT
Stephen[220],
Well, we do it exactly the way that Durston has indicated.
Please, you tell me how we should do it for the flagellum. You ask a lot of questions, how about giving an answer for a change?Prof_P.Olofsson
February 1, 2009 10:38 PM PDT
All, Lest you think that the discussion about conditional probabilities is purely academic, let us consider the real-life case of Sally Clark. She had two babies who died at a young age and was charged with double murder. There was no other evidence against her. An expert witness stated that two cases of SIDS (sudden infant death syndrome) were extremely unlikely and she was convicted. The conviction was later appealed and she was acquitted, but only after spending a couple of years in jail. Let us look closer. We have the evidence E of two dead children. We have two competing hypotheses to explain the evidence: guilt (design) and innocence (chance). [For the sake of simplicity, let us neglect the possibility of one murder and one case of SIDS.] Under the assumption of guilt, the evidence is certain, so we have P(E|guilt)=1. Assuming innocence, the probability of E is the chance of 2 cases of SIDS. To have a number, let us say it is one in a million: P(E|innocence)=10^-6 (fairly close to the real number). Can we now say that it is a million times more probable that she is guilty than innocent? I'll leave it there for now as a homework assignment. We can do the full analysis tomorrow. The analogy with Kirk's analysis is obvious: he claims that ID is 10^80000 times more likely than chance, and he does it based on conditional probabilities where ID and chance are to the right of the conditioning bar (see his Example in his post 95).Prof_P.Olofsson
February 1, 2009 10:34 PM PDT
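The point of Olofsson's homework is that the two likelihoods alone do not yield P(guilt|E); Bayes' theorem also needs a prior. A minimal sketch, where the 10^-7 prior is purely an illustrative guess at the base rate of double infanticide, not a real statistic:

```python
def posterior_guilt(p_e_given_guilt, p_e_given_innocence, prior_guilt):
    """Bayes' theorem over the two exclusive hypotheses guilt/innocence:
    P(guilt|E) = P(E|guilt)P(guilt) / [P(E|guilt)P(guilt) + P(E|innocence)P(innocence)]."""
    num = p_e_given_guilt * prior_guilt
    den = num + p_e_given_innocence * (1 - prior_guilt)
    return num / den

# Likelihoods from the comment: P(E|guilt)=1, P(E|innocence)=10^-6.
# With an assumed prior of 10^-7, the posterior probability of guilt is
# only about 0.09 -- nowhere near "a million times more probable."
print(posterior_guilt(1.0, 1e-6, 1e-7))
```

Reading P(E|innocence)=10^-6 as P(innocence|E)=10^-6 is the prosecutor's fallacy that the Clark case is famous for; the mistaken identification of the two conditionals is exactly what the sketch makes explicit.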
Professor Olofsson [214]: Well, we do it exactly the way that Durston has indicated. Beyond that, I will simply make the general point. If I saw a 500 word paragraph written on the surface of the planet Mars, I would know that it probably didn't occur as a natural event. If, on the other hand, I simply observe the word "Olofsson," I would still assume the same thing but with much less mathematical certainty. If I only see the letters Olo, I will shrug it off as a coincidence (or natural occurrence). The mathematical proportions in the aforementioned example are clear enough, so there is no reason in the world why statistics cannot express those proportions, all your claims to the contrary notwithstanding. I have already successfully refuted the argument that we somehow need to have prior knowledge about these events to measure them or that we need to know anything about the behavior of the designer that caused them. So, if I notice four nucleotides, each similar to a letter in the alphabet, continually rearranging themselves in multiple patterns with millions of permutations and combinations and working in concert as if in a small factory, design is indicated with a high degree of mathematical certainty. There really shouldn't be much debate about that. Even so, some on this thread deny or ignore even these elementary facts, which is why I find it necessary to call everyone's reluctant attention to the fact that sand castles are obviously designed. Note that I had to bring everyone in kicking and screaming on that one. Under the circumstances, I have to believe, perhaps unjustly, that all their objections about methods are, at least in part, contrived. So, when we debate these same folks on the math, as we must, we must also deal with the fact that many of them, against reason, rule out design in principle. For them, design is nothing more than a mental construct, and this unwarranted presumption muddies the debate waters.
In a way, it's like trying to discuss Shakespeare with someone who thinks that language is an “illusion.” So, the first order of business, for me at least, is to liberate ID critics from their neglect and horror of the obvious.StephenB
February 1, 2009 10:22 PM PDT
R0b, The whole scientific community accepts the concept of functional complex specified information. They just do not call it that. If you talk about how DNA is information and is complex, they will all nod their heads yes. If you talk about how DNA specifies a protein, they will nod their heads yes and know you are talking about the translation process and transcription process. If you say the proteins are functional they will nod their heads. If you ask the question the right way they will admit that they cannot think of any other place in nature where this happens. They will also admit that DNA acts like a code and that a computer code is similar to DNA in that the code is complex, specifies another process in the computer, and this process has function. They will also say the same thing about human language. Now what they will not say is that the FSCI of DNA did not have a natural origin. You can bring all sorts of arguments such as probabilities, no obvious predecessors, the lack of similar other DNA strings, etc., but they will not grant you anything. Just look at the response of some on this thread or on the thread of Dembski's two papers. Now I believe that Kirk Durston may be doing just what you are asking for, but it won't have much effect on people's way of thinking. They will deny the hand in front of their face before they give ID an inch. It is an ideology debate, not one of science. I have said elsewhere that the most interesting thing about this debate is the refusal of many to accept the obvious or even admit that the obvious might be possible. FSCI exists and is easy to explain, but they will deny each part of it here and, to use an expression from a comment above, say that what we say is daft.jerry
February 1, 2009 09:27 PM PDT
DaveScot[217], Minor point: probabilities as numbers between 0 and 1 are more than "legitimate"; that is how they are defined mathematically and how all results and theorems are formulated. You are correct that everyday uses of percentages and odds are equivalent. Odds of 10:1 against an event correspond to the probability of that event being 1/11. I'm OK with that and I suppose Mark is as well. The more substantive point he makes, though, concerns your use of conditional probabilities. When you say "probability of design" and then write P(e|ID), you are inconsistent. As I outlined in 185, we need to be careful with probability statements involving conditional probabilities. Assume guilt and a DNA match has probability 1. Observe a DNA match and you cannot compute the probability of guilt without estimating other probabilities. While you're here, I'd still be interested in how you think we should resolve the fine-tuning problem in posts [170] and [175]. The numbers 10^80 and 10^20 don't give us any information on what prior distribution we can assume (whatever it means that the particles of the universe are randomly generated in the first place).Prof_P.Olofsson
February 1, 2009 09:25 PM PDT
Mark Frank re: reciprocals If you want to continue participating in this thread I suggest you drop the pedantics. The most common forms of expressing probabilities to the vast majority of people are in percentages (look at a weather forecast) or as a ratio (look at horse racing and other gambling). Expressing them as a number between 0 and 1 is certainly a legitimate third way but it's not common usage. The Merriam-Webster Thesaurus entry for 'probability' lists as synonyms 'chance', 'odds', and 'percentage' so I don't know what you mean by saying odds are different than probability. Perhaps you should be arguing on Merriam-Webster's blog instead of this one. Good luck with that. DaveScot
February 1, 2009 08:58 PM PDT
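The odds-vs-probability conversions in this exchange (Mark Frank's 9:1 vs 1:9 reciprocal, Olofsson's "10:1 against corresponds to 1/11") follow one rule: odds of a:b in favor correspond to probability a/(a+b). A minimal sketch of both directions:

```python
from fractions import Fraction

def odds_to_probability(in_favor, against):
    """Odds of in_favor:against in favor of an event give
    p = in_favor / (in_favor + against)."""
    return Fraction(in_favor, in_favor + against)

def probability_to_odds(p):
    """Return (in_favor, against) in lowest terms for a rational p."""
    p = Fraction(p)
    return p.numerator, p.denominator - p.numerator

# Olofsson's example: 10:1 against = 1:10 in favor -> probability 1/11.
print(odds_to_probability(1, 10))
# Mark Frank's reciprocal: 9:1 in favor (p = 9/10) -> 1:9 against.
print(probability_to_odds(Fraction(9, 10)))
```

Using exact fractions avoids the rounding quibbles that percentages invite, which is one reason the 0-to-1 convention dominates in the mathematical literature.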
Professor, 185 & 192 do address my points and I think they are sensible. I had overlooked them and hence I apologize. If we assume "chance" we can compute probabilities I have no problem assuming chance to compute a probability. My problem is rejecting design to follow a dogma, i.e. P(evidence given ID)=0, which I think is the state of things with the powers that be in this debate. I think that one looks at the evidence, tacitly assumes a particular designer and concludes that, yes, that evidence is precisely what we would get from this particular designer. What would convince you that it's about design and not the designer? So the problem becomes, what exactly is the "ID hypothesis"? If it is merely "intelligent design has been observed," about which we all agree, Professor, I think the hypothesis is more along the lines that intelligent design is quantifiable. And as much as I respect Dr. Dembski & I'm cheerleading for KD, I think ID is a work-in-progress and is subject to falsification, scrutiny, criticism & improvement. And it might even turn out to be unsustainable. But it is not something that should be dismissed (which I'm not saying you do).tribune7
February 1, 2009 08:02 PM PDT
StephenB [157] and jerry, I'm confused (which is admittedly a common state of mind for me). I can't tell which, if any, of my statements in this thread you disagree with. Can you help me out here? Just to be clear, here are some things that I haven't said in this thread: - Human design activity reduces to law+chance. - "Specified complexity" is an incoherent concept. - Humans can't generate specified complexity. - Nature can generate specified complexity. What I have said is that some of the fundamental assumptions on which specified complexity arguments are based have not gained acceptance in mainstream science, so it seems that the ID community might want to establish those assumptions before arguing from them. A good first step might be to submit a paper about specified complexity to a scientific or mathematical journal. (I know that Meyer's "Origin" paper talked about specified complexity, but as a survey paper, it reported on it rather than tried to make a case for it as a legitimate scientific concept.) If you want to know my own views on specified complexity or design as the complement of law+chance, I'm happy to discuss them. But I don't know why ID proponents would care about convincing someone like me. There are important fish to fry out there, and I'm not one of them.R0b
February 1, 2009 07:35 PM PDT
StephenB[213], Sure. Now back to the context: How do you use this insight to do likelihood inference or Bayesian inference of, for example, the flagellum or the origin of life?Prof_P.Olofsson
February 1, 2009 07:05 PM PDT
----PO: "The first paleontologist knew what a spear was though." Perhaps, perhaps not, but the first person to recognize the unique pattern in a spear need not have seen one previously. Let's go back to the sand castle. The first person ever to observe one (that wasn't built in his presence) knew (beyond a reasonable doubt) that it was designed.StephenB
February 1, 2009 06:38 PM PDT
tribune[210], If we assume "chance" we can compute probabilities of the evidence at hand, whatever it is, for example the flagellum. Now, "chance" can mean many different things, but at least we can conceptualize how to find probabilities: combinatorial arguments, previous data, etc. Thus, we can assess P(evidence given chance). Now assume "ID". Should we take it for granted that P(evidence given ID)=1? In doing so, I think that one looks at the evidence, tacitly assumes a particular designer and concludes that, yes, that evidence is precisely what we would get from this particular designer. I don't find this logic convincing. So the problem becomes, what exactly is the "ID hypothesis"? If it is merely "intelligent design has been observed," about which we all agree, I don't see how that helps us compute the probability of the flagellum. So what specific ID hypothesis do you want to state, and how do you compute the probability of the flagellum under this hypothesis? I will continue to point out that I am making effectively the same argument as Dembski here; he does not wish to consider design hypotheses, only rule out chance. One might argue that he is not very constructive if ToE is supposed to be replaced by another theory, but as criticism of evolution it is perfectly acceptable. The difference in Mark's and my replies above to what we would do as ID proponents is that I took the negative road to shoot down darwinism and he the positive road to establish an alternative.Prof_P.Olofsson
February 1, 2009 06:28 PM PDT
tribune[210], Some probability calculations are proper, some are not. Read my posts 185 and 192. You should be happy that I'm siding so much with Dembski on this issue!Prof_P.Olofsson
February 1, 2009 06:06 PM PDT
Professor, you seem to be saying that ID theory is improperly using probability calculations because "we (don't) have empirical data to assess our hypotheses in the first place" Now the ToE account of the formation of the flagellum specifically prohibits the consideration of design. In fact, it dogmatically says random mutations fixed by natural selection are adequate. Now with the mutations being random, chance is a big part of the ToE. So if we can't use probability calculations without some data as to the formation of a flagellum -- which we really shouldn't count on getting -- how can we assess the reasonableness of the claims of the capabilities of random mutations?tribune7
February 1, 2009 05:58 PM PDT
tribune[200], Where do I "seem to" say anything like that?Prof_P.Olofsson
February 1, 2009 05:31 PM PDT
StephenB[207], The first paleontologist knew what a spear was though. There are plenty of data that assist us in making that type of inference about human activities. I don't see how we can use any such data to estimate the probabilities needed for a Bayesian or likelihood analysis of biological phenomena.Prof_P.Olofsson
February 1, 2009 05:28 PM PDT
-----Professor O: "Yep, me too. As long as we have data, we can do Bayesian inference, whether explicit or implicit. We don't have much data on universes being created by chance or by design." Well, no, not really. We don't need data on previous or parallel universes to detect design in this one. This is the case at all levels. The first paleontologist to do the research was able to detect design in an ancient hunter's spear even when there was no precedent. It's the same thing with the first sand castle ever built, or the first love letter ever written. No parallel or precedent is needed.StephenB
February 1, 2009 04:34 PM PDT
-----PO: "Data about politicians can be used to assess the behavior of politicians. I wouldn’t use them to assess the behavior of the designer of the universe." On the other hand, you would use them to detect the "existence" of the designer of the universe.StephenB
February 1, 2009 04:23 PM PDT
tribune[203], Yep, me too. As long as we have data, we can do Bayesian inference, whether explicit or implicit. We don't have much data on universes being created by chance or by design.Prof_P.Olofsson
February 1, 2009 01:29 PM PDT
StephenB[202], Alliteral asphyxiation! By the way, now that you're here, in the last debate a while ago I wrote a last post for you. Just so I didn't do it in vain, here it is, comment 93. Apologies to the rest for using this thread for personal communication!Prof_P.Olofsson
February 1, 2009 01:27 PM PDT
Data about politicians can be used to assess the behavior of politicians. I wouldn't use them to assess the behavior of the designer of the universe. Neither would I. I would use data that can be used to assess whether it was a noble Native American who put markings on a piece of rock rather than wind and rain :-)tribune7
February 1, 2009 01:20 PM PDT
----Prof Olofsson: "That's double daftness dude." It's doubtful that your double-daftness-dude deduction deftly describes the Daffy Duck diversion.StephenB
February 1, 2009 01:07 PM PDT