
At Some Point, the Obvious Becomes Transparently Obvious (or, Recognizing the Forrest, With all its Barbs, Through the Trees)


At UD we have many brilliant ID apologists, and they continue to mount what I perceive as devastating assaults on the increasingly indefensible claims made for the creative powers of the Darwinian mechanism of random errors filtered by natural selection. In addition, they present overwhelming positive evidence that the only known source of functionally specified, highly integrated information-processing systems, with such sophisticated technology as error detection and repair, is intelligent design.

[Part 2 is here.]

This should be obvious to any unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline.

Here is my analysis: The Forrests of the world don’t want to admit that there is design in the universe and living systems — even when the evidence bludgeons them over the head from every corner of contemporary science, and when the trajectory of the evidence makes their thesis less and less believable every day.

Why would such a person hold on to a transparently obvious 19th-century pseudo-scientific fantasy, when all the evidence of modern science points in the opposite direction?

I can see the Forrest through the trees. Can you?

Comments
Dr Liddle: Okay, let's see if we can find a good common point for discussion. GEM of TKI
kairosfocus
June 13, 2011 at 03:26 PM PDT
kairosfocus: Thanks for your responses. I agree we are going in somewhat useless circles right now, so I will step back for a little from the issue of Shannon Information. I do think that part of the trouble (and it is no fault on your part) is lack of shared vocabulary, or, at least, lack of shared referents for our respective vocabularies. I will re-read your posts, and try to understand what you are saying. Getting to common ground is, as we agreed, difficult. Still, where there is a will there is a way :) With best wishes Lizzie
Elizabeth Liddle
June 13, 2011 at 02:18 PM PDT
Thanks, Upright BiPed! I need to slow my pace anyway (got my Java homework to do!) Will check back later. Cheers Lizzie.
Elizabeth Liddle
June 13, 2011 at 12:46 PM PDT
PS: I should underscore -- the genetic code is a known digital coded meaningful system, feeding into an automated assembly process. Biology, then, is not the magical exception to the patterns of such systems, unless those who claim it is can show that. In short, the turnabout dismissal fails.
kairosfocus
June 13, 2011 at 10:45 AM PDT
Dr Bot: I have pointed out the general characteristics of coded systems that have to carry meanings. The use of rules or constraints of meaning and/or function locks out vast swaths of the nominally possible configurations as non-functional or meaningless. This is a general property of organised, structured, functional systems, and we already know that for proteins -- a main expression of genetic codes -- getting the AA sequence wrong can easily destabilise folding or function. We even know that some correctly sequenced proteins need chaperoning to make sure they fold right, and that cells have corrective mechanisms to deal with mischained, misfolded or non-folded proteins. All of these mutually support the existence of identifiable islands of functional organisation, which are narrowly specified. Indeed, let us observe how cells assemble proteins step by step, acid by acid, and often chaperone the resulting AA chain to see to it that there is a resulting correct protein. It is those who wish to suggest that the expected and in material part observed island-of-function pattern does not hold who have a duty of warrant here. GEM of TKI
kairosfocus
June 13, 2011 at 08:08 AM PDT
Good morning (local time) Elizabeth, I have read, and very much appreciated, your response at 304. I hope to be able to respond to it later (late) today, or first thing in the morning. As for the operational definition you seek, to my mind the attempts thus far have not captured the center of what is to be demonstrated. However, I think there may be enough on the table now to change that. Cheers...
Upright BiPed
June 13, 2011 at 07:51 AM PDT
Show how by single letter changes, functional all the way, you can get from a Hello World to an operating system, and so on.
Why is this relevant? You are basically saying: "The topography of the space for high-level computer languages consists of many discrete isolated points of function separated by large expanses of non-function - therefore the topology of the space for biology is the same." This looks like a strawman argument to me. Kindly show why a high-level computer language is a valid direct parallel of biology. We have been over this many times before, KF, and it gets a little tedious having to correct you!
Kindly show that the case is not so
Kindly show that it is! Personally I think the jury is still out.
DrBot
June 13, 2011 at 06:19 AM PDT
F/N: Let me put this a somewhat different way, in hopes it will get through the perceptions and beliefs. Designers of complex systems know that they have to configure very carefully if the resulting composite object is to work. Similarly for writers of programs or of text, or even mechanics putting in a car part. Islands of function are a commonplace reality where clumping at random is overwhelmingly unlikely to result in required function. Even in biological systems, the most ardent Darwinists will shy away from high doses of ionising radiation, precisely because they know that the most likely results of mutations or random reorganisation of biomolecules will be damaging, with radiation sickness and cancer lurking. In short, the overwhelming evidence is that the islands-of-function view is correct. Those who wish to reject it have to show empirical grounds for doing so. GEM of TKI
kairosfocus
June 13, 2011 at 06:17 AM PDT
F/N: You may dismiss the point of islands of function on "what might be" all you wish. Kindly show that the case is not so, i.e. that protein fold domains are not deeply isolated in AA sequence space -- in the teeth of the research that points that way. Show that functional coded messages are not narrow cross sections of sequence space, where the overwhelming majority of complex strings are non-functional and even meaningless. Show how by single letter changes, functional all the way, you can get from a Hello World to an operating system, and so on. Until you show such, we have every reason to accept the observation of islands of function in seas of non-function, especially given the impact of the constraints of required function on the possibilities that would otherwise obtain.
kairosfocus
June 13, 2011 at 05:58 AM PDT
F/N: When it comes to quantifying CSI, or more relevantly FSCI, the approach of identifying the presence of an event E in a zone of interest T (from a very large config space relative to available resources), quantifying the number of bits, and comparing to a threshold large enough that it is unlikely you were found in T by accident rather than intent, is reasonable and effective, as has been discussed for months. That is what the simple brute-force X-metric has done for years now, and it is what the log-reduced Chi metric does: X = C*S*B, where C is 1/0 according to whether the capacity is beyond 1,000 bits, S is specificity 1/0, and B is the number of functional bits. Chi_1,000 = I*S - 1,000, bits beyond a threshold. You may object, as is your privilege, but I daresay these two metrics have long done and will continue to do the job of reliably identifying cases of designed informational objects, as can be tested against known cases. GEM of TKI
kairosfocus
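For concreteness, here is a minimal sketch (in Python) of how the two metrics just described could be evaluated; the function names and the example input are illustrative assumptions, not anything specified in the comment above.

```python
import math

def x_metric(functional_bits, is_specific, threshold_bits=1000):
    # Brute-force X-metric: X = C*S*B, with C = 1 only if the capacity
    # exceeds the 1,000-bit threshold and S = 1 only if the string is specific.
    C = 1 if functional_bits > threshold_bits else 0
    S = 1 if is_specific else 0
    return C * S * functional_bits

def chi_1000(info_bits, is_specific, threshold_bits=1000):
    # Log-reduced form: Chi_1000 = I*S - 1,000, i.e. functional bits beyond the threshold.
    S = 1 if is_specific else 0
    return info_bits * S - threshold_bits

# Hypothetical example: a 500-codon protein-coding gene at 2 bits per base.
I = 500 * 3 * 2
print(x_metric(I, True))   # 3000 (non-zero: past the 1,000-bit threshold)
print(chi_1000(I, True))   # 2000 bits beyond the threshold
```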
June 13, 2011 at 05:49 AM PDT
Dr Liddle: This is beginning to go in useless circles. The context of any reasonable extra information to judge a particular message has already been addressed, and that to exhaustion. Above I have excerpted Shannon, which should give further context if that was needed. And, given what we know about signals and noise, it would indeed be very possible to identify a single extraterrestrial message as that. The relevant context is not just an immediate traffic analysis of the string of messages. The point of the CSI concept is again to identify that which is organised, as opposed to at random or merely preset on mechanical necessity, an issue that, as I cited above, was put on the table by the 1970s. Pardon, but I find this discussion is beginning to be a tedious, non-progressive treading around circles, in effect demanding of us that we show basic and long since established things over and over and over again, where no repetition or elaboration will ever be deemed sufficient to be acceptable. I think you need to look at the issue of what makes for adequate warrant, and at what point you may be treading unawares into selective hyperskepticism based on suspicion of sources. Much of the above for me is stuff I first saw as a telecomms course student many, many years ago, and I am astonished to see this sort of stuff being suddenly suspect. When I saw that Schneider was trying to "correct" and dismiss Dembski as ignorant on the most commonplace definition of information there is in the field, I = - log p, that did it for me. After that point I had no further confidence in Mr Schneider or those who blindly followed him. GEM of TKI
kairosfocus
June 13, 2011 at 05:40 AM PDT
PS: Remember, Shannon's context was to use the well known fact of typical patterns of symbol frequencies to measure information, on the grounds that the rarer symbols were more informative. A log frequentist probability metric was then fairly obvious, as had been suggested by Hartley maybe 20 years previously. (I gather that Morse consulted printers on the known frequency of occurrence of letters in English text when he constructed his code, e.g. E is about 1/8 of typical English text.) Just to settle things, here is a clip from the introduction to the 1948 paper:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.
He shortly goes on to say:
The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log2 2^N = N . . . . a decimal digit is about 3 1/3 bits. A digit wheel on a desk computing machine has ten stable positions and therefore has a storage capacity of one decimal digit.
We thus see here the direct use of both statistical considerations and the physical features of storage devices as measures of information. By Section 2 he is saying:
We now consider the information source. How is an information source to be described mathematically, and how much information in bits per second is produced in a given source? The main point at issue is the effect of statistical knowledge about the source in reducing the required capacity of the channel, by the use of proper encoding of the information. In telegraphy, for example, the messages to be transmitted consist of sequences of letters. These sequences, however, are not completely random. In general, they form sentences and have the statistical structure of, say, English. The letter E occurs more frequently than Q, the sequence TH more frequently than XP, etc. The existence of this structure allows one to make a saving in time (or channel capacity) by properly encoding the message sequences into signal sequences. This is already done to a limited extent in telegraphy by using the shortest channel symbol, a dot, for the most common English letter E; while the infrequent letters, Q, X, Z are represented by longer sequences of dots and dashes.
In short, from the outset the statistical properties of symbol strings were a part of the considerations, and were integrated into the theory as Taub and Schilling summarise along with many others. He continues:
We can think of a discrete source as generating the message, symbol by symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process.3 We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source . . . . A more complicated structure is obtained if successive symbols are not chosen independently but their probabilities depend on preceding letters. In the simplest case of this type a choice depends only on the preceding letter and not on ones before that. The statistical structure can then be described by a set of transition probabilities pi( j), the probability that letter i is followed by letter j. The indices i and j range over all the possible symbols. A second equivalent way of specifying the structure is to give the “digram” probabilities p(i; j), i.e., the relative frequency of the digram i j . . . . The zero-order approximation is obtained by choosing all letters with the same probability and independently. The first-order approximation is obtained by choosing successive letters independently but each letter having the same probability that it has in the natural language.5 Thus, in the first-order approximation to English, E is chosen with probability .12 (its frequency in normal English) and W with probability .02, but there is no influence between adjacent letters and no tendency to form the preferred digrams such as TH, ED, etc. In the second-order approximation, digram structure is introduced. After a letter is chosen, the next one is chosen in accordance with the frequencies with which the various letters follow the first one. This requires a table of digram frequencies pi( j). In the third-order approximation, trigram structure is introduced. Each letter is chosen with probabilities which depend on the preceding two letters. ++++++++++ F/N 5: 5Letter, digram and trigram frequencies are given in Secret and Urgent by Fletcher Pratt, Blue Ribbon Books, 1939. Word frequencies are tabulated in Relative Frequency of English Speech Sounds, G. Dewey, Harvard University Press, 1923.
This suffices to show that the approaches I have spoken of are in fact part and parcel of Shannon's approach. In addition, he recognises the centrality of meaningfulness but is focussed on aspects tied closely to sending information down channels, an engineering task. Other participants in this discussion and I have already spoken to how the context of meaningfulness can be added back in, through following the ID concept of zones of functional configurations from the field of possibilities. GEM of TKI
kairosfocus
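As a small illustration of the frequentist approach just described (a sketch only; the letter frequencies below are rough, commonly cited ballpark values rather than figures from the thread):

```python
import math

# Rough first-order frequencies for a few English letters (illustrative values only).
freqs = {"E": 0.12, "T": 0.09, "A": 0.08, "Q": 0.001, "Z": 0.0007}

def surprisal(p):
    # I = -log2(p): the rarer the symbol, the more informative its occurrence.
    return -math.log2(p)

for letter, p in freqs.items():
    print(f"{letter}: {surprisal(p):.2f} bits")

# Averaging over a full distribution gives Shannon's H = -sum(pi * log2 pi);
# for 26 equiprobable letters this peaks at log2(26), about 4.7 bits per symbol.
print(math.log2(26))
```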
June 13, 2011 at 05:29 AM PDT
Kairosfocus:
Pardon, but it is a commonplace of communications theory that one characterises the pattern of symbol usage in typical messages, to infer to the probabilities of symbols on a sampling basis. It is not always accurate, e.g. that novel (was it the 1930s?) that was cleverly designed not to have in it a single E.
Yes, and that's what I'm saying! That you need data external to the message to ascertain the information content of the message. You can't just look at the message alone and determine whether it has information content or not. You need independent information. That's all I was saying. You seem to agree - but at one point you seemed to be saying that you could look at a single message (perhaps a SETI message) and determine what if any of it was signal and what noise. I don't think you can, at least not using Shannon Information. Presumably that's why you need CSI. Except I think that has problems too :) But I may have misunderstood what you were saying earlier, in which case I apologise. We seem to agree wrt Shannon.
Elizabeth Liddle
June 13, 2011 at 05:11 AM PDT
Dr Liddle: Pardon, but it is a commonplace of communications theory that one characterises the pattern of symbol usage in typical messages, to infer to the probabilities of symbols on a sampling basis. It is not always accurate, e.g. that novel (was it the 1930s?) that was cleverly designed not to have in it a single E. But it is a general and typical approach, and indeed it is closely tied to the wider practices of statistics. Why then the belabouring as though I am saying something new that needs special demonstration? Let me cite again from Taub and Schilling, Principles of Communication Systems, 2nd edn (McGraw Hill, 1986), p. 512, Sect. 13.2, a work I have used in one edition or another for almost 30 years:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2 [My NB: generally -- cf here where I use F R Connor's approach, detected through statistical studies of messages, e.g. E is about 1/8 of typical English text], . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My NB: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 (1/pk) (13.2-1) [i.e. Ik = - log2 pk, in bits]
This is all of course in the context of an observed communication system -- an irreducibly complex entity that reeks of design, and of a pattern that allows us to recognise messages as distinct from noise and the symbols in the messages [cf previous links on the eye diagram challenge that contrasts natural noise with intelligent, intentional signal] with some assurance that what is detected is what was sent. (The Connor derivation addresses this complexity.) In short, once we are in the context of signals, codes, and systems like this, there is overwhelming evidence of design to begin with. In that context, the point of message as opposed to noise is that the signal has characteristics that normally reflect purpose and are distinct from the sort of randomness that messes up amplitude and timing etc. Notice, too, how above I pointed to a source of noise in the comms system used in protein assembly: the generic nature of the coupler to the AA in the tRNA, which leads to the observation of error-correcting edit functionality. This is yet another strong index of design. So, no, I am not in fundamental disagreement with UB or Mung. Signals, comm systems that use them, and in particular symbols are -- per massive direct observation of their cause -- intentional, and the fact that a flat, uncorrelated distribution of pi would give a peak value for the H metric is simply an artifact of the mathematics. The "divide the other side" gambit fails. And, when it comes to DNA and AA's, we have entire families of proteins and classes of organism to look at, a considerable population, and longstanding empirical justification for code assignments as well. In addition, we see that the AA string is based on a generic coupler, the NH2 and COOH bonding, just as in nucleic acids the bases are joined with a generic sugar-phosphate coupler. The information is stored in key-lock configs of side-chains [as von Neumann proposed for his kinematic replicator -- rods with amplitude storing digital information], but the sequencing allows us to deduce that we are dealing with a 4-state digital code. As has been known and publicly stated since the 1950s, and as was decoded across the 1960s. Indeed, we have the situation where Venter used the code to put in his "watermark" signature. Other researchers have loaded what normally would be a stop codon with AAs and have chained novel chains as a result. This is symbolic, purposeful language with the possibility of reprogramming. We thus know the info storage capacity per symbol, and we know how the codes used make use of that capacity. I find it tiresome to have to be going over this ground again and again and again as in recent days and weeks, as it is so much a commonplace. I think it is you who would really need to carry the burden of warranting the claim that DNA is not a digital coded molecule that stores up to 2 bits per GCAT symbol: information that is used to make proteins by a step-by-step assembly process using mRNA, tRNA, ribosomes and enzymes etc. That this seems to be such a struggle to reach agreement on is itself strong evidence of the implications of this pattern of credibly established and commonplace facts. These are not pretended direct observations of a remote past but things we can see through well established practices in the present, now culminating in the ongoing sequencing and publishing of the genomes for organism after organism, including us. GEM of TKI
kairosfocus
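To make the Ik = -log2(pk) definition quoted above concrete, a minimal worked sketch (Python; the probabilities are the round illustrative figures already mentioned in the thread):

```python
import math

def I_k(p_k):
    # Ik = log2(1/pk) = -log2(pk): information carried by a correctly received
    # message (or symbol) of probability pk.
    return -math.log2(p_k)

print(I_k(1 / 8))    # 3.0 bits   -- e.g. the letter E at roughly 1/8 of English text
print(I_k(1 / 4))    # 2.0 bits   -- one of four equiprobable DNA bases (G, C, A, T)
print(I_k(1 / 20))   # ~4.32 bits -- one of twenty equiprobable amino acids
```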
June 13, 2011 at 05:00 AM PDT
kairosfocus (continuing with #315...) You write:
4 –> In the case of D/RNA and AA’s, we know we have a flexible string structure capable of storing 4- state [2 bits] or 20-state [4.32 bits] information per symbol.
OK.
5 –> So, we have a baseline of storage potential. In this context, one that is hard wired into the chemistry: essentially any base can go next to essentially any base, and the same holds for AA’s.
Agreed re bases; not sure that it's true for AAs (suspect some combos don't result in viable proteins, but I don't know).
6 –> In the case of AA’s, in life forms, the actual sequences used are specified by base sequences in DNA thence mRNA courtesy the ribosome as an automated assembly plant. mRNA can and does sequence essentially any AA followed by any AA.
OK.
7 –> That from the field of possibilities, in life forms certain sequences are chosen, has to do with constraints of purpose or onward utility, not the chemistry of chaining.
Yes indeed, with the caveat that I think we can substitute "teleonomy" for "purpose". A persisting system is constrained by what facilitates persistence; in other words, in a persisting system, we would expect to find elements and mechanisms that promote persistence, not dissolution.
8 –> In short, the islands of function where we have properly folding, agglomerating, activated AA sequences that do the work of life are the sequences USED, as opposed to the sequences that are POSSIBLE.
Yes, indeed, except that I am wary of the metaphor, "islands". A sequence that is USED may adjoin a sequence that is POSSIBLE but not USED.
9 –> To get to these functional AA sequences [which are deeply isolated in the field of possible AA chains], we have a symbolic mapping in the D/RNA system, with 3-letter codons specifying 64 states mapped onto the 20 AA states and certain procedural instructions: start chain, stop chain.
Well, as I've said above, I'm wary of the term "symbol" in this context. But, in the sense that other systems could (perhaps) in theory exist (perhaps in some unrelated part of the universe) in which the reading system was different, and resulted in a different mapping, then, maybe it's OK. I'll try not to let it hold me up. I'm more concerned about "isolated" but then that is the point of contention :)
10 –> So, it is immediately seriously arguable that the relevant number of possibilities per element are 4 and 20 for D/RNA and AA’s, respectively. Certainly, that holds from the getting to the first functional case end of things.
OK-ish.
11 –> Now of course — as is true of any real-world code pattern — there is not in practice a perfectly even distribution of the symbols as used. So, we can take a frequentist approach on the codes as observed, and infer from the information-carrying capacity that the USED variability is somewhat different, and yields a lower number of bits per element.
Indeed there is not an even distribution of the codes used, nor of the combinations of codes used. This is a crucial point.
12 –> Sounds plausible, but this comes at a surprisingly high price. Namely, this directly implies design of a language, not the outworkings of blind chemistry.
No, this I believe is an error. Darwin proposed something very simple but very important, which was, in effect, a series of filters, each of which alters the pdf of what is found at the next, and which is specifically biased towards sequences that promote persistence (survival and replication) of the sequence. This hugely alters the case. We are not talking about a flat, or near-flat, distribution that has no relationship with "teleonomy" (the constraints on persistence). The pdfs are serially forced through a series of filters that "select" the sequences that best persist. And lest this seem tautological (which it almost is, but only because phrasing it like that makes it more complicated than it is), a simpler way of saying it is: sequences that promote persistence will tend to persist while sequences that undermine persistence will tend to be lost.
13 –> For, it is implicitly inferring that it is not the chemistry that controls but the grammar, and the fitness for onward function.
Yes, exactly. But that does not allow us to infer intelligent design, merely the persistence of sequences that promote "onward function", by definition.
14 –> So, there is a choice: if we look to the chemistry, we see a degree of flexibility of possibilities that immediately leads us to having no preference for any particular sequence patterns; if we look to the frequencies of symbols in observed cases, we are looking in effect at a symbol assignment on fitness for function.
Well, yes, but that is explained by "natural selection" acting on "variance", or, to put it as I prefer, the tendency of sequences with phenotypic effects that raise the probability of reproduction in the current environment to propagate through the population.
15 –> In any case, the chemistry is an independent source of information [!] on probability distribution functions, i.e. we have no in-principle reason to reject a flat-random pattern as a first probability distribution function estimate; especially if one is appealing to blind chemistry leading to random walks and trial and error as cause.
Nor any reason to assume it. Generally flat distributions are rare; Gaussian distributions much more common (as the Central Limit Theorem suggests). Time-to-failure distributions even more common. Essentially natural selection samples the right-hand tail of time-to-failure distributions, and then resamples the previous sample in each generation.
16 –> If instead we look at the frequency exhibited by the population used in actual bio-functional molecules, that is not circular, it is inferring that functional messages in the system make use of symbols — per empirical investigations — in certain ways. So, we may access that empirical study to estimate the a priori probability of such symbols in any given case.
Not without ignoring the inbuilt sampling I have mentioned. When we include that, we get a very different pdf, I would argue, and one that hugely increases the value of p(T|H) (not to mention rendering it effectively incalculable in any given case).
17 –> This is a typical pattern in statistical investigations, where we study cases that seem to be typical to infer to the properties of the population as a whole. Sure, things can go wrong, but so long as we keep an open mind and recognise the inherent provisionality, we have good mathematical grounds for reasonable confidence. (So, to single out a particular case of a general pattern and to criticise it as though that were somehow a special and unique problem warranting specific suspicion, is to slip into selective hyperskepticism.)
Yes indeed. That's why I'm not bothered that much by the Very Big Number on the left of the CSI formula. It's the Very Small Number on the right that I think is wrong :)
18 –> In this context, Shannon worked out an average info per symbol metric that takes into account the different observed frequencies of occurrence of symbols: H = – [SUM on i] pi log pi
Yes, but we still need p, and the calculation of p is what is at issue.
19 –> So, we have two valid approaches, and we have comparable results up to an order of magnitude, certainly in the aggregate. 20 –> Namely, the functional information in the living cell, in aggregate can be estimated on more or less standard techniques, and if we do so, we see that we have quite enough functionally specific and complex information to infer that the best explanation for what we observe is design.
Well, I disagree, profoundly, for the reasons above. The "more or less standard techniques" completely ignore the filtering process known as "natural selection", and this is precisely why I think we cannot distinguish (by this means at any rate) between the products of natural selection and the products of design. Design may indeed constrain the pdfs of the components of the code, but so does natural selection, because of the simple truism that sequences that promote their own persistence will tend to persist, while those that don't, won't. This is the heart of Darwin's insight, and whether or not ID is true, it renders the inference of ID from the kind of functional sequences we observe in life unjustified. At least, that is the argument I am making :)
21 –> Shooting the messenger is not going to change the message he has already delivered. GEM of TKI
No, but finding the error in the message may alter the conclusions we draw :)
Elizabeth Liddle
June 13, 2011 at 04:50 AM PDT
F/N: Significance of the UPB. It marks a threshold where a random-walk-based search that starts at an arbitrary initial point will be maximally unlikely to find an island of function, for failure of adequate resources to get enough of a sample to be credibly different from no sample. It has already been shown that the average search algorithm is this one. And, one will need a credible source of unintelligent [which BTW is unintentional by definition] bias that puts one in the near vicinity of such islands if one is going to be able to argue that chance and necessity can get the ball rolling, so to speak.
kairosfocus
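A back-of-the-envelope sketch of the sampling-fraction reasoning behind this threshold (Python; the event-count bounds are the rounded figures used elsewhere in this thread, taken here as assumptions):

```python
from math import log10

# Assumed upper bounds on the number of search events (from the thread's estimates):
solar_events_log10  = 102   # ~10^102 events for the solar system
cosmic_events_log10 = 150   # ~10^150 events for the observed cosmos

# Sizes of the configuration spaces for 500-bit and 1,000-bit strings:
configs_500_log10  = log10(2) * 500     # ~150.5, i.e. ~3 x 10^150 configs
configs_1000_log10 = log10(2) * 1000    # ~301.0, i.e. ~1.07 x 10^301 configs

# Largest fraction of each space a blind search could possibly sample:
print(configs_500_log10 - solar_events_log10)    # ~48.5 -> roughly 1 in 10^48
print(configs_1000_log10 - cosmic_events_log10)  # ~151  -> roughly 1 in 10^150 or so
```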
June 13, 2011 at 04:04 AM PDT
BTW, the above was a response to #315; sorry, I should have made that clear. Crossposted with #316.
Elizabeth Liddle
June 13, 2011 at 03:22 AM PDT
So you are saying, and in this you appear to disagree with Mung and myself, and possibly with Upright BiPed, that the value p, which you need in order to quantify "surprise" can be estimated from the frequency of each character within the message? If not, and you say: "In that context, so soon as we can characterise a typical symbol frequency distribution, we are well on the way", then that frequency distribution must be estimated from some other information source, which was my point: if you are only estimating your pdfs from the message, then you have no way of knowing whether the message contains any useful information or not - you have no distribution under the null hypothesis with which to compare your message. To take Mung's example above, of a string of Ones. If we take the pdf from the message, the message contains zero bits of information, because the probability of a One (estimated from the message) is 1. However, if we know that the pdf under the null is equiprobable Ones and Zeros, then the message contains 100 bits of information. The point being that the information content of the message is a function of your priors concerning the distribution from which possible messages are drawn. Without those priors, you can't compute the information content of the message. To quote Shannon himself (1948):
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely.
(Italics as in the original). In other words, for Shannon information, firstly, meaning is irrelevant; and secondly, the estimate of information content requires knowledge of the set from which possible messages are drawn. This is why I demurred about using Shannon Information as a measure of information where information is taken as having meaning, and also as a measure of meaning when we do not know anything about the set of possible messages. And why I disagreed with you when you said you could distinguish signal from noise from inspection of the message alone. Do you see my point?
Elizabeth Liddle
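A minimal sketch of the point about priors (Python; the all-ones string and the equiprobable null are the examples from the exchange above, and the function name is my own):

```python
import math
from collections import Counter

def bits_under_pdf(message, pdf):
    # Total information of a message, given a probability distribution over its symbols.
    return sum(math.log2(1 / pdf[ch]) for ch in message)

msg = "1" * 100

# PDF estimated from the message itself: P('1') = 1, so the message carries 0 bits.
self_pdf = {ch: n / len(msg) for ch, n in Counter(msg).items()}
print(bits_under_pdf(msg, self_pdf))   # 0.0

# PDF known independently (the null): equiprobable 0s and 1s, so it carries 100 bits.
null_pdf = {"0": 0.5, "1": 0.5}
print(bits_under_pdf(msg, null_pdf))   # 100.0
```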
June 13, 2011 at 03:20 AM PDT
F/N: Some clarifications following up from the above:
[KF:] P(T|H) is a probability metric on a random walk hyp, perhaps with a bias. [EL:] And “natural selection” can best be expressed as such a bias. But drift is also a factor, and the two interact.
1 --> Question-begging: the issue is not hill climbing within an island of function, where differential reproductive success allows for specialisation to niches, but how to ARRIVE at islands of function across vast seas of non-functional configs where there is no reproductive success possible at all on the relevant body plan.
2 --> Bias, drift and hill-climbing to wander about within an island of function have not addressed the focal issue for FSCI: getting to such islands of function in the face of the field of possibilities for strings of that degree of complexity.
3 --> Let me be direct: (i) can you account for the origin of novel, FSCI-rich body plans in the face of the implications of codes and the requisites of functional folding (given the freedom in the chemistry), namely that only very specific sequences will work? (ii) Can you show empirically that once any one initial body plan, e.g. a unicellular one, has been arrived at, all other body plans are connected by small incremental changes within reach of the sort of drift and variation we have seen: a few point mutations, or some duplication and variation of a string of DNA, etc.? (iii) If so, what is it, and what then is the answer to the remarks of Gould et al., e.g. here? [Please note the three different quotes or clusters of quotes.] (iv) Similarly, how do you then account for, say, the Cambrian fossil life revolution, and on what specific empirical evidence? Next:
the probabilities have to be computed (if they can be – we simply do not have enough facts, in practice, to do it) as contingent probabilities. Clearly the probability of a given strand of DNA from a living organism arising by chance is very tiny. But that is not the null. The null is that a much earlier, chance sequence happened to give rise to something that increased the strand’s probability of being transmitted (replicated) and thereby created an enhanced number of opportunities for a subsequent enhancement of the original something to occur. This lies at the heart of Darwinian evolution.
4 --> Begs the question. Consistently, you argue about moving around in an island of function, without grounding how you can get to the island of function in the first place.
5 --> This issue holds for first cell-based life -- the ONLY observed biological life we have -- and it holds for subsequent, more complex body plans, which have to be embryologically feasible in genes expressed early in development if there is to be a new body plan at all, as that is when it is expressed.
6 --> The consistency with which Darwin supporters cannot seem to see this problem tells me that there is a problem of paradigm indoctrination here that blinds minds to what should otherwise be plain and obvious.
7 --> Let us remember Kuhn's warning that a paradigm is not only a way to see but a way to be blinded to what it does not see. In this case, the problem of complex functional organisation that has to be expressed at early stages of development of a body plan, for it to be viable at all and reproduce.
8 --> The injection of a bias on function and advantage would be relevant in the case of an already functional body plan, but that is not where we are starting; we are starting with the chemistry of chaining, and with the situation of a complex information system that expresses the coded information in that chain of bases or AAs.
9 --> In the warm little pond or the like, the chaining is in the face of chirality, cross-interference, and the unfavourable thermodynamics of the relevant molecules, all attested to by the assembly-line processes that are used to build the molecules in the living cell. These are not thermodynamically favourable, and have to be paid for by using energy-rich molecules to drive the process forward. Not to mention the need to spontaneously invent and assemble systems that express symbolic codes, step-by-step execution algorithms with halting, and assembly lines to make the things effective. The only empirically supported source for such is intelligence, and that is backed up by the relevant analysis of config spaces. Our observed cosmos just does not have enough resources in it to search enough of such spaces to make happenstance a reasonable explanation.
10 --> In the case of our proposed ancestral organism, there would need to be a complete chain of simple favourable steps to move from so-called primitive to complex body plans. But the jump to a new body plan credibly requires 10 - 100+ mn bits of information that has to come in the form of regulatory and instructional codes.
11 --> In addition, one needs enough time and population to fix these, then go on to the next stage, all in the face of the unfavourable balance of mutations, where marginally damaging mutations are also much more likely than those of incremental improvement and are quite likely to get fixed. I.e. we see reason to infer net deterioration, embrittlement and ultimate breakdown of the genome and life function, not progress, on this model.
12 --> This is of course the genetic entropy challenge, and it is equivalent to the problem of the compulsive gambler: he may win small or big on occasion, but the odds are net unfavourable, so on average he is being ruined all along. GEM of TKI
kairosfocus
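A toy simulation of the gambler's-ruin analogy in point 12 (Python; the starting bankroll and the 48% win probability are arbitrary illustrative choices, not a model of mutation rates):

```python
import random

def gamble_until_ruin(bankroll=100, p_win=0.48, max_rounds=100_000):
    # Unit stakes with a slightly unfavourable win probability: occasional wins,
    # but the expected trend is steadily downward toward ruin.
    rounds = 0
    while bankroll > 0 and rounds < max_rounds:
        bankroll += 1 if random.random() < p_win else -1
        rounds += 1
    return rounds, bankroll

random.seed(1)
for _ in range(3):
    print(gamble_until_ruin())   # typically ends at bankroll 0 long before max_rounds
```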
June 13, 2011 at 02:19 AM PDT
Dr Liddle: This caught my eye:
My point was that in order to know how much Shannon information is in a message, in any useful sense, you have to have independent information about the pdfs for each character in the message. If you don’t, all you have is the pdf from the message itself, and, in the case of a message from 100 coin tosses, that will be more or less 100 bits. But that’s a useless measure, because the pdf wasn’t derived independently.
1 --> Nope: we first know that we have a metric by which we can turn degree of surprise within a system of symbols into a measure of how informative a string of such symbols is: I = log (1/p)
2 --> In that context, as soon as we can characterise a typical symbol frequency distribution, we are well on the way.
3 --> Directly related, information systems have a structure, and we may directly inspect what in them stores symbols, and how they work, which gives us a baseline on possibilities. For instance, if something does not allow for contingencies, it cannot store information.
4 --> In the case of D/RNA and AA's, we know we have a flexible string structure capable of storing 4-state [2 bits] or 20-state [4.32 bits] information per symbol.
5 --> So, we have a baseline of storage potential. In this context, one that is hard-wired into the chemistry: essentially any base can go next to essentially any base, and the same holds for AA's.
6 --> In the case of AA's, in life forms, the actual sequences used are specified by base sequences in DNA thence mRNA, courtesy the ribosome as an automated assembly plant. mRNA can and does sequence essentially any AA followed by any AA.
7 --> That, from the field of possibilities, in life forms certain sequences are chosen has to do with constraints of purpose or onward utility, not the chemistry of chaining.
8 --> In short, the islands of function where we have properly folding, agglomerating, activated AA sequences that do the work of life are the sequences USED, as opposed to the sequences that are POSSIBLE.
9 --> To get to these functional AA sequences [which are deeply isolated in the field of possible AA chains], we have a symbolic mapping in the D/RNA system, with 3-letter codons specifying 64 states mapped onto the 20 AA states and certain procedural instructions: start chain, stop chain.
10 --> So, it is immediately seriously arguable that the relevant numbers of possibilities per element are 4 and 20 for D/RNA and AA's, respectively. Certainly, that holds from the getting-to-the-first-functional-case end of things.
11 --> Now of course -- as is true of any real-world code pattern -- there is not in practice a perfectly even distribution of the symbols as used. So, we can take a frequentist approach on the codes as observed, and infer from the information-carrying capacity that the USED variability is somewhat different, and yields a lower number of bits per element.
12 --> Sounds plausible, but this comes at a surprisingly high price. Namely, this directly implies design of a language, not the outworkings of blind chemistry.
13 --> For, it is implicitly inferring that it is not the chemistry that controls but the grammar, and the fitness for onward function.
14 --> So, there is a choice: if we look to the chemistry, we see a degree of flexibility of possibilities that immediately leads us to having no preference for any particular sequence patterns; if we look to the frequencies of symbols in observed cases, we are looking in effect at a symbol assignment on fitness for function.
15 --> In any case, the chemistry is an independent source of information [!] on probability distribution functions, i.e. we have no in-principle reason to reject a flat-random pattern as a first probability distribution function estimate; especially if one is appealing to blind chemistry leading to random walks and trial and error as cause.
16 --> If instead we look at the frequency exhibited by the population used in actual bio-functional molecules, that is not circular; it is inferring that functional messages in the system make use of symbols -- per empirical investigations -- in certain ways. So, we may access that empirical study to estimate the a priori probability of such symbols in any given case.
17 --> This is a typical pattern in statistical investigations, where we study cases that seem to be typical to infer to the properties of the population as a whole. Sure, things can go wrong, but so long as we keep an open mind and recognise the inherent provisionality, we have good mathematical grounds for reasonable confidence. (So, to single out a particular case of a general pattern and to criticise it as though that were somehow a special and unique problem warranting specific suspicion is to slip into selective hyperskepticism.)
18 --> In this context, Shannon worked out an average info-per-symbol metric that takes into account the different observed frequencies of occurrence of symbols: H = - [SUM on i] pi log pi
19 --> So, we have two valid approaches, and we have comparable results up to an order of magnitude, certainly in the aggregate.
20 --> Namely, the functional information in the living cell, in aggregate, can be estimated by more or less standard techniques, and if we do so, we see that we have quite enough functionally specific and complex information to infer that the best explanation for what we observe is design.
21 --> Shooting the messenger is not going to change the message he has already delivered. GEM of TKI
kairosfocus
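As a sketch of the contrast drawn in points 11 and 18 between raw storage capacity and the average information per symbol under observed frequencies (Python; the skewed base frequencies below are made-up illustrative numbers, not data from any genome):

```python
import math

def entropy_bits(pdf):
    # H = -sum(pi * log2 pi): average information per symbol.
    return -sum(p * math.log2(p) for p in pdf.values() if p > 0)

# Capacity, assuming a flat distribution over the four bases: log2(4) = 2 bits per symbol.
flat = {"G": 0.25, "C": 0.25, "A": 0.25, "T": 0.25}

# A skewed, purely hypothetical usage pattern gives a lower per-symbol average.
skewed = {"G": 0.35, "C": 0.35, "A": 0.15, "T": 0.15}

print(entropy_bits(flat))     # 2.0 bits per base (capacity)
print(entropy_bits(skewed))   # ~1.88 bits per base (frequency-weighted average)
```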
June 13, 2011 at 01:34 AM PDT
We seem to be arguing at cross-purposes, Mung. It's probably my fault, but it's getting late, and I'll be offline for a bit now. But the long and short of it is I'm not disagreeing with you. My point was that in order to know how much Shannon information is in a message, in any useful sense, you have to have independent information about the pdfs for each character in the message. If you don't, all you have is the pdf from the message itself, and, in the case of a message from 100 coin tosses, that will be more or less 100 bits. But that's a useless measure, because the pdf wasn't derived independently. So we all agree: Shannon information is only a useful measure if we have some independent information about the source. We can't figure it out from the message alone. Or, if we do, we get a silly answer. I'm sorry if I appeared to suggest otherwise. See you guys probably in a few days. It's been a really interesting conversation so far. Cheers Lizzie
Elizabeth Liddle
June 12, 2011 at 03:48 PM PDT
For instance, you say that my statement that “…on that definition, any stochastic process creates information” is false. But you just gave an example of a stochastic process that created information, not a stochastic process that didn’t.
I did no such thing. :) In fact, I showed just the opposite. (See: "On the Information Content of a Randomly Generated Sequence"; "On the Meaning of Shannon Information"; "On the Information Content of a Randomly Generated Sequence (cont.)".) The sequence of 0's and 1's representing true/false answers to the questions posed was by no means stochastic or random. It's hard for me to conceive of how a randomly generated sequence of symbols sent in a message could convey information. If Upright BiPed was asking questions about the configuration of Heads and Tails in the sequence you generated by tossing a coin, and in response you sent him a randomly generated sequence of 0's and 1's, you would have been sending him nonsense, not 100 bits of Information. Let me put it another way. You've tossed a coin 100 times and recorded the sequence. Upright BiPed wants to obtain information about the sequence. He asks a series of questions. Q1. Was the result of the first coin tossed a heads? Now let's say you have a transmitter with three buttons. Press the first button and it sends a 0. Press the second button and it sends a 1. Press the third button and either a 0 or a 1 is sent with equal probability. If in response to his first question you strike button three, your claim is that you've sent him one bit of Shannon Information. And if you hit button three in response to every one of his 100 questions, you would claim you've sent him 100 bits of Shannon Information. So he asks questions about the configuration of the sequence of heads/tails, and each time you send him a 0 or a 1, but the symbol sent has nothing at all to do with the actual head or tail that was recorded. You claim you've sent him information. I say you haven't.
Mung
June 12, 2011 at 03:26 PM PDT
I hope not, Mung. That's why I tried to pin down something in my response to you above. Operationalizing hypotheses is a tedious but absolutely necessary part of scientific methodology (probably rather like trying to write a good piece of legislation). But the aim is to find stuff out, not claim that we can't know anything! And it's perfectly doable, just, well, tedious.
Elizabeth Liddle
June 12, 2011 at 03:05 PM PDT
I’m not going to agree or disagree without – you guessed it – an operational definition of “about something”.
Well, it's at least comforting to know I wasn't imagining things. So once we start talking about about without knowing what we are talking about, are we then going to talk about the words that tell us about what it means for something to be about something, and claim that we're now involved in circular reasoning and can never therefore know anything about anything at all, because we can't know anything about about without appealing to what it means for something to be about something?
Mung
June 12, 2011 at 02:53 PM PDT
I'm not going to agree or disagree without - you guessed it - an operational definition of "about something". However, I think Upright BiPed and I may have come to something close. I would certainly agree (indeed, it's a point I've been making for some time) that in order to quantify Information (Shannon Information) we need to know what additional Information is available to the receiver regarding the probability distribution of the characters in the message under the null hypothesis of "no information". That way, we can conclude that if the probability distribution of the characters, and their sequence, in the message is improbable under the null, the message is "about" something, i.e. that it is informative. Does that count?
Elizabeth Liddle
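One way to make this operational definition concrete is a sketch along these lines (Python; the divergence statistic and the threshold are my own illustrative choices, and this only tests first-order symbol frequencies, not sequence structure):

```python
import math
from collections import Counter

def kl_bits(p, q):
    # Kullback-Leibler divergence D(p||q) in bits: how far the observed
    # distribution p departs from the null distribution q.
    return sum(pi * math.log2(pi / q[ch]) for ch, pi in p.items() if pi > 0)

def looks_informative(message, null_pdf, threshold_bits=50):
    # Flag a message as "about something" if its observed symbol statistics
    # are highly improbable under the null-of-no-information distribution.
    empirical = {ch: n / len(message) for ch, n in Counter(message).items()}
    return len(message) * kl_bits(empirical, null_pdf) > threshold_bits

null = {"0": 0.5, "1": 0.5}
print(looks_informative("01" * 500, null))   # False: frequencies match the null exactly
print(looks_informative("1" * 1000, null))   # True: wildly improbable under the null
```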
June 12, 2011 at 01:54 PM PDT
Elizabeth, Information, in order to be Information, must be about something. Do you agree or disagree?
A fundamental, but somehow forgotten, fact is that information is always information about something. (The Mathematical Theory of Information)
Sorry, but there's still some doubt in my mind about whether you accept this as true.
Mung
June 12, 2011 at 01:25 PM PDT
Kairosfocus:
Pardon my mis-speaking, I think I was tired. GEM of TKI
No problem :) I mis-speak (and mistype) all the time. My error-monitoring system is aging, and my eyesight is not what it was, either! Peering through the only bit of my glasses with the right focal length for a computer screen doesn't help. Oh for the wisdom of age with the vigour of youth :) Regarding the UPB: I'm not especially concerned about the setting of what I am calling "alpha". As I said, I would accept a much lower threshold as evidence against the null. What I am more concerned about is the computation of the probability of the observed pattern under the null (allowing for correction for multiple hypotheses, namely the number of similarly compressible patterns). You say:
P(T|H) is a probability metric on a random walk hyp, perhaps with a bias.
And "natural selection" can best by expressed as such a bias. But drift is also a factor, and the two interact.
But, once we reduce it to the information metric, we can also come back to it from the direct observation of information storage or messages in that storage area. DNA is a direct info store, and so are amino acid chains. Subtler cases come with functionally organised entities, where we can infer information stored in the functional organisation based on perturbation to get tolerances and the number of yes/no structured questions to specify the resulting function. This is implicit in, say, the engineering drawings for a machine.
I agree that DNA can be regarded as an "information store", as can a great many other components of living organisms. But the null hypothesis for the patterns that encode that information has to include the consequences of cumulative acquisition. In other words, the probabilities have to be computed (if they can be - we simply do not have enough facts, in practice, to do it) as contingent probabilities. Clearly the probability of a given strand of DNA from a living organism arising by chance is very tiny. But that is not the null. The null is that a much earlier, chance sequence happened to give rise to something that increased the strand's probability of being transmitted (replicated) and thereby created an enhanced number of opportunities for a subsequent enhancement of the original something to occur. This lies at the heart of Darwinian evolution. However, as you pointed out, it does not explain the origin of the strand, or the mechanisms by which certain sequences of such strands were able to enhance the probability of the strand being replicated. That is what I hope to address in my proposed project.
Elizabeth Liddle
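A toy illustration of the "contingent probabilities" point above (Python; this is a deliberately crude, target-driven Weasel-style sketch, which, as critics of such demonstrations point out, presupposes an explicit target, so it is offered only to show how serial filtering reshapes the search distribution, not to settle the argument):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def cumulative_search(pop_size=100, seed=0):
    # Each generation keeps the best-matching candidate: a serial filter that
    # changes the distribution from which the next generation is drawn.
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        candidates = [current] + [mutate(current) for _ in range(pop_size)]
        current = max(candidates, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        generations += 1
    return generations

print(cumulative_search())   # typically a few hundred generations, versus the
                             # ~27^28 draws expected for a one-shot blind search
```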
June 12, 2011 at 11:16 AM PDT
F/N: Since there is an assertion of a blunder, it seems I need to again point out the basis for the Dembski-type bound on the number of possible events. Pardon, but I find it a little tiring to see "corrections" that are not correct.
1 --> It is commonly estimated that there are some 10^80 particles in the observable cosmos, which we take as a crude estimate of the number of atoms. (This is already conservative.)
2 --> The Planck time is about 5*10^-44 s, which is rounded down to 10^-45 s. There are about 10^20 P-times in the duration of a strong-force nuclear interaction, and about 10^30 in that of a fast ionic chemical interaction [organic reactions are MUCH slower, with ms or even s not unlikely].
3 --> The number of seconds since the big bang is about 10^17, and the time from BB to heat death may reasonably be put at about 50 mn times this, 10^25 s.
4 --> The number of states possible for 10^80 atoms, in 10^25 s, at 10^45 states/s is thus 10^150.
5 --> This is an upper bound on the number of events in the observed cosmos.
6 --> Similar estimates for our solar system since the big bang give an upper bound of order 10^102 possible events for 10^57 atoms.
7 --> You will see this is independent of Seth Lloyd's numbers and his framework of conversions to get 10^90 bits [i.e. this is the scope of the storage register equivalent to the observed cosmos . . . ] carrying out 10^120 operations. [One can take it that it is atoms that are acting and taking up states in the relevant context of events. NB: Dark matter does not seem to be conventionally atomic, based on observations of its behaviour, so it is not relevant to the calculation.]
8 --> By comparison, 500 bits will have ~ 3*10^150 possible configs, and 1,000 bits will have ~ 1.07*10^301 possible configs.
9 --> The solar system will only scan up to 1 in 10^48 of the number of configs for 500 bits, and the observed cosmos will scan up to 1 in 10^150 or so of those for 1,000 bits.
_________
The Dembski-type bound is reasonable. GEM of TKI
PS: For a sounder analysis than was linked just above, I suggest -- again -- Abel's universal plausibility metric paper.
kairosfocus
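The exponent arithmetic in points 4 and 8 can be checked in a few lines (Python; the inputs are the rounded estimates given in the comment above):

```python
from math import log10

# Point 4: 10^80 atoms x 10^45 states per second x 10^25 seconds
print(80 + 45 + 25)        # 150 -> ~10^150 possible atomic-level events

# Point 8: configuration counts for 500-bit and 1,000-bit strings
print(log10(2) * 500)      # ~150.5 -> ~3 x 10^150 configs
print(log10(2) * 1000)     # ~301.0 -> ~1.07 x 10^301 configs
```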
June 12, 2011 at 10:35 AM PDT
Dr Liddle: Pardon me, I misspoke earlier but corrected myself above. Information metrics are equivalent to, or capture, probability metrics. As noted above, you can directly deduce info-carrying capacity, and you can use a frequentist analysis of symbols in information-bearing messages. P(T|H) is a probability metric on a random walk hyp, perhaps with a bias. But, once we reduce it to the information metric, we can also come back to it from the direct observation of information storage or messages in that storage area. DNA is a direct info store, and so are amino acid chains. Subtler cases come with functionally organised entities, where we can infer information stored in the functional organisation based on perturbation to get tolerances and the number of yes/no structured questions to specify the resulting function. This is implicit in, say, the engineering drawings for a machine. If you look at the log reduction, you will see that the p(T|H) term goes to the information metric, and the other terms go to the threshold. I may as well clip it again, just to make it clear what is going on:
what about the more complex definition in the 2005 Specification paper by Dembski? Namely:
define φ_S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φ_S(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 · φ_S(T) · P(T|H)] . . . eqn n1
How about this (we are now embarking on an exercise in “open notebook” science):
1 –> 10^120 ~ 2^398
2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2
3 –> So, we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3 [p is the probability term] Chi = Ip – (398 + K2) . . . eqn n4 [now an information term]
4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. [this is an allusion to essentially the limit of our solar system and/or the cosmos . . . ]
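A minimal sketch of the reduction just clipped (Python; the example values for φ_S(T) and P(T|H) are arbitrary placeholders, since how to compute them for a real biological case is the very point under discussion):

```python
import math

def chi_2005(p_T_given_H, phi_S_T, max_events=1e120):
    # Chi = -log2(10^120 * phi_S(T) * P(T|H))  -- the 2005 form quoted above.
    return -math.log2(max_events * phi_S_T * p_T_given_H)

def chi_reduced(I_p, K2):
    # Log-reduced form: Chi = Ip - (398 + K2), with Ip = -log2 P(T|H) and K2 = log2 phi_S(T).
    return I_p - (398 + K2)

# Placeholder numbers only: an event of probability 2^-600 with 2^20 "similar" patterns.
print(chi_2005(2.0 ** -600, 2.0 ** 20))   # ~181.4 bits
print(chi_reduced(600, 20))               # 182 bits (same, up to rounding 398.6 down to 398)
```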
Pardon my mis-speaking, I think I was tired. GEM of TKI
kairosfocus
June 12, 2011 at 10:13 AM PDT
Kairosfocus @ #299 (golly, 299!!!)
Dr Liddle: The P(T|H) term etc. get subsumed in the limit; in effect a threshold is set beyond which these will not reasonably go for the solar system or the observed cosmos. In effect you have set every atom to work looking for the edge of a zone of interest, but with a big enough field, the isolation of the zones tells. With Chi_1,000, the whole observed cosmos is unable to scan enough of the space of possibilities to make a difference from no scan. I have already shown how that happens, so I will not repeat myself. That's why there is a threshold imposed.
I understand why there is a threshold imposed. It is the equivalent of (if not precisely the same as) an alpha value. What the chi threshold does, it seems to me, is to say: if, under the null, the probability that an event of class X will happen at least once in the history of the universe is less than .5, we can reject the null. (I will leave aside Howard Landman's note http://www.scribd.com/doc/23648196/Landman-DEMBSKI-S-SPECIFIED-COMPLEXITY-A-SIMPLE-ERROR-IN-ARITHMETIC-2008-6 regarding the number of possible events in the universe as having been underestimated, thus rendering the threshold unexceedable by any pattern, even by those known to have been designed, and thus making it impossible to conclude design for any event, as I am perfectly happy to reject the null on a less conservative alpha). So what we need to do, therefore, to test whether our observed pattern reaches the threshold at which we can reject the null is to calculate the probability of observing it under the null (this is straightforward standard null hypothesis testing procedure, of course). Which makes "non-design" the null and "design" the hypothesis (with a very high bar for "design"). So how do we go about calculating the probability of observing the observed under the null? Without a way of calculating that, we cannot test whether a pattern's chi exceeds the threshold and allows us to reject the null. And "that" is given by: φ_S(T)·P(T|H). P(T|H) is not "subsumed into the limit". It must be calculated (as must φ_S(T)) in order to determine whether, in effect, the ratio of the Seth Lloyd estimate over the probability of one of φ_S(T) patterns of class T being observed under the null is less than .5. No?
The estimates for actual parameters will REDUCE the scope of search below that. Think about converting the observable cosmos into banana plantations, trains to move the bananas, forests of paper and monkeys typing impossibly fast at keyboards, from the big bang to the heat death: they will not exceed the limit we have set. Nor will any other scenario.
Yes, I understand the principle, once you have the probability under the null. But you can't conflate the alpha value (how improbable a thing has to be, under the null before you can reject the null) with computing the probability of the observed under the null. It seems to me that is what you are doing. If not, what am I not seeing?
As VJT showed, months ago, now. We have an upper limit, and we have reason to see that we are going to be within that limit, then we see also how the resources of the solar system or cosmos will be vastly inadequate. GEM of TKI
Yes of course. But my question still stands :) And it is important, because the argument made against ID is not that very improbable patterns can happen by chance, but that the patterns deemed by IDists to be improbable under the null are not, in fact, improbable. This point I thought was made elegantly in the conclusion to Granville Sewell's paper discussed here recently. Cheers Lizzie
Elizabeth Liddle
June 12, 2011 at 08:42 AM PDT