Uncommon Descent Serving The Intelligent Design Community

Orgel and Dembski Redux


 

A couple of months ago I quoted from Leslie Orgel’s 1973 book on the origins of life.  L. E. Orgel, The Origins of Life: Molecules and Natural Selection (John Wiley & Sons, Inc.; New York, 1973).  I argued that on page 189 of that book Orgel used the term “specified complexity” in a way almost indistinguishable from the way Bill Dembski has used the term in his work.  Many of my Darwinian interlocutors demurred.  They argued the quotation was taken out of context and that Orgel meant something completely different from Dembski.  I decided to order the book and find out who was right.  Below, I have reproduced the entire section in which the original quotation appeared.  I will let readers decide whether I was right.  (Hint: I was).

 

All that follows is a word-for-word reproduction of the relevant section from Orgel’s book:

 

[Page 189]

Terrestrial Biology

Most elementary introductions to biology contain a section on the nature of life.  It is usual in such discussions to list a number of properties that distinguish living from nonliving things. Reproduction and metabolism, for example, appear in all of the lists; the ability to respond to the environment is another old favorite.  This approach extends somewhat the chef’s definition “If it quivers, it’s alive.” Of course, there are also many characteristics that are restricted to the living world but are not common to all forms of life.  Plants cannot pursue their food; animals do not carry out photosynthesis; lowly organisms do not behave intelligently.

It is possible to make a more fundamental distinction between living and nonliving things by examining their molecular structure and molecular behavior.  In brief, living organisms are distinguished by their specified complexity.* Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way.  Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified.  The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity.

_______

* It is impossible to find a simple catch phrase to capture this complex idea.  “Specified and, therefore repetitive complexity” gets a little closer (see later).

[Page 190]

These vague ideas can be made more precise by introducing the idea of information.  Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.  One can see intuitively that many instructions are needed to specify a complex structure.  On the other hand, a simple repeating structure can be specified in rather few instructions.  Complex but random structures, by definition, need hardly be specified at all.

These differences are made clear by the following example.  Suppose a chemist agreed to synthesize anything that could describe [sic] accurately to him.  How many instructions would he need to make a crystal, a mixture of random DNA-like polymers or the DNA of the bacterium E. coli?

To describe the crystal we had in mind, we would need to specify which substance we wanted and the way in which the molecules were to be packed together in the crystal.  The first requirement could be conveyed in a short sentence.  The second would be almost as brief, because we could describe how we wanted the first few molecules packed together, and then say “and keep on doing the same.”  Structural information has to be given only once because the crystal is regular.

It would be almost as easy to tell the chemist how to make a mixture of random DNA-like polymers.  We would first specify the proportion of each of the four nucleotides in the mixture.  Then, we would say, “Mix the nucleotides in the required proportions, choose nucleotide molecules at random from the mixture, and join them together in the order you find them.”  In this way the chemist would be sure to make polymers with the specified composition, but the sequences would be random.

It is quite impossible to produce a corresponding simple set of instructions that would enable the chemist to synthesize the DNA of E. coli.  In this case, the sequence matters; only by specifying the sequence letter-by-letter (about 4,000,000 instructions) could we tell the chemist what we wanted him to make.  The synthetic chemist would need a book of instructions rather than a few short sentences.

It is important to notice that each polymer molecule in a random mixture has a sequence just as definite as that of E.

[Page 191]

coli DNA.  However, in a random mixture the sequences are not specified, whereas in E. coli, the DNA sequence is crucial.  Two random mixtures contain quite different polymer sequences, but the DNA sequences in two E. coli cells are identical because they are specified.  The polymer sequences are complex but random; although E. coli DNA is also complex, it is specified in a unique way.

The structure of DNA has been emphasized here, but similar arguments would apply to other polymeric materials.  The protein molecules in a cell are not a random mixture of polypeptides; all of the many hemoglobin molecules in the oxygen-carrying blood cells, for example, have the same sequence.  By contrast, the chance of getting even two identical sequences 100 amino acids long in a sample of random polypeptides is negligible.  Again, sequence information can serve to distinguish the contents of living cells from random mixtures of organic polymers.

When we come to consider the most important functions of living matter, we again find that they are most easily differentiated from inorganic processes at the molecular level.  Cell division, as seen under the microscope, does not appear very different from a number of processes that are known to occur in colloidal solutions.  However, at the molecular level the differences are unmistakable:  cell division is preceded by the replication of the cellular DNA.  It is this genetic copying process that distinguishes most clearly between the molecular behavior of living organisms and that of nonliving systems.  In biological processes the number of information-rich polymers is increased during growth; when colloidal droplets “divide” they just break up into smaller droplets.
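Orgel’s “minimum number of instructions” is, in effect, a description-length measure, and his crystal versus random-polymer contrast can be illustrated numerically. The sketch below is only a rough illustration, not Orgel’s own procedure: it uses zlib’s compressed size as a stand-in for “number of instructions,” and the names (description_size, crystal, tar) are illustrative rather than from the text.

    import random
    import zlib

    def description_size(s: str) -> int:
        # Compressed size in bytes: a crude proxy for Orgel's "minimum number
        # of instructions needed to specify the structure".
        return len(zlib.compress(s.encode(), 9))

    crystal = "ab" * 5000  # regular, crystal-like: the recipe is "ab, then keep on doing the same"
    random.seed(0)
    tar = "".join(random.choice("acgt") for _ in range(10000))  # random polymer-like mixture

    print(description_size(crystal))  # tens of bytes: the repeat rule is essentially the whole description
    print(description_size(tar))      # thousands of bytes: no recipe much shorter than ~2 bits per letter

Note that a real genome would also resist generic compression, much like the random string: compressed size tracks Orgel’s complexity axis but cannot see his specificity axis (whether the particular sequence matters), which is the distinction argued over in the comments below.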

Comments
KF, If we can identify something that is high in Dembski's specified complexity but low in Kolmogorov complexity, then we have shown that the two concepts are distinct. A cylindrical crystal of pure silicon fits the bill. High specified complexity, low Kolmogorov complexity. You and Barry got it wrong, and no amount of tap dancing on your part will change that.
keith s
January 22, 2015 at 08:29 AM PDT
Your #4, 5thMM 'Is the intelligence needed to create and uphold physical laws chopped liver in your opinion (nightlight)?' Got it in one, it seems, 5MM. No mention of condiments, however.
Axel
January 22, 2015 at 06:47 AM PDT
F/N: Onlookers, pardon my citing in extenso from my note to draw out the links between probability, information and entropy, materials that have been linked from every comment I have ever made at UD: ______________ >>The second major step is to refine our thoughts, through discussing the communication theory definition of and its approach to measuring information. A good place to begin this is with British Communication theory expert F. R Connor, who gives us an excellent "definition by discussion" of what information is:
From a human point of view the word 'communication' conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines. This naturally leads to the definition of the word 'information', and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word could implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content. This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold. 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in termns of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] 
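The surprisal and average-entropy expressions in the note above (Eqns 3 and 5) are easy to check numerically. A minimal sketch, assuming a noiseless channel so that I = -log2(p); the function names are illustrative rather than taken from the note:

    import math

    def surprisal_bits(p: float) -> float:
        # I = -log2(p): information, in bits, conveyed by a symbol of a priori
        # probability p over a noiseless channel (Eqn 3 above).
        return -math.log2(p)

    def entropy_bits(probs) -> float:
        # H = -SUM p_i log2(p_i): average information per symbol (Eqn 5 above).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(surprisal_bits(0.5))        # 1.0 bit: an equiprobable binary digit
    print(entropy_bits([0.5, 0.5]))   # 1.0 bit/symbol
    print(entropy_bits([0.9, 0.1]))   # ~0.47 bits/symbol: a skewed source carries less per symbol, as with E vs X in English text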
Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis. >> ______________ For record. KFkairosfocus
January 22, 2015 at 03:27 AM PDT
KS, all you have managed to do is convince me that no evidence whatsoever will ever convince you of evident truth. It is patent that Orgel's identification of organisation as a second contrast to randomness was pivotal, and that his use of specified complexity in connexion with molecular level functional biological forms that are information bearing, sets out the concept functionally specific complex organisation and associated information. This is the pivotal form of CSI. Orgel went on to indicate that the informational content of such FSCO/I can be quantified in the first instance by description length. Which is of course what structured y/n q's in a string will do, in bits, as say AutoCAD files or the like do, reducing structures to node-arc patterns. I draw to your attention, again, Wiki's introduction which is a useful summary -- and which you have obviously not taken on board seriously:
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2] For example, consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
The direct parallel to Orgel: (" Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. One can see intuitively that many instructions are needed to specify a complex structure."), and to Dembski ("T is detachable from E, and and T measures at least 500 bits of information . . ." and "In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently [--> thus, described or specified] of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . .") as I cited this morning already is patent. Save, to those devoted to selective hyperskepticism and determined to make zero concessions to anyone associated with the design view regardless of cost in want of fairness or objectivity. As to your attempted example, the first aspect -- notice the stress I have made for years on aspect by aspect examination, the only reasonable basis for properly understanding the application of the design inference process -- to observe about a block of pure Si is that it is a crystal. Its structure as such is the repetition of a unit cell, which is a case of mechanical necessity in action. The second aspect, extreme purity suitable for use in a fab to make ICs etc, is indeed something that is functionally specific and complex, also highly contingent as locus by locus in the body of the crystal, there are many possible arrangements. So in the ultra-astronomical config space applicable, we are indeed in a zone T, with cases E1, . . . En. And, lo and behold, you have acknowledged that he explanation for that FSCO/I is, design, probably by highly complex zone melt refining techniques or the like. Where, in nature starting from stellar furnaces, it is overwhelmingly likely that when Si forms as atoms and is able to condense into solid materials, it will be closely associated with impurities, due to the high incidence of chance and the high reactivities involved. Chance does not credibly explain FSCO/I but credibly explains the sort of stochastic contingencies that are common and easily empirically observed. That is, you again failed to reckon with the design inference process aspect by aspect, and filed to see that you in fact provided probably another trillion or so by now cases in point of FSCO/I being caused by design in our observation. In short, as has happened with many dozens of other attempted counter examples to the consistent pattern of FSCO/I being caused by design, it turns out to be an example of what it was meant to overturn. Please, think again. KF PS: It is quite evident also that you refuse to attend to the direct link between information and probability as captured in I = - log p, I have lined again my 101.kairosfocus
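The compressibility contrast in the Wikipedia excerpt quoted above can be shown directly on its two 32-character strings. Compressed size is only an upper-bound proxy for Kolmogorov complexity (which is uncomputable in general), and the variable names below are illustrative:

    import zlib

    # The two 32-character strings from the Wikipedia excerpt.
    regular = "abababababababababababababababab"
    irregular = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"

    for s in (regular, irregular):
        compressed = len(zlib.compress(s.encode(), 9))
        print(s, "->", len(s), "raw bytes,", compressed, "compressed bytes")

The repetitive string shrinks well below its raw length; the random-looking string does not shrink at all (with header overhead it can even grow), which is the sense in which only the first string has a short description.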
January 22, 2015 at 03:12 AM PDT
KF,
Onlookers, note, that BA has been vindicated in the face of some loaded dismissive comments.
No, he hasn't. You and Barry are still confusing Kolmogorov complexity with Dembski's specified complexity. I already gave fifthmonarchyman an example of something with high specified complexity but low Kolmogorov complexity:
Consider a cylindrical crystal of pure silicon, of the kind used to make integrated circuits. It has a regular structure and thus low Kolmogorov complexity. Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski’s equation attributes high specified complexity to it. Low Kolmogorov complexity, high specified complexity. “Specified improbability” would have been a better, more accurate name for what Dembski calls “specified complexity”. This is obvious given the presence of the P(T|H) term — a probability — in Dembski’s equation. He confused Barry, KF, and a lot of other people by using the word “complexity” instead of “improbability”.
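For reference, the probabilistic character keith s is pointing to is explicit in one published form of Dembski's measure, from the 2005 paper "Specification: The Pattern That Signifies Intelligence": Chi = -log2[10^120 * phi_S(T) * P(T|H)]. The sketch below assumes that form; the function name and the sample probabilities are illustrative, and phi_S(T) is simply passed in as a given count:

    import math

    def specified_complexity_bits(p_t_given_h: float, phi_s_t: float = 1.0) -> float:
        # Chi = -log2(10^120 * phi_S(T) * P(T|H)), per the form in Dembski (2005).
        # The dominant input is a probability.
        return -math.log2(1e120 * phi_s_t * p_t_given_h)

    print(specified_complexity_bits(1e-200))  # ~266 bits: flagged as "complex" purely because improbable
    print(specified_complexity_bits(1e-60))   # negative: improbable, but not improbable enough to register

Nothing in this calculation measures how long the object's description is, which is why a short-description object such as the silicon crystal can still score high if P(T|H) is judged small enough.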
keith s
January 22, 2015 at 01:13 AM PDT
LH, Let's compare Orgel: >> It is possible to make a more fundamental distinction between living and nonliving things by examining their molecular structure and molecular behavior. In brief, living organisms are distinguished by their specified complexity.*· Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity. These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. One can see intuitively that many instructions are needed to specify a complex structure. On the other hand, a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition, need hardly be specified at all . . . . When we come to consider the most important functions of living matter, we again find that they are most easily differentiated from inorganic processes at the molecular level. Cell division, as seen under the microscope, does not appear very different from a number of processes that are known to occur in colloidal solutions. However, at the molecular level the differences are unmistakable: cell division is preceded by the replication of the cellular DNA. It is this genetic copying process that distinguishes most clearly between the molecular behavior of living organisms and that of nonliving systems. In biological processes the number of information-rich polymers is increased during growth; when colloidal droplets “divide” they just break up into smaller droplets.>> Notice, use of term specified complexity, association with functionality dependent on arrangement of parts, further association with functional specificity, in the case of D/RNA, ALGORITHMIC functional specificity, per the action of ribosomes in making proteins. Dembski, defining CSI in his key work, NFL, pp 148 and 144 giving priority to direct informational measures: >>p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . [Manfred] Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . 
since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ” >> The priority of functionally specific complex organisation and associated information in Dembski is thus patent. And, this is the work that principally set out his argument and laid its basic framework. Where he defines, as well. He also here bridges to onward thought by laying out a configuration space, we can symbolise as W standing in for Omega. Within W we have E in a zone of similar cases of presumably FSCO/I, T. The issue now looks at a blind, highly contingent search for zones T and stipulates that 500 bits as a measure of complexity or search challenge is required before considering the case relevant. That threshold sets up a situation where blind search is maximally implausible as a mechanism. Best appreciated in Darwin's warm pond or the like pre-life environment. Mechanical necessity under closely similar initial conditions, produces closely similar outcomes, hence laws of mechanical necessity such as Newton's cluster of laws of motion and Gravitation, the paradigm cases. High contingency rules out such as a plausible explanation for an aspect of a phenomenon or process. Empirically, that leaves blind chance and intelligently directed configuration aka design on the table. Of these the default is chance. But when we see outcomes E from a zone T in a deeply isolated island of function such that chance is of negligible plausibility [i.e. FSCO/I], design is best explanation. On trillions of observed cases, that inference is empirically reliable. The truth is, it is only controversial in respect of origin of life based on cells or of complex body plans because a speculative theory backed up by a priori materialist ideology rules the roost. Number of cases where, for life or for other cases of FSCO/I, it has been observed to originate by blind chance and mechanical necessity, NIL. Number of cases by design, trillions. So, by the vera causa principle on explaining the remote unobservable past on forces seen to be adequate causes in the present, the proper warranted best explanation is design. Wallace, not Darwin, at minimum. But, ideology dominates, as Lewontin so aptly if inadvertently documented:
the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [NYRB, 1997. If you imagine this is quote mined, kindly cf here for wider context and remarks.]
KF PS: In case you are labouring under issues on probability vs information measures and observability, I again note on the already linked 101, and point here on the bridge from the 2005 discussion to FSCO/I, which is separately quite readily seen in live cases. If you doubt me, go to a tackle shop and ask to look at some reels and their exploded view diagrams, which are readily reducible to node arc pattern descriptions per AutoCAD etc, in bits. That is chains of structured Y/N q's. And bits from this angle are directly connected to bits from the neg log prob angle. As Shannon noted in his paper. Think about how plausible it would be to expect to form a reel by shaking up a bag of its parts. Imagine a cell-sized reel with parts capable of diffusion in a vat of 1 cu m . . . 10^18 1-micron cells . . . and ponder on possible arrangements of parts vs functional ones, then think about what diffusional forces would likely do by comparison with a shaken bag of parts or a reel mechanic. Fishing reels are a LOT simpler than watches. Cells are a LOT more complex than watches. And the von Neumann kinematic self replication facility using codes and algorithms with huge volumes of info, is part of what has to be explained at OOL. Design sits to the table as a serious candidate for the tree of life right from the root. KFkairosfocus
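The 500-bit figure quoted from No Free Lunch above is just the 1-in-10^150 universal probability bound re-expressed in bits via I = -log2(p). A quick check, using the values as stated in the quotation:

    import math

    p_bound = 10.0 ** -150          # the quoted universal probability bound
    print(-math.log2(p_bound))      # ~498.3 bits, conventionally rounded up to 500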
January 22, 2015 at 12:42 AM PDT
Onlookers, note, that BA has been vindicated in the face of some loaded dismissive comments. The continued lack of willingness to acknowledge even a first basic fact is revealing on the zero concessions policy of too many design objectors. In reply to much of the above, I say, read the OP. I have already linked a graphical illustration in the literature for a decade on the relationships between complexity, compressibility and functionality; I find no sign of serious engagement of Trevors & Abel above (NB: there is a trove of serious discussions by these authors that ties directly to Orgel's point). And, transformation by log reduction to the more easily observed information value -- as the chain of y/n q's gives a good first info metric (as is used in common file sizes on a routine basis) -- is a reasonable step. Ip = log(1/p) = - log p is a longstanding basic result, cf my 101 here on in my always linked note. KF PS: Just to underscore, let me cite T & A in the paper already linked:
"Complexity," even "sequence complexity," is an inadequate term to describe the phenomenon of genetic "recipe." Innumerable phenomena in nature are self-ordered or complex without being instructive (e.g., crystals, complex lipids, certain polysaccharides). Other complex structures are the product of digital recipe (e.g., antibodies, signal recognition particles, transport proteins, hormones). Recipe specifies algorithmic function. Recipes are like programming instructions. They are strings of prescribed decision-node configurable switch-settings. If executed properly, they become like bug-free computer programs running in quality operating systems on fully operational hardware. The cell appears to be making its own choices. Ultimately, everything the cell does is programmed by its hardware, operating system, and software. Its responses to environmental stimuli seem free. But they are merely pre-programmed degrees of operational freedom. The digital world has heightened our realization that virtually all information, including descriptions of four-dimensional reality, can be reduced to a linear digital sequence [--> think, 3-d animated engineering drawings or the like] . . .
Those familiar with both Orgel and Wicken on one hand and the onward path T & A followed will find much here. In particular, the significance of functionally specific complex organisation dependent on interaction of parts to achieve function and associated information (FSCO/I) is obvious. As, is the ability to infer an information metric in a context where function is an observable constraint on acceptable bit-chain equivalent descriptions. PPS: Caption to TA, fig 4 already linked: >> Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents "what works best." The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale.[--> island of func] Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function. >> PPPS: Busy with local issues, will get back on FSCO/I later.kairosfocus
January 21, 2015 at 11:38 PM PDT
Because the sentence you provided is a throwaway line. Sorry, we're not communicating. I don't know why you think it's a "throwaway line;" it a short, clear, simple statement of his approach. If you don't think it's accurate, then you probably need to read the book or some of his articles for yourself. If I understand correctly Dembski was not saying that the only way to compute specified complexity was by estimating probability. He was saying that low probability was not enough to show that something has specified complexity, Low probability is not enough to determine specified complexity. It's just one necessary component of the analysis. The "complexity" part of specified complexity is a measure of probability, so you have to at least estimate that probability to determine whether the subject is "complex." I'm not aware of any other way to estimate "complexity" as Dembski uses the term, but I haven't read everything he's written. Determining whether the subject is specified is a separate step. You need both steps to tell whether it has "specified complexity." I think I misunderstood your initial question; I thought you were saying something about the Orgel-Dembski connection. But now it looks like you're asking whether there's some way to tell whether something is "complex" without relying on probability. No, I don't think there is--I think Dembksi only really thinks of this in probabilistic terms. But like I said, I'm not an expert on his thinking or qualified to follow his equations.Learned Hand
January 21, 2015 at 10:50 PM PDT
Fifthmonarchyman, From a November comment of mine:
Once he realized his error, Barry deleted the thread to hide the evidence. That’s funny enough, but here’s another good one: Dembski himself stresses the distinction between Kolmogorov complexity and improbability:
But given nothing more than ordinary probability theory, Kolmogorov could at most say that each of these events had the same small probability of occurring, namely 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring. Since probabilities alone could not discriminate E sub R from E sub N, Kolmogorov looked elsewhere. Where he looked was computational complexity theory. The Design Inference, p. 169
I look forward to Barry’s explanation of how Dembski is an idiot, and how we should all trust Barry instead when he tells us that Kolmogorov complexity and improbability are the same thing.
keith s
January 21, 2015 at 10:39 PM PDT
Learned hand asks, Why is Dembski’s own language not the answer to your question? I say, Because the sentence you provided is a throwaway line. If I understand correctly Dembski was not saying that the only way to compute specified complexity was by estimating probability. He was saying that low probability was not enough to show that something has specified complexity, Suppose I said that "The energy in ‘specified energy' is a measure of electricity.” It's possible even probable that measures of magnetism would work just as well. To demonstrate that Dembski means to rule out all other measures of complexity it would I think require more than a single sentence without context. Especially when a scholarly case has been made that all measures of complexity are related and possibly synonymous. I hope that makes sense peacefifthmonarchyman
January 21, 2015 at 06:58 PM PDT
E.Seigner, you should learn to read.
Mung
January 21, 2015 at 06:57 PM PDT
you say, Probability is specifically relevant to complexity, I say, Depends on the tool we are using to measure complexity check it out Dembski says that probability is the measure he uses to measure complexity. And because specified complexity is a special case of complexity, unless there's some special exception, probability is part of the SC determination. I don't really understand what your question is anymore. Why is Dembski's own language not the answer to your question?Learned Hand
January 21, 2015 at 06:26 PM PDT
learned hand said, I think you’re trying to read “specified complexity” as a single thing. Which is fair enough usually, except here we’re talking about the components of it. I say. I'm not talking talking about individual components per say I'm talking about the unified concept of specified complexity and how it might be measured. you say, Probability is specifically relevant to complexity, I say, Depends on the tool we are using to measure complexity check it out http://web.mit.edu/esd.83/www/notebook/Complexity.PDF you say, setting specification completely aside for the moment I say, I just don't think we can set it aside peacefifthmonarchyman
January 21, 2015 at 05:44 PM PDT
Mung
Orgel clearly intended to associate his concept of specified complexity with the concept of information, something the opponents have repeatedly denied (having never read the source material until it was shoved in their face).
Having read the source material (both Orgel and Dembski), I say that Orgel clearly associates his concept with life while Dembski associates it with P(T|H) and information. How these things are in some people's minds all the same is anybody's guess. Orgel: The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity.E.Seigner
January 21, 2015 at 05:40 PM PDT
If I understand Kolmogorov complexity it is not about the effort it takes to describe something it is about the effort it takes to compute it am I missing something? I don't know. I'm the wrong person to ask about the details of Kolmogorov anything. I'm just comparing and contrasting how Dembski described complexity--a measure of probability--with how Orgel did it--a measure of the length of ht instruction set--and observing that these are not the same thing. As a test of that conclusion, I observe that some things, like a perfect sphere of water ice on the surface of the moon, would be complex by Dembski's standards but not Orgel's. BA and KF think that Orgel and Dembski are so obviously talking about the same concept that the grovelling apologies should begin immediately. But I don't see how they reconcile the obvious inconsistency.Learned Hand
January 21, 2015 at 04:38 PM PDT
I read NFL and if I recall correctly it’s specification that is the core of his argument. Do you have anything besides a throw away sentence about one half of the term in question? Sorry, I don’t think I understand your confusion. It’s not a “throw away sentence,” he’s explicitly defining complexity as a measurement of improbability. You asked where Dembski “argues that improbable things are complex by their nature.” Since complexity is a measurement of improbability, improbable things are going to be complex by their nature. (I guess the exception would be some additional standard that would exempt some improbable things from being complex. I’m not aware of any such exception he’s ever identified; my understanding is that if something makes an otherwise improbable thing more likely to occur, such as evolution, it would remove the complexity.) I read NFL and if I recall correctly it’s specification that is the core of his argument. Based on this, and your numbers example, I think you’re trying to read “specified complexity” as a single thing. Which is fair enough usually, except here we’re talking about the components of it. Probability is specifically relevant to complexity, setting specification completely aside for the moment. If Dembski means that complexity is a measure of probability, and Orgel means it’s a measure of the length of the instruction set, then they’re talking about two different definitions of “complexity.” If they have two different definitions of “complexity,” they have two different definitions of “specified complexity.” And for these purposes, again, specification is set completely to the side—it doesn’t matter if their definitions of “specification” are verbatim the same, because “specified complexity” relies on complexity as much as specification.Learned Hand
January 21, 2015 at 04:32 PM PDT
Learned hand said, impossibly improbable and descriptively simple, such as a royal flush drawn ten times consecutively from a fair deck of cards. I say, If I understand Kolmogorov complexity it is not about the effort it takes to describe something it is about the effort it takes to compute it am I missing something? peacefifthmonarchyman
January 21, 2015 at 04:20 PM PDT
Learned hand quoting Dembski says, “The ‘complexity’ in ‘specified complexity’ is a measure of improbability.” I say, I read NFL and if I recall correctly it's specification that is the core of his argument. Do you have anything besides a throw away sentence about one half of the term in question? I'm not trying to be difficult here. I just want to understand the difference between the two measurements if any. for example A 20 digit string of random numbers has Kolmogorov complexity but not specified complexity. But suppose I came across the following string 31415926535897932384. I would say that the string has specified complexity despite the fact any single digit of the string is not especially improbable. now look at this string 31514926535897922384 It has the same probability as the first one when each digit is viewed independently but again no specified complexity. You might have guessed from my recent ramblings around here that I think integration of information is where the cool stuff is. That is why I think Kolmogorov has more promise as a tool. Regardless it's the specification that makes Dembski's concept a valuable contribution to the discussion not his particular chosen ruler. IMO but I am open to correction. peacefifthmonarchyman
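The two 20-digit strings in the comment above have identical digit-by-digit probabilities under a uniform chance hypothesis; what separates them is an independently given pattern, the opening digits of pi. A minimal sketch of that check, with the reference digits hardcoded and the names (PI_20, matches_pi_prefix) chosen for illustration:

    # First 20 digits of pi (3.1415926535897932384...).
    PI_20 = "31415926535897932384"

    def matches_pi_prefix(digits: str) -> bool:
        # True if the string reproduces the leading digits of pi.
        return digits == PI_20[: len(digits)]

    s1 = "31415926535897932384"  # fifthmonarchyman's first string
    s2 = "31514926535897922384"  # his altered string: same per-digit probabilities

    for s in (s1, s2):
        # Each is one of 10^20 equally likely digit sequences under pure chance;
        # only the independent pattern check tells them apart.
        print(s, matches_pi_prefix(s))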
January 21, 2015 at 04:13 PM PDT
5MM, Can you point me to a place where Dembski argues that improbable things are complex by their nature? Instead of the other way around? In No Free Lunch, he writes, "The 'complexity' in 'specified complexity' is a measure of improbability." Note that this is not the same as a measure of how long the instruction set is, as something can be both impossibly improbable and descriptively simple, such as a royal flush drawn ten times consecutively from a fair deck of cards. Orgel and Dembski were discussing different things.Learned Hand
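Learned Hand's royal-flush example can be made concrete: the event is astronomically improbable, yet its description ("a royal flush, ten times running") stays short. A quick calculation of the improbability side, assuming fair five-card deals; the variable names are illustrative:

    from math import comb, log2

    p_single = 4 / comb(52, 5)        # probability of a royal flush in one fair 5-card deal, ~1.5e-6
    p_ten_in_a_row = p_single ** 10   # ~7.5e-59

    print(p_ten_in_a_row)             # astronomically small
    print(-log2(p_ten_in_a_row))      # ~193 bits of improbability, while the describing phrase stays short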
January 21, 2015 at 03:45 PM PDT
Me earlier: "I've recently been doing a lot of thinking about the complexity of Pi (Both kinds) and how it relates to ID." Me now: That is some real unintentional comedy. ;-) I meant both kinds of complexity, not both kinds of Pi.
fifthmonarchyman
January 21, 2015 at 03:41 PM PDT
keith s says, Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski’s equation attributes high specified complexity to it. I say, Interesting I always assumed that Dembski was arguing that specified complexity was improbable not that improbability was "specifically" complex. For example because of it's simplicity If we were to find a cylindrical crystal of pure silicon on Mars I would not infer design. I would assume some unknown natural law until we could rule that out. Can you point me to a place where Dembski argues that improbable things are "specifically" complex by their nature? Instead of the other way around? Thank you in advance peacefifthmonarchyman
January 21, 2015 at 03:30 PM PDT
fifthmonarchyman,
You can measure distance by laser or by tape measure but the property you are measuring is still distance.
Sure, but in that case you are measuring the same underlying quantity using different methods. Kolmogorov complexity and Dembski's specified complexity are not the same quantity. Different quantities, different terms, different measurements.
You would have a point if you could demonstrate that an object with lots of specified complexity could be produced with an algorithm of minimal size. Can you do that?
Sure. Consider a cylindrical crystal of pure silicon, of the kind used to make integrated circuits. It has a regular structure and thus low Kolmogorov complexity. Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski's equation attributes high specified complexity to it. Low Kolmogorov complexity, high specified complexity. "Specified improbability" would have been a better, more accurate name for what Dembski calls "specified complexity". This is obvious given the presence of the P(T|H) term -- a probability -- in Dembski's equation. He confused Barry, KF, and a lot of other people by using the word "complexity" instead of "improbability".keith s
January 21, 2015 at 02:36 PM PDT
Hey Petrushka, Was your comment at 23 addressing my question? If it was please elaborate. I'm not sure I follow. Are you saying that a circle is an algorithm to compute the digits of Pi? I would probably characterize a circle as a lossless data compression/specification of the digits of Pi and not an algorithm. Do you think this idea is incorrect? If so why? I've recently been doing a lot of thinking about the complexity of Pi (Both kinds) and how it relates to ID I really want to make sure I'm not heading down the wrong path. So any insight would be appreciated thanks in advancefifthmonarchyman
January 21, 2015 at 01:59 PM PDT
Pi. The digits of pi. A circle.
Petrushka
January 21, 2015 at 05:56 AM PDT
Keith S says, Kolmogorov complexity is not the same as Dembski’s specified complexity. I say These methods of measuring complexity are analogous and deeply related. Much like different methods of measuring length are analogous and deeply related. You can measure distance by laser or by tape measure but the property you are measuring is still distance. You would have a point if you could demonstrate that an object with lots of specified complexity could be produced with an algorithm of minimal size. Can you do that? peacefifthmonarchyman
January 21, 2015 at 04:10 AM PDT
KF, Tap dance all you like, but the fact remains: Kolmogorov complexity is not the same as Dembski's specified complexity. You and Barry got it wrong.
keith s
January 21, 2015 at 02:13 AM PDT
KF,
KS, On being wise in one’s own eyes...
Barry is the butt of your joke. Being "wise in his own eyes", he posted a mocking OP. It backfired badly on him, so he dishonestly attempted to erase the evidence -- but he got caught. Will you be scolding Barry for his dishonesty? Or is honesty something you demand only of "Darwinists", and not of yourself or your fellow IDers?
keith s
January 21, 2015 at 02:02 AM PDT
PPS: I remind KS of the note in 8 above to LH:
. . . it's coming on four years it was pointed out that WmAD extracted an information metric. FYI, it is a commonplace in science and mathematical modelling to transform from one form to another more amenable to empirical investigation. In this context a log-probability has been known to be an effective info metric since the 1920's to 40's. And, the Orgel remarks when they go on to address metrics of info on description length, gives such a metric. Reduce a description to a structured string of Y/N q's to specify state and you have a first level info metric in bits, e.g. 7 bits per ASCII character. Where, the implication for relevant cases such as protein codes, is that the history of life has allowed exploration of the effective space of variability for relevant key proteins, so an exploration on the H-metric of avg info per element in a message (the same thing entropy measures using SUM pi log pi do . . . ) gives a good analytical approach, cf Durston et al.
Refusal to acknowledge the force of a relevant response is not a healthy sign on the Isaiah 5:20-21 front. KFkairosfocus
January 21, 2015 at 01:53 AM PDT
PS: A note on K-Complexity. Let's start with a useful Wiki clip:
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object . . . . For example, consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language . . . It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
As the second example illustrates, a genuinely random string will resist compression so that the best way to capture it is to cite it and say as much. This relates to random tars or minerals in granites as Orgel discussed. A strictly orderly pattern such as ab, repeat n times, will be much more compressible. This directly relates to the order of crystals as discussed or simple repetitive polymers. A complex, functionally specific organised pattern such as an Abu 6500 C3 fishing reel, or a protein that must fold stably and predictably, fit key-lock style into a particular location and then must carry out a role dependent on its structure and proper location can be described in a similar way [especially as a structured string of y/n q's] and will resist compression but not as much as a strictly random entity. Where, the existence of AutoCAD etc shows practically that a 3-d functional entity may be reduced descriptively to a nodes-arcs pattern and then described further as a structured string. The resulting can be taken as cashing out the practical information content of such a structure. This, Orgel highlighted as a key characteristic of life. It is highly likely that in so writing, Orgel was aware of the issue of descriptive complexity as developed by Kolmogorov, Chaitin et al. So, yes, K-complexity can in fact be used as an index of randomness. As Trevors and Abel did in their Fig 4 on three types of sequence complexity, OSC, RSC and FSC, cf here in my always linked and the onward linked 2005 paper. It will be seen that they describe a trade-off between algorithmic compressibility and complexity, with a third axis with sharp peakedness indicating an index of functionality in a co-ordinated organised process. This diagram is in fact an illustration of the island of function effect strongly associated with FSCO/I. So, Orgel is applicable, and complexity/compressibility is indeed an index of randomness as opposed to order or organisation. KFkairosfocus
January 21, 2015 at 01:44 AM PDT
KS, On being wise in one's own eyes (especially in a context where BA has shown that those who falsely accused him of distorting the meaning of Orgel's remarks have been shown spectacularly wrong . . . ), I suggest you and others would be well advised to reflect upon a bit of ancient wisdom that long anticipated anything of substance in Dunning and Krueger:
Is 5:Woe to those who draw iniquity with cords of falsehood, who draw sin as with cart ropes . . . Woe to those who call evil good and good evil, who put darkness for light and light for darkness, who put bitter for sweet and sweet for bitter! 21 Woe to those who are wise in their own eyes, and shrewd in their own sight! [ESV]
KF
kairosfocus
January 21, 2015 at 01:21 AM PDT