
Orgel and Dembski Redux


A couple of months ago I quoted from Leslie Orgel’s 1973 book on the origins of life.  L. E. Orgel, The Origins of Life: Molecules and Natural Selection (John Wiley & Sons, Inc.; New York, 1973).  I argued that on page 189 of that book Orgel used the term “specified complexity” in a way almost indistinguishable from the way Bill Dembski has used the term in his work.  Many of my Darwinian interlocutors demurred.  They argued the quotation was taken out of context and that Orgel meant something completely different from Dembski.  I decided to order the book and find out who was right.  Below, I have reproduced the entire section in which the original quotation appeared.  I will let readers decide whether I was right.  (Hint: I was).

 

All that follows is a word-for-word reproduction of the relevant section from Orgel’s book:

 

[Page 189]

Terrestrial Biology

Most elementary introductions to biology contain a section on the nature of life.  It is usual in such discussions to list a number of properties that distinguish living from nonliving things. Reproduction and metabolism, for example, appear in all of the lists; the ability to respond to the environment is another old favorite.  This approach extends somewhat the chef’s definition “If it quivers, it’s alive.” Of course, there are also many characteristics that are restricted to the living world but are not common to all forms of life.  Plants cannot pursue their food; animals do not carry out photosynthesis; lowly organisms do not behave intelligently.

It is possible to make a more fundamental distinction between living and nonliving things by examining their molecular structure and molecular behavior.  In brief, living organisms are distinguished by their specified complexity.*  Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way.  Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified.  The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity.

_______

* It is impossible to find a simple catch phrase to capture this complex idea.  “Specified and, therefore, repetitive complexity” gets a little closer (see later).

[Page 190]

These vague ideas can be made more precise by introducing the idea of information.  Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.  One can see intuitively that many instructions are needed to specify a complex structure.  On the other hand, a simple repeating structure can be specified in rather few instructions.  Complex but random structures, by definition, need hardly be specified at all.

These differences are made clear by the following example.  Suppose a chemist agreed to synthesize anything that could describe [sic] accurately to him.  How many instructions would he need to make a crystal, a mixture of random DNA-like polymers or the DNA of the bacterium E. coli?

To describe the crystal we had in mind, we would need to specify which substance we wanted and the way in which the molecules were to be packed together in the crystal.  The first requirement could be conveyed in a short sentence.  The second would be almost as brief, because we could describe how we wanted the first few molecules packed together, and then say “and keep on doing the same.”  Structural information has to be given only once because the crystal is regular.

It would be almost as easy to tell the chemist how to make a mixture of random DNA-like polymers.  We would first specify the proportion of each of the four nucleotides in the mixture.  Then, we would say, “Mix the nucleotides in the required proportions, choose nucleotide molecules at random from the mixture, and join them together in the order you find them.”  In this way the chemist would be sure to make polymers with the specified composition, but the sequences would be random.

It is quite impossible to produce a corresponding simple set of instructions that would enable the chemist to synthesize the DNA of E. coli.  In this case, the sequence matters; only by specifying the sequence letter-by-letter (about 4,000,000 instructions) could we tell the chemist what we wanted him to make.  The synthetic chemist would need a book of instructions rather than a few short sentences.

It is important to notice that each polymer molecule in a random mixture has a sequence just as definite as that of E.

[Page 191]

coli DNA.  However, in a random mixture the sequences are not specified, whereas in E. coli, the DNA sequence is crucial.  Two random mixtures contain quite different polymer sequences, but the DNA sequences in two E. coli cells are identical because they are specified.  The polymer sequences are complex but random; although E. coli DNA is also complex, it is specified in a unique way.

The structure of DNA has been emphasized here, but similar arguments would apply to other polymeric materials.  The protein molecules in a cell are not a random mixture of polypeptides; all of the many hemoglobin molecules in the oxygen-carrying blood cells, for example, have the same sequence.  By contrast, the chance of getting even two identical sequences 100 amino acids long in a sample of random polypeptides is negligible.  Again, sequence information can serve to distinguish the contents of living cells from random mixtures of organic polymers.

When we come to consider the most important functions of living matter, we again find that they are most easily differentiated from inorganic processes at the molecular level.  Cell division, as seen under the microscope, does not appear very different from a number of processes that are known to occur in colloidal solutions.  However, at the molecular level the differences are unmistakable:  cell division is preceded by the replication of the cellular DNA.  It is this genetic copying process that distinguishes most clearly between the molecular behavior of living organisms and that of nonliving systems.  In biological processes the number of information-rich polymers is increased during growth; when colloidal droplets “divide” they just break up into smaller droplets.
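To make Orgel’s instruction-counting idea concrete, here is a minimal sketch in Python (it is not from Orgel’s book, and compressed size is only a rough stand-in for his “minimum number of instructions”):

```python
import random
import zlib

N = 1_000_000  # an arbitrary length; Orgel's E. coli example runs to about 4,000,000 letters

# A "crystal": one short motif repeated uniformly -- simple and well specified.
crystal = "ACGT" * (N // 4)

# A "random mixture of polymers": complex but not specified.
random.seed(0)
mixture = "".join(random.choices("ACGT", k=N))

# Compressed size is a crude stand-in for the number of instructions Orgel's
# chemist would need to reproduce each string exactly.
for name, s in [("crystal", crystal), ("random mixture", mixture)]:
    size = len(zlib.compress(s.encode(), 9))
    print(f"{name}: {len(s):,} letters -> {size:,} bytes of description")

# The crystal collapses to a tiny description ("repeat ACGT"), while the random
# mixture stays near its full length. A real genome would also resist this kind
# of compression, but unlike the mixture its particular sequence matters -- that
# is Orgel's "specified" part, which a compressor cannot see.
```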

Comments
fifthmonarchyman: Pi has meaning in a way that the algorithm 4*(1 – 1/3 + 1/5 – 1/7 + …) does not. http://www.smbc-comics.com/?id=3639#comic Zachriel
I see that keith s cannot support his claim.
To specify a cylindrical crystal of pure silicon, you would merely need to 1. Specify the unit cell. 2. Specify the spatial pattern for adding new unit cells to an existing crystal. 3. Specify the boundaries of the cylindrical crystal (to stop adding unit cells when the crystal has reached the desired size and shape).
That seems pretty complex, keith s. You lose, again, as usual. Joe
Define specified. Without being circular. Microevolution -- a process many IDists agree happens -- modifies specifications. Yes or no? Petrushka
There is no such thing as a simple living being. There is no such thing as a complex living being that is not specified. Orgel, or Dembski?
It's Mung. E.Seigner
fifthmonarchyman: But “Pi” works just as well and has less K-complexity than the algorithm. Pi is not a useful answer to Please send the value of pi, the ratio of the circumference of a circle to its diameter. Zachriel
fifthmonarchyman: I suppose you could treat the algorithm like a specification you could treat any symbol as such Pi = 4 * (1 – 1/3 + 1/5 – 1/7 + … ) fifthmonarchyman: Do you honestly think I was arguing that the algorithm that I provided to approximate Pi could not approximate Pi? What you claimed was that the algorithm based on 4 * (1 – 1/3 + 1/5 – 1/7 + … ) had unbounded K-complexity. That is false. It's K-simple. fifthmonarchyman: Do you think that algorithmic processes can’t be simultaneous? An algorithm is a step-by-step process. Zachriel
Zac says, Equal means equal. The ellipses means infinite expansion, which is an abstraction independent of time. I say I suppose you could treat the algorithm like a specification; you could treat any symbol as such. But "Pi" works just as well and has less K-complexity than the algorithm. What you can't do is treat it like an algorithm and then proceed to measure its complexity as if it were a specification. you said, Glad you gave up your misunderstanding of K-complexity I say, Do you honestly think I have changed my position about K-complexity in the slightest? Do you honestly think I was arguing that the algorithm that I provided to approximate Pi could not approximate Pi? Geez Zac said, To return to your original idea, it’s not clear that NV + NS (natural variation and natural selection) is algorithmic as events can be simultaneous. I say, Do you think that algorithmic processes can't be simultaneous? 5*2=2+2+2+2+2 after all you say, Nor is there a specific solution or end to the process. I say, The solution is what the algorithm explains, in this case the Panorama of life. If you are conceding that Neo-Darwinism can not explain the Panorama of life, then we are in agreement and my point is made you say, Furthermore, neither NV or NS are simple. I say, I completely agree and that just means that it will require even more K-complexity to specify the model. peace fifthmonarchyman
fifthmonarchyman: The equal sign is a convention that assumes infinite time. Equal means equal. The ellipses means infinite expansion, which is an abstraction independent of time. fifthmonarchyman: Here is an approximation of Pi 3.14159. If you need an increased resolution you might try squaring a circle. It won’t get you to Pi but it’s probably good enough for government work. A person can get unbounded precision with the arithmetic series. fifthmonarchyman: infinite numbers can not be completely reproduced by finite algorithms in finite time. Glad you gave up your misunderstanding of K-complexity. To return to your original idea, it's not clear that NV + NS (natural variation and natural selection) is algorithmic as events can be simultaneous. Nor is there a specific solution or end to the process. Furthermore, neither NV or NS are simple. The latter entails not only the environment, but the relationship between the structure of the organism and the characteristics of the environment. (Consider that a sequence may fold into a complex three-dimensional shape with charges unevenly distributed along its surface, and how this shape then interacts with other such structures.) The former entails many types of variation, mutation, recombination, splicing, etc. Then there's also contingency, branching descent with all its exceptions, and so on. Zachriel
zac says, A finite algorithm can produce an infinite sequence in infinite time. I say, fair enough I will amend the statement..... infinite numbers can not be completely reproduced by finite algorithms in finite time. How is that for you? you say, Please note the equal sign. I say, The equal sign is a convention that assumes infinite time. Don't you agree? You say, What is your reply? I say Dear sirs, "I can't send you an infinite sequence over this channel. Here is an approximation of Pi 3.14159. If you need an increased resolution you might try squaring a circle. It won't get you to Pi but it's probably good enough for government work" I would give a similar reply if someone asks me to send my wife over the internet. I might send a picture and a note on how they can learn more about her. That is all I can do. What I would not do is send a 2D picture or an evolutionary algorithm and claim that it "equals" my wife. peace fifthmonarchyman
fifthmonarchyman: All I’m saying is that infinite numbers can not be completely reproduced by finite algorithms. That is incorrect. A finite algorithm can produce an infinite sequence in infinite time. fifthmonarchyman: 4*(1 – 1/3 + 1/5 – 1/7 + …) is an algorithm to approximate Pi. That is also incorrect. Pi = 4 * (1 – 1/3 + 1/5 – 1/7 + … ). Please note the equal sign. Someone transmits this request: Please send the value of pi, the ratio of the circumference of a circle to its diameter. What is your reply? Zachriel
skram, This is non-controversial. It is also irrelevant. I say, Irrelevant for what? We were talking about different measures of complexity and how they are related, so the K-complexity of an infinite number is probably relevant. Don't you agree? You say, K-complexity is defined for finite strings. I say, I'm not disagreeing. So now we need to decide if it can be used for infinite ones. I'd say yes, as long as we agree that it is unbounded for such strings. I'm willing to hear opposing arguments but they need to be informed arguments, not just pronouncements you said, Piotr’s point illustrates well that a long sequence of digits of π is a K-compressible object, perfectly in agreement with Kolmogorov’s definition. I say, That's why I agreed with him. Geez peace fifthmonarchyman
fifthmonarchyman: All I’m saying is that infinite numbers can not be completely reproduced by finite algorithms. This should not be controversial This is non-controversial. It is also irrelevant. K-complexity is defined for finite strings. Piotr's point illustrates well that a long sequence of digits of π is a K-compressible object, perfectly in agreement with Kolmogorov's definition. skram
skram says, I don’t understand what you are saying. I say All I'm saying is that infinite numbers can not be completely reproduced by finite algorithms. This should not be controversial you say K-complexity is about specification and we all can agree that the digits of Pi represent a compressible string since it can be specified in a more economical way than by writing out the string itself. I say again I'm not denying the compressibility I'm denying that the compression is nonlossy. Again this should not be controversial a finite algorithm can not completely reproduce an infinite string. peace fifthmonarchyman
fifthmonarchyman, I don't understand what you are saying. The compression of the digits of π is certainly algorithmic. Zachriel's recipe gives an explicit algorithm. Your specification requires a recipe (i.e., an algorithm) through which the circumference is computed. (E.g., through a polygon approximation.) But anyway, K-complexity is about specification and we all can agree that the digits of π represent a compressible string since it can be specified in a more economical way than by writing out the string itself. skram
Skram says, To specify and to compute are two different things. I say That is my point!!!! Tell that to Zac he is treating an approximating algorithm as a specification. you say Your own specification (the ratio of the circumference and diameter of a circle) does not give any digits of Pi, it merely specifies one recipe to obtain the number. I say, exactly!!!! I'm not trying to calculate Pi with my specification I'm merely specifying it. The K-complexity of the symbol Pi is simple and the compression it yields is non algorithmic and nonlossy but at the same time complete. That is the beauty of axioms, Axioms are not algorithms peace fifthmonarchyman
fifthmonarchyman: Agreed the question is can a finite algorithm ever produce all the digits of Pi. No, that isn't the question. To specify and to compute are two different things. Your own specification (the ratio of the circumference and diameter of a circle) does not give any digits of π, it merely specifies one recipe to obtain the number. skram
Hey Guys, This is all about lossy versus non-lossy compression. I agree that 4*(1–1/3+1/5–1/7+…) can compress 3.141592653589793238462643383279502884197169399375105820974944592307816406286…… But information is inevitably lost: the string is infinite; the product of the algorithm is finite. peace fifthmonarchyman
Piotr says, the Kolmogorov complexity of Pi is C (a constant, computable from the length of the minimal algorithm that generates the digits of Pi) I say, Agreed; the question is whether a finite algorithm can ever produce all the digits of Pi. The answer is no you say, The complexity of the string of the first n digits of Pi is C + log(n), because in addition to the algorithm you have to specify n. I say, again agreed. It's important to say that we don't need to know all the digits to know n digits, but the term Pi specifies all of them. I'm not sure you intend to but I think we are saying the same thing here. peace fifthmonarchyman
fifthmonarchyman, take a look at Piotr's last comment. Writing down n digits of π directly requires a string of length n. However, we can write an algorithm outputting the n digits as a program of length of the order of log(n). This program is much shorter than the original string. Hence the digits of π represent a K-compressible string. skram
fifthmonarchyman, Formally, the Kolmogorov complexity of Pi is C (a constant, computable from the length of the minimal algorithm that generates the digits of Pi). The complexity of the string of the first n digits of Pi is C + log(n), because in addition to the algorithm you have to specify n. Piotr
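For concreteness, a minimal sketch of Piotr's point (the function name is made up, and Machin's formula is used instead of the slower Leibniz series discussed in this thread, purely for illustration): the program text is fixed, and to get the first n digits only the argument n changes, so the total description is a constant plus roughly the log(n) characters needed to write n.

```python
def pi_digits(n):
    """Return pi to n decimal digits, using Machin's formula and integer arithmetic only."""
    def arctan_inv(x, one):
        # arctan(1/x) scaled by `one`, summed as an alternating series until the terms vanish
        term = total = one // x
        x2, k, sign = x * x, 3, -1
        while term:
            term //= x2
            total += sign * (term // k)
            k += 2
            sign = -sign
        return total

    one = 10 ** (n + 10)  # ten guard digits
    pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return "3." + str(pi_scaled)[1:n + 1]

print(pi_digits(50))
# The program text above never changes; only the argument n does. Writing n takes
# about log10(n) characters, which is the "C + log(n)" in Piotr's comment.
```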
hey Skram, Zac is attempting to show that a specification and an algorithmic model are the same thing. This is what the discussion is about. The term "J's#" has more information inherent in it than this algorithm 8675000+400-91 Can't you see that? peace fifthmonarchyman
skram says The definition on Wikipedia mentions the computational resources needed to specify an object, not resources to actually compute it. I say No I'm reading carefully 4*(1 – 1/3 + 1/5 – 1/7 + …)is an algorithm to approximate Pi. Pi is a term that specifies the ratio of a circle's circumference to its diameter Do you not see the difference? peace fifthmonarchyman
Skram says, Kolmogorov’s complexity only asks the question of whether it is possible to write a short program that can output π’s digits. I say, What? K-complexity can be given for a text string. It is not only interested in digits. Besides, the N in question is infinite. We are not asking for an algorithm to compute Pi to the nth digit; we are asking for an algorithm to specify all the digits. No finite algorithm can compute an infinite number in finite time. Don't you agree? But with the symbol/axiom Pi I can specify the whole shebang at one time all at once peace fifthmonarchyman
fifthmonarchyman, You are not reading carefully. The definition on Wikipedia mentions the computational resources needed to specify an object, not resources to actually compute it. skram
Zac says Pi is K-simple. I say Here is the definition of k-complexity quote: the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. end quote: from here http://en.wikipedia.org/wiki/Kolmogorov_complexity the computability resources needed to specify pi algorithmically are infinite. Pi is infinite; you can't get to Pi by a finite process. I'm not sure why this is so difficult to grasp. you say, Notice the equal sign. They are the same number. If I Google Pi I get the following Pi=3.14159265359 Surely you don't believe that that short 13-character number is all there is to Pi? You say, Try to avoid using words like Kolmogorov complexity and computability, as you clearly don't understand their meaning in mathematics. I say Try to avoid making official-sounding pronouncements about mathematical meanings unless you are able to give evidence of your claims. Please explain with evidence how a finite algorithmic process can fully specify an infinite number peace fifthmonarchyman
fifthmonarchyman, you are wrong about π. Kolmogorov's complexity only asks the question of whether it is possible to write a short program that can output π's digits. It does not ask the question of how long it will take. The formula cited by Zachriel is such an algorithm. It will reproduce π to an arbitrary precision, given enough time. In fact, you can even tell how many terms in that expression must be summed in order to achieve a given accuracy. Thus every single digit of π is obtainable through that algorithm. Therefore π turns out to be K-compressible. skram
fifthmonarchyman: The fact is I’m using the standard definition of Kolmogorov complexity No, you're not. Pi is K-simple. fifthmonarchyman: Pi has meaning in a way that the algorithm 4*(1 – 1/3 + 1/5 – 1/7 + …) does not. Pi = 4 * (1 – 1/3 + 1/5 – 1/7 + … ) Notice the equal sign. They are the same number. fifthmonarchyman: Pi can’t be produced by any algorithm simple or otherwise it can only be approximated. That is incorrect. Pi is K-simple per the definition of Kolmogorov complexity, because it can be expressed as a simple algorithm. It's the length of the shortest algorithm that determines K-complexity. ETA: Try to avoid using words like Kolmogorov complexity and computability, as you clearly don't understand their meaning in mathematics. Zachriel
Zac says, Kolmogorov complexity is a proper noun. You can’t change its definition willy-nilly. I say, Why is it that any time I explore the implications of a concept you accuse me of changing definitions? The fact is I'm using the standard definition of Kolmogorov complexity; you just don't like the implications I'm drawing You say, Pi is Kolmogorov simple because it can be produced by a simple algorithm. I say, Pi can't be produced by any algorithm, simple or otherwise; it can only be approximated. You say, Suppose you wanted to send someone your message, the sequence above, to some arbitrary limit. You could send the literal, but because it is non-compressible, it might take a while. Or you could send a short algorithm, and the recipient could calculate the expansion themselves. I say, You are still missing the point. I do not want to communicate the sequence to some arbitrary limit. I want to communicate the entire sequence, all of it. In fact, to communicate to an arbitrary limit is to lose information that is central to what I want to convey. It's the difference between a 2D picture of my wife and my wife. 3.141592653589793238462643383279502884197169399375105820974944592307816406286…… is not compressible by any means algorithmic or otherwise unless you know the specification. Then it is highly compressible. Just like J's# Pi has meaning in a way that the algorithm 4*(1 – 1/3 + 1/5 – 1/7 + …) does not. A specification is not the same as an approximating algorithm. Not sure how many more ways I can state it. peace fifthmonarchyman
fifthmonarchyman: It will only approximate the number to ever greater accuracy forever. Kolmogorov complexity is a proper noun. You can't change its definition willy-nilly. Pi is Kolmogorov simple because it can be produced by a simple algorithm. Suppose you wanted to send someone your message, the sequence above, to some arbitrary limit. You could send the literal, but because it is non-compressible, it might take a while. Or you could send a short algorithm, and the recipient could calculate the expansion themselves. Zachriel
Zac says That is incorrect. Given arithmetic, there is a simple algorithm which can calculate the series. You provided one yourself. I say That algorithm will never get you to 3.141592653589793238462643383279502884197169399375105820974944592307816406286…… It will only approximate the number to ever greater accuracy forever. Peace fifthmonarchyman
fifthmonarchyman: I would say that the string that I posted has 0 probability of happening by chance and infinite K-complexity when considered from an algorithmic starting point. That is incorrect. Given arithmetic, there is a simple algorithm which can calculate the series. You provided one yourself. 4 * sum for n = 1 to infinity of ((n mod 2 * 2 ) - 1) / (n * 2 - 1) Zachriel
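Zachriel's one-liner is just the Leibniz series written out; a quick, purely illustrative sketch of its partial sums (the function name is made up):

```python
import math

def leibniz_pi(terms):
    # 4 * sum over n of ((n mod 2)*2 - 1) / (2n - 1): the series +1, -1/3, +1/5, -1/7, ...
    return 4 * sum(((n % 2) * 2 - 1) / (2 * n - 1) for n in range(1, terms + 1))

for terms in (10, 1_000, 100_000):
    approx = leibniz_pi(terms)
    print(f"{terms:>6} terms: {approx:.10f}   error {abs(math.pi - approx):.1e}")
# The algorithm is short (that is the sense in which pi is "K-simple"), but any
# finite number of terms only approximates pi; the error after N terms is on the
# order of 1/N, which is the point pressed by the other side of this exchange.
```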
zac says, Anything that can solve problems can be treated as an oracle. I say, The problems need to include "halting problems." Are you actually claiming that evolution can solve halting problems? zac says, It’s not clear evolution is algorithmic as there is a great deal of simultaneity involved, more like a neutral network. I say, Are you claiming that evolution is conscious? zac says, That makes it {God} unbounded K-complexity, not zero. That’s because its shortest description is at least as long as an exhaustive list of its operations I say, There is no exhaustive list, just as there are no inner workings. An oracle is a black box when viewed from an algorithmic perspective, so the K-complexity is zero. you say Just because you call it a black box doesn’t mean it doesn’t entail complexity I say I'm not necessarily claiming here that God is not complex. Although I would claim he is simple. I'm claiming that he, like any oracle or axiom, is not K-complex. peace fifthmonarchyman
Zac says, Complexity measures don’t work that way. A given measure may be finite while another may be infinite. I say I would say that the string that I posted has 0 probability of happening by chance and infinite K-complexity when considered from an algorithmic starting point. However if an oracle specifies the string with the constant/Axiom Pi it can be described very easily. Here is an easier one. Look at this string: 8675309. The probability is .0000001. The K-complexity from an algorithmic starting point is a few bytes larger than the length of the string itself. However Jenny and Tommy Tutone know that number so they could specify it with much less K-complexity. Something like J's# would work. The deep connection between the K-complexity of Jenny's specification and the probability of the string occurring by chance is what CSI is all about. peace fifthmonarchyman
fifthmonarchyman: Since God is not algorithmic he adds zero to the K-complexety. That makes it unbounded K-complexity, not zero. That's because its shortest description is at least as long as an exhaustive list of its operations. fifthmonarchyman: You’re the one who insisted that the use of oracles didn’t add complexity to the system. Z: Just because you call it a black box doesn’t mean it doesn’t entail complexity. There are even complexity classes for oracle machines. In any case, the minimal descriptive complexity is at least equal to the information passed through the interface. fifthmonarchyman: Are you now claiming that “Evolution” is an oracle? Anything that can solve problems can be treated as an oracle. fifthmonarchyman: That is quite a claim for an algorithmic process like “Evolution” wouldn’t you agree. It's not clear evolution is algorithmic as there is a great deal of simultaneity involved, more like a neutral network. Zachriel
Zac said, You’re the one who insisted that the use of oracles didn’t add complexity to the system. I say, Are you now claiming that "Evolution" is an oracle? quote: an oracle machine can be visualized as a Turing machine with a black box, called an oracle, which is able to decide certain decision problems in a single operation. The problem can be of any complexity class. Even undecidable problems, like the halting problem, can be used. end quote: from here http://en.wikipedia.org/wiki/Oracle_machine That is quite a claim for an algorithmic process like "Evolution" wouldn't you agree. peace fifthmonarchyman
zac said, God did it through gravity is more complex than God did it. The former includes two entities, while the latter only includes one I say, Again, K-complexity is only interested in the length of the algorithm. Since God is not algorithmic he adds zero to the K-complexity. Please, if you disagree with this statement, explain why. Don't just keep repeating yourself. How would you quantify the K-complexity of a Turing Oracle? peace fifthmonarchyman
fifthmonarchyman: If that sentiment was expressed more often by the powers that be you would have a much less restless and more content populace when it comes to Darwinian Evolution. Doubtful. Creationism is an entrenched social and cultural phenomenon. fifthmonarchyman: Pi has a precise meaning. It only has a precise meaning with regards to K-complexity if you allow it in the language. fifthmonarchyman: The meaning of the term “evolution” keeps evolving as new detail is required. unless you want define evolution as simply change over time. You're the one who insisted that the use of oracles didn't add complexity to the system. fifthmonarchyman: I can measure age with carbon dating or by counting tree rings I will just need to calibrate as the need arises. Complexity measures don't work that way. A given measure may be finite while another may be infinite. Zachriel
wow zac here goes Zac says, it’s unlikely to ever be complete in any real sense, certainly not in the near term. I say, I tell you this as a Darwin critic and a fundi. If that sentiment was expressed more often by the powers that be you would have a much less restless and more content populace when it comes to Darwinian Evolution. Instead what we usually get is something like "NeoDarwinism has explained the panorama of life without the need for a designer" you say, That is true, but if we admit the description word “evolution” into the language, then it is only one word, just like pi is only one word. I say, Pi has a precise meaning. The meaning of the term “evolution” keeps evolving as new detail is required, unless you want to define evolution as simply change over time. If you did that even rabid YECs would be happy with that term. you say, No scientific test for God did it has been developed. I say, Here is one. If God did it he must have different attributes than Zac ;-) you say, K-complexity can only be evaluated in a given description language, so you have to specify the language. I say, In order to make this easy you are free to choose your own language. Just tell me which one you used; once you do that, translation into any language is not much of a problem. you say, Similarity is not the same as being equal. I say, I completely agree. If two measurements are similar we can use one as a proxy for the other as long as we make note of the differences as they arise. I can measure age with carbon dating or by counting tree rings; I will just need to calibrate as the need arises. peace fifthmonarchyman
fifthmonarchyman: So if I said that Darwinism in it’s most recent form is incomplete to adequately explain the panorama of life you would agree. If the theory of evolution were complete, there would be no need for evolutionary biologists. fifthmonarchyman: Better yet if I said any materialistic model that could possibly be offered to explain the panorama of life is destined to be incomplete you would agree. Sure, especially as evolutionary theory includes an historical component, it's unlikely to ever be complete in any real sense, certainly not in the near term. fifthmonarchyman: Not when measured by K complexety That is incorrect. God did it through gravity is more complex than God did it. The former includes two entities, while the latter only includes one. fifthmonarchyman: I have already shown a scholarly case can be made that all measures of complexity are equivalent That is incorrect. For instance, algorithmic complexity is not equivalent to effective complexity or logical depth. fifthmonarchyman: Zac did it could also be inserted for anything and everything and just like the hypothesis that God did it we can propose scientific ways of testing the claim. No scientific test for God did it has been developed. fifthmonarchyman: I’m not saying that the expansion of Pi requires additional K-complexity I’m saying that instruction to halt the algorithm at a certain digit requires additional k-complexity. It requires precisely one number. fifthmonarchyman: on the other hand the specification “Pi” is not subject to this weakness because we know it is an irrational constant and it can therefore be treated as an axiom in our model If pi is a member of the description language, then it is simpler than the algorithm, however, you had said the K-complexity of the algorithm was unbounded, which was incorrect. fifthmonarchyman: By the same token When it comes to a model for evolution each additional detail that is added to the RM plus NS core will require a longer description and thus is more K-complex. That is true, but if we admit the description word "evolution" into the language, then it is only one word, just like pi is only one word. fifthmonarchyman: the answers that they came up with for how to measure complexity bear a considerable similarity to each other. Similarity is not the same as being equal. fifthmonarchyman: What is their K-complexity? K-complexity can only be evaluated in a given description language, so you have to specify the language. Zachriel
a little more on topic keith S said In that quote, Lloyd is referring to “complexity” as used by mainstream scientists and mathematicians, not Dembski’s idiosyncratic usage where “complexity” actually means “improbability”. I say, consider the following string of digits with zero entropy 3.141592653589793238462643383279502884197169399375105820974944592307816406286...... two questions What is the probability that they will occur by chance? What is their K-complexity? peace fifthmonarchyman
keith s said, In that quote, Lloyd is referring to “complexity” as used by mainstream scientists and mathematicians, not Dembski’s idiosyncratic usage where “complexity” actually means “improbability”. I say, I'm not discussing CSI right now. I'm discussing using K-complexity to measure the complexity of a given model. So Lloyd's comment is very germane to my point. I realize that this overall thread is about CSI so you can ignore my off-topic digressions if you wish. As far as complexity in CSI goes I think you know my position. Agree to disagree. Peace fifthmonarchyman
keith s says 1. The two answers are equal only if the expansion goes on forever. I say, Again you are missing the contribution of the oracle. When I deal with Pi I can treat it like a natural number; I don't need to do the math, I simply plug in the value with the resolution I need. An algorithm on the other hand does not have that option. You say, The Kolmogorov complexity of the expansion is fixed. It doesn’t “grow forever”. I say Here is where I think I might have been a little unclear. I agree that the K-complexity of 4*(1 – 1/3 + 1/5 – 1/7 + …) is fixed; that is not my argument. My argument is that 4*(1 – 1/3 + 1/5 – 1/7 + …) is not enough to compute the ratio of an actual physical circle’s circumference to its diameter. To describe the ratio of an individual physical circle you need to add a function to halt the algorithm at a given resolution, then you need to add additional descriptive language to model the places where the circle deviates from the original algorithm; all of these add additional K-complexity. That complexity will grow until you reach the limits of your measurement system. You say 3. The Kolmogorov complexity is determined by the shortest description of an entity using the given description language. Since “Pi” and the Taylor series expansion refer to the same number, their Kolmogorov complexities with respect to any particular description language are equal. I say, As I said before you are missing the contribution of the oracle. Pi is only a valid term when you know what it means. I hope I am making myself clear now. I appreciate the feedback. Once I grasp a concept I often simply assume that everyone else can see what I see. I struggle at times with clear explanation. peace fifthmonarchyman
fifthmonarchyman:
I’m referring to Seth Lloyd’s claim.
In that quote, Lloyd is referring to "complexity" as used by mainstream scientists and mathematicians, not Dembski's idiosyncratic usage where "complexity" actually means "improbability".
Are you saying that you think that Dembski is more of an authority on complexity measures than a professor of mechanical engineering at the MIT?
No, I'm saying that despite all his faults, even Dembski understands the difference between Kolmogorov complexity and improbability:
But given nothing more than ordinary probability theory, Kolmogorov could at most say that each of these events had the same small probability of occurring, namely 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring. Since probabilities alone could not discriminate E sub R from E sub N, Kolmogorov looked elsewhere. Where he looked was computational complexity theory. The Design Inference, p. 169
Take some time to learn about Kolmogorov complexity, FMM. It's a really interesting subject. keith s
fifthmonarchyman:
It’s possible I did not make myself clear enough at the outset. I understand your point but you are misunderstanding mine. I’m not saying that the expansion of Pi requires additional K-complexity I’m saying that instruction to halt the algorithm at a certain digit requires additional k-complexity.
No, here's what you said:
suppose I wanted to calculate the K-complexity of the answer to this question “what is the ratio of a circle’s circumference to its diameter?” We could start out by trying to square the circle and end up with an algorithm like this 4*(1 – 1/3 + 1/5 – 1/7 + …) The k-complexity using this approach is huge and grows forever as more detail is needed. On the other hand we could answer the question with the following solution “Pi”
You're making multiple mistakes here. 1. The two answers are equal only if the expansion goes on forever. 2. The Kolmogorov complexity of the expansion is fixed. It doesn't "grow forever". 3. The Kolmogorov complexity is determined by the shortest description of an entity using the given description language. Since "Pi" and the Taylor series expansion refer to the same number, their Kolmogorov complexities with respect to any particular description language are equal. keith s
@ Keith S It's possible I did not make myself clear enough at the outset. I understand your point but you are misunderstanding mine. I'm not saying that the expansion of Pi requires additional K-complexity I'm saying that instruction to halt the algorithm at a certain digit requires additional k-complexity. on the other hand the specification "Pi" is not subject to this weakness because we know it is an irrational constant and it can therefore be treated as an axiom in our model By the same token When it comes to a model for evolution each additional detail that is added to the RM plus NS core will require a longer description and thus is more K-complex. I hope that clarification is sufficient You say, No. Dembski understands this I say, I am not particularly bound by Dembski's opinion one way or another. Especially given the newness of the whole enterprise. Ideas are in flux and can be modified as the result of discussion and contemplation. I'm referring to Seth Lloyd's claim. quote: An historical analog to the problem of measuring complexity is the problem of describing electromagnetism before Maxwell's equations. In the case of electromagnetism, quantities such as electric and magnetic forces that arose in different experimental contexts were originally regarded as fundamentally different. Eventually it became clear that electricity and magnetism were in fact closely related aspects of the same fundamental quantity, the electromagnetic field. Similarly, contemporary researchers in architecture, biology, computer science, dynamical systems, engineering, finance, game theory, etc., have defined different measures of complexity for each field. Because these researchers were asking the same questions about the complexity of their different subjects of research, however, the answers that they came up with for how to measure complexity bear a considerable similarity to each other. end quote: from here http://web.mit.edu/esd.83/www/notebook/Complexity.PDF Are you saying that you think that Dembski is more of an authority on complexity measures than a professor of mechanical engineering at the MIT? peace fifthmonarchyman
keiths:
Kolmogorov complexity is about the length of the description, not about the length of the computation.
fifthmonarchyman:
Of course but the description is simply an algorithm to compute the model.
The description and the computation are distinct, which is why your statement about the expansion of pi was incorrect:
The k-complexity using this approach is huge and grows forever as more detail is needed.
The Kolmogorov complexity doesn't grow at all, because the algorithm doesn't change. The description is finite, but the expansion is infinite. The algorithm is distinct from the result. Likewise, this description of the natural numbers is finite:
1. 0 is a natural number. 2. If n is a natural number, then n+1 is a natural number.
The description is short and simple, but the set being described is infinite. Low Kolmogorov complexity, large size.
Not when measured by K-complexity, and as I have already shown, a scholarly case can be made that all measures of complexity are equivalent
No. Dembski understands this. Reread this quote. keith s
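For what it's worth, a minimal sketch of that natural-numbers point (hypothetical code, not from the thread): the description below is a few lines long, yet the set it describes is infinite, and any finite prefix can be drawn from it.

```python
from itertools import islice

def naturals():
    # keith s's two clauses as a generator: 0 is a natural number; if n is, so is n + 1.
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 10)))  # [0, 1, ..., 9]: a three-line description, an infinite set
```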
Think, Joe. A crystal consists of a unit cell repeated many times in a specific spatial pattern. Orgel explains it in the very passage that Barry quoted in the OP:
Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way.
To specify a cylindrical crystal of pure silicon, you would merely need to 1. Specify the unit cell. 2. Specify the spatial pattern for adding new unit cells to an existing crystal. 3. Specify the boundaries of the cylindrical crystal (to stop adding unit cells when the crystal has reached the desired size and shape). The crystal's simple description gives it a low Kolmogorov complexity. However, such a crystal is highly unlikely to form spontaneously, giving it a high specified complexity by Dembski's metric. Kolmogorov complexity is obviously not the same as Dembski's specified complexity. Barry, KF and fifthmonarchyman got it wrong. keith s
I challenge keith s to provide the Kolmogorov complexity of a "cylindrical crystal of pure silicon" or stop using it as it is clear he doesn't have a clue. Show your work, keith. Joe
zac said We merely pointed out that God did it can be inserted for anything, and by the standard you set, can be inserted for everything. I say Zac did it could also be inserted for anything and everything and just like the hypothesis that God did it we can propose scientific ways of testing the claim. Do you think we can not test the claim that Zac did X? peace fifthmonarchyman
Zac, All scientific theories are incomplete by nature. I say, So if I said that Darwinism in its most recent form is incomplete to adequately explain the panorama of life you would agree. Better yet if I said any materialistic model that could possibly be offered to explain the panorama of life is destined to be incomplete you would agree. Correct? Now we are getting somewhere you say, More complicated. I say, Not when measured by K-complexity, and as I have already shown, a scholarly case can be made that all measures of complexity are equivalent peace fifthmonarchyman
fifthmonarchyman: By the same token Darwinism is offered as model for the Panorama of life and because it is an algorithm it can never get you there. All scientific theories are incomplete by nature. They're not judged by a standard of perfection, but of utility and explanatory power. fifthmonarchyman: If you define scientific as algorithmic then only algorithmic models qualify. We did no such thing. We merely pointed out that God did it can be inserted for anything, and by the standard you set, can be inserted for everything. fifthmonarchyman: What about God did it using gravity? More complicated. fifthmonarchyman: Of course but the description is simply an algorithm to compute the model. The algorithm is the model. Zachriel
Keith S says Kolmogorov complexity is about the length of the description, not about the length of the computation I say Of course but the description is simply an algorithm to compute the model. peace fifthmonarchyman
Zac says By that standard the simplest model of planetary movement is God did it, much simpler than gravity. I say, What about God did it using gravity? This is just as simple, computationally speaking, as gravity, but unlike gravity alone it is completely sufficient to explain the phenomena. peace fifthmonarchyman
Zac says By that standard the simplest model of planetary movement is God did it, I say If you ask a classical musician how they got to Carnegie Hall the simplest model is Practice!!!!! It is simple and it is complete and it is valid; it just lacks comprehensive detail. Of course you could try to square the circle and propose an alternative algorithmic model that has more detail but can never be complete. No specific step-by-step procedure or formula, no matter how complex, will ever get a musician to Carnegie Hall; there are intangibles that will always come into play. By the same token Darwinism is offered as a model for the Panorama of life and because it is an algorithm it can never get you there. It will always fall short; it will always need additional tweaks, each with ever more computational costs. You say, (Of course, that’s not a scientific explanation, or even a scientific description, but that should be apparent.) I say Ah now we are at the crux of the matter. If you define scientific as algorithmic then only algorithmic models qualify. It also means that things like archeology are not science. Peace fifthmonarchyman
DDD, It is well known in modelling and analysis that transforming from one form to another is a reasonable approach. Here, log-probabilities are info measures and info as I outlined above can be directly counted off (esp. in bits, effectively what Orgel described) and can be refined using stochastic measures. For D/RNA, even were the info per codon only 1 bit (effectively, hydro phil/phob for instance), the set of proteins for simplest life takes us well past the 500 - 1,000 bit threshold. If you want protein metrics that use the exploration of AA sequence space for proteins across the world of life as an index of variability on locus, try Durston et al, which I clipped from and noted on here, which builds on Shannon's H and I values already explained above. There are links to the 2007 paper, which gives Fits values for 15 protein families based on a very reasonable empirical process. The fits metric they give measures info in light of what is flexible to what extent. But of course this is a case where no information, evidence or logic will move those locked into Lewontin's a priori evolutionary materialism. Only public collapse of the system will, as happened for Marxism. KF kairosfocus
fifthmonarchyman, A few days ago, you wrote:
If I understand Kolmogorov complexity it is not about the effort it takes to describe something it is about the effort it takes to compute it
That's not correct, and you're making the same mistake when you write things like this:
The k-complexity using this approach is huge and grows forever as more detail is needed.
Kolmogorov complexity is about the length of the description, not about the length of the computation. keith s
fifthmonarchyman: Pi*r^2 is less complex than r^2*(4*(1 – 1/3 + 1/5 – 1/7 + …) Don’t you agree? Yes, given that pi is part of the description language; but your claim was that the k-complexity of the expansion is huge, and essentially unbounded. That claim isn't true. It's simple and finite. fifthmonarchyman: It’s not a God of the gaps argument it’s a argument for the less complex model. By that standard the simplest model of planetary movement is God did it, much simpler than gravity (as long as you don't have to include the complexity of God did it). Indeed, by that standard, the simplest description of every single phenomenon in the entire universe, from the smallest quantum interaction to why Mabel turned into the Five-and-Dime, to the large-scale structure of the universe is God did it (as long as you don't have to include the complexity of God did it). (Of course, that's not a scientific explanation, or even a scientific description, but that should be apparent.) Zachriel
Zac says, The model includes the oracle, obviously. I say, Just as Pi*r^2 includes Pi. K-complexity speaking, Pi*r^2 is less complex than r^2*(4*(1 – 1/3 + 1/5 – 1/7 + …)). Don't you agree? You say, That’s a novel use of God of the Gaps, though! I say, It's not a God of the gaps argument, it's an argument for the less complex model. You say, You can’t use the term K-complexity and invent a new meaning. I say, No new meaning here. I am talking about the length of the shortest possible description of the model to explain the panorama of life. You say The decimal expansion of one divided by three is algorithmically simple. I say, So? What does a non-halting decimal expansion of 1/3 have to do with a process that can't be modeled via an algorithm? peace fifthmonarchyman
fifthmonarchyman: I never said there was necessarily no complexity in Turing Oracles it’s just there is no additional K-complexety in a model that contains them. The model includes the oracle, obviously. That's a novel use of God of the Gaps, though! fifthmonarchyman: It is simple to state that a series begins but very very difficult to say when it should end K-complexity is a proper name. You can't use the term K-complexity and invent a new meaning. The decimal expansion of one divided by three is algorithmically simple. Zachriel
Zac says, Just because you call it a black box doesn’t mean it doesn’t entail complexity. I say, I never said there was necessarily no complexity in Turing Oracles; it's just that there is no additional K-complexity in a model that contains them. There is a difference. You say, Your expansion is a simple series, which is algorithmically simple. I say It is simple to state algorithmically that a series begins but very, very difficult to say when it should end. peace fifthmonarchyman
fifthmonarchyman: By definition there are no “workings” to be described when it comes to a Turing Oracle it is a black box when it comes to computational description. Heh. Just because you call it a black box doesn't mean it doesn't entail complexity. There are even complexity classes for oracle machines. In any case, the minimal descriptive complexity is at least equal to the information passed through the interface. fifthmonarchyman: The k-complexity using this approach is huge and grows forever as more detail is needed. No. That is not correct. Your expansion is a simple series, which is algorithmically simple. Zachriel
Allow me to elaborate a little. Suppose I wanted to calculate the K-complexity of the answer to this question "what is the ratio of a circle's circumference to its diameter?" We could start out by trying to square the circle and end up with an algorithm like this 4*(1 - 1/3 + 1/5 - 1/7 + ...) The k-complexity using this approach is huge and grows forever as more detail is needed. On the other hand we could answer the question with the following solution "Pi" This solution has very little K-complexity and is more complete than our first attempt. The problem with Darwinism is that it begins with the first approach and can only continue to increase in complexity as time passes. It will never be simpler. peace fifthmonarchyman
Zac said, If the agent is non-algorithmic, it adds at least as much K-complexity as required to describe the workings of the agent, which may very well be infinite. I say, By definition there are no "workings" to be described when it comes to a Turing Oracle it is a black box when it comes to computational description. An Oracle may be very complex just not K-Complex. check it out http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf and http://en.wikipedia.org/wiki/Oracle_machine Peace fifthmonarchyman
fifthmonarchyman: By that logic a model in which the lower trunk of the tree of life is more bush like would be much more complex than one with one a unified trunk from the very beginning. That's correct, but still much lower descriptive complexity than if each kind is its own tree. fifthmonarchyman: On the other hand adding intelligent design to any model of evolution adds no K complexity at all because by definition intelligence is not algorithmic. What? Not by definition certainly. If the agent is non-algorithmic, it adds at least as much K-complexity as required to describe the workings of the agent, which may very well be infinite. Zachriel
Zac says, Depends what you mean by huge. I say, I mean having a large amount of K-complexity, more than, say, a simple explanation like RM/NS alone. You say, Without the organizing principle of common descent, then the descriptive complexity is even higher. I say, By that logic a model in which the lower trunk of the tree of life is more bush-like would be much more complex than one with a unified trunk from the very beginning. Wow, the more I think about it the more I realize that a Darwinistic model would have to be profoundly K-complex. Just think about it. Every single additional add-on to the simple RM/NS core costs the model more from a computational perspective. On the other hand adding intelligent design to any model of evolution adds no K-complexity at all because by definition intelligence is not algorithmic. Isn't that an odd insight? peace fifthmonarchyman
fifthmonarchyman: So any model based on Darwinism would have a huge amount of K complexity Depends what you mean by huge. Without the organizing principle of common descent, then the descriptive complexity is even higher. Zachriel
Zac says, Biological evolution is far more complex than that. Petrushka says, There are 20 or so named varieties of mutation. I say, So any model based on Darwinism would have a huge amount of K-complexity. Correct? peace fifthmonarchyman
There are 20 or so named varieties of mutation. Petrushka
fifthmonarchyman: In the case of Darwinism it can never be shorter than RM plus NS. Biological evolution is far more complex than that. fifthmonarchyman: The problem was that that simple description was not quite sufficient to explain the phenomenon so we needed to add the additional complexity of NeoDarwinism Darwin's theory was much more than 'RM plus NS'. It also included common descent with all its variations such as hybridization, not to mention contingency. 'RV plus NS' was shorthand for complex processes even in Darwin's time. Darwin lacked a working theory of genetics, but that doesn't mean he was unaware that there had to be a mechanism of heredity and for the generation of novelty. While the former was obvious, the latter you can treat as a prediction. fifthmonarchyman: his model relies on algorithmic mutation to introduce diversity, whereas real organisms generally undergo bitwise mutation. Bitwise mutation is only one form of variation. Variation also includes recombination, splicing, and various forms of network regulation. Zachriel
That being said I do appreciate the link, Me_Think. Gregory Chaitin is no slouch when it comes to this stuff so it looks like an interesting read. I'm not sure what to make of the generally poor reviews from both sides of the debate. I suppose I need to check it out myself. peace fifthmonarchyman
From Me_Think's link Quote: But there's one fundamental problem here - his model relies on algorithmic mutation to introduce diversity, whereas real organisms generally undergo bitwise mutation. Hence, his model allows for a much more sophisticated search of the genotype space than is allowed in nature. In the same vein, by his own admission, the model can not actually be simulated, because it relies on a fitness function that can not be guaranteed to produce a result. end Quote: This is what we get from a 2013 book. Apparently there is more work to be done to produce the "whole math of evolution". I wonder what the K complexity will be when and if we finally have a working model based on Darwinism. I would assume it will be huge. peace fifthmonarchyman
DesignDetectiveDave @ 59
Show me the maths
If you want to look beyond Dembski, you can read the whole math of evolution in: Proving Darwin: Making Biology Mathematical by Gregory Chaitin. Let me know how good it is. I haven't read it :-) Me_Think
E.Seigner:
Orgel: The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity.
There is no such thing as a simple living being. There is no such thing as a complex living being that is not specified. Orgel, or Dembski? Mung
poor keiths. so lost. Mung
Show me the maths. Show me a "please"? No, that's OK, manners are often lacking around here. But you're very laconic, and while I appreciate (and even envy) that quality, it means I'm not really sure what maths you mean. I have no idea how to calculate Orgel's specified complexity. He doesn't seem to have thought it was readily quantifiable either, but I haven't read his book so I'm just assuming that. Calculating Dembski's metrics is sort of notoriously, hilariously hard to do. But I think the best source of the calculation is Dembski's 2005 paper, Specification etc. He gives the calculation on page 24. I can't reproduce it here, because I don't know how to reproduce the non-Latin symbols. I also don't think I could calculate it, unless someone specified the inputs for me--figuring those out is also notoriously, hilariously hard to do. But I know that the calculation includes an explicit, necessary assessment of "P(T|H)." In the context of a biological structure, T is "the evolutionary event/pathway that brings about that pattern." H is "the relevant chance hypothesis that takes into account Darwinian and other material mechanisms." I don't think Orgel's thoughts on specified complexity contemplate either T or H. (Maybe T, if "evolve this thing" is a cognizable instruction, but then wouldn't his instruction sets be super-short and thus not complex?) I don't see anything like H in Orgel's work. KF keeps implying it's there. Maybe he can show you the maths. Learned Hand
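For anyone who wants to see roughly what that page-24 calculation looks like, here is a minimal Python sketch of the metric as Learned Hand describes it, on the usual reading chi = -log2(10^120 * phi_S(T) * P(T|H)), if I have recalled the 2005 paper correctly. Every number fed in below is a made-up placeholder; supplying real values for phi_S(T) and P(T|H) is exactly the hard part noted above.

```python
import math

def chi(phi_S_T: float, P_T_H: float, resources: float = 1e120) -> float:
    """Sketch of the 2005 specified-complexity metric as read above:
    chi = -log2(resources * phi_S(T) * P(T|H)), where 'resources' is the
    10^120 bound on bit operations in the observable universe."""
    return -math.log2(resources * phi_S_T * P_T_H)

# Purely hypothetical inputs, just to show the arithmetic:
print(chi(phi_S_T=1e20, P_T_H=1e-300))  # about 531.5; chi > 1 is the paper's cutoff
```

On that reading, the whole burden sits on estimating P(T|H), which is the point being made about the difference from Orgel's instruction-count approach.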
Zac says, There may be a shorter description that eludes you. I say, In the case of Darwinism it can never be shorter than RM plus NS. Correct? You say, With biology, there was the elaboration pre-Darwin, which was replaced by the simpler, unifying Darwinian description. I say, The problem was that that simple description was not quite sufficient to explain the phenomenon, so we needed to add the additional complexity of NeoDarwinism. Correct? peace fifthmonarchyman
fifthmonarchyman: K complexity is easy. K-complexity is hard. K-complexity refers to the shortest possible description in a given description language. Providing a description and showing it is the shortest possible description are quite different problems. fifthmonarchyman: Now if it turned out that our model did not completely explain the Panorama of life we would have to add terms like neutral drift and HGT to the algorithm thus increasing it’s complexity So let's say we have a very elaborate description, perhaps an astrolabe to mirror the movements of the planets. However, you haven't shown it is the simplest description. There may be a shorter description that eludes you, say gravity. With biology, there was the elaboration pre-Darwin, which was replaced by the simpler, unifying Darwinian description. Before Darwin, there were all these individual species. After Darwin, they were all manifestations of a common ancestry. Zachriel
Show me the maths. DesignDetectiveDave
DDD, what don't you believe? That Orgel's complexity isn't calculable, or that Dembski's is? Orgel uses E. coli DNA as an example, and points out that the instructions for building it would have to specify each strand to get an accurate copy. I guess you could calculate that, if the algorithm were just picking bases. Dembski's approach is different, though. He's not looking at how many steps it would take to assemble something, but rather whether that assembly could happen without design. His calculations require an assessment of the probability of a non-design origin. The length of Orgel's instruction chain is relevant to those calculations--I agree there's a connection between these ideas--but it's not an equivalency. Dembski's definitions are far afield of Orgel's thinking, based on the excerpt our host defensively asserts is definitive proof that they're identical. Learned Hand
K complexity is easy. Just measure the length of the algorithm that produces a particular output. The question is, do we have a candidate algorithm that we can measure? Suppose we were using the following algorithm: "RM + NS = Panorama of life". Depending on the language we would assign a value to the model, let's say 10. Now if it turned out that our model did not completely explain the Panorama of life we would have to add terms like neutral drift and HGT to the algorithm, thus increasing its complexity. We would continue the process until "Panorama of life" was completely explained. peace fifthmonarchyman
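As a footnote to the above: true Kolmogorov complexity is uncomputable, so in practice one only ever gets upper bounds, for instance by running a general-purpose compressor over a description of the model or its output. A minimal Python sketch, using zlib purely as a stand-in compressor (the strings below are arbitrary illustrations, not anyone's actual model):

```python
import random
import string
import zlib

def description_upper_bound(s: str) -> int:
    """Length in bytes of a zlib-compressed encoding: an upper bound on
    description length, not the true (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

ordered = "ab" * 5000                         # "ab, repeat 5000 times"
patternless = "".join(random.choices(string.ascii_lowercase + string.digits, k=10000))

print(description_upper_bound(ordered))       # tens of bytes
print(description_upper_bound(patternless))   # thousands of bytes
```

The catch, as noted in the replies, is that a short compressed encoding only shows a description is at most that long; showing that no shorter one exists is the hard part.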
I don't believe you. Can anyone here do the math? KF? DesignDetectiveDave
How would you calculate Orgel's complexity? I think you've identified another fundamental difference between the two concepts: Dembski's is actually calculable in theory, if you know inputs such as P(T|H). I think he even acknowledged that difference. If I remember right, he said he was trying to operationalize or formalize Orgel's work. Learned Hand
If you think they're different LH why don't you calculate both and show it? DesignDetectiveDave
Regardless I’d say complexity is complexity as far as information in CSI goes it’s the specification that is important . I think I got that impression from your earlier comments, and I think that's where I mistook your meaning. I disagree in that I think Orgel's complexity is distinct from Dembski's complexity, but since I also think Dembski is generally pretty consistent about how he defines complexity (at least in his works written on a level I can comprehend) I don't think it's a significant issue. Learned Hand
Learned Hand says, But now it looks like you're asking whether there's some way to tell whether something is "complex" without relying on probability. No, I don't think there is--I think Dembski only really thinks of this in probabilistic terms. I say, I've been thinking about this for quite a while now. I think a strong argument against Darwinism can be made based on Kolmogorov complexity. It's not exactly Dembski's argument, but it rhymes, and Dembski was a large part of the inspiration. If it was not so far off topic and did not require so much background I'd love to knock it around here. I'm biding my time till the right thread comes up. Stay tuned. Regardless, I'd say complexity is complexity; as far as information in CSI goes, it's the specification that is important. Peace fifthmonarchyman
LH, blanket dismissal in the teeth of well warranted facts, with just a dash of ad hominem. KF kairosfocus
Recall that this began when Barry alleged that Orgel and Dembski were using the terms in "exactly" the same way. (I'm pretty sure the emphasis was original.) KF's long, rambling arguments make it embarrassingly obvious that the concepts aren't "exactly" the same or "almost indistinguishable" as Barry now claims. How could they be, when they produce very different results for examples such as Keith's? But those same long, rambling arguments make it appear as if there must be an answer; how could someone write so much without actually addressing Keith's simple point? (Dear reader--if there's an answer somewhere in KF's lectures, I have not found it.) It seems to me to be, in the local vernacular, a literature bluff. The very length and roundabout nature of KF's attempts to salvage this increasingly desperate face-saving campaign provide an inadvertent rebuttal to Barry's bluff. Concepts that are "exactly" the same or "almost indistinguishable" don't take so much sweaty effort to connect. Dembski is relying on probability. Orgel is relying on the length of the instruction set. As Keith's examples and mine show, there's a real difference between the two--they produce different answers in different situations. The two concepts can certainly be connected--Dembski is making a good-faith effort to do that--but so can temperature and pressure. They're different concepts nevertheless. But that won't be acknowledged, or addressed simply. Guys, the harm of being childish and demanding apologies from those who doubted you is that when it turns out you've made a mistake, it'll be almost impossible to muster up the character to back down. How can you, when you've demanded that people grovel for the sin of doubting you? Instead we'll get more bluster and rambling efforts to shore up the tenuous connection. Learned Hand
F/N: Just as a 101 note, when for instance we set up a pendulum, and explore the period of oscillations we assess a particular aspect of the entity, we ignore its colour, etc. And in assessing period, we will see further aspects of the phenomenon, some that are mechanically necessary and some that display a noise pattern due to various chance factors. Of course, the string, the bob, the suspension, the timing clock etc are all designed, but that is not relevant to the particular aspects of interest. In short, addressing entities and phenomena on aspects is a routine approach in the real world of doing science, something we would do in early experiments in science in school. Just, to draw out a bit the selective hyperskepticism, strawman tactic, zero concessions tactics and fallacy of the closed mind unfortunately shown above by KS. KF kairosfocus
Could both Kairosfocus and KeithS please state for the record whether they think a cylindrical crystal of pure silicon has high or low specified complexity and high or low Kolmogorov complexity? Also, I think a glossary of technical terms might be useful. DDD DesignDetectiveDave
KS, you are simply repeating an error that was specifically corrected this morning. A sadly familiar pattern for you. KF kairosfocus
KF, If we can identify something that is high in Dembski's specified complexity but low in Kolmogorov complexity, then we have shown that the two concepts are distinct. A cylindrical crystal of pure silicon fits the bill. High specified complexity, low Kolmogorov complexity. You and Barry got it wrong, and no amount of tap dancing on your part will change that. keith s
Your #4, 5thMM 'Is the intelligence needed to create and uphold physical laws chopped liver in your opinion (nightlight)?' Got it in one, it seems, 5MM. No mention of condiments, however. Axel
F/N: Onlookers, pardon my citing in extenso from my note to draw out the links between probability, information and entropy, materials that have been linked from every comment I have ever made at UD: ______________ >>The second major step is to refine our thoughts, through discussing the communication theory definition of information and its approach to measuring it. A good place to begin this is with British Communication theory expert F. R. Connor, who gives us an excellent "definition by discussion" of what information is:
From a human point of view the word 'communication' conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines. This naturally leads to the definition of the word 'information', and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word could implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content. This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold. 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in termns of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] 
Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis. >> ______________ For record. KF kairosfocus
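For readers who want to see the H = - SUM [pi log pi] expression from the excerpt above in action, here is a minimal Python sketch that estimates average information per symbol from observed symbol frequencies (the sample strings are arbitrary illustrations):

```python
from collections import Counter
from math import log2

def average_info_per_symbol(message: str) -> float:
    """Shannon's H = -sum(p_i * log2 p_i), in bits per symbol, with the
    probabilities p_i estimated from observed symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(average_info_per_symbol("abababababababab"))          # 1.0 bit/symbol: two equiprobable symbols
print(average_info_per_symbol("the queen sees the queen"))  # about 2.75 bits/symbol, below log2(8)
```

The second example shows the point made above about English text: because letter frequencies are uneven, the average information per symbol falls below the maximum the alphabet would allow.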
KS, all you have managed to do is convince me that no evidence whatsoever will ever convince you of evident truth. It is patent that Orgel's identification of organisation as a second contrast to randomness was pivotal, and that his use of specified complexity in connexion with molecular-level functional biological forms that are information bearing sets out the concept of functionally specific complex organisation and associated information. This is the pivotal form of CSI. Orgel went on to indicate that the informational content of such FSCO/I can be quantified in the first instance by description length. Which is of course what structured y/n q's in a string will do, in bits, as say AutoCAD files or the like do, reducing structures to node-arc patterns. I draw to your attention, again, Wiki's introduction which is a useful summary -- and which you have obviously not taken on board seriously:
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2] For example, consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
The direct parallel to Orgel: ("Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. One can see intuitively that many instructions are needed to specify a complex structure."), and to Dembski ("T is detachable from E, and T measures at least 500 bits of information . . ." and "In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently [--> thus, described or specified] of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . .") as I cited this morning already is patent. Save, to those devoted to selective hyperskepticism and determined to make zero concessions to anyone associated with the design view regardless of cost in want of fairness or objectivity. As to your attempted example, the first aspect -- notice the stress I have made for years on aspect by aspect examination, the only reasonable basis for properly understanding the application of the design inference process -- to observe about a block of pure Si is that it is a crystal. Its structure as such is the repetition of a unit cell, which is a case of mechanical necessity in action. The second aspect, extreme purity suitable for use in a fab to make ICs etc, is indeed something that is functionally specific and complex, also highly contingent, as locus by locus in the body of the crystal there are many possible arrangements. So in the ultra-astronomical config space applicable, we are indeed in a zone T, with cases E1, . . . En. And, lo and behold, you have acknowledged that the explanation for that FSCO/I is design, probably by highly complex zone melt refining techniques or the like. Where, in nature starting from stellar furnaces, it is overwhelmingly likely that when Si forms as atoms and is able to condense into solid materials, it will be closely associated with impurities, due to the high incidence of chance and the high reactivities involved. Chance does not credibly explain FSCO/I but credibly explains the sort of stochastic contingencies that are common and easily empirically observed. That is, you again failed to reckon with the design inference process aspect by aspect, and failed to see that you in fact provided probably another trillion or so by now cases in point of FSCO/I being caused by design in our observation. In short, as has happened with many dozens of other attempted counter-examples to the consistent pattern of FSCO/I being caused by design, it turns out to be an example of what it was meant to overturn. Please, think again. KF PS: It is quite evident also that you refuse to attend to the direct link between information and probability as captured in I = - log p; I have linked again to my 101. kairosfocus
KF,
Onlookers, note, that BA has been vindicated in the face of some loaded dismissive comments.
No, he hasn't. You and Barry are still confusing Kolmogorov complexity with Dembski's specified complexity. I already gave fifthmonarchyman an example of something with high specified complexity but low Kolmogorov complexity:
Consider a cylindrical crystal of pure silicon, of the kind used to make integrated circuits. It has a regular structure and thus low Kolmogorov complexity. Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski’s equation attributes high specified complexity to it. Low Kolmogorov complexity, high specified complexity. “Specified improbability” would have been a better, more accurate name for what Dembski calls “specified complexity”. This is obvious given the presence of the P(T|H) term — a probability — in Dembski’s equation. He confused Barry, KF, and a lot of other people by using the word “complexity” instead of “improbability”.
keith s
LH, Let's compare Orgel: >> It is possible to make a more fundamental distinction between living and nonliving things by examining their molecular structure and molecular behavior. In brief, living organisms are distinguished by their specified complexity.*· Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity. These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. One can see intuitively that many instructions are needed to specify a complex structure. On the other hand, a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition, need hardly be specified at all . . . . When we come to consider the most important functions of living matter, we again find that they are most easily differentiated from inorganic processes at the molecular level. Cell division, as seen under the microscope, does not appear very different from a number of processes that are known to occur in colloidal solutions. However, at the molecular level the differences are unmistakable: cell division is preceded by the replication of the cellular DNA. It is this genetic copying process that distinguishes most clearly between the molecular behavior of living organisms and that of nonliving systems. In biological processes the number of information-rich polymers is increased during growth; when colloidal droplets “divide” they just break up into smaller droplets.>> Notice, use of term specified complexity, association with functionality dependent on arrangement of parts, further association with functional specificity, in the case of D/RNA, ALGORITHMIC functional specificity, per the action of ribosomes in making proteins. Dembski, defining CSI in his key work, NFL, pp 148 and 144 giving priority to direct informational measures: >>p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . [Manfred] Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . 
since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . " >> The priority of functionally specific complex organisation and associated information in Dembski is thus patent. And, this is the work that principally set out his argument and laid its basic framework. Where he defines, as well. He also here bridges to onward thought by laying out a configuration space, which we can symbolise as W, standing in for Omega. Within W we have E in a zone of similar cases of presumably FSCO/I, T. The issue now looks at a blind, highly contingent search for zones T and stipulates that 500 bits, as a measure of complexity or search challenge, is required before considering the case relevant. That threshold sets up a situation where blind search is maximally implausible as a mechanism. Best appreciated in Darwin's warm pond or the like pre-life environment. Mechanical necessity under closely similar initial conditions produces closely similar outcomes, hence laws of mechanical necessity such as Newton's cluster of laws of motion and Gravitation, the paradigm cases. High contingency rules out such as a plausible explanation for an aspect of a phenomenon or process. Empirically, that leaves blind chance and intelligently directed configuration aka design on the table. Of these the default is chance. But when we see outcomes E from a zone T in a deeply isolated island of function such that chance is of negligible plausibility [i.e. FSCO/I], design is the best explanation. On trillions of observed cases, that inference is empirically reliable. The truth is, it is only controversial in respect of origin of life based on cells or of complex body plans, because a speculative theory backed up by a priori materialist ideology rules the roost. Number of cases where, for life or for other cases of FSCO/I, it has been observed to originate by blind chance and mechanical necessity: NIL. Number of cases by design: trillions. So, by the vera causa principle on explaining the remote unobservable past on forces seen to be adequate causes in the present, the proper warranted best explanation is design. Wallace, not Darwin, at minimum. But, ideology dominates, as Lewontin so aptly if inadvertently documented:
the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [NYRB, 1997. If you imagine this is quote mined, kindly cf here for wider context and remarks.]
KF PS: In case you are labouring under issues on probability vs information measures and observability, I again note on the already linked 101, and point here on the bridge from the 2005 discussion to FSCO/I, which is separately quite readily seen in live cases. If you doubt me, go to a tackle shop and ask to look at some reels and their exploded view diagrams, which are readily reducible to node arc pattern descriptions per AutoCAD etc, in bits. That is chains of structured Y/N q's. And bits from this angle are directly connected to bits from the neg log prob angle. As Shannon noted in his paper. Think about how plausible it would be to expect to form a reel by shaking up a bag of its parts. Imagine a cell-sized reel with parts capable of diffusion in a vat of 1 cu m . . . 10^18 1-micron cells . . . and ponder on possible arrangements of parts vs functional ones, then think about what diffusional forces would likely do by comparison with a shaken bag of parts or a reel mechanic. Fishing reels are a LOT simpler than watches. Cells are a LOT more complex than watches. And the von Neumann kinematic self replication facility using codes and algorithms with huge volumes of info, is part of what has to be explained at OOL. Design sits to the table as a serious candidate for the tree of life right from the root. KF kairosfocus
Onlookers, note that BA has been vindicated in the face of some loaded dismissive comments. The continued lack of willingness to acknowledge even a first basic fact is revealing on the zero concessions policy of too many design objectors. In reply to much of the above, I say, read the OP. I have already linked a graphical illustration, in the literature for a decade, on the relationships between complexity, compressibility and functionality; I find no sign of serious engagement of Trevors & Abel above (NB: there is a trove of serious discussions by these authors that ties directly to Orgel's point). And, transformation by log reduction to the more easily observed information value -- as the chain of y/n q's gives a good first info metric (as is used in common file sizes on a routine basis) -- is a reasonable step. I = log (1/p) = - log p is a longstanding basic result, cf. my 101 in my always linked note. KF PS: Just to underscore, let me cite T & A in the paper already linked:
"Complexity," even "sequence complexity," is an inadequate term to describe the phenomenon of genetic "recipe." Innumerable phenomena in nature are self-ordered or complex without being instructive (e.g., crystals, complex lipids, certain polysaccharides). Other complex structures are the product of digital recipe (e.g., antibodies, signal recognition particles, transport proteins, hormones). Recipe specifies algorithmic function. Recipes are like programming instructions. They are strings of prescribed decision-node configurable switch-settings. If executed properly, they become like bug-free computer programs running in quality operating systems on fully operational hardware. The cell appears to be making its own choices. Ultimately, everything the cell does is programmed by its hardware, operating system, and software. Its responses to environmental stimuli seem free. But they are merely pre-programmed degrees of operational freedom. The digital world has heightened our realization that virtually all information, including descriptions of four-dimensional reality, can be reduced to a linear digital sequence [--> think, 3-d animated engineering drawings or the like] . . .
Those familiar with both Orgel and Wicken on one hand and the onward path T & A followed will find much here. In particular, the significance of functionally specific complex organisation dependent on interaction of parts to achieve function and associated information (FSCO/I) is obvious. As, is the ability to infer an information metric in a context where function is an observable constraint on acceptable bit-chain equivalent descriptions. PPS: Caption to TA, fig 4 already linked: >> Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents "what works best." The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale.[--> island of func] Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function. >> PPPS: Busy with local issues, will get back on FSCO/I later. kairosfocus
Because the sentence you provided is a throwaway line. Sorry, we're not communicating. I don't know why you think it's a "throwaway line;" it's a short, clear, simple statement of his approach. If you don't think it's accurate, then you probably need to read the book or some of his articles for yourself. If I understand correctly Dembski was not saying that the only way to compute specified complexity was by estimating probability. He was saying that low probability was not enough to show that something has specified complexity. Low probability is not enough to determine specified complexity. It's just one necessary component of the analysis. The "complexity" part of specified complexity is a measure of probability, so you have to at least estimate that probability to determine whether the subject is "complex." I'm not aware of any other way to estimate "complexity" as Dembski uses the term, but I haven't read everything he's written. Determining whether the subject is specified is a separate step. You need both steps to tell whether it has "specified complexity." I think I misunderstood your initial question; I thought you were saying something about the Orgel-Dembski connection. But now it looks like you're asking whether there's some way to tell whether something is "complex" without relying on probability. No, I don't think there is--I think Dembski only really thinks of this in probabilistic terms. But like I said, I'm not an expert on his thinking or qualified to follow his equations. Learned Hand
Fifthmonarchyman, From a November comment of mine:
Once he realized his error, Barry deleted the thread to hide the evidence. That’s funny enough, but here’s another good one: Dembski himself stresses the distinction between Kolmogorov complexity and improbability:
But given nothing more than ordinary probability theory, Kolmogorov could at most say that each of these events had the same small probability of occurring, namely 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring. Since probabilities alone could not discriminate E sub R from E sub N, Kolmogorov looked elsewhere. Where he looked was computational complexity theory. The Design Inference, p. 169
I look forward to Barry’s explanation of how Dembski is an idiot, and how we should all trust Barry instead when he tells us that Kolmogorov complexity and improbability are the same thing.
keith s
Learned Hand asks, Why is Dembski's own language not the answer to your question? I say, Because the sentence you provided is a throwaway line. If I understand correctly Dembski was not saying that the only way to compute specified complexity was by estimating probability. He was saying that low probability was not enough to show that something has specified complexity. Suppose I said that "The energy in 'specified energy' is a measure of electricity." It's possible, even probable, that measures of magnetism would work just as well. To demonstrate that Dembski means to rule out all other measures of complexity would, I think, require more than a single sentence without context. Especially when a scholarly case has been made that all measures of complexity are related and possibly synonymous. I hope that makes sense. peace fifthmonarchyman
E.Seigner, you should learn to read. Mung
you say, Probability is specifically relevant to complexity, I say, Depends on the tool we are using to measure complexity check it out Dembski says that probability is the measure he uses to measure complexity. And because specified complexity is a special case of complexity, unless there's some special exception, probability is part of the SC determination. I don't really understand what your question is anymore. Why is Dembski's own language not the answer to your question? Learned Hand
Learned Hand said, I think you're trying to read "specified complexity" as a single thing. Which is fair enough usually, except here we're talking about the components of it. I say, I'm not talking about individual components per se; I'm talking about the unified concept of specified complexity and how it might be measured. You say, Probability is specifically relevant to complexity, I say, Depends on the tool we are using to measure complexity. Check it out: http://web.mit.edu/esd.83/www/notebook/Complexity.PDF You say, setting specification completely aside for the moment I say, I just don't think we can set it aside peace fifthmonarchyman
Mung
Orgel clearly intended to associate his concept of specified complexity with the concept of information, something the opponents have repeatedly denied (having never read the source material until it was shoved in their face).
Having read the source material (both Orgel and Dembski), I say that Orgel clearly associates his concept with life while Dembski associates it with P(T|H) and information. How these things are in some people's minds all the same is anybody's guess. Orgel: The crystals fail to qualify as living because they lack complexity, the mixtures of polymers fail to qualify because they lack specificity. E.Seigner
If I understand Kolmogorov complexity it is not about the effort it takes to describe something; it is about the effort it takes to compute it. Am I missing something? I don't know. I'm the wrong person to ask about the details of Kolmogorov anything. I'm just comparing and contrasting how Dembski described complexity--a measure of probability--with how Orgel did it--a measure of the length of the instruction set--and observing that these are not the same thing. As a test of that conclusion, I observe that some things, like a perfect sphere of water ice on the surface of the moon, would be complex by Dembski's standards but not Orgel's. BA and KF think that Orgel and Dembski are so obviously talking about the same concept that the grovelling apologies should begin immediately. But I don't see how they reconcile the obvious inconsistency. Learned Hand
I read NFL and if I recall correctly it’s specification that is the core of his argument. Do you have anything besides a throw away sentence about one half of the term in question? Sorry, I don’t think I understand your confusion. It’s not a “throw away sentence,” he’s explicitly defining complexity as a measurement of improbability. You asked where Dembski “argues that improbable things are complex by their nature.” Since complexity is a measurement of improbability, improbable things are going to be complex by their nature. (I guess the exception would be some additional standard that would exempt some improbable things from being complex. I’m not aware of any such exception he’s ever identified; my understanding is that if something makes an otherwise improbable thing more likely to occur, such as evolution, it would remove the complexity.) I read NFL and if I recall correctly it’s specification that is the core of his argument. Based on this, and your numbers example, I think you’re trying to read “specified complexity” as a single thing. Which is fair enough usually, except here we’re talking about the components of it. Probability is specifically relevant to complexity, setting specification completely aside for the moment. If Dembski means that complexity is a measure of probability, and Orgel means it’s a measure of the length of the instruction set, then they’re talking about two different definitions of “complexity.” If they have two different definitions of “complexity,” they have two different definitions of “specified complexity.” And for these purposes, again, specification is set completely to the side—it doesn’t matter if their definitions of “specification” are verbatim the same, because “specified complexity” relies on complexity as much as specification. Learned Hand
Learned Hand said, impossibly improbable and descriptively simple, such as a royal flush drawn ten times consecutively from a fair deck of cards. I say, If I understand Kolmogorov complexity it is not about the effort it takes to describe something; it is about the effort it takes to compute it. Am I missing something? peace fifthmonarchyman
Learned Hand quoting Dembski says, "The 'complexity' in 'specified complexity' is a measure of improbability." I say, I read NFL and if I recall correctly it's specification that is the core of his argument. Do you have anything besides a throw away sentence about one half of the term in question? I'm not trying to be difficult here. I just want to understand the difference between the two measurements, if any. For example, a 20-digit string of random numbers has Kolmogorov complexity but not specified complexity. But suppose I came across the following string: 31415926535897932384. I would say that the string has specified complexity despite the fact that any single digit of the string is not especially improbable. Now look at this string: 31514926535897922384. It has the same probability as the first one when each digit is viewed independently, but again no specified complexity. You might have guessed from my recent ramblings around here that I think integration of information is where the cool stuff is. That is why I think Kolmogorov has more promise as a tool. Regardless, it's the specification that makes Dembski's concept a valuable contribution to the discussion, not his particular chosen ruler. IMO, but I am open to correction. peace fifthmonarchyman
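One way to see the point about that first 20-digit string: those are the leading digits of pi, and a very short, fixed program generates as many of them as you like, so the string's descriptive (Kolmogorov-style) complexity is low even though it "looks" random digit by digit. A minimal Python sketch using Machin's formula with integer arithmetic (the function name and guard-digit choice are just illustrative):

```python
def pi_digits(n: int) -> str:
    """First n decimal digits of pi (no decimal point), via Machin's formula
    pi = 16*arctan(1/5) - 4*arctan(1/239), computed in scaled integers."""
    scale = 10 ** (n + 10)                   # 10 guard digits
    def scaled_arctan_inv(x: int) -> int:    # arctan(1/x) * scale
        total = term = scale // x
        k, sign = 3, -1
        while term:
            term = scale // (x ** k)
            total += sign * term // k
            sign, k = -sign, k + 2
        return total
    pi_scaled = 16 * scaled_arctan_inv(5) - 4 * scaled_arctan_inv(239)
    return str(pi_scaled)[:n]

print(pi_digits(20))                             # 31415926535897932384
print(pi_digits(20) == "31415926535897932384")   # True: the string above, from a tiny program
```

The shuffled string, by contrast, has no comparably short generator that anyone knows of, which is the asymmetry the comment is pointing at.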
5MM, Can you point me to a place where Dembski argues that improbable things are complex by their nature? Instead of the other way around? In No Free Lunch, he writes, "The 'complexity' in 'specified complexity' is a measure of improbability." Note that this is not the same as a measure of how long the instruction set is, as something can be both impossibly improbable and descriptively simple, such as a royal flush drawn ten times consecutively from a fair deck of cards. Orgel and Dembski were discussing different things. Learned Hand
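To put numbers on that royal-flush illustration, here is a minimal Python sketch; the arithmetic is standard, and the bit figure is just -log2 of the probability:

```python
from math import comb, log2

p_royal_flush = 4 / comb(52, 5)        # 4 suits out of C(52,5) possible hands, ~1.54e-6
p_ten_straight = p_royal_flush ** 10   # ~7.5e-59 for ten consecutive deals

print(p_royal_flush, p_ten_straight)
print(-log2(p_ten_straight))           # roughly 193 bits of improbability
# Yet "a royal flush, ten times in a row" remains a very short description,
# which is the contrast being drawn between the two senses of "complexity".
```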
Me earlier: I've recently been doing a lot of thinking about the complexity of Pi (both kinds) and how it relates to ID Me now: That is some real unintentional comedy. ;-) I meant both kinds of complexity, not both kinds of Pi. fifthmonarchyman
keith s says, Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski's equation attributes high specified complexity to it. I say, Interesting. I always assumed that Dembski was arguing that specified complexity was improbable, not that improbability was "specifically" complex. For example, because of its simplicity, if we were to find a cylindrical crystal of pure silicon on Mars I would not infer design. I would assume some unknown natural law until we could rule that out. Can you point me to a place where Dembski argues that improbable things are "specifically" complex by their nature? Instead of the other way around? Thank you in advance. peace fifthmonarchyman
fifthmonarchyman,
You can measure distance by laser or by tape measure but the property you are measuring is still distance.
Sure, but in that case you are measuring the same underlying quantity using different methods. Kolmogorov complexity and Dembski's specified complexity are not the same quantity. Different quantities, different terms, different measurements.
You would have a point if you could demonstrate that an object with lots of specified complexity could be produced with an algorithm of minimal size. Can you do that?
Sure. Consider a cylindrical crystal of pure silicon, of the kind used to make integrated circuits. It has a regular structure and thus low Kolmogorov complexity. Yet it is extremely unlikely to be produced by unintelligent natural processes, so Dembski's equation attributes high specified complexity to it. Low Kolmogorov complexity, high specified complexity. "Specified improbability" would have been a better, more accurate name for what Dembski calls "specified complexity". This is obvious given the presence of the P(T|H) term -- a probability -- in Dembski's equation. He confused Barry, KF, and a lot of other people by using the word "complexity" instead of "improbability". keith s
Hey Petrushka, Was your comment at 23 addressing my question? If it was, please elaborate. I'm not sure I follow. Are you saying that a circle is an algorithm to compute the digits of Pi? I would probably characterize a circle as a lossless data compression/specification of the digits of Pi and not an algorithm. Do you think this idea is incorrect? If so, why? I've recently been doing a lot of thinking about the complexity of Pi (both kinds) and how it relates to ID. I really want to make sure I'm not heading down the wrong path. So any insight would be appreciated. Thanks in advance fifthmonarchyman
Pi. The digits of pi. A circle. Petrushka
Keith S says, Kolmogorov complexity is not the same as Dembski's specified complexity. I say, These methods of measuring complexity are analogous and deeply related, much like different methods of measuring length are analogous and deeply related. You can measure distance by laser or by tape measure, but the property you are measuring is still distance. You would have a point if you could demonstrate that an object with lots of specified complexity could be produced with an algorithm of minimal size. Can you do that? peace fifthmonarchyman
KF, Tap dance all you like, but the fact remains: Kolmogorov complexity is not the same as Dembski's specified complexity. You and Barry got it wrong. keith s
KF,
KS, On being wise in one’s own eyes...
Barry is the butt of your joke. Being "wise in his own eyes", he posted a mocking OP. It backfired badly on him, so he dishonestly attempted to erase the evidence -- but he got caught. Will you be scolding Barry for his dishonesty? Or is honesty something you demand only of "Darwinists", and not of yourself or your fellow IDers? keith s
PPS: I remind KS of the note in 8 above to LH:
. . . it’s coming on four years it was pointed out that WmAD extracted an information metric. FYI, it is a commonplace in science and mathematical modelling to transform from one form to another more amenable to empirical investigation. In this context a log-probability has been known to be an effective info metric since the 1920's to 40's. And, the Orgel remarks, when they go on to address metrics of info on description length, give such a metric. Reduce a description to a structured string of Y/N q’s to specify state and you have a first level info metric in bits, e.g. 7 bits per ASCII character. Where, the implication for relevant cases such as protein codes, is that the history of life has allowed exploration of the effective space of variability for relevant key proteins, so an exploration on the H-metric of avg info per element in a message (the same thing entropy measures using SUM pi log pi do . . . ) gives a good analytical approach, cf Durston et al.
Refusal to acknowledge the force of a relevant response is not a healthy sign on the Isaiah 5:20-21 front. KF kairosfocus
PS: A note on K-Complexity. Let's start with a useful Wiki clip:
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computational resources needed to specify the object . . . . For example, consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language . . . It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
As the second example illustrates, a genuinely random string will resist compression, so that the best way to capture it is to cite it and say as much. This relates to the random tars or minerals in granites that Orgel discussed. A strictly orderly pattern such as ab, repeated n times, will be much more compressible. This directly relates to the order of crystals as discussed, or simple repetitive polymers.

A complex, functionally specific organised pattern such as an Abu 6500 C3 fishing reel, or a protein that must fold stably and predictably, fit key-lock style into a particular location and then carry out a role dependent on its structure and proper location, can be described in a similar way [especially as a structured string of y/n q's] and will resist compression, but not as much as a strictly random entity. Where, the existence of AutoCAD etc shows practically that a 3-d functional entity may be reduced descriptively to a nodes-arcs pattern and then described further as a structured string. The resulting string can be taken as cashing out the practical information content of such a structure. This, Orgel highlighted as a key characteristic of life. It is highly likely that in so writing, Orgel was aware of the issue of descriptive complexity as developed by Kolmogorov, Chaitin et al.

So, yes, K-complexity can in fact be used as an index of randomness. As Trevors and Abel did in their Fig 4 on three types of sequence complexity, OSC, RSC and FSC, cf here in my always linked and the onward linked 2005 paper. It will be seen that they describe a trade-off between algorithmic compressibility and complexity, with a third axis on which sharp peakedness indicates an index of functionality in a co-ordinated organised process. This diagram is in fact an illustration of the island of function effect strongly associated with FSCO/I. So, Orgel is applicable, and complexity/compressibility is indeed an index of randomness as opposed to order or organisation. KF kairosfocus
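To put a number on the compressibility contrast in the Wiki clip, one can use a general-purpose compressor as a crude upper-bound proxy for Kolmogorov complexity (true K-complexity is uncomputable, so this is only an illustration). A minimal sketch:

```python
import random
import string
import zlib

def compressed_len(s: str) -> int:
    """Length of the zlib-compressed string: a rough upper bound on its descriptive complexity."""
    return len(zlib.compress(s.encode(), 9))

ordered = "ab" * 500                        # crystal-like order, 1000 characters
random_chars = "".join(
    random.choice(string.ascii_lowercase + string.digits) for _ in range(1000)
)                                           # tar-like randomness, 1000 characters

print("ordered:", len(ordered), "->", compressed_len(ordered), "bytes")       # compresses drastically
print("random :", len(random_chars), "->", compressed_len(random_chars), "bytes")  # compresses far less
```

Functionally specific text (an English paragraph, say, or a symbol-string description of a reel or a protein) would typically land in between, resisting compression more than the orderly string but less than the random one. Note also that compression only gives an upper bound: a string can look incompressible to zlib and still have a short generating program, as the Pi example earlier in the thread shows.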
KS, On being wise in one's own eyes (especially in a context where BA has shown that those who falsely accused him of distorting the meaning of Orgel's remarks were spectacularly wrong . . . ), I suggest you and others would be well advised to reflect upon a bit of ancient wisdom that long anticipated anything of substance in Dunning and Kruger:
Is 5:18 Woe to those who draw iniquity with cords of falsehood, who draw sin as with cart ropes . . . 20 Woe to those who call evil good and good evil, who put darkness for light and light for darkness, who put bitter for sweet and sweet for bitter! 21 Woe to those who are wise in their own eyes, and shrewd in their own sight! [ESV]
KF kairosfocus
NL, this is not an appropriate context for debates on side issues; it is a place where there is an opportunity to face the resolution of a case of accusation and/or insinuation of quote-mining. The OP clearly establishes that what Orgel and Dembski discussed are substantially the same phenomenon. To resolve your patent misunderstandings of the design inference, its context, and what complex specified information (and more particularly functionally specific complex organisation and associated information, especially digital information such as we find in D/RNA) is about, you are directed to the weak argument correctives under the UD blog page resources tab, at the top of this and every UD page. KF PS: It is my intention to address the nature and significance of FSCO/I in light of the Orgel citation and other material points, soon. kairosfocus
LH, when an accusation or insinuation of quote mining is made (especially when this is unfortunately a standard rhetorical strategy used by objectors to design thought in response to embarrassing citations of key testimonies against interest) and it is corrected on record, that should be acknowledged, as a basic step of civility. KF kairosfocus
Barry, I'm surprised you didn't learn your lesson the last time around. Don't you remember what happened? You got it completely wrong, and you got caught trying to erase the evidence before the rest of us could see it. Here's a reminder from that thread:
I just discovered something even funnier: Jeffrey Shallit himself — the very authority that Barry appeals to — confirms that Barry got it completely wrong: Barry Arrington: A Walking Dunning-Kruger Effect:
The wonderful thing about lawyer and CPA Barry Arrington taking over the ID creationist blog, Uncommon Descent, is that he’s so completely clueless about nearly everything. He truly is the gift that keeps on giving. For example, here Barry claims, “Kolmogorov complexity is a measure of randomness (i.e., probability). Don’t believe me? Just ask your buddy Jeffrey Shallit (see here)“. Barry doesn’t have even a glimmer about why he’s completely wrong. In contrast to Shannon, Kolmogorov complexity is a completely probability-free theory of information. That is, in fact, its virtue: it assigns a measure of complexity that is independent of a probability distribution. It makes no sense at all to say Kolmogorov is a “measure of randomness (i.e., probability)”. You can define a certain probability measure based on Kolmogorov complexity, but that’s another matter entirely. But that’s Barry’s M. O.: spout nonsense, never admit he’s wrong, claim victory, and ban dissenters. I’m guessing he’ll apply the same strategy here. If there’s any better example of how a religion-addled mind works, I don’t know one.
Excellent work, Barry. You’ve shown all of us that: 1. You have strong opinions about things you know nothing about. 2. You’ve attempted to mock someone who understands this stuff far better than you do. 3. The very authority you appealed to confirms that you got it completely wrong, as do Robb and I and Dembski himself, through his book. 4. You tried to erase the evidence by deleting the entire thread. You look pretty ridiculous right now. Is there anything else you’d like to do to embarrass yourself in front of your audience?
keith s
Learned Hand:
Did Dembski not tie specified complexity to the calculation of P(T|H)? Seems like he does in Specification: The Pattern That Signifies Intelligence. Orgel doesn’t seem to use it at all; rather than the probability of something arising through a non-design hypothesis (or probabilities at all), he’s looking for the length of the instruction set.
You're right, LH. Barry is repeating his earlier mistake. He doesn't understand the difference between improbability and Kolmogorov complexity. keith s
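A quick numerical illustration of the distinction being pressed here, with zlib standing in (roughly) for descriptive complexity and a uniform-alphabet chance hypothesis standing in (very loosely) for Dembski's H:

```python
import zlib
from math import log2

s = "ab" * 500                                   # 1000 characters of crystal-like order

k_proxy = len(zlib.compress(s.encode(), 9))      # crude upper bound on Kolmogorov complexity
improbability_bits = len(s) * log2(26)           # -log2 P(s) under a uniform 26-letter chance hypothesis,
                                                 # computed as len*log2(26) to avoid floating-point underflow

print("compressed size :", k_proxy, "bytes")                    # small: the string is highly ordered
print("-log2 P(chance) :", round(improbability_bits), "bits")   # about 4700 bits: that exact string is wildly improbable
```

The two numbers measure different things: the first tracks how short a description of the string can be, the second how unlikely that exact string is under a particular chance hypothesis. A silicon boule scores low on the first and high on the second, which is why the choice of term matters.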
#6 kairosfocus
what independent - detachable - specification is there regarding the impression of a rock in mud, by contrast with a shod footprint with a shoe-maker's logo?
You are attempting to smuggle in the mentioned "minimum length of interaction chain" augmentation of CSI -> CSI(L), only in a vague, informal language suitable for a postmodernist literary essay or casual chitchat on some forum.

Namely, there is no scientifically objective "independent" or "detached" attribute of natural events that are within the common light cone, since there is always an interaction chain, long and convoluted as it may be, that connects them. Hence, scientifically, all events within the sphere of the visible universe are mutually connected via chains of interactions with each other, hence they are not objectively "independent" or "detached". And of course, within a vastly smaller sphere, such as Earth, or the Solar system, everything is connected with everything else via countless interaction chains in myriad different ways, i.e. nothing is objectively "independent" of anything else. Among others, the shapes of the rocks on Mount Rushmore are certainly connected via many unbroken chains of physical interactions with the faces they mimic, just longer than the interaction chain connecting the shape of the fallen rock with its mud image in that counterexample.

You are simply trying to sell here as an objective scientific criterion the subjective term "independent", which is simply a vague restatement of the threshold length of interaction chain beyond which one calls, simply by definition, events "independent" or "detached". It's no different than labeling things or creatures pretty or ugly, likeable or unlikable,... When attaching labels to things, anything goes, whatever your heart desires. That kind of "objective" criteria will certainly get you as far as poetry, gender or race or social studies,... but they fall far short of any objective natural science.

When you 'intelligently design' something, there is always an unbroken interaction chain between the activities of your neurons and whatever product your hands intelligently produced, just like the unbroken chain, except perhaps longer, between the rock and mud in that high CSI counterexample. There is no "independence" or "detachment" anywhere in either process, other than as a wishful subjective figure of speech (like pretty and ugly,...). When you reshape a chunk of clay to your own liking, there is no objective scientific criterion that can distinguish that "intelligent action" from a rock reshaping a mud bank to its own liking.

The most you can do is what I suggested as "augmentation" from CSI -> CSI(L), i.e. introduce a verbal convention based on the length (or 'intricacy' defined via some labeling convention) of interaction chains L, so that one chooses to label connections with interaction chains below the threshold L as say 'xyz', and those with interaction chains above L as 'zyx' or whatever else you wish to call it, 'intelligent' vs 'dumb', 'pretty' vs 'ugly'... they are your wishful labels, you call the shots. Of course, as explained in the previous post, such verbal games "augmenting" CSI -> CSI(L) are of no more scientific value in objectively distinguishing 'intelligently designed' vs 'product of contingency and chance' than they are in distinguishing 'pretty' from 'ugly'. They're empty labeling conventions, not scientific discoveries of some fundamental laws or patterns of nature. nightlight
"I’m not a scientist or an expert. I could easily be totally wrong about how I’m reading Dembski, Orgel, or both." Yes, if you think they are talking about different things, that is proof enough that you have no idea what you are talking about. Perhaps you should read up on it a bit before you comment. BTW, the snark at the end is especially silly when you're wrong. Just sayin' Barry Arrington
Let's all do the best we can to envision an exercise in missing the point. Ready? Go! Orgel clearly intended to associate his concept of specified complexity with the concept of information, something the opponents have repeatedly denied (having never read the source material until it was shoved in their face). Mung
BA has taken time to prove a significant false accusation... I'm not sure that it's an "accusation" as opposed to a "disagreement," but in any event his point is only proven if those two things are actually fundamentally the same. The text doesn't support that, since they're talking about the concept in fundamentally different ways and Dembski is employing P(T|H) in a way totally foreign to Orgel's thinking. I'm not a scientist or an expert. I could easily be totally wrong about how I'm reading Dembski, Orgel, or both. But "Neener neener neener, read this blockquote and apologize!" is not a persuasive argument, much less "proof." Learned Hand
LH, there is a time for favourite debate points and tactics, and there is a time to admit false accusation. BA has taken time to prove a significant false accusation; those who accused him need to own up and do the right thing. KF PS: On your side-track, it's coming on four years it was pointed out that WmAD extracted an information metric. FYI, it is a commonplace in science and mathematical modelling to transform from one form to another more amenable to empirical investigation. In this context a log-probability has been known to be an effective info metric since the 1920's to 40's. And, the Orgel remarks, when they go on to address metrics of info on description length, give such a metric. Reduce a description to a structured string of Y/N q's to specify state and you have a first level info metric in bits, e.g. 7 bits per ASCII character. Where, the implication for relevant cases such as protein codes, is that the history of life has allowed exploration of the effective space of variability for relevant key proteins, so an exploration on the H-metric of avg info per element in a message (the same thing entropy measures using SUM pi log pi do . . . ) gives a good analytical approach, cf Durston et al. kairosfocus
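For the record, the "log-probability as info metric" step is just the standard Shannon surprisal, and the 7-bits-per-ASCII-character figure follows from assuming 128 equiprobable symbols (nothing here reproduces Durston et al.'s specific metric):

```latex
I(x) = -\log_2 P(x), \qquad
P(x) = \tfrac{1}{128} \;\Rightarrow\; I(x) = \log_2 128 = 7 \text{ bits},
\qquad H = -\sum_i p_i \log_2 p_i .
```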
Did Dembski not tie specified complexity to the calculation of P(T|H)? Seems like he does in Specification: The Pattern That Signifies Intelligence. Orgel doesn't seem to use it at all; rather than the probability of something arising through a non-design hypothesis (or probabilities at all), he's looking for the length of the instruction set. Maybe it's self-evident how those two things are "almost indistinguishable." It doesn't seem to be evident from the text. Learned Hand
NL, what independent -- detachable -- specification is there regarding the impression of a rock in mud, by contrast with a shod footprint with a shoe-maker's logo? Any rock impression will do. As for the reflection of a mountain in the surface of a lake, this is strictly mechanical necessity and a causal process linked to that, per the rule that the angle of incidence equals the angle of reflection; it is a mechanical phenomenon or event, not an informational specification of high complexity. Any object bearing the relevant angular relationship will be reflected, and there is nothing that is specific about the image. A photo of the mountain and its reflection will exploit the same optics but will also involve high functional specificity and complexity to produce an accurate record, even with a pinhole box camera and film, fixed and developed. KF PS: 5th, to get to a cosmos with terrestrial planets with rocks, mud, mountains and reflective lakes requires a lot of fine tuning. PPS: It is side debates like this which lead me to stress that LO spoke in the context of functionally specific complex organisation and associated information [FSCO/I . . . as did Wicken in 1979], and to underscore the significance of the design inference explanatory process across mechanical necessity, blind chance and design per aspect of an object or phenomenon. kairosfocus
Just for context: Leslie Orgel (1927–2007) News
Nightlight says, No intelligence seems evident in either process, other than the intelligence needed to create and uphold physical laws. I say, Is the intelligence needed to create and uphold physical laws chopped liver in your opinion? It's possible that the origin of life and its evolution were the result of some fantastically amazing frontloading natural law. Does that mean that intelligence was not involved in the process in your opinion? Peace fifthmonarchyman
K @ 1: Indeed. Apologies and retractions should be pouring in, but I will not be holding my breath. Being a Darwinist seems to mean always being right, even when you are not. Barry Arrington
While indeed Dembski & Orgel have essentially the same definition of specified complexity (including illustrations), the fundamental problem of that definition for the purpose used is a simple counter-example of a non-live, non-intelligent instance of high specified complexity: a larger rock falls on a mud bank, leaving a detailed imprint of its surface in the mud, creating instantly hundreds of megabytes, possibly into gigabytes, of specified complexity in the mud image. Or an even simpler example with far greater CSI -- the whole mountain of rocks above a lake producing a detailed image of their appearance reflected from the glassy surface of the lake, yielding arbitrary quantities of CSI. No intelligence seems evident in either process, other than the intelligence needed to create and uphold physical laws.

The Orgel-Dembski concept of specified complexity fails to distinguish that case from the DNA of an organism analogously imprinting physical, chemical and biological properties of the environment (so that the organism is well harmonized with or adapted to the environment, as we can observe with life). There is an obvious difference between the cases of rock+mud and DNA+environment in the length of the interaction chain -- while the rock & mud interaction chain producing high CSI is very short (in time and space), the chain of interactions linking DNA and the environment is very long. The Orgel-Dembski CSI is completely blind, dumb and mute about the length of the interaction chain.

But even if one were to augment their CSI to some enhanced version, call it CSI(L), so that it defines some specific threshold/minimum length L (in space-time) of interaction chain before it declares a high CSI(L) an indicator of intelligent design behind it, it still doesn't help very much or for long. Namely, the short length of the interaction chain in the rock-mud case is a mere artifact of the particular example with a short time span (particularly short for the rocks & lake example). One can easily conceive of far longer interaction chains, e.g. the rock bouncing/rolling out of the mud imprint and resin filling the imprint, eventually hardening into amber still carrying the detailed imprint of the rock surface. Then, after eons of time, the rock and amber could end up continents apart, with the interaction chain linking them becoming arbitrarily long (as long as any that links the DNA of an organism with its environment). Hence, neither CSI nor CSI(L) works for the purpose Dembski wishes to use them for. nightlight
BA, Uh huh, retractions of objections in 5, 4, 3, 2 . . . NOT. (Don't hold your breath.) KF kairosfocus
