
At Some Point, the Obvious Becomes Transparently Obvious (or, Recognizing the Forrest, With all its Barbs, Through the Trees)


At UD we have many brilliant ID apologists, and they continue to mount devastating assaults on the increasingly indefensible claims made for the creative powers of the Darwinian mechanism of random errors filtered by natural selection. In addition, they present overwhelming positive evidence that the only known source of functionally specified, highly integrated information-processing systems, with such sophisticated technology as error detection and repair, is intelligent design.

[Part 2 is here.]

This should be obvious to any unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline.

Here is my analysis: The Forrests of the world don’t want to admit that there is design in the universe and living systems — even when the evidence bludgeons them over the head from every corner of contemporary science, and when the trajectory of the evidence makes their thesis less and less believable every day.

Why would such a person hold on to a transparently obvious 19th-century pseudo-scientific fantasy, when all the evidence of modern science points in the opposite direction?

I can see the Forrest through the trees. Can you?

Comments
Dr. Liddle, response to comment #5: "Well, my position is that IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes." Darwinian processes are not known to be capable of generating functional biological systems that have a very high degree of functional specificity -- such as many protein-based systems (by functional specificity is meant a system where novel function is realized only by a very specific arrangement of particular amino acid residues in the protein-based system). However, intelligent designers are quite capable of designing proteins that require a high degree of specificity (e.g., Kuhlman et al., 2003). And so if such systems were found in nature, this would seem to be a signature of intelligent design -- something intelligent design can do but Darwinian processes cannot do. Such systems have been found, of course. References: Kuhlman B., et al. Design of a Novel Globular Protein Fold with Atomic-Level Accuracy. Science 302(5649): 1364-1368, 2003. LivingstoneMorford
Dr Liddle: Crick, letter to son Michael, Mar 19, 1953:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)." [NB: From about 1961 on, that code has been identified, and is now routinely used in scientific work.]
GEM of TKI kairosfocus
OK, if you mean DNA is "base 4 digital" in the same sense as the English alphabet is "base 26 digital" then, fine. But I don't find it a useful description. On the other hand, I think that the way DNA is "read" can very usefully be viewed as a digital system (on/off), indeed as a system of logic gates. But at least we agree that the thing is somewhat computer-like :) Haven't forgotten your other post. Have a half-composed response, not sure when I'll get it finished, maybe tonight, maybe not for a couple of days. Cheers Lizzie Elizabeth Liddle
Hello again Dr Liddle, Quite frankly, I think you may have a bunch of people standing around looking at each other shrugging their shoulders, as if to say "What the heck is she talking about?!?!" To the best of my limited knowledge, the base of a digital coding system simply expresses the number of unique characters within the system which could occupy any given digit. A base two system such as binary code has two unique characters. In computer systems it's either 1 or 0, but it could be any two symbols. A base four system such as quaternary code has four unique characters, such as 0, 1, 2, and 3 (or, in the case of the genetic code, A, T, C, and G). A base sixteen system such as hexadecimal code has sixteen unique characters, such as 0-9 plus A-F. A base sixty-four system has sixty-four unique characters, and so on. I also think you are conflating the output of a system with the coding format of a system. To put on a pot of tea and call it a base-two system because at one point it boils and at another it doesn't ... is, frankly, just way out there. - - - - - - - - - I await your responses. Cheers… Upright BiPed
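[For concreteness, a minimal sketch in Python of the point about bases: a string over a four-character alphabet can be read as a base-four numeral in exactly the same way a bit string is read as a base-two numeral. The digit assignment A=0, C=1, G=2, T=3 is an arbitrary illustration, not a claim about chemistry; any one-to-one mapping would serve.]

    # Minimal sketch: reading a string over a 4-symbol alphabet as a
    # base-4 numeral. The digit assignment below is arbitrary.
    DIGIT = {"A": 0, "C": 1, "G": 2, "T": 3}

    def dna_to_int(seq):
        """Read a DNA string left to right as a base-4 number."""
        value = 0
        for symbol in seq:
            value = value * 4 + DIGIT[symbol]
        return value

    print(dna_to_int("ATG"))  # 0*16 + 3*4 + 2 = 14
    print(int("1110", 2))     # 14 -- the same quantity in base 2
    print(int("E", 16))       # 14 -- and in base 16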
I'm currently engaged in writing a reasonably substantive response to UBP (in between RL distractions!) but I would just like to say: I am aware that a number of people, some of whom are on "my side", have described DNA as a "Base 4 digital code". I think that is an extremely poor description, and the fact that it is given by people who should IMO know better doesn't improve it as far as I am concerned! A "four letter alphabet" is a better description, though I am also leery of language metaphors; however, language is a marginally better metaphor for DNA than "digital base 4" is, and you would not (sensibly) describe the English alphabet as a "digital base 26" system. Like English letters, bases (in the chemical sense) are not switches, and to change the meaning of an English sentence you do not change individual letters (or very rarely). However, as I said, there is a fairly useful sense in which DNA can be regarded as a Base 2 system: at the level of the gene, we really do have a switched system, and a very fine one it is too. But it's binary - genes are switched off and on. Elizabeth Liddle
"I’m still not convinced that 'digital' is a good description of DNA information..."
Life, too, is digital information written in DNA. - Ridley, 2000, p.16
Mung
Some typical questions to be addressed in this book are: Are information and information-processing exclusive attributes of living systems, related to the very definition of life? How does information appear in Darwinian evolution? The main objective here is to show that information plays the defining role in life systems. ...information as such plays no active role in natural inanimate systems... – J.G. Roederer, Information and Its Role in Nature
Mung
Information is embedded in a particular pattern in space or time - it does not come in the form of energy, forces or fields, or anything material, although energy and/or matter are necessary to carry the information in question. Information always has a source or sender (where the original pattern is located or generated) and a recipient (where the intended change is supposed to occur). It must be transmitted from one to the other. And for the specific change to occur, a specific physical mechanism must exist and be activated. We usually call this action information processing. Information can be stored and reproduced, either in the form of the original pattern, or of some transformation of it. ...a fundamental property of information is that the mere shape or pattern of something - not its field, forces or energy - can trigger a specific change in a recipient, and do this consistently over and over again (of course, forces and energy are necessary in order to effect the change, but they are subservient to the purpose of the information in question). This has been called the pragmatic aspect of information. It is important to emphasize again that the pattern alone or the material it is made of is not the information per se, although we are often tempted to think that way. ...information must not only have a purpose on part of the sender, it must have a meaning for the recipient in order to elicit the desired change. – Juan G. Roederer, Information and Its Role in Nature Mung
So what is this powerful yet "ethereal" something that resides in CDs, books, sound waves, is acquired by our senses and controls our behavior, sits in the genome and directs the construction and performance of an organism? It is not the digital pits on the CD, the fonts in the books, the oscillations of air pressure, the configuration of synapses and distribution of neural activity in the brain, or the bases in the DNA molecule - they all express information, but they are not the information. Shuffle them around or change their order ever so slightly - and you may get nonsense, or destroy an intended function! On the other hand, information can take many forms and still mean the same - what counts in the end is what information does , not how it looks or sounds, how much it is, or what it is made of. Information has a purpose , and the purpose is, without exception, to cause some specific change somewhere, some time - a change that otherwise would not occur or would occur only by chance. Information may lay dormant for eons, but it is always intended to cause some specific change. ... In summary, information is a dynamic concept. - Juan G. Roederer, Information and Its Role in Nature Mung
I won’t be able to make a start on the coding for a week or two, but I’ll give you the odd progress report
Post any source code as you go? Mung
Dr Liddle,
The critical issue here is that a ribosome does not change its state as a result of reading the information. It acts more like an assembler: it does something with the information it reads. Charged tRNAs (providing the necessary protocols for decoding the symbols) are physically brought together with those symbols inside the ribosome, and that meeting results in the proper ordering of amino acids. The issue is not about chemicals reacting with one another to change states; it's about the processing of information in a chemical domain. The symbols are embedded in chemistry, the protocols are embedded in chemistry, the assembler is embedded in chemistry, but the output of the system is constrained by the prescriptive and informational sequence (not by a change in states). This dynamic would of course need to be reflected in your simulation.
Yes, that's fine, though I'd point out that this describes catalytic reactions, which are not particularly mysterious. But yes, that's fine. I will ensure that the output of my system at least leaves the original informational sequence unchanged (though of course, as in life, it may deform or re-form during the process of doing what it has to do - DNA is not inert as it is being "read".)
I think we may be talking past each other here. I was indicating that the ribosome does not change its state as you had suggested in your previous post. Instead, it assembles polypeptides based upon the informational sequence given to it by mRNA. It does so by bringing the mRNA sequence together with the charged tRNA, which has an amino acid bound on one end of the molecule and an anti-codon discretely positioned on the other (isolated from the amino acid). In response to this you return to say that you'll make sure the "original informational sequence" is left unchanged (except for any random variation that may impact it). (?!?!) What "original informational sequence" are you referring to? For there to be an informational sequence, it would require that arbitrary symbols and discrete protocols already be established – the rise of which is the very thing that you wish to demonstrate. I also cannot parse this sentence "DNA is not inert as it is being 'read'" in order to understand what you intended to convey. The sequence within DNA is always inert, and at no time during the transcription or translation of DNA does that sequence become anything other than inert. I am not sure where you are going with this, but it concerns me that it could have no relevance to the observed biology.
There have been numerous attempts to find a chemical basis for a particular codon being matched to a particular amino acid. All those attempts (meaning each and every one of them) have ultimately failed. The slight stereochemical affinities within the system indicate that the constituent parts are well suited to their job, but there is zero evidence that stereochemistry actually determined the full suite of associations (and even if it did, that would still not determine the sequence of codons). In the end, what can be said about the relationship of codon to AA is what can actually be observed from the physical evidence itself. The association of one codon to one amino acid is caused by the sequence of symbols in DNA which codes for the production of specialized tRNA to provide the protocol for translation. Those tRNA molecules hold the amino acid on one end of the molecule while displaying the anticodon on the other end. The amino acid and the codon do not interact. In other words, it's the arbitrary gap in the causal chain which is bridged only by the information in DNA.
I’m not sure I buy that. But let’s leave it for now. We seem to have established a fairly usable amount of common ground
Exactly what part don’t you buy? You didn’t actually say. It’s critically important if you wish to incorporate the observed dynamics into your simulation.
One of the inferences within the design argument is that the only demonstrated source of digitized information is an intelligent agent. In the case of the genome, it is base 4 digital, read linearly.
I think this argument is fallacious. For a start, even if "digital" were a good description of DNA, it wouldn't follow that because some known cases of digital information have been intelligently designed, all must be. But for a second, I'm still not convinced that "digital" is a good description of DNA information (for the same reasons I am uneasy about your previous point) and if it is, it isn't in "Base 4"! Or only in the most attenuated of senses. On the other hand I am quite happy to concede that in another sense DNA is "digital" but in Base 2 – the switching on and off of genes in response to nested contingencies really is rather like a digitized database, where the right "record" is selected according to a complicated "find" algorithm.
Referring to the symbol system in DNA as digital is not my argument – or better to say, it is my argument, but I am not the one that created it. Everyone from Richard Dawkins to Elmer Fudd has pointed to the digital nature of DNA sequencing, and I agree with them as a matter of my own observation. - - - - - - Just to be open, after reading your past couple of posts, I am concerned that what you intend to provide is not going to be a meaningful reflection of the dynamics at work. Perhaps you can describe at some sensible level of detail what you propose or expect to show, and how that incorporates the observations we've discussed. Upright BiPed
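[To make the tRNA-as-protocol claim above concrete, a minimal sketch in Python; the table is a small excerpt of the standard genetic code, and the function and variable names are illustrative only. The codon-to-residue mapping lives entirely in a lookup table standing in for the tRNA pool: swapping the table changes the output without touching the sequence, which is the point both parties keep circling.]

    # Minimal sketch: the "protocol" is a lookup table (standing in for
    # charged tRNAs); the codon and the amino acid never interact here.
    CODON_TABLE = {  # excerpt of the standard genetic code
        "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
    }

    def translate(mrna):
        """Read mRNA three symbols at a time, applying the protocol."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE[mrna[i:i + 3]]
            if residue == "STOP":
                break
            peptide.append(residue)
        return peptide

    print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']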
UBP: Just responding to the second part of your post:
Now some comments regarding your responses to my last post (points 1-9)
I am certainly happy to stipulate that information only exists when it is "recorded". And I'd like to suggest that "recording" must involve a) the storage of the information in some form that can be "read" by another object in such a way that that object can change its own state according to the "information" read. If you are happy with this (I don't think it's perfect, but it's not bad) then I'm with you. And in that context, then I would accept that DNA, for example, contains recorded information, as it can be "read" by another object (which, depending on the level of analysis, we can regard as the cell itself, or a specific ribosome) which then changes its own state (kinetically or morphologically) as a result. And if you want to call this "symbolic" then that is fine.
The critical issue here is that a ribosome does not change its state as a result of reading the information. It acts more like an assembler: it does something with the information it reads. Charged tRNAs (providing the necessary protocols for decoding the symbols) are physically brought together with those symbols inside the ribosome, and that meeting results in the proper ordering of amino acids. The issue is not about chemicals reacting with one another to change states; it's about the processing of information in a chemical domain. The symbols are embedded in chemistry, the protocols are embedded in chemistry, the assembler is embedded in chemistry, but the output of the system is constrained by the prescriptive and informational sequence (not by a change in states). This dynamic would of course need to be reflected in your simulation.
Yes, that's fine, though I'd point out that this describes catalytic reactions, which are not particularly mysterious. But yes, that's fine. I will ensure that the output of my system at least leaves the original informational sequence unchanged (though of course, as in life, it may deform or re-form during the process of doing what it has to do - DNA is not inert as it is being "read".)
Well, if we define information as recorded information, and if we define recorded information as symbolic, then this is necessarily true, indeed circular. So obviously not falsifiable. However, if there is wiggle room between recorded information and symbolic representation, then it is not circular, but then I need to know in what way you are distinguishing recorded information from symbolic representation.
Allow me to modify my statement slightly. Matter that has been arranged to contain information is arranged to contain a symbolic representation. This is a comment about the form of the arrangement; it is symbolic.
OK.
I would certainly agree that given one nucleotide, there are no chemical grounds for predicting the next. However, I would not agree (if it were what you were saying) that a given sequence (a codon, for instance) is chemically unrelated to the amino acid that it "codes" for. Is that what you are saying? Although I might agree that a different kind of cell (perhaps on another planet) might have a different kind of ribosome that resulted in a different amino acid from the one that would result from a given codon in an earthly cell. So if that is the sense in which the codon is arbitrarily assigned, then I guess I could get behind that, and concede that "symbol" is appropriate.
There have been numerous attempts to find a chemical basis for a particular codon being matched to a particular amino acid. All those attempts (meaning each and every one of them) have ultimately failed. The slight stereochemical affinities within the system indicate that the constituent parts are well suited to their job, but there is zero evidence that stereochemistry actually determined the full suite of associations (and even if it did, that would still not determine the sequence of codons). In the end, what can be said about the relationship of codon to AA is what can actually be observed from the physical evidence itself. The association of one codon to one amino acid is caused by the sequence of symbols in DNA which codes for the production of specialized tRNA to provide the protocol for translation. Those tRNA molecules hold the amino acid on one end of the molecule while displaying the anticodon on the other end. The amino acid and the codon do not interact. In other words, it's the arbitrary gap in the causal chain which is bridged only by the information in DNA.
I'm not sure I buy that. But let's leave it for now. We seem to have established a fairly usable amount of common ground :)
A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode). What distinction? Or what distinction that matters? (Also I’m uneasy about “digital” here, but maybe it’s OK.)
One of the inferences within the design argument is that the only demonstrated source of digitized information is an intelligent agent. In the case of the genome, it is base 4 digital, read linearly.
I think this argument is fallacious. For a start, even if "digital" were a good description of DNA, it wouldn't follow that because some known cases of digital information have been intelligently designed, all must be. But for a second, I'm still not convinced that "digital" is a good description of DNA information (for the same reasons I am uneasy about your previous point) and if it is, it isn't in "Base 4"! Or only in the most attenuated of senses. On the other hand I am quite happy to concede that in another sense DNA is "digital" but in Base 2 - the switching on and off of genes in response to nested contingencies really is rather like a digitized database, where the right "record" is selected according to a complicated "find" algorithm. But that's for another thread maybe :)
I had to rush, but I tried to cover some territory here, and hope I was successful. I look forward to your response.
And then I was slow to find your rushed response! Well, we got there, and yes, there's a lot of level playing field there that I think we can use. Thanks. I won't be able to make a start on the coding for a week or two, but I'll give you the odd progress report :) Lizzie Elizabeth Liddle
d'oh. Thanks! Elizabeth Liddle
Thanks. I’m not familiar with the abbreviation “IT”
IT is the word "it" in capital letters, not an abbreviation. :) Mung
Then, I await your results. Upright BiPed
@Upright BiPed #339 Thank you very much for your post, and the link to it, and apologies for the delay. I have now bookmarked this thread (which unfortunately is so long that it takes an unconscionable time to load!) and will try to check it regularly, until this conversation moves to a new thread.
Dr Liddle, To endure the amount of grief that ID proponents have to take, one would think that at the bottom of the theory there would at least be a big booming “tah-dah” and perhaps a crashing cymbal or two. But unfortunately that’s not the case; the theory doesn’t postulate anything acting outside the known laws of the universe.
Cool. Yes, I understand that.
I bring this up because you want to design a simulation intended to reflect reality to the very best of your ability, and in this simulated reality you want to show something can happen which ID theory says doesn’t happen. Knowing full well that reality can’t be truly simulated, it’s interesting that the closer you get to truly simulating reality, the more stubborn my argument becomes. Only by not simulating reality does your argument have even a chance of being true.
Heh. I recognise the sentiment. The devil is always in the details :) But we shall see.
Yet, if ID says that everything in the material universe acts within the laws of the universe, then what is it exactly to be demonstrated within this simulation? In other words, what is the IT? Of course, since this is set up to be a falsification, the IT is for prescriptive information exchange to spontaneously arise from chance and necessity. But that result may be subject to interpretation, and so consequently you want to know exactly what must form in order for me to concede that your falsification is valid.
Thanks. I'm not familiar with the abbreviation "IT" unfortunately, but I think I get your drift. I hope so. I would certainly agree that the Study Hypothesis (H1 in my language) is "for prescriptive information exchange to spontaneously arise from chance and necessity". And so to falsify the null (that prescriptive information exchange cannot spontaneously arise from chance and necessity), yes, I want to know the answer to that question. Good!
I intend to try and fully answer that question in this post. - – - – - – - – - – - – - I’m sure you are aware of the Rosetta stone, the ancient stone with the same text written in three separate ancient scripts. Generally, it gave us the ability to decode the meaning of the ancient hieroglyphs by leading us to the discrete protocols behind the recorded symbols. This dovetails precisely with the conversations we’ve had thus far regarding symbols, in that there is a necessary mapping between the symbol and what it is to be symbolized. And in fact, it is the prime characteristic of recorded information that it does indeed always confer that such a mapping exists – by virtue of those protocols it becomes about something, and is therefore recorded information as opposed to noise.
Trying to parse: the prime characteristic of recorded information is that it confers (establishes? requires?) a [necessary] mapping between symbol and what is symbolised. So what about these "protocols"? What I'm thinking is that in living things, the big genetic question is: by what means does the genotype impact the phenotype? And the answer is something like: a protocol. I like that. But let me read on....
In retrospect, when I stated that recorded information requires symbols in order to exist, it would have been more correct to say that recorded information requires both symbols and the discrete protocols that actualize them. Without symbols, recorded information cannot exist, and without protocols it cannot be transferred. Yet, we know in the cell that information both exists and is transferred.
Yes. And I like that you refer to "the cell" and not simply "the DNA".
This goes to the very heart of the claim that ID makes regarding the necessity of a living agent in the causal chain leading to the origin of biological information.
Let me be clear here: by "living agent", are you referring to the postulated Intelligent Designer[s]? Or am I misunderstanding you?
ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world). Their very presence reflects a break in the causal chain, where on one side is pure physicality (chance contingency + physical law) and on the other side is formalism (choice contingency + physical law). Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law.
Cool. I like that.
And therefore, to be an actual falsification of ID, your simulation would be required to demonstrate that indeed symbols and their discrete protocols came into physical existence by nothing more than chance and physical law.
Right. :)
The question immediately becomes “how would we know?” How is the presence of symbols and their discrete protocols observed in order to be able to demonstrate they exist? For this, I suggest we can use life itself as a model, since that is the subject on the table. We could also easily consider any number of human inventions where information (symbols and protocols) are used in an “autonomous” (non-conscious) system.
OK.
For instance, in a computer (where information is processed) we physically instantiate into the system the protocols that are to be used in decoding the symbols. The same can be said of any number of similar systems. Within these systems (highlighting the very nature of information) we can change the protocols and symbols and the information can (and will) continue to flow. Within the cell, the discrete protocols for decoding the symbols in DNA are physically instantiated in the tRNA and its coworkers. (This of course makes complete sense in a self-replicating system, and leads us to the observed paradox where you need to decode the information in DNA in order to build the system capable of decoding the information in DNA.)
Nicely put. And my intention is to show that it is not a paradox - that a beginning consisting of an unfeasibly improbable assemblage of molecules, brought together by no more than Chance (stochastic processes) and Necessity (physical and chemical properties), can bootstrap itself into a cycle of coding:building:coding:building: etc.
Given this is the way in which we find symbols and protocols physically instantiated in living systems (allowing for the exchange of information), it would be reasonable to expect to see these same dynamics at work in your simulation.
Yes, I agree. Cool!
I hope that helps you “get to the heart of what [I] think evolutionary processes can’t do”.
Yes, I think so. That is enormously helpful and just what I was looking for. This will take me some time (weeks, anyway, and I have a few RL things to do as well!) but I may pop back if I have any questions in the interim. Thanks! I'll address your responses to my other posts later (again, it may take me a while as there is a lot to chew on, and I have the weekend-from-hell on the horizon). Don't think, if I pop in and comment elsewhere, that I have forgotten the rest of your post ;) Cheers Lizzie Elizabeth Liddle
Sorry, Upright BiPed, and thanks for the link. I am reading your long post now. Elizabeth Liddle
Well, my position is that IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes.
But the signature of Darwinian evolutionary processes is that they can do anything, no matter how improbable. Mung
And if Dr Liddle you do not intend on responding to #339 can we expect you to retract this statement: "Well, my position is that IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes." Upright BiPed
Dr Liddle, do you intend on responding to post #339....or not? Upright BiPed
In everyday language we say we have received information, when we know something now that we did not know before. If we are exceptionally honest, or a philosopher, we assert only that we now believe something to be the case which we did not previously believe to be the case. Information makes a difference to what we believe to be the case. It is always information about something. Its effect is to change, in one way or another, the total of 'all that is the case' for us. This rather obvious statement is the key to the definition of information. – Donald M. MacKay, Information, Mechanism and Meaning
Mung
We shall find it profitable to ask: 'To what does information make a difference? What are its effects?' This will lead us to an 'operational' definition covering all senses of the term, which we can then examine in detail for measurable properties. - Donald M. MacKay, Information, Mechanism and Meaning
Until his death in February, Donald M. MacKay (1922-1987) was professor emeritus of neuroscience at Keele University in Staffordshire, England. He founded the Department of Communication and Neuroscience at Keele in 1960, which has become a research institute of international standing. Mung
ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world).
Addresses numerous questions, including: Is information reducible to the laws of physics and chemistry? Does the Universe, in its evolution, constantly generate new information? Or are information and information-processing exclusive attributes of living systems, related to the very definition of life? Information and Its Role in Nature
Mung
Mung, Oh most definitely. ;) Upright BiPed
Sorry for the delay, the group that pays me to analyze data needed some data analyzed.
Would it be safe to say that the data they wanted analyzed is about something? Mung
Dr Liddle, To endure the amount of grief that ID proponents have to take, one would think that at the bottom of the theory there would at least be a big booming "tah-dah" and perhaps a crashing cymbal or two. But unfortunately that's not the case; the theory doesn't postulate anything acting outside the known laws of the universe. I bring this up because you want to design a simulation intended to reflect reality to the very best of your ability, and in this simulated reality you want to show something can happen which ID theory says doesn't happen. Knowing full well that reality can't be truly simulated, it's interesting that the closer you get to truly simulating reality, the more stubborn my argument becomes. Only by not simulating reality does your argument have even a chance of being true. Yet, if ID says that everything in the material universe acts within the laws of the universe, then what is it exactly to be demonstrated within this simulation? In other words, what is the IT? Of course, since this is set up to be a falsification, the IT is for prescriptive information exchange to spontaneously arise from chance and necessity. But that result may be subject to interpretation, and so consequently you want to know exactly what must form in order for me to concede that your falsification is valid. I intend to try and fully answer that question in this post. - - - - - - - - - - - - - I'm sure you are aware of the Rosetta stone, the ancient stone with the same text written in three separate ancient scripts. Generally, it gave us the ability to decode the meaning of the ancient hieroglyphs by leading us to the discrete protocols behind the recorded symbols. This dovetails precisely with the conversations we've had thus far regarding symbols, in that there is a necessary mapping between the symbol and what it is to be symbolized. And in fact, it is the prime characteristic of recorded information that it does indeed always confer that such a mapping exists – by virtue of those protocols it becomes about something, and is therefore recorded information as opposed to noise. In retrospect, when I stated that recorded information requires symbols in order to exist, it would have been more correct to say that recorded information requires both symbols and the discrete protocols that actualize them. Without symbols, recorded information cannot exist, and without protocols it cannot be transferred. Yet, we know in the cell that information both exists and is transferred. This goes to the very heart of the claim that ID makes regarding the necessity of a living agent in the causal chain leading to the origin of biological information. ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world). Their very presence reflects a break in the causal chain, where on one side is pure physicality (chance contingency + physical law) and on the other side is formalism (choice contingency + physical law). Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law. And therefore, to be an actual falsification of ID, your simulation would be required to demonstrate that indeed symbols and their discrete protocols came into physical existence by nothing more than chance and physical law.
The question immediately becomes "how would we know?" How is the presence of symbols and their discrete protocols observed in order to be able to demonstrate they exist? For this, I suggest we can use life itself as a model, since that is the subject on the table. We could also easily consider any number of human inventions where information (symbols and protocols) are used in an "autonomous" (non-conscious) system. For instance, in a computer (where information is processed) we physically instantiate into the system the protocols that are to be used in decoding the symbols. The same can be said of any number of similar systems. Within these systems (highlighting the very nature of information) we can change the protocols and symbols and the information can (and will) continue to flow. Within the cell, the discrete protocols for decoding the symbols in DNA are physically instantiated in the tRNA and its coworkers. (This of course makes complete sense in a self-replicating system, and leads us to the observed paradox where you need to decode the information in DNA in order to build the system capable of decoding the information in DNA.) Given this is the way in which we find symbols and protocols physically instantiated in living systems (allowing for the exchange of information), it would be reasonable to expect to see these same dynamics at work in your simulation. I hope that helps you "get to the heart of what [I] think evolutionary processes can't do". - - - - - - - - - - - - - In a previous post, you offered the following operational definition as a starting point.
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation.
Perhaps that definition can be adjusted slightly to accommodate the preceding, such as: Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation, but where the association of the two can be established by means of a protocol instantiated in the receiver of the information. Personally I think the definition is a little unwieldy, but perhaps it's workable as a definition when the sender could indeed be an inanimate process instead of a living thing. Yet, even if the preceding definition were acceptable, it doesn't capture some of the issues previously discussed. For instance it says nothing about the fact that the rise of information you are attempting to simulate is digital in form. - - - - - - - - - - - - - Now some comments regarding your responses to my last post (points 1-9)
2. The state of an object does not contain information; it is no more than the state of an object. To become recorded information, it requires a mechanism in order to bring that recording into existence outside of the object itself. As I said earlier, a carbon atom has a state which a physicist can demonstrate, but a librarian can demonstrate the information exists as well. They both must be accounted for.
OK. This is important, so I'm going to try to be as articulate as I can: I am certainly happy to stipulate that information only exists when it is "recorded". And I'd like to suggest that "recording" must involve a) the storage of the information in some form that can be "read" by another object in such a way that that object can change its own state according to the "information" read. If you are happy with this (I don't think it's perfect, but it's not bad) then I'm with you. And in that context, then I would accept that DNA, for example, contains recorded information, as it can be "read" by another object (which, depending on the level of analysis, we can regard as the cell itself, or a specific ribosome) which then changes its own state (kinetically or morphologically) as a result. And if you want to call this "symbolic" then that is fine.
The critical issue here is that a ribosome does not change its state as a result of reading the information. It acts more like an assembler: it does something with the information it reads. Charged tRNAs (providing the necessary protocols for decoding the symbols) are physically brought together with those symbols inside the ribosome, and that meeting results in the proper ordering of amino acids. The issue is not about chemicals reacting with one another to change states; it's about the processing of information in a chemical domain. The symbols are embedded in chemistry, the protocols are embedded in chemistry, the assembler is embedded in chemistry, but the output of the system is constrained by the prescriptive and informational sequence (not by a change in states). This dynamic would of course need to be reflected in your simulation.
4. Matter that has been arranged in order to contain information doesn’t exist without symbolic representations. Prove it otherwise.
Well, if we define information as recorded information, and if we define recorded information as symbolic, then this is necessarily true, indeed circular. So obviously not falsifiable. However, if there is wiggle room between recorded information and symbolic representation, then it is not circular, but then I need to know in what way you are distinguishing recorded information from symbolic representation.
Allow me to modify my statement slightly. Matter that has been arranged to contain information is arranged to contain a symbolic representation. This is a comment about the form of the arrangement; it is symbolic.
[Point #5] I would certainly agree that given one nucleotide, there are no chemical grounds for predicting the next. However, I would not agree (if it were what you were saying) that a given sequence (a codon, for instance) is chemically unrelated to the amino acid that it "codes" for. Is that what you are saying? Although I might agree that a different kind of cell (perhaps on another planet) might have a different kind of ribosome that resulted in a different amino acid from the one that would result from a given codon in an earthly cell. So if that is the sense in which the codon is arbitrarily assigned, then I guess I could get behind that, and concede that "symbol" is appropriate.
There have been numerous attempts to find a chemical basis for a particular codon being matched to a particular amino acid. All those attempts (meaning each and every one of them) have ultimately failed. The slight stereochemical affinities within the system indicate that the constituent parts are well suited to their job, but there is zero evidence that stereochemistry actually determined the full suite of associations (and even if it did, that would still not determine the sequence of codons). In the end, what can be said about the relationship of codon to AA is what can actually be observed from the physical evidence itself. The association of one codon to one amino acid is caused by the sequence of symbols in DNA which codes for the production of specialized tRNA to provide the protocol for translation. Those tRNA molecules hold the amino acid on one end of the molecule while displaying the anticodon on the other end. The amino acid and the codon do not interact. In other words, it's the arbitrary gap in the causal chain which is bridged only by the information in DNA.
7. A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode).
What distinction? Or what distinction that matters? (Also I’m uneasy about “digital” here, but maybe it’s OK.)
One of the inferences within the design argument is that the only demonstrated source of digitized information is an intelligent agent. In the case of the genome, it is base 4 digital, read linearly. - - - - - - - - - I had to rush, but I tried to cover some territory here, and hope I was successful. I look forward to your response. Upright BiPed
Heh. No problem. I have a fair bit of data chunking through some analysis programs myself right now :) Lizzie Elizabeth Liddle
Elizabeth Liddle, Sorry for the delay, the group that pays me to analyze data needed some data analyzed. :) I will return shortly... Upright BiPed
F/N: Just to make it clear that the Chi metric is not THE definition of CSI, let us look at the X-metric for functionally specific information (similar to what is in the UD weak argument correctives): X = C * S * B, in functionally specific bits, where C is a dummy variable that is 1/0 according as one is beyond the 1,000-bit threshold, S is a similar dummy variable on functional specificity, and B is the file size, or the information metric, in bits. In effect, if you see a Word file that takes up 197 k bits, and it is being looked at from the perspective of functional specificity, the file size, recognised as functionally specific, will be reported: 197 k bits. By contrast, a random bit string of similar size will not be functionally specific and will be reported as a 0. A functionally specific file of size 90, 490 or 900 bits will similarly go to zero, on the grounds that the size is insufficient to be confident that chance-based processes could not have given rise to it. [Again, high contingency is seen as the converse of the consistency of behaviour that leads us to infer to lawlike natural necessity.] The approach is similar, but the way the result is reported is different. GEM of TKI kairosfocus
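[A minimal sketch of this X-metric in Python, presented as an illustration of the stated definition only: the bit count and the functional-specificity judgment are supplied by the observer, which is how the description above treats them.]

    # X = C * S * B: C and S are 1/0 dummy variables, B is size in bits.
    def x_metric(bits, functionally_specific):
        C = 1 if bits >= 1000 else 0           # beyond the 1,000-bit threshold?
        S = 1 if functionally_specific else 0  # judged functionally specific?
        return C * S * bits

    print(x_metric(197_000, True))   # 197000 -- the Word-file example above
    print(x_metric(197_000, False))  # 0 -- random string of similar size
    print(x_metric(900, True))       # 0 -- functionally specific but too short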
F/N: On operational definitions and definitionitis as a road to infinite regress and/or question-begging. One of the advantages of taking a comparative difficulties, worldviews approach to issues, is that it allows one to see the balance of different approaches on the merits, in light of factual adequacy, coherence and explanatory power [simply elegant, not simplistic or ad hoc]. So, we can easily see that one of the fundamental limits we face is that we cannot warrant everything relative to something else. If we question A, we have to move to B, then the question is why B. Thence, C, D, . . . We have but three options: (a) infinite regress -- absurd; (b) circularity -- self-defeating [and this includes systems that claim to hang together like a raft floating on a sea -- coherence is not all]; (c) grounding our systems of thought in first plausibles that are properly basic. Indeed, we are left with the often un-acknowledged importance of self-evident foundational claims. That is, things which are seen as not only so, but which, on our experience of the world and understanding of it, MUST be so, on pain of patent absurdity. In short, "rigour" has its limits, and the demand for "proof" and "definition" have even sharper limits. Not all things are capable of proof, not even mathematics -- post Godel -- is capable of proof beyond possibility of contradiction, and chains of definitions must end in accepted primitive concepts not amenable to the demand for further definition, other than in the end by pointing to examples of reality and implicitly calling on our capacity to understand; i.e. ostensive definition on key examples and family resemblance thereto. As a classic example, LIFE [the major underlying feature of our world that is at the heart of the ID controversy] is not amenable to any other definition than by pointing out key examples and calling for sufficient family resemblance. So, to then act as though other things will not end there [e.g. by dismissing an ostensive definition as an appeal to analogy or the like], is to be absurdly selectively hyperskeptical, and if one does this knowingly, one is hypocritical. Going on, we must face a critical weakness of operational definitions: they are tainted by their ancestral roots in self-refuting logical positivism, which in effect brazenly asserted that the only meaningful claims were those that could be subjected to operational tests and/or were true by being analytic statements [in effect first tautologies as in Mathematics]. The fatal flaw: this claim is self-referential and itself is -- surprise -- not subject to operational test and is plainly not analytic. It is self-refuting, as it declares itself meaningless by implication. But, as long as that self referentiality and incoherence are not spotted, it often seems invulnerable and a very handy club to knock over what one is inclined to reject. (Notice the importance of a balance of the three main worldview tests and that of self-examination of a system by bringing it too to the table of comparative difficulties.) So, we know that (while such definitions can be useful in our toolbox of concept clarifications) not everything can be operationally defined, and that many things can only be accessed in light of concepts formed on open-minded examination of key examples. In addition, mathematical models and metrics are actually quite flexible, as was discussed at length in a previous thread. 
The same posts showed as well that "rigour" in mathematics is subject to sharp limits in practice, so we must beware of self-refuting selective hyperskepticism in dismissal arguments. So, yes, there should be no double standard of an easy pass for a favoured view, but that also means that there should be no imposition of selective hyperskepticism, demanding: "extraordinary claims require extraordinary [adequate, reasonable, accessible] evidence." We must note, too, that the hyperskeptic, so long as he is pointing the finger elsewhere and presenting demands for "proof" etc., can appear invincible, but as soon as s/he must step up to the same table of comparative difficulties as the rest of us, his or her unexamined assumptions and/or self-referential incoherences soon enough raise serious questions. On our experience, we know what design is. Even to post a comment challenging the nature of design is to give an example of design -- the poster designs the comment. Similarly, we can easily enough know what complex specified information is, as the same post is an example. In Wicken's classic description in the context of living systems:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]
Earlier, Orgel had written:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]
Similarly, when one composes a post, one provides a string of alphanumeric characters that must be arranged according to rules of meaningful function, and which are essentially arbitrary symbols expressing a code. So, let us have done with the pretence that CSI is not real for want of sufficiently rigorous definitions, metrics etc. We know it, we cannot but exemplify it, and the same sort of examples that we provide in making posts that pretend to object to it are to be found in the heart of the living cell's digitally coded instructions: digital, symbolic code that follows rules of meaning. But, can we so define this that we can measure it in ways that are amenable to empirical testing? Yes. And again, this has long been adequately done over and over and over again, never mind oceans of wasted ink and rivers of equally wasted digital bandwidth on all sorts of objections that ever so often fail by exemplifying what they object to: the reality of complex specified information, and especially of functionally specific complex information. Dembski's key contribution was to identify that if we see an informational event E that comes from a separately definable and specific set T, where T is itself in a config space, let's say W, that is sufficiently large that the set T is maximally unlikely to be chanced upon by a random walk rewarded by trial-and-error success, on accessible resources, then the best explanation of E from T in W is design. What this boils down to is that if we can reduce E to a string -- or if E is already a string of symbols -- that can in turn be evaluated as a set of structured yes/no questions of sufficient length, such a random-walk-based search will be maximally likely to fail, as the scope of search relative to the scope of W is so small that the search, let us call that R, rounds down to R/W --> 0. But, a random walk implies a flat random distribution, and we have no right to such an assumed distribution! Not quite: it can be shown that on average, searches -- as opposed to searches designed based on knowledge of the domain being searched -- will do no better than the sort of random walk just described, providing we have a topology that does not give us clues as to the location of the target zones T unless we are already there. In short, the features of clever searches that exploit clues that point to targets, such as nice trends, depend on information fed into the search-and-improve modules of the algorithms [and the search for a relevant and effective algorithm is a higher-order, more complex search than the direct RW-based search]. Besides, the very point of something that can store symbolic information is that it must be highly contingent, so to a first approximation it will fit close enough to this sort of approach that the model is instructive. In this context, the reason why E is probably about 1/8 of the letters in this post and other similar texts in English is because an intelligence is following the conventional rules of English, not because of any inherent bias in the physical system. [As of now my keyboard's E looks more like an I than an E, no prizes for guessing why.] So, we may quantify. Simplifying Dembski's discussion, we may reduce a certain metric of such CSI as follows:
1: Where I = -log2 P, per standard definition and methods,
2: Or where, as the case of Shannon showed, I may be directly estimated from the physical storage characteristics and/or the statistical patterns of symbol usage,
3: And where S is a dummy variable of value 1/0 according as we have a warranted conclusion that an event E comes from a specific zone T in the config space of possibilities W, e.g. on seeing the effect of modest perturbation, or on observing the specificity of the rules for a code, or by noting the specifications and tolerances for function, etc. [i.e. this is a definition by key case and family resemblance],
4: We see Chi_1000 = I*S - 1,000, in bits beyond the observed-cosmos threshold,
5: Where Chi_1000 goes positive, we are warranted to infer on best current explanation that an object or event E with that value is designed.
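[As a minimal sketch of the arithmetic just listed, in Python: the probability P and the specification judgment S are supplied as assumptions here, not computed from anything.]

    import math

    # Chi_1000 = I*S - 1000, with I = -log2(P) and S a 1/0 dummy variable.
    def chi_1000(p_event, specified):
        I = -math.log2(p_event)    # information in bits, I = -log2 P
        S = 1 if specified else 0  # warranted as coming from a zone T?
        return I * S - 1000

    # 143 ASCII characters at 7 bits each give 1001 bits of capacity:
    print(chi_1000(2.0 ** -1001, True))   # 1.0 -- positive, past threshold
    print(chi_1000(2.0 ** -1001, False))  # -1000.0 -- unspecified event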
It is asserted that this is a well-tested metric, and that it is empirically quite reliable. For instance, it is claimed that all cases of at least 143 ASCII characters of meaningful text on the Internet that we know the source of will prove to be the product of intelligences. Indeed, it is asserted that we routinely infer from, say, posts in this thread to intelligent posters, on intuitive grounds closely connected to this quantification. The real problem is, this suggests that certain phenomena, i.e. living cells, are the ultimate product of design. That is, that language, intelligently designed code, algorithms, functional organisation and the like are antecedent to the cell-based life we observe. But many claim to "know" that this is wrong, or that the claimed powers of chance variations and natural selection, and extensions from that to origin-of-life contexts, are sufficient to explain what we see without such an inference. On what grounds?
a: Demonstration by example? (Genetic algorithms are usually trotted out as a claimed example. But such are plainly intelligently designed and start inside zones T, where there are so-called fitness functions that provide trends pointing to peaks of performance. These are therefore actually examples of the truth of the claim they are used to object to.)
b: Intelligence was not possible at the time and place in question? (On what grounds, apart from question-begging assertions?)
c: We must not allow a Divine Foot in the door of the hallowed halls of science? (This is blatant question-begging materialistic prejudice and censorship that undermines the integrity of science.)
d: Maybe we will see an explanation in the future that will solve the problem of how chance and necessity can account for CSI without appeal to design? (This is an appeal to blind faith in an IOU that has been in circulation for 150 years and has so far not been redeemed. Backing it up by the sort of intimidatory tactics and snide dismissals that are unfortunately all too common does not help matters.)
e: Something else will come up? (This is an appeal to blind faith.)
So, now, let us look seriously at the matter, with fresh eyes. GEM of TKI kairosfocus
Dr Liddle: Okay, let's see if we can find a good common point for discussion. GEM of TKI kairosfocus
kairosfocus: Thanks for your responses. I agree we are going in somewhat useless circles right now, so I will step back for a little from the issue of Shannon Information. I do think that part of the trouble (and it is no fault on your part) is lack of shared vocabulary, or, at least, lack of shared referents for our respective vocabularies. I will re-read your posts, and try to understand what you are saying. Getting to common ground is, as we agreed, difficult. Still, where there is a will there is a way :) With best wishes Lizzie Elizabeth Liddle
Thanks, Upright BiPed! I need to slow my pace anyway (got my Java homework to do!) Will check back later. Cheers Lizzie. Elizabeth Liddle
PS: I should underscore -- the genetic code is a known digital coded meaningful system, feeding into an automated assembly process. Biology, then, is not the magical exception to the patterns of such systems, unless those who claim it is can show that. In short, the turnabout dismissal fails. kairosfocus
Dr Bot: I have pointed out the general characteristics of coded systems that have to carry meaning. The use of rules or constraints of meaning and/or function locks out vast swaths of the nominally possible configurations as non-functional or meaningless. This is a general property of organised, structured, functional systems, and we already know that for proteins -- a main expression of genetic codes -- getting the AA sequence wrong can easily destabilise folding or function. We even know that some correctly sequenced proteins need chaperoning to make sure they fold right, and that cells have corrective mechanisms to deal with mischained, misfolded or non-folded proteins. All of these mutually support the existence of identifiable islands of functional organisation, which are narrowly specified. Indeed, let us observe how cells assemble proteins step by step, acid by acid, and often chaperone the resulting AA chain to see to it that there is a resulting correct protein. It is those who wish to suggest that the expected, and in material part observed, island of function pattern does not hold who have a duty of warrant here. GEM of TKI kairosfocus
Good morning (local time) Elizabeth, I have read, and very much appreciated, your response at 304. I hope to be able to respond to it later (late) today, or first thing in the morning. As for the operational definition you seek, to my mind the attempts thus far have not captured the center of what is to be demonstrated. However, I think there may be enough on the table now to change that. cheers... Upright BiPed
Show how, by single letter changes, functional all the way, you can get from a Hello World to an operating system, and so on.
Why is this relevant? You are basically saying: "The topography of the space for high level computer languages consists of many discrete isolated points of function separated by large expanses of non-function -- therefore the topology of the space for biology is the same." This looks like a strawman argument to me. Kindly show why a high level computer language is a valid direct parallel of biology. We have been over this many times before KF and it gets a little tedious having to correct you!
Kindly show that the case is not so
Kindly show that it is! Personally I think the jury is still out. DrBot
F/N: Let me put this a somewhat different way, in hopes it will get through the perceptions and beliefs. Designers of complex systems know that they have to configure very carefully if the resulting composite object is to work. Similarly for writers of programs or of text, or even mechanics putting in a car part. Islands of function are a commonplace reality where clumping at random is overwhelmingly unlikely to result in required function. Even in biological systems, the most ardent Darwinists will shy away from high doses of ionising radiation, precisely because they know that the most likely results of mutations or random reorganisation of biomolecules will be damaging, with radiation sickness and cancer lurking. In short, the overwhelming evidence is that the islands of function view is correct. Those who wish to reject it have to show empirical grounds for doing so. GEM of TKI kairosfocus
F/N: You may dismiss the point of islands of function as much as you wish. Kindly show that the case is not so, i.e. that protein fold domains are not deeply isolated in AA sequence space -- in the teeth of the research that points that way. Show that functional coded messages are not narrow cross sections of sequence space, where the overwhelming majority of complex strings are non-functional and even meaningless. Show how, by single letter changes, functional all the way, you can get from a Hello World to an operating system, and so on. Until you show such, we have every reason to accept the observation of islands of function in seas of non-function, especially given the impact of the constraints of required function on the possibilities that would otherwise obtain. kairosfocus
F.N: When it comes to quantifying CSI, or more relevantly FSCI, the approach of identifying the presence of an event E in a zone of interest T (from a very large config space relative to available resources), quantifying the number of bits, and comparing to a threshold large enough that it is unlikely that you were found in T by accident rather than intent, is reasonable and effective, as has been discussed for months. That is what the simple brute force X-metric has done for years now, and it is what the log reduced Chi metric does: X = C*S*B, where C is 1/0 according as the capacity is beyond 1,000 bits, S is specificity 1/0, and B is the number of functional bits. Chi_1,000 = I*S - 1,000, bits beyond a threshold. You may object, as is your privilege, but I daresay these two metrics have long done and will do the job of identifying reliably cases of designed informational objects, as can be tested against known cases. GEM of TKI kairosfocus
Dr Liddle: This is beginning to go in useless circles. The context of any reasonable extra information to judge a particular message has already been addressed, and that to exhaustion. Above I have excerpted Shannon, which should give further context if that was needed. And, given what we know about signals and noise, it would indeed be very possible to identify a single extraterrestrial message as that. The relevant context is not just an immediate traffic analysis of the string of messages. The point of the CSI concept is again to identify that which is organised as opposed to random or merely preset by mechanical necessity, an issue that, as I cited above, was put on the table by the 1970's. Pardon, but I find this discussion is becoming tedious, non-progressive treading in circles, or in effect a demand that we show basic and long since established things over and over and over again, where no repetition or elaboration will ever be deemed sufficient to be acceptable. I think you need to look at the issue of what makes for adequate warrant, and at what point you may be treading unawares into selective hyperskepticism based on suspicion of sources. Much of the above for me is stuff I first saw as a telecomms course student many, many years ago, and I am astonished to see this sort of stuff being suddenly suspect. When I saw that Schneider was trying to "correct" and dismiss Dembski as ignorant on the most commonplace definition of information there is in the field, I = - log p, that did it for me. After that point I had no further confidence in Mr Schneider or those who blindly followed him. GEM of TKI kairosfocus
PS: Remember, Shannon's context was to use the well known fact of typical patterns of symbol frequencies to measure information, on the grounds that the rarer symbols were more informative. A log frequentist probability metric was then fairly obvious, as had been suggested by Hartley maybe 20 years previously. (I gather that Morse consulted printers on the known frequency of occurrence of letters in English text when he constructed his code, e.g. E is about 1/8 of typical English text.) Just to settle things, here is a clip from the introduction to the 1948 paper:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.
He shortly goes on to say:
The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log2 2^N = N . . . . a decimal digit is about 3 1/3 bits. A digit wheel on a desk computing machine has ten stable positions and therefore has a storage capacity of one decimal digit.
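[A quick machine check of the arithmetic in that clip -- Python used purely as a calculator:

from math import log2

print(log2(10))        # ~3.32: a ten-state digit wheel stores about 3 1/3 bits
print(log2(2 ** 16))   # 16.0: N two-state devices store N bits

Storage capacity is simply the log of the number of distinguishable states.]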
We thus see here the direct use of both statistical considerations and the physical features of storage devices as measures of information. By Section 2 he is saying:
We now consider the information source. How is an information source to be described mathematically, and how much information in bits per second is produced in a given source? The main point at issue is the effect of statistical knowledge about the source in reducing the required capacity of the channel, by the use of proper encoding of the information. In telegraphy, for example, the messages to be transmitted consist of sequences of letters. These sequences, however, are not completely random. In general, they form sentences and have the statistical structure of, say, English. The letter E occurs more frequently than Q, the sequence TH more frequently than XP, etc. The existence of this structure allows one to make a saving in time (or channel capacity) by properly encoding the message sequences into signal sequences. This is already done to a limited extent in telegraphy by using the shortest channel symbol, a dot, for the most common English letter E; while the infrequent letters, Q, X, Z are represented by longer sequences of dots and dashes.
In short, from the outset the statistical properties of symbol strings were a part of the considerations, and were integrated into the theory as Taub and Schilling summarise along with many others. He continues:
We can think of a discrete source as generating the message, symbol by symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process. [3] We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source . . . . A more complicated structure is obtained if successive symbols are not chosen independently but their probabilities depend on preceding letters. In the simplest case of this type a choice depends only on the preceding letter and not on ones before that. The statistical structure can then be described by a set of transition probabilities pi(j), the probability that letter i is followed by letter j. The indices i and j range over all the possible symbols. A second equivalent way of specifying the structure is to give the "digram" probabilities p(i, j), i.e., the relative frequency of the digram ij . . . . The zero-order approximation is obtained by choosing all letters with the same probability and independently. The first-order approximation is obtained by choosing successive letters independently but each letter having the same probability that it has in the natural language. [5] Thus, in the first-order approximation to English, E is chosen with probability .12 (its frequency in normal English) and W with probability .02, but there is no influence between adjacent letters and no tendency to form the preferred digrams such as TH, ED, etc. In the second-order approximation, digram structure is introduced. After a letter is chosen, the next one is chosen in accordance with the frequencies with which the various letters follow the first one. This requires a table of digram frequencies pi(j). In the third-order approximation, trigram structure is introduced. Each letter is chosen with probabilities which depend on the preceding two letters. ++++++++++ F/N 5: Letter, digram and trigram frequencies are given in Secret and Urgent by Fletcher Pratt, Blue Ribbon Books, 1939. Word frequencies are tabulated in Relative Frequency of English Speech Sounds, G. Dewey, Harvard University Press, 1923.
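[The "first-order approximation" described in that clip is easy to reproduce: draw letters independently, each with its typical frequency. A toy sketch, with rounded illustrative frequencies for a small subset of symbols rather than Pratt's or Dewey's tables:

import random
from math import log2

# Rounded single-symbol frequencies for a toy subset of English (illustrative)
freq = {'E': 0.12, 'T': 0.09, 'A': 0.08, 'O': 0.075, 'N': 0.07,
        'S': 0.065, 'H': 0.06, 'R': 0.06, ' ': 0.18, 'X': 0.002}
total = sum(freq.values())
probs = {c: p / total for c, p in freq.items()}   # renormalise the subset

# First-order approximation: successive symbols chosen independently
print(''.join(random.choices(list(probs), weights=list(probs.values()), k=40)))

# Average info per symbol on this toy alphabet: H = - SUM p_i log2 p_i
print(round(-sum(p * log2(p) for p in probs.values()), 2), 'bits per symbol')

Second- and third-order approximations would simply condition each draw on the preceding one or two symbols.]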
This suffices to show that the approaches I have spoken of are in fact part and parcel of Shannon's approach. In addition, he recognises the centrality of meaningfulness but is focussed on aspects tied closely to sending information down channels, an engineering task. Other participants in this discussion and I have already spoken to how the context of meaningfulness can be added back in, through following the ID concept of zones of functional configurations from the field of possibilities. GEM of TKI kairosfocus
Kairosfocus:
Pardon, but it is a commonplace of communications theory that one characterises the pattern of symbol usage in typical messages, to infer to the probabilities of symbols on a sampling basis. It is not always accurate, e.g. that novel in (was it?) the 1930's that was cleverly designed not to have in it a single E.
Yes, and that's what I'm saying! That you need data external to the message to ascertain the information content of the message. You can't just look at the message alone and determine whether it has information content or not. You need independent information. That's all I was saying. You seem to agree - but at one point you seemed to be saying that you could look at a single message (perhaps a SETI message) and determine what if any of it was signal and what noise. I don't think you can, at least not using Shannon Information. Presumably that's why you need CSI. Except I think that has problems too :) But I may have misunderstood what you were saying earlier, in which case I apologise. We seem to agree wrt Shannon. Elizabeth Liddle
Dr Liddle: Pardon, but it is a commonplace of communications theory that one characterises the pattern of symbol usage in typical messages, to infer to the probabilities of symbols on a sampling basis. It is not always accurate, e.g. that novel in (was it?) the 1930's that was cleverly designed not to have in it a single E. But it is a general and typical approach, and indeed it is closely tied to the wider practices of statistics. Why then the belabouring as though I am saying something new that needs special demonstration? Let me cite again from Principles of Communication Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2, a work I have used in one edition or another for almost 30 years:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2 [My NB: generally -- cf here where I use F R Connor's approach, detected through statistical studies of messages, e.g. E is about 1/8 of typical English text], . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My nb: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 (1/pk) (13.2-1) [i.e. Ik = - log2 pk, in bits]
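[On that definition the familiar numbers drop straight out; a minimal check, with Python as the calculator:

from math import log2

p_E = 1 / 8             # E is about 1/8 of typical English text
print(-log2(p_E))       # Ik = -log2(pk) = 3.0 bits

p_rare = 1 / 1000       # an illustrative rare symbol, for contrast
print(-log2(p_rare))    # ~9.97 bits

Rarer symbols carry more surprise, which is the motivation for the log measure.]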
This is all of course in the context of an observed communication system -- an irreducibly complex entity that reeks of design, and of a pattern that allows us to recognise messages as distinct from noise, and the symbols in the messages [cf previous links on the eye diagram challenge that contrasts natural noise with intelligent, intentional signal], with some assurance that what is detected is what was sent. (The Connor derivation addresses this complexity.) In short, once we are in the context of signals, codes, and systems like this, there is overwhelming evidence of design to begin with. In that context, the point of a message as opposed to noise is that the signal has characteristics that normally reflect purpose and are distinct from the sort of randomness that messes up amplitude and timing etc. Notice, too, how above I pointed to a source of noise in the comms system used in protein assembly: the generic nature of the coupler to the AA in the tRNA, which leads to the observation of error correcting edit functionality. This is yet another strong index of design. So, no, I am not in fundamental disagreement with UB or Mung. Signals, comm systems that use them, and in particular symbols are -- per massive direct observation of their cause -- intentional, and the fact that a flat, uncorrelated distribution of pi would give a peak value for the H metric is simply an artifact of the mathematics. The divide-the-other-side gambit fails. And, when it comes to DNA and AA's, we have entire families of proteins and classes of organism to look at, a considerable population, and longstanding empirical justification for code assignments as well. In addition, we see that the AA string is based on a generic coupler, the NH2 and COOH bonding, just as in nucleic acids the bases are joined with a generic sugar phosphate coupler. The information is stored in key-lock configs of side-chains [as von Neumann proposed for his kinematic replicator -- rods with amplitude storing digital information], but the sequencing allows us to deduce that we are dealing with a 4-state digital code. As has been known and publicly stated since the 1950's, and as was decoded across the 1960s. Indeed, we have the situation where Venter used the code to put in his "watermark" signature. Other researchers have loaded what normally would be a stop codon with AAs and have chained novel chains as a result. This is symbolic, purposeful language with the possibility of reprogramming. We thus know the info storage capacity per symbol, and we know how the codes used make use of that capacity. I find it tiresome to have to go over this ground again and again and again as in recent days and weeks, as it is so much a commonplace. I think it is you who would really need to carry the burden of warranting the claim that DNA is not a digitally coded molecule that stores up to 2 bits per GCAT symbol -- information that is used to make proteins by a step by step assembly process using mRNA, tRNA, ribosomes and enzymes etc. That this seems to be such a struggle to reach agreement on is itself strong evidence of the implications of this pattern of credibly established and commonplace facts. These are not pretended direct observations of a remote past, but things we can see through well established practices in the present, now culminating in the ongoing sequencing and publishing of the genomes of organism after organism, including us. GEM of TKI kairosfocus
kairosfocus (continuing with #315...) You write:
4 –> In the case of D/RNA and AA’s, we know we have a flexible string structure capable of storing 4-state [2 bits] or 20-state [4.32 bits] information per symbol.
OK.
5 –> So, we have a baseline of storage potential. In this context, one that is hard wired into the chemistry: essentially any base can go next to essentially any base, and the same holds for AA’s.
Agreed re bases; not sure that it's true for AAs (suspect some combos don't result in viable proteins, but I don't know).
6 –> In the case of AA’s, in life forms, the actual sequences used are specified by base sequences in DNA thence mRNA courtesy the ribosome as an automated assembly plant. mRNA can and does sequence essentially any AA followed by any AA.
OK.
7 –> That from the field of possibilities, in life forms certain sequences are chosen, has to do with constraints of purpose or onward utility, not the chemistry of chaining.
Yes indeed, with the caveat that I think we can substitute "teleonomy" for "purpose". A persisting system is constrained by what facilitates persistence; in other words, in a persisting system, we would expect to find elements and mechanisms that promote persistence, not dissolution.
8 –> In short, the islands of function where we have properly folding, agglomerating, activated AA sequences that do the work of life are the sequences USED, as opposed to the sequences that are POSSIBLE.
Yes, indeed, except that I am wary of the metaphor, "islands". A sequence that is USED may adjoin a sequence that is POSSIBLE but not USED.
9 –> To get to these functional AA sequences [which are deeply isolated in the field of possible AA chains], we have a symbolic mapping in the D/RNA system, with 3-letter codons specifying 64 states mapped onto the 20 AA states and certain procedural instructions: start chain, stop chain.
Well, as I've said above, I'm wary of the term "symbol" in this context. But, in the sense that other systems could (perhaps) in theory exist (perhaps in some unrelated part of the universe) in which the reading system was different, and resulted in a different mapping, then, maybe it's OK. I'll try not to let it hold me up. I'm more concerned about "isolated" but then that is the point of contention :)
10 –> So, it is immediately seriously arguable that the relevant number of possibilities per element are 4 and 20 for D/RNA and AA’s, respectively. Certainly, that holds from the getting to the first functional case end of things.
OK-ish.
11 –> Now of course — as is true of any real world code pattern, there is not in practice a perfectly even distribution of the symbols as used. So, we can take a frequentist approach on the codes as observed, and infer that from the information carrying capacity, the USED variability is somewhat different, and yields a lower number of bits per element.
Indeed there is not an even distribution of the codes used, nor of the combinations of codes used. This is a crucial point.
12 –> Sounds plausible, but this comes at a surprisingly high price. Namely, this directly implies design of a language, not the outworkings of blind chemistry.
No, this I believe is an error. Darwin proposed something very simple but very important, which was, in effect, a series of filters, each of which alters the pdf of what is found at the next, and which is specifically biased towards sequences that promote persistence (survival and replication) of the sequence. This hugely alters the case. We are not talking about a flat, or near flat, distribution that has no relationship with "teleonomy" (the constraints on persistence). The pdfs are serially forced through a series of filters that "select" the sequences that best persist. And lest this seem tautological (which it almost is, but only because phrasing it like that makes it more complicated than it is), a simpler way of saying it is: sequences that promote persistence will tend to persist while sequences that undermine persistence will tend to be lost.
13 –> For, it is implicitly inferring that it is not the chemistry that controls but the grammar, and the fitness for onward function.
Yes, exactly. But that does not allow us to infer intelligent design, merely the persistence of sequences that promote "onward function", by definition.
14 –> So, there is a choice: if we look to the chemistry, we see that degree of flexibility of possibilities that immediately leads us to having no preference for any particular sequence patterns; if we look to the frequencies of symbols in observed cases, we are looking at in effect a symbol assignment on fitness for function.
Well, yes, but that is explained by "natural selection" acting on "variance", or, to put it as I prefer, the tendency of sequences with phenotypic effects that raise the probability of reproduction in the current environment to propagate through the population.
15 –> In any case, the chemistry is an independent source of information [!] on probability distribution functions, i.e. we have no in-principle reason to reject a flat-random pattern as a first probability distribution function estimate; especially if one is appealing to blind chemistry leading to random walks and trial and error as cause.
Nor any reason to assume it. Generally flat distributions are rare; Gaussian distributions much more common (as the Central Limit Theorem suggests). Time-to-failure distributions even more common. Essentially, natural selection samples the right hand tail of time-to-failure distributions, and then resamples the previous sample in each generation.
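To make that concrete, here is a toy sketch of the resampling just described: each generation, "offspring" values vary slightly around survivors drawn from the right-hand tail of the previous generation. The population size, tail fraction and Gaussians are arbitrary illustrations, not a biological model:

import random

random.seed(1)
pop = [random.gauss(0.0, 1.0) for _ in range(1000)]   # generation 0

for gen in range(10):
    pop.sort()
    survivors = pop[-200:]                  # sample the right-hand tail
    # each survivor leaves five offspring that vary slightly around it
    pop = [random.gauss(parent, 0.5)
           for parent in survivors for _ in range(5)]
    print(gen, round(sum(pop) / len(pop), 2))

The population mean climbs steadily: a pdf serially forced through a filter, with no flat distribution anywhere in sight.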
16 –> If instead we look at the frequency exhibited by the population of symbols used in actual bio-functional molecules, that is not circular; it is inferring that functional messages in the system make use of symbols — per empirical investigations — in certain ways. So, we may access that empirical study to estimate the a priori probability of such symbols in any given case.
Not without ignoring the inbuilt sampling I have mentioned. When we include that, we get a very different pdf, I would argue, and one that hugely increases the value of p(T|H) (not to mention rendering it effectively incalculable in any given case).
17 –> This is a typical pattern in statistical investigations, where we study cases that seem to be typical to infer to the properties of the population as a whole. Sure, things can go wrong, but so long as we keep an open mind and recognise the inherent provisionality, we have good mathematical grounds for reasonable confidence. (So, to single out a particular case of a general pattern and to criticise it as though that were somehow a special and unique problem warranting specific suspicion, is to slip into selective hyperskepticism.)
Yes indeed. That's why I'm not bothered that much by the Very Big Number on the left of the CSI formula. It's the Very Small Number on the right that I think is wrong :)
18 –> In this context, Shannon worked out an average info per symbol metric that takes into account the different observed frequencies of occurrence of symbols: H = – [SUM on i] pi log pi
Yes, but we still need p, and the calculation of p is what is at issue.
19 –> So, we have two valid approaches, and we have comparable results up to an order of magnitude, certainly in the aggregate. 20 –> Namely, the functional information in the living cell, in aggregate can be estimated on more or less standard techniques, and if we do so, we see that we have quite enough functionally specific and complex information to infer that the best explanation for what we observe is design.
Well, I disagree, profoundly, for the reasons above. The "more or less standard techniques" completely ignore the filtering process known as "natural selection", and this is precisely why I think we cannot distinguish (by this means at any rate) between the products of natural selection and the products of design. Design may indeed constrain the pdfs of the components of the code, but so does natural selection, because of the simple truism that sequences that promote their own persistence will tend to persist, while those that don't, won't. This is the heart of Darwin's insight, and whether or not ID is true, it renders the inference of ID from the kind of functional sequences we observe in life unjustified. At least, that is the argument I am making :)
21 –> Shooting the messenger is not going to change the message he has already delivered. GEM of TKI
No, but finding the error in the message may alter the conclusions we draw :) Elizabeth Liddle
F/N: Significance of the UPB. It marks a threshold where a random walk based search that starts at an arbitrary initial point will be maximally unlikely to find an island of function, for failure of adequate resources to get enough of a sample to be credibly different from no sample. It has already been shown that the average search algorithm is this one. And, one will need a credible source of unintelligent [which BTW is unintentional by definition] bias that puts one in the near vicinity of such islands if one is going to be able to argue that chance and necessity can get the ball rolling, so to speak. kairosfocus
BTW, the above was a response to #315, sorry should have made that clear. Crossposted with #316. Elizabeth Liddle
So you are saying, and in this you appear to disagree with Mung and myself, and possibly with Upright BiPed, that the value p, which you need in order to quantify "surprise" can be estimated from the frequency of each character within the message? If not, and you say: "In that context, so soon as we can characterise a typical symbol frequency distribution, we are well on the way", then that frequency distribution must be estimated from some other information source, which was my point: if you are only estimating your pdfs from the message, then you have no way of knowing whether the message contains any useful information or not - you have no distribution under the null hypothesis with which to compare your message. To take Mung's example above, of a string of Ones. If we take the pdf from the message, the message contains zero bits of information, because the probability of a One (estimated from the message) is 1. However, if we know that the pdf under the null is equiprobable Ones and Zeros, then the message contains 100 bits of information. The point being that the information content of the message is a function of your priors concerning the distribution from which possible messages are drawn. Without those priors, you can't compute the information content of the message. To quote Shannon himself (1948):
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely.
(Italics as in the original). In other words, for Shannon information, firstly, meaning is irrelevant; and secondly, the estimate of information content requires knowledge of the set from which possible messages are drawn. This is why I demurred about using Shannon Information as a measure of information where information is taken as having meaning, and also as a measure of meaning when we do not know anything about the set of possible messages. And why I disagreed with you when you said you could distinguish signal from noise from inspection of the message alone. Do you see my point? Elizabeth Liddle
F/N: Some clarifications following up from the above:
[KF:] P(T|H) is a probability metric on a random walk hyp, perhaps with a bias. [EL:] And “natural selection” can best be expressed as such a bias. But drift is also a factor, and the two interact.
1 --> Question-begging: the issue is not hill climbing within an island of function, where differential reproductive success allows for specialisation to niches, but to ARRIVE at islands of function across vast seas of non-functional configs where there is no reproductive success possible at all on the relevant body plan. 2 --> Bias, drift and hill-climbing to wander about within an island of function have not addressed the focal issue for FSCI: getting to such islands of function in the face of the field of possibilities for strings of that degree of complexity. 3 --> Let me be direct: (i) can you account for the origin of novel, FSCI-rich body plans in the face of the implications of codes and the requisites of functional folding (given the freedom in the chemistry) that only very specific sequences will work? (ii) can you show empirically that once any one initial body plan, e.g. a unicellular one, has been arrived at, all other body plans are connected by small incremental changes within reach of the sort of drift and variation we have seen: a few point mutations, or some duplication and variation of a string of DNA etc? (iii) If so, what is it, and what then is the answer to the remarks of Gould et al, e.g. here? [Please note the three different quotes or clusters of quotes.] (iv) Similarly, how do you then account for, say, the Cambrian fossil life revolution, and on what specific empirical evidence? Next:
the probabilities have to be computed (if they can be – we simply do not have enough facts, in practice, to do it) as contingent probabilities. Clearly the probability of a given strand of DNA from a living organism arising by chance is very tiny. But that is not the null. The null is that a much earlier, chance sequence happened to give rise to something that increased the strand’s probability of being transmitted (replicated) and thereby created an enhanced number of opportunities for a subsequent enhancement of the original something to occur. This lies at the heart of Darwinian evolution.
4 --> Begs the question. Consistently, you argue about moving around in an island of function, without grounding how you can get to the island of function in the first place. 5 --> This issue holds for first cell based life -- the ONLY observed biological life we have -- and it holds for subsequent more complex body plans, which have to be embryologically feasible in genes expressed early in development if there is to be a new body plan at all, as that is when it is expressed. 6 --> The consistency with which Darwin supporters cannot seem to see this problem tells me that there is a problem of paradigm indoctrination here that blinds minds to what should otherwise be plain and obvious. 7 --> Let us remember Kuhn's warning that a paradigm is not only a way to see but a way to be blinded to what it does not see. In this case, the problem of complex functional organisation that has to be expressed at early stages of development of a body plan, for it to be viable at all and reproduce. 8 --> The injection of a bias on function and advantage would be relevant in the case of an already functional body plan, but that is not where we are starting; we are starting with the chemistry of chaining, and with the situation of a complex information system that expresses the coded information in that chain of bases or AAs. 9 --> In the warm little pond or the like, the chaining is in the face of chirality, cross-interference, and the unfavourable thermodynamics of the relevant molecules, all attested to by the assembly line processes that are used to build the molecules in the living cell. These are not thermodynamically favourable, and have to be paid for by using energy-rich molecules to drive the process forward. Not to mention the need to spontaneously invent and assemble systems that express symbolic codes, step by step execution algorithms with halting, and assembly lines to make the things effective. The only empirically supported source for such is intelligence, and that is backed up by the relevant analysis of config spaces. Our observed cosmos just does not have enough resources in it to search enough of such spaces to make happenstance a reasonable explanation. 10 --> In the case of our proposed ancestral organism, there would need to be a complete chain of simple favourable steps to move from primitive (so-called) to complex body plans. But the jump to a new body plan credibly requires 10 - 100+ mn bits of information that has to come in the form of regulatory and instructional codes. 11 --> In addition, one needs enough time and population to fix these, then go on to the next stage, all in the face of the unfavourable balance of mutations, where marginally damaging mutations are also much more likely than those of incremental improvement and are quite likely to get fixed; i.e. we see reason to infer to net deterioration, embrittlement and ultimate breakdown of the genome and life function, not progress, on this model. 12 --> This is of course the genetic entropy challenge, and it is equivalent to the problem of the compulsive gambler: he may win small or big on occasion, but the odds are net unfavourable, so on average he is being ruined all along. (A toy simulation below makes this concrete.) GEM of TKI kairosfocus
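F/N 2: The gambler analogy in 12 just above can be made concrete with a toy simulation -- a sketch only, assuming unit bets at a 48% win chance, with no pretence of modelling a genome:

import random

random.seed(2)

def gamble(bankroll=100, p_win=0.48, max_bets=10_000):
    # Unit bets at slightly unfavourable odds; return the final bankroll.
    for _ in range(max_bets):
        if bankroll == 0:
            break                    # ruined
        bankroll += 1 if random.random() < p_win else -1
    return bankroll

ruined = sum(gamble() == 0 for _ in range(500))
print(ruined, 'of 500 gamblers ruined')   # typically the large majority

Occasional wins notwithstanding, the slight negative drift dominates on average.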
Dr Liddle: This caught my eye:
My point was that in order to know how much Shannon information is in a message, in any useful sense, you have to have independent information about the pdfs for each character in the message. If you don’t, all you have is the pdf from the message itself, and, in the case of a message from 100 coin tosses, that will be more or less 100 bits. But that’s a useless measure, because the pdf wasn’t derived independently.
1 --> Nope, we first know that we have a metric by which we can turn degree of surprise within a system of symbols into a measure of how informative a string of such symbols is: I = log (1/p) 2 --> In that context, so soon as we can characterise a typical symbol frequency distribution, we are well on the way. 3 --> Directly related, information systems have a structure, and we may directly inspect what in them stores symbols, and how they work, which gives us a baseline on possibilities. For instance, if something does not allow for contingencies, it cannot store information. 4 --> In the case of D/RNA and AA's, we know we have a flexible string structure capable of storing 4-state [2 bits] or 20-state [4.32 bits] information per symbol. 5 --> So, we have a baseline of storage potential. In this context, one that is hard wired into the chemistry: essentially any base can go next to essentially any base, and the same holds for AA's. 6 --> In the case of AA's, in life forms, the actual sequences used are specified by base sequences in DNA thence mRNA courtesy the ribosome as an automated assembly plant. mRNA can and does sequence essentially any AA followed by any AA. 7 --> That from the field of possibilities, in life forms certain sequences are chosen, has to do with constraints of purpose or onward utility, not the chemistry of chaining. 8 --> In short, the islands of function where we have properly folding, agglomerating, activated AA sequences that do the work of life are the sequences USED, as opposed to the sequences that are POSSIBLE. 9 --> To get to these functional AA sequences [which are deeply isolated in the field of possible AA chains], we have a symbolic mapping in the D/RNA system, with 3-letter codons specifying 64 states mapped onto the 20 AA states and certain procedural instructions: start chain, stop chain. 10 --> So, it is immediately seriously arguable that the relevant number of possibilities per element are 4 and 20 for D/RNA and AA's, respectively. Certainly, that holds from the getting to the first functional case end of things. 11 --> Now of course -- as is true of any real world code pattern, there is not in practice a perfectly even distribution of the symbols as used. So, we can take a frequentist approach on the codes as observed, and infer that from the information carrying capacity, the USED variability is somewhat different, and yields a lower number of bits per element. 12 --> Sounds plausible, but this comes at a surprisingly high price. Namely, this directly implies design of a language, not the outworkings of blind chemistry. 13 --> For, it is implicitly inferring that it is not the chemistry that controls but the grammar, and the fitness for onward function. 14 --> So, there is a choice: if we look to the chemistry, we see that degree of flexibility of possibilities that immediately leads us to having no preference for any particular sequence patterns; if we look to the frequencies of symbols in observed cases, we are looking at in effect a symbol assignment on fitness for function. 15 --> In any case, the chemistry is an independent source of information [!] on probability distribution functions, i.e. we have no in-principle reason to reject a flat-random pattern as a first probability distribution function estimate; especially if one is appealing to blind chemistry leading to random walks and trial and error as cause.
16 --> If instead we look at the frequency exhibited by the population of symbols used in actual bio-functional molecules, that is not circular; it is inferring that functional messages in the system make use of symbols -- per empirical investigations -- in certain ways. So, we may access that empirical study to estimate the a priori probability of such symbols in any given case. 17 --> This is a typical pattern in statistical investigations, where we study cases that seem to be typical to infer to the properties of the population as a whole. Sure, things can go wrong, but so long as we keep an open mind and recognise the inherent provisionality, we have good mathematical grounds for reasonable confidence. (So, to single out a particular case of a general pattern and to criticise it as though that were somehow a special and unique problem warranting specific suspicion, is to slip into selective hyperskepticism.) 18 --> In this context, Shannon worked out an average info per symbol metric that takes into account the different observed frequencies of occurrence of symbols: H = - [SUM on i] pi log pi 19 --> So, we have two valid approaches, and we have comparable results up to an order of magnitude, certainly in the aggregate. 20 --> Namely, the functional information in the living cell, in aggregate can be estimated on more or less standard techniques, and if we do so, we see that we have quite enough functionally specific and complex information to infer that the best explanation for what we observe is design. 21 --> Shooting the messenger is not going to change the message he has already delivered. GEM of TKI kairosfocus
We seem to be arguing at cross-purposes, Mung. It's probably my fault, but it's getting late, and I'll be off line for a bit now. But the long and short of it is I'm not disagreeing with you. My point was that in order to know how much Shannon information is in a message, in any useful sense, you have to have independent information about the pdfs for each character in the message. If you don't, all you have is the pdf from the message itself, and, in the case of a message from 100 coin tosses, that will be more or less 100 bits. But that's a useless measure, because the pdf wasn't derived independently. So we all agree: Shannon information is only a useful measure if we have some independent information about the source. We can't figure it out from the message alone. Or, if we do, we get a silly answer. I'm sorry if I appeared to suggest otherwise. See you guys probably in a few days. It's been a really interesting conversation so far. Cheers Lizzie Elizabeth Liddle
For instance, you say that my statement that “…on that definition, any stochastic process creates information” is false. But you just gave an example of a stochastic process that created information, not a stochastic process that didn’t.
I did no such thing. :) In fact, I showed just the opposite. On the Information Content of a Randomly Generated Sequence On the Meaning of Shannon Information On the Information Content of a Randomly Generated Sequence (cont.) The sequence of 0's and 1's representing true/false answers to the questions posed was by no means stochastic or random. It's hard for me to conceive of how a randomly generated sequence of symbols sent in a message could convey information. If Upright BiPed was asking questions about the configuration of Heads and Tails in the sequence you generated by tossing a coin, and in response you sent him a randomly generated sequence of 0's and 1's, you would have been sending him nonsense, not 100 bits of Information. Let me put it another way. You've tossed a coin 100 times and recorded the sequence. Upright BiPed wants to obtain information about the sequence. He asks a series of questions. Q1. Was the result of the first coin tossed a heads? Now let's say you have a transmitter with three buttons. Press the first button and it sends a 0. Press the second button and it sends a 1. Press the third button and either a 0 or a 1 is sent with equal probability. Your claim is that if, in response to his first question, you strike button three, you've sent him one bit of Shannon Information. And if you hit button three in response to every one of his 100 questions, you would claim you've sent him 100 bits of Shannon Information. So he asks questions about the configuration of the sequence of heads/tails, and each time you send him a 0 or a 1, but the symbol sent has nothing at all to do with the actual head or tail that was recorded. You claim you've sent him information. I say you haven't. Mung
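[Mung's three-button transmitter is easy to simulate, and the simulation bears the point out -- a minimal sketch:

import random

random.seed(0)
coins = [random.randint(0, 1) for _ in range(100)]   # the recorded tosses

# Button three: send a fair random bit, whatever the question was
sent = [random.randint(0, 1) for _ in coins]

agree = sum(s == c for s, c in zip(sent, coins))
print(agree, 'of 100 answers correct')   # ~50: no better than guessing

The channel has 100 bits of capacity, but the receiver ends up knowing no more about the coin sequence than before: nothing about the tosses was conveyed.]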
I hope not, Mung. That's why I tried to pin down something in my response to you above. Operationalizing hypotheses are a tedious but absolutely necessary part of scientific methodology (probably rather like trying to write a good piece of legislation). But the aim is to find stuff out, not claim that we can't know anything! And it's perfectly doable, just, well, tedious. Elizabeth Liddle
I’m not going to agree or disagree without – you guessed it – an operational definition of “about something”.
Well, it's at least comforting to know I wasn't imagining things. So once we start talking about about without knowing what we are talking about, are we then going to talk about the words that tell us about what it means for something to be about something, and claim that we're now involved in circular reasoning and can never therefore know anything about anything at all, because we can't know anything about about without appealing to what it means for something to be about something? Mung
I'm not going to agree or disagree without - you guessed it - an operational definition of "about something". However, I think Upright BiPed and I may have come to something close. I would certainly agree (indeed, it's a point I've been making for some time) that in order to quantify Information (Shannon Information) we need to know what additional Information is available to the receiver regarding the probability distribution of the characters in the message under the null hypothesis of "no information". That way, we can conclude, that if the probability distribution of the characters, and their sequence, in the message is improbable under the null, that the message is "about" something, i.e. that it is informative. Does that count? Elizabeth Liddle
Elizabeth, Information, in order to be Information, must be about something. Do you agree or disagree?
A fundamental, but somehow forgotten, fact is that information is always information about something. -- The Mathematical Theory of Information
Sorry, but there's still some doubt in my mind about whether you accept this as true. Mung
Kairosfocus:
Pardon my mis-speaking, I think I was tired. GEM of TKI
No problem :) I mis-speak (and mistype) all the time. My error-monitoring system is aging, and my eyesight is not what it was, either! Peering through the only bit of my glasses with the right focal length for a computer screen doesn't help. Oh for the wisdom of age with the vigour of youth :) Regarding the UPB: I'm not especially concerned about the setting of what I am calling "alpha". As I said, I would accept a much lower threshold as evidence against the null. What I am more concerned about is the computation of the probability of the observed pattern under the null (allowing for correction for multiple hypotheses, namely the number of similarly compressible patterns). You say:
P(T|H) is a probability metric on a random walk hyp, perhaps with a bias.
And "natural selection" can best by expressed as such a bias. But drift is also a factor, and the two interact.
But, once we reduce it to the information metric, we can also come back to it from the direct observation of information storage or messages in that storage area. DNA is a direct info store, and so are amino acid chains. Subtler cases come with functionally organised entities, where we can infer information stored in the functional organisation based on perturbation to get tolerances and the number of yes/no structured questions to specify the resulting function. This is implicit in, say, the engineering drawings for a machine.
I agree that DNA can be regarded as an "information store", as can a great many other components of living organisms. But the null hypothesis for the patterns that encode that information has to include the consequences of cumulative acquisition. In other words, the probabilities have to be computed (if they can be - we simply do not have enough facts, in practice, to do it) as contingent probabilities. Clearly the probability of a given strand of DNA from a living organism arising by chance is very tiny. But that is not the null. The null is that a much earlier, chance sequence happened to give rise to something that increased the strand's probability of being transmitted (replicated) and thereby created an enhanced number of opportunities for a subsequent enhancement of the original something to occur. This lies at the heart of Darwinian evolution. However, as you pointed out, it does not explain the origin of the strand, or the mechanisms by which certain sequences of such strands were able to enhance the probability of the strand being replicated. That is what I hope to address in my proposed project. Elizabeth Liddle
F/N: Since there is an assertion of a blunder, it seems I need to again point out the basis for the Dembski type bound on the number of possible events. Pardon, but I find it a little tiring to see "corrections" that are not correct. 1 --> It is commonly estimated that there are some 10^80 particles in the observable cosmos, which we take as a crude estimate of the number of atoms. (This is already conservative.) 2 --> The Planck time is about 5*10^-44 s, which is rounded down to 10^-45 s. There are about 10^20 P-times in the duration of a strong force nuclear interaction, and about 10^30 in that of a fast ionic chemical interaction [organic reactions are MUCH slower, with ms or even s not unlikely]. 3 --> The number of seconds since the big bang is about 10^17, and the time from BB to heat death may reasonably be put at about 50 mn times this, 10^25 s. 4 --> The number of states possible for 10^80 atoms, in 10^25 s, at 10^45 states/s, is thus 10^150. 5 --> This is an upper bound on the number of events in the observed cosmos. 6 --> Similar estimates for our solar system since the big bang give an upper bound of order 10^102 possible events for 10^57 atoms. 7 --> You will see this is independent of Seth Lloyd's numbers and his framework of conversions to get 10^90 bits [i.e. this is the scope of the equivalent storage register to the observed cosmos . . . ] carrying out 10^120 operations. [One can take it that it is atoms that are acting and taking up states in the relevant context of events. NB: Dark matter does not seem to be conventionally atomic, based on observations of its behaviour, so it is not relevant to the calculation.] 8 --> By comparison, 500 bits will have ~ 3*10^150 possible configs, and 1,000 bits will have ~ 1.07*10^301 possible configs. 9 --> The Solar system will only scan up to 1 in 10^48 of the number of configs for 500 bits, and the observed cosmos will scan up to 1 in 10^150 or so of those for 1,000 bits. _________ The Dembski type bound is reasonable. GEM of TKI PS: For a sounder analysis than was linked just above, I suggest -- again -- Abel's Universal Plausibility Metric paper. kairosfocus
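PPS: The bounds just listed are easily machine-checked; a quick sketch, with Python as a big-number calculator and nothing more:

from math import log10

atoms_cosmos = 10 ** 80           # particles in the observable cosmos
states_per_sec = 10 ** 45         # Planck times per second, rounded down
seconds_to_heat_death = 10 ** 25

print(log10(atoms_cosmos * states_per_sec * seconds_to_heat_death))
# 150.0 -> upper bound of ~10^150 events

print(2 ** 500 / 10 ** 150)       # ~3.3 -> 500 bits ~ 3*10^150 configs
print(log10(2 ** 1000))           # ~301 -> 1,000 bits ~ 1.07*10^301 configs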
Dr Liddle: Pardon me, I misspoke earlier but corrected myself above. Information metrics are equivalent to or catch up probability metrics. As noted above, you can directly deduce info carrying capacity, and you can use a frequentist analysis of symbols in information bearing messages. P(T|H) is a probability metric on a random walk hyp, perhaps with a bias. But, once we reduce it to the information metric, we can also come back to it from the direct observation of information storage or messages in that storage area. DNA is a direct info store, and so are amino acid chains. Subtler cases come with functionally organised entities, where we can infer information stored in the functional organisation based on perturbation to get tolerances and the number of yes/no structured questions to specify the resulting function. This is implicit in, say, the engineering drawings for a machine. If you look at the log reduction, you will see that the p(T|H) term goes to the information metric, and the other terms go to the threshold. I may as well clip it again, just to make it clear what is going on:
what about the more complex definition in the 2005 Specification paper by Dembski? Namely:
define φ_S as . . . the number of patterns for which [agent] S's semiotic description of them is at least as simple as S's semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. [31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φ_S(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 · φ_S(T) · P(T|H)] . . . eqn n1
How about this (we are now embarking on an exercise in “open notebook” science): 1 –> 10^120 ~ 2^398 2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2 3 –> So, we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3 [p is the probability term] Chi = Ip – (398 + K2) . . . eqn n4 [now an information term] 4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities. 5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. [this is an allusion to essentially the limit of our solar system and/or the cosmos . . . ]
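A quick numeric check of steps 1 and 3, for concreteness (Python as the open notebook; the inputs are assumed illustrative values):

from math import log2

print(log2(10 ** 120))      # ~398.6, so 10^120 ~ 2^398, as in step 1

def chi(p, phi_s):
    # Eqn n3: Chi = -log2(2^398 * phi_S(T) * p), in bits
    return -(398 + log2(phi_s) + log2(p))

print(chi(2 ** -500, 4))    # 100.0 bits beyond the threshold

The 10^120 bound thus folds into a fixed ~398-bit increment to the threshold, which is the point of the log reduction.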
Pardon my mis-speaking, I think I was tired. GEM of TKI kairosfocus
Kairosfocus @ #299 (golly, 299!!!)
Dr Liddle: The P(T|H) term etc get subsumed in the limit; in effect a threshold is set beyond which these will not reasonably go for the solar system or the observed cosmos. In effect you have set every atom to work looking for the edge of a zone of interest, but with a big enough field, the isolation of the zones tells. With Chi_1,000, the whole observed cosmos is unable to scan enough of the space of possibilities to make a difference from no scan. I have already shown how that happens, so I will not repeat myself. That’s why there is a threshold imposed.
I understand why there is a threshold imposed. It is the equivalent of (if not precisely the same as) an alpha value. What the chi threshold does, it seems to me, is to say: if, under the null, the probability that an event of class X will happen at least once in the history of the universe is less than .5, we can reject the null. (I will leave aside Howard Landman's note http://www.scribd.com/doc/23648196/Landman-DEMBSKI-S-SPECIFIED-COMPLEXITY-A-SIMPLE-ERROR-IN-ARITHMETIC-2008-6 regarding the number of possible events in the universe as having been underestimated, thus rendering the threshold unexceedable by any pattern, even by those known to have been designed, and thus making it impossible to conclude design for any event, as I am perfectly happy to reject the null on a less conservative alpha.)

So what we need to do, therefore, to test whether our observed pattern reaches the threshold at which we can reject the null is to calculate the probability of observing it under the null (this is straightforward standard null hypothesis testing procedure, of course). Which makes "non-design" the null and "design" the hypothesis (with a very high bar for "design"). So how do we go about calculating the probability of observing the observed under the null? Without a way of calculating that, we cannot test whether a pattern's chi exceeds the threshold and allows us to reject the null. And "that" is given by: φS(T)·P(T|H). P(T|H) is not "subsumed into the limit". It must be calculated (as must φS(T)) in order to determine whether, in effect, the product of the Seth Lloyd estimate with the probability of one of the φS(T) patterns of class T being observed under the null is less than .5. No?
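To pin down the correspondence I have in mind, here is a toy Python calculation; every number in it is invented, which is exactly the point: the null has to supply P(T|H) before the threshold can do any work:

    import math

    P_T_given_H = 2.0 ** -450    # invented probability of the target class under the null
    phi_S_T     = 10 ** 3        # invented count of at-least-as-simple patterns
    trials      = 10 ** 120      # the Seth Lloyd bound

    # Expected number of hits over the history of the universe, under the null:
    expected_hits = trials * phi_S_T * P_T_given_H

    chi = -math.log2(trials * phi_S_T * P_T_given_H)

    # chi > 1 exactly when expected_hits < 0.5 -- an alpha criterion in
    # disguise, but one that cannot be evaluated without P(T|H) as an input.
    print(expected_hits, chi)    # ~3.5e-13 and ~41.4 bits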
The estimates for actual parameters will REDUCE the scope of search below that. Think about converting the observable cosmos into banana plantations, trains to move the bananas, forests of paper and monkeys typing impossibly fast at keyboards, from the big bang to the heat death; they will not exceed the limit we have set. Nor will any other scenario.
Yes, I understand the principle, once you have the probability under the null. But you can't conflate the alpha value (how improbable a thing has to be under the null before you can reject the null) with computing the probability of the observed under the null. It seems to me that is what you are doing. If not, what am I not seeing?
As VJT showed, months ago now: we have an upper limit, and we have reason to see that we are going to be within that limit; then we see also how the resources of the solar system or cosmos will be vastly inadequate. GEM of TKI
Yes of course. But my question still stands :) And it is important, because the argument made against ID is not that very improbable patterns can happen by chance, but that the patterns deemed by IDists as improbable under the null are not, in fact, improbable. This point I thought was made elegantly in the conclusion to Granville Sewell's paper discussed here recently. Cheers Lizzie Elizabeth Liddle
Upright BiPed: Thank you for your long and thoughtful post. No problem about the delay - a slow pace suits me right now, as I have a rather long to-do list! But this is interesting.
Lizzie, “the confusion has arisen because I was trying to establish what criterion UB wanted to use for information.” We talked about it, and many things were mentioned. Do we want to have a conversation, and then turn around only to remember what you can fit into a convenient definition, pretending for a moment that we can fit the entirety of our knowledge on a postage stamp and then argue over what gets left off? What would Popper say? Operational definitions are not limitless constructs; they are as fallible as any other good idea (and in a variety of contexts). If in this instance they can be used to skirt the strength of an opposing argument, they will be. And we wouldn’t want that to happen.
If "they can be used to skirt the strength of an opposing argument" the aren't what they say on the tin:) That's why I want to get this right. However, I'm not quite sure what you mean when you say "operational definitions are not limitless constructs". To be useful, they need to be as limiting as possible (in one sense anyway, possibly not the sense you intended, which is why I am asking for clarification), i.e. leave as little as possible open for subjective nuance or alternative interpretation. I am not looking for a "convenient" definition. I am looking for a rigorously specified definition that can be applied to any candidate output, so that the presence ir absence and/or the quantity of the thing defined can be ascertained objectively. To quote wikipedia: "An operational definition defines something (e.g. a variable, term, or object) in terms of the specific process or set of validation tests used to determine its presence and quantity. That is, one defines something in terms of the operations that count as measuring it." http://en.wikipedia.org/wiki/Operational_definition
So relax…and spare me the pedantics. ;) If I say something illogical and unsupported, you won’t need your rule book to point it out to me. You say that you want a solid definition and you don’t want any shifting of goalposts. Well, exactly which goalpost would you like then? If it’s not too much to ask; is it the one that actually reflects reality? You say that you never promised abiogenesis, and that is technically correct, yet at least in large measure, that is exactly what you propose. Living things are animated by the organization that comes from the rise of information, specifically information that is recorded by means of a sequence of repeating chemical symbols mapped to specific actions taken by the cellular machinery. If you can explain the rise of this symbolically recorded information, then you can most probably explain Life. As for myself, this is the only goalpost that ultimately matters.
What I am looking for, as I am proposing to demonstrate that it can be generated by no more than Chance and Necessity, is an operational definition of the kind of Information that you (or IDists) claim requires Intelligent Design to generate. So if that definition includes a threshold of some kind, obviously I don't want that threshold to move! And if I succeed, according to the agreed operational definition, I don't want people to say: ah, but this has nothing to do with real chemistry (unless of course chemistry is included in the operational definition). And, conversely, I don't want any wiggle room for myself either. That's really the whole point of operationalizing a hypothesis - to make sure that the playing field is level, and both sides are clear about what both success and failure would look like.
Also, you are approaching this with a specific end in mind, and you have already stated what that end is. Your intent in this is to be able to say that ID must “think again” because it’s “flawed”. You’ve illuminated this intent several times already.
Sure. But that's the nature of scientific inquiry - I am setting up a test of the hypothesis that, contrary to the claims of ID, Information (of a specific type, which we are currently trying to operationalise) can be generated without Intelligent Design. Obviously I will do my best to find a context that supports my hypothesis. But I may fail. That's the downside (but also the glory) of science. On the other hand, if I succeed, then the ID argument fails.
And you proposed to empower this ignominious conclusion by designing a fully non-empirical simulation, separated by orders of magnitude from what actually happens in reality. Hello?
No, I am not proposing to "empower a conclusion". I am proposing to test a hypothesis. The conclusion will depend on the results of that test. If I fail, I will not be able to conclude that I have succeeded, obviously :) In other words I plan to conclude something - the conclusion is not foregone. That wouldn't be science, and isn't what I propose. As for your second point: the study involves empirical hypothesis testing. It could probably be tested non-empirically, i.e. purely mathematically, from first principles, I don't know. But increasingly, hypotheses that depend on non-linear interactions between multiple variables actually have to be tested empirically by running iterative computations (as in finding out the intricacies of the Mandelbrot set, for instance), and, when it comes to hypotheses that include stochastic processes, by running models. That these empirical studies are run on computers doesn't make them not empirical. And I am not actually proposing a "simulation" at all - although my model is inspired by theories about abiogenesis. It is not intended to demonstrate that life formed spontaneously from chemical reactions in the early earth. It is intended to test the hypothesis that Information of the kind considered to be the signature of Life can be generated without Intelligent Design (i.e. from Chance and Necessity alone).
You see Lizzie, at this point it no longer matters what I want you to show, it’s what you want to show. If I were you, I would choose the size of my bite wisely. And given that you will not be going for the only goalpost that actually reflects reality, I would suggest more than a teaspoon of humility in announcing the stunning breadth of your conclusions.
Well, humility is always good advice :) But it certainly matters what you want me to show, because my claim was that I believed that I could demonstrate to be possible something that you believe to be impossible. I originally understood that your claim was that Information of the kind that is seen in living things could not be generated by Darwinian processes. I think it can, and I offered to demonstrate that it could. Sure it was a bit lacking in humility, I guess, but it's not as though I was unprepared to put my efforts where my mouth is and risk hubris :) I am.
Now before I move on to other matters, I would like to clear up how we got here. To save space I will only post the relevant text. You were talking to BA77 about genetic information and said: I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of. To which I butted in and replied: Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room. And then you stated: Well, tell me what definition of information you are using, and I’ll see if I can demonstrate that it can And in my return: You are going to demonstrate how neo-darwinism brought information into existence in the first place??? Please feel free to use whatever definition of information you like. If that definition is meaningless, then we’ll surely both know it.
Thank you very much for this - I was unable to find the original conversation, unfortunately. This lets us back up: My response to ba77 was "I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome." The reason I said that it "demonstrably" could is that, on any definition of information that I am/was aware of, a new variant of an existing allele "tells" the cell to do something slightly different to what the old allele did. So we have a "message" (the new DNA sequence) and a "receiver" (the cell processes) and a "meaning" (a different instruction, which could be to make a slightly different protein, or to make that protein under a slightly different set of contingencies, or to change the ratios of two different protein variants), i.e. it has a phenotypic effect. And we know that new alleles happen from time to time, and we know quite a lot about the various mechanisms by which those variants are generated. Moreover, if that allele turns out to improve the organism's chances of breeding successfully, however slightly, that information is not only meaningful (makes a difference to the phenotype) but useful from the point of view of the population through which that allele starts to propagate, as it increases the probability that the population will continue to thrive in that environment.

However, you then raised a different (and highly important) claim: "Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room." Now, I may have misread what you meant by "in the first place" - you may simply have meant "the source of the new allele", which, indeed, is not explained by "Darwinism" (Darwin didn't even know about genetics), nor by "neo-Darwinism" as I understand (or misunderstand) the term as a modification on Darwin's original concept of natural selection, but by what we now know about the mechanisms of DNA replication processes and the generation of variance. If so, it is true that neo-Darwinism doesn't account for it, but not true that we can't. However, at the time I assumed that you did not mean this, but meant: how could the first Information-bearing self-replicator come about, if Darwinian processes only account (as they do) for the selection of useful Information once self-replicators-with-variance have appeared? And I assumed you meant this on a theoretical level, as posed by Dembski: how can mere Chance and Necessity generate Information in the first place? And that is what I offered (in good faith) to try and demonstrate - by setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge.
So now moving on… There is an underlying issue within this conversation that I have tried and failed to get you to realize. In explaining it again, I must note that I somewhat separate myself from several proponents on this forum, so any embarrassment here is my very own. I think that there are many here who disagree with me at some point or another, and that is perfectly fine. I make absolutely no comment about the validity of their perceptions of the evidence; it’s just that I have my own.
Cool. Groupthink is boring :)
I’d first like to remind you that I am not making an argument about CSI, or Shannon Information, or Kolmogorov complexity, or any of it. Nor am I suggesting that these things are not interesting, important, and play a role in the issues at hand. But, I am making a purely semiotic case for the existence of information.
OK. In that case I do seem to have misunderstood you. I apologise.
In order to try and focus the discussion on the point I am trying to convey to you, I would like to ask you for a moment of your imagination. (I have done this before on UD, so readers in the second matinee can fall asleep at will). Lizzie, imagine for a moment you are the sole person on a lifeless planet in a distant galaxy. You stand there in your spacesuit gazing out across the inanimate nothingness. Then as you go about your mission, your experience and training brings something of a striking thought to mind. It occurs to you that outside your spacesuit, there is absolutely nothing that means anything at all to anything else. Your spacesuit represents a monumental divide in observed semiotic reality. Outside your suit there is no information, there are no symbols and no meaning of any kind. The rocks stacked upon themselves in the outcroppings mean absolutely nothing to the other rocks, nor to the molecules in the atmosphere or anything else. Yet, inside your suit it is a completely different matter; signals and symbols, and information, and meaning abound in all directions. My own suggestion is that there are three domains in which these things exist. First there is your demonstrated ability as a sentient being to create symbols and assign meaning at will. Then there are also the systems within your body that are constantly creating and utilizing transient information by means of intercellular signals and second messengers, etc. These systems are created by the configuration of the specialized constituent parts, discretely created, each one brought into existence by the third domain of semiotic reality. That third domain being the recorded information in your genome which is replete with semiotic content – sequenced patterns of discrete chemical symbols.
I'm with you up to this point, I think. Beautifully put.
Now, I notice that you choke on the word “symbol”. My message to you is that it doesn’t matter what we call it; it is what it is, a relational mapping of two discrete objects/things. One thing represents another thing, but is separate from it. And if that symbol should reach a receiver, then the mapping between the symbol and the object being symbolized becomes realized by that receiver.
So far, so good-ish.
You seem to prefer calling a symbol a “representation” instead, which is fine by me, except that it doesn’t capture the reality. The shadow of a tree could be construed as a representation of a tree, but the word “tree” is a symbolic representation. They are distinctly different. The shadow contains no information and it doesn’t exist in order to do so. The word “tree” is a symbol (matter/energy arranged to contain information) which exists specifically to do so.
Yes, I understand that. I don't have a problem with the word "symbol" per se, precisely because of the distinction you make. My problem is in applying the word "symbol" to something that is not (IMO) self-evidently a symbol-user. I don't think that a ribosome is self-evidently a symbol-user!
The point I would like you to understand, is that recorded information cannot exist without symbols (symbolic representations).
hmmm. Well, I would be happy to accept this as definitional, but then I'd probably want to argue a bit more about what a symbol is. However, let's put that to one side for now.
So revisiting your lifeless planet, there are no symbols and therefore no information outside your suit, but inside the suit it is the core reality that must be addressed.
I am more than happy to agree that there are symbols within the suit but not outside, and if symbols are the prerequisite for information, then the only information is inside the suit. Cool.
I know that you are stalwart against anthro-humanizing the observations, and inputting into them something that is not there. Yet what is there has been repeatedly validated.
In what sense and where? (Not disputing it, but just wanting to get clear what you are saying.)
And it must be understood, the human capacities which you wish to not conflate with the observations – those that we are told did not arise for billions of years after the origin of Life – show every sign of having been in existence from the very start.
There's a sense in which I agree with you, but probably not a sense you would approve :)
As I said upthread, humans did not invent symbolic representations or recorded information; we found that it already existed.
An important point, and one that needs to be unpacked before we can proceed. Good.
Given the length of this post already, I am going to cut to the chase. You want goalposts that don’t move? You want to design a non-empirical simulation to send ID packing? My only hope is to try and bring you back to reality. Here is my list (probably non-comprehensive). We can argue over these points if you wish, but I am confident that each can be fully supported. And as I said from the very start, you can develop your own operational definition. You asking me to do it for you only illuminates your desire to compete; it has nothing to do with the search for truth.
Oh, there you are quite wrong, although I fully accept that the communication fault may be on my side. Firstly, the reason I want an operational definition has nothing to do with "competition" and everything to do with making sure we are talking about the same thing (not apples on one side and oranges on the other) when you say you think X is the case and I think it is not. That's not competitive, though it may be dialectical; that's no problem though, science is intrinsically dialectical (which is why Popper proposed the criterion of falsification). Secondly, the reason I want you to propose, or at least approve, the operational definition is not either laziness or competitiveness on my part, but merely an essential part of ensuring that I am actually addressing the postulate you are putting forward. Thirdly, and this is simply personal: I am a notoriously uncompetitive person, to a degree that can easily be personally problematic! I am simply not interested in "winning" for the sake of winning - anything. I'd far rather lose an argument and be enlightened than win it and remain in error. I can't prove this to you of course, but it is true.
1. The origin of recorded information has never been associated with anything but the living kingdom; never from the remaining inanimate world.
Yes, that is probably true, although I am still stuck on this "symbol" thing. On my own understanding of the word, I'd say that all symbol-users are alive. I would not, however, willingly say that all living things are symbol users. This is the part we need to hammer out.
2. The state of an object does not contain information; it is no more than the state of an object. To become recorded information, it requires a mechanism in order to bring that recording into existence outside of the object itself. As I said earlier, a carbon atom has a state which a physicist can demonstrate, but a librarian can demonstrate the information exists as well. They both must be accounted for.
OK. This is important, so I'm going to try to be as articulate as I can: I am certainly happy to stipulate that information only exists when it is "recorded". And I'd like to suggest that "recording" must involve the storage of the information in some form that can be "read" by another object in such a way that that object can change its own state according to the "information" read. If you are happy with this (I don't think it's perfect, but it's not bad) then I'm with you. And in that context, then, I would accept that DNA, for example, contains recorded information, as it can be "read" by another object (which, depending on the level of analysis, we can regard as the cell itself, or a specific ribosome) which then changes its own state (kinetically or morphologically) as a result. And if you want to call this "symbolic" then that is fine. And I would still probably agree that this is largely found in living things, possibly exclusively, but not necessarily necessarily so (the double use of necessarily is not a typo!)
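If it helps to make that stipulation concrete, here is a toy Python sketch of what I mean by "recorded" and "read"; the class names and the symbol-to-action table are entirely invented for illustration:

    # Toy model: a medium holds a recorded sequence, and a reader object
    # changes its own state according to what it reads.
    class Tape:
        def __init__(self, symbols):
            self.symbols = symbols            # the "recording"

    class Reader:
        # A freely chosen mapping from symbol to action (the "protocol"):
        rules = {"A": "add_part", "B": "fold", "C": "halt"}

        def __init__(self):
            self.state = []

        def read(self, tape):
            for s in tape.symbols:
                self.state.append(self.rules[s])   # state contingent on the record

    r = Reader()
    r.read(Tape(["A", "A", "B", "C"]))
    print(r.state)    # ['add_part', 'add_part', 'fold', 'halt']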
3. A rational distinction is made between a) matter, b) information, and c) matter which has been arranged in order to record information.
Indeed.
4. Matter that has been arranged in order to contain information doesn’t exist without symbolic representations. Prove it otherwise.
Well, if we define information as recorded information, and if we define recorded information as symbolic, then this is necessarily true, indeed, circular. So obviously not falsifiable. However, if there is wiggle room between recorded information and symbolic representation, then it is not circular, but then I need to know in what way you are distinguishing recorded information from symbolic representation.
5. From all known sources, symbols and symbolic representations are freely chosen (they have to be in order to operate as symbols). And as a matter of observable fact, when we look into the genome, we find physico-dynamically inert patterns of symbols. That is, the chemical bonds that cause them to exist as they do, do not determine the order in which they exist – and the order in which they exist is where the information is.
OK, so you do seem to agree with me that a key property of a symbol (as opposed to a sign or a template) is that it is arbitrarily assigned to a signifier. And your claim is that the chemical bonds that "cause [the patterns of symbols] to exist as they do, do not determine the order in which they exist". hmmm. I would certainly agree that given one nucleotide, there is no chemical ground for predicting the next. However, I would not agree (if it were what you were saying) that a given sequence (a codon, for instance) is chemically unrelated to the amino acid that it "codes" for. Is that what you are saying? Although I might agree that a different kind of cell (perhaps on another planet) might have a different kind of ribosome that resulted in a different amino acid from the one that would result from a given codon in an earthly cell. So if that is the sense in which the codon is arbitrarily assigned, then I guess I could get behind that, and concede that "symbol" is appropriate. OK, I'll buy it :) (If it's what you mean).
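The sense I can get behind is that the codon table behaves like a lookup table held in the reading machinery. A toy Python illustration (the first table uses three real standard-code assignments; the "alien" table is pure invention):

    # Three real assignments from the standard genetic code:
    earth_table = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly"}
    # An invented alternative, standing in for a hypothetical other ribosome:
    alien_table = {"AUG": "Gly", "UUU": "Met", "GGC": "Phe"}

    message = ["AUG", "UUU", "GGC"]

    print([earth_table[c] for c in message])   # ['Met', 'Phe', 'Gly']
    print([alien_table[c] for c in message])   # ['Gly', 'Met', 'Phe']
    # Same string, different protein: the mapping lives in the reader,
    # not in the chemistry of the string itself.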
6. Recorded information requires a (discrete) suitable medium in order to exist – a medium that allows the required freedom of arrangement.
Agreed.
7. A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode).
What distinction? Or what distinction that matters? (Also I'm uneasy about "digital" here, but maybe it's OK.)
8. The origin of information requires a mechanism to establish the relationship (mapping) between the object and the symbolic representation which is to symbolize it.
OK, well, assuming we are now on the same page regarding the use of "symbol" to describe such things as transcription, then yes. Although of course, that mapping is the kind of thing that evolutionary processes (I would argue) can account for. For example, if, in early life forms, there were several kinds of ribosomes, some resulting in one set of mappings, some in another, and if one kind tended to be more efficient at promoting successful replication than the others, it would tend to become more prevalent, go to "fixation" and be inherited by all its descendants. Or, alternatively, go to fixation by simple drift, and ditto.
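The drift half of that claim is easy to see in a toy simulation. A minimal Python sketch, with an invented population of two selectively neutral "mapping" variants:

    import random

    # Two selectively neutral "mapping" variants in a population of 100:
    N = 100
    pop = ["A"] * 50 + ["B"] * 50

    random.seed(1)
    while len(set(pop)) > 1:
        # Moran step: a random individual reproduces; another is replaced.
        pop[random.randrange(N)] = pop[random.randrange(N)]

    print(pop[0], "went to fixation by drift alone")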
9. Recorded information exists for a purpose, that purpose being manifest as a receiver of the information – that which is to be informed.
Now we are getting philosophical! I'm happy to go there, but will leave it hanging for now :)
You indicate that you can provide evidence that neo-Darwinian processes can assimilate all these points as well as those we’ve already discussed. My hat’s off to you. Your simulation will have nothing to do with chemical reality, and it will end with an unsupported Darwinian assumption (as they always do) but it should be interesting nonetheless. Cheers…
No, I'm not going to attempt to demonstrate that the specific Instantiation of Information in cell biochemistry was brought about by Darwinian processes, because, indeed, it may not have been. All I am proposing to demonstrate is that Information (recorded Information, even symbolic information, as I think we now mutually understand it) can arise from a non-intelligent source. Not that it did in the case of life. And because of that limited objective, chemical reality is irrelevant. However, what is not irrelevant is your very helpful unpacking of the essentials and principles at stake. So I can now reframe my project as: To test the hypothesis that symbolic information can arise from non-intelligent sources, where "symbolic representation" is the recording of information about the state of an object that can be read by another object whose future state[s] are contingent on that information, and "non-intelligent sources" are sources that consist only of Chance and Necessity. If I succeed, I will not have demonstrated that life evolved without input from an Intelligent Designer, but I will, I submit, have demonstrated that we cannot conclude that it must have had input from an ID on the grounds that non-intelligent sources cannot create symbolic representations. Does that make sense? I've responded in some detail, because I think you hit a lot of nails on the head, and I wanted to make sure that I figured out which nails I'm happy with, and which nails are genuine differences between us. I hope this brings us closer to the nub of the issue at issue :) Elizabeth Liddle
F/N 2: Observe a noise/error handling procedure for the case of misloaded tRNAs:
The two major groups of tRNA synthetases, class I and II, seem to minimize impacts of misinserted amino acids in protein sequences by tRNAs that were misloaded by these tRNA synthetases [1,2]. Accurate loading of tRNA acceptor stems with cognate amino acids by tRNA synthetases is a crucial step in protein synthesis, and indeed misacylated (misloaded) tRNAs are frequently edited by tRNA synthetases [3], which sometimes even edit tRNAs at advanced stages in the translational pathway [4]. Both pre- or post-transfer editing occur. These mechanisms are not exclusive and depend on catalytic sites other than the aminoacylation site [5,6]. The complex editing functions of some tRNA synthetases probably originated from multifunctionality of ancient tRNA synthetases, at the origins of the genetic code and the translational machinery [7]. Note that some mutations affecting editing associate with mitochondrial diseases [8].
Error handling methods and editing, even rooted in multifunctionality [!!!!!], are of course yet another level of sophistication in an information system. This thing is getting plainer and plainer. GEM of TKI kairosfocus
F/N: From the April 14 Newsflash thread OP, again: ____________ >> what about the more complex definition in the 2005 Specification paper by Dembski? Namely: define φS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 · φS(T) · P(T|H)] . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):
1 –> 10^120 ~ 2^398
2 –> Following Hartley, we can define Information on a probability metric: I = – log2(p) . . . eqn n2
3 –> So, we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3
Chi = Ip – (398 + K2) . . . eqn n4
4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. >> _______________

Now, the issue for today is that there is a challenge to p, to get to I. The answer to this is direct and simple: while it is theoretically grounded on the above considerations, I is a very familiar entity, one normally estimated more directly from symbol frequency patterns, or from directly observed storage capacity. As we just saw, the estimates on such bases will be CONSERVATIVE. The "how do you get to p" objection is misdirected. Yes, there are limitations, so we make a reasonable estimate, and where possible a conservative one. DNA has four states per symbol, and proteins generally -- this has to be noted because of a certain class of objector who would pounce on the rare exceptions -- have 20 per symbol. There may be some adjustment relative to symbol frequencies. But that is not going to overwhelm a situation where you have hundreds of proteins averaging 300 AA's coded for by D/RNA with three letters per codon, and with supportive regulatory elements. Just as a sampler, let us think of 200 proteins, at 300 AA avg, or information to account for 60,000 AAs, at 3 bases each, with say 10% more for regulation, making for a minimal genome of 200,000 or so 4-state elements. That is definitely in the order observed for simplest life, and it is two orders of magnitude of bits beyond the cosmos-level informational threshold, where each bit doubles the config space. However you may want to adjust and cite limitations, it does not take away the central implication of the functionally specific complex organisation and information in the living cell: it is best explained on design.
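To spell out the sampler arithmetic, a short Python sketch of the same back-of-envelope estimate:

    proteins = 200
    avg_aa   = 300                        # amino acids per protein
    bases    = proteins * avg_aa * 3      # 3 bases per codon
    bases    = int(bases * 1.1)           # ~10% more for regulation

    capacity_bits = bases * 2             # 4-state symbols carry 2 bits each

    print(bases)                  # 198,000: ~200,000 four-state elements
    print(capacity_bits)          # 396,000 bits of carrying capacity
    print(capacity_bits / 1000)   # ~400x the 1,000-bit cosmos-scale threshold used upthread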
GEM of TKI kairosfocus
Dr Liddle: Following up:
if I don’t know how many symbols you could have used, then you have sent me not much more than 1 bit, because while the first bit may surprise me, by the end of the message each subsequent one is reducing my uncertainty that the next will not be a 1 by only a tiny amount. And this goes back to the point I was trying to make to kairosfocus; to know how much information there is in a signal, we have to know something about what other signals are possible.
In short, we need to know about the communication system and its protocols. Immediately, this highlights that a symbolic or modulated communication system is an irreducibly complex, sophisticated entity, which in turn points straight to design as its best explanation. But that is a bit of an aside. More direct to our considerations is that such a communication system has a range of possible legitimate signals, and a protocol of rules that controls how such signals are encoded, modulated, detected, demodulated, decoded, and used. Again, pointing to sophisticated, integrated design.

Going further, we are dealing with a coded information system in the heart of cell based life, using a 4-state digital code based on highly endothermic, complex -- thus, inherently thermodynamically unstable -- chemicals known to be assembled into polymers based on an algorithmic process. All of which points us back to the questions I asked previously on the known sources of algorithms, codes, and assembly lines. Transparently obvious: intelligence, with intent and knowledge and skill. Now, too, to configure such messages, we need things that are inherently highly flexible, i.e. the elements in the strings etc must be highly contingent. It actually turns out that a lot of alternative chemistry could happen to both D/RNA or proteins [or, more properly, their monomers . . . start with just the implications of possible opposite chirality, and how that would destroy folding and/or key-lock fitting, where the other chirality has the same heats of formation as a rule], but the controlled environment of the cell is set up to block that from happening. That is the context in which we see a 4-state digital symbol system, with an assembly line system that uses the mRNA as a tape to guide step by step assembly of proteins, which are based on essentially a 20-state system, with some relatively rare mods. Proteins function in the cell based on AA sequence, folding, agglomeration and activation. All of which are quite remote from the specifying of a particular sequence of AA's. Even the loading of a particular AA to a particular tRNA taxi molecule is a matter of a universal connector, with the actual AA attached being informationally controlled by the setting up of a loading enzyme. Which is in turn the product of the same system. All of this is extremely highly contingent, and would point to the information content estimates we have been using being CONSERVATIVE. In other words the field of chemical possibilities is much larger than we have been considering. But being conservative is good.

Within the ambit of the set-up system, we have a 4-state digital coded info storage subsystem. That gives us a carrying capacity of 2 bits per symbol, some of which may not be used in any given case, as we may have redundancies and symbol frequencies that are not flat-distributed. Not that this makes a material difference. The same extends to proteins, where there are maybe 80 or more possible AAs that could be used, and all but a few of these will be chiral. But, conservative is good. Protein chains are assembled step by step and may be chaperoned to fold right -- the prions [mad cow disease] issue -- so they will function. Conservative, again.
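On the point that non-flat symbol frequencies shave a little off the 2 bits per symbol capacity without making a material difference, a small Python check (the skewed frequencies are invented for illustration):

    import math

    def entropy(freqs):
        # Shannon entropy in bits per symbol
        return -sum(p * math.log2(p) for p in freqs if p > 0)

    flat   = [0.25, 0.25, 0.25, 0.25]    # equiprobable A, C, G, T
    skewed = [0.40, 0.30, 0.20, 0.10]    # invented, non-flat frequencies

    print(entropy(flat))     # 2.0 bits/symbol: the full carrying capacity
    print(entropy(skewed))   # ~1.85 bits/symbol: less, but not materially so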
That all feeds back into the expression: Chi_500 = I*S - 500 The 500 bits takes in the thresholds set by considerations of sufficiently isolating the narrow islands of interest and/or function, so that the search space challenge will swamp out any random walk plus trial and error rooted approach, including impossibly fast ones. The only way out of this is to bias the search, so that the walk has an oracle to pull it in, allowing hill-climbing on a trend. But that is precisely to jigger the case. The evidence of protein folds is that they are deeply isolated islands in sequence space. Codes are likewise, once we get to any complexity worth discussing. And, functional organisation of complex entities on a wiring diagram can be reduced to the same pattern, through devising a structured set of yes/no questions to construct the wiring diagram: the teeth in the saying "a picture is worth a thousand words."

But, all of this has been said in various ways, over and over and over. And the conclusion is increasingly transparently obvious. But then, in an era where to say the obvious is to bare one's throat to those all too eager to slice with the knife, the objectively obvious is often the least subjectively obvious. But, we can all see for ourselves the balance of the case -- and it is noteworthy that just for saying the objectively obvious I am now the subject of a slander blog that is produced by one who has no hesitation to indulge in outing intimidatory behaviour, and in outright false accusations of UD being a nest of perversion, as well as a mouth in need of Sister V's soap cleansing. Worse, in the name of freedom of expression, such misbehaviour is tolerated or even enabled by those who should know better. Can you imagine, I have seen the turnabout accusation that to point out that I have every right to shun such cesspits is to offend those who are there delicately reasoning quite decently amidst the stench and the angry mosquitoes tanking up on rage and fallacious or slanderous talking points? Patent absurdity. Mi ca'an believe it!!!! Anyway, I think we can await the promised simulation. GEM of TKI kairosfocus
Mung:
Elizabeth Liddle @293: Well, let me give a more nuanced answer: I’m trying to get beyond nuanced. :) If we don’t have clear and unambiguous answers we cannot hope to agree.
Absolutely. But clear and unambiguous answers depend on clear and unambiguous questions. We need to formulate the question in such a way that the answer is not, and cannot be, ambiguous. For instance, you say that my statement that "...on that definition, any stochastic process creates information" is false. But you just gave an example of a stochastic process that created information, not a stochastic process that didn't. But let's not get bogged down here: I am simply after a clear definition that captures what we want to capture, and as I hope I have made clear above (and it seems consistent with your own posts): information quantity, in the sense that we want it to mean anything, is a function not only of the signal, but of the expectations of the receiver.

In the absence of knowledge about the expectations of the receiver, we could compute the probability distribution of the characters of the message from the message itself, which is the sense in which a stochastic process can create a measurable amount of information: we can compute -log2(P) where P is the probability of each item as given by the frequency of its occurrence within the message. But that isn't terribly useful, as your example of a series of ones elegantly shows. To get a sensible measure of information we have to compute P from an independent source of information regarding the probability distribution of each item, and as far as the receiver is concerned, that source of information has also to be available. Otherwise the message won't be "about" anything :) So perhaps we are nearing an operational definition of "aboutness", which must reference an additional independent source of information regarding the probability distribution of the characters in the message, under the null hypothesis of a random draw. That enables me to differentiate between a string of Ones that are drawn from a probability distribution in which Ones have a probability of 1 and any other character has a probability of zero, in which case the information content is zero (-log2(1) = 0), and a string of Ones that are drawn from a probability distribution in which Ones and Twos have equal probability, and all other characters have zero, in which case each item in the message will convey 1 bit of information (-log2(.5) = 1). This is why I keep banging on about the importance of establishing the probability distribution of the components of the message under the null hypothesis of "no signal".
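In code, the contrast I am drawing looks like this; a minimal Python sketch:

    import math
    from collections import Counter

    def bits(message, probs):
        # Self-information of a message under a probability model for its symbols
        return sum(-math.log2(probs[s]) for s in message)

    msg = "1" * 100

    # Model 1: probabilities estimated from the message itself
    empirical = {s: c / len(msg) for s, c in Counter(msg).items()}
    print(bits(msg, empirical))    # 0.0 bits: Ones have probability 1

    # Model 2: independent knowledge that 0 and 1 are equiprobable under the null
    prior = {"0": 0.5, "1": 0.5}
    print(bits(msg, prior))        # 100.0 bits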
If I know that the system you are using to send me a message uses two possible symbols, 1, and 0, then you have sent me 100 bits of information (or however many ones there are, I didn’t count).
Well, it wasn’t a sequence of 100 1's, thank God. (In my browser the 1's extend across the page with no line break. So I apologize for that.) But let's say, for the sake of argument, that the pattern I sent did consist of a series of 100 1's. You are now saying that my pattern of strictly a sequence of 1's contains the same amount of Shannon Information as your pattern of 0's and 1's which were completely random. How can that be?
See above (sorry I was a bit inarticulate last night, not booze, just exhaustion). I hope this morning's attempt is clearer. The short version is: How much Shannon information is in your string is not simply a function of the string, but of independent information regarding the probability distribution under the null from which the items in that string were drawn.
At some point in the series, shouldn’t your surprisal have actually been reduced?
Depends on my priors regarding the probability distribution under the null :) If I knew nothing about it, and got a string of ones, in many senses of the word "surprise", my surprise would have been gradually reduced, until I concluded that Ones were all this blooming signal was ever going to produce. On the other hand if I knew that the source of the message normally produced messages that overall contained equal numbers of ones and zeros (ones and zeros equiprobable) then I'd be just as surprised by each successive one as I was by the last, and I'd become increasingly certain that the message was not a random draw from Ones and Zeros. I could even bring out my trusty binomial theorem to compute just how certain I was!
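And the trusty binomial theorem gives a concrete number; a short Python check:

    import math

    p = 0.5 ** 100          # P(100 Ones in a row | fair-coin null)
    print(p)                # ~7.9e-31
    print(-math.log2(p))    # 100 bits of surprisal under that null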
You also said that your example had 100 bits of Shannon Information but seemed to intuitively recognize that my series of 1's had “not very much” information. So I hope you'll understand my confusion at the apparent lack of consistency.
Not lack of consistency but drawing attention to a crucial extra source of information we need if we are going to distinguish signal from noise. This is why I disagree with kairosfocus that we can distinguish signal from noise reliably by looking at the message itself. We can't - we rely on independent information about the probability distributions under the null of noise.
In the same post @93 You wrote: If instead of coin tosses, I sent 1010101010101010101….. You’d start to make some pretty good guesses at the rest of the series, so the amount of new information I’d created would be very small. You are not being consistent.
I'm trying to point out the fact that we need more information than simply the message itself in order to figure out how much information it contains. I'm sorry this was unclear - I do not have an axe to grind about what Information is. I want to make sure we have an operational definition that captures what an IDist would regard as legitimate Information (the kind that is claimed not to be generatable by Chance and Necessity). We are making some progress I think :)
If you had repeated your example of the repeating pattern “10101010…” for a total of 100 characters, would you say that it contained 100 bits of Shannon Information? IOW, you need to explain how a fixed sequence contains the same amount of Shannon Information as a randomly generated sequence.
I hope I have clarified what I think we can all agree on: that the Information content of a message cannot be computed sensibly from the message itself alone, but that we need to also factor in (and find a way of quantifying) the additional information that is required in order to compute it. We can still compute a value without that additional information, by looking at the frequency distributions in the message itself, but it won't mean much. For if we compute it for the 1010101010 example, we can quickly see that the probability of a Zero given a previous One is 1, and the probability of a One, given a previous Zero, is also 1. So the information can be computed as zero (or approaching zero). But given a prior as to the probability distribution under the null it could be an extremely informative message - it could, for instance, represent in binary form a large and important integer. In which case the number of bits transmitted would be 100.
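The two readings of "10101010..." can be put side by side in a short Python sketch:

    import math

    msg = "10" * 50    # "101010..." for 100 characters

    # Reading 1: a first-order model fitted to the message itself.
    # After the first symbol, each next one is fully determined
    # (P(0|1) = P(1|0) = 1), so it contributes -log2(1) = 0 bits.
    bits_markov = 1.0 + sum(-math.log2(1.0) for _ in msg[1:])
    print(bits_markov)    # ~1 bit in total

    # Reading 2: a prior under which each character is an independent
    # fair draw from {0, 1} -- e.g. the binary form of some integer.
    bits_iid = sum(-math.log2(0.5) for _ in msg)
    print(bits_iid)       # 100 bits

Elizabeth Liddle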
Dr Liddle: The P(T|H) term etc get subsumed in the limit, in effect a threshold is set beyond which these will not reasonably go for the solar system or the observed cosmos. In effect you have set every atom to work looking for the edge of a zone of interest, but with a big enough field, the isolation of the zones tells. With Chi_1,000, the whole observed cosmos is unable to scan enough of the space of possibilities to make a difference from no scan. I have already shown how that happens, so I will not repeat myself. That's why there is a threshold imposed. The estimates for actual parameters will REDUCE the scope of search below that. Think about converting the observable cosmos into banana plantations, trains to move the bananas, forests of paper and monkeys typing impossibly fast at keyboards, from the big bang to the heat death; they will not exceed the limit we have set. Nor will any other scenario. As VJT showed, months ago now: we have an upper limit, and we have reason to see that we are going to be within that limit; then we see also how the resources of the solar system or cosmos will be vastly inadequate. GEM of TKI kairosfocus
...to know how much information there is in a signal, we have to know something about what other signals are possible.
There's that word again. ABOUT. Did you read my post @283? Mung
Lizzie, “the confusion has arisen because I was trying to establish what criterion UB wanted to use for information.” We talked about it, and many things were mentioned. Do we want to have a conversation, and then turn around only to remember what you can fit into a convenient definition, pretending for a moment that we can fit the entirety of our knowledge on a postage stamp and then argue over what gets left off? What would Popper say? Operational definitions are not limitless constructs; they are as fallible as any other good idea (and in a variety of contexts). If in this instance they can be used to skirt the strength of an opposing argument, they will be. And we wouldn't want that to happen. So relax…and spare me the pedantics. ;) If I say something illogical and unsupported, you won’t need your rule book to point it out to me. You say that you want a solid definition and you don’t want any shifting of goalposts. Well, exactly which goalpost would you like then? If it’s not too much to ask; is it the one that actually reflects reality? You say that you never promised abiogenesis, and that is technically correct, yet at least in large measure, that is exactly what you propose. Living things are animated by the organization that comes from the rise of information, specifically information that is recorded by means of a sequence of repeating chemical symbols mapped to specific actions taken by the cellular machinery. If you can explain the rise of this symbolically recorded information, then you can most probably explain Life. As for myself, this is the only goalpost that ultimately matters. Also, you are approaching this with a specific end in mind, and you have already stated what that end is. Your intent in this is to be able to say that ID must “think again” because it’s “flawed”. You’ve illuminated this intent several times already. And you proposed to empower this ignominious conclusion by designing a fully non-empirical simulation, separated by orders of magnitude from what actually happens in reality. Hello? You see Lizzie, at this point it no longer matters what I want you to show, it’s what you want to show. If I were you, I would choose the size of my bite wisely. And given that you will not be going for the only goalpost that actually reflects reality, I would suggest more than a teaspoon of humility in announcing the stunning breadth of your conclusions. Now before I move on to other matters, I would like to clear up how we got here. To save space I will only post the relevant text. You were talking to BA77 about genetic information and said:
I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of.
To which I butted in and replied:
Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room.
And then you stated:
Well, tell me what definition of information you are using, and I’ll see if I can demonstrate that it can
And in my return:
You are going to demonstrate how neo-darwinism brought information into existence in the first place??? Please feel free to use whatever definition of information you like. If that definition is meaningless, then we’ll surely both know it.
- - - - - - - - - - - So now moving on… There is an underlying issue within this conversation that I have tried and failed to get you to realize. In explaining it again, I must note that I somewhat separate myself from several proponents on this forum, so any embarrassment here is my very own. I think that there are many here who disagree with me at some point or another, and that is perfectly fine. I make absolutely no comment about the validity of their perceptions of the evidence; it’s just that I have my own.

I’d first like to remind you that I am not making an argument about CSI, or Shannon Information, or Kolmogorov complexity, or any of it. Nor am I suggesting that these things are not interesting, important, and play a role in the issues at hand. But, I am making a purely semiotic case for the existence of information. In order to try and focus the discussion on the point I am trying to convey to you, I would like to ask you for a moment of your imagination. (I have done this before on UD, so readers in the second matinee can fall asleep at will).

Lizzie, imagine for a moment you are the sole person on a lifeless planet in a distant galaxy. You stand there in your spacesuit gazing out across the inanimate nothingness. Then as you go about your mission, your experience and training brings something of a striking thought to mind. It occurs to you that outside your spacesuit, there is absolutely nothing that means anything at all to anything else. Your spacesuit represents a monumental divide in observed semiotic reality. Outside your suit there is no information, there are no symbols and no meaning of any kind. The rocks stacked upon themselves in the outcroppings mean absolutely nothing to the other rocks, nor to the molecules in the atmosphere or anything else. Yet, inside your suit it is a completely different matter; signals and symbols, and information, and meaning abound in all directions. My own suggestion is that there are three domains in which these things exist. First there is your demonstrated ability as a sentient being to create symbols and assign meaning at will. Then there are also the systems within your body that are constantly creating and utilizing transient information by means of intercellular signals and second messengers, etc. These systems are created by the configuration of the specialized constituent parts, discretely created, each one brought into existence by the third domain of semiotic reality. That third domain being the recorded information in your genome which is replete with semiotic content - sequenced patterns of discrete chemical symbols.

Now, I notice that you choke on the word “symbol”. My message to you is that it doesn’t matter what we call it; it is what it is, a relational mapping of two discrete objects/things. One thing represents another thing, but is separate from it. And if that symbol should reach a receiver, then the mapping between the symbol and the object being symbolized becomes realized by that receiver. You seem to prefer calling a symbol a “representation” instead, which is fine by me, except that it doesn’t capture the reality. The shadow of a tree could be construed as a representation of a tree, but the word “tree” is a symbolic representation. They are distinctly different. The shadow contains no information and it doesn’t exist in order to do so. The word “tree” is a symbol (matter/energy arranged to contain information) which exists specifically to do so.
The point I would like you to understand, is that recorded information cannot exist without symbols (symbolic representations). So revisiting your lifeless planet, there are no symbols and therefore no information outside your suit, but inside the suit it is the core reality that must be addressed. I know that you are stalwart against anthro-humanizing the observations, and inputting into them something that is not there. Yet what is there has been repeatedly validated. And it must be understood, the human capacities which you wish to not conflate with the observations - those that we are told did not arise for billions of years after the origin of Life – show every sign of having been in existence from the very start. As I said upthread, humans did not invent symbolic representations or recorded information; we found that it already existed.

Given the length of this post already, I am going to cut to the chase. You want goalposts that don’t move? You want to design a non-empirical simulation to send ID packing? My only hope is to try and bring you back to reality. Here is my list (probably non-comprehensive). We can argue over these points if you wish, but I am confident that each can be fully supported. And as I said from the very start, you can develop your own operational definition. You asking me to do it for you only illuminates your desire to compete; it has nothing to do with the search for truth.

1. The origin of recorded information has never been associated with anything but the living kingdom; never from the remaining inanimate world.
2. The state of an object does not contain information; it is no more than the state of an object. To become recorded information, it requires a mechanism in order to bring that recording into existence outside of the object itself. As I said earlier, a carbon atom has a state which a physicist can demonstrate, but a librarian can demonstrate the information exists as well. They both must be accounted for.
3. A rational distinction is made between a) matter, b) information, and c) matter which has been arranged in order to record information.
4. Matter that has been arranged in order to contain information doesn’t exist without symbolic representations. Prove it otherwise.
5. From all known sources, symbols and symbolic representations are freely chosen (they have to be in order to operate as symbols). And as a matter of observable fact, when we look into the genome, we find physico-dynamically inert patterns of symbols. That is, the chemical bonds that cause them to exist as they do, do not determine the order in which they exist – and the order in which they exist is where the information is.
6. Recorded information requires a (discrete) suitable medium in order to exist – a medium that allows the required freedom of arrangement.
7. A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode).
8. The origin of information requires a mechanism to establish the relationship (mapping) between the object and the symbolic representation which is to symbolize it.
9. Recorded information exists for a purpose, that purpose being manifest as a receiver of the information – that which is to be informed.
- - - - - - - - - - - You indicate that you can provide evidence that neo-Darwinian processes can assimilate all these points as well as those we've already discussed. My hat's off to you. Your simulation will have nothing to do with chemical reality, and it will end with an unsupported Darwinian assumption (as they always do), but it should be interesting nonetheless. Cheers… Upright BiPed
Elizabeth Liddle @293:
Well, let me give a more nuanced answer:
I'm trying to get beyond nuanced. :) If we don't have clear and unambiguous answers we cannot hope to agree.
If I know that the system you are using to send me a message uses two possible symbols, 1 and 0, then you have sent me 100 bits of information (or however many ones there are, I didn't count).
Well, it wasn't a sequence of 100 1's, thank God. (In my browser the 1's extend across the page with no line break, so I apologize for that.) But let's say, for the sake of argument, that the pattern I sent did consist of a series of 100 1's. You are now saying that my pattern, strictly a sequence of 1's, contains the same amount of Shannon Information as your pattern of 0's and 1's, which was completely random. How can that be? At some point in the series, shouldn't your surprisal have actually been reduced? You also said that your example had 100 bits of Shannon Information but seemed to intuitively recognize that my series of 1's had "not very much" information. So I hope you'll understand my confusion at the apparent lack of consistency. In the same post @93 you wrote:
If instead of coin tosses, I sent 1010101010101010101….. You’d start to make some pretty good guesses at the rest of the series, so the amount of new information I’d created would be very small.
You are not being consistent. If you had repeated your example of the repeating pattern "10101010..." for a total of 100 characters, would you say that it contained 100 bits of Shannon Information? IOW, you need to explain how a fixed sequence contains the same amount of Shannon Information as a randomly generated sequence. Mung
Please excuse the length of the post I am about to make. I have been away from the computer while the conversation raged on (and will likely be away for the remainder of the weekend), so I am just catching up to everyone else's word count. :) Upright BiPed
HINT: Q1A: Was the result of the first coin tossed a heads? Regardless of whether the answer is a 0 or a 1, if Lizzie has responded truthfully, the configuration of the first element in the sequence is known. Q2B: Was the result of the second coin toss a tails? Regardless of whether the answer is a 0 or a 1, if Lizzie has responded truthfully, the configuration of the second element in the sequence is known. So then the question becomes, why must Upright BiPed ask 100 questions? p.s. Each answer provides 1 BIT of information. p.p.s. Note that each answer produces a reduction in the uncertainty about something. Mung
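Mung's hint can be made concrete in a few lines of Python: each truthful yes/no answer removes exactly one binary uncertainty, so a 100-toss sequence takes exactly 100 questions. The names here (hidden, ask) are invented for the sketch.

import random

random.seed(1)
hidden = [random.choice("HT") for _ in range(100)]  # Lizzie's recorded tosses

def ask(i, guess):
    """Q: 'Was toss i a <guess>?' A truthful 1/0 reply -- one bit either way."""
    return 1 if hidden[i] == guess else 0

recovered = ["H" if ask(i, "H") else "T" for i in range(100)]
assert recovered == hidden  # fully informed after exactly 100 answers
print("questions asked: 100; bits received: 100")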
Well, let me give a more nuanced answer: If I know that the system you are using to send me a message uses two possible symbols, 1 and 0, then you have sent me 100 bits of information (or however many ones there are, I didn't count). However, if I don't know how many symbols you could have used, then you have sent me not much more than 1 bit, because while the first bit may surprise me, by the end of the message each subsequent one reduces my uncertainty about whether the next will be a 1 by only a tiny amount. And this goes back to the point I was trying to make to kairosfocus; to know how much information there is in a signal, we have to know something about what other signals are possible. If your message was the result of a series of coin-tosses, and I knew that, your message would contain a lot of information (100 bits). If I didn't know that ones and zeros on each go were equiprobable, though, I'd quickly infer that your cat had gone to sleep on your keyboard. So no, I don't think it reduces my claim to absurdity. As long as I know that pattern X is not the only pattern possible, then a replication of pattern X tells me that information has been transferred, whether that pattern is a pattern of all ones or the pattern you gave later. So both qualify as information. How much information depends on whether I have prior knowledge of the probability distribution from which the symbols are drawn, or whether I have to deduce it from the probability distribution observed in the signal. Elizabeth Liddle
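Liddle's distinction can be sketched in Python: the same 100-symbol string carries 100 bits under a known fair-coin source, but almost nothing if the receiver must deduce the distribution from the message itself. The function names and the stand-in strings are invented for the sketch.

from math import log2

def bits_known_model(msg, p=0.5):
    """Total surprisal when each symbol is known to be '1' with probability p."""
    return sum(-log2(p if c == "1" else 1 - p) for c in msg)

def bits_empirical(msg):
    """Total surprisal using frequencies observed in the message itself."""
    p1 = msg.count("1") / len(msg)
    if p1 in (0.0, 1.0):
        return 0.0  # no uncertainty left to reduce
    return bits_known_model(msg, p1)

coin = "1101001011" * 10  # stand-in for 100 fair coin tosses
ones = "1" * 100
print(bits_known_model(coin))  # 100.0
print(bits_known_model(ones))  # 100.0 -- same, given the fair-coin model
print(bits_empirical(ones))    # 0.0   -- no surprisal once p1 is deduced

On the first model, the all-1's string and the coin-toss string are equally informative; on the second, the all-1's string is nearly information-free, which is the apparent inconsistency Mung presses above.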
On the Information Content of a Randomly Generated Sequence (cont.)
Elizabeth Liddle @93:
So if I send you a series of 100 ones and zeros, and I arrange it so that at each position, ones and zeros are equiprobable, then I have sent you 100 bits of information, right? Well, I don't even need natural selection to do that, I can just toss a coin 100 times! And, by an entirely stochastic process, I have sent you 100 bits of information. So on that definition, any stochastic process creates information. Indeed, the more "intelligent" the process, the less information I actually create.
Upright BiPed @202:
The reason I asked you what it was about is because if information is not about anything then it's not information - at best, in the Shannon sense, it's noise. - - - - - This is why I said I don't care what you want to say the information is about, but it must be about something. Your choice.
So take the case where Lizzie tosses a fair coin, which has a "heads" on one side and a "tails" on the other, 100 times and records the sequence, where H stands for heads and T stands for tails. Lizzie then encodes each H as a 1 and each T as a 0. She then transmits the sequence of 1's and 0's to Upright BiPed. IF Upright BiPed understands that a 0 "means" a tails and a 1 "means" a heads, THEN Lizzie has indeed transmitted 100 bits of information ABOUT the sequence of coin tosses which she recorded. So it was not the case that the information was not about anything. Why 100 bits? Say that Upright BiPed is asked to discover (become informed about) the recorded sequence of coin tosses by asking a series of questions to which the response would consist of only YES/NO or TRUE/FALSE (binary = base 2) answers. Say that Upright BiPed and Lizzie had agreed that, by convention, in response to each question posed by Upright BiPed, Lizzie would answer truthfully, sending a "0" to represent NO or FALSE and a "1" to represent YES or TRUE. How many questions, at minimum, would Upright BiPed need to ask in order to become fully informed about the sequence of heads and tails recorded by Lizzie? Lizzie:
So on that definition, any stochastic process creates information.
FALSE. Mung
Elizabeth Liddle @208: My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things – it couldn’t possibly have happened by Accident)... And on the other hand we have: Creatures of Accident: The Rise of the Animal Kingdom Mung
Yes, and in that instance, Mung, not very much information.
Specifically, how much? Isn't the information content of that pattern measurable? You were able to come up with a value of 100 bits of Shannon Information for your example, so I assume you know how to measure the information content of my example.
Yes, and in that instance, Mung, not very much information.
How do you know it's not very much, if you can't measure it? I congratulate you for understanding the argument, but do you not see it as a reductio ad absurdum of your claim?
And if a pattern is transmitted, I suggest that information has been transferred.
And I suggest that it depends upon the pattern along with some other factor or factors. So at the other end of the scale I offer you the following: d;slit 8upoq4ewyt sjhfgoij54ir e;laieu kjfnfdl skjt ljts s/a/.khjtpwoo96p[3q9u6;l2 That's a pattern, right? Can you explain why it qualifies as information? Mung
kairosfocus:
The log reduced form of the Chi metric is not about the formulation of chance hyps, it is about the issue of finding isolated islands of interest in large enough config spaces.
The log reduction (which of course is standard with Shannon information) isn't the point, kairosfocus - the point is that in all versions of Chi that I have seen, e.g. in the UD glossary: χ = –log2[10^120 · φ_S(T) · P(T|H)] you need a value for P(T|H), where P(T|H) "is the probability of being in a given target zone in a search space, on a relevant chance hypothesis". Taking the log doesn't appear to me to obviate that requirement :) What matters, surely, is whether what happened is vanishingly unlikely under the null hypothesis of No Intelligent Designer (or, if you like, by Chance and Necessity alone). So unless we actually calculate the probability under that null, we cannot determine whether it could be expected to happen at least once in the number of events that are possible in the known universe, or whatever alpha you want to use. I'm not querying the alpha value; I'm asking how you calculate the probability of the observed pattern under the null hypothesis. Elizabeth Liddle
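For readers following along, the quoted formula is mechanical once its inputs are supplied; Liddle's objection is to the inputs, not the arithmetic. A sketch in Python, with purely illustrative values assumed for φ_S(T) and P(T|H) (they come from nowhere in this thread):

from math import log2

def chi(phi_s, p_t_given_h):
    """chi = -log2(10^120 * phi_S(T) * P(T|H)); positive => beyond the bound."""
    return -log2(1e120 * phi_s * p_t_given_h)

# Assumed inputs: 10^5 specifications of comparable simplicity, P(T|H) = 2^-500.
print(chi(phi_s=1e5, p_t_given_h=2.0 ** -500))  # about 84.8 bits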
Yes, and in that instance, Mung, not very much information. Elizabeth Liddle
F/N: Dr Liddle: The log reduced form of the Chi metric is not about the formulation of chance hyps, it is about the issue of finding isolated islands of interest in large enough config spaces. For our solar system the number of configs for 500 bits is 48 orders of magnitude more than the number of quantum states for the 10^57 or so atoms involved, and for 1,000 bits we are 10^150 beyond the number of Planck time q-states for the 10^80 or so atoms in the observed cosmos. [There are about 10^30 Planck times in the fastest -- ionic -- chemical reaction times.] The point being, if all the atoms of the observed cosmos -- or for a more realistic limit our solar system -- working flat out under the most favourable possible conditions could not sample an appreciable fraction of the states, the scope of your search just rounded down to an effective zero. UNLESS YOU BELIEVE IN INCREDIBLE LUCK NOT DISTINGUISHABLE FROM MIRACLES. So, if your informational measure is specific and comes in a scope over the thresholds given, the chance hyp is irrelevant: it is not going to exceed a Planck time quantum state search of 10^102 or 10^150 states. So, note, with warranted specificity explicitly invoked: Chi_500 = I*S - 500, bits beyond the solar-system threshold. GEM of TKI kairosfocus
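Kairosfocus's log-reduced form is simple enough to state as code. A minimal sketch, assuming I is an information measure in bits and S is a 1/0 judgment of functional specificity (the example values are invented):

def chi_500(info_bits, specific):
    """Chi_500 = I*S - 500; positive => beyond the solar-system threshold."""
    return info_bits * (1 if specific else 0) - 500

print(chi_500(398, specific=True))    # -102: within the threshold
print(chi_500(1000, specific=True))   #  500: beyond the threshold
print(chi_500(1000, specific=False))  # -500: complex but not judged specific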
Hi Mung: What happens is that if you do a maximisation on the H-formula, you get the peak value when p_i is flat across i, as a result of the math; I think Shannon even plotted a maximum diagram in his original paper, IIRC. That is just an oddity of the mathematics, and it is irrelevant to real-world signals, as real-world codes do not go to zero redundancy, and will not push to have all symbols appearing with the same relative frequency in typical messages. As to noise vs signal characteristics, one key one is the classic eye diagram, where the degree of opening of the eye will mark a clean/dirty signal. GEM of TKI kairosfocus
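The maximisation claim is easy to check numerically: Shannon's H = -sum(p_i * log2 p_i) peaks when the distribution is flat and falls as redundancy grows. A quick Python check (the example distributions are arbitrary):

from math import log2

def H(ps):
    """Average information per symbol, in bits."""
    return -sum(p * log2(p) for p in ps if p > 0)

print(H([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits -- flat 4-symbol source (the peak)
print(H([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits -- skewed, redundant source
print(H([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits -- one symbol, no uncertainty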
And if a pattern is transmitted, I suggest that information has been transferred.
111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 Mung
Mung, re your last point: I think it is a very important point, and, indeed, embodied in the Shannon notion of "reduction of uncertainty". This whole aspect of information is of professional interest to me because I'm interested in the neural mechanisms that encode "surprise".

As for the question of whether the sender can be different from the source: What I was distinguishing was the content of the signal at source from the content of the signal at reception. Degradation can occur between those two points (and does). But, for example, with my Duplo blocks there was a "signal" (the order of the blocks) that had a "source" (the original string) and was duplicated with less than perfect fidelity (signal at source differed from signal at reception), ergo there was signal loss/noise contamination. However, the sender was coterminous with the signal-at-source and the receiver was coterminous with the signal-at-reception. So I can see why kairosfocus rejected it, even though, in some senses of the word "information" (even in Shannon terms), "information" had been transmitted from source to receiver, albeit imperfectly.

When it comes to living things, there are a number of analogs to signal theory that can be applied at different levels. For example, we can regard the DNA as the signal-at-source and the RNA as the receiver. Or we can regard the parent cell as the sender, the DNA as the transmission medium and the daughter cell as the receiver. Or we can regard an unknown Intelligent Designer as the sender, DNA as the signal, transmitted from cell to cell, and the cell mechanisms of reproduction as the receiver. Or we can regard the environment as the sender, differential reproduction as the message, transmitted from one generation to the next, and the next generation as the receiver. In that last instance, the message can even be expressed in words: "the alleles that work best in this environment are the ones you have now". In this sense, the "information" comes from exactly the same place as the "information" that is supposed to be "smuggled into" the genomes via the fitness function in a GA!

So yes, let's go back to Shannon and his concept of "reduction of uncertainty". In a cell, it seems to me, we have a "signal" encoded in the cell's DNA that is transmitted to the daughter cells (I say cells, plural, because cells replicate by division, unlike most multicellular animals). However, more than merely the cell's DNA is transmitted; what is also transmitted (at least in a multicellular organism) is the state of the parent cell. And the state of the daughter cells may well change from the state they inherit, in which case they pass on that additional information to their daughter cells, and so on. This is why I am very wary of focussing on DNA as the coded "message". The really important bit of coding is the updates. Not only that, but the cell also needs to respond to signals from other cells, as well as from the external world, in order to fulfill its functions. So it is far from straightforward to map signal theory on to the activities of living cells, and therefore to account for all the information that is being transmitted at any given time.

However, what I do think is that any definition of Information, to be useful, has to involve the concept of transmission. Transmission is what enables us to consider "specification", and is why, above, I pointed out that we cannot simply separate signal from noise without knowing something about the signal.
And so, I would argue, any system in which a pattern is consistently duplicated involves the transmission of information. We can have transmission without duplication (or at least the duplication can be in a very different modality), but I don't see that we can have duplication without transmission. And if a pattern is transmitted, I suggest that information has been transferred. Elizabeth Liddle
On the Meaning of Shannon Information
Hopefully I'm not beating a dead horse here, but I'm not sure this question was ever resolved. In order for something to qualify as information in the Shannon sense, it must have some surprisal value. If there is no surprisal value then it is not information in the Shannon sense of the term. But for there to be a surprisal value there must be some expectation. The receiver would have to be surprised about something. We can also phrase this in terms of uncertainty and the reduction in uncertainty upon receipt of some amount of information. Does it follow from the above observations that information, to qualify as Shannon Information, must be about something and must reduce the uncertainty about something? Can the above thoughts be expanded upon and/or made more clear? Am I conflating the measurement with what is being measured?
A fundamental, but a somehow forgotten fact, is that information is always information about something. - The Mathematical Theory of Information
Mung
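As a concrete anchor for "surprisal": it is just -log2 of the probability the receiver assigned to the event, so a certainty informs not at all and a rare event informs a lot. A short Python illustration (example probabilities arbitrary):

from math import log2

def surprisal(p):
    """Bits delivered by an event of probability p."""
    return -log2(p)

print(surprisal(0.5))     # 1.0 bit  -- a fair coin toss
print(surprisal(1 / 64))  # 6.0 bits -- a rare event surprises more
print(surprisal(1.0))     # 0.0 bits -- a certainty carries no information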
(sorry to be answering in teaspoonfuls)
I consider myself to be a micro-blogger. :) One point per post is about all I can handle. Just ask BA77, lol. Mung
But it probably makes sense even if we postulate non-intelligent senders and receivers (e.g. a cell and its progeny).
At first I thought I agreed with that statement, but on second thought, lol. Can a sender be different from the source? Let's remove the possible equivocation. By sender do you mean information source or transmitter? By receiver do you mean receiver or destination? Please see Fig. 1: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
I do think the fidelity of the transmission is an important aspect of the concept.
Absolutely. If it cannot be corrupted, can it be information? [An interesting theological question!] Mung
Mung: (sorry to be answering in teaspoonfuls) Re Ruby: you are probably right, but I am doing a Java course right now!
iirc, the original “challenge” said nothing about CSI.
tbh, I can't actually remember the original challenge (it was in a different thread), but what I intended when I made the claim was to demonstrate that whatever kind of information UDists say can't be generated by "Darwinian processes" can be :) However, we then got on to issues of how Darwinian processes get started in the first place, hence my current formulation. As for CSI, I am assuming that something that counts as CSI is the relevant kind of information.
1. Clarify whether Information will meet the challenge, or whether it needs to be Complex Specified Information.
That would be a good start.
2. Don’t we need to get Information first, before we can get to Complex Specified Information? If you can’t generate Information you sure as heck can’t generate CSI, so why not start with Information.
I would claim that I can already do that (indeed my Duplo Chemistry demonstrated that, I would contend - faithful transmission from one generation to the next).
3. It was your claim, you should get to choose (imo). Baby steps.
Well, I guess.
I fail to see how anyone can object. For Darwinism just does propose to get to CSI, but in little steps, not big ones.
Yes. Re 276:
On the proposed simulation
My intention is to circumvent both those objections, firstly by providing no fitness function, relying instead on the intrinsic "fitness function" embodied in any self-building-replicator (i.e. a self-replicator that sees to its own self-replication): namely, the more efficiently it produces offspring, the more prevalently the traits it passes to those offspring will be represented in the next generation.
I see some potential problems you may face. 1. Will you have a fixed population size, or have you decided?
Yes, I have decided, and no, there will not be a fixed population size, and I'm not even starting with any self-replicators at all, just a sea of materials from which they may form. How many self-replicators emerge will depend on how the vMonomers combine, and the population size will not be constrained in any way.
2. There will be no intrinsic fitness function, because you won’t actually have any self-building-replicators.
Well, my challenge is to set up my virtual world so that they emerge from the conditions in that world (the chemistry and the physics).
3. If and when you get one, how will you decide how efficient it is without a fitness function?
I won't decide. Fitness will be an intrinsic property in the sense that it is an intrinsic property of living self-replicators. Individuals that self-replicate better are fitter than those that self-replicate less efficiently. Natural selection is differential reproduction. I expect my self-replicators to self-replicate with differential efficiency.
No particular need to reply, just some things to think about.
Yes, indeed :) Elizabeth Liddle
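A toy Python illustration of that last point - natural selection as differential reproduction, with no fitness function scoring anyone. The two replicator types and their copy rates are invented for the sketch, and (unlike Liddle's proposal, which leaves population size unconstrained) the run is capped just to keep the toy small:

import random

random.seed(42)
# The per-tick copy chance is a property of the type itself,
# not a score handed down by a programmer-as-selector.
COPY_CHANCE = {"sluggish": 0.05, "brisk": 0.10}
population = ["sluggish"] * 50 + ["brisk"] * 50

for tick in range(60):
    offspring = [r for r in population if random.random() < COPY_CHANCE[r]]
    population.extend(offspring)
    if len(population) > 200:  # resource cap, for the toy only
        population = random.sample(population, 200)

share = population.count("brisk") / len(population)
print(f"after 60 ticks, 'brisk' replicators make up {share:.0%} of the population")

No external judge compares the two types; "brisk" comes to dominate simply because it copies itself more often, which is the sense of "intrinsic fitness" at issue.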
Mung:
Hi Lizzie, Let me just talk, hopefully briefly. On the one hand I think perhaps my attempts to contribute have actually hindered the debate. I think you probably feel pulled in different directions and that you’re not really getting a coherent message from us.
Well, things are a little divergent right now, but that is often the dark before dawn :)
So in one sense I feel I should shut up and let you and Upright BiPed work things out. It was my intent to see if you two could come to an agreement on the challenge to be met and not introduce my own qualifications. But on the other hand I find this all so intriguing. And I think it could be fun to know the results of your experiment just for the sake of seeing what happens and then debating the meaning, if any, of the results.
Me too :)
So I don’t see myself bowing out. But I will try to make myself clear about whether I am being critical of your project or just talking about concepts and ideas. If I am talking about your virtual chemical world I’ll try to make it clear.
Well there are a lot of strands to this issue, and sorting them out is part of the process of solving the problems. I always go on about how the essence of problem-solving is a good problem statement :) So if I seem as though I am niggling, it's not evasiveness, just terminal commitment to nailing down stray loose concepts before proceeding.
My suggestion is that first and foremost you talk to UPB and try to understand what the goal of the project is and whether step by step you are even addressing the issue raised. I think you were on the right path when you were talking about sender and receiver, but that's just my opinion. Does it make sense to talk about information apart from communication?
Personally, I don't think so. But it probably makes sense even if we postulate non-intelligent senders and receivers (e.g. a cell and its progeny). I do think the fidelity of the transmission is an important aspect of the concept.
Best Wishes
reciprocated :) Elizabeth Liddle
Mung:
Also, naive question: what does F/N stand for? I’ve been wondering!
Footnote. ;) But I think it’s great that you can ask the question. Says good things about you.
Thanks for the information and the kind remarks *blush* Oddly enough someone said the same thing to me yesterday about some questions I'd asked at a meeting :) I've never minded asking silly questions, and sometimes I find that I'm not the only one who doesn't know the answer! Not always, though. Still, I don't mind looking silly if I get the information in the end, even if I'm last to know :) Curiosity can be a powerful and under-rated drive :) Elizabeth Liddle
On the Information Content of a Randomly Generated Sequence
kairosfocus @260:
I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals.
I think you are being redundant in the use of the term "meaningful signal." What would a meaningless signal look like? Can there be a sign that is not about anything at all?
...to then take this anomaly of the metric and use it to pretend that a random bit or symbol string more generally is thus an instance of real meaningful information, is to commit an equivocation and to misunderstand why Shannon focused on the weighted average H-metric.
I think we're in agreement here. I think this is what I have been trying to say for some time. At first my objection was intuitive, but now I think I am beginning to have real understanding.
To then take this and try to infer that a random bit string is informational in any meaningful sense, is clearly a basic error of snipping out of context and distorting, often driven by misunderstanding.
Once again let me quote MacKay: Shannon’s analysis of the ‘amount of information’ in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning. It appears to me that we are in agreement. So once again I raise the question. Does a randomly generated sequence contain maximal Shannon Information? Donald Johnson said it does. I said it didn't. You seemed to side with Johnson. I am thinking I am more right now than then. But have you changed your mind? Are you now saying, that to make that claim requires an equivocation? If so, I agree. In a follow-up post I'll address what I think about how random sequences obtain their Shannon Information content. Mung
On the proposed simulation
My intention is to circumvent both those objections, firstly by providing no fitness function, relying instead on the intrinsic "fitness function" embodied in any self-building-replicator (i.e. a self-replicator that sees to its own self-replication): namely, the more efficiently it produces offspring, the more prevalently the traits it passes to those offspring will be represented in the next generation.
I see some potential problems you may face.
1. Will you have a fixed population size, or have you decided?
2. There will be no intrinsic fitness function, because you won't actually have any self-building-replicators.
3. If and when you get one, how will you decide how efficient it is without a fitness function?
No particular need to reply, just some things to think about. Mung
Information, or Complex Specified Information (CSI)
iirc, the original "challenge" said nothing about CSI.
1. Clarify whether Information will meet the challenge, or whether it needs to be Complex Specified Information.
2. Don't we need to get Information first, before we can get to Complex Specified Information? If you can't generate Information you sure as heck can't generate CSI, so why not start with Information.
3. It was your claim, you should get to choose (imo). Baby steps.
I fail to see how anyone can object. For Darwinism just does propose to get to CSI, but in little steps, not big ones. Mung
F/N: Use Ruby! http://www.ruby-lang.org/en/ ;) It is SO much easier to program in Ruby than in Java. And it is free. Free Open Source Interpreted Object Oriented Dynamic Mung
Hi Lizzie, Let me just talk, hopefully briefly. On the one hand I think perhaps my attempts to contribute have actually hindered the debate. I think you probably feel pulled in different directions and that you're not really getting a coherent message from us. So in one sense I feel I should shut up and let you and Upright BiPed work things out. It was my intent to see if you two could come to an agreement on the challenge to be met and not introduce my own qualifications. But on the other hand I find this all so intriguing. And I think it could be fun to know the results of your experiment just for the sake of seeing what happens and then debating the meaning, if any, of the results. So I don't see myself bowing out. But I will try to make myself clear about whether I am being critical of your project or just talking about concepts and ideas. If I am talking about your virtual chemical world I'll try to make it clear. My suggestion is that first and foremost you talk to UPB and try to understand what the goal of the project is and whether step by step you are even addressing the issue raised. I think you were on the right path when you were talking about sender and receiver, but that's just my opinion. Does it make sense to talk about information apart from communication? Best Wishes Mung
Also, naive question: what does F/N stand for? I’ve been wondering! Footnote. ;) But I think it's great that you can ask the question. Says good things about you. Mung
kairosfocus:
F/N 3: Perhaps, I need to remind us about where the thought on these things had already reached by the 1970?s: ____________ Wicken, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. >>
The above can't be of great help to us, unfortunately, given the part I have bolded, as that would confound the conclusion with the premise! We need an operational definition of the properties of my output that is independent of the concept we want to test.
Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> ______________ I think the distinctions being made here are fundamental and need to be brought on board with further considerations.
Well, the second is more useful than the first, and I agree the distinction is important. So we need a clear operational version of "chi". The problem with all the formulations I have yet seen is that they beg the question of how to formulate the chance hypothesis. This seems to me to be crucial.
Elizabeth Liddle
F/N: I am sorry, but it is not a blunder on my part to point out the significance of being in a zone where you have a rising fitness function and an algorithm that knows what to do with it. In short, the key functional complexity has already been built in at that point. You are simply making explicit what is implicit in the inputs [including what is built into the algorithm itself and the code for it]. The key issue that design theory highlights is the need to get to those islands of function. Hill climbing within an island on built-in complex information does not solve that problem.
I'm sorry, but I don't see how this relates to my proposal. Can you explain, with specific reference to the items in my proposal? Also, naive question: what does F/N stand for? I've been wondering! Elizabeth Liddle
kairosfocus:
Dr Liddle: Please take this as a: WARNING, something is seriously amiss at the outset . . . On looking at your just above, it seems to me that the basic problem is that you are going to equivocate between random strings etc that can in effect catalyse copies and the sort of specifically meaningful or functional information that the CSI/FSCI concept addresses.
Oh dear. I'm trying to figure out how to say this in a completely unambiguous manner: I not only do not intend to equivocate about anything, I specifically want to operationalise the definitions we are using before I start, so that no equivocation is possible! The reason I haven't started yet, and am still banging on about definitions, is precisely so that there is no room for equivocation by anyone, least of all me! tbh, my own view - hunch, at least - is that the UD approach to information is fatally flawed, and the reason it is so difficult to get an operational definition (one that can be applied to open-ended systems, for instance) is that there are intrinsic equivocations within the concept (between intention and intelligence, for instance). But I am willing to be convinced otherwise, if we can hammer out a clear, unequivocal definition that can be applied to the kind of project that I have proposed, namely to start with no more than Chance and Necessity and create some Improbable quantity of information.
There is no issue that random unconstrained strings can be constructed, or even that a copying system or templating can replicate such, perhaps even with variation.
That's fine, I'm glad we agree on that.
And, in the case where strings are pre-programmed through nicely co-ordinated patterns of what will come together and what will not, the organising information was preloaded.
If you count the basic laws of physics and chemistry as "pre-programmed" information, then why look to life as evidence for the hand of a designer? Why not simply say: the laws of Necessity must have been designed? More to the point, if Necessity itself is Designed, then we cannot infer Design by ruling out Necessity. It seems to me that what you have just said undermines the entire UD concept.
Observe please, as has now been repeatedly noted -- and I missed if you ever responded to this -- the COOH-NH2 bond string for proteins is a standard click-together, and so the string of AAs depends on being informed through the mRNA and ribosome to form the -- deeply isolated in config space -- sequences that fold and function. Linked to that, the AAs are attached to tRNAs through the COOH end, to a standard CCA end, i.e. chemically any AA could attach to any tRNA; what controls this is that the loading enzymes match the specific tRNA and lock in the right AA, informationally, based on a structured key-lock fit. In turn that enzyme forms through the same process [chicken and egg], and is in a functionally isolated fold-and-function island.
I'm not disputing this - I'm not sure in what sense you want me to respond to it.
Going on, RNAs and DNA similarly have a sugar-phosphate backbone that is a standard click-together. The information is in the sequencing, and is expressed by a key-lock fit on the side chain so to speak, similar to the key-lock fit of a Yale-type lock, and of course it is generally accepted and understood that this is done using a 4-state digital info storage system as expressed e.g. in the genetic code and its dialects, with provisions for regulatory codes also. But all of that is distractive from and misdirected relative to the key issue: finding strings etc from specific functionally or meaningfully organised zones in wide config spaces. Cf my thought exercise on the spontaneous assembly of a functional microjet from parts in a vat, which of course bears more than a passing resemblance to your proposed model.
I don't think any of the above bears more than a passing resemblance to my proposed model, except insofar as what I hope will emerge is a population of systems that code for their own replication, which can be fairly easily quantified by evaluating how like their parent each pair of daughters is. It certainly won't do it as complicatedly as a modern cell. But I do propose more than the simple self-replication of strings, because I am now inserting an additional requirement - the content of the strings must contribute to the efficiency with which they are self-replicated (which was not true of my Duplo model). In other words, I am not proposing a "mould" or "stamp" system, in which a specified pattern is replicated because it is stamped out by the pattern, but a system in which the pattern itself specifies the events that must happen in order to result in the faithful self-reproduction of the whole.
Let's just say, there is a reason why something like an aircraft, or even the instruments on its dashboard so to speak, are not designed that way. Notice, work is defined in terms of the product of applied force and distance along the line of application. It could be applied broadly and twisted into all sorts of rhetorical puzzles, but that is kept out by a consideration of context: impartation of orderly as opposed to disorderly motion. And, when we see the work to unweave diffusion by clumping and then organising towards a functional whole, we see that this work has to be controlled informationally if it is to credibly succeed. That is why designers plan their work, and why a design is a plan. It informs organising work to effect the plan. Dembski: . . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) Similarly, it is NOT a general consensus that GA's produce novel meaningful information out of the thin air of success-rewarded chance variation. In fact they are set up in carefully designed islands of existing function, and they depend on hill-climbing algorithms that are just as carefully designed, exploiting metrics that are designed, and relying on underlying models that can interpolate to essentially any degree of precision. Such models are making implicit information explicit; they are not creating new function where none existed before, out of the thin air of chance and mechanical necessity without intelligent direction and control. Just think about how a GA knows how to stop. I think a fresh start on a sounder footing is indicated. GEM of TKI
As I said, I am not proposing a GA. It is precisely in order to start on a fresher and sounder footing that my proposal is what it is. There will be no fitness function. There will be no initial population of breeding individuals. There will be a chemistry and a physics, representing both Chance and Necessity. However, if that last is what constitutes the "design" I have inserted within my system, then I suggest that UD moves away from the argument that ID can be inferred from living systems, and towards the argument that ID can be inferred from the physics and chemistry that make the emergence of living systems possible (I guess that would be the "fine tuning" argument, and would put you in the same camp as many "theistic evolutionists" :)) Elizabeth Liddle
F/N 3: Perhaps, I need to remind us about where the thought on these things had already reached by the 1970's: ____________ Wicken, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. >> Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> ______________ I think the distinctions being made here are fundamental and need to be brought on board with further considerations. kairosfocus
F/N: I am sorry, but it is not a blunder on my part to point out the significance of being in a zone where you have a rising fitness function and an algorithm that knows what to do with it. In short, the key functional complexity has already been built in at that point. You are simply making explicit what is implicit in the inputs [including what is built into the algorithm itself and the code for it]. The key issue that design theory highlights is the need to get to those islands of function. Hill climbing within an island on built-in complex information does not solve that problem. kairosfocus
Dr Liddle: Please take this as a: WARNING, something is seriously amiss at the outset . . . On looking at your just above, it seems to me that the basic problem is that you are going to equivocate between random strings etc that can in effect catalyse copies and the sort of specifically meaningful or functional information that the CSI/FSCI concept addresses. There is no issue that random unconstrained strings can be constructed, or even that a copying system or templating can replicate such, perhaps even with variation. And, in the case where strings are pre-programmed through nicely co-ordinated patterns of what will come together and what will not, the organising information was preloaded.

Observe please, as has now been repeatedly noted -- and I missed if you ever responded to this -- the COOH-NH2 bond string for proteins is a standard click-together, and so the string of AAs depends on being informed through the mRNA and ribosome to form the -- deeply isolated in config space -- sequences that fold and function. Linked to that, the AAs are attached to tRNAs through the COOH end, to a standard CCA end, i.e. chemically any AA could attach to any tRNA; what controls this is that the loading enzymes match the specific tRNA and lock in the right AA, informationally, based on a structured key-lock fit. In turn that enzyme forms through the same process [chicken and egg], and is in a functionally isolated fold-and-function island.

Going on, RNAs and DNA similarly have a sugar-phosphate backbone that is a standard click-together. The information is in the sequencing, and is expressed by a key-lock fit on the side chain so to speak, similar to the key-lock fit of a Yale-type lock, and of course it is generally accepted and understood that this is done using a 4-state digital info storage system as expressed e.g. in the genetic code and its dialects, with provisions for regulatory codes also. But all of that is distractive from and misdirected relative to the key issue: finding strings etc from specific functionally or meaningfully organised zones in wide config spaces. Cf my thought exercise on the spontaneous assembly of a functional microjet from parts in a vat, which of course bears more than a passing resemblance to your proposed model. Let's just say, there is a reason why something like an aircraft, or even the instruments on its dashboard so to speak, are not designed that way.

Notice, work is defined in terms of the product of applied force and distance along the line of application. It could be applied broadly and twisted into all sorts of rhetorical puzzles, but that is kept out by a consideration of context: impartation of orderly as opposed to disorderly motion. And, when we see the work to unweave diffusion by clumping and then organising towards a functional whole, we see that this work has to be controlled informationally if it is to credibly succeed. That is why designers plan their work, and why a design is a plan. It informs organising work to effect the plan. Dembski:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
Similarly, it is NOT a general consensus that GA's produce novel meaningful information out of the thin air of success-rewarded chance variation. In fact they are set up in carefully designed islands of existing function, and they depend on hill-climbing algorithms that are just as carefully designed, exploiting metrics that are designed, and relying on underlying models that can interpolate to essentially any degree of precision. Such models are making implicit information explicit; they are not creating new function where none existed before, out of the thin air of chance and mechanical necessity without intelligent direction and control. Just think about how a GA knows how to stop. I think a fresh start on a sounder footing is indicated. GEM of TKI kairosfocus
Kairosfocus:
F/N: Dr Liddle, please beware of beginning your work within or in near proximity to a target zone based on having done the targetting work off-stage. That is in fact the subtle fallacy -- and point of injection of intelligently developed active information -- in all GAs and similar algorithms that in effect use the idea of a space that tells you warmer/colder, directly or indirectly (e.g. a nice smoothly varying fitness metric that points you conveniently uphill, ignoring the evidence of vast seas of non-functional configs). At least, when they are presented as exemplars of chance plus necessity giving rise to functional, organised complexity without intelligent direction.
Firstly, I think this is in itself a fallacy: The idea that information is somehow "smuggled into" a GA via the fitness function arises from a mistake about the levels at which we are evaluating information. Yes, of course, in a GA, the fitness function is rich in information, and yes, in a GA, that fitness function is, clearly "intelligently designed". But the fitness function does not, explicitly does not, contain any information as to how fitness is to be achieved. It is that information (and very useful it can be too) that is created within the GA. Secondly, as I've explained to Mung, what I am proposing is not a GA, and there will be no "fitness function". The other information that is provided by the designer of a GA, in addition to the fitness function, is the information needed to replicate the individuals - they start off with a population of breeding individuals. I am providing no such information. All I am providing is a chemistry and a physics. Yes, I will select my chemistry in such a way that it is likely to result in my anticipated self-replicating systems, but that is completely kosher. Nobody is suggesting that any old chemistry will result in self-replicating critters. If I succeed, that will not solve the problem of abiogenesis, because my chemistry is only a toy chemistry, and only maps crudely on to real-world chemistry, and we do not even know, for sure, what real chemicals might have been around in the early earth (though we have some ideas). But it will, I suggest, demonstrate that we can push the need for an intelligent designer at least back as far as selecting the initial physics and chemistry, and that the claim that "chance and necessity" cannot produce the kind of information displayed by self-reproducing entities is false. Elizabeth Liddle
I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals. Thus, signal to noise ratio.
Yes indeed, but only if we have prior knowledge of the characteristics associated with a "meaningful signal". For example, if I attempt to transmit the message: N O W _ I S _ T H E _ T I M E _ F O R _ A L L _ G O O D _ M E N _ T O _ C O M E _ T O _ T H E _ A I D _ O F _ T H E _ P A R T Y down a noisy channel, it may appear like this: Q Q Q _ I Q _ T Q Q _ Q Q Q E _ Q O R _ Q L L _ G Q O D _ Q E N _ Q O _ Q O M E _ T O _ Q Q E _ Q Q D _ O F _ T H Q _ P A R T Y. Because we know there is a meaningful code called English, and because we know something about the probability distribution and contingencies of English letters in English sentences, we can immediately infer that the Qs are noise, not least because they frequently occur without a following U, which is extremely rare in English text. So we can ignore the Qs, or at least assume that at most a very small proportion of them are part of the original signal.

That's fine. However, let's say I was communicating to you in a code in which the total number of Qs in the message was an extremely important piece of information - the number of enemy ships in the Channel, for instance. And I had deliberately disguised this information by randomly selecting sentences from a typing manual to intersperse among the Qs. In that instance, the Qs would be the message, and the other letters would simply be irrelevant noise, albeit deliberately transmitted as a decoy. Or, let's say, we have an alarm system, in which I repeatedly send the simple message "Q", which means "execute emergency code Q". However, the message is contaminated by cross-talk from the Party HQ next door. Again, the signal is the Qs and the noise is the other letters.

For this reason I do not find it self-evident that we can distinguish signal from noise without prior knowledge about the signal. We can of course compare the signal sent with the signal received and quantify the noise in the channel, and quantify transmission fidelity. And that seems to me to be a reasonable approach to evaluating the results of my proposed project: if I end up with a self-replicating structure, we can compare the "parent" structure with the "daughter" structures and quantify the fidelity of the transmission.

Note that I do not suggest that a "random string" contains useful information. I do suggest, though, that evaluating whether a string is "random" is a whole nuther ball game, as the concept of CSI implies. And in the case of my proposed project, the daughter critters' morphology will be, by definition, specified by the parent critter. Not any old nice-looking pattern will do. It has to match, with some, if not total, fidelity, the parent pattern. If it does, I submit that information has been transmitted. Indeed, in my thought experiment with the Duplo Chemistry, baggage carousel and cold store, information in that sense was also transmitted. But possibly not enough :) Elizabeth Liddle
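Liddle's Q example can be put in runnable form: the same received string parses oppositely depending on which code the receiver assumes. The string below is a condensed version of her example, and the threshold is an invented stand-in for a real statistical test of English letter contingencies:

received = "QQQ_IQ_TQQ_QQQE_QOR_QLL_GQOD_QEN_QO_QOME_TO_QQE_QQD_OF_THQ_PARTY"

def suspicious_qs(text):
    """Count Q's not followed by U -- rare in English, so likely noise."""
    return sum(1 for i, c in enumerate(text)
               if c == "Q" and text[i + 1:i + 2] != "U")

if suspicious_qs(received) > 3:  # assumed tolerance for genuine QU words
    print("Assuming English: the Q's are noise; read the residue:")
    print(received.replace("Q", "."))
print("Assuming the Q-count code: message =", received.count("Q"), "ships")

Neither parse is recoverable from the channel alone; the prior knowledge of the code does the work, which is her point.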
F/N: Dr Liddle, please beware of beginning your work within or in near proximity to a target zone based on having done the targetting work off-stage. That is in fact the subtle fallacy -- and point of injection of intelligently developed active information -- in all GAs and similar algorithms that in effect use the idea of a space that tells you warmer/colder, directly or indirectly (e.g. a nice smoothly varying fitness metric that points you conveniently uphill, ignoring the evidence of vast seas of non-functional configs). At least, when they are presented as exemplars of chance plus necessity giving rise to functional, organised complexity without intelligent direction. kairosfocus
Mung (and Dr Liddle): I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals. Thus, signal to noise ratio. I also pointed out that as an artifact of the definition and the use of a weighted average measure to get avg info per symbol, we see that a flat random distribution will give a value of the metric, and indeed will be the peak of the avg info per symbol metric. [What that really means is that if we could squeeze out all redundancy and associated differences in symbol frequencies, we would get a code that would push through the maximum quantum of information per symbol, but in fact that is not technically desirable, as reliability of signals is a consideration; thus the use of error detection and correction codes. The use of a parity check bit is the first level of this.] I set that in the context where, to then take this anomaly of the metric and use it to pretend that a random bit or symbol string more generally is thus an instance of real meaningful information, is to commit an equivocation and to misunderstand why Shannon focussed on the weighted average H-metric. As I said before, one of his goals was to identify the carrying capacity of noisy, bandlimited channels such as telephone or telegraph lines. To then take this and try to infer that a random bit string is informational in any meaningful sense, is clearly a basic error of snipping out of context and distorting, often driven by misunderstanding. (I also think the desire to defy the infinite monkeys result and draw out complex messages from lucky noise is a factor. But, we have very good reason to see that complex messages are unreachable by random noise, and sufficiently complex starts at 143 ASCII characters.) GEM of TKI kairosfocus
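Since the parity check bit is cited here as the first level of error detection, a minimal sketch of how it works (even parity assumed; detection only, with no correction and no defence against double flips):

def add_parity(bits):
    """Append an even-parity bit so the total count of 1's is even."""
    return bits + str(bits.count("1") % 2)

def check_parity(frame):
    """True if the frame still holds an even number of 1's."""
    return frame.count("1") % 2 == 0

frame = add_parity("1011001")  # -> "10110010"
print(check_parity(frame))     # True: arrived clean
corrupted = frame[:2] + ("0" if frame[2] == "1" else "1") + frame[3:]
print(check_parity(corrupted)) # False: a single flipped bit is detected

The deliberately added redundancy is the point: it lowers the average information per symbol below the flat-random peak precisely in order to buy reliability.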
OK, thanks for the thoughtful and helpful responses above. I am absolutely serious about my intention to attempt the feat, but obviously I do want to make sure we all agree what success would look like if I achieved it! That's essentially what operationalising a hypothesis is. As I said, it's not an evading tactic (I'm actually dying to get started), it's simply sound methodology. There's no point in demonstrating something if people either think it's trivially true or, alternatively, don't agree that you've demonstrated what you claimed to demonstrate. So I hope my good faith can now be considered beyond reasonable question.

So, where are we?

Mung: I haven't proposed a conventional GA, because I don't think there is any disagreement (is there?) that a GA can result in increased information. Moreover, while the entire GA is not self-replicating, a GA incorporates a population of self-replicating "critters" - "individuals" with a genome that potentially encodes a solution to some problem. These individuals are copied, with variance, and the probability with which they are copied is modulated by the degree to which the "solution" they encode succeeds. In this respect they are a good analog of Darwinian evolution, as the evolving population is enriched by the traits that raise the probability of reproduction.

The usual objection to GAs as an analog of Darwinian evolution is that the fitness function is designed by the GA writer (who has her own purpose in writing it) and the copying algorithm is also external to the critters (the critters encode their own offspring, but not the mechanism by which they give birth to those offspring). My intention is to circumvent both those objections, firstly by providing no fitness function, relying instead on the intrinsic "fitness function" embodied in any self-building-replicator (i.e. a self-replicator that sees to its own self-replication): namely, the more efficiently it produces offspring, the more prevalently the traits it passes to those offspring will be represented in the next generation. Secondly, unlike a GA, my virtual world will not start off with a self-replicator at all. I will build in no self-replication machinery, nor any set of starting genotypes. I will simply let these emerge from the binding rules that govern my "vMonomers" and the stochastic kinetic energy they receive from the virtual fluid medium they inhabit (call it heat if you like, and consider it, for the purposes of entropy discussions, as originating from an external source). I hope that addresses your question, but if not, I am eager to know why.

Regarding measuring the "information" my virtual world (I hope) will create: yes, if we can come to an agreement on how we measure the useful/meaningful information embodied in my emergent critters, that would be cool. I am happy to use Shannon information as a starting point. I am concerned, however, as to how we will apply whatever we use to my emergent critters. To be specific, as the rules governing my "chemistry" will include philias and phobias, I anticipate that I will start with amphiphilic lipid-like vMonomers as well as base-like vMonomers, and that these will tend to assemble into vesicles, strings, and other compounds, which in turn will tend to disassemble and re-form. I do not intend to count this as "self-replication", although there is a sense in which patterns may tend to persist.
What I hope, however, is that eventually particular polymer-containing vesicles may fortuitously have contents that tend to result in the vesicle self-dividing into two "offspring" vesicles, each with at least some of the properties of the first, and that these in turn will self-divide, and so on; the ones that fortuitously have the properties most conducive to successful division, and to preservation and transmission of what I would at that point be inclined to call the "Information" embodied in the parent, will come to dominate the population. If I succeed (and as I keep saying, I have no absolute confidence that I will), then I would contend that Information, by any definition also applicable to living cells, would have been generated simply by means of the rules of Chance and Necessity established at the start. Indeed, if I use the same random number seed, I should get the same result (Necessity only), yet I have nowhere encoded in my program the information required to either build or replicate any given emergent structure. Indeed, I simply do not know what vesicle contents, if any, are likely to maximise the probability of vesicle division. What I would like to know (thanks, PaV), of course, is how, given the final population matrix, one would estimate its CSI, and whether it exceeds the required threshold. That would be cool :) Cheers Lizzie Elizabeth Liddle
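For concreteness, here is a deliberately tiny Python sketch in the spirit of the proposal - emphatically not Liddle's actual design. Everything in it (two vMonomer types, the affinity table, the split-at-length-8 rule, the tick count) is an invented stand-in for her unspecified chemistry:

import random

random.seed(7)
AFFINITY = {("a", "b"): 0.8, ("b", "a"): 0.8,  # philias
            ("a", "a"): 0.1, ("b", "b"): 0.1}  # phobias
soup = [random.choice("ab") for _ in range(400)]  # free vMonomers ("chance")
chains, divisions = [], 0

for tick in range(20000):
    if soup and chains and random.random() < 0.5:
        idx = random.randrange(len(chains))  # a chain meets a free vMonomer
        chain = chains[idx]
        mono = soup.pop()
        if random.random() < AFFINITY[(chain[-1], mono)]:
            chain.append(mono)  # "necessity": the affinity rules decide
        else:
            soup.append(mono)
        if len(chain) >= 8:  # long chains divide into two daughters
            chains.pop(idx)
            chains.extend([chain[:4], chain[4:]])
            divisions += 1
    elif len(soup) >= 2:  # two free vMonomers collide
        a, b = soup.pop(), soup.pop()
        if random.random() < AFFINITY[(a, b)]:
            chains.append([a, b])
        else:
            soup.extend([a, b])

print("chains:", len(chains), "| divisions so far:", divisions)
print("sample:", ["".join(c) for c in chains[:5]])

Because the philias favour alternation, persisting "abab..."-type daughters resemble their parents; whether such persistence is mere Orgel-style order or the beginnings of specified complexity is exactly the dispute running through this thread.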
Hi Elizabeth "Yes, it will be "virtual" chemistry, but that is different to saying it's a mathematical demonstration of the principle." Ambitious. It'll be very difficult to simulate reality at the molecular level. For example, quantum entanglement has a significant effect in stabilizing the DNA molecule, according to a paper by Vedral and his group: http://arxiv.org/abs/1006.4053 Entanglement has an instantaneous effect and unlimited range. It will be a challenge to take that into a model. Good luck. Eugen
Elizabeth Liddle @237:
All I claimed to be able to do was to produce Information from Chance and Necessity.
Yet you're describing a system which is decidedly non-Darwinian, but UB writes:
UB's challenge was for a demonstration of neo-Darwinian forces that caused the rise of the recorded information in the cell, and it has long since morphed into a sim that will have nothing to do with chemical reality.
I read that as neo-Darwinian, not non-Darwinian. Mung
kairosfocus:
In short, such an exercise is a red herring led off to a strawman. It is not along the right track.
Would you say that it's like receiving the same symbol over and over? It seems to me that Lizzie is proposing to create a symbol-generating system, even if it only generates the same symbol over and over. So it's like a sender with no receiver. It seems to me that Lizzie would need to create a communication system in order to demonstrate the generation of novel information. But I'm not sure how to make the case. Mung
Elizabeth Liddle:
I’m simply going to demonstrate (I hope) that Information can arise spontaneously from nothing more than Chance and Necessity.
What's your reason for not wanting to use a GA? Now supposedly evolution itself is a non-teleological process of nothing but Chance and Necessity, and is purported to be able to generate information, and not just information, but Complex Specified Information. So if you could show that using a GA, I don't know what the objection would be, or why any objections would be any different to what you are proposing in your virtual world. IOW, I personally don't understand precisely what the difference is if it's pre- or post-Darwinian. Mung
#213 "I've described how I propose to attempt the challenge. If you both are happy with the proposal, I am happy to start work." Don't let me stop you. As far as I am concerned this is between you and Upright BiPed and I'm just along for the ride. Mung
On the Simulation of the Generation of Information Elizabeth, I don't see why you couldn't use a GA. GAs aren't self-replicators. In fact, Schneider's claim is precisely that ev can generate information de novo. He goes further to claim that it can even generate CSI. And it sounds like what you're trying to create is similar in ways to ev, with its binding sites. Or if you think that ev is misguided, you might want to look at it anyway so as not to repeat the same mistakes. Mung
I’m taking a look at the formulations for CSI now...
Let me know if you find any relationship between CSI and Shannon Information. :) Wouldn't that just be amazing! Mung
Sorry for the mistakes. Was in a hurry and didn't proof. So, the question isn't, really, if chance and necessity could give rise to information. PaV
Elizabeth Liddle [241]:
I’ve said, explicitly, the role I plan to give to Chance, and the role I plan to give to Necessity. I’ve also set a fairly high bar for Information, as, without any self-replicating algorithm or starter critter, I plan to let my self-replicators emerge, then evolve.
If we set twenty monkeys in front of typewriters, I bet we could get them to type out what we would recognize as English words: like "zebra-crossing". Well, actually, probably nothing quite that long. And so we would have chance (the monkeys) and necessity (the mechanical structure of the typewriters). So information would arise. So the question isn't, really, whether chance and necessity can give rise to information. The question is, how much information can they give rise to? That's why there's a UPB. And that's why it would take the entire lifetime of the known universe for the monkeys to type out, at random, this sentence. Does this help to give perspective? PaV
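PaV's point of perspective can be made roughly quantitative. As a back-of-envelope sketch (assuming a 27-key typewriter of 26 letters plus space, equiprobable keystrokes, and borrowing the 143-character sentence length used elsewhere in this thread):

```latex
P(\text{one specific 143-character sentence}) = 27^{-143} \approx 10^{-205}
```

So even the roughly 10^150 trials contemplated by the universal probability bound would sample only about one part in 10^55 of what is needed, whereas a short word like "zebra" requires only 27^5, about 1.4 x 10^7, keystroke sequences on average: words come cheap, sentences do not.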
ME: You appear to have abandoned Shannon Information after having first introduced it. Can you explain why? Elizabeth Liddle:
Because obviously it doesn't work as a measure of the kind of information that either you or UB count (reasonably) as information. Because my information (despite having 100 bits in Shannon terms) wasn't "about" anything, you regard its information content as zero. So what I need is a measure of information that won't give us a false positive.
ok, thanks for your response. I appreciate it. So perhaps one way forward is not to abandon Shannon Information, but rather to see if perhaps we (one or both of us) were mistaken. Can we make it work? Mung
Mung, the confusion has arisen because I was trying to establish what criterion UB wanted to use for information. I wasn't offering a definition at all. What I want is an operationalized definition as it is used in the claim that Chance and Necessity cannot result in it (or not Complex, Specified Information, anyway). I'm taking a look at the formulations for CSI now, but I did hope that my description of what I anticipated would emerge from my virtual world would clearly qualify (being a structure that embodied the information necessary to duplicate itself). If so, then it should also exhibit CSI. Elizabeth Liddle
kairosfocus:
1 –> To distinguish signal from noise the signal has to have informational characteristics, not noise characteristics. Signal to noise ratio is in fact a key metric in communications. And you do not need to know the specific meaning to spot a signal from noise. (Yes, this is a design inference.)
Pretty amazing coincidence, lol. I had written up a comment yesterday on how it might be information if it was subject to distortion/loss by noise and correction, but deleted it because I did not want to muddy the waters concerning the nature of information. But you make a very good point. Mung
^sorry, messed up the tags. I hope it is clear where the words are UB's. Elizabeth Liddle
kairosfocus:
9 –> As for meaningful info, the way to measure it is to look at its functional specificity
I'm a bit surprised to learn that you believe in non-meaningful information. Seems to me to be an oxymoron. :) Mung
Upright BiPed: yes, indeed I missed this post, I do apologize (#229)
Lizzie, And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself. …and what shall we do with the observed discrete-ness? Lizzie, as much of an achievement as it might be, the issue is not whether you can concoct a realistic simulation with parameters where self-replicating structures spontaneously appear. That schtick has already been done with intelligent agents feeding energy and pre-programmed units into an intelligently constrained system (yawn).
But I do not propose to feed "pre-programmed units" into an intelligently constrained system. I thought I had made that clear. All I am providing is a "chemistry", not "pre-programmed units". The "units" will not be "programmed" at all. They will simply have a set of properties, as real compounds do. As for "feeding energy": well, yes, of course the system will need energy. I do not understand your objection to this.
"The issue is whether you can get an encoded symbolic abstraction of a discrete state embedded into a discrete medium, whereby that representation/medium is physically transferred to a receiver in order that the receiver become informed by the decoded representation."
Well, I think the word "symbolic" is problematic, as I've already said, because I do not regard biochemistry as "symbolic". But inasmuch as biochemistry is symbolic, mine will be too, in the terms I set out above, and to which I think you agreed. In other words the self-replication will not simply be a "negative" of the original. I anticipate that what I will end up with is something that contains elements that "code for" the replication of the whole. And moreover, this will not be coded by me, in any sense. All I will provide is the chemistry and the energy.
"As you can see, the rise of recorded information entails the rise of the abstraction, the symbol, and the translation apparatus/receiver. To approach it otherwise would be to attempt a book prior to the onset of paper, ink, the alphabet, or the reader. I wonder if you are failing to truly appreciate the conceptual issues you face. I know you are enamored with some idea of a mechanical representation (like a shadow, for instance), but that is not what is observed. Even the leading materialist researchers on this issue (Yarus, Knight, etc.) concede the observed indirect nature of translation. It is this prescriptive quality which you are shooting to mimic, and it is very much related to Pattee's "epistemic cut" or Abel's "cybernetic cut", and even Polanyi's "boundary condition". This is where the mechanism of the mind asserts itself in the causal chain, and for you to be successful, it is that quality (and its observed effects) you must reproduce without a mind."
Well, I have told you how I propose to do it, and what I anticipate the result will be. My challenge to you is: if, having provided no more than Chance (energy) and Necessity (deterministic rules), what emerges is a structure that embodies the coding for its own replication, not as a shadow, or a mould of each part, but a copy (with variance) of the whole, on what grounds could you say that I had not supported my claim? I should probably say at this point, and I hope people here agree, that I do not regard DNA as a code for a whole organism. It simply does not contain sufficient information. The information necessary to make a whole organism (or even another cell) is embodied not just in the DNA, but in the entire cell. Denis Noble, rightly IMO, regards DNA not as a program but as a database. I agree, and what I envisage is that eventually, if I succeed, my virtual world will be populated by cell-like structures containing database-like structures that supply the materials necessary for the maintenance and self-replication of the whole. Yes, it will be "virtual" chemistry, but that is different to saying it's a mathematical demonstration of the principle. Nothing wrong with math.
As I stated in my previous post: “…this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation.”
So am I. Although I am also interested in the principle, as ID is based on a principle. If that principle is found to be flawed, then ID has to think again, I suggest. And if a principle of ID is that Complex Specified Information cannot result from mere Chance and Necessity, then the context is irrelevant if I can demonstrate that it can. Yes? Elizabeth Liddle
p.s. It should follow from our discussion above that the mere ability to measure it is not what makes it information. Mung
Elizabeth Liddle @213:
Because my information (despite having 100 bits in Shannon terms) wasn’t “about” anything, you regard its information content as zero.
We must have misunderstood each other, though I find it difficult to understand how that came to be the case. I thought that you had agreed that if your "information" wasn't about something, it was not information after all. And yet here you are claiming that your "information" is in fact information, even though we both agreed that it wasn't about anything at all. In fact, you are claiming that your example contains 100 bits of information according to Shannon's measure. So from where I sit, you are contradicting yourself. Let's review: Mung @119:
As to your Shannon Information example. Even Shannon Information pre-supposes the existence of something called information. He just gives a way to measure it. True?
Elizabeth Liddle @166:
hmmm. Yes, I guess he does – no point in measuring something you don’t think exists
So we seem to be going backwards, not forwards, or perhaps in circles. Do you now say that your string of bits contains 100 bits of information about nothing at all? Why do you think that is the case? Now I have tried to think about this, really. There are two options, as I see it. Let me know what you think.
1. Any random assortment of anything contains a measurable amount of Shannon Information regardless of whether it is "about" anything. [That seems to be your stance.]
2. You thought your information was not about anything at all, but you were mistaken. You just failed to correctly identify what it was about.
Mung
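For what it is worth, option 1 is how Shannon's measure behaves. A toy illustration (a sketch only, assuming fair, independent bits, so each flip contributes I = -log2(1/2) = 1 bit regardless of what the string ends up "meaning"):

```java
import java.util.Random;

// Shannon's measure assigns 100 bits to any string of 100 fair coin flips,
// whether or not the string is "about" anything. Assumes equiprobable,
// independent bits: I = -log2(p) = 1 bit per flip.
public class ShannonDemo {
    public static void main(String[] args) {
        Random rng = new Random();
        StringBuilder s = new StringBuilder();
        double bits = 0.0;
        for (int i = 0; i < 100; i++) {
            s.append(rng.nextBoolean() ? '1' : '0');
            bits += -(Math.log(0.5) / Math.log(2));  // exactly 1.0 per symbol
        }
        System.out.println(s + " carries " + bits + " Shannon bits");
    }
}
```

The measure is indifferent to aboutness, which is why the thread keeps circling back to whether a further, meaning-sensitive criterion is needed on top of it.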
ME: And of course, all the evidence in favor of the existence of a designer must be disregarded for, after all, there is no evidence there has ever been one. ellazimm @122
I’m sorry that I don’t see any good evidence. But even Christians don’t agree what parts of the Bible are literal truth and what parts are metaphor.
What an enormous non-sequitur. Mung
tbh, I'm at a loss, UB. From my PoV, I offered a clear proposal to demonstrate a clear claim (mine) that you challenged me to support, namely, that Chance and Necessity (or the equivalent, I can't remember the original wording) could produce Information. I've been accused of trying to evade the challenge, and of hiding behind demands for definitions, when the reverse is the case; my problem with the definitions offered is not that they are too stringent but that they are not stringent enough. It's also been implied that I've been moving goalposts, and yet whenever I look there seems to be another goalpost to meet. I did not claim to be able to demonstrate how abiogenesis occurred. I did not even claim to be able to simulate life. All I claimed to be able to do was to produce Information from Chance and Necessity. I've said, explicitly, the role I plan to give to Chance, and the role I plan to give to Necessity. I've also set a fairly high bar for Information, as, without any self-replicating algorithm or starter critter, I plan to let my self-replicators emerge, then evolve. I may fail. My claim is that if I succeed, I will have demonstrated that Chance and Necessity can produce Information - not just any old Information, but Information encoded in a structure that enables that structure to duplicate itself. If that doesn't count as Information, then I really don't understand the ID argument at all - I thought it was precisely that kind of information (the information encoded within living cells/organisms that allows them to reproduce themselves) that IDists insist cannot be produced by mere Chance and Necessity. So what is the problem with my proposal? Is it that people think I am going to sneak something other than Chance and Necessity into my virtual world? (I will supply the source code.) Is it that Chance and Necessity can indeed create Information, but that what I propose won't be enough? Or is it that you can see it won't work and want to save me the trouble of attempting it? Seriously, I don't see what in my proposal fails to be a test of my claim. UB, you say:
UB's challenge was for a demonstration of neo-Darwinian forces that caused the rise of the recorded information in the cell
Well, I didn't think that was exactly the original challenge (it wasn't my original claim, anyway) but I could be wrong. My proposal is to show that "neo-Darwinian forces" can result in a cell-like structure that records information. In fact I go further than that, because I propose not to set up a Darwinian simulation with a starting primitive self-replicator, but to start with no self-replicators at all, just Chance and Necessity. I did this because people started saying: but first you've got to account for getting to self-replication in the first place, which cannot, obviously, be a result of Darwinian processes (which assume an existing self-replicator).
, and it has long since morphed into a sim that will have nothing to do with chemical reality.
"Chemical reality" was not in the original spec. However, in fact I have chosen to emulate a chemical process, as that seemed to me a way of getting past the holdup of defining information. If what emerges from my model is something that is clearly analogous to a living cell, then we can, I would have assumed,that it embodies the same kind of information that living things do.
UB has now written two posts to warn you of this impending doom, yet you’ve ignored the content of both posts. What do you think UB is thinking now?
I have not knowingly ignored any posts of anyone's, but I will go back and see what I may have inadvertently missed. However, I would appreciate a response to this post, and I would furthermore point out, as I have pointed out a few times: my requests for an operational definition of Information (and indeed an operationalised statement of the problem), far from being an attempt to evade the challenge, are an attempt to ensure that the challenge is watertight, and that I cannot produce something that passes on a technicality. My original claim, as I recall (I can't remember the actual thread, so I can't copy it, although I'll try to look later) was that I could demonstrate that Darwinian processes could generate Information. I did not say I could do it in a living cell - I was talking about the mathematical principle. I still am. However, as the point was raised (by kf) that that didn't address the question as to how the Darwinian processes would get started in the first place, I "morphed" the challenge into something even harder: to start with no more than Chance and Necessity, get from there to a self-replicator, and leave the Darwinian processes to result in evolution. And I anticipate the results will be a population of self-replicators, which must, of their nature, embody meaningful Information (meaningful in exactly the sense that cells contain meaningful Information). If not, why not? Cheers Lizzie Elizabeth Liddle
PS: Kindly compare my remarks at 214 above on the related math; BTW, the equation is derived from and expands Dembski's metric. This is one case where the easiest path to understanding is to move forward and simplify. kairosfocus
Dr Liddle: Pardon, but that sounds like a red herring chase to me, especially where the strings or whatever structures you propose will evidently have no real-world functionality based on a structured system that has the additional facility of self-replication. That issue of additionality to an existing, complex and specific function has been on the table since Paley in 1806. Paley talked of self-replicating watches; I have discussed self-replicating automata that in the case of the living cell are metabolic nanomachine entities, and in the case of Industrial Civ 2.0 would include in effect a self-replicating modular factory capable of manufacturing a key-technology industrial base given reasonable inputs. GEM of TKI kairosfocus
"I don’t see why that won’t meet UB’s challenge (if I succeed of course!)" UB's challenge was an demonstration of neo-Darwininan forces that caused the rise of the recorded information in the cell, and has long morphed away to a sim that will have nothing to do with chemical reality. UB has now written two posts to warn you of this impending doom, yet you've ignored the content of both posts. What do you think UB is thinking now? Upright BiPed
KF, I am not proposing to "design" an automaton. I am proposing to set up a virtual world consisting of a set of rules (necessity) governing mutual bonding in a population of vMonomers, and to give them random kinetic energy (random as to direction and timing). And I anticipate that my self-replicating automaton will emerge within that world. I will not design the automaton. No self-replicating algorithm will be written by me, and none will be written in the MatLab or Java code. It will simply arise, I predict, from the rules of Necessity and the kinematics of Chance. I don't see why that won't meet UB's challenge (if I succeed of course!) Elizabeth Liddle
Hi Kairos, hi Elizabeth ...interesting experiment with vMonomers.... A while ago I set up a thought experiment with electronic components. It could be done in reality, but I have no time for it. Let's fill 30% of a bag with logic gates, which we can consider simple rules for electron flow. Then attach little magnets to the gates' contacts to provide simple assembly rules. We cannot forget an energy source, so let's put small batteries in the bag and attach magnets to them as well. Everything should be floating in a dielectric liquid and kept in slight motion to provide chance collisions. How long before we get a one-bit adder? Eugen
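Eugen's question invites a rough Monte Carlo estimate. The sketch below is an assumption-laden toy version, not a model of the actual bag: five 2-input gates drawn from {AND, OR, XOR}, inputs wired at random to a, b, cin or earlier gate outputs, and the last two gates required to compute a full adder's sum and carry. It simply counts random wirings until one works:

```java
import java.util.Random;

// Monte Carlo version of the bag-of-gates thought experiment: wire five
// 2-input gates at random and count trials until the last two gates compute
// a full adder's sum and carry over all eight (a, b, cin) input patterns.
public class AdderSearch {
    static final Random RNG = new Random();
    static final int GATES = 5;

    static int eval(int op, int x, int y) {
        switch (op) {
            case 0:  return x & y;   // AND
            case 1:  return x | y;   // OR
            default: return x ^ y;   // XOR
        }
    }

    public static void main(String[] args) {
        for (long trials = 1; trials <= 200_000_000L; trials++) {
            int[] op = new int[GATES], in1 = new int[GATES], in2 = new int[GATES];
            for (int g = 0; g < GATES; g++) {
                op[g]  = RNG.nextInt(3);
                in1[g] = RNG.nextInt(3 + g);  // signals 0..2 are a, b, cin; 3+ are gate outputs
                in2[g] = RNG.nextInt(3 + g);
            }
            boolean ok = true;
            for (int v = 0; v < 8 && ok; v++) {
                int[] sig = new int[3 + GATES];
                sig[0] = v & 1; sig[1] = (v >> 1) & 1; sig[2] = (v >> 2) & 1;
                for (int g = 0; g < GATES; g++)
                    sig[3 + g] = eval(op[g], sig[in1[g]], sig[in2[g]]);
                int sum   = sig[0] ^ sig[1] ^ sig[2];
                int carry = (sig[0] & sig[1]) | (sig[2] & (sig[0] ^ sig[1]));
                ok = sig[3 + GATES - 2] == sum && sig[3 + GATES - 1] == carry;
            }
            if (ok) {
                System.out.println("full adder found after " + trials + " random wirings");
                return;
            }
        }
        System.out.println("no adder found within the trial budget");
    }
}
```

Even this drastically simplified version (perfect gates, guaranteed contacts, a free complete re-wiring every trial) can be expected to churn through a very large number of random wirings before succeeding, which is rather the point of the thought experiment; a physical bag of magnets and batteries would face far worse odds.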
F/N: Please mark a careful distinction or two: (i) in principle anything based on a configuration is strictly possible by chance; the issue is whether the likelihood is sufficient for chance to be a reasonable explanation or mechanism for it, (ii) I am interested in things that work by real-world mechanisms, and are sufficiently complex to be relevant. In particular, you need to explain how languages, algorithms, codes and complex programs with data structures and implementing machinery -- needed not just to self-replicate [mould-like, or even variable-mould-like] but to effect systems that transform environmental resources into useful objects -- will emerge from chance and necessity. kairosfocus
Dr Liddle: Remember, my primary interest is kinematic, not software. I am full to the gills with Langton loops, Cellular Automata, artificial life sprites, Life 2.0 sims, etc etc that are essentially pointless and irrelevant. And all of which are -- surprise . . . NOT -- intelligently designed. The energy conversion devices I am interested in will have to do things that can seriously convert the equivalent of solar, thermal, biofuel or wind etc energy into shaft work, shaft work that is controlled by coded inputs [no cam bars please]. And that can be then onward made locally through self-replication. [No games with feed in energy parameter, exhaust, pull in next bundle of e-parameter . . . ] Then, we need a code and the equivalent of the tape reader and writer that can drive position-arm assembly robot units that make machines from available components, at most components that can be refined from and made with local materials [including discarded junk]. (I would love something that can cost-effectively make Al and Fe from dirt, and plastics from cellulose!) Then we start to look at constructing machine tools, power packs, transmissions, and devices for farming, fishing, construction, commercial and industrial spaces, roads and transportation etc. An open-source industrial base, creating a C21 sustainable civilisation not dependent on whether the latest ME dictator or hot heads in the bazaars do something crazy this morning. Coming back from my angle on all this, I am sure you will see why I am looking at the cell as a model, and why I want to make sure we do not go barking up a strawman-decorated tree. Remember, I am not only interested in how the cell does it, but in creating an Industry 2.0 self-contained facility. Ship 'er in in 2 - 3 40-ft containers and you're good to go. (Then, Moon and Mars, here we come!) GEM of TKI kairosfocus
Well, I'm not going round in circles, kairosfocus. I'm just trying to establish what the challenge is that I have to meet. I'm not claiming to have met it, just trying to make sure we agree what meeting it would look like. i.e. I'm not saying: look, this is possible, therefore ID is wrong; I'm saying: if I demonstrate that this is possible, then that is a strong case against the ID argument. I know you have given examples, but what I am trying to do is to find a doable (IMO) example that would meet the challenge. And I really liked your formulation of what I must achieve. I agree to it - is there a remaining problem? I know you believe that my task is impossible, but I don't. The best way of seeing who is right is for me to attempt it, right? I will attempt to set up a world in which metabolising, self-replicating entities emerge from nothing more than chance (as defined above) and necessity (as defined above). If my critters appear, as I believe they will, they will utilise the kinetic energy of the movement of vMonomers (and any resulting vPolymers) to maintain themselves, and self-replicate (I envisage by cell division). Do you agree that if I succeed I will have made my point? Elizabeth Liddle
Dr Liddle: It seems to me that we are going in essentially fruitless circles on the concepts chance, necessity, choice/art/design. I have already given key examples on the three, and I would appreciate if they were revisited. I note that quantum-level events make no difference to whether or not a dropped heavy object falls at a certain rate. They make no difference to whether a bob on a string obeys the classic pendulum relationships, or the more sophisticated versions when we bring in other aspects. Yes, when the dropped heavy object is a die, the way we get to chance outcomes -- which face is up -- is by accidental, untraceable and unpredictable stochastic distributions tied to the nonlinear behaviour of the eight corners and twelve edges. But, in the old days, my dad used to use the want of correlation between telephone number assignments and surnames of people to generate random numbers. (He actually showed me how to do it once, and I actually did so once, oddly enough to drive a telephone calling survey, because I did not want to use a pseudorandom number process.) The point is that once we have similar enough starting points but statistically driven -- especially flat random -- end points, we have chance contingency. And sky or shot or Johnson noise give rise to pretty random outputs that can be ironed out flat. Similarly, the Maxwell-Boltzmann result for molecules can be modelled using hard little balls in a box, disturbed, with the inevitable variations between the boxes. An array of boxes as close to identical as we can make it, hit by the same initial impulses, will after a time have a similar distribution, but the boxes will NOT mimic each other, i.e. the pattern is sufficiently random to be useful as that. I already gave a specific case of choice contingency and a discussion on why it is that specified, complex information -- per the presence in narrow and separately definable target zones -- will not credibly be accessed by chance-driven contingency, due to the issue of dominant statistical weights of clusters of microstates. That is why 1,000 coins spelling out the ASCII code for the beginning 143 characters of this post would reliably indicate intelligence, not chance and necessity: tossing a coin is tossing a 2-sided die, with gravity etc providing the necessity and the chance coming from the finely sensitive variables attaching to that ring edge and how it interacts with the box etc. There is no serious escape from the issue that functionally specific complex information requires very special clusters of configs that will be so isolated in the space of possibilities that the best explanation for reaching them is choice, not chance. And when it comes to the laws and parameters of nature that set up an operating point conducive to C-chemistry, cell-based life, I have long gone on record on my view -- and reasons for it -- that such fine-tuned complex organisation is a signature of choice, not chance. But that is not what we are dealing with in the narrow circle of the code in D/RNA and the precisely specified functional configs of proteins, not to mention the requisites for a living cell to be a metabolising, self-replicating entity. THAT is what is to be explained, and not some arbitrary construct that leads in an utterly different direction not supported by empirical observation in the real world. GEM of TKI kairosfocus
kf:
What is needed is to generate a metabolic entity capable of transforming parts in its environment into components it uses, and energy to drive its processes that use energy. In addition, it has to code a representation of itself and in effect a self-assembly process for copying itself, metabolic and self-replicating facility included.
Yes, exactly. And that is what I intend to do, with nothing more than a set of Necessities, random Brownian motion, and, probably, heat fluctuations (they may not be necessary). So, can we agree that I will have succeeded if my resulting virtual critters:
1. are capable of transforming parts in their environment into components they use,
2. use energy to drive their processes that use energy, and
3. code a representation of themselves and in effect a self-assembly process for copying themselves, metabolic and self-replicating facility included?
If so, to further clarify, would you agree that if the things spontaneously self-replicate, that in itself is evidence that they embody the code necessary for copying themselves, together with their self-replicating capacity? If so, we have a WINNER!!! (Well, maybe a loser, but let's see...) As I said, it will take me a while, especially as I'm going to try, if possible, to do it in Java rather than MatLab, so people can run it themselves. But I may give up and use MatLab! (I'm learning Java right now). Cheers Lizzie Elizabeth Liddle
F/N: The Chi metric as applied to a 900-digit 4-state random string -- and let us grant for the moment that we are in a nice friendly medium without cross-interfering chemical species that would break it up and/or render such a string maximally unlikely to assemble -- no chirality issues, no different bonding possibilities. On the Hartley-suggested metric, a flat random string of 900 4-state digits is 1800 bits. However, specificity is zero, so: Chi_500 = [1800 * 0] - 500 = -500, i.e. well below the threshold. On replication, the new string is reproducing, not originating, so we now have Chi_500 [gen 2] = -500. Similarly: Chi_500 [gen 3] = -500, Chi_500 [gen n] = -500. Or, if we wish, we can interpret the first string as a template for a reproduced crystal, replicated n times. The originally random string is now redefined to be a template. The template being replicated n times, there is no addition of information. The template, let's call it X, is simply being replicated: XXXXXXXXXXXXXXXXX . . . Functionally specific, complex information is found in neither randomness nor in simple repetition. Suppose the replication can occasionally generate a random change: XXXXXXXXXXX . . . YYYYYYY . . . In this case, we are still seeing the same basic problem: we are not generating a functionally specific, complex, vNSR-equipped, irreducibly complex object. And that is what was to be generated. In short, such an exercise is a red herring led off to a strawman. It is not along the right track. What is needed is to generate a metabolic entity capable of transforming parts in its environment into components it uses, and energy to drive its processes that use energy. In addition, it has to code a representation of itself and in effect a self-assembly process for copying itself, metabolic and self-replicating facility included. This is what we need, instead:
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [here, for a "clanking replicator" as illustrated, a Turing-type "universal computer"; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader [called "the constructor" by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with "tool tips" controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
(v) either: (1) a pre-existing reservoir of required parts and energy sources, or (2) associated "metabolic" machines carrying out activities that, as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
Until we can create something that can do that, we are not even close to what is needed. (And BTW, I have a technology interest here, as a modular system able to replicate itself and to serve as a universal programmable constructor, or even something a lot less than that, is a way to industrial civilisation 2.0 and onward space colonisation. DV, I intend to make a blog on that -- ID-based thought is potentially a key to the technological transformation of the 3rd world. Imagine a network of communities with such modular self-replicating technology bases that can manufacture from local ingredients the decisive things for farming, commerce, general manufacturing, education and trade!) kairosfocus
Hi, Chris!
Thank-you very much for your response (208), Lizzie. The thrust of your sideways approach to Accident or Design is an appeal to "stochastic processes [that] involve feedback loops". That appeal raises the question: was that stochastic process put in place intentionally by the Creator? You suggest that your approach "certainly doesn't rule out a creator God" and if you actually did rule in the Creator, that would make you a theistic evolutionist like Kenneth Miller, for example. The acid test would be: do you believe that:
a. The Creator planned our existence (i.e. it was intended or Designed), or
b. The Creator was surprised by our existence (i.e. it was unintended or Accidental)?
However, if in fact you reject the existence of a Creator then you are actually proposing an Accidental choice to explain existence. All you are saying is that everything just made itself by a stochastic process that involves feedback loops. That is not a third way.
Well, as I said, it's a sideways look at the problem; I'm not suggesting an additional "third" way :) To be more specific, I am claiming that stochastic processes can result in complex structures. Indeed, non-stochastic processes that involve feedback can likewise (the most famous example is probably the Mandelbrot set, which is bound entirely by Necessity, with no Chance involved). I guess my point is that at first sight "Necessity" evokes deterministic processes that no-one expects to have a non-natural (supernatural?) cause (the cause is indeed regarded as "natural law"), while "Chance" evokes the idea of a noise process - accidents that make the operation of natural law unpredictable (contaminants in the chemistry set, or whatever). However, I suggest that when these concepts are carefully unpacked, they turn out to be somewhat different. "Necessity" at the quantum level turns out to be stochastic (i.e. events are drawn from a probability distribution, not a deterministic algorithm), and "Chance" turns out mostly to consist of things obeying natural laws but at unanticipated times. In other words, "Chance" turns out to be largely a function of how much we know (we as intelligent, predictive observers of our world) rather than some alternative to "necessity". After all, a good Newtonian physicist can sometimes beat the odds at Monte Carlo by close observation of the velocity and mass of the roulette ball and of the terrain it will encounter before it stops. In other words, whether something is "chance" or not depends on the information that an intelligent observer has at his/her disposal :) Which is somewhat the other way round from the usual ID argument!
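The Mandelbrot example is worth making concrete, since the entire generating "law" fits in a few lines. A minimal sketch follows (the grid bounds, resolution and 50-iteration cutoff are arbitrary rendering choices, not part of the mathematics):

```java
// Complexity from pure Necessity: iterate z -> z^2 + c for each point c and
// ask whether |z| stays bounded. No random term appears anywhere, yet the
// boundary of the printed set is endlessly intricate.
public class Mandelbrot {
    public static void main(String[] args) {
        for (double im = 1.2; im >= -1.2; im -= 0.1) {
            StringBuilder row = new StringBuilder();
            for (double re = -2.0; re <= 0.6; re += 0.04) {
                double zr = 0, zi = 0;
                int n = 0;
                while (zr * zr + zi * zi <= 4 && n < 50) {  // escape test
                    double t = zr * zr - zi * zi + re;
                    zi = 2 * zr * zi + im;
                    zr = t;
                    n++;
                }
                row.append(n == 50 ? '#' : ' ');  // '#': still bounded after 50 steps
            }
            System.out.println(row);
        }
    }
}
```

Run it and the familiar cardioid-and-bulbs outline appears in ASCII; rerun it and it appears identically, there being no Chance term to vary.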
Now then, back to arguing about the definition of information even though you’re not disputing that the cell is probably the most amazing source of information in existence! Even if you can “make the case that ‘information’ can arise from simple beginnings”, and you most certainly have not made that case yet, that would not even begin to help you explain the existence of the super-computer, super-factory like cell. If pebbles were the first computer, then no matter how many times we randomly rearrange them, we’ll never end up with even a digital watch never mind a super-computer. Even if you somehow managed to show that pebbles could, through an unintended stochastic process involving feedback loops, turn into a digital watch, the gulf from that to the super-computer like cell remains as unbridgeable as ever. The more sophisticated, complex and functional information becomes, the more prone it is to loss through unguided influences… never, ever gain.
Well, I would dispute that last claim. I don't think it's true :) At least not in the context of self-replication.
Belief in the existence of "precursors of that first modern cell" is an entirely unscientific one. It is not supported by experimental results or observational evidence. You merely 'hope' they existed to prevent the global paradigm shift in favour of Intelligent Design (which, of course, does solve the problem of the origin of life). Evolutionists know that attempting to explain Accidental abiogenesis scientifically, with reference to cell biology, is 'clear nonsense' which is the only reason they keep trying to disown it despite the fact that their beliefs commit them to it (unless they're theistic evolutionists).
Well, no. "They" don't "keep trying to disown it", so we do not need to explain why they do. And the reason scientists are interested in OOL questions is easy to explain - because the question is both fascinating and unsolved! Nothing a scientist likes more, whatever his/her beliefs :) Elizabeth Liddle
Lizzie,
And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself.
…and what shall we do with the observed discrete-ness? Lizzie, as much of an achievement as it might be, the issue is not whether you can concoct a realistic simulation with parameters where self-replicating structures spontaneously appear. That schtick has already been done with intelligent agents feeding energy and pre-programmed units into an intelligently constrained system (yawn). The issue is whether you can get an encoded symbolic abstraction of a discrete state embedded into a discrete medium, whereby that representation/medium is physically transferred to a receiver in order that the receiver become informed by the decoded representation. As you can see, the rise of recorded information entails the rise of the abstraction, the symbol, and the translation apparatus/receiver. To approach it otherwise would be to attempt a book prior to the onset of paper, ink, the alphabet, or the reader. I wonder if you are failing to truly appreciate the conceptual issues you face. I know you are enamored with some idea of a mechanical representation (like a shadow, for instance), but that is not what is observed. Even the leading materialist researchers on this issue (Yarus, Knight, etc.) concede the observed indirect nature of translation. It is this prescriptive quality which you are shooting to mimic, and it is very much related to Pattee's "epistemic cut" or Abel's "cybernetic cut", and even Polanyi's "boundary condition". This is where the mechanism of the mind asserts itself in the causal chain, and for you to be successful, it is that quality (and its observed effects) you must reproduce without a mind. As I stated in my previous post: "…this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation." Upright BiPed
Well, this is why I want an actual operational definition of information, and a clear method of quantifying it (if the criterion of success is going to be a quantified threshold). Upright BiPed imposed no such threshold, and that is the challenge I sought to meet. However, we did agree that the "representation" of the information should be disassociated from the content. So that rules out "moulds". What I am proposing (or rather expecting to arise from my starting conditions) is not as simple as my earlier "Duplo Chemistry" example, in which the Duplo polymers duplicated only themselves. I am expecting that an entire system (a "structure") consisting of an identifiable assemblage of vMonomers (probably polymerised in various ways) will reproduce itself, i.e. will intrinsically encode the information required to produce a not-quite faithful copy of that pattern. In other words, will do what living cells do (except on a much simpler scale). Moreover, I am not starting with a pattern, as, indeed, I did not, before. So I have not "designed" a mould that stamps out a pattern. I have simply set up a world consisting of Chance and Necessity in which self-replicating structures tend to emerge, and, in this case, evolve into ever-more-efficient self-replicators. If I succeed, then I submit it will be a powerful argument against the case that Information (as exemplified in the capability of living cells to self-replicate) cannot arise only through Chance and Necessity. If not, why not? Remember, I am programming nothing, merely laying down the Rules and the Hazards and letting the rest happen as it will. Elizabeth Liddle
Thank-you very much for your response (208), Lizzie. The thrust of your sideways approach to Accident or Design is an appeal to "stochastic processes [that] involve feedback loops". That appeal raises the question: was that stochastic process put in place intentionally by the Creator? You suggest that your approach "certainly doesn't rule out a creator God" and if you actually did rule in the Creator, that would make you a theistic evolutionist like Kenneth Miller, for example. The acid test would be: do you believe that:
a. The Creator planned our existence (i.e. it was intended or Designed), or
b. The Creator was surprised by our existence (i.e. it was unintended or Accidental)?
However, if in fact you reject the existence of a Creator then you are actually proposing an Accidental choice to explain existence. All you are saying is that everything just made itself by a stochastic process that involves feedback loops. That is not a third way.
Now then, back to arguing about the definition of information even though you're not disputing that the cell is probably the most amazing source of information in existence! Even if you can "make the case that 'information' can arise from simple beginnings", and you most certainly have not made that case yet, that would not even begin to help you explain the existence of the super-computer, super-factory like cell. If pebbles were the first computer, then no matter how many times we randomly rearrange them, we'll never end up with even a digital watch never mind a super-computer. Even if you somehow managed to show that pebbles could, through an unintended stochastic process involving feedback loops, turn into a digital watch, the gulf from that to the super-computer like cell remains as unbridgeable as ever. The more sophisticated, complex and functional information becomes, the more prone it is to loss through unguided influences… never, ever gain.
Belief in the existence of "precursors of that first modern cell" is an entirely unscientific one. It is not supported by experimental results or observational evidence. You merely 'hope' they existed to prevent the global paradigm shift in favour of Intelligent Design (which, of course, does solve the problem of the origin of life). Evolutionists know that attempting to explain Accidental abiogenesis scientifically, with reference to cell biology, is 'clear nonsense' which is the only reason they keep trying to disown it despite the fact that their beliefs commit them to it (unless they're theistic evolutionists). Chris Doyle
Dr Liddle: The issue is chance and necessity producing information beyond a threshold linked to either the solar system or the observed cosmos Planck-time quantum state thresholds, i.e we are looking at 500 or 1,000 bits as the lower threshold of functionally specific complex info to be produced by chance and necessity not with the injection of choice, as GA's routinely do. We already have cases where info of order 20 - 24 or so ASCII digits is produced by random text generators, i.e plumbing 10^50 possibilities. We are looking at 10^150 or 10^300 possibilities as the threshold, as that is where we can comfortably say the solar system or cosmos scale resources are swamped by the scope of search. 125 bytes of functional info in an algorithm or 143 ascii characters at the higher end. In addition, I have no interest in the equivalent of going back and forth with moulds, even digitised moulds, as you did already with the idea of plastic modules. The mould REPLICATES the pattern, it does not create it. Mould-copying is not creation of information. And, the pattern is not functional in itself, especially in the sort of code based vNSR context we are examining. if we are going down a mould road, that is a red herring led out to a strawman, as was already addressed. the information in question in mRNA is coded in the sequence of the string, and it specifies start, elongation and halt, in an algorithmic context based on a digital code. Which in turn are effected through machines in and around the ribosome. GEM of TKI kairosfocus
F/N: A version of cellular automata that fit together and replicate in a simulation does not answer to the issue. We need something that is chemical, not algorithmic.
It will be chemical (albeit simulated chemistry) not algorithmic. I will provide no algorithm for self-replication. This will have to emerge from no more than Chance (as in the movement of the vMonomers in the virtual world) and Necessity (the chemical rules and cyclical temperature fluctuations that govern bonding between vMonomers).
Unless, you will have thousands of cross-interfering chemicals, chirality and, for the equivalent of AAs in proteins, 50-50 peptide/non-peptide bonds, in a context where the relevant protein machines are on average 300 20-state AAs long and the relevant D/RNA strands — which must make up a code fortuitously, and the machines to make proteins must form equally fortuitously — must be of order 1800 4-state elements, just for a toy example. And the number of elements in your pond would have to be of order 10^20 – 10^26.
Well, we will see. What I would like to agree before I start, however, is that if a population of self-replicators does emerge from Chance and Necessity, that will satisfy the requirement for Information. After all, if a structure can replicate itself, it must embody the information required to do so, and the capacity to communicate that information to the system that assembles the replication, right? Elizabeth Liddle
Dr Liddle: If you can credibly produce a replicable unweaving of diffusion or similar phenomena [and remember there is a scale issue on fluctuations] — empirically, not a simulation — you have a Nobel Prize coming, as you are blowing away the 2nd Law of Thermodynamics. Let us see you do it . . . GEM of TKI
First of all, my project will of course be a simulation. I don't see a problem with that - a simulation is just a type of mathematical model, and we are talking about demonstrating (actually, falsifying) a principle here. If the falsification holds in the simulation, then it holds. The principle I am attempting to falsify is that Chance and Necessity cannot produce Information. And to make sure that Chance and Necessity are not confounded, I propose that the movement of my vMonomers is drawn from a completely flat random distribution (all directions equally probable; the probability of a move on any one trial equal for all vMonomers), i.e. Chance, whereas the way they combine is completely deterministic (Necessity). I may also include a cyclical fluctuation in a variable standing for temperature, which will govern the bonding, but that will also be entirely deterministic. If I succeed, I will not have violated the 2nd Law of Thermodynamics, because I am not proposing that this occurs in a closed system, but in one, like planet Earth, that receives energy from an outside source. The same would apply if I did it using actual chemistry, but I'm not going to use actual chemistry. I'm simply going to demonstrate (I hope) that Information can arise spontaneously from nothing more than Chance and Necessity. Elizabeth Liddle
F/N: A version of cellular automata that fit together and replicate in a simulation does not answer to the issue. We need something that is chemical, not algorithmic. Unless, you will have thousands of cross-interfering chemicals, chirality and, for the equivalent of AAs in proteins, 50-50 peptide/non-peptide bonds, in a context where the relevant protein machines are on average 300 20-state AAs long and the relevant D/RNA strands -- which must make up a code fortuitously, and the machines to make proteins must form equally fortuitously -- must be of order 1800 4-state elements, just for a toy example. And the number of elements in your pond would have to be of order 10^20 - 10^26. kairosfocus
Pardon typo: SCALE issue. kairosfocus
Dr Liddle: If you can credibly produce a replicable unweaving of diffusion or similar phenomena [and remember there is a scale issue on fluctuations] -- empirically, not a simulation -- you have a Nobel Prize coming, as you are blowing away the 2nd Law of Thermodynamics. Let us see you do it . . . GEM of TKI kairosfocus
kairosfocus at 214: This is very helpful. I need to print it out and read it carefully. Thanks. Elizabeth Liddle
kairosfocus, #
3 –> Sorry, but this is hand-waving. You are essentially calling for the unweaving of diffusion and Brownian motion to create complex self-replicating systems, with informational control. The number of dispersed states will so overwhelm the clumped-at-random states, and the random clumped ones the functionally combined ones, that you will run straight into Hoyle's tornado in a junkyard. (Cf my own thought experiment discussion here.)
Well, no, it isn't "handwaving", kf! It would be if I didn't intend to actually do it, but I do :) If I succeeded, would you accept that the "tornado in a junkyard" argument had failed? Elizabeth Liddle
PS: I'd point out that what I propose is not a conventional GA, because I am not proposing to start off with a self-reproducing structure. I am starting off with a randomly generated population of non-self-reproducing vMonomers, with certain "chemical" binding properties. I am trusting that my self-replicators will emerge "naturally" in this population and then go on to evolve. Does that make sense? Elizabeth Liddle
Thanks for this, UB:
I am trying to get you to demonstrate a natural process whereby a symbolic representation of a discrete state becomes embedded in a discrete medium, then that representation/medium is transferred to a receiver in order for that receiver to become informed by that representation. (And quite frankly, I am giving you a HUGE amount of leeway, given the facets of recorded information and information transfer that have yet to even be discussed). Why am I asking for this? Well, primarily because you suggested you could do it. But moreover, because this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation.
However, I should make it clear: leeway is precisely what I do not want! This is why I have been trying to drill down to an operational definition, with minimal leeway, of what you would regard as information. So if what you suggest provides me with "a HUGE amount of leeway" then it isn't terribly useful! That's why I presented a specific proposal. I've thought it out a little more thoroughly, so here it is: I propose to devise a virtual world populated by virtual monomers (which I will refer to as vMonomers). Each of these monomers will have a set of "chemical" properties, i.e. they will have certain affinities and phobias (if that's the right word), and some of those affinities will be shallowly contingent. This, if you like, is the "Necessity" part of the virtual world - a set of simple rules that govern what happens when my vMonomers come into contact with each other. It will be entirely deterministic. In contrast, the way my vMonomers move around their virtual world will be entirely stochastic (virtual Brownian motion, if you like), so that the probability of any one of them moving in any given direction is completely flat - all directions are equiprobable. So we have Necessity, and we have Chance. And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself. However, those copies will not be perfect, and so I also foresee that once my self-reproducing structures have emerged they will evolve; in other words, the most prevalent structure type in each generation will tend to change. As I say, I don't know that I can do this (although I believe it can be done!) If I succeeded, would you agree that information (meaningful information, i.e. the information required to duplicate a structure) had been created by Chance and Necessity? Or do you see a loophole? Because I do NOT want to work on this and discover I have slipped through a loophole! Cheers Lizzie Elizabeth Liddle
#213 "I've described how I propose to attempt the challenge. If you both are happy with the proposal, I am happy to start work." Please take note of my post at 212. Upright BiPed
EL, in your post #207 you say that there is a potential problem because Mung claims that information can be measured without knowing what it's about, while I claim that it must be about something. You then asked if I see the problem. No, I don't. Both those claims are correct, and not in conflict with one another. Upright BiPed
Dr Liddle: The Hartley-based info metric will assign a maximal value of H, the average info per symbol, to a flat random string of symbols, due to the mathematics of weighted averages: H = - Σ_i (p_i log p_i). This is an artifact of the mathematics, and is irrelevant to the issue that meaningful informational strings can be measured on observed frequencies of symbols interpreted as probabilities, yielding a measure of information: I_k = log(1/p_k) = - log p_k. A flat random string of digits equivalent to 100 coins will take a value on the metric, but that has nothing to do with whether or not it is informational in the functional, meaningful sense. Now, we can in fact address the measurement of meaningful information, on the fact that it will normally be structured per rules of meaning [such as with, say, a compressed A/D conversion rule for an analogue signal] or even codes [ASCII text in English], and an observed event E will therefore come from a confined region T of the set of possibilities for a string or related set of symbols. You will doubtless recall above the thought exercise of 1,000 coins in a tray with square slots, let's say a 10 x 100-slot array. A tray full of coins simply tossed will, with extremely high probability, be near to 50-50 H/T in no particular order. This is the statistically dominant cluster of configs, or microstates if you will. So if you saw a pattern that reflected that overwhelming dominance, there would be no reason to remark on it. All is as expected. But if the same tray were now to be seen as having the ASCII code for the first 143 characters of this post, that would transform our estimate of the best explanation. Precisely because what we now see is utterly unexpected on the null hyp of chance distributions. Given the scope of the possibilities for 1,000 bits, we could transform the whole observed cosmos into coin trays like that and toss them for its thermodynamic lifespan, and we would have utterly no credible basis for expecting that ANY such tray -- much less the one we have in hand, so to speak -- would do anything like that. For even the most impossibly fast coin tosses would not be able to sample more than 1 in 10^150 of the config space, so chance is not a credible explanation of such a specific, complex [the space of possibilities is very large] and informationally functional event. A far better explanation is that the coins were configured by choice, the other known causal explanation of highly contingent outcomes. (As was discussed earlier, necessity will produce strongly similar outcomes under similar initial conditions, i.e. this is how we detect and identify natural laws of necessity, like F = ma etc.) Now, over the past few months, there has been a considerable discussion on the Dembski Chi-metric in and around UD. The upshot of such is that it is best to reduce the metric -- through expanding the log and simplifying the threshold value -- to the following form:
Namely:
define φS as . . . the number of patterns for which [agent] S's semiotic description of them is at least as simple as S's semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 · φS(T) · P(T|H)] . . . eqn n1
How about this (we are now embarking on an exercise in "open notebook" science): 1 –> 10^120 ~ 2^398 2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2 3 –> So, writing D2 for φS(T), we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3 Chi = Ip – (398 + K2) . . . eqn n4, where Ip = – log2(p) and K2 = log2(D2) 4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities. 5 –> Where also, [following VJT and the implications of there being about 10^102 possible Planck-time quantum states of the 10^57 or so atoms in our solar system since the big bang] K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . .
Introducing as well the dummy variable S for observed or inferred specificity to a simply describable zone of interest T, where if an event E from T in a space of possibilities W, is so specific, S = 1, 0 otherwise: Chi_500 = I*S - 500, bits beyond the threshold. As was shown above, this will show how if we have a randomly generated bit string I will be high but S will be zero, and if we have a forced orderly repetitive pattern like unit cells in a crystal, we will have I low or zero even though S is 1. (All of this, BTW, is quite similar to the reasoning behind the simple brute force X-metric that is in the UD WAC's and which produces an equivalent result for the 1000 bit threshold. (I prefer this as this takes in the resources of the observed cosmos.) When MG first made her guest post the X-metric was used to show how the CSI can be estimated for such an event.) The point of the reduced Chi metric is that it allows us to identify events from a narrowly specific zone that has an observed or inferred meaning or function, and to then address the challenge of getting to the configuration on a random walk driven trial and error search, the benchmark search. For, it has been shown that on average searches will do no better than this, if a search is picked at random from the set of possible algorithms. As has also been shown [cluster of papers by Evo Info Lab], a simple subtraction will then suffice to show a value for intelligently injected active and problem specific information that allows a search to outperform the benchmark. GEM of TKI kairosfocus
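For readers who want to see the reduced metric at work, here is a minimal Python sketch of Chi_500 = I*S - 500 as laid out above, assuming a flat chance model over a finite alphabet; the function name and the two worked examples are illustrative only.

import math

def chi_500(sequence, alphabet_size, specific):
    # Chi_500 = I*S - 500: I = -log2(p), with p flat over all length-n strings
    I = len(sequence) * math.log2(alphabet_size)  # information in bits
    S = 1 if specific else 0                      # specificity dummy variable
    return I * S - 500                            # bits beyond the threshold

# A 1,000-coin string carries I = 1,000 bits on this metric, but if no
# functional specificity is observed, S = 0 and the metric stays negative:
print(chi_500("H" * 1000, 2, specific=False))     # -> -500.0
# 143 ASCII characters (7 bits each) of functional text: I = 1001, S = 1:
print(chi_500("x" * 143, 128, specific=True))     # -> 501.0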
Mung:
Well, Upright BiPed may think I'm strange, but so far I've heard no disagreement, so I have no reason to think we're at odds. Why not just ask UB if it would be ok to use Shannon's measure to operationalize whatever you propose to offer as information? You knew that your original "information" was not about anything, and none of us disagreed with you. We all saw right away that it wasn't about anything. So I see no reason to think that we can't agree whether an example is about something.
But, pending an explicit reference to a method for quantifying information, this is what I propose:
iirc, UB never objected to your use of Shannon’s measure, and even cites Shannon’s paper, and only wanted to know what the “information” was about. You had to admit it wasn’t about anything. It wasn’t information at all. You appear to have abandoned Shannon Information after having first introduced it. Can you explain why?
Because obviously it doesn't work as a measure of the kind of information that either you or UB count (reasonably) as information. Because my information (despite having 100 bits in Shannon terms) wasn't "about" anything, you regard its information content as zero. So what I need is a measure of information that won't give us a false positive. There's a metric for CSI in the glossary, but it seems to me to come with problems, and right now, as UB has agreed to my operationalized version of his/her second definition, I'm happy to go with that. More to the point, I've described how I propose to attempt the challenge. If you both are happy with the proposal, I am happy to start work :) It may take me a while though. Elizabeth Liddle
EL,
Well, that’s potentially a problem, Upright BiPed. Mung says that you can measure Information without knowing what it is about. But you are saying it has to be about something, and in order to know whether it’s about anything we have to have some criterion by which to judge whether it’s about anything.
The issue is that I am not necessarily trying to measure information. I am not asking you to measure information. I am not interested in the ratio of signal to noise, or how many bits of data are relayed, or how much uncertainty is alleviated. None of that. And the question of about-ness was only related to the information having a function; it was never intended to be a stumbling block. I am not suggesting (even for a moment) that these other issues are unimportant, they are just not what I am asking for. I am trying to get you to demonstrate a natural process whereby a symbolic representation of a discrete state becomes embedded in a discrete medium, then that representation/medium is transferred to a receiver in order for that receiver to become informed by that representation. (And quite frankly, I am giving you a HUGE amount of leeway, given the facets of recorded information and information transfer that have yet to even be discussed). Why am I asking for this? Well, primarily because you suggested you could do it. But moreover, because this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation. Upright BiPed
Dr Liddle: A few notes: 1 --> To distinguish signal from noise the signal has to have informational characteristics, not noise characteristics. Signal to noise ratio is in fact a key metric in communications. And you do not need to know the specific meaning to spot a signal from noise. (Yes, this is a design inference.) 2 --> You have gone on to say:
I'm going to start off with a "toy" chemistry – a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, ambiphilic, etc) in a fluid medium where motion is essentially brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I'm not sure. And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve, in which each unit (or "organism" if you like, or "critter") has, encoded within it, the "recipe" for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don't even specify a fitness function that isn't intrinsic to the "chemistry", that depends entirely on random motion ("Chance" if you like) and "necessity" (the toy chemistry) to create an "organism" with a "genome" that encodes information for making the next generation. Information "about" the next generation that is "sent" to the processes involved in replication.
3 --> Sorry, but this is hand waving. You are essentially calling for the unweaving of diffusion and Brownian motion to create complex self-replicating systems, with informational control. The number of dispersed states will so overwhelm the clumped-at-random states, and the clumped-at-random states will in turn so overwhelm the functionally combined ones, that you will run straight into Hoyle's tornado in a junkyard. (Cf my own thought experiment discussion here.) 4 --> In short, you have run straight into the second law of thermodynamics, statistical form. 5 --> Going further, the empirical evidence of the informational polymers of life shows that they tend to combine on standard modules, with sugar-phosphate chains for D/RNA and with COOH-NH2 chains for proteins. IT IS THE SEQUENCING THAT IS HIGHLY CONTINGENT, AND THAT SEQUENCING IS INFORMATIONALLY CONTROLLED USING ALGORITHMIC STEP BY STEP PROCESSES WITH SET-UP, START, STEPS AND HALTING. 6 --> We have not addressed chirality, peptide vs non-peptide bonds, the cross-reaction of other chemicals in a soup, or the high probability of breakdown of highly endothermic molecules. 7 --> There is a reason why we found this exchange [read down a bit from here] between Orgel and Shapiro a few years ago:
[Shapiro:] RNA's building blocks, nucleotides, contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [S]ome writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on "if pigs could fly" hypothetical chemistry are unlikely to help.
8 --> And, as for the evolution ideas, the basic problem is the origin of the functional information, in a context of high contingency [and I notice you have not responded to the die and 1,000 coin examples above]. Lucky noise, filtered by trial and error, is not a plausible source of information, algorithms, codes, and string data structures. 9 --> As for meaningful info, the way to measure it is to look at its functional specificity, in the context of its complexity -- thence the active info injected to outperform random walk and trial and error, something that the design thinkers are clearly bringing out. GEM of TKI kairosfocus
Elizabeth Liddle @208:
My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things – it couldn’t possibly have happened by Accident),
"I do not see why a “purposeless, mindless process” should not produce purposeful entities, and indeed, I think it did and does." - Elizabeth Liddle Mung
Well, Upright BiPed may think I'm strange, but so far I've heard no disagreement, so I have no reason to think we're at odds. Why not just ask UB if it would be ok to use Shannon's measure to operationalize whatever you propose to offer as information? You knew that your original "information" was not about anything, and none of us disagreed with you. We all saw right away that it wasn't about anything. So I see no reason to think that we can't agree whether an example is about something.
But, pending an explicit reference to a method for quantifying information, this is what I propose:
iirc, UB never objected to your use of Shannon's measure, and even cites Shannon's paper, and only wanted to know what the "information" was about. You had to admit it wasn't about anything. It wasn't information at all. You appear to have abandoned Shannon Information after having first introduced it. Can you explain why? Mung
Hi, Chris, @ #180! I do apologise for keeping you waiting. You wrote:
Greetings Lizzie, I mean for the terms Accident and Design to be as all-encompassing as possible – hence the initial capital letters. For me personally (going beyond the remit of ID science), either the Universe, and everything in it, is a product of the Grand Architect, or it just made itself without any kind of design whatsoever. With that emphasis in mind, I put it to you that *all* explanations that are on offer to explain existence ultimately fall into one of those two categories. The two main contenders in science – neo-Darwinian Evolution and Intelligent Design – are just competing explanations for Accident versus Design. There is no third way. Just saying there is one, without providing any details, doesn’t count by the way!
Fair enough! And my answer is a sideways one, I'm afraid (though I don't really apologise for that - sometimes problems can be solved by turning them sideways!) My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things - it couldn't possibly have happened by Accident), but that where stochastic processes involve feedback loops, Design, and even, what I call Intentional Design (which I find less ambiguous than Intelligent Design) emerges. In other words, I think that Intentional agents are one of the possible results from chaos (in the technical sense of non-linear stochastic processes). Now I'm still not giving you any details, although I'm happy to elaborate (though it's a long story....) - but that's my take. It certainly doesn't rule out a creator God (you could certainly make a case for the genius required to realise that a feedback system eventually, given eternity, will generate intelligent, intentional, moral creatures :)) but it does, I submit, make postulating that such a being might need to tinker with the thing unnecessary (and, IMO, bad theology!)
This thread and others are littered with definitions of information. Either every single definition provided fails because the cell is just “a simple homogenous globule of plasm”. Or, actually, we all know what we’re talking about here (we can even see it in the video in the top right hand corner of this very webpage) and all this talk of ‘gathering dust’ and 1s and 0s is, at best, missing the point (at worst, deliberately avoiding it). Why choose ‘gathering dust’ as a starting point for information (particularly in the cell) when supercomputers and superfactories are far more obvious and accurate associations?
Because if I (or anyone else) is to make the case that "information" can arise from simple beginnings, we need to know what the simplest possible "information" example is. I hope it goes without saying (though from responses on another thread I guess it doesn't) that no "Darwinist" thinks that the first modern cell formed "accidentally" from a fortuitous coming together of lipids, polymers, amino acids and proteins. That is clear nonsense. The issue is not whether that might have happened (vanishingly unlikely) but whether precursors of that first modern cell are possible, and whether the very early precursors might have been simple enough to have formed spontaneously under plausible scenarios for the environment on early earth. But that, of course, isn't a Darwinian issue. Darwin knew he hadn't solved that problem. No-one has, yet. So putting aside that problem, a secondary issue is: given the essentials for Darwinian evolution, namely self-replicating entities whose offspring vary from their parents, and vary in such a way that at least a few of them reproduce more efficiently than their parents, can the "information" we see in modern cells arise? I think the answer is clearly yes, but obviously people here differ. So what I will try to do is to show that even with a very primitive (but of course "toy") chemistry, "organisms" with these properties emerge. It'll be fun, but may take me a while :) Cheers Lizzie Elizabeth Liddle
@ Upright BiPed
Lizzie: "because Upright BiPed asked me what my original "message" was about, and of course, it wasn't "about" anything." The reason I asked you what it was about is because if information is not about anything then it's not information – at best, in the Shannon sense, it's noise. This was exactly Shannon's point in his schematic Fig. 1 on the second page of his famous paper. It offers a schematic diagram with five individually-named boxes. From left to right there is an arrow which passes through four of the five boxes in a specific order to indicate the flow of information. The flow begins at "Information Source" then passes through "Transmitter" to "Receiver" and finally to "Destination". The fifth of the five boxes is tangentially tied to the flow of information between the "Transmitter" and the "Receiver". The fifth box is entitled "Noise Source". - – - – - This is why I said I don't care what you want to say the information is about, but it must be about something. Your choice.
Well, that's potentially a problem, Upright BiPed. Mung says that you can measure Information without knowing what it is about. But you are saying it has to be about something, and in order to know whether it's about anything we have to have some criterion by which to judge whether it's about anything. Do you see what I mean? However, I'm not losing heart, because I think that operational definition we hammered out still works, although I'd like something with less wiggle room if possible. But, pending an explicit reference to a method for quantifying information, this is what I propose: I'm going to start off with a "toy" chemistry - a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, ambiphilic, etc) in a fluid medium where motion is essentially brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I'm not sure. And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve, in which each unit (or "organism" if you like, or "critter") has, encoded within it, the "recipe" for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don't even specify a fitness function that isn't intrinsic to the "chemistry", that depends entirely on random motion ("Chance" if you like) and "necessity" (the toy chemistry) to create an "organism" with a "genome" that encodes information for making the next generation. Information "about" the next generation that is "sent" to the processes involved in replication. If I succeeded, would you accept that I had met the challenge, or do you foresee a problem? (I have to say, I'm not sure I can do it!) Elizabeth Liddle
To test a claim we need a set of definitions that will enable an independent objective observer to evaluate whether the claim has been met.
Why do we need a set of definitions when we already have a measure? ME: We don’t need to know what it [a particular signal] is about in order to measure it, so why do we need to know what it [a particular signal] is about in order to operationalize it? But hey, maybe I'm way off base.
An operational definition defines something (e.g. a variable, term, or object) in terms of the specific process or set of validation tests used to determine its presence and quantity. That is, one defines something in terms of the operations that count as measuring it. http://en.wikipedia.org/wiki/Operational_definition
Lots of good stuff on that wiki page:
In quantum mechanics the notion of operational definitions is closely related to the idea of observables, that is, definitions based upon what can be measured. Operational definitions are the foundation of the diagnostic nomenclature of mental disorders (classification of mental disorders) from the DSM-III onward. An operational definition is a procedure agreed upon for translation of a concept into measurement of some kind
Now I don't speak for Upright BiPed, and if he disagrees he can certainly say so, but I'm willing to see where Shannon Information takes us, since it already exists as an accepted measure of information. The mistake I think you're making, which both UB and I have pointed out, is that you seem to assume that information can be generated simply by flipping a coin and that Shannon Information defines a concept of information per se that is totally divorced from that of meaning.
Shannon's analysis of the 'amount of information' in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning. – Donald M. MacKay, Information, Mechanism and Meaning You know who MacKay is? So I thought we were all on the same track, that it was agreed that information must be about something, only to find out that apparently we aren't. If it does not reduce the uncertainty at the receiver, is it information? So I was rather hoping you would continue with the coin flipping but let us know the meaning of the various heads or tails, or the meaning of a sequence, say every group of three. 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16: this tells us how much information, in bits, can be coded in a sequence of heads/tails. log2(8) = 3. Here's why (in binary) and in coin language where H=1/T=0: 000 = TTT, 001 = TTH, 010 = THT, 011 = THH, 100 = HTT, 101 = HTH, 110 = HHT, 111 = HHH. It is assumed that T/H is equiprobable. The probability distribution is 1/2, 1/2, 1/2, which, amazingly enough, when multiplied = 1/8. And if we toss three coins in the air, what is the probability of a specific combination of heads and tails? Ain't math fun. Keep it simple please, haha.
Mung
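For anyone who wants to check the arithmetic, here is a minimal Python sketch of the enumeration above, using the same H=1/T=0 mapping; it is purely illustrative.

from itertools import product
import math

outcomes = list(product("HT", repeat=3))            # all 2^3 = 8 triples
print(math.log2(len(outcomes)), "bits per triple")  # -> 3.0
for combo in outcomes:
    code = "".join("1" if c == "H" else "0" for c in combo)
    print("".join(combo), "->", code)               # e.g. TTT -> 000
print("probability of any one triple:", 0.5 ** 3)   # -> 0.125, i.e. 1/8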
You are on the Dirt and Time team, so I await your response.
oh, that made me laugh. Lizzie, tell him you need more time. Mung
No problem, Lizzie. The transfer window is open 'til the end of August! Chris Doyle
#200 "Do you have an analog for the sender, btw?" Well that is what is yet to be determined, isn't it? Some people say that a God or Gods did the arranging. Others say that extra-terrestrials could have been the source. Modern science says that Dirt and Time did it. :) You are on the Dirt and Time team, so I await your response. Upright BiPed
Lizzie: "because Upright BiPed asked me what my original "message" was about, and of course, it wasn't "about" anything." The reason I asked you what it was about is because if information is not about anything then it's not information - at best, in the Shannon sense, it's noise. This was exactly Shannon's point in his schematic Fig. 1 on the second page of his famous paper. It offers a schematic diagram with five individually-named boxes. From left to right there is an arrow which passes through four of the five boxes in a specific order to indicate the flow of information. The flow begins at "Information Source" then passes through "Transmitter" to "Receiver" and finally to "Destination". The fifth of the five boxes is tangentially tied to the flow of information between the "Transmitter" and the "Receiver". The fifth box is entitled "Noise Source". - - - - - This is why I said I don't care what you want to say the information is about, but it must be about something. Your choice. Upright BiPed
Chris - haven't forgotten you, but my access time is limited right now! Fitting this in between a java class and a rehearsal! Elizabeth Liddle
Upright BiPed
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation
Lizzie, At first glance, I have no particular problem with the definition you propose, save one element which we have both already acknowledged – that is the acknowledgement that there is a receiver to be informed by the representation. In the cell, this is obviously the ribosome (at least in regards to protein synthesis). - – - – -
OK, that's fine, thanks for clarifying. Do you have an analog for the sender, btw? (My response doesn't depend on it, I'm just curious.)
And just so you remember, you were going to demonstrate how material (neo-Darwinian) processes can account for the rise of information from the start.
Yes indeed. I certainly have not forgotten. If I have any more questions I will let you know, but I think I can work with this. Mung:
Lizzie, I think you’re doing a great job of mangling the meaning of information in order to avoid, well, the meaning of information. :)
…the first problem here is the word “symbol” as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue.
Oh my. Just when I thought we were making progress. I thought that you had admitted that information needs to be about something.
No need for distress, Mung :) As I've said, a few times, I'm not trying to avoid anything. Precisely the reverse - I want us to have an agreed operational definition so that if I succeed, we can all agree that I've succeeded. It's no good if we end up arguing whether I've succeeded after I've done the work, and in any case, I can't attempt the work until I know what I'm trying to do. That's why operational definitions are absolutely crucial to scientific methodology.
We don’t need to know what it is about in order to measure it, so why do we need to know what it is about in order to operationalize it?
Well, if we don't, that's fine. The issue of what the thing was about came up because Upright BiPed asked me what my original "message" was about, and of course, it wasn't "about" anything. However, I think we have captured the "aboutness" to some extent in the current formulation. Nonetheless, if you can point me to a clear metric for how to measure any information I manage to generate, that would be really cool. That's what I'm after, and that would be better than the verbal version above. I won't be able to get to this till Sunday, so if someone can post the metric (or reference the paper in which it can be found) that would be cool. Thanks. Elizabeth Liddle
Shannon's analysis of the 'amount of information' in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning. - Donald M. MacKay, Information, Mechanism and Meaning
Mung
PPPS: A stone carving is not a case of culling by blind contest on differential reproductive success. Indeed, it is a case of -- INTELLIGENT DESIGN: what is chipped off is very carefully chosen based on a specific target outcome that is specified by the artist's intent. (Just think, do forces of erosion -- chance and necessity -- credibly explain the four portrait figures at Mt Rushmore?) --> I don't have time for a point by point, so let's pick key snips that are the proverbial slices of the cake with the key ingredients. kairosfocus
PPS: Exercise 2. Get 1,000 coins of the same kind, say US pennies. Define H = 1, T = 0. Put in a tray of square slots and toss. Overwhelmingly, through sheer statistical dominance of contingent possibilities, they will tend to settle at near 50-50 H/T, and in no particular discernible order or organisation. Now, suppose you went away for an hour or so and came back, seeing the coins now starting from slot 1 and following on down, giving the ASCII code for the opening words of this comment. Would you say that the particular outcome is just as likely as any other single outcome, so it cannot be explained by differentiating characteristics of necessity, chance and choice? Or, would you accept that since this event E falls in a very specific and independently "simply" describable zone of outcomes T, the best explanation is intelligence, as the odds of not being in any such zone T are so overwhelming that the outcome should not be observable even once in the lifespan of our observable cosmos by chance? And, there is plainly no mechanical necessity that forces the coins to that sort of meaningful pattern. Explain your reasoning, and tell us whether that explanation would be persuasive to the House over in Las Vegas, and why. kairosfocus
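The arithmetic behind this exercise is easy to verify; a minimal Python sketch, taking 10^150 as the generous upper bound on chance samples used above:

config_space = 2 ** 1000      # possible H/T patterns for 1,000 coins (~1.07e301)
max_trials = 10 ** 150        # generous upper bound on chance tosses, as above
fraction = max_trials / config_space
print(f"fraction of the space a cosmos-scale search samples: {fraction:.3e}")
# -> about 9.3e-152, so any narrow prespecified zone T is effectively unreachable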
PS: And BTW, the unpredictability of weather in the specific is a case where fine differences in initial conditions -- essentially a chance issue -- make a big difference to overall outcomes. The forces that make winds move and the conditions under which water will precipitate out of the air have not changed. kairosfocus
Dr Liddle: Let's start by getting lawlike natural regularity tracing to forces of nature right, re your:
I don’t accept, at least as self-evident, that “natural regularity” is incompatible with “contingency”, or even high contingency. Or at least, I would need to see very clear operational definitions of those terms as used in that claim.
1 --> Get a die, the ordinary 6-sided, non-loaded kind. 2 --> Hold it up above a table, and drop it. Several times. 3 --> Does it reliably fall, and could you measure the initial fall as being at a constant acceleration? 4 --> After it lands on the table and tumbles, does it settle to a reading? 5 --> Does it always read the same? 6 --> Now, take up the same die and set it on the table to read 1, 2, 3, . . . 6. 7 --> You have just seen the difference between lawlike regularity tracing to mechanical necessity, chance contingency and choice contingency. GEM of TKI kairosfocus
It is the choice of words we make that gives our utterances meaning, not the phoneme bank from which they are drawn.
It's the words we choose not to use that give our utterances meaning. The phoneme bank is of huge importance to the number of words which may be left unsaid, without which words would be meaningless. Mung
Lizzie, I think you're doing a great job of mangling the meaning of information in order to avoid, well, the meaning of information. :)
...the first problem here is the word “symbol” as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue.
Oh my. Just when I thought we were making progress. I thought that you had admitted that information needs to be about something. We don't need to know what it is about in order to measure it, so why do we need to know what it is about in order to operationalize it?
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation.
The first problem here is the word “representation” as it somewhat, IMO, implies an arbitrary assignation of signifier to signified. Mung
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation
Lizzie, At first glance, I have no particular problem with the definition you propose, save one element which we have both already acknowledged - that is the acknowledgement that there is a receiver to be informed by the representation. In the cell, this is obviously the ribosome (at least in regards to protein synthesis). - - - - - And just so you remember, you were going to demonstrate how material (neo-Darwinian) processes can account for the rise of information from the start. Upright BiPed
Mel is right, NS is a culler, not a creator.
Yes and no. NS is certainly a culler, but culling can be creative (cf stone carving). However, if by "creative" you mean the provision of stuff from which to cull, then yes, the stuff is not created by NS. However, I'd say that the information lies in the culling, not in the "creating". It is the choice of words we make that gives our utterances meaning, not the phoneme bank from which they are drawn.
And it is by no means a given that the path to successful novelties lies always step by step uphill.
Absolutely.
Indeed evidence of use of codes and presence of irreducible complexity point to islands of function.
To maintain the metaphor, we are talking about "buttes", right? Summits to which there is no gentle path? There I disagree. In a high-dimensioned landscape there are often alternative routes to summits, and some of these may include gentle dips as well as plateaus. Elizabeth Liddle
Upright BiPed:
EL, I gave you a definition two days ago at 143. Echoing an apparent new trend in debate, you have failed to say what it is about the definition you find prohibitive, and why that is so. Instead, it seems possible that the revolving claim for a definition will stand in as a tactic for avoiding the question.
No, it is not a "tactic for avoiding the question", Upright BiPed, and if it is a "trend" then it is a trend with a reason. The problem with your definition, which I quote below, is firstly that it is not operationalized, and secondly that it is potentially circular. So let me have a go at operationalizing it, and see whether you are happy with it. You wrote:
As I already said, my starting point is the historical use of the word; that which gives form, to in-form (from the Latin verb informare), or, from the information processing domain; a sequence of symbols that can cause a transformation within a system. Either is suitable. If these are not sufficient for you, then I will add this: Information is an abstraction of a discrete object/thing embedded in an arrangement of matter or energy. This definition is fully compliant with what is found at the genomic level, as well as inter-cellular transient signaling systems, and every other instance of information I am aware of.
To test a claim we need a set of definitions that will enable an independent objective observer to evaluate whether the claim has been met. My claim was that I could demonstrate how Darwinian processes could generate information. So to show you a Darwinian process that I claim has generated information, we need an operational definition of information that will enable an independent observer to verify my claim is justified. OK, so let's first operationalise the terms of one of your definitions:
a sequence of symbols that can cause a transformation within a system.
Now the first problem here is the word "symbol" as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue. So I suggest that we replace it with something more neutral like "items". The second problem is the word "system". The problem here is that we need a system to transform. If I propose a biological system, then you will be tempted to say to me - but hey! You started with a biological system! Where did that come from! And if I start with a non-biological system, you will be tempted to say: hey! but that's not anything like what we see in living things! The third problem is the word "transformation" - what kind of transformation would count? Obviously I could drop a brick into a bucket of strawberries and "transform" some nice strawberry systems into a lot of mush. You'd probably (rightly) call that loss of information, but it's information we are trying to define right now! So we could go with something like: "A sequence of items that can cause non-destructive change to a persisting pattern." But I think we have lost the essence of your concept, so I don't think that will do. So let's try your alternative:
Information is an abstraction of a discrete object/thing embedded in an arrangement of matter or energy.
This looks more promising, apart from the word "abstraction". hmmm. Dictionary definitions of "abstraction" just send us back to "abstract". For "abstract", Merriam Webster has:
1 a : disassociated from any specific instance b : difficult to understand : abstruse c : insufficiently factual : formal 2 : expressing a quality apart from an object 3 a : dealing with a subject in its abstract aspects : theoretical b : impersonal, detached 4 : having only intrinsic form with little or no attempt at pictorial representation or narrative content
Which is somewhat problematic because these tend to reference ideas and minds, and again, we cannot include this in our definition if we are trying to determine whether a mind is intrinsic to information! However "disassociated from any specific instance" might give us a clue. That could give us something like: "Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation". That seems to work, I think, do you agree? So I can't, for example, claim that the pattern of raindrops left on sand is creating "information" about the rain, because the representation (dimples in the sand) is not dissociated from the drops (the dimples are rain-drop shaped). I'm not wild about this (it still seems to have potential loopholes) but what do you think? Believe me, I'm not trying to make this easy for myself - exactly the opposite! I'm trying to make it hard! Interested to know what you think. Cheers Lizzie Elizabeth Liddle
@kairosfocus, #178
Dr Liddle First, the null hyp is natural regularity [law]. If something is highly contingent, that kills the null. Then, the second null is that the thing is contingent reflective of a stochastic distribution. What kills that is being on a narrow zone of interest in a large enough config space, just as the analysis that supports the second law of thermodynamics highlights. In case you are interested, here is Dembski’s phrasing (and recall, this is to be applied per aspect, as linked above): “Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last” . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” The steps are plainly valid, and are based on the way science commonly works, the difference being that the chance/choice contrast is decided on isolation to a zone of interest in so large a config space that arriving there by chance is utterly implausible on the gamut of the cosmos or at least the solar system.
A couple of points here: Firstly, I don't accept, at least as self-evident, that "natural regularity" is incompatible with "contingency", or even high contingency. Or at least, I would need to see very clear operational definitions of those terms as used in that claim. Most phenomena we see in the non-living world are highly "contingent". Take weather, for instance. Certain weather phenomena only occur when a certain set of conditions are met. Or star or galaxy formation. Indeed the non-biological world is full of intricately patterned phenomena that are highly contingent, and yet an "Intelligent Designer" is not normally inferred from them (although the designer of a world in which such things occur may be). Secondly, as a result, I don't actually agree with Dembski on this (and indeed I am interested, so thanks for the quotation!). For one thing, I don't think "natural law" and "chance" are orthogonal causal factors. And even if they were, I don't see any a priori reason for assuming that if they are ruled out, the only alternative is some third cause. Let me try to support my position on this: I think Dembski is using "regularity" in the same sense that he uses "necessity" (as in "Chance and Necessity"). I think contrasting "Necessity" with "Chance" is fraught with difficulty. What is the difference between "Chance" and "Necessity"? Monod, interestingly, does not oppose the two terms in this way. He sees evolution as emerging from the interplay between the highly predictable ("Necessity") and the highly unpredictable ("Chance"). In other words natural events do not arise from one of two separate causal agents, "Chance" or "Necessity"; rather there are events that are highly predictable, and events that are highly unpredictable. Furthermore, highly predictable events are ones that are contingent on few conditions (you drop sodium into water and you will get a hydrogen flame with a high degree of certainty) while highly unpredictable events are contingent on a great many conditions, many of which may be unknown (which is why weather forecasting is so difficult). So I propose an alternative filter: You have a highly complex pattern. You ask: is this pattern highly predictable? And your answer may be yes. For example, the pattern might be a crystal of some kind, and it might be possible, with high degree of certainty, to predict the final crystal from known starting conditions. This would be the equivalent of "is it regular?" But it might be highly non-predictable, like a complex weather pattern. In which case you would have to conclude that the pattern was critically dependent on either starting conditions, or very small, possibly even quantum level, fluctuations. In other words, that the pattern was chaotic, and that feedback loops resulted in highly non-linear relationships between inputs and outputs. If the answer to the initial question was "non-predictable", i.e. we are dealing with a non-linear, chaotic system, the next question of interest, I suggest, is (back to Monod): does it exhibit teleonomy? Which I will define (with a tweak to Monod's definition) as: do its structures and behaviour contribute to the persistence of the pattern? If so, we may be in the presence of a living thing. If the answer to the last question is Yes, I suggest that we ask a final question: "Does it exhibit intentional behaviour"? By which I mean: do its activities provide evidence that it selects, from a wide repertoire of behaviours, those that further some distal goal?
If the last, we have, I suggest, an Intelligent Designer :) But none of that casts any light on the question as to whether the Intelligent Designer was Intelligently Designed - it does, however, cast some light as to what we should be looking for when looking for an Intelligent Designer. So I reject the validity of Dembski's filter. I don't think that Necessity/Regularity and Chance are orthogonal causal factors, I think they simply describe the degree of contingency that governs an event or phenomenon. Andn I think that the signature of life is neither predictability nor unpredictability (because living things exhibit both) but teleonomy. The big question, then, is, must teleonomic phenomena be designed by Intelligent Designers? Darwin's answer was: no. I think he was right :) Elizabeth Liddle
Dr Liddle: Cf also the discussion in 147, which was originally directed to you in a previous thread. Note especially the expression: Chi_500 = I*S - 500, functionally specific bits beyond the [solar system] threshold Where I is the usual I = - log2 p info metric from Hartley on. Also, where S = 1 or 0 depending on whether an item or event E is functionally specific from a definable zone T in a space of possibilities W. The 500 bits threshold sets up a sufficiently isolated threshold that once the value is positive we are credibly entitled to infer to design on the evidence of functional specificity and complexity together. A random walk culled by trial and error is maximally unlikely to access T on the gamut of the solar system's resources, which are 48 orders of magnitude smaller than the set of possibilities for 500 bits. If that is not good enough, jump up to 1,000 bits, which exhausts the resources of the observable cosmos at 1 in 10^150 of the set of possibilities. GEM of TKI kairosfocus
Dr Liddle: Intelligent agents -- including beavers and bees for this purpose -- have intentions, but can also cause unintended consequences. The key issue, as has been pointed out, is the localisation of events E in special and describable specific zones T in a wider space W of configs. Beyond a certain point the resources of the observable cosmos are inadequate to credibly hit on an E from a T, on random walk based trial and error. And that limit demonstrably starts within 1,000 bits worth of possibilities, where 125 bytes is a very short span for a program to make a serious difference. Genetic algorithms, I am afraid, all start in zones T, and seek to climb to hilltops. They thus reveal their root in intelligent design with intent -- AKA targeting. That can be seen from the fitness function, which at all points in the zone swept by the vary-test-and-cross-breed processes has a nice trend pointing to a locally accessible peak of performance. Starting in a broad target zone and seeking to optimise by heading for a hilltop is at best about micro-evo; it has nothing to do with the origin of major structural systems that must function from embryogenesis forward, i.e. body plan level macro-evo. GEM of TKI PS: Mel is right, NS is a culler, not a creator. And it is by no means a given that the path to successful novelties lies always step by step uphill. Indeed evidence of use of codes and presence of irreducible complexity point to islands of function. kairosfocus
EL, I gave you a definition two days ago at 143. Echoing an apparent new trend in debate, you have failed to say what it is about the definition you find prohibitive, and why that is so. Instead, it seems possible that the revolving claim for a definition will stand in as a tactic for avoiding the question. Upright BiPed
OK, well, I've checked through the thread, and at this point, I'm not sure who is waiting for responses to specific posts, so I'll try to respond to responses to my own posts that have appeared more recently. Upright BiPed: I'd love to respond to your challenge, but I do need an operational definition of "information" before doing so. I don't mind what it is though (i.e. I'm not arguing about the definition, I just need an operational definition corresponding to the conceptual definition of information you want me to use). @ nullasalus, #177:
Elizabeth Liddle,
But we do not specify the solution. That is what the evolutionary algorithm does, and in that, they are directly comparable to natural evolution.
But we can ‘specify the solution’ in principle, to whatever degree required. Whether or not we do is a reflection of our wants and abilities, not a reflection of GAs themselves. Indeed, we already ‘specify the solution’ to a degree just by employing GAs to begin with. They aren’t utterly unpredictable to us (otherwise their practical use would be far more limited.)
I think it's very important to keep the levels distinct here, otherwise we are in trouble when we try to map the GA model on to life. Yes, of course, we use GAs to solve a problem because we think that GAs might provide us with a solution! And yes, we can constrain the solutions if we want to. But my point is that we, as Intelligent Designers, can carefully define the problem statement in order to ensure that what evolves solves it; we are NOT designing the solution itself. We are designing the problem. Now I am well aware that getting a good problem statement is, in practical terms, a hugely important step in finding a solution. In addition, as Intelligent Designers we can also define the "solution space" - in other words, we can design our GA so that it varies along the dimensions that we think may bracket a solution. But that is not the same thing as actually solving the problem, and I think it's important to keep the steps clear. When we define the problem we are defining the fitness function: by what criteria does our program determine whether our critter breeds or dies? The analog of this, in Darwinian terms, is the environment - whether an actual organism breeds or dies depends on how it responds to the opportunities and hazards presented to it by the environment. So that part doesn't need an ID to account for. When we design the "solution space" however, there are a couple of analogs in Darwinian terms: we need to decide the dimensions along which our critters are going to vary (are we randomly adjusting parameters, for instance, or are we adding new terms or operators?), and indeed the kind of critter it is. And both of these, of course, require intelligent input, and are candidates for an Intelligent Designer opportunity in nature. So I can see the potential argument (cf Behe) that an Intelligent Designer is required a) to design the original critter (usually very simple) and b) to constrain the ways in which its progeny can vary. And I agree that we "neo-Darwinists" or whatever you call us (don't like the term much) need to make the case that these two things can happen in the absence of an Intentional Designer (I use that adjective advisedly :) Then, thirdly, there is the actual evolutionary process. Given the first two, the third is automatic. So, I'd argue we don't need to invoke an Intelligent Designer for that part. Of course that is the fun part - once you've defined your fitness function and designed your virtual biology, you can go home and wait for the system to solve your problem. And the answers can be deeply surprising! On occasions, it's even difficult to figure out how or why the solution actually works. So I guess what I'm saying (or trying to pin down) is where the trickiest part of the Darwinian puzzle lies. I'd say there is no problem in accounting, in very "natural" terms, for both the fitness function and the solution-finding process. The tricky part is accounting for how the actual original critter emerged, if not from an Intentional Design process, and how it happens (if it does) that the variance in its descendants brackets viable and useful novelties. Does that seem like a fair problem statement?
However, unlike artificial selection, where even the “solution” may be highly specified, in GAs it often is not. Indeed, some GA outputs it is quite difficult to figure out how they actually solve the problem.
And again – this comes down to a statement about the limitations of abilities of a proximate designer, not the processes themselves. In other words, what’s doing the work here in making these ‘Darwinian’ is not the processes themselves, but statements about the designer’s knowledge (or lack thereof) of them. You’re setting up a comparison where the principal metric to decide whether a designer using a GA ‘designed’ the results of a GA, is if the designer knew and intended the GA’s results. But unless you have an ID style design detection filter, science is unable to determine the answer to the question in play – “Did a designer know and intend these results?” Note that this all comes prior to the question of whether or not the processes (variation and selection as defined by any evolutionary theory, given what we know about nature) are capable of achieving what they did, with or without designer input. Just as a GA, whether or not it was designed, likely couldn’t go from a (to use a biological example) single cell to an elephant in 4 generations.
Yes indeed. I think we are playing the same game, at least, now, even if we are on different sides! Cool. Also I do think the distinction between "intentional" and "intelligent" is an important one. But perhaps off topic for this thread. Cheers Lizzie Elizabeth Liddle
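As an aside, the division of labour described above -- a designer-written fitness function and variation scheme, with the solution left to evolve -- can be seen in even the smallest GA. A minimal Python sketch with a deliberately trivial toy problem (all names and the TARGET_SUM problem are illustrative assumptions):

import random

TARGET_SUM = 100                      # the designer-defined "problem statement"

def fitness(genome):
    # Designer-defined criterion: how close does the genome's sum come?
    return -abs(sum(genome) - TARGET_SUM)

def mutate(genome, rate=0.1):
    # Designer-defined solution space: tweak integer "genes" at random
    return [g + random.choice([-1, 1]) if random.random() < rate else g
            for g in genome]

population = [[random.randint(0, 10) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]               # culling (selection)
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]
print(fitness(population[0]))  # approaches 0; no genome was hand-designed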
ellazimm: re the metric: you are again shifting the burden. It is not my job to tell you what the evaluatory metric is; it is the job of those that claim chance* mutation and natural* selection are sufficient to provide the metric that demonstrates those kinds of processes to be sufficient. However, just to move the conversation along: The unit of advancement you have available is a random* or chance* mutation. The accumulative pathway sorting process via eliminative algorithm you have available is natural* selection. The destination/goal you want to acquire is any highly functioning macro-evolutionary location - like winged flight or stereoscopic vision. The analytical metric would examine the capacity of any set of random* steps (mutations) to generate pathways (accumulative genetic mutations) to such a location, when the only modifier to the generated path is an eliminative algorithm that stops walks which produce sufficiently dysfunctional steps. Note that natural selection doesn't prescribe steps in any particular direction - it is not teleological; it is only capable of ending walks that are sufficiently dysfunctional. The metric would be an analysis of the capacity of chance* mutation walks modified by natural* selection eliminations (which really do nothing but limit pathways towards achieving the goal) to acquire any functioning macro-evolutionary location given the known parameters of volume of steps (how many mutations were likely to have occurred, given known mutation rates) in the time frame allowed. I guess that's one interesting thing I've gotten from this debate: I just realized that natural selection, an eliminative process, in a purely computational examination of random* steps towards a goal, does nothing but hinder progress towards finding the goal, because finding the goal might be more easily achieved via steps that in an organic world would result in eliminative selection. My profession is print design, and when I'm designing print work, I often do things that, to a casual observer, make no sense or would seem detrimental to acquiring the final design. For example, when inputting design copy, I just type in the string of words without worrying about the typestyle, color, size, location, etc. If the client were watching, they might think I'm ruining the piece, but I know all I'm doing is getting the text in so I can edit and fit it to the style of the piece later - choose fonts, colors, size, location, etc. The same is true of images I place in the piece, and how I arrange them for particular reasons. If one was to take the final piece of work and judge my path towards acquiring it, much of the process would not only not make sense to the observer, it would seem counter-productive and counter-intuitive - but that's only because they don't understand the design software, how it works, or what it is capable of. IOW, the design piece would have been long eliminated as dysfunctional by any editorial process that is not informed of the steps necessary to acquire the target. Natural selection - a teleologically blind process - might help in the organic world by reducing the quantity of entities competing for resources, but it certainly doesn't help in acquiring targets that it cannot have any idea how to acquire. Natural selection is like the client looking over my shoulder saying, "no, that doesn't look good, throw it out" when they have no idea about the final design I'm targeting or how to get there.
They're only going to make it extremely difficult to acquire a workable, appealing final design, because every step along the way has to conform to their idea of a good design, just as every step along the way has to meet the blind doorman's (natural selection) idea of what a good mutation is. Meleagar
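A toy simulation can make the kind of metric Meleagar proposes concrete. Everything below is an illustrative assumption rather than anyone's actual model: a 20-bit string stands in for a genome, the all-ones string for a "macro-evolutionary location", and a viability cutoff for eliminative selection. A minimal sketch in Python:

import random

# Sketch only: all parameters here are illustrative assumptions.
GENOME_LEN = 20
TARGET = [1] * GENOME_LEN   # stand-in "macro-evolutionary location"

def function_score(genome):
    # crude stand-in for organismal function: fraction of bits matching the target
    return sum(g == t for g, t in zip(genome, TARGET)) / GENOME_LEN

def walk(max_steps=10000, viability=0.0):
    genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for step in range(max_steps):
        if genome == TARGET:
            return step                                # goal reached
        genome[random.randrange(GENOME_LEN)] ^= 1      # one random mutation
        if function_score(genome) < viability:
            return None                                # walk eliminated by selection
    return None                                        # ran out of steps

trials = 1000
for viability in (0.0, 0.4):
    hits = sum(walk(viability=viability) is not None for _ in range(trials))
    print("viability cutoff", viability, "->", hits, "of", trials, "walks reached the target")

In this toy setup the eliminative cutoff can only prune walks, never steer them, which is the point being argued above; whether anything like that carries over to real fitness landscapes is, of course, the substantive dispute between the parties here.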
In response to ellazimm shifting the burden of determining whether or not the necessary mutations in question could be explained as "random" to me, I said: "In any event, it's not my job to prove they are not random, it is the job of those that claim that they are random to demonstrate not only that they are (that would be the first part), but that they are sufficient, when combined with natural* selection, to produce what they are claimed to have produced." To which ellazimm responded, doubling down on her burden-shifting: "Well, have you looked at all the statistical data and analysis of mutation rates? Have you read all the research looking at the observed occurrence of mutations? You've got a question, fair enough. Have you gone and looked for the answer? The world is not obligated to you to come and present all the data to you. If you have a question then the obligation is partly on you to go and find out what research and information already addresses your concerns." No, ella. The world is not obligated to bring me the evidence I ask for; the obligation belongs to those who claim as scientific fact that random* mutations are sufficient. It is not my job to seek out and find the evidence that supports your assertions; it is your job to do that. I haven't asserted the converse; all I have done is ask you to support your assertion, and all you have done is avoid it. Meleagar
So over on Cornelius Hunter's blog I came across the following:
Ergodic systems "forget" their initial conditions. In other words, from any random starting point the system will converge to the same "attractor".
So here's my question. Is a genetic algorithm an ergodic system?
This paper presents a fine-grained parallel genetic algorithm with mutation rate as a control parameter. The function of the mutation rate is similar to the function of temperature parameter in the simulated annealing [Lundy’86, Otten’89, and Romeo’85]. The parallel genetic algorithm presented here is based on a Markov chain [Kemeny’60] model. It has been proved that fine-grained parallel genetic algorithm is an ergodic Markov chain and it converges to the stationary distribution.
The transition matrix for the Markov chain suggests that the chain is irreducible and aperiodic. These two conditions establish the fact that this chain is ergodic and a unique stationary distribution of the population exists. These properties provide enough information about the convergence of the algorithm, although they do not guarantee the convergence to the global optimal solution. However, the algorithm provides a basis for the development of variants which will have global convergence.
http://www.intelligentmodelling.org.uk/Papers/rttg-publ28.pdf Mung
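The quoted paper's claim can be illustrated, though not reproduced, with a much cruder toy than its fine-grained parallel GA. The sketch below is an illustrative assumption throughout, not the paper's algorithm: a tiny mutation-plus-selection chain on 8-bit genomes, run from two maximally different starting populations. Because mutation can take any genome to any other (irreducible) and the chain can stay put (aperiodic), it is ergodic and "forgets" where it started:

import random

def fitness(g):
    return bin(g).count("1")   # stand-in objective: count of 1-bits

def step(pop, mut_rate=0.05):
    new = []
    for _ in pop:
        a, b = random.choice(pop), random.choice(pop)
        child = a if fitness(a) >= fitness(b) else b   # tournament selection
        for bit in range(8):
            if random.random() < mut_rate:
                child ^= 1 << bit                      # per-bit mutation
        new.append(child)
    return new

def mean_fitness_after(start_value, generations=200):
    pop = [start_value] * 20       # deliberately extreme initial population
    for _ in range(generations):
        pop = step(pop)
    return sum(map(fitness, pop)) / len(pop)

print(mean_fitness_after(0b00000000))   # all-zeros start
print(mean_fitness_after(0b11111111))   # all-ones start: similar long-run result

Both runs settle near the same mutation-selection equilibrium regardless of the starting population, which is the "forgetting" the Hunter quote describes. Note that, as the quoted paper itself warns, ergodic convergence to a stationary distribution is not the same as convergence to the global optimum.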
Hello again Ellazimm, Your most recent response to me suggests that evolutionist doctrine is an a priori part of your reasoning. Furthermore, forgive me for saying, it is causing you to miss the bleeding obvious! If only you subjected evolutionist assumptions to the same level of scrutiny as you are subjecting ID to.

The fact that it took mankind, using artificial selection, to generate the "many and varied morphologies…which had no previous incarnation" should be leading you to ask: why didn't natural selection manage it alone? You cannot call it a draw when you have zero observational or experimental evidence to support the occurrence of random mutations in a 5,000-year-old genome which provided all the variety that only artificial selection could achieve. Especially when we know that random mutations only serve to degrade, not improve, pre-existing systems.

"Originally, we all had brown eyes," said Hans Eiberg. Can you really not see that this statement is utterly devoid of empirical support? It is pure speculation based on the assumption that we evolved from a brown-eyed ape-like common ancestor.

Once again, you're appealing to 'peer review' and 'a large consensus' as if these are decisive in your favour. The only thing that matters in science is observational fact and experimental results. Clearly you disagree with my comments about Lenski's experiment. If you don't want to leave me with the strong impression that this is just cognitive dissonance on your part, you need to explain why, instead of implying that "your sources" are better than mine.

Again, you confuse the claim that eukaryotes evolved from prokaryotes with some sort of scientific finding. It is nothing of the sort. There is no observational evidence to support this claim and we cannot 'evolve' eukaryotes from prokaryotes in the lab. This claim is merely an appeal to some sort of unobservable, miraculous occurrence in the distant past. But, here's the important bit: even if that miracle did happen, that doesn't mean that we are descendants of bacteria. We would merely be descendants of the first eukaryotic cell. From that point (hundreds of millions of years ago) on, bacteria did not evolve into anything new whatsoever. They have effectively remained in unicellular stasis while the Cambrian Explosion occurred, while the dinosaurs walked the planet, right up until now. If evolution were true, and convergent evolution really does happen, then where are all the prokaryotic animals and plants? If you subjected evolutionist claims to scrutiny, you would urgently be asking why bacteria today are basically the same as they have always been.

This is not a matter of interpretation, ellazimm. It is a matter of who has true science (observation and experiment, not peer review and consensus) on their side. There can be only one of us that this applies to. Chris Doyle
Greetings Lizzie, I mean for the terms Accident and Design to be as all-encompassing as possible – hence the initial capital letters. For me personally (going beyond the remit of ID science), either the Universe, and everything in it, is a product of the Grand Architect, or it just made itself without any kind of design whatsoever. With that emphasis in mind, I put it to you that *all* explanations that are on offer to explain existence ultimately fall into one of those two categories. The two main contenders in science – neo-Darwinian Evolution and Intelligent Design – are just competing explanations for Accident versus Design. There is no third way. Just saying there is one, without providing any details, doesn't count by the way!

This thread and others are littered with definitions of information. Either every single definition provided fails, because the cell is just "a simple homogenous globule of plasm"; or, actually, we all know what we're talking about here (we can even see it in the video in the top right hand corner of this very webpage), and all this talk of 'gathering dust' and 1s and 0s is, at best, missing the point (at worst, deliberately avoiding it). Why choose 'gathering dust' as a starting point for information (particularly in the cell) when supercomputers and superfactories are far more obvious and accurate associations? Chris Doyle
PS: In short, once one has identified that we are dealing with high contingency, where on similar initial conditions we have a large variety of outcomes, we are not dealing with natural law rooted in forces of mechanical necessity. Once that is the case, the issue is whether the contingency is dominated by sheer statistical weight or is in a context where we see a rejection region -- and in classical Fisherian approaches for making serious decisions, RRs of 5% likelihood by chance are common. We are dealing with rejection regions that are far more remote than that. The basic argument is that if an event comes from a sufficiently remote region in the space of possibilities that such would not be credible on a random walk in the config space across the lifespan of the solar system or the cosmos, depending on the threshold used [and this was shown to be the benchmark for searches of such a space by Marks and Dembski in their work on active information], then we have reason to conclude that the reason we are in a zone of interest T that is well-fitted to a purposeful description is that we are there by choice. In short, if you are at Chesil Beach and you see shingles spelling out "Welcome to Chesil Beach", you do not infer to mechanical necessity or chance as the best explanation; you are applying an intuitive version of the sort of quantitative approach just described in outline and detailed elsewhere as linked. kairosfocus
Dr Liddle: First, the null hyp is natural regularity [law]. If something is highly contingent, that kills the null. Then, the second null is that the thing is contingent in a way reflective of a stochastic distribution. What kills that is being in a narrow zone of interest in a large enough config space, just as the analysis that supports the second law of thermodynamics highlights. In case you are interested, here is Dembski's phrasing (and recall, this is to be applied per aspect, as linked above):
“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last” . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.”
The steps are plainly valid, and are based on the way science commonly works, the difference being that the chance/choice contrast is decided on isolation to a zone of interest in so large a config space that arriving there by chance is utterly implausible on the gamut of the cosmos, or at least the solar system. That is not something that requires big peer-reviewed studies to support, but it is helpful to know that essentially this sort of reasoning is indeed the foundation of the second law of thermodynamics in stat mech. For, the direction of spontaneous change is towards clusters of microstates that are of much higher statistical weight. If you see something in a very special config [e.g. all the O2 molecules in a room at one end of it], that points to choice, not chance, as the most credible explanation. A further instance occurs with the text of this post. It could after all be just lucky statistical noise on the Internet hitting on a config in an island of function. But instinctively we know better. The CSI/explanatory-filter approach gives us a quantitative way to make the same inference -- one we make intuitively all the time. GEM of TKI kairosfocus
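The two-nulls-in-sequence logic can be written out as a decision procedure, which may make the order of priority easier to inspect. The predicates below are placeholders, since how to operationalise "highly contingent" and "specified" is exactly what this thread disputes; this is a schematic of the filter's control flow, not a working detector:

def explanatory_filter(highly_contingent, specified, info_bits, threshold_bits=500):
    # Order of priority per the Dembski quotation above:
    # regularity first, then chance, then design.
    if not highly_contingent:
        return "natural law (mechanical necessity)"   # first null retained
    if not (specified and info_bits > threshold_bits):
        return "chance"                               # second null retained
    return "design"                                   # both nulls rejected

# Bit count borrowed from kairosfocus's ASCII-text example elsewhere in the thread:
print(explanatory_filter(True, True, 22974))    # -> design
print(explanatory_filter(False, False, 0))      # -> natural law (mechanical necessity)
print(explanatory_filter(True, False, 100))     # -> chance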
Elizabeth Liddle, But we do not specify the solution. That is what the evolutionary algorithm does, and in that, they are directly comparable to natural evolution. But we can 'specify the solution' in principle, to whatever degree required. Whether or not we do is a reflection of our wants and abilities, not a reflection of GAs themselves. Indeed, we already 'specify the solution' to a degree just by employing GAs to begin with. They aren't utterly unpredictable to us (otherwise their practical use would be far more limited.) However, unlike artificial selection, where even the “solution” may be highly specified, in GAs it often is not. Indeed, some GA outputs it is quite difficult to figure out how they actually solve the problem. And again - this comes down to a statement about the limitations of abilities of a proximate designer, not the processes themselves. In other words, what's doing the work here in making these 'Darwinian' is not the processes themselves, but statements about the designer's knowledge (or lack thereof) of them. You're setting up a comparison where the principal metric to decide whether a designer using a GA 'designed' the results of a GA, is if the designer knew and intended the GA's results. But unless you have an ID style design detection filter, science is unable to determine the answer to the question in play - "Did a designer know and intend these results?" Note that this all comes prior to the question of whether or not the processes (variation and selection as defined by any evolutionary theory, given what we know about nature) are capable of achieving what they did, with or without designer input. Just as a GA, whether or not it was designed, likely couldn't go from a (to use a biological example) single cell to an elephant in 4 generations. nullasalus
Cool :) It's lovely to get at least to the point where you are in real disagreement - so often in these discussions, the battles are between straw men on both sides! Not that I'm blaming anyone (except perhaps myself), but finding common ground is Hard Work. I appreciate yours! But I have to go to bed, and I've got a pretty full day tomorrow. Hope you are maybe around later on. Cheers Lizzie Elizabeth Liddle
ME: "As to your Shannon Information example. Even Shannon Information pre-supposes the existence of something called information. He just gives a way to measure it. True?" Elizabeth @166:
hmmm. Yes, I guess he does – no point in measuring something you don’t think exists :) But he also defines it in terms of what the receiver doesn’t know. That was one of the things I was getting at – whether a message is informative or not depends on whether it tells you something you don’t know.
Now, does it make sense to speak in terms of what the receiver doesn’t know without any sense of aboutness? I think you are spot on here. I'm glad we've managed to come to this point. Shannon uncertainty is uncertainty about something and our reduction in uncertainty about something can be called information. I hope this will help you understand and appreciate UprightBiPed's question about, well, about. :) To modify what you wrote, I might put it this way: "how informative a message is depends on whether it tells you something about something you don’t already know." Any objection?
So sender and receiver are an important part of Shannon’s definition.
I think we probably agree on this. It is a theory of communication, after all. lol. I'm not sure that Shannon's measure can't be generalized, though, to situations without a sender/receiver pair. Mung
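The "aboutness" both sides are circling does have a standard Shannon-side formalisation: mutual information, the amount by which observing one variable reduces uncertainty about another. A small sketch, with invented weather readings standing in for a message that is about something:

from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y): uncertainty about X removed by seeing Y
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

sky = ["rain", "rain", "sun", "sun", "sun", "rain", "sun", "sun"]
barometer = ["low", "low", "high", "high", "high", "low", "high", "low"]

print(mutual_information(sky, barometer))       # about 0.55 bits: readings are about the sky
print(mutual_information(sky, ["a", "b"] * 4))  # near 0 bits: tells us almost nothing about it

On this reading, a receiver's reduction in uncertainty is always a reduction in uncertainty about some variable, which seems to be the common ground Mung and Dr Liddle arrive at above.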
Upright BiPed:
Sorry, can’t take that seriously.
Well, that's a shame. As I said, I'll do my best, but right now, I don't even know if you mean this thread or some other thread. Anyway, I hope I find it, but my response may take some time. That's why I asked for a link. Elizabeth Liddle
^sorry, messed up a tag :( Elizabeth Liddle
Would you please stop with this? I say again you misrepresent GAs. We don't use GAs because Darwinian mutation and selection can produce the appearance of design. GAs are inherently teleological, and that's why we use them. Darwinian evolution is not.
I think there is a confusion here. Yes, of course, we use GAs because we have a purpose - we want to solve a problem. Therefore we design a fitness function (an environment, if you will) in which critters that are better at solving our problem are selected. But we do not specify the solution. That is what the evolutionary algorithm does, and in that, GAs are directly comparable to natural evolution. In this way GAs are to natural selection as artificial selection is to natural selection. The difference is that in GAs and artificial selection, humans define the fitness landscape. However, in all three cases, it is Darwinian processes that produce the "solution". However, unlike artificial selection, where even the "solution" may be highly specified, in GAs it often is not. Indeed, for some GA outputs it is quite difficult to figure out how they actually solve the problem.
In a GA we know the problem we are trying to solve. We find a way to represent potential solutions. We define a way to tell us whether our potential solutions are more or less likely to reach our goal.
Yes. But we do not design the solutions. That is my point. The actual clever bit - the problem-solving bit - is done by the GA.
Darwinian evolution is not like this.
It is in the aspect that matters - the actual finding (indeed inventing) of the solution. However, as I said above, that is always assuming that the variance the critters can exhibit brackets viable solutions. So that seems to me to be the challenge Darwinian evolution has to meet. We know selection works very well, given a rich enough choice of incremental solutions. It's the origin of that variance that needs more research, IMO (and where understanding how genes have phenotypic effects has produced so much relevant evidence).
Elizabeth Liddle
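Since this disagreement keeps turning on what a GA's author does and does not specify, a toy may help fix ideas. In the sketch below (illustrative throughout, nobody's production code) the programmer writes only a fitness function and a mutation operator; the solution that maximises the fitness is never written down anywhere in the program:

import random
from math import sin

def fitness(x):                 # the designed "environment"
    return x * sin(x / 10.0)    # the programmer need not know where this peaks

def mutate(x):
    return x ^ (1 << random.randrange(10))   # flip one bit of a 10-bit genome

population = [random.randrange(1024) for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]                       # keep the fitter half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(best, fitness(best))      # typically finds a high peak, often near x = 1021

Whether one glosses this as "the author smuggled the target in via the fitness function" (Mung's reading) or "selection found a solution nobody specified" (Dr Liddle's) is the live question; the code itself is compatible with both glosses.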
#163 Sorry, can't take that seriously. Upright BiPed
I just received The Plausibility of Life, which is pretty cool
I have that one too :) Let us know what you think of their arguments against ID. For example:
These secular theories, labeled intelligent design, argue against evolution ... attempting to show from first principles why evolution is impossible.
YIKES! And:
Advocates of intelligent design have introduced the term irreducible complexity. It is meant, in principle, to contradict the theory of evolution by arguing that complex physiology is too improbable to have ever been assembled by chance.
Now if you didn't know better, would you want to come here to UD and argue against ID based on this book? Mung
Elizabeth:
The real challenge to Darwinism, it seems to me, is not whether natural selection can produce the appearance of design (it can – indeed it can actually design, which is why we use GAs)
Would you please stop with this? I say again you misrepresent GAs. We don't use GAs because Darwinian mutation and selection can produce the appearance of design. GAs are inherently teleological, and that's why we use them. Darwinian evolution is not. In a GA we know the problem we are trying to solve. We find a way to represent potential solutions. We define a way to tell us whether our potential solutions are more or less likely to reach our goal. Darwinian evolution is not like this. Mung
"No, Stephen, I read more than the “contents pages”. What he seemed to be saying was that the genetic code couldn’t have emerged from purely physical/chemical processes. That seemed unsubstantiated to me, and, in any case, an argument from lack of evidence/alternative model rather than a positive argument." Well now you see that's where you're wrong, and I find it odd that you would comment on a book without having read it. I think we had a discussion about this particular matter recently. His book is not about a lack of evidence for Darwinian process. Far from it. His book is about evidence that points in another direction than Darwinian process based on what we already know about designers. I think you need to read the book, and read it carefully. CannuckianYankee
Mung: well, I'm still waiting for The Signature in the Cell, but I just received The Plausibility of Life, which is pretty cool:) However, I'll take yet another opportunity to plug this: http://videolectures.net/eccs07_noble_psb/ It's worth listening to whether you are an IDist or a "Darwinist". I hope Dawkins has read the book :) Elizabeth Liddle
Mung:
: Elizabeth Liddle @93: when DNA…is read in a cell…linearly, what is it that constrains it to be read linearly? What stops it, if not physical/chemical forces, from reading it non-linearly? What constrains it? First, I’d like to commend you on your effort.
aw shucks :)
But we have known systems of information storage and retrieval. Why not appeal to those for an analogy? Take a hard disk, or CD/DVD or RAM. Think about whether the data is read from them in a linear fashion, and why. What is it about matter, energy, or information that requires that it be read in a linear fashion?
Nothing, and it doesn't, always. But the reader (or reading head) has to do it right in order to make sense of it. What I was asking was what guides (physically) the reading head in a cell? Not why is it so guided?
Could the information that is stored in DNA be read in a non-linear fashion?
Well, in a sense it is. It's certainly not read from start to finish. It's read more like a database, in which only relevant information is read, and then, only as needed. That's in teleological language :) But the individual genes are read in one direction, guided by the physical properties of the molecules. That was my point really. Not that the molecules aren't intelligently arranged, but that once arranged, they are physically constrained to be read as they are.
What is it about DNA that says that the three bases that code for a particular codon must be arranged in a linear manner on the DNA strand?
Ah. Well, that's the way it's arranged in DNA. It may not be the only possible way. If we ever do find life in other parts of the universe it will be interesting to know how unique the DNA arrangement is.
As to your Shannon Information example. Even Shannon Information pre-supposes the existence of something called information. He just gives a way to measure it. True?
hmmm. Yes, I guess he does - no point in measuring something you don't think exists :) But he also defines it in terms of what the receiver doesn't know. That was one of the things I was getting at - whether a message is informative or not depends on whether it tells you something you don't know. If I send the same string of ones and zeros to Upright BiPed over and over, it will become so predictable that it will no longer contain any information, meaningful or otherwise. It will go straight to his/her spam folder! So sender and receiver are an important part of Shannon's definition. That's why, when people talk about information in the cell (which I think is perfectly valid - I think the cell is full of information), I'd like to know what the analogues of "sender" and "receiver" are. I should say that I'm particularly interested in gene-expression, of course, because I'm interested in neurotransmitters. But I don't primarily see DNA as "the blueprint for the body plan". I don't think it is. It's the database on which the cells draw for their repertoire of potential behaviour, which depends on far more than the DNA. None of which is at odds with ID of course, but I just thought I'd put it out there :) Elizabeth Liddle
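The spam-folder point can be given a number. First-order symbol counts miss predictability (a perfectly alternating string still has a 50/50 symbol mix), so the sketch below estimates the entropy rate instead: the receiver's remaining uncertainty about each next symbol given the previous one. Both test strings are invented for illustration:

from collections import Counter
from math import log2
import random

def entropy(seq):
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def entropy_rate(msg):
    # H(next symbol | previous symbol) = H(pairs) - H(prefix)
    return entropy(list(zip(msg, msg[1:]))) - entropy(msg[:-1])

random.seed(1)
unpredictable = "".join(random.choice("01") for _ in range(1000))
repetitive = "10" * 500   # identical symbol frequencies, totally predictable

print(entropy_rate(unpredictable))   # close to 1 bit per symbol: still informative
print(entropy_rate(repetitive))      # close to 0 bits per symbol: straight to spam

The two strings have identical symbol frequencies; only the predictable one collapses towards zero bits per symbol, which is the sense in which it stops telling the receiver anything new.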
The real challenge to Darwinism, it seems to me, is not whether natural selection can produce the appearance of design (it can – indeed it can actually design, which is why we use GAs) but whether mutational processes can provide a sufficient range of potentially advantageous options to select. I’d be more than happy to discuss that, but it seems to me that’s the real chink in Darwin’s armour, not natural selection. Where does the variance come from, and why should there be enough variants that are actually more advantageous than what preceded them to make it work?
My copy of The Genetic Basis of Evolutionary Change by R.C. Lewontin arrived yesterday. :) "For I now realize that the question was not simply, How much genetic variation is there? nor even, How much genetic variation in fitness is there? but rather, How much genetic variation is there that can be the basis of adaptive evolution?" Mung
Elizabeth:
The null hypothesis for the H1: “Darwinian evolutionary processes account for the appearance of design in living things”: is not “an Intelligent Designer must have accounted for the appearance of design in living things” but “Darwinian evolution does not account for the appearance of design in living things”.
Well, this does bring up an interesting question which I hope you'll take some time to think about, even if only over tea. This question about the appearance of design. It seems that neither H1 nor H0 makes much sense without some idea of what things appear to be designed and what things do not appear to be designed. Darwin and Dawkins seem to just take this appearance of design for granted. But what is it, actually, that Darwinian evolutionary processes are supposed to explain? How do you decide, scientifically, what has "the appearance of design" and what does not? It's hard to believe that Darwinism can be an acceptable scientific explanation for some phenomenon that cannot even be scientifically identified and described. What is it, exactly, that gives a thing "an appearance of design"? Scientifically speaking, of course. Since we're all here doing science. Right? Mung
@Upright BiPed
"Could you remind me which one, UB? With a link if possible, or a thread title." You needn't look far, it's the one you just posted on. Scroll north.
Sorry, you'll have to be more specific. The thread immediately above this one is one I haven't posted on. If you mean this thread, then I'm going to need a post number. I'm trying my best to address all the responses that people have made to my posts, but it's going to take me some time. Elizabeth Liddle
ellazimm:
And really, truthfully, most people here already know my arguments because they are nothing new or original.
Right. We're familiar with the arguments. It's your reasons for believing them that have us scratching our heads. Mung
Kairosfocus @ #
I trust this will help clarify. GEM of TKI
It doesn't, unfortunately, Kairosfocus, although I do hugely appreciate the efforts you went to. I did start a point-by-point reply, but the thing got really unwieldy. So let me try a more summary approach:

Regarding the "null hypothesis" approach to hypothesis testing: you seem to have a very different idea of what this consists of from the one I understand. In conventional empirical science, Fisherian hypothesis testing consists of a study hypothesis, often called H1, and a null hypothesis, often called H0. The null hypothesis is simply the hypothesis that H1 is false. However, "retaining the null" is not the same as falsifying H1 (which is, in practice, very difficult). But this puts great constraints on what hypotheses can be tested. The null hypothesis for the H1 "Darwinian evolutionary processes account for the appearance of design in living things" is not "an Intelligent Designer must have accounted for the appearance of design in living things" but "Darwinian evolution does not account for the appearance of design in living things". In other words, the null is always framed in reference to H1, otherwise the stats don't work. You can, of course, instead of comparing H1 to H0, compare two competing hypotheses: you can say: "intelligent design accounts for the appearance of design in living things better than Darwinian evolutionary processes do". But to do that, you would have to make differential predictions for the two hypotheses, not just infer that if one fails, the other is supported. It is perfectly possible for both theories to be false!

Regarding your comments about upper probability bounds: from my PoV they seem irrelevant. I'm not saying they are, but I am certainly not seeing their force. Yes, some things are so unlikely that we would not expect to see them in the knowable universe (flying spaghetti monsters, perhaps, or a teapot orbiting Mars). But in order to ascertain the probability of some event, we need some priors, and I think it is your priors that I dispute. It's not that my priors are "right" and yours "wrong" (the whole point of priors is that they are adjustable in the light of new information, and they are probabilities anyway - we can even put priors on our priors being right!) It's just that, to me, there are plenty of promising leads in the search for antecedents of the "minimal first modern cell", so I am not at this stage prepared to say: the probability that the first modern cell had viable antecedents is zero. Of course I agree that the probability that the first modern cell self-assembled by chance is on the order of Flying Spaghetti Monsters and orbiting teapots. So that is not the source of our disagreement (at least I don't think so).

Lastly, re random walks: Darwinian evolution is not a random walk - or rather, it is a biased random walk. Imagine a drunkard's walk where the terrain slopes gently downwards. Except that you have to imagine an army of drunks, so huge that the whole terrain alters under their weight :) Understanding of drift has made a big difference, however, and we now know that the terrain is much flatter than we thought (although it may have the odd gully). Nonetheless it slopes. The real challenge to Darwinism, it seems to me, is not whether natural selection can produce the appearance of design (it can - indeed it can actually design, which is why we use GAs) but whether mutational processes can provide a sufficient range of potentially advantageous options to select.
I'd be more than happy to discuss that, but it seems to me that's the real chink in Darwin's armour, not natural selection. Where does the variance come from, and why should there be enough variants that are actually more advantageous than what preceded them to make it work? If that can work, I think that showing that CSI can be generated is pretty easy :) Anyway, thanks for the conversation. I'm sorry I've been a bit sporadic, and that will continue, but I'll try to respond to everyone eventually. If I miss any posts or threads, I'd be grateful if people could jog me with a link. Cheers Lizzie Elizabeth Liddle
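The drunkard-on-a-slope image is easy to simulate; the bias value below is an arbitrary illustrative number, not an estimate of any real selection coefficient:

import random

def walk(steps, bias):
    # bias = extra probability of a "downhill" (+1) step over an "uphill" (-1) step
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < 0.5 + bias else -1
    return position

random.seed(0)
trials = 1000
for bias in (0.0, 0.02):
    mean = sum(walk(1000, bias) for _ in range(trials)) / trials
    print("bias", bias, "-> mean displacement after 1000 steps:", round(mean, 1))

Even a 2% tilt per step turns aimless wandering (mean displacement near zero) into reliable drift (mean near +40), which is the sense in which a biased random walk differs from the pure random walks that the upper-probability-bound arguments address.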
Meleagar @139: "Unfortunately, even if mutations were in fact unpredictable, this wouldn't help your case any, because you are making a case against intelligence, and intelligence is also often unpredictable." Well, I think you're making an assumption about the designer which I have promised to avoid.

"Then it shouldn't be a problem directing me to the model of mutations that shows chance* mutations sufficient (combined with natural* selection) to generate macro-evolutionary success." I refer, as always, to the modern evolutionary synthesis and all its supporting documentation.

"ellazimm said: 'Selection IS NOT random!' I guess that statement would be relevant had I ever claimed it was." I have tried very hard to find the reason for me making that statement and I have failed to do so. If I have misrepresented one of your arguments then I apologise and retract the statement.

"I want to measure the capacity of chance* mutation and natural* selection to produce the macro-evolutionary features they are claimed to have the power to produce." Okay, what do you propose as a unit of measure? What should we be comparing, and to what standard?

"In any event, it's not my job to prove they are not random, it is the job of those that claim that they are random to demonstrate not only that they are (that would be the first part), but that they are sufficient, when combined with natural* selection, to produce what they are claimed to have produced." Well, have you looked at all the statistical data and analysis of mutation rates? Have you read all the research looking at the observed occurrence of mutations? You've got a question, fair enough. Have you gone and looked for the answer? The world is not obligated to you to come and present all the data to you. If you have a question then the obligation is partly on you to go and find out what research and information already addresses your concerns.

above @140: "I am not taking a shot at you, ellazimm. I am merely pointing out the fact that your model is just as much a matter of faith as anyone else's. The point that everyone is trying to make here is that no scientific metric is available (not even in principle, in fact) that will demarcate the issue. The whole 'nature-did-it' is just belief and rhetoric." I understand. But I do not see that ID has multiple threads of evidence like the modern biological synthesis. And no one has yet proposed a definition of a metric, or even what units it would be measured in. You can talk about a metric, but until you give me some idea of what measurable quantity you want to be measured, what units that measurement will be done with, and what scale the measurement will be compared to, then ... it's all just kind of vague and meaningless. You want a yardstick. Okay. Give me the units on the stick at least.

allanius @152: "They obtain hegemony through the thirst for identity. 'God has put eternity into the hearts of men,' and for that reason all men desire an immortal identity. One way to achieve it is through procreation. A better way is through the apparent justification that comes from investing one's identity in the prevailing point of view." Another way to achieve that is to assume an eternal afterlife overseen by a benign creator. Please be honest and complete in your analysis.

And let me offer my apologies to all participants in this thread. I may be very dull but I do have a life and, just now, it precludes me from monitoring things as well as I would like. I'm sure I've missed some things.
I shall do the best I can but my family and my job come first. And really, truthfully, most people here already know my arguments because they are nothing new or original. I am only repeating things that have been elucidated by others to much greater effect. ellazimm
Mrs Liddle, "Could you remind me which one, UB? With a link if possible, or a thread title." You needn't look far, it's the one you just posted on. Scroll north. Upright BiPed
Chris @128: "1. Artificial selection in dogs and cabbages acted purely upon the pre-existing gene pool. Any claim that random mutation was part of these varieties needs to be supported by scientific fact. Absent that, you must concede." Perhaps. Since I cannot produce the pre-existing genome of the Brassica genus from 5,000 years ago, I suspect the best we can do at this point is to declare a draw. But it is the case that many and varied morphologies were observed and recorded which had no previous incarnation.

"The notion that humans were all brown-eyed before blue eyes somehow appeared is just that: a notion. There is absolutely no observational or experimental evidence to support this notion. Sorry. The research you appeal to is simply unscientific (it makes the unwarranted assumption that all humans were brown-eyed originally)." From Wikipedia: The inheritance pattern followed by blue eyes is considered similar to that of a recessive trait (in general, eye color inheritance is considered a polygenic trait, meaning that it is controlled by the interactions of several genes, not just one).[10] In 2008, new research revealed that people with blue eyes have a single common ancestor. Scientists tracked down a genetic mutation that leads to blue eyes. "Originally, we all had brown eyes," said Hans Eiberg from the Department of Cellular and Molecular Medicine at the University of Copenhagen.[35] Eiberg and colleagues showed in a study published in Human Genetics that a mutation in the 86th intron of the HERC2 gene, which is hypothesized to interact with the OCA2 gene promoter, reduced expression of OCA2 with subsequent reduction in melanin production.[36] The authors concluded that the mutation may have arisen in a single individual probably living in the northwestern part of the Black Sea region (around modern Romania) 6,000–10,000 years ago during the Neolithic revolution.[35][36][37] Eiberg stated, "A genetic mutation affecting the OCA2 gene in our chromosomes resulted in the creation of a 'switch,' which literally 'turned off' the ability to produce brown eyes." The genetic switch is located in the gene adjacent to OCA2 and rather than completely turning off the gene, the switch limits its action, which reduces the production of melanin in the iris. In effect, the turned-down switch diluted brown eyes to blue. If the OCA2 gene had been completely shut down, our hair, eyes and skin would be melanin-less, a condition known as albinism.[35]

Dawkins: "It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I'd rather not consider that)." I agree, that statement is nasty and demeaning. Not my style at all. But he is completely straightforward and honest. You know where you stand with Dawkins.

"Be honest, ellazimm. You did not know what I told you about E. coli until you read my words, did you. If the 'very honest' Dawkins didn't know, then what chance did you have!? Seriously, Lenski's research simply shows that, with extreme efforts, you can make some E. coli do what other E. coli has been doing all along. Big deal. If you disagree, what exactly did you have in mind?" You have your sources, I have mine. I think mine are backed up with peer review and a large consensus about what is known and established. I am NOT just depending on my own view or ability to interpret the data.

"5. 'Hey, we may all be descended from bacteria. Who says it hasn't given rise to other life forms?' Oh dear.
You certainly saved the best to last! Bacteria are prokaryotic lifeforms. All plants and animals are eukaryotic lifeforms. Therefore, science says bacteria did not give rise to plants and animals. I'm sorry to say that to even ask such a question demonstrates a massive lack of understanding. You don't even need to read the research! Even Wikipedia will do on this occasion." Read http://www.ncbi.nlm.nih.gov/books/NBK28340/ to see a discussion of the current state of understanding regarding this issue.

Chris @121: "Lizzie and ellazimm, like United, have given their best but have found our assaults on their position indefensible! At least they tried. The likes of paragwinn, who didn't try, should try and follow their example." I would say we have disagreed over interpretation. Is that fair? ellazimm
I see that Dr Liddle continues to drop back in, but has yet to return to the thread she was previously involved in. I do hope she finds the time.
Could you remind me which one, UB? With a link if possible, or a thread title? I'm afraid, having been more used to forum than blog format recently, I haven't yet got the hang of keeping track of different threads. I've now got into the habit of bookmarking each one, but I'm still behind, and I'm aware that there are a lot of posts addressed to me on this one. I'm fairly tied up this week, but I will try to respond to everyone eventually, and I promise not to start any more hares until I've done so! Apologies Lizzie Elizabeth Liddle
Hi again Chris :)
Hiya Lizzie, my turn to butt in! I see that you are arguing over the definition of information. This seems to be a recurring and pointless argument.
No, I'm not arguing over the definition of information. I want to put this absolutely straight. I don't have "a problem" with the concept of information. Or even with any given definition. But there are several, and in science, if you want to demonstrate something, you need what is called an "operational definition". It doesn't matter much what it is, and it can differ from study to study, as long as it is clear. Because if someone makes a claim that X cannot be Y, then unless we have an operational definition for X we cannot either verify or falsify the claim. What it is doesn't actually matter, although obviously the person making the claim should approve the definition.
Disagreement over the definition of information cannot change the fact that information, particularly in the cell, is an observable phenomenon that needs to be accounted for, and, ultimately, there are only two possible sources for that account: Accident or Design.
Well, only if "Accident" and/or "Design" encompass very much more than they usually do, because there are more causal phenomena in the universe than are covered by those two terms, in their normal usage.
It’d be interesting to hear about any kind of empirical basis you have for the notion that the information contained in the cell arose by accident.
And I'd be more than happy to respond, once I have your operational definition of information! I certainly won't argue about it - I just want a definition that is tight enough to serve as an objective criterion by which we can judge whether or not the information I cite in support conforms to your definition. Above, I gave the example of gathering dust "informing" neighbours that the occupant of a house was dead. In one sense that dust conveys information - but that was not (and clearly was not, which was my point) the kind of information Upright BiPed had in mind. I also gave the example of a series of 100 ones and zeros drawn from a flat probability distribution. By some definitions that series contained 100 bits of information. But again, it was not the kind of information Upright BiPed had in mind (clearly, as again was my point), so he/she rightly asked a follow-up: what was the information "about"? Which implies that Upright BiPed's concept of information is a semantic communication between a sender (me) and a receiver (him/her) who share a common language. So I asked: if, as seems perfectly reasonable, he/she regarded "information" as a communication between a sender and a receiver, who or what were the equivalent of the sender and receiver in the context of DNA? (I'm hoping there's an answer to my question above, but I haven't checked yet). It's not that I have any issue with any of these definitions - I'm happy to accept any of them as long as they are operationalised. The most promising candidate seems to be CSI. If someone can point me at an up-to-date operational definition of CSI, I'd be more than happy to use that. Cheers Lizzie Elizabeth Liddle
I see that Dr Liddle continues to drop back in, but has yet to return to the thread she was previously involved in. I do hope she finds the time. Upright BiPed
Thanks! You might have noticed the typo in the 2nd para. BTW: allanius@juno.com allanius
allanius, Your insightful post above deserves its own unique thread. If you like, I'll take care of that for you. GilDodgen
Cultural epochs are self-limiting. The Scholastic age, the Renaissance, the Baroque, the Enlightenment, Romanticism—all of them obtained cultural hegemony for a time, and all of them toppled in the end. They obtain hegemony through the thirst for identity. "God has put eternity into the hearts of men," and for that reason all men desire an immortal identity. One way to achieve it is through procreation. A better way is through the apparent justification that comes from investing one's identity in the prevailing point of view. For this same reason, however, all cultural identities lose their hegemony in the end. These identities are based on the dividing power of intellect, but a dividing power cannot give mortals the thing they desire most. It cannot give them life. Over time, the limitations of all divided identities become too evident to ignore—at which point they are abandoned. The abandonment tends to be rather sudden. There is a tipping point where the same herd mentality that empowers cultural identities begins to work against them and betrays them. Being possessed by the herd, they lose their sense of difference and distinctiveness. They become common and tedious—and then they are vulnerable to a fall. Darwinism, the basis of Modernism, now appears to have reached such a tipping point. The same universality that produced the herd mentality seen in its proponents has now begun to work against it and is making it reactionary and inflexible. It cannot afford to be open to new discoveries in the biological sciences. It cannot afford to be open to design, no matter how self-evident. It has turned in upon itself in order to preserve itself. There was a time when Romanticism/Transcendentalism seemed invincible, but in a few short decades it had been utterly swept away, to the point where it was impossible to imagine it ever returning again. A cultural identity, once used up, is lost forever. It loses its power to satisfy the restless human spirit and its thirst for life. This is what is happening to Darwinism now. Its implied promise of ameliorative evolution and "progressivism" is beginning to fade. It was the narrative of the Modern age, the thesis; the antithesis is coalescing as we speak. Change may still seem far away, but when it comes it is likely to be swift. allanius
Mung: Dembski used T as the term for the cluster of configs from which the observed case E comes. I use it for consistency. As long as T is sufficiently isolated, it is practically unreachable by a random walk based trial and error process on the gamut of our solar system or observed cosmos. T of course is specified on function or meaningfulness in a context. GEM of TKI kairosfocus
...functionally specific complex information [FSCI] is the case where the zone of interest T is specified on a function, e.g. an AA chain must fold properly and work as a Type-X enzyme.
I like the way you put this. T doesn't mean target, though, does it? No targets allowed. Mung
For the record:
I do not see why a “purposeless, mindless process” should not produce purposeful entities, and indeed, I think it did and does. Elizabeth Liddle
Mung
II: On points on your 127: Let us clip and insert comments: _____________ >> EL: At best I see: well all the codes/machines we know the provenance of are designed by intelligence, and living things are codes/machines, so they must have been as well. Which simply isn't a valid inference! As I've said somewhere else recently (this thread?) that's like saying: cats are mammals, therefore all mammals are cats. KF: This is an outright strawman caricature. EL: I'm not convinced it is, Kairosfocus. a: Oh, yes, it is. Let's look at what you say below: KF: You have an empirically known cause of FSCI — which BTW can be and is measured, thank you. Namely, intelligence. b: Just seen above. EL: I'm not sure what FSCI is, but I take it that the SCI part is specified complex information. c: FUNCTIONALLY specified, complex information. And yes, we have a known cause of it in the case of human artefacts. d: In short, you accept that the criterion is empirically reliable in known cases. In addition, we have just that, a known sufficient cause, in a context where the other proposed causes [chance and/or necessity] -- as I said earlier -- are NOT causally sufficient, per direct observation and per analysis. KF: You have a claimed cause, chance and necessity without intelligence, that has not been observed to cause FSCI, e: I.e. I am summarising the point. EL: And here is what I see as the problem. We see CSI, in living things, with no known cause. f: So we should be doing an inference to best causal explanation, and seeking a means to reconstruct the credible cause, on the same basis as geology under Lyell progressed, or evolutionary biology under Darwin, or, more broadly, how Newton inferred to the nature of other star systems: the uniformity principle, where like is expected to be cause of like, where the best explanation has been developed. EL: So we have two sets of things with CSI: 1) Non-self-replicating things observed to be made and designed by intelligent humans 2) Self-replicating living things that appear to make the members of each subsequent generation themselves automatically, and no observed designer. g: All that the replication introduces is a means of propagating the FSCI from one generation to the next [cf discussion by Paley and how it was built on here]; it is not an innovator of FSCI. The question is the SOURCE of the FSCI. h: Further, recall, you are not past the FSCI threshold until the quantum of new, functionally specific info is beyond at least the resources of the solar system, the cosmos we effectively live in, or if you will the observable cosmos. EL: That is not enough to infer that living things were designed by intelligent beings, any more than observing that cats are mammals allows us to infer that all mammals are cats, or, to be a little more subtle, any more than observing that this sand is aeolian in origin allows us to conclude that all sand is aeolian in origin. i: Kindly observe what has been pointed out over and over again, now in thread after thread (the repetition is becoming a pain, as it seems that the point is repeatedly simply being brushed aside): the origin of the FSCI at the origin of cell-based life -- metabolising organisms with a vNSR self-replicating facility -- is of order 100 - 1,000 kbits, and until that functional info is there you do not have such replication of a metabolic entity with a replicating facility that codes the organism and uses that to guide replication.
Secondly, until you have an embryologically feasible new body plan, requiring credibly 10 - 100 million bits for each as previously shown, you do not have the functionality that allows for replication of the new type of organism. And as was shown and linked earlier today, the concept of the smoothly graded branching tree of life is dead, though the advocates are not willing to accept this. EL: Or, even more appositely, that because this snowball was made and thrown by a boy, that the avalanche of rather different snowballs was also made and thrown by a boy. j: This is little more than an irrelevant analogy. We are doing an inference to best explanation for a known and measurable phenomenon, per the sufficient cause for it. It bears repeating that there are no cases where FSCI has been shown, in our observation, to be caused by noise leading to trial and error. This is precisely because the threshold is set so high that the config spaces are not sufficiently traversable by random walks: you are unlikely ever to find islands of function that way. EL: We are back to the null hypothesis problem – why should the null be "intelligent design"? k: False. Null hyp 1, as can be seen from the Chi_500 eqn above or the closely related explanatory filter analysis per aspect of an object or process, is MECHANICAL NECESSITY. Null hyp 2 is CHANCE. Only when both mechanical necessity and chance are causally inadequate will intelligence be considered; indeed, this deliberately builds in a significant level of false negatives, i.e. to gain reliability when "design" is inferred, there is a willingness to tolerate cases where the filter will rule chance and/or necessity for various aspects of the object or process under study. EL: Why not "we don't know?" l: Because we have a long-worked-out classification of causal forces and factors, whereby natural regularity under similar initial conditions traces to forces of mechanical necessity, aka natural law. Similarly, high contingency with a statistically driven distribution traces to chance, and specified complexity to design. This is in fact routinely used in any number of fields of pure and applied science, even just to design and work with experiments. Think about controls and treatments by blocks and plots, etc.: would you consider "we do not know" a satisfactory result when variation is not chance, or would you infer to treatment -- design taking advantage of natural law -- and then, by looking at the degree of treatment applied, try to work out the law? m: Or is it only when the Lewontinian-type a priori assumption comes into play that we suddenly get cold feet? Normal scientific practice is to infer to a known sufficient and superior cause, not to "we do not know" just where it is inconvenient to point to the best and sufficient candidate cause. KF: and which is ALSO analytically — on considerations very close to those lying behind the second law of thermodynamics, statistical form — challenged by the quantum state level resources of the 10^57 atoms of our solar system (at 500 bits) or the 10^80 or so of our cosmos (at the 1,000 bit FSCI threshold). EL: I'm sorry I don't know what this means. n: I am pointing out the basis of the analysis in terms of the space of possibilities for 10^57 atoms in our solar system or 10^80 in our cosmos over 10^17 s since the singularity, etc., and the parallel in statistical thermodynamics, where the direction of spontaneous change is based on the relative statistical weight of clusters of micro-states.
That is the basis for the law of increasing entropy or disorder. KF: So, to infer on best explanation relative to the evidence that FSCI is a reasonable and so far reliable sign of intelligent cause is NOT a circular argument. o: Inference to best explanation is not circular, and is in fact provisional pending further evidence and analysis, as are all scientific findings of consequence. EL: It's not so much circular as unwarranted, IMO. p: Your previous claim was precisely circularity, which I replied to. EL: Sure, living things might have been designed by an intelligent designer, or several. That is an interesting hypothesis, but it is not a justifiable null. q: The only one suggesting that the inference to design is a null hyp is you, I am afraid. It is just the opposite: the result only after TWO successive nulls have failed. EL: A better approach, IMO, is to say: what kind of processes can generate CSI? r: Pardon, but is this not what was gone over, again and again, including in this thread? The only observed sufficient cause of CSI is intelligence, e.g. posts in this thread. Further to this, it was seen that the threshold set for inferring to CSI is at a level where chance and/or necessity will not be able to search the relevant space enough to find the specific zones of interest T, to practical certainty. EL: And one answer could be (and this is testable) processes that involve contingent selection. s: Random walks plus selection will only move to such a zone of interest if the criterion of functional specificity is removed and substituted for by a Hamming-distance metric, so that if we happen to be closer to an island of function, that closeness will be rewarded despite the absence of relevant function. t: In the case of origin of life and of origin of body plans, until a threshold of complex and specific integrated function is passed, the entity cannot self-replicate or reproduce. No differential reproductive success can be relevant in such a case until you are on an island of function. And cf the previous on the failure of the tree of life. EL: We know that intelligence involves contingent selection, and natural selection, by definition, involves contingent selection. u: Not at all: until one has function, and a means of getting to islands of function, random walks will not have any function to reward. Intelligence, on knowledge or imagination of where the islands of function are or may be, can reward closeness, as for instance notoriously happens with Weasel, the program by Dawkins. EL: So the next question becomes: can we distinguish between the products of distal goal-directed contingent selection and mere proximal "goal"-directed contingent selection (e.g. selection that is an automatic consequence of relative reproductive success) v: Yes, on the very criteria pointed out: once islands of function are deeply isolated based on sufficient complexity AND specificity, chance-based random walks and trial and error will be maximally unlikely to reach such an island, on the analytical grounds already given, which are backed up by case after case of observation; e.g. the random-walk-based text generation exercises are able to pick up texts in fields of 10^50 or so possibilities, but 10^500 will overwhelm them. EL: I think we can, which is why I think that natural selection is a better explanation for living things than intelligent design.
Actually, I think it also leads to better theology, but that's a derail. w: Again: until you are credibly able to get to islands of reproductive function, whether for OOL or for novel body plans, you cannot apply the criterion of differential reproductive success. x: I will not delve into theology here. >> __________________ I trust this will help clarify. GEM of TKI kairosfocus
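Since the Chi_500 expression recurs throughout these exchanges, here it is transcribed as code, a sketch following kairosfocus's own description in the preliminaries comment below (the 7-bits-per-character count for 128-state ASCII and the binary judgment S are his; the function name is an arbitrary label):

def chi_500(num_ascii_chars, specified):
    # I = information in bits; S = 1 if judged functionally specific, else 0
    i_bits = num_ascii_chars * 7      # 7 bits per 128-state ASCII character
    s = 1 if specified else 0
    return i_bits * s - 500           # bits beyond the solar-system threshold

# The worked example from the preliminaries: a 3,282-character English post
print(chi_500(3282, specified=True))    # -> 22474 bits beyond the threshold
print(chi_500(3282, specified=False))   # -> -500: e.g. a random string of the same length

Note that everything turns on the binary judgment S; the arithmetic itself is trivial, which is why the discussion keeps circling back to how specificity is to be operationalised.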
Dr Liddle: Let me respond on points, step by step, to your 127. First, though, preliminaries: I: Preliminaries: Do you recall the immediately previous thread that you suddenly and conspicuously walked away from when the way information is encoded in DNA, along the strand (and not across the ladder between strands), was addressed? I think it would be good to go back there and work through the posts and videos. That will set a context for addressing what is on the table here. Secondly, here is the basic definition of information that has been in the UD glossary for several years now, extracted by way of testimony against interest from Wiki:
“ . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken].”
Perhaps you are concerned about how it is quantified. Well, at first level there is a Hartley-suggested metric, picked up by Shannon and others, that uses symbol frequencies viewed as frequentist probabilities, and then deduces for the kth symbol: I_k = log(1/p_k) = - log p_k. This is then extended to the average quantum of info per symbol by a weighted sum: H = - [SUM on k] p_k log p_k. So far we are basically dealing with measures of symbol frequencies, on the idea that the rarer symbols are more surprising, more unexpected, and thus more informative, and using a log metric to allow additivity. It turns out -- by the peculiarities of probabilities -- that a string of random characters will, under this metric, give the peak value of information per symbol [by contrast, in English about 1/8 of the characters in a typical message will be an E]. How do we get to the more conventional sense, where meaning is important, and information is often expressed in coded clusters of symbols with defined vocabularies and rules for meaning? (In cases where information is implicit in a structure or an organised cluster of components to do a function, the structured set of yes/no questions that specifies the outcome is going to be similarly in symbols and will have rules of meaning and a vocabulary.) Sets of symbols imply possible configurations, only some of which are meaningful or functional, i.e. we are looking at islands of function [or more broadly zones of interest] in the space of possible configurations. That means the observed event E comes from a zone of interest T, that can be separately specified or observed. This is of course the root of the idea that we have deeply isolated zones of interest or islands of function in vast seas of possible configs. As you know, if we are dealing with a binary string, each additional bit doubles the config space of possibilities. Beyond 500 bits, the resources of the solar system of 10^57 or so atoms will not be enough to scan more than 1 in 10^48 of the possibilities, so a zone of interest that is sufficiently narrowly defined will be so isolated that it is not accessible to random-walk-driven trial and error search. And, unless a search is matched based on knowledge, the typical search strategy will on average do about as well as that. It is intelligently injected active information that makes a difference and makes reaching such zones within resources a routine observation, e.g. posts in this thread. This is of course the heart of the concept complex specified information [CSI], and functionally specific complex information [FSCI] is the case where the zone of interest T is specified on a function, e.g. an AA chain must fold properly and work as a Type-X enzyme. (Cf Durston et al's estimates for 35 protein families for cases in point.) A simple way to model and measure such FSCI, towards use in the decision-making explanatory filter to decide whether on best explanation something is intelligently caused, is:
1: Define specificity, S, such that once we observe a specification S = 1, and if not, S = 0. (Typically this will be on observed function and on observing or inferring the likely effect of significant random perturbation that moves us a reasonable Hamming distance away from an original observed E within T.) 2: Identify a measure of information, such that I is calculated per symbol frequencies, or, if a storage medium is used, it can be estimated from the storage used at least to an order of magnitude. [E.g. File-X is 127 kbits.] 3: Using the log reduction of Dembski's Chi: Chi_500 = I*S - 500, bits beyond the solar system threshold; Chi_1000 = I*S - 1,000, bits beyond the observed cosmos threshold. 4: In the case of a random string, S = 0. 5: In the case of a string that is set to a fixed repetitive pattern [say Thaxton et al's THE END, repeated], I = 0 or a value near to 0. 6: Taking as a text your 127 up to the smiley, you have 3282 128-state ASCII characters, or 3282 * 7 = 22,974 bits, an I value; and as text in English, S = 1. Chi_500 = 22,474 functionally specific bits beyond the threshold. 7: On seeing such a positive value of FSCI beyond the threshold, I comfortably infer that the cause is not mechanical necessity without intelligent direction, nor is it chance, but intelligence. 8: Similarly, I have several times cited cases from Durston's 2007 Table I, and have set up the inference to design for these protein families thusly:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
[ . . . ]
9: In both cases the matter is simple, and in the case where we directly know the provenance, the inference from FSCI -- as usual -- is accurate. As was shown, a random text and a repeated block would not pass the threshold, for opposite reasons. 10: A simpler, brute force X-metric does much the same, based on storage use. (A worked numeric sketch of these calculations follows this comment.)
That should be enough for basic backdrop. [ . . . ] ______________ kairosfocus
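To make the arithmetic in the comment above concrete, here is a minimal Python sketch of the log-reduced Chi metric as kairosfocus states it. The 7-bits-per-ASCII-character estimate, the 500-bit threshold, and the worked numbers (the 3282-character post; RecA at 832 fits) are taken directly from the comment; the entropy helper is the standard Shannon H mentioned there.

import math
from collections import Counter

def shannon_bits_per_symbol(text):
    # H = - SUM_k p_k log2 p_k, with p_k estimated from observed symbol frequencies
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

def chi_500(i_bits, s):
    # Chi_500 = I*S - 500: positive values lie beyond the solar-system threshold
    return i_bits * s - 500

print(shannon_bits_per_symbol("the quick brown fox jumps over the lazy dog"))  # per-symbol H in bits

i_value = 3282 * 7            # 22,974 bits for the 3282-character post
print(chi_500(i_value, 1))    # 22474 -> design inferred under the metric
print(chi_500(832, 1))        # RecA, 832 fits -> 332 bits beyond
print(chi_500(22974, 0))      # a random string takes S = 0, so Chi is negative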
Mung @119:
What is it about DNA that says that the three bases that code for a particular codon must be arranged in a linear manner on the DNA strand?
ellazimm @122:
If the codons were not read in a particular order then it wouldn’t be a code. There has to be a nomenclature or the data is meaningless. We don’t decide what order to read the letters in a word; that’s defined by the language protocols. Once a scanning order/sequence is selected then that’s it.
Well, I was talking about bases being read in a linear manner, but I suppose the question is just as applicable to codons. The correct answer is: nothing. Consider that you and I both have a copy of the same book. We devise a method of communication where I send you a sequence of numbers which specify a page number (within the book), a paragraph number (on the page), and a word number (within the paragraph). Obviously I could direct you to any page in the book; I don't have to start with page 1. And on the page I could direct you to any paragraph; I don't have to start with the first paragraph. And within the paragraph, I could direct you to any word; I don't have to start with the first word. You could read that book from start to finish in a linear manner and you'd never know the message unless you understood the code (the rule). Mung
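Mung's book-code scheme is easy to make concrete. A minimal Python sketch, assuming both parties hold the same book (the two-page sample here is obviously a stand-in), with 1-based (page, paragraph, word) triples as described:

# Each triple points into the shared book; nothing forces linear reading order.
book = [
    [["the", "order", "of", "reading"],                   # page 1, paragraph 1
     ["is", "fixed", "only", "by", "the", "rule"]],       # page 1, paragraph 2
    [["meet", "at", "noon", "tomorrow"]],                 # page 2, paragraph 1
]

def decode(triples):
    return " ".join(book[p - 1][par - 1][w - 1] for p, par, w in triples)

# The message can jump around the book at will:
print(decode([(2, 1, 1), (2, 1, 2), (2, 1, 3), (2, 1, 4)]))  # meet at noon tomorrow
print(decode([(1, 2, 2), (1, 1, 2)]))                        # fixed order

Reading the book cover to cover reveals nothing; only a receiver holding the rule is in-formed by the triples.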
Elizabeth, though Szostak's definition of information is more than sufficient for the purposes of this debate, I would like to take a 'deeper' look at what information really is. In this following video McIntosh reveals the transcendent nature of information we use in our everyday lives: Information? What Is It Really? Professor Andy McIntosh - video http://www.metacafe.com/w/4739025 And Quantum information, which is shown to be completely transcendent from matter and energy, by Quantum Entanglement and Teleportation, is proven to have a 'deeper' level of conservation than energy does in the first law of thermodynamics, by virtue of the fact that a photon is destroyed in quantum teleportation,,, Quantum Teleportation - IBM Research Page Excerpt: "it would destroy the original (photon) in the process,," http://www.research.ibm.com/quantuminfo/teleportation/ ,,, whereas the quantum information is shown to be conserved,,, Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information in this universe must be complete and instantaneous.) http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html ,,, but to relate the conservation principle of quantum information to classical information, such as what is encoded onto a computer or onto our DNA,,, For years there was a debate brewing between Rolf Landauer, who asserted that information that was encoded on a computer was merely physical (i.e. emergent from a material basis), as all good materialists/atheists must hold classical information to be, whereas people such as Roger Penrose and Norbert Wiener held that the information that was encoded on a computer was its own independent entity that was separate from any matter-energy basis. "Those devices (computers) can yield only approximations to a structure (of information) that has a deep and "computer independent" existence of its own." - Roger Penrose - The Emperor's New Mind - Pg 147 "Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Wiener - MIT Mathematician - Father of Cybernetics ,,, Landauer held his 'materialistic position', that 'information is physical', because it always required a specific amount of energy to erase information. 
Landauer's principle Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of a (specific) amount (at least kT ln 2) of heat." http://en.wikipedia.org/wiki/Landauer%27s_principle (A small numeric sketch of this kT ln 2 bound follows this comment.) ,,, And indeed much ink has been spilt through the years arguing both sides of Landauer's principle. Yet recently there have been some breakthroughs that have finally shown that the classical information, which is encoded on computers, is not 'physical' after all, since it is now shown to be possible to erase the information without consuming energy. Quantum knowledge cools computers: New understanding of entropy - June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that "more than complete knowledge" from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, "This doesn't mean that we can develop a perpetual motion machine." The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what's known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says "We're working on the edge of the second law. If you go any further, you will break it." http://www.sciencedaily.com/releases/2011/06/110601134300.htm and this,,, Scientists show how to erase information without using energy - January 2011 Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, "Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.", Vaccaro explained. http://www.physorg.com/news/2011-01-scientists-erase-energy.html ,,, Thus Elizabeth, it is now shown that the 'classical' information encoded onto computers, and even onto DNA, is not 'physical', or merely emergent from a material basis, as neo-darwinists must hold, but it is shown that classical information is indeed its own independent entity which is transcendent of, and dominant over, any energy-matter basis. ================= As a side light to this, leading quantum physicist Anton Zeilinger has followed in the footsteps of John Archibald Wheeler (1911-2008) by insisting reality, at its most foundational level, is 'information'. "It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Why the Quantum? 
It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum teleportation: Zeilinger's principle The principle that any elementary system carries just one bit of information. This principle was put forward by the Austrian physicist Anton Zeilinger in 1999 and subsequently developed by him to derive several aspects of quantum mechanics. http://science.jrank.org/pages/20784/Zeilinger%27s-principle.html#ixzz17a7f88PM In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html Quantum Entanglement and Teleportation - Anton Zeilinger - video http://www.metacafe.com/watch/5705317/ etc... etc... etc... bornagain77
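The kT ln 2 figure quoted from the Wikipedia article above is easy to check numerically. A small Python sketch; the 300 K room temperature is an illustrative assumption:

import math

k_B = 1.380649e-23      # Boltzmann constant in J/K
T = 300.0               # assumed room temperature in kelvin

E_per_bit = k_B * T * math.log(2)   # Landauer bound: minimum heat per erased bit
print(E_per_bit)                    # ~2.87e-21 J
print(E_per_bit * 8e9)              # erasing one gigabyte at the bound: ~2.3e-11 J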
bornagain77: Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236 bornagain, I really appreciate your links, especially the video links. I've learned a lot from them. You've obviously put a lot of effort into tracking this stuff down and sharing it, and your efforts are greatly appreciated. My post was perhaps a rather lame attempt at a play on words, since in my view the infamous Barbara Forrest is the most disingenuous (this euphemism is a sanitized and politically correct way of saying that she lies and makes stuff up) critic of ID theory whose "works" I have had the displeasure of reading. The major point of my essay is that some stuff is so obviously wrong that one must sacrifice all intellectual integrity to defend it. This is not seeing the forest for the trees. The Darwinian thesis when boiled down to its essentials is that chance and necessity, in about 10^18 seconds with hopelessly inadequate probabilistic resources, accidentally produced Mozart and Mozart's symphonies, and the computer with which I am communicating the information in this post. This proposition is simply ludicrous on its face. As many UD readers know, I am a former militant atheist who at one time could have given Richard Dawkins a run for his money in defending materialistic atheism. Then, one day, I realized what a pathetic, irrational, deceived idiot I had become. GilDodgen
Lizzie, I have no problem with the definition of information. Almost to the person, the people I have seen with a problem with the definition of information are those who wish to twist the definition away from its possible implications - just as you have attempted to do. I don't worry overmuch about the implications; I look to a rational observation of what information is, how it operates, and how it exists. As I already said, my starting point is the historical use of the word; that which gives form, to in-form (from the Latin verb informare), or, from the information-processing domain, a sequence of symbols that can cause a transformation within a system. Either is suitable. If these are not sufficient for you, then I will add this: Information is an abstraction of a discrete object/thing embedded in an arrangement of matter or energy. This definition is fully compliant with what is found at the genomic level, as well as inter-cellular transient signaling systems, and every other instance of information I am aware of. I believe there is a list of requirements for the existence of information. It requires the selection of an object. It requires a mechanism to experience that object in some manner so that an abstraction can be formed. It requires a suitable medium to contain that abstraction, a medium with very specific properties. It requires a mechanism to cause an arrangement of matter/energy (a representation) to be formed, and to establish a relationship between that arrangement and the object it is to represent. And it requires a receiver that has the capacity to properly interpret that arrangement - to be in-formed by the information. Are these not the observable properties of recorded information? Do you know of any recorded information of a thing that did not come into existence by it being experienced in some way? Do you know of any information that is not recorded in a material or energetic medium? Do you know of any information that came to be recorded in a medium without that medium being arranged in order to record it? What you seem to be saying in your argument is that we could have two instances where all the facets of actual information are fully observed (we have an abstraction of a discrete object, recorded by means of an arrangement of matter/energy, that arrangement then being received and decoded by a receiver) but in one case or the other it is not really information based upon - what? Who or what captured the information, and who or what may receive it? I'll stop here, and let you argue your case. Upright BiPed
Correction to post #139: ellazimm said: "But I don’t see how you can prove that mutations are anything but random. They follow random patterns. They are unpredictable." Meleagar
*The point that everyone is trying to make here is that there is no scientific metric. above
@ellazimm, above 104: -"What you have illustrated here is the impossibility of defending scientifically the articles of faith embedded in words such as 'chance' and 'nature' that are upheld as dogma by the naturalists." "You don't like my model, that's fine. But I haven't seen a well defined, worked out alternative. Everyone is taking shots at me but no one has told me why there is a need for a 'metric' or proposed a possible 'metric'." I am not taking shots at you, ellazimm. I am merely pointing out the fact that your model is just as much a matter of faith as anyone else's. The point that everyone is trying to make here is that there is no scientific metric available (not even in principle, in fact) that will demarcate the issue. The whole "nature-did-it" is just belief and rhetoric. above
ellazimm said: "The fact that mutations happen unpredictably." Unfortunately, even if mutations were in fact unpredictable, this wouldn't help your case any, because you are making a case against intelligence, and intelligence is also often unpredictable. ellazimm said: "That mutations can be modelled like other random variables." Then it shouldn't be a problem directing me to the model of mutations that shows chance* mutations sufficient (combined with natural* selection) to generate macro-evolutionary success. ellazimm said: "Selection IS NOT random!" I guess that statement would be relevant had I ever claimed it was. ellazimm asks: "Everyone keeps asking for a metric . . . what is it you want to measure? A capacity? At least tell me the units of the metric you want." I want to measure the capacity of chance* mutation and natural* selection to produce the macro-evolutionary features they are claimed to have the power to produce. ellazimm said: "But I don't see how you can prove that mutations are anything but random. They follow random patterns. They are unpredictable." Do they follow random "patterns", or are they "unpredictable"? If they are stochastically predictable (as a pattern) in the genome, then it should be a relatively simple process to run an analysis to see if random mutations can do what they are claimed to do when it comes to building macroevolutionary features. In any event, it's not my job to prove they are not random; it is the job of those that claim that they are random to demonstrate not only that they are (that would be the first part), but that they are sufficient, when combined with natural* selection, to produce what they are claimed to have produced. Meleagar
Dr. Liddle said: In other words, the onus is on people who dispute ID to falsify the null – to demonstrate that what we see can be sufficiently accounted for by non-intelligent processes. I haven't claimed ID is true or even valid in this debate, much less the null hypothesis. You, on the other hand, have asserted, by reason of defining "evolutionary processes" as "unintentional processes", that unintentional processes are "adequate explanation" for macro-evolutionary success. I have asked you repeatedly to show me where it has been scientifically demonstrated that, as you say and have defined, "unintentional" processes are sufficient explanation for macro-evolutionary success. I don't claim they are, or are not; I don't claim design is, or is not, a sufficient explanation. Please do not attempt to shift the burden onto me when you are trying to justify your own assertions. I'm asking you to deliver the goods on your assertion and your assertion alone: where has science demonstrated that chance* (the asterisk means, in our debate, "unintentional") mutation and natural* selection can sufficiently explain what they are claimed to have produced? If you are going to characterize the mutation as chance*, and the selection process as natural*, then it is you that have made the only claim on the table about the nature of those chance and selection events, accumulatively, that are necessary to produce functioning macro-evolutionary features. You have made a positive assertion that those processes are chance* and natural*; back it up. Show me the metric that supports your claim that those processes: 1) are indeed accurately characterized as chance* and natural* (meaning, in our debate, unintentional), and 2) are demonstrated to be sufficient to the task ascribed to them. Meleagar
A Darwinist (D) having a conversation with an ID proponent (ID) D: Naturalistic forces are sufficient for producing biodiversity. ID: Can you provide any evidence to support your claim? D: It's all over the place. ID: Could you point me to one of those places? D: Well, sure, if you insist. Here is the evidence for Macro Evolution. ID: Please try to focus. We are not talking about Macro Evolution. The issue is whether or not naturalistic forces are "sufficient" to produce it. D: Please tell me why you think an intelligent agent was responsible. ID: Again, I must ask you to stay on topic. We are discussing your claim, not mine. D: I believe that my neo-Darwinistic theory is adequate. Eventually, matter in motion will produce life and leave the appearance of design, even though that design is not real. ID: I understand that you believe in the neo-Darwinian paradigm, but I am asking you if you have any good reasons for believing in it. D: ID is not a rigorous science. ID: ID is rigorous enough that its proponents can produce empirical evidence that lends itself to scientific measurement. Do you have any empirical, measurable evidence to support your position? D: Please define "information." ID: I will be happy to do that at another time, but I am, at the moment, interested in finding out if you can make a rational case for your argument. D: I think evolutionary processes resemble intentional intelligent processes very closely. ID: That is an interesting claim, but I am still hoping that you will defend your original claim, which you seem to have forgotten. D: Well, if you must know, I find the Darwinistic explanation more parsimonious. ID: But do you have any reason to believe that this parsimonious explanation reflects reality or is consistent with the evidence? D: Yes, thousands of scientists believe it. ID: But that is precisely what all the fuss is about. Those scientists, like you, cannot support their beliefs, which is why we are having this discussion. D: Well, I've been busy, and I've slightly lost track of the challenge. But yes, I do think that unintelligent processes can generate intelligent ones. I don't see any good a priori reason to think they couldn't. ID: But do you have any evidentially-based reasons for believing that? D: I have already presented the evidence. ID: Again, you have presented summaries of arguments on behalf of Common Descent. You have not, in any way, presented an argument to support the proposition that naturalistic forces can take life through all the taxonomic levels or produce even one new body plan. D: Please define "naturalistic forces." ID: They are what you thought they were when you said they were "sufficient." Darwinists are fun. You have to love it! StephenB
Cricket is boring, Lizzie! Football is where it's at. Depends who you support right enough... In the interests of avoiding repetition on the same thread, please can I refer you to this comment to explain the difference between artificial and natural selection: https://uncommondescent.com/intelligent-design/at-some-point-the-obvious-becomes-transparently-obvious-or-recognizing-the-forrest-with-alls-its-barbs-through-the-trees/comment-page-3/#comment-383514 As far as bacteria are concerned, again, to avoid repetition on the same thread, please can I refer you to this comment: https://uncommondescent.com/intelligent-design/at-some-point-the-obvious-becomes-transparently-obvious-or-recognizing-the-forrest-with-alls-its-barbs-through-the-trees/comment-page-3/#comment-383560 Italian immunity from heart disease? Reminds me of the Italian immunity from pain: at best, sub-specific variety within a pre-existing gene pool! As for anti-freeze in arctic fish, I do not see an observational or experimental basis to assume that anti-freeze arose as the result of a random mutation (admittedly I disagree with both Michael Behe and yourself here!). At best, the evidence is equally consistent with Intelligent Design in my opinion. Most likely, anti-freeze in arctic fish has always been a part of their gene pool. So you were right: I didn't like your examples... partly because none of them were new, but mainly because assigning "random mutation" to the addition of "information in the genome" is more wishful thinking than proper science. Chris Doyle
Elizabeth, since you consider Szostak, at the forefront of abiogenesis research, will his definition of information suffice? Functional information and the emergence of bio-complexity: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions. http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Hazen_etal_PNAS_2007.pdf Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video http://www.metacafe.com/watch/3995236 Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47 bornagain77
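Hazen and Szostak's measure is simple to compute once F(Ex), the fraction of configurations achieving the required degree of function, has been estimated. A minimal Python sketch with invented numbers for illustration:

import math

def functional_information(f_ex):
    # I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible
    # configurations with degree of function >= Ex
    return -math.log2(f_ex)

# Toy estimate: suppose 1 in 10^11 random RNA sequences binds GTP at the required level
print(functional_information(1 / 10**11))   # ~36.5 bits

# Letter-sequence illustration: one functional 8-letter string out of all 26^8
print(functional_information(1 / 26**8))    # ~37.6 bits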
Hiya Lizzie, my turn to butt in! I see that you are arguing over the definition of information. This seems to be a recurring and pointless argument. Disagreement over the definition of information cannot change the fact that information, particularly in the cell, is an observable phenomenon that needs to be accounted for, and, ultimately, there are only two possible sources for that account: Accident or Design. It'd be interesting to hear about any kind of empirical basis you have for the notion that the information contained in the cell arose by accident. Chris Doyle
Lizzie and ellazimm, like United, have given their best but have found our assaults on their position indefensible! At least they tried. The likes of paragwinn, who didn’t try, should try and follow their example.
We are in for the long game :) Think cricket. Elizabeth Liddle
Hiya Chris!
Hiya Lizzie, What I actually said was: 1. Artificial selection does NOT act upon random mutations. 2. Natural selection acting upon random mutations in a macro-evolutionary manner? Merely a fairytale! And it looks like you’ve made the same mistake as Dawkins: confusing artificial selection (which relies upon Intelligent Design) with natural selection (which relies upon a tautology).
No, I'm not confused. Natural selection is almost a tautology, but that's because it's a complicated way of stating a self-evident truth: that variants that replicate better will be replicated more often. But that truth is equally applicable to artificial selection, the only difference being that what is required to "replicate better" is to exhibit traits that the breeder likes, whereas in "natural selection" the traits could be anything, from traits that mates like, to traits that predators don't like, to traits that enable you to raise your young more effectively, to traits that make you hard for predators to find.
Please provide just one example of a new advantageous random mutation that has added “information in the genome”. Please don’t say “Peppered Moths”.
I would say that any mutation that results in a phenotypic effect has "added information to the genome", but that's probably because there is still a huge gulf between our definitions of information. As for an example of a mutation that results in an advantageous phenotypic effect (e.g. increases the probability that an organism will breed successfully in the current environment), there are a few poster children (antifreeze in arctic fish; nylon-digesting capacity in bacteria; antibiotic resistance in bacteria; that family of Italians with protection from heart disease - I'll have to check that one out). But those are, as I said, poster children. There are countless others that can only be inferred statistically. Polymorphisms with small phenotypic effects are commonplace in all populations, and the only definitive way to determine which are advantageous is to manipulate the environment and watch the allele frequencies change (as with Endler's guppy experiments, or, indeed, the natural experiment offered by antibiotics and bacteria). I anticipate that you won't like these examples, but I'd like to hear your specific objections anyway :) Gottagotobed, I think. Elizabeth Liddle
Greetings GilDodgen, Your blog entry has already provoked a full day of comments from around the world: good stuff, sir. I would just like to defend the "indefensible" (or use of that term anyway!): When Barcelona played Manchester United in the Champions League final last week, Barca's attack was indefensible: nothing my beloved United could do could protect them from defeat! Lizzie and ellazimm, like United, have given their best but have found our assaults on their position indefensible! At least they tried. The likes of paragwinn, who didn't try, should try and follow their example. Chris Doyle
@ Upright BiPed, #125 I hear your frustration, UB, and believe me, I've felt the same in parallel circs! Let me try to address it:
Lizzie, geeeezzz. Your pains to not answer a simple question were illuminating. You first told me that you could show the rise of recorded information by neo-Darwinian processes. I then gave you two descriptions of information which are not in conflict (one employing the classic etymology of the word, as well as another using a more technical description). You ignored those.
No, I didn't. You said I could use any definition I wanted. Then when I took a look at your definition, I found operational issues, which we need to sort out, like who the sender and receiver would be in my demonstration (obviously it's difficult with time and post lags, so I went on regardless).
You then said you were going to send me some recorded information by virtue of 1's and 0s in order to make your point. And then that morphed into tossing a coin a hundred times. I simply asked what the information was about. And then you switched away from recorded information to dusty table tops instead. I am now getting the feeling I am chasing someone who desperately wants to avoid being cornered by taking a stand on anything. This is often called gorilla dust (the dust thrown in the air to confuse an opponent). It doesn't intimate a strong position on your part, so I may not waste my time.
Well, I can only say you are misreading me (though I guess that's understandable, given the gulf). I am more than happy to do what you ask, as long as I have a workable operational definition of what I am supposed to demonstrate. I realise you think you have given me one, but from my PoV it isn't one. This isn't gorilla dust - it's the heart of the entire matter. I am not trying to confuse you - if anything I am trying to show you where the confusion between us lies. Clearly if you don't want to continue, that's fine with me, but I'd be disappointed, as I am posting in good faith. I know this is difficult to understand, and, despite the warmth people here have shown me (for which I am touched and grateful) over the last week or so, I have learned a lot about just how far apart people like me and people here actually are. I'd like to try to close that gap, but if people here insist that any attempt from an "evolutionist" to reframe a question is simply obfuscation and evasion, or even "dissembling", then we are in a closed loop, where any attempt to find common ground is seen as "gorilla dust", while failure to find common ground simply ends the discussion. I hope that won't happen. My offer was to demonstrate how evolutionary processes (specifically, replication with modification plus natural selection) can create information. To do this we have to have an agreed operational definition of information. One that requires that it is something like a message "about" something between two intelligent beings, a sender and a receiver, will obviously land us in circularity, because the challenge is to show that information can be sent between non-intelligent entities (e.g. an environment and a protein). Furthermore, I need to know what the equivalents of the sender and receiver are in the model I am challenged to produce. Do you see the problem? I think it's solvable, but there is still a gulf to cross. - – - – - – - –
Picking up at the point in your response where your neighbors find your dusty table tops and presume that you are in trouble. You seem to presume that this is information. However, the dust on your table is nothing more than the dust on your table. It contains no information whatsoever, Lizzie, none. For the record, the state of a thing does not contain information (unless that state is specifically configured as a container of information, e.g. a book, a disc, a neural cell).
Well, if it contains no information, how come my neighbours gain from it the knowledge that I am probably dead? This is not a sarcastic question, I am just trying to make the point that "information" needs tight operational definition if I am to fulfill the challenge. "The drips from the ceiling told me the pipe had burst" is a perfectly normal human locution, but is not, clearly, the kind of message you mean. We need to find a bomb-proof definition of the kind of message you mean!
There is no information in an atom of carbon, for instance. The state of an atom of carbon is nothing more than the state of an atom of carbon. On the one hand we have a carbon atom with six electrons, six protons, and six neutrons; on the other hand we have the information that an atom of carbon has six electrons, six protons, and six neutrons. Each is a discrete reality which can be independently validated. A physicist can demonstrate that the state of a carbon atom exists, and a librarian can demonstrate that the information exists. The latter, however, required a mechanism to bring it into existence.
And the latter is a communication between two intelligent individuals. I need to know what the communication is supposed to be between in the case of, say, a genome, without, of course, requiring that any part of the system is intelligent, which would be circular.
As for the remainder of your post where you go off into CSI and Shannon information, I can offer a simple piece of advice. I say this without any intention of crudeness – but you really might like to avoid speaking definitively on these topics until you come to grips with exactly what information is, and what it isn’t, and how it exists, and how it comes into existence. At such a point, then you could assess ID on its merits, instead of some caricature in your head.
I am not attempting to "speak definitively" on the subject of information. I am trying to find out what you, Upright BiPed, mean by information in the context of your challenge (which I am all too happy to respond to). It is a word with many meanings, as I think we can agree. But I can't accept a definition of information that is defined in terms of how it comes into existence, obviously, because the question at issue is how it comes into existence! What I do need is an operational definition of information as used in the claim that unintelligent processes can't create it, if I am going to challenge the claim. In peace Lizzie Elizabeth Liddle
Hiya Lizzie, What I actually said was: 1. Artificial selection does NOT act upon random mutations. 2. Natural selection acting upon random mutations in a macro-evolutionary manner? Merely a fairytale! And it looks like you’ve made the same mistake as Dawkins: confusing artificial selection (which relies upon Intelligent Design) with natural selection (which relies upon a tautology). Please provide just one example of a new advantageous random mutation that has added “information in the genome”. Please don’t say “Peppered Moths”. Chris Doyle
Greetings ellazimm, First of all, allow me to echo the sentiments of admiration for your particular participation here. In the heat of battle, such sentiments are easily forgotten. Fair play to you for doing what the vast majority of evolutionists will never, ever do: properly engaging with your opponents. With those sentiments still echoing, I must now point out the major shortcomings in your response to me (post 102). 1. Artificial selection in dogs and cabbages acted purely upon the pre-existing gene pool. Any claim that random mutation was part of these varieties needs to be supported by scientific fact. Absent that, you must concede. 2. The notion that humans were all brown-eyed before blue eyes somehow appeared is just that: a notion. There is absolutely no observational or experimental evidence to support this notion. Sorry. The research you appeal to is simply unscientific (it makes the unwarranted assumption that all humans were brown-eyed originally). 3. Why do you not subject Dawkins' rhetoric to the same level of scrutiny as you do the claims of ID proponents? How can you be so in thrall to a man that you've never met ("Dawkins is very honest actually"): particularly when that man has uttered statements that are so toe-curlingly dreadful? For example: "It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I'd rather not consider that)." "Are there, then, any examples of anti-evolution poseurs who are not ignorant, stupid or insane, and who might be genuine candidates for the wicked category? I once shared a platform with someone called David Berlinski, who is certainly not ignorant, stupid or insane. He denies that he is a creationist, but claims strong scientific arguments against evolution…As I said, he is certainly not ignorant, stupid or insane." 4. Be honest, ellazimm. You did not know what I told you about E. coli until you read my words, did you? If the "very honest" Dawkins didn't know, then what chance did you have!? Seriously, Lenski's research simply shows that, with extreme efforts, you can make some E. coli do what other E. coli has been doing all along. Big deal. If you disagree, what exactly did you have in mind? 5. "Hey, we may all be descendent from bacteria. Who says it hasn't give rise to other life forms?" Oh dear. You certainly saved the best to last! Bacteria are prokaryotic lifeforms. All plants and animals are eukaryotic lifeforms. Therefore, science says bacteria did not give rise to plants and animals. I'm sorry to say that to even ask such a question demonstrates a massive lack of understanding. You don't even need to read the research! Even Wikipedia will do on this occasion. All the best, Chris Chris Doyle
@Kairosfocus, #123:
Dr Liddle Please, please, please!
At best I see: well all the codes/machines we know the provenance of are designed by intelligence, and living things are codes/machines, so they must have been as well. Which simply isn’t a valid inference! As I’ve said somewhere else recently (this thread?) that’s like saying: cats are mammals, therefore all mammals are cats.
This is an outright strawman caricature.
I'm not convinced it is, Kairosfocus. Let's look at what you say below:
You have an empirically known cause of FSCI — which BTW can be and is measured, thank you. Namely, intelligence.
I'm not sure what FSCI is, but I take it that the SCI part is specified complex information. And yes, we have a known cause of it in the case of human artefacts.
You have a claimed cause, chance and necessity without intelligence, that has not been observed to cause FSCI,
And here is what I see as the problem. We see CSI, in living things, with no known cause. So we have two sets of things with CSI: 1) Non-self-replicating things observed to be made and designed by intelligent humans; 2) Self-replicating living things that appear to make the members of each subsequent generation themselves automatically, with no observed designer. That is not enough to infer that living things were designed by intelligent beings, any more than observing that cats are mammals allows us to infer that all mammals are cats, or, to be a little more subtle, any more than observing that this sand is aeolian in origin allows us to conclude that all sand is aeolian in origin. Or, even more appositely, any more than observing that this snowball was made and thrown by a boy allows us to conclude that the avalanche of rather different snowballs was also made and thrown by a boy. We are back to the null hypothesis problem - why should the null be "intelligent design"? Why not "we don't know"?
and which is ALSO analytically -- on considerations very close to those lying behind the second law of thermodynamics, statistical form -- challenged by the quantum state level resources of the 10^57 atoms of our solar system (at 500 bits) or the 10^80 or so of our cosmos (at the 1,000 bit FSCI threshold).
I'm sorry I don't know what this means.
So, to infer on best explanation relative to the evidence that FSCI is a reasonable and so far reliable sign of intelligent cause is NOT a circular argument.
It's not so much circular as unwarranted, IMO. Sure, living things might have been designed by an intelligent designer, or several. That is an interesting hypothesis, but it is not a justifiable null. A better approach, IMO, is to say: what kind of processes can generate CSI? And one answer could be (and this is testable) processes that involve contingent selection. We know that intelligence involves contingent selection, and natural selection, by definition, involves contingent selection. So the next question becomes: can we distinguish between the products of distal goal-directed contingent selection and mere proximal "goal"-directed contingent selection (e.g. selection that is an automatic consequence of relative reproductive success)? I think we can, which is why I think that natural selection is a better explanation for living things than intelligent design. Actually, I think it also leads to better theology, but that's a derail :) Elizabeth Liddle
PS: Please read here on Inference to best Explanation, and here on the application of the scientific method to origins contexts. (This is beginning to go in circles . . . let's hope we can make this a learning spiral instead.) kairosfocus
Lizzie, geeeezzz. Your pains to not answer a simple question were illuminating. You first told me that you could show the rise of recorded information by neo-Darwinian processes. I then gave you two descriptions of information which are not in conflict (one employing the classic etymology of the word, as well as another using a more technical description). You ignored those. You then said you were going to send me some recorded information by virtue of 1's and 0s in order to make your point. And then that morphed into tossing a coin a hundred times. I simply asked what the information was about. And then you switched away from recorded information to dusty table tops instead. I am now getting the feeling I am chasing someone who desperately wants to avoid being cornered by taking a stand on anything. This is often called gorilla dust (the dust thrown in the air to confuse an opponent). It doesn't intimate a strong position on your part, so I may not waste my time. - - - - - - - - Picking up at the point in your response where your neighbors find your dusty table tops and presume that you are in trouble. You seem to presume that this is information. However, the dust on your table is nothing more than the dust on your table. It contains no information whatsoever, Lizzie, none. For the record, the state of a thing does not contain information (unless that state is specifically configured as a container of information, e.g. a book, a disc, a neural cell). There is no information in an atom of carbon, for instance. The state of an atom of carbon is nothing more than the state of an atom of carbon. On the one hand we have a carbon atom with six electrons, six protons, and six neutrons; on the other hand we have the information that an atom of carbon has six electrons, six protons, and six neutrons. Each is a discrete reality which can be independently validated. A physicist can demonstrate that the state of a carbon atom exists, and a librarian can demonstrate that the information exists. The latter, however, required a mechanism to bring it into existence. - - - - - - - - As for the remainder of your post where you go off into CSI and Shannon information, I can offer a simple piece of advice. I say this without any intention of crudeness – but you really might like to avoid speaking definitively on these topics until you come to grips with exactly what information is, and what it isn't, and how it exists, and how it comes into existence. At such a point, then you could assess ID on its merits, instead of some caricature in your head. Upright BiPed
@Chris, #67: Ellazimm has responded to this, but just an additional response:
That’s where Dawkins has led you up the garden path, ellazimm. Artificial selection acts on a pre-existing gene pool: random mutations do not come into it. If anything, the more specialised a variety becomes, the more genetic information is actually lost. Man, using Intelligent Design if you like, can shape a dog or a cabbage using artificial selection with nothing more than the gene pool that was already there in the first place. I repeat: artificial selection does NOT act upon random mutations.
I hear you, but I am wondering what sources you are referencing. As ellazimm says, every human being has a couple of hundred de novo mutations, a few of which will have phenotypic effects. In addition, every human being potentially has brand new alleles, generated by random crossover between parental alleles. Again, these may have phenotypic effects. And any trait with a phenotypic effect is potentially selectable. So to say that selection does not act on random mutations seems, well, odd. Natural selection acts on alleles with a phenotypic effect, and that will include new alleles resulting from "random mutations" as well as changing the frequency in the population of existing alleles. If all natural (or artificial, which is just a special case of a niche environment) selection were simply a question of eradicating unwanted or not-currently-useful alleles from an existing pool, there would indeed be "limits to evolution". But this is not the case. There is a constant supply of new alleles, in every generation. Most of these are likely to be neutral in effect, and a few are disastrous. The neutral ones will sometimes propagate by drift, and sometimes not. The slightly advantageous ones have a greater chance of propagating through the population. And, I submit, it is the information that those advantageous alleles, whether new or existing, are indeed advantageous in this new environment that constitutes the "information in the genome", namely, the information the organism's physiology requires to build a phenotype with the best chance of successful replication in that environment. At least that is the Darwinian case, and there is good evidence to support it. There is certainly good evidence for the regular spontaneous appearance of brand new alleles with phenotypic effects. Elizabeth Liddle
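The claim in the comment above -- that selection acts on alleles arising by random mutation, shifting their frequency -- can be illustrated with a toy haploid model. A minimal Python sketch; the population size, mutation rate, and 5% fitness advantage are all invented for illustration:

import random

def generations_to_fixation(pop_size=1000, mu=1e-4, advantage=0.05):
    # Toy Wright-Fisher-style model: allele B arises from A by random mutation,
    # then each offspring is drawn as B with probability weighted by B's fitness.
    n_b, gen = 0, 0
    while n_b < pop_size:
        gen += 1
        freq = n_b / pop_size
        w_b = freq * (1 + advantage)
        p_b = w_b / (w_b + (1 - freq))       # selection step
        n_b = sum(
            1 for _ in range(pop_size)
            if random.random() < p_b or random.random() < mu  # mutation step
        )
    return gen

random.seed(1)
print(generations_to_fixation())  # B typically fixes within a few hundred generations
                                  # under these invented parameters

Set advantage to 0 and fixation depends on drift alone, taking vastly longer on average; the difference between the two runs is the selective "information" entering the population.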
Dr Liddle: Please, please, please!
At best I see: well all the codes/machines we know the provenance of are designed by intelligence, and living things are codes/machines, so they must have been as well. Which simply isn’t a valid inference! As I’ve said somewhere else recently (this thread?) that’s like saying: cats are mammals, therefore all mammals are cats.
This is an outright strawman caricature. You have an empirically known cause of FSCI -- which BTW can be and is measured, thank you. Namely, intelligence. You have a claimed cause, chance and necessity without intelligence, that has not been observed to cause FSCI, and which is ALSO analytically -- on considerations very close to those lying behind the second law of thermodynamics, statistical form -- challenged by the quantum state level resources of the 10^57 atoms of our solar system (at 500 bits) or the 10^80 or so of our cosmos (at the 1,000 bit FSCI threshold). So, to infer on best explanation relative to the evidence that FSCI is a reasonable and so far reliable sign of intelligent cause is NOT a circular argument. Remember, this is the same sort of claim as the grounds for the laws of thermodynamics: well tested, analytically supported, and open to correction in light of actual observations to the contrary. But, as the conclusions are empirically reliable, the burden of proof lies on him or her who would object. Proof by empirical demonstration: the Internet and the collected libraries and offices of the world provide billions of test cases in point on FSCI, as do ever so many artifacts of a technological civilisation. The infinite monkeys analysis used to support the 2nd law of thermodynamics comes up in support. We now see cases of unknown, unobserved provenance, in the living cell. Do we trust the empirical reliability and the known causal force, as well as the analysis, or do we allow Lewontinian a priori materialism to prevail over the uniformity principle of inferring to a known and empirically reliable causal pattern on observing its signs? Why? GEM of TKI kairosfocus
BA 103: I do think Richard Dawkins is honest to a fault. I know you disagree with him but that doesn't make him a liar. Have you got a critique of Lenski's work other than that put out by the Creation Evolution website? above 104: "What you have illustrated here is the impossibility of defending scientifically the articles of faith embedded in words such as 'chance' and 'nature' that are upheld as dogma by the naturalists." You don't like my model, that's fine. But I haven't seen a well defined, worked out alternative. Everyone is taking shots at me but no one has told me why there is a need for a 'metric' or proposed a possible 'metric'. WJM 107: "What aspect of 'the biological record' can be used to quantify that mutation is chance*, and selection random*, in sufficient capacity to produce the biological record?" The fact that mutations happen unpredictably. That mutations can be modelled like other random variables. Selection IS NOT random! Everyone keeps asking for a metric . . . what is it you want to measure? A capacity? At least tell me the units of the metric you want. WJM 109: "Darwinists claim as scientific fact that mutations are chance* and selection is natural*, and that they are sufficient to produce macroevolutionary features. That claim is a lie. It only takes a little logic to show it to be a lie. Odd how so transparent a lie can be so difficult for so many to see." Well, you don't have to agree, that's okay. But I don't see how you can prove that mutations are anything but random. They follow random patterns. They are unpredictable. You may disagree with the assertion but that doesn't make it a lie. Please be civil. Mung 111: "And of course, all the evidence in favor of the existence of a designer must be disregarded for after all, there is no evidence there has ever been one." I'm sorry that I don't see any good evidence. But even Christians don't agree what parts of the Bible are literal truth and what parts are metaphor. Mung 114: "So if we infer that the eye is designed, what explanation is really the most parsimonious?" The one that makes the fewest assumptions. For me that is the one that does not assume the presence of an unnamed, undefined, undetected designer. I'm even told that some of the questions about the designer are inappropriate. So, one we can't question too closely either. That's too big a pill for me to swallow. Sorry. Mung 117: "What scientific metric exists that can exclude 'the hand of god' from nature, from being the cause of events that we call 'selection' and for 'natural selection' therefore to be guided, purposeful, and intentional? There is none, and there can be none. Thus the exclusion of 'the hand of god' has to be rhetorical, not scientific." What about the inclusion of the hand of god? How can that be scientific if the exclusion must be rhetorical? Mung 119: "What is it about DNA that says that the three bases that code for a particular codon must be arranged in a linear manner on the DNA strand?" If the codons were not read in a particular order then it wouldn't be a code. There has to be a nomenclature or the data is meaningless. We don't decide what order to read the letters in a word; that's defined by the language protocols. Once a scanning order/sequence is selected then that's it. (See the sketch after this comment.) Now I'm going to bed. If anyone can suggest the units of a possible metric I'd appreciate it. And once we've decided on a metric, then what values would indicate that Darwinian processes are adequate? What else would we apply the metric to for comparison? 
What values would be put into the metric? If you want me to answer your question then I'll need some help understanding what you want. ellazimm
Posts inviting responses are sprouting rather faster than I can keep up, but I'll keep trying! Meleager @ #55:
Pardon me for answering under a different name (my work computer won't allow me to sign on via my "Meleagar" identity), but: I never asserted that ID was an adequate explanation. I only asked Dr. Liddle to support her assertion that non-intelligent evolutionary forces were "an adequate explanation".
I hope the modification I have now posted of that assertion makes more sense. The point I was perhaps trying to make, but failing, was that how we look at this depends on what we regard as the null hypothesis. From what I gather from reading the posts here, ID is regarded as a legitimate null. In other words, the onus is on people who dispute ID to falsify the null - to demonstrate that what we see can be sufficiently accounted for by non-intelligent processes. I think that is at the bottom of much of the rancour, actually, because, of course, scientists do not regard ID as a valid null. The null, in science, is not "materialism did it" but "we don't know". If we fail to support a hypothesis we simply conclude that we "don't know". Sometimes the null is more formally stated, but that in the end is what it boils down to. If we cannot show that two groups are different, we do not conclude that they are the same, but that we do not know that they are different, or, at best, that if they are different, the difference was smaller than we postulated. It seems to me that on a level playing field, the same should be true of ID - that if IDists cannot produce evidence that a thing is intelligently designed, then the interim conclusion must be that "we do not know". Now, before a ton of bricks falls on my head, I know that people have offered all kinds of evidence that they think supports ID. But (and I may have missed something) it seems to me that all that evidence amounts to evidence that "materialism" or "Darwinism" cannot explain such and such, therefore ID. At best I see: well, all the codes/machines we know the provenance of are designed by intelligence, and living things are codes/machines, so they must have been as well. Which simply isn't a valid inference! As I've said somewhere else recently (this thread?) that's like saying: cats are mammals, therefore all mammals are cats. However, let's move on...
Note how Dr. Liddle still hasn't answered the question or met the challenge, but is now claiming that evolutionary forces might be considered intelligent, just not intentional; IOW, she's thinking maybe an "intelligent" decision-making computer can be generated from unintelligent processes.
Well, I've been busy, and tbh I've slightly lost track of the challenge. But yes, I do think that unintelligent processes can generate intelligent ones. I don't see any good a priori reason to think they couldn't.
Note how the semantic distinction between “intelligence” and “intentionality” simply moves the question to another position; okay, Dr. Liddle, please show your rigorous evidence that demonstrates that unintelligent forces, chemicals, natural laws, etc. or whatever can generate intelligent processes.
Yes, indeed, it does move the question to another position - I think it moves it to the key position, i.e. the place where we should be looking to cut between intelligence in the sense that human beings are intelligent, with foresight and motivations and goals, and intelligence in the strict sense that Dembski defined it, which specifically excluded intention. As for demonstrating that chemicals, natural laws, etc., can generate intelligent processes, well, first let's be clear whether we are talking about intentional processes or merely intelligent processes as in Dembski's definition of something with the "power to choose between options". If the latter, then I scarcely need to demonstrate it - natural selection (note the word "selection", which is a synonym for "choose") is a process by which the traits that result in the greatest replication rate are best represented in the next generation. What works, in other words, is amplified, and what works less well is inhibited. And, of course, that is exactly what happens in human brains - thought patterns that receive excitatory input from lots of other networks are amplified, and those that receive less are laterally inhibited (what is sometimes called Neural Darwinism). There is one important difference (well, several, but one crucial one, I suggest) between brains and evolutionary processes, which is that brains can simulate the outcome of potential actions without execution, and feed that outcome back as input. This is the key recursion that allows intention, I would argue.
Or is making a distinction between “intelligence” and “intentionality” simply more dissembling in order to continue avoiding the fact that you have provided no scientific basis whatsoever for your assertion that chance* and natural* processes can [insert new begged-question semantic avoidance] generate the “intelligent” non-intentional processes necessary to acquire macro-evolutionary success?
Not at all, and I have to repeat - I do not "dissemble". I have my faults, but lying isn't one of them. What I would like, however, is for you to explain what you mean by "chance*" and "natural*" processes, and what those asterisks mean. Then I will attempt to answer your question. Elizabeth Liddle
What scientific metric exists that can exclude "the hand of god" from being the cause of events that we call "chance" or "random" and for those events thus being guided, purposeful, and intentional? What scientific metric exists that can exclude "the hand of god" from nature, from being the cause of events that we call "selection" and for "natural selection" therefore to be guided, purposeful, and intentional? There is none, and there can be none. Thus the exclusion of "the hand of god" has to be rhetorical, not scientific. I'm usually not much for simply repeating a comment to cheer it on merely because I agree with it. But what the heck, I'll make an exception here. Bravo. nullasalus
Elizabeth Liddle @93:
when DNA...is read in a cell...linearly, what is it that constrains it to be read linearly? What stops it, if not physical/chemical forces, from reading it non-linearly? What constrains it?
First, I'd like to commend you on your effort. But we have known systems of information storage and retrieval. Why not appeal to those for an analogy? Take a hard disk, or CD/DVD or RAM. Think about whether the data is read from them in a linear fashion, and why. What is it about matter, energy, or information that requires that it be read in a linear fashion? Could the information that is stored in DNA be read in a non-linear fashion? What is it about DNA that says that the three bases that code for a particular codon must be arranged in a linear manner on the DNA strand? As to your Shannon Information example: even Shannon Information presupposes the existence of something called information. He just gives a way to measure it. True? Mung
@ Upright BiPed, #106
EL, I am away from my computer for the next short while, and have limited access (on my phone). I have read your reply quickly. Just so as to save time, it looks like you've covered most of the bases necessary to have a discussion – except one. This information you've chosen to send me by virtue of 1s and 0s, I have just one question before I return and give you my response – What is this information about? You are free to make up anything you wish in this regard, just choose what the information is about, and then I will respond soon.
Well, that was exactly why I asked you to give me your definition of information! The nice thing about Shannon information is that it has an absolutely clear operational definition. However, once we start talking about what the message is "about" we are in a whole new ballgame and need another operational definition before I can proceed. Clearly, you are an intelligent, language-using agent, and so am I (I hope). So I can send you a message containing information that can be "about" something, and you can figure out what it is "about". But the whole point of this exercise is that we are trying to apply this to what happens in, for example, a cell. In the cell, if we assume there is an intelligent sender and an intelligent decoder, and they speak the same "language", then how do we allocate the actors in the analogy? We might call the "language" the sequence of nucleotides, and we might call the "message" the protein, and we might call the "receiver" the RNA molecules (although that would be a stretch, IMO). So who is the sender? We cannot say "the intelligent designer" because that would be assuming our conclusion, as would be to say "natural selection". My answer, actually, if we must use this model, which I don't think is great, is that the "sender" is the environment. The "message" encoded in DNA is, in effect, "if you make this protein, now, you will maximise your organism's chances of survival and thus my chance of being replicated". But let me have a go at that challenge anyway: Let's say I am an infirm old lady (not yet, but it'll happen one day I expect), with reasonably nice, but not terribly diligent, neighbours. One morning, my neighbours go past my house, and they notice that a thick layer of dust has settled on the polished table I keep by the window. They think: this is unusual, Lizzie normally keeps her house spotless (heh - this is hypothetical, you understand). So they go in and sure enough, I'm dead as a doornail, and obviously haven't been downstairs for a couple of weeks. That, it seems to me, is an example of information about my health being conveyed by an entirely stochastic process (the gathering of dust on my polished table top). The dust is "saying" to my neighbours: "Lizzie is in trouble". Now, I'm sure you will see a flaw in that analogy, but I'm hoping you can see why an operational definition is so important if we are going to sort this thing out! However, that's fine, because Kairosfocus, above, says: use CSI. heh. So the challenge to me is to show that purely stochastic processes can create CSI. Complex, Specified, Information. As I understand it, Specified information is actually the opposite of Shannon information - it is information that can be highly compressed. But while 1010101010101010 can be highly compressed, it is not complex. On the other hand, 718281828459045 is both complex and can be highly compressed (it is the decimal expansion of e - 2), perhaps with some kind of symbol to indicate the precision. So 718281828459045 exhibits CSI. If we found it in a SETI signal we might infer intelligent life, right? So the challenge for me is to demonstrate that Darwinian processes can generate complex (not just a simple repetitive pattern) but highly compressible information, right? I'll wait for your assent here, and in the meantime have a bit of a play with MatLab and see what I can come up with. Cheers Lizzie Elizabeth Liddle
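The two senses of "compressible" in play above can be made concrete with a small Python sketch. The particular strings, and the use of zlib as a stand-in general-purpose compressor, are illustrative assumptions only, not anything proposed in the thread:

import math, random, zlib
from collections import Counter

def shannon_bits_per_symbol(s):
    # Empirical Shannon entropy of the symbol frequencies, in bits per symbol.
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

repetitive = "10" * 50                      # 1010... : redundant, not complex
e_digits = "7182818284590452353602874713"   # decimal expansion of e - 2
coin_flips = "".join(random.choice("01") for _ in range(100))

for label, s in [("repetitive", repetitive), ("e digits", e_digits), ("coin flips", coin_flips)]:
    print("%-10s entropy/symbol = %.2f  zlib bytes = %d"
          % (label, shannon_bits_per_symbol(s), len(zlib.compress(s.encode()))))

# zlib finds the statistical redundancy in "1010..." but sees nothing special
# in the digits of e: their short description ("expand e - 2") is algorithmic
# (Kolmogorov-style) compressibility, which no statistical compressor detects.

The point worth noting: a general-purpose compressor captures the redundancy of the repeating pattern, but the sense in which the digits of e are "compressible" is that they have a short generating description, and that algorithmic sense is not something a statistical compressor, or the Shannon measure, will register.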
One comes away with a sense that people are talking past each other, to be charitable about it. What scientific metric exists that can exclude "the hand of god" from being the cause of events that we call "chance" or "random" and for those events thus being guided, purposeful, and intentional? What scientific metric exists that can exclude "the hand of god" from nature, from being the cause of events that we call "selection" and for "natural selection" therefore to be guided, purposeful, and intentional? There is none, and there can be none. Thus the exclusion of "the hand of god" has to be rhetorical, not scientific. The belief that chance and selection or nature acting alone can account for anything at all is one of faith. Stop claiming that science supports it. It does not, and it cannot. Mung
William J. Murray: That claim is a lie. It only takes a little logic to show it to be a lie. Odd how so transparent a lie can be so difficult for so many to see. This phenomenon is known as not seeing the forest for the trees. GilDodgen
Holy mackerel! (I've never quite understood how a mackerel could be holy, but that's how the saying goes.) It appears that I've stirred up quite a controversy, and started something akin to a Forrest fire. And yes, I made a booboo concerning the indefensible assault. The assault is defensible, but no reasonable defense can be made against it. GilDodgen
Further on my post @111, I would gather all the evidence that appeared to indicate design, then see if anything could be developed from the totality of the evidence. ellazimm @61:
For me I just find that the modern evolutionary synthesis has much more explanatory power and makes fewer assumptions about the forces utilised. It’s more parsimonious. Occam’s Razor and all that.
It's more parsimonious than what? The explanations offered by the modern evolutionary synthesis are all very different and varied. One thing it cannot reasonably be called is parsimonious. Take for example one structure, the eye. If there were a single explanation for the evolution of the eye, that would be parsimonious. But the eye evolved many times, we are told. And are we supposed to believe that the same mutations and the same sequences of events took place in each case? Hardly. So if we infer that the eye is designed, what explanation is really the most parsimonious? Mung
@WJM -"I was a materialist for quite a long time, but it wasn't because it was an attractive philosophy. I held it because it seemed to me that was what the evidence indicated." That has been pretty much my experience as well and the funny thing is up until I started engaging with philosophical and metaphysical literature I had no idea that I sub-consciously held such a worldview. I think our development in western culture, our education, the force of the media, the blind faith in scientism (not science) and a general attitude favoring materialistic hedonism are what, at least in my experience, were the primary underlying forces that led me to unthinkingly accept such a worldview. It took countless hours of reading, contemplation and dialogue with myself and those close to me to finally liberate myself from the materialistic superstition. @WJM -"My point being: given that there is sufficient scientific rationale (even if one doesn't hold it as "proof") to abandon philosophical materialism, why would anyone want to hold onto it, much less defend its obvious and glaring defects to the point of abandoning logic and right reason? I can see abandoning logic and reason to avoid materialism; I can't for the life of me figure out the appeal." Exactly! I think it has more to do with sophism, surrendering to the authority of scientism and a general ignorance as to what a naturalistic worldview actually entails. @Mung -"And of course, all the evidence in favor of the existence of a designer must be disregarded for after all, there is no evidence there has ever been one." I'm starting to pick up on your humor style. You're actually pretty funny. above
Another thing I wonder: what is it that is so appealing about materialism that people will cling to it in the face of 90 years of quantum, information, and biological research that show it to simply not be true? I was a materialist for quite a long time, but it wasn't because it was an attractive philosophy. I held it because it seemed to me that was what the evidence indicated. My point being: given that there is sufficient scientific rationale (even if one doesn't hold it as "proof") to abandon philosophical materialism, why would anyone want to hold onto it, much less defend its obvious and glaring defects to the point of abandoning logic and right reason? I can see abandoning logic and reason to avoid materialism; I can't for the life of me figure out the appeal. William J. Murray
If I thought something was designed then I would want to know things about the designer and there just is no evidence that there ever has been one.
And of course, all the evidence in favor of the existence of a designer must be disregarded for after all, there is no evidence there has ever been one. Mung
Correction to the end of #107: ellazimm said: "Why don't you come up with one since you're the one questioning the consensus? I'm serious. I don't need a metric. You do." IOW, you find the current Darwinistic explanation "satisfying" not because any rigorous metric has demonstrated nature* and chance* reasonably sufficient (because, as you say, you don't need a metric); you find it satisfying simply because it is the consensus view. William J. Murray
Above, I know that Darwinists cannot provide any support for the chance* & natural* characterization of mutation and selection. The question is, why do they insist on characterizing evolutionary forces according to ideological materialism when, according to them, there is no way to determine them to be such. Someone asked me why I don't require that astrophysicists demonstrate that gravity or entropy is unguided by intelligence (or intention, as per Dr. Liddle); the simple answer is that they do not claim as scientific fact that gravity and entropy are unguided by intelligence (intention). Darwinists claim as scientific fact that mutations are chance* and selection is natural*, and that they are sufficient to produce macroevolutionary features. That claim is a lie. It only takes a little logic to show it to be a lie. Odd how so transparent a lie can be so difficult for so many to see. William J. Murray
This thread is great proof of the content set forth within the OP. Good job Gil! Mung
ellazimm said: "As I've said. I think the support is in the evidence I've cited." The evidence you've cited only applies to a straw man argument for common descent and descent with modification, neither of which I have challenged. ellazimm asks: "How about the biological record as a metric?" What aspect of "the biological record" can be used to quantify that mutation is chance*, and selection random*, in sufficient capacity to produce the biological record? ellazimm said: "OR, how about, if you want a metric, you propose one and then see how things stack up?" Shifting the burden. ellazimm said: "Why don't you come up with one since you're the one questioning the consensus? I'm serious. I don't need a metric. You do." IOW, you find the current Darwinistic explanation "satisfying" not because any rigorous metric has demonstrated nature* and chance* reasonably sufficient (because, as you say, you don't need a metric); you find it satisfying simply because it is the consensus view. William J. Murray
EL, I am away from my computer for the next short while, and have limited access (on my phone). I have read your reply quickly. Just so as to save time, it looks like you've covered most of the bases necessary to have a discussion - except one. This information you've chosen to send me by virtue of 1s and 0s, I have just one question before I return and give you my response - What is this information about? You are free to make up anything you wish in this regard, just choose what the information is about, and then I will respond soon. - - - - Ella, clearly you are just not getting the point being made. Sorry. Upright BiPed
-"Dawkins is very honest actually." Seriously, you would be better off claiming that pigs can fly. The guy is a hateful proselytizer who ironically has become the very thing he claims to want to combat, a religious extremist, with the only difference being that his beliefs rest upon atheism. above
@WJM -"No. Watching mutations occur, even if one had a time-lapse billion-year camera, would not answer the question of whether or not those variations caught on tape were generated by processes that were chance* and natural*, or by intelligent design. What is required is a meaningful metric that describes the creative capacity of undirected, unintelligent mutation and selection processes so that we can be reasonably assured that the macroevolutionary products we see all around us could in fact have been – again, reasonably – generated by the kinds of forces (chance*, natural*) you have said offer satisfying scientific explanations." I don't think a darwinist can provide you with such a demarcation. I highly doubt if that is even possible to do scientifically. What you have illustrated here is the impossibility of defending scientifically the articles of faith embedded in words such as 'chance' and 'nature' that are upheld as dogma by the naturalists. This is why ellazimm is once again trying to shift the burden of proof with his post #102. Strictly speaking there is not even a consensus on what the word 'nature' entails (among naturalists) and what entities are said to comprise it. There was a discussion on this topic several months ago where different definitions of 'nature' provided by naturalists were analyzed and were not even close to being agreeable with one another. An example would be in the philosophy of mind where some proclaimed naturalists have begun adopting a dualist view. I find that violently incompatible with naturalism as do many hardcore naturalists themselves. above
ellazimm, though I disagree with just about everything you wrote, this caught my eye: 'Dawkins is very honest actually.' No, Dawkins IS NOT honest!!! But hey, don't take my word for it, watch for yourself here. Richard Dawkins Lies About William Lane Craig AND Logic! - video http://www.youtube.com/watch?v=t1cfqV2tuOI As well, ellazimm, you state: 'And I think Lenski's research shows exactly what he says it shows.' Well, ellazimm, it turns out that Lenski just recently co-released a paper on his LTEE on E. coli: Genetic Entropy Confirmed (for Lenski's e-coli) Excerpt: No increases in adaptation or fitness were observed, and no explanation was offered for how neo-Darwinism could overcome the downward trend in fitness. http://crev.info/content/110605-genetic_entropy_confirmed bornagain77
Chris: ".....do you concede my point: artificial selection in cabbages and dogs is acting upon a pre-existing gene pool and has nothing to do with random mutations?" Nope. Random mutations happen all the time at a fairly well-defined rate. There is a pre-existing gene pool but, especially with artificial selection, desirable variations can get fixed in the population quickly. "Variation in eye colour, like so many other human characteristics, is simply part of our pre-existing gene pool." Well, from what I've read, the blue-eyed allele probably arose once, percolated along under the surface and then, once expressed in a recessive individual, got selected for. It did not exist previously. Sorry. Read the research. Dawkins is very honest actually. And I think Lenski's research shows exactly what he says it shows. Hey, we may all be descended from bacteria. Who says it hasn't given rise to other life forms? WJM: "Yes, we know what you believe. You've stated it several times. I'm not trying to discern what you believe or what you think, but rather if what you believe and what you think has any rational or scientific support that you can argue here or direct anyone to. Apparently, the answer is no." As I've said. I think the support is in the evidence I've cited. "No. Watching mutations occur, even if one had a time-lapse billion-year camera, would not answer the question of whether or not those variations caught on tape were generated by processes that were chance* and natural*, or by intelligent design. What is required is a meaningful metric that describes the creative capacity of undirected, unintelligent mutation and selection processes so that we can be reasonably assured that the macroevolutionary products we see all around us could in fact have been – again, reasonably – generated by the kinds of forces (chance*, natural*) you have said offer satisfying scientific explanations." How about the biological record as a metric? OR, how about, if you want a metric, you propose one and then see how things stack up? Honestly, the biologists are asking for a metric. Why don't you come up with one since you're the one questioning the consensus? I'm serious. I don't need a metric. You do. Come up with one and we'll see how it works. Deal? ellazimm
ellazimm said: "I think they already have been shown to be sufficient to the task at hand." I ask you to direct me to where, and this is your answer? ellazimm said: "I believe that evidence exists in the fossil record, the geographic distribution of species, shared morphology and the copious DNA evidence. I think the quantification is there." Yes, we know what you believe. You've stated it several times. I'm not trying to discern what you believe or what you think, but rather if what you believe and what you think has any rational or scientific support that you can argue here or direct anyone to. Apparently, the answer is no. ellazimm asks: "What kind of proof do you want?" I haven't asked for proof. I've only asked you to reasonably support your assertions. ellazimm said: "Stick around for a couple of million years and see what happens!! But will that satisfy those that say that the design implementation is at the mutation level??" No. Watching mutations occur, even if one had a time-lapse billion-year camera, would not answer the question of whether or not those variations caught on tape were generated by processes that were chance* and natural*, or by intelligent design. What is required is a meaningful metric that describes the creative capacity of undirected, unintelligent mutation and selection processes so that we can be reasonably assured that the macroevolutionary products we see all around us could in fact have been - again, reasonably - generated by the kinds of forces (chance*, natural*) you have said offer satisfying scientific explanations. I'm not asking for "proof" that those kinds of forces in fact generated biological diversity; I'm just asking for rigorous support that they are reasonably capable of generating such macroevolutionary features. It's a reasonable request. One wonders why you and Dr. Liddle work so hard to simply avoid directly responding to such a simple and reasonable request. William J. Murray
PS: I mean info in the practical, day-to-day sense of posts etc. I doubt that most of us are particularly interested in identifying how many bits per second (regardless of functional content) can be sent down a Gaussian white noise, bandlimited channel. Dembski and others have done us a good service by highlighting how to characterise and measure such functional information. kairosfocus
Dr Liddle: Actually, no: information is not measured by how much or how little Shannon-metric info is sent, but by how much specified complexity (especially functionally specified complexity) is sent. In that case we have a communication system and the pattern is distinct from noise. We have had an encoding, transmission, protocol, decoding and recognition. In this case, as the message is distinct from noise and fits a known pattern, it is signal; since the channel path is known to be contingent, what appears will be from noise, from signal, or from both. In more complicated situations, we could have orderly sequence complexity specified by natural law, e.g. crystal unit cell ordering and packing. In that case the threefold possibilities allow us to discern law, chance and choice by the signs, in light of accessible dynamics. If we see a defect in the crystal scattered at random it would be best to infer to chance, but if there is a discernible and functionally informative (not just orderly -- think ferrimagnetic materials here) pattern to the defects, that would be a sign of the crystal being used as a storage unit. GEM of TKI kairosfocus
Hi Ellazimm, I appreciate the pressing demands of family (and other social distractions!) so take your time with any response you may care to post. Now then, back to artificial selection. First of all, given that you've changed the subject matter from cabbages and dogs to bacteria, lactose and eye colour, do you concede my point: artificial selection in cabbages and dogs is acting upon a pre-existing gene pool and has nothing to do with random mutations? Dawkins himself fails to appreciate the crucial involvement of Intelligent Design in artificial selection and the significant difference it makes here. Lactose tolerance is a fuzzier characteristic than the rest because it is often acquired (or lost) due to environmental factors. Nonetheless, any genetic component to it is something that is part of our pre-existing gene pool. The same can be said of eye colour. Even if we take seriously (I don't, by the way) the claim that the human race is artificially selecting for more blue-eyed babies, this has nothing to do with random mutations. Variation in eye colour, like so many other human characteristics, is simply part of our pre-existing gene pool. These two examples represent neither artificial nor natural selection. They are merely examples of sub-specific variety (sometimes referred to as micro-evolution). Finally, bacteria. I'm glad you mentioned it actually because I can now reproduce a statement that has yet to be addressed by any evolutionist. Before that, let me just point out that E. coli has several pre-existing enzymes - coded from its gene pool - that use and digest citrate, especially in the absence of oxygen. The only problem E. coli normally has is bringing citrate through its membrane in the presence of oxygen. Nonetheless, E. coli (outside of Lenski's experiment) has been identified which can do just this thanks to an over-expressed protein. There are also plasmids which perform the same function on its behalf. I bet Dawkins didn't mention that, did he! To be fair to him, he probably didn't even know that himself. I do not disagree that random mutations and bacteria certainly appear to go hand-in-hand. Bacteria are believed to have been in existence for 3,000,000,000 years. They have the ability to reproduce asexually so quickly that populations can double in size every 10 minutes. They thrive in all environments, extreme or otherwise. They obtain genetic information from plasmids, bacteriophages, mutations and even other bacteria (no matter how distantly related they may be). There are about 5,000,000,000,000,000,000,000,000,000,000 of them on the planet today. With so many features to facilitate evolution, bacteria should have given rise to a multitude of species: things like flowers, fish, trees, whales, fungi, mice, canaries, dinosaurs, humans, etc. All those random mutations, all that time and all those opportunities for natural selection... yet not a single body plan or even body part to show for it! If evolution predicts a tree of life, then why are bacteria still just an acorn? Chris Doyle
Dr. Liddle says: "No, I don't have a citation for a formalised version of the limits I suggested, but they seem intrinsic to me." Your suggested limitations are without value in our discussion, because they do not address the specific issue at hand - whether or not chance* and natural* processes are reasonably capable of producing those series and collections of stepwise and treelike variations. What you need to produce is support for the assertion that chance* and natural* mutations & selections are explanations in any scientific sense at all (seeing as you have wisely walked back the claim of "adequacy"). Dr Liddle asked: "And I'm not at the moment quite sure what kind of "vetting" you mean - can you provide operational definitions for "natural" and "chance"?" Seeing as you have "begged the question" from "intelligence" back to intentionality, I mean (for the sake of our particular discussion) processes that do not require intentionality. Dr. Liddle said: "Well, as I've said elsewhere, falsification isn't, in general, how science proceeds, pace Popper. Rather, we fit models to data, and discard the models that fit less well." I haven't asked for a falsification. I've asked you to provide support for your assertion. If you are going to assert that such processes are a scientific explanation without any necessary intentionality, then please direct me to where intentionality has been quantified and shown unnecessary to the evolutionary process because chance* and natural* processes were shown to be sufficient. Dr. Liddle said: "In my view evolutionary processes fit the data better than any other model." So, with our new understanding of what you mean by "evolutionary process", will you please direct me to where "non-intentional processes" have been rigorously vetted as sufficient to produce successful macro-evolutionary outcomes? I'm anxiously waiting to see where science has quantified "intentionality". Dr Liddle said: "I do think that we have good models for all those phenomena, models that predict new data, and for which new data have been found that support the models." Please direct me then to the predictive evolutionary model that defines, quantifies and utilizes the characteristic parameters of mutation and selection we call "chance" and "natural" with, of course, footnotes that show where mutation and selection processes have been vetted as non-intentional. William J. Murray
UB: "There isn't a single evolutionary book that shows the rise of the information REQUIRED for your "system of modification with common descent". One cannot exist without the other, Ella." I think they all show how life forms whose recipes (DNA) are subject to random modifications are 'selected' as being more or less fit by the environment and competition with other organisms. That information is preserved and stored because those that have it propagate more. The source of the information in the genome is a long dance between DNA and the environment. And the DNA is very . . . loose. Over the billions of years it's tried millions and millions and millions of different configurations. And the ones that dance better make more babies. And they pass on their information. And their babies try new variations. That's where I think the information arises. And that is inherent in any good book on evolution. "You take it for granted Ella, that's the cheap way out. You can do better than that." Well, I don't think I can match the multiple lines of evidence that I've elucidated above. KF: The fossil evidence does not contradict the Darwinian hypothesis. The fossil record is bound to be spotty; it's the nature of that evidence. The genome of the platypus . . . well, that I know nothing about so I'll pass on that discussion. But I can't help but think that the idea that hundreds, thousands of working biologists have been brainwashed into following a party line is . . . a bit paranoid. I prefer to think that people are much like me: honest, sometimes confused, but basically sincere. And there are millions of qualified people who have looked at the evidence who think that the modern evolutionary synthesis is valid. It's not just down to you and me. It's down to the non-conspiracy consensus. ellazimm
Groov: Just for fun: often "basic" mathematics to a practising engineer -- which Gil is [think the dynamics to make a parachute that is dirigible] -- would be categorised as "higher" for most people. As in start from Kreyszig and go on from there. GEM of TKI kairosfocus
EZ: I have provided a 101 look at the evidence that points to that collapse. To begin with, the fossil evidence did not support Darwin's icon -- the only diagram in Origin. Darwin hoped that the then relatively sparse evidence would be filled in as he desired. With 250,000+ fossil species in hand and millions of specimens in museums, billions observed, that has not happened, starting with the Cambrian. And remember, we are here seeing the opposite of what was expected: sudden appearance of top level categories of life, in a context where if they were there in the required numbers to support that much of a burst of massive evolution of body plans, we should have seen the fossils, even if soft bodied and even if microscopic. The overwhelming pattern, onward, is sudden appearance in the layers, stasis, disappearance or continuation into the modern world. The molecular trees that were hoped to be the salvation of the case then fell into contradictions to one another and to the gross anatomy tree. (Cf the linked.) To cap it all off, it turns out that the genome of the platypus shows a mosaic from several branches of the vertebrate family of animals [just as the gross anatomy suggests], and that of the kangaroo shows vast swathes of the human genome sitting there, for species whose lines are said to have diverged 150 mn YA. The evidence is that of a library adapted to specific cases, not of a branching tree. And, it is of a discrete categorisation into distinct kinds. The headlines, the confident textbook and brochure declarations may suggest or outright say otherwise, but the pattern is of islands of function. Precisely what one would expect of a code-based, symbolic system that needs to construct meaningful structures that must work in the real world. You may choose to deny this, but that is the actual state of the facts, as I took time to document and have now repeatedly linked. In addition, the best explanation -- the only observationally supported one -- for FSCI is intelligent cause. GEM of TKI kairosfocus
Upright BiPed, @ # 84
EL, "What makes you think that "there is nothing in the material make-up that sets this characteristic as a matter of physical necessity?" What do you think, in the cell, constrains DNA to be "read in a linear fashion" if not "physical necessity"? Design. A system set up with foresight to the retrieval of recorded information, including the information required to build and operate the system doing the retrieval.
I can't have made myself clear - my question was much simpler than the one you answered! I will rephrase: when DNA, today, this minute, is read in a cell, in my body, linearly, what is it that constrains it to be read linearly? What stops it, if not physical/chemical forces, from reading it non-linearly? What constrains it? (Analogy: if I'd asked the question: what stops me getting into last summer's jeans? the answer you gave me was the equivalent of "your inability to stay on your diet" whereas what I was after was "the waistband" :)) Hope that is clearer! As to your request - thanks for reposting your response, I'm sorry I missed it (I did, in fact, go looking). But you have made it far too easy for me by letting me choose my definition! Let's say I choose Shannon information, so that if I send you a message that you cannot predict in advance, then I have sent you information, right? So if I send you a series of 100 ones and zeros, and I arrange it so that at each position, ones and zeros are equiprobable, then I have sent you 100 bits of information, right? Well, I don't even need natural selection to do that, I can just toss a coin 100 times! And, by an entirely stochastic process, I have sent you 100 bits of information. So on that definition, any stochastic process creates information. Indeed, the more "intelligent" the process, the less information I actually create. If, instead of coin tosses, I sent 1010101010101010101....., you'd start to make some pretty good guesses at the rest of the series, so the amount of new information I'd created would be very small. And indeed, the message would be extremely compressible as a result. So by that definition, "intelligent design" is marked not by how much information is generated, but how little. And this is actually very useful - we can, for example, analyse the output of what is supposed to be randomly generated 1s and 0s and figure out that the generator is a human being rather than a coin-tossing machine, because interestingly, human beings are very bad at reproducing flat probability distributions. So can you give me a definition that will provide me with a slightly greater challenge :p More to the point, one that gets to the heart of what you think evolutionary processes can't do? Elizabeth Liddle
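The human-versus-coin-tossing-machine point lends itself to a minimal Python sketch. The run-length statistic below is just one simple diagnostic among several one could use, and the figures in the comments are approximate; all of it is illustrative rather than anything proposed in the thread:

import random

def longest_run(bits):
    # Length of the longest run of identical consecutive symbols.
    best = cur = 1
    for previous, current in zip(bits, bits[1:]):
        cur = cur + 1 if previous == current else 1
        best = max(best, cur)
    return best

# 100 fair coin tosses: each position carries one bit of Shannon information.
machine = [random.randint(0, 1) for _ in range(100)]
print("longest run in 100 fair tosses:", longest_run(machine))

# In 100 fair tosses the longest run of identical outcomes is typically 6-7.
# Sequences people write down by hand to "look random" tend to alternate too
# often and rarely contain runs longer than about 4 -- which is one way to
# tell the coin-tossing machine from the human.

Run against a genuinely random series, this will usually report a run of six or more, which is exactly the kind of feature hand-made "random" sequences tend to lack.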
Ella, Perhaps you are just being obtuse today. "I think any good book on the modern evolutionary synthesis shows how information can arise within a system of modification with common descent". There isn't a single evolutionary book that shows the rise of the information REQUIRED for your "system of modification with common descent". One cannot exist without the other, Ella. It is simply taken for granted, blindingly so, just as you just did yet again. It sometimes seems quite impossible to get a materialist to focus on the issues at hand. They are so wound up in their prior assumptions that they simply cannot see that they are missing the important pieces of the puzzle. You take it for granted Ella, that's the cheap way out. You can do better than that. Upright BiPed
WJM: I apologise if I answered questions you did not ask. "It doesn't prove that Darwinistic forces **actually** created anything; it just demonstrates them scientifically capable of producing the claimed product within reasonable (qualified stochastic) parameters. And that's all I'm asking for: that you support your claim that Darwinism offers a satisfying explanation by directing me to where the chance* and natural* characteristics of the processes in question (mutation and selection) have been properly quantified as sufficient to produce macro-evolutionary success." I think they already have been shown to be sufficient to the task at hand. I believe that evidence exists in the fossil record, the geographic distribution of species, shared morphology and the copious DNA evidence. I think the quantification is there. What kind of proof do you want? Stick around for a couple of million years and see what happens!! But will that satisfy those that say that the design implementation is at the mutation level?? ellazimm
UB: I know, I know, I said I was going. And I will! Promise! "No, actually I don't, since there are no books that demonstrate the rise of information without a living thing. That's the point. You assume it can happen, but you have not a shred of evidence that it can happen." I think any good book on the modern evolutionary synthesis shows how information can arise within a system of modification with common descent. I think there's over 150 years of evidence. You've read the books, you disagree. I've got nothing new to add. Best to leave that there. Well . . . science is about finding plausible models that have explanatory power. You think my model is not plausible. Fair enough. Propose another. Be specific. If there was design then when and what? (We'll leave off why and how.) Make sure your when and what have explanatory power, that they explain some of what we see. You say no materialist answers your questions; well, I have a hard time getting an ID proponent to answer these questions. AND, once you've answered those, show that there was a designer present at the time to implement the design interventions you propose. Show that the design inference sits on multiple lines of positive evidence. And no begging the question: In my experience design only arises from an intelligence and I see what I perceive to be design therefore there was a designer. Give some other, independent evidence. Make the design inference that much stronger and undeniable. It seems to me that the design inference hangs on two major hypotheses: In our experience, complex specified information and design only arise from intelligent agents. AND there is no proof that non-intelligent processes are capable of creating life and/or imbuing life with complex and specified information. In some ways ID IS like archaeology. And there have been moments when things have been discovered which looked designed but dated too early based on our understanding of the evolution of human beings. And those moments are open issues until they are further explored or verified. They DO NOT overthrow the existing paradigm until there is an accumulation of more, independent evidence. If an archaeologist tries to prove that some artefact is human-designed then there has to be independent evidence that there were humans about to do the designing. IF you think the designer is transcendent and removed from the world to the point that no evidence of intervention need be left then that will always be outside the realm of science. That's not part of this world/universe, it's outside. And that cannot be analysed and sliced and defined. But then it cannot be an understandable and definable explanation for events. And that's why Sagan said what he did: we cannot do science with things that don't play by the rules. And now I really do have to go. ellazimm
ellazimm, This is about the time that I - and others, I would suppose - begin to suspect that you are deliberately employing evasive and distracting tactics. Note - you responded: "Why are you assuming intelligence is the null hypothesis and I have to prove non-intelligence?" I have assumed no such thing. I have only asked you and Dr. Liddle to support your asserted contentions - that the characterization of the evolutionary processes of selection and mutation as chance* and natural* have been scientifically demonstrated as reasonably capable of producing what they are claimed to have produced. ellazimm said: "But I don't know that there was an intelligence there to do the producing." Now you are doing what is called shifting the burden. Instead of supporting your own assertion, you want someone else to provide evidence against your assumption. ellazimm said: "I think you're begging a question. In my opinion. You're satisfied with the design inference when you don't know that there was a designer around at the time that was capable and motivated to do the designing." I can only interpret this as deliberate obfuscation. I never said I was satisfied with the design inference, nor have I claimed that there was a designer around, nor that any supposed designer was "motivated". You seem to be saying: "You can't support your claims, either!" only, I haven't made such claims here that require supporting - I've only asked you and Dr. Liddle to support your assertions. Perhaps you should wait until I make an assertion before you ask me to support it or accuse me of "begging the question". ellazimm said: "Whatever designing you are proposing. No one seems to have narrowed that down yet." I haven't proposed any designing. I am asking you to support your assertions. ellazimm stated: "How would you prove non-intelligence?" I don't know. It's not my job to "prove" it (support the claim) because I haven't made such a claim. Should I give you and Dr. Liddle and Darwinists a pass simply because you are foolish enough to make a claim you cannot support? Here's a suggestion: when it has been pointed out to you that you have made an unsupportable claim, rescind your claim. ellazimm said: "Some ID proponents accept some form of 'micro' evolution thereby assuming that random mutation and natural selection can fix some new morphologies." What some ID proponents accept has nothing to do with whether or not you can direct me to any rigorous, scientific support for your claim here that chance* mutations and natural* selection offer a satisfying scientific explanation. ellazimm said: "And how would you show that a beneficial mutation was random? You could always make the argument that that's how the designer implements design." It's not necessary to prove that any particular mutation was random; it's only necessary to demonstrate that the claimed product of the process in question is within the reasonable parameters of what the process can be shown to produce. It doesn't prove that Darwinistic forces **actually** created anything; it just demonstrates them scientifically capable of producing the claimed product within reasonable (qualified stochastic) parameters. And that's all I'm asking for: that you support your claim that Darwinism offers a satisfying explanation by directing me to where the chance* and natural* characteristics of the processes in question (mutation and selection) have been properly quantified as sufficient to produce macro-evolutionary success.
Please note: how many responses, and still I have not been directed to any legitimate source that has even made an attempt to quantify evolutionary forces as chance* or natural*, yet Darwinists are "satisfied" that evolutionary processes are in fact chance* and natural*. How can they be, when there is **zero** evidence they are, and when mainstream science itself claims there is no metric capable of making such a distinction when it comes to evolution? I suggest your satisfaction is borne from faith in other, more ideological considerations, and is not - could not be - the result of any evidence. William J. Murray
Meleager @ #21: Apologies for the delay in responding:
Elizabeth: In other words, you cannot direct me to where such limitations have been formally provided by pro-Darwinists, and you cannot provide any answer to the challenge of where evolutionary processes have been scientifically vetted as natural* or chance*.
No, I don't have a citation for a formalised version of the limits I suggested, but they seem intrinsic to me. And I'm not at the moment quite sure what kind of "vetting" you mean - can you provide operational definitions for "natural" and "chance"?
You are doing nothing but assuming your conclusion that such processes, however you narrate them as “stepwise” or “treelike”.
I don't think so. "Stepwise" and "treelike" are direct predictions from Darwin's theory. If we find evidence that life unfolds in a non-stepwise, or non-treelike fashion, then clearly that raises doubt about the theory. And indeed, we already know that the "tree" is bushier than Darwin's theory predicts.
You are stating that Darwinism is “an adequate explanation” without even providing a rigorous explanation of the power (and limits of that power) of chance* and natural* processes claimed to be “an adequate explanation”.
I did not say that "Darwinism" was adequate. I said "evolutionary processes" but I should have been clearer - I think that the evolutionary processes that have been hypothesised and tested (which extend well beyond Darwin's) are a good fit to the data. No theory will ever be entirely "adequate", and it was a poor choice of words. But my position right now is that there is no glaring gap that shows no promise of being filled by something other than an Intentional Designer (note that I did not say "intelligent"!)
How can you claim those processes are adequate, if you cannot even direct me to where they have been vetted as adequate via a rigorous falsification metric?
Well, as I've said elsewhere, falsification isn't, in general, how science proceeds, pace Popper. Rather, we fit models to data, and discard the models that fit less well. Only very occasionally is falsification used, apart from falsification of the null, of course, but that isn't what you mean. In my view evolutionary processes fit the data better than any other model. I do not think that ID is a well fitting model, and indeed, I have seen few, if any, attempts to fit it.
If there is no rigorous means to examine the computational and engineering limitations of the natural* and chance* processes claimed to be sufficient for producing macro-evolutionary successes (such as winged flight and stereoscopic, color vision), then how can one possibly be satisfied that such processes are “an adequate explanation”?
Well, I will gladly walk back the word "adequate". I do think that we have good models for all those phenomena, models that predict new data, and for which new data have been found that support the models. I am a little puzzled though by your references, with asterisks, to natural* and chance* - do they link to a definition somewhere that I should know about? Cheers Lizzie Elizabeth Liddle
Ella, Also, I noticed in your answer, you immediately tried to change the subject to evolution, and chimps, and mutations, and what not... It doesn't work. It is the rise of information that is at issue. And once again, you cannot demonstrate the rise of information without a prior living thing. Upright BiPed
Caught my eye: the so-called "unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline" In my experience, the branches of engineering are amalgams of rigorous and heuristic approaches, even given a mathematical subject such as classical signal and systems theory. And engineering expertise has a prerequisite - education in higher mathematics and analysis. groovamos
Ella, "You know what I would say." No, actually I don't, since there are no books that demonstrate the rise of information without a living thing. That's the point. You assume it can happen, but you have not a shred of evidence that it can happen. Upright BiPed
EL, "What makes you think that "there is nothing in the material make-up that sets this characteristic as a matter of physical necessity?" What do you think, in the cell, constrains DNA to be "read in a linear fashion" if not "physical necessity"? Design. A system set up with foresight to the retrieval of recorded information, including the information required to build and operate the system doing the retrieval. And now it's your turn: you've removed from your explanatory toolbox anything but the physical make-up of the material itself. So, what in the material make-up of the DNA molecule establishes linear decoding as a physical requirement? How was this requirement manifest in the material, and how did this manifestation play a role in the existence of the decoding system? - - - - - - - - As for the previous post regarding the onset of information, here is my last response to you:
You are going to demonstrate how neo-darwinism brought information into existence in the first place??? Please feel free to use whatever definition of information you like. If that definition is meaningless, then we'll surely both know it. For what it is worth, I follow the common etymology of the word: that which gives form, to in-form (the Latin verb informare). This definition is not in conflict with the more technical definition of a set of symbols that can cause a transformation within a system. The problem you face is not the definition of the word so much; it is that the state of an object must be in some way "sensed" or "experienced" in order for the information to come into existence. That is the mechanism which is missing from the narrative; it is simply taken for granted for the past 60 years. I am delighted that you intend to tackle it here and now. :)
Upright BiPed
UB: "I think evolution shows this. And your other points. By all means, show me." You know what I would say. What books I would point you to. Time to leave it I think. We're not going to agree so there's no point. But, if I did try to show you . . . and I found a complete step-by-step mutational path from, say, the common ancestor of humans and chimps (impossible since we don't have the DNA) all the way down to humans, it would still be possible for one branch of ID theory to say that all or some of the mutations were 'designed'. I don't think I can ever prove it to everyone's satisfaction. And I find no intelligence more parsimonious than an unknown and unproven intelligence. ellazimm
:-) Lizzie is back and I have to start taking care of my family. I'll try and check on the thread later but I've got a meeting of my local archaeology club this evening (where we will be hoping to infer design in some crop marks . . . if they're not blindingly clear the answer is no) so I might not make it back. Night all! ellazimm
ellazimm,
"You can’t demonstrate the onset of information without a prior living thing.”
I think evolution shows this. And your other points.
By all means, show me. Upright BiPed
WJM: "If you cannot support the claim that the processes involved were non-intelligent, then one shouldn't make the assertion that they were non-intelligent by characterizing them as chance* mutation and natural* selection." Why are you assuming intelligence is the null hypothesis and I have to prove non-intelligence? Because we know intelligence can produce complex, specified information? I agree it can. But I don't know that there was an intelligence there to do the producing. I think you're begging a question. In my opinion. You're satisfied with the design inference when you don't know that there was a designer around at the time that was capable and motivated to do the designing. Whatever designing you are proposing. No one seems to have narrowed that down yet. How would you prove non-intelligence? Would you have to rerun all of evolution to show that each and every step happened without direction? Some ID proponents accept some form of 'micro' evolution thereby assuming that random mutation and natural selection can fix some new morphologies. And how would you show that a beneficial mutation was random? You could always make the argument that that's how the designer implements design. ellazimm
OK, teabreak.... Upright BiPed @ #52:
“And it is this similarity, I submit, that is reflected in their “CSI”, not the additional factor of “intention”.” The sequence of chemical symbols in DNA is useless unless it is read in a linear fashion. Otherwise no information would come from it. But there is nothing in the material make-up of the DNA molecule that sets this characteristic as a matter of physical necessity. So the question becomes, can it be said that DNA was not intended to be read in a linear fashion?
What makes you think that "there is nothing in the material make-up that sets this characteristic as a matter of physical necessity?" What do you think, in the cell, constrains DNA to be "read in a linear fashion" if not "physical necessity"? Are you actually suggesting that something other than physical/chemical forces constrains the linear reading? If so, what? If not, I'm not getting your point!
- - - - - - - - - By the way, Dr Liddle, you were going to demonstrate how neo-Darwinian processes brought information into existence in the first place. I am eagerly awaiting your explanation.
Glad you mentioned that - I lost the URL of the thread in question (or couldn't find it in the threads I had bookmarked). I am more than willing, as long as (as I think I asked) you give me the definition you are using for "information" (there are of course several :)) Elizabeth Liddle
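Since several definitions of information are in play in this exchange, a minimal sketch of one standard technical definition, Shannon's average information per symbol, may help fix ideas. The Python below is illustrative only; the example string and the use of observed symbol frequencies as the source model are assumptions for the demonstration, not anyone's position in the thread.

import math
from collections import Counter

def shannon_entropy_bits(seq):
    # Average information per symbol, in bits, estimated from observed frequencies.
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # arbitrary example string
h = shannon_entropy_bits(dna)
print(f"{h:.3f} bits/symbol, {h * len(dna):.1f} bits total")
print("maximum for a base-4 alphabet is 2.0 bits/symbol")

Note that this measure quantifies capacity only; it says nothing about function or meaning, which is precisely the gap the commenters here are arguing over.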
WJM: "I’m challenging the assertion that the process of that descent with modification can be properly, scientifically characterized as being by chance* and natural* processes." But, by my way of seeing things, there is no other viable method. Can't be design without a designer. UB: "You can’t demonstrate the onset of information without a prior living thing." I think evolution shows this. And your other points. I think self-replication with modification is selected by the environment so that organisms are 'designed' that are better and better able to exploit the environment (including other organisms). Sometimes gene drift has an effect, sometimes sexual selection. Some times the steps are bigger or smaller depending on the random modification. Sometimes the environment changes. Have you ever asked yourself why you don't believe Darwinism? Why you are right and thousands, millions of other sincere and intelligent people are wrong? ellazimm
If you cannot support the claim that the processes involved were non-intelligent, then one shouldn't make the assertion that they were non-intelligent by characterizing them as chance* mutation and natural* selection. Or does everyone get to make negative claims and then not have to support them? William J. Murray
KF: I don't think I am begging the question, obviously. I disagree that the Tree of Life model has collapsed. I don't think the preponderance of the evidence points that way. And I don't find the design inference, as it stands right now, to be the most parsimonious answer or the one with the most explanatory power. There's no real point to hashing it over again. I was just trying to answer a question, not get into an argument. Not only does the truth or falsehood of the modern evolutionary synthesis not depend on my ability to defend it; NEITHER does the design inference depend on your convincing me that it's true. ellazimm
"Which is why I do not want to propose a designer without several lines of evidence for there being one" How many do you need? You can't demonstrate the onset of information without a prior living thing. You can't demonstrate the existence of symbolic representation without a living thing. You can't demonstrate the presence of an abstraction without a living thing. You can't explain the existence of a decoding system without appealing to foresight. You can't explain the existence of matter arranged to record discrete information without a mind. The list goes on and on. Upright BiPed
ellazimm says: "Which is why I do not want to propose a designer without several lines of evidence for there being one." I'm not talking about proposing a designer at this point. All I'm talking about is how you (and Dr. Liddle) have justified your view that Darwinism offers a "satisfactory [or adequate] explanation". eallazimm stated: "But, I have several lines of evidence which indicate not only is the system up to the task but that it is unguided. I know you know what I’m going to say but I’ll repeat them anyway: Fossils, geographic distribution of species, morphology and the DNA evidence. I’ve discussed these before and anyone who is interested in reading up on them will be able to find many good explanations of the evidence. " None of the things you mentioned have anything whatsoever to do with what we are talking about. We are talking about qualifying the collections of mutations, and the sequences of selection, as chance*, and as natural*. Your evidence (to whatever degree) is that general evolution occurred via descent with modification from a common ancestor through genetic variations in populations; I'm not challenging that. I'm challenging the assertion that the process of that descent with modification can be properly, scientifically characterized as being by chance* and natural* processes. "AND my own poor ability to argue in their favour does not indicate their truth or falsehood. " While a "poor ability to argue in their favour" doesn't matter as to the validity of darwinism itself, it does directly indicate that your belief in them as "satisfying explanations" is suspect and probably not well-founded. William J. Murray
KF: I never doubt you. I do disagree with you sometimes though. :-) ellazimm
EZ: Pardon a direct question: why do you keep begging the question of getting to islands of function in beyond astronomically large config spaces, as again just linked on? FYI, on fair comment, the Darwin tree of life type model has collapsed. The expectation that functionally specific complex organisation and associated information (especially coded information) will come in deeply isolated islands of function is supported by the actual evidence. There is no smoothly graded branching tree of life with a root in one or a cluster of original microorganisms, and of course OOL is an even more blatant example that design is the best explanation of what we can actually observe. GEM of TKI kairosfocus
KF: Yes, I know the Sagan quote. Only natural processes can be examined by holding some parameters constant. By definition, something that is super-natural cannot be so constrained and so cannot be studied by the scientific method. WJM: "I'm not asking you to prove a negative. I'm asking you to support the positive claim that natural* and chance* processes are reasonably capable of producing what you have claimed they explain." I have. I find the lines of evidence I've already mentioned, and I'm sure you are very familiar with, to be sufficient proof. What I was referring to as trying to prove a negative was your question as to whether I could prove that the processes were non-intelligent. Chris: "I repeat: artificial selection does NOT act upon random mutations." So, if a mutation occurred which pushed the breed in the direction the breeder wanted, the breeder would not be acting on random mutations? I'm thinking of Lenski's experiments getting colonies of bacteria to adapt to a different food source. Or humans becoming lactose tolerant. Or blue eyes (which are recessive) becoming quite common. Are the last two examples natural or artificial selection? What if humans are themselves breeding for certain characteristics? KF: Yes, I know that. I don't see an 'edge' to evolution. ellazimm
PPS: If you doubt me, cf here on the darwinian tree of life and in context on related claims and issues. kairosfocus
WJM: "No, it’s not. Occam’s razor and the principle of parsimony states that you don’t want to multiply explanatory entities beyond necessity." Which is why I do not want to propose a designer without several lines of evidence for there being one. "Fewer assumptions? You have **assumed** your entire engine of evolution to be something you cannot even begin to verify – chance*, and natural*." But, I have several lines of evidence which indicate not only is the system up to the task but that it is unguided. I know you know what I'm going to say but I'll repeat them anyway: Fossils, geographic distribution of species, morphology and the DNA evidence. I've discussed these before and anyone who is interested in reading up on them will be able to find many good explanations of the evidence. AND my own poor ability to argue in their favour does not indicate their truth or falsehood. And, as I said, I was only trying to give an idea of why I find Darwinism more parsimonious. "Yet you (and Darwin) feel perfectly comfortable pointing at products of artificial selection and claiming it provides evidence of what natural selection can do! Will you also point at genetic engineering as an example of what chance* mutations can do?" Of course not. Natural and artificial selection are both non-random processes. Genetic engineering is likewise non-random whereas the proposed mechanism of mutations is. ellazimm
PS: And, EZ, surely you know or should know by now that the issue is not to hill climb within islands of function, but to get to the shores of such islands in beyond astronomical config spaces. Intelligence we know can routinely do that, but chance and necessity, we have good analytical and empirical reasons to see, cannot credibly do so within the gamut of our observed cosmos. kairosfocus
That's where Dawkins has led you up the garden path, ellazimm. Artificial selection acts on a pre-existing gene pool: random mutations do not come into it. If anything, the more specialised a variety becomes, the more genetic information is actually lost. Man, using Intelligent Design if you like, can shape a dog or a cabbage using artificial selection with nothing more than the gene pool that was already there in the first place. I repeat: artificial selection does NOT act upon random mutations. Natural selection, acting upon random mutations, has nothing to do with the present variety of cabbages and dogs. Chris Doyle
ellazimm states: "You know you can’t prove a negative like that!!" I'm not asking you to prove a negative. I'm asking you to support the positive claim that natural* and chance* processes are reasonably capable of producing what you have claimed they explain. All positive claims carry with them either explicit or implicit negative counter-claims. ellazimm states: "And, I would say, the default assumption is not guided by intelligence unless there is other evidence of intelligence present for the task." So you are going to defend your claim of satisfactory explanation by claimming it is a "default assumption"? Do you know the difference between an assumption and a satisfactory scientific explanation? William J. Murray
EZ: Unfortunately, this is the crucial assumption at work:
To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]
(And in case you think this is idiosyncratic and personal, observe the other three excerpts here, from NAS and NSTA as well as Coyne.) That's worldview-level question-begging. And worse, in fact the claimed cases of chance, or chance and necessity, giving rise to functionally specific complex organisation and information consistently are either shown on inspection to be actually intelligent design, or else are cases where we do not and cannot observe. The ongoing saga of ev and the GA's here at UD is a capital example in point. The only -- and routinely -- observed causal source of FSCI (this and other posts in this thread are examples in point) is design. As Einstein famously said: everything should be as simple as possible, but not simpler than that. In other words, if the account cannot account coherently for the observed facts, or imposes question-begging and censoring a prioris etc. [as we just saw], it is not simple, but instead it is simplistic. GEM of TKI kairosfocus
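For readers who have not followed the ev and GA threads, the structure of the simplest such program, a Dawkins-style "weasel", can be sketched in a few lines. The Python below is a minimal sketch only; the target phrase, mutation rate, and population size are illustrative assumptions. Both sides agree on what such code does; the dispute above is over what it demonstrates, since the target and the fitness measure are supplied by the programmer.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # specified in advance by the programmer
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.04):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    generation += 1
    # Selection: keep the fittest of the parent plus 100 mutated offspring.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
print(f"matched target in {generation} generations")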
Chris: "Artificial selection absolutely requires Intelligent Design – that provided by man. Without it, we would never have seen all of those varieties of cabbages and dogs. It is a fact that Natural selection alone didn’t manage it despite having tens of millions of years to do so." But, that kind of artificial selection is just as dependent as natural selection on the variation provided by random mutations. So, I would argue (as does Dawkins and Darwin) the the capacity for random mutation providing the raw materials for selection to work on has been established. ellazimm
ellazimm said: "For me I just find that the modern evolutionary synthesis has much more explanatory power and makes fewer assumptions about the forces utilised. It’s more parsimonious. Occam’s Razor and all that." No, it's not. Occam's razor and the principle of parsimony states that you don't want to multiply explanatory entities beyond necessity. If one is going to that chance* mutations and natural* selection have explnatory power in any evolutionary sequence, it must at least be vetted that the evolutionary outcome is at least reasonably possible given a qualified stochastic analysis of the real potentials of the processes in question. But there is no such metric that has ever been offered that verifies chance* and natural* processes up to the task - indeed, Darwinists assert there is no such metric. Fewer assumptions? You have **assumed** your entire engine of evolution to be something you cannot even begin to verify - chance*, and natural*. Yet you (and Darwin) feel perfectly comfortable pointing at products of artificial selection and claiming it provides evidence of what natural selection can do! Will you also point at genetic engineering as an example of what chance* mutations can do? William J. Murray
WJM: "Please refer me to any paper or research which has vetted any of the collective mutations or series of selection events as chance* or as natural* (meaning, not guided by intelligence)." You know you can't prove a negative like that!! And, I would say, the default assumption is not guided by intelligence unless there is other evidence of intelligence present for the task. ellazimm
WJM: I realise you didn't ask, I'm just trying to give a sense of why I find the design inference unsatisfying. It just doesn't answer enough questions for me. I hear lots of ID proponents saying the same about Darwinism: you can't prove this or that happened. And I suppose that will lead some to say that the real difference between a Darwinist and a non-Darwinist is the basic commitment/assumption of only mechanical processes. I suppose that is arguable. It certainly seems to get argued about a lot. For me I just find that the modern evolutionary synthesis has much more explanatory power and makes fewer assumptions about the forces utilised. It's more parsimonious. Occam's Razor and all that. It being about 3:40pm here in England I suspect Lizzie is busy at work. :-) ellazimm
ellazimm states: "I know these examples are not considered adequate but to me they DO show the amazing power of random mutation and selection with no design on the molecular level." Please refer me to any paper or research which has vetted any of the collective mutations or series of selection events as chance* or as natural* (meaning, not guided by intelligence). If you are going to claim that macro-evolutionary feats (or any evolutionary feat) demonstrate the power of chance* mutation and natural* selection, then you are obliged to demonstrate (not narrate or assert, but demonstrate) how you have vetted those collections and sequences of mutation and selection as being unguided by intelligence. William J. Murray
F/N: Recall, Paley's watch example goes on to the issue of the watch with a self-replicating facility, and argues that the addition of a unit capable of replicating the watch is a significant further reason to infer to design. I find that in the literature that dismisses Paley, this is as a rule not addressed in any serious fashion, and mostly simply not discussed in the haste to say that self-replication is the key answer to Paley. kairosfocus
Now we have unintelligent forces generating intelligent, *unintentional* processes that can mimic intentional intelligent processes. Heck, why not just have unintelligent forces mimic intentionality, too? All without - so far - a shred of evidence that shows such processes, forces and materials up to the task of generating anything of the kind. Nothing but imagination and hope that chance can work a series of materialist miracles. William J. Murray
Hi ellazimm: sounds like you've been reading "The Greatest Show on Earth"! I'm still reading it myself... surprisingly, it keeps getting bumped down by other books (though it is much more tolerable than 'The God Delusion')! Anyway, regarding cabbages and dogs. There is actually a fundamental difference between artificial selection and natural selection. Not that Dawkins realises it. Artificial selection absolutely requires Intelligent Design - that provided by man. Without it, we would never have seen all of those varieties of cabbages and dogs. It is a fact that Natural selection alone didn't manage it despite having tens of millions of years to do so. Yes, artificial selection is pretty remarkable. Natural selection alone, not so. Natural selection acting upon random mutations in a macro-evolutionary manner? Merely a fairytale! I personally see no role for "the Darwinist proposed process" in any of the four things I listed. Chris Doyle
KF: I agree that we frequently infer design without knowing much about the designer. But in order to prove the point when challenged, I, personally, would want to be very specific about what I was claiming was designed, and I would want to be very sure that I had other evidence that a designer with that claimed capacity was around at that time. Again, this is just to answer the question: why can't Darwinists accept the design hypothesis? For me it's because I can't get answers to the questions that come up. ellazimm
ellazimm said: "I would ask the same of ID. Without knowing the power, forces or capability of a designer how can you be sure there was one capable of what is being claimed? Setting aside the fact that the claim is not yet clear." Pardon me for answering under a different name (my work computer won't all me to sign on via my "Meleagar" identity), but: I never asserted that ID was an adequate explanation. I only asked Dr. Liddle to support her assertion that non-intelligent evolutionary forces were "an adequate explanation". Note how Dr. Liddle still hasn't answered the question or met the challenge, but is now claiming that evolutionary forces might be considered intelligent, just not intentional; IOW, she's thinking maybe an "intelligent" decision-making computer can be purchased from generated from unintelligent processes. Note how the semantic distinction between "intelligence" and "intentionality" simply moves the question to another position; okay, Dr. Liddle, please show your rigorous evidence that demonstrates that unintelligent forces, chemicals, natural laws, etc. or whatever can generate intelligent processes. Or is making a distinction between "intelligence" and "intentionality" simply more dissembling in order to continue avoiding the fact that you have provided no scientific basis whatsoever for your assertion that chance* and natural* processes can [insert new begged-question semantic avoidance] generate the "intelligent" non-intentional processes necessary to acquire macro-evolutionary success? William J. Murray
Chris: Are you saying that every step of the development of life on Earth was guided and intentional? Did it develop roughly through the Darwinist proposed process but with the designer creating the beneficial mutations, or were there major leaps of new body forms and structures? Many of the species of Brassica plants (kale, cabbage, kohlrabi, cauliflower, broccoli and Brussels sprouts) all arose from the same plant base. They were created via random mutation and selection; not natural selection, I grant you, but the variation that the selection acted on arose the same way. I think that's pretty remarkable and all within the last few thousand years. Same with the varieties of dogs. I know these examples are not considered adequate but to me they DO show the amazing power of random mutation and selection with no design on the molecular level. ellazimm
F/N: On Dembski defining intelligence: Being sufficiently interested to follow up, I find this from WD, in 2000:
8. The Distinction Between Natural and Non-Natural Designers But isn’t there an evidentially significant difference between natural and non-natural designers? It seems that this worry is really what’s behind the desire to front-load all the design in nature. We all have experience with designers that are embodied in physical stuff, notably other human beings. But what experience do we have of non-natural designers? With respect to intelligent design in biology, for instance, Elliott Sober wants to know what sorts of biological systems should be expected from a non-natural designer. What’s more, Sober claims that if the design theorist cannot answer this question (i.e., cannot predict the sorts of biological systems that might be expected on a design hypothesis), then intelligent design is untestable and therefore unfruitful for science. Yet to place this demand on design hypotheses is ill-conceived. We infer design regularly and reliably without knowing characteristics of the designer or being able to assess what the designer is likely to do. In his 1999 presidential address for the American Philosophical Association Sober himself admits as much in a footnote that deserves to be part of his main text (“Testability,” Proceedings and Addresses of the APA, 1999, p. 73, n. 20): “To infer watchmaker from watch, you needn’t know exactly what the watchmaker had in mind; indeed, you don’t even have to know that the watch is a device for measuring time. Archaeologists sometimes unearth tools of unknown function, but still reasonably draw the inference that these things are, in fact, tools.” Sober is wedded to a Humean inductive tradition in which all our knowledge of the world is an extrapolation from past experience. Thus for design to be explanatory, it must fit our preconceptions, and if it doesn’t, it must lack epistemic value. For Sober, to predict what a designer would do requires first looking to past experience and determining what designers in the past have actually done. A little thought, however, should convince us that any such requirement fundamentally misconstrues design. Sober’s inductive approach puts designers in the same boat as natural laws, locating their explanatory power in an extrapolation from past experience. To be sure, designers, like natural laws, can behave predictably. Yet unlike natural laws, which are universal and uniform, designers are also innovators. Innovation, the emergence of true novelty, eschews predictability. It follows that design cannot be subsumed under a Humean inductive framework. Designers are inventors. We cannot predict what an inventor would do short of becoming that inventor. But the problem goes deeper. Not only can’t Humean induction tame the unpredictability inherent in design; it can’t account for how we recognize design in the first place. Sober, for instance, regards the intelligent design hypothesis as fruitless and untestable for biology because it fails to confer sufficient probability on biologically interesting propositions. But take a different example, say from archeology, in which a design hypothesis about certain aborigines confers a large probability on certain artifacts, say arrowheads. Such a design hypothesis would on Sober’s account be testable and thus acceptable to science. But what sort of archeological background knowledge had to go into that design hypothesis for Sober’s inductive analysis to be successful? At the very least, we would have had to have past experience with arrowheads. 
But how did we recognize that the arrowheads in our past experience were designed? Did we see humans actually manufacture those arrowheads? If so, how did we recognize that these humans were acting deliberately as designing agents and not just randomly chipping away at random chunks of rock (carpentry and sculpting entail design; but whittling and chipping, though performed by intelligent agents, do not). As is evident from this line of reasoning, the induction needed to recognize design can never get started. My argument then is this: Design is always inferred, never a direct intuition. We don’t get into the mind of designers and thereby attribute design. Rather we look at effects in the physical world that exhibit the features of design and from those features infer to a designing intelligence. The philosopher Thomas Reid made this same argument over 200 years ago (Lectures on Natural Theology, 1780): “No man ever saw wisdom [read “design”], and if he does not [infer wisdom] from the marks of it, he can form no conclusions respecting anything of his fellow creatures.... But says Hume, unless you know it by experience, you know nothing of it. If this is the case, I never could know it at all. Hence it appears that whoever maintains that there is no force in the [general rule that from marks of intelligence and wisdom in effects a wise and intelligent cause may be inferred], denies the existence of any intelligent being but himself.” The virtue of my work is to formalize and make precise those features that reliably signal design, casting them in the idiom of modern information theory. Larry Arnhart remains unconvinced. In the most recent issue of First Things (November 2000) he claims that our knowledge of design arises not from any inference but from introspection of our own human intelligence; thus we have no empirical basis for inferring design whose source is non-natural. Though at first blush plausible, this argument collapses quickly when probed. Piaget, for instance, would have rejected it on developmental grounds: Babies do not make sense of intelligence by introspecting their own intelligence but by coming to terms with the effects of intelligence in their external environment. For example, they see the ball in front of them and then taken away, and learn that Daddy is moving the ball--thus reasoning directly from effect to intelligence. Introspection (always a questionable psychological category) plays at best a secondary role in how initially we make sense of intelligence. Even later in life, however, when we’ve attained full self-consciousness and when introspection can be performed with varying degrees of reliability, I would argue that even then intelligence is inferred. Indeed, introspection must always remain inadequate for assessing intelligence (by intelligence I mean the power and facility to choose between options--this coincides with the Latin etymology of “intelligence,” namely, “to choose between”). For instance, I cannot by introspection assess my intelligence at proving theorems in differential geometry, choosing the right sequence of steps, say, in the proof of the Nash embedding theorem. It’s been over a decade since I’ve proven any theorems in differential geometry. I need to get out paper and pencil and actually try to prove some theorems in that field. Depending on how I do--and not my memory of how well I did in the past--will determine whether and to what degree intelligence can be attributed to my theorem proving. 
I therefore continue to maintain that intelligence is always inferred, that we infer it through well-established methods, and that there is no principled way to distinguish natural and non-natural design so that the one is empirically accessible but the other is empirically inaccessible. This is the rub. And this is why intelligent design is such an intriguing intellectual possibility--it threatens to make the ultimate questions real. Convinced Darwinists like Arnhart therefore need to block the design inference whenever it threatens to implicate a non-natural designer. Once this line of defense is breached, Darwinism quickly becomes indefensible.
In clarifying his terms, WD says: (by intelligence I mean the power and facility to choose between options--this coincides with the Latin etymology of “intelligence,” namely, “to choose between”). We may safely adjust this in light of remarks he has made elsewhere:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi.)
So, adjusting: (by intelligence I mean the power and facility to choose between options [towards a goal] --this coincides with the Latin etymology of “intelligence,” namely, “to choose between”). Therefore it is an error of misreading to imagine that Dembski's definition of intelligence in the context of design excludes purposefulness or intent. GEM of TKI kairosfocus
"And it is this similarity that, I submit, that is reflected in their “CSI”, not the additional factor of “intention”." The sequence of chemical symbols in DNA is useless unless it is read in a linear fashion. Otherwise no information would come from it. But, there is nothing in the material make-up of the DNA molecule that sets this charateristic as a matter of physical neccesity. So the question becomes, can it be said that DNA was not intended to be read in a linear fashion? - - - - - - - - - By the way Dr Liddle, you were going to demonstrate how neo-darwinian processes brought information into existence in the first place. I am eargerly awaiting your explanation. Upright BiPed
Okay, here's a list of things for starters:
1. The Solar System
2. The Earth
3. Life on Earth
4. The Cell
Depends what you mean by micro-evolution. I personally prefer the term 'sub-specific variety' (or variation within a pre-existing gene pool of a species) and I don't personally credit it with anything particularly remarkable: certainly nothing that sheds any light on the four things listed above. Chris Doyle
I know, I know, ID doesn't ask some of my questions and I'm not expecting answers really. But the question has been asked: how come 'Darwinists' cannot accept the design inference? Well, it's 'cause we can't help but ask all the follow-on questions. When I think of Paley's watch example I can see how all the design questions can be answered. We can answer a lot of the how, why and when questions. I will find the design inference a lot more satisfying and plausible when some of those issues are addressed. But, as I've promised, I'm not going to push it too much. ellazimm
Chris: I think then you need to be specific and say exactly what was designed. What are you actually claiming? If you grant 'micro'-evolution then what was actually designed? I'm finding it very hard to pin down exactly what the design hypothesis is saying. ellazimm
Meleager: "You are stating that Darwinism is “an adequate explanation” without even providing a rigorous explanation of the power (and limits of that power) of chance* and natural* processes claimed to be “an adequate explanation”. How can you claim those processes are adequate, if you cannot even direct me to where they have been vetted as adequate via a rigorous falsification metric?" I would ask the same of ID. Without knowing the power, forces or capability of a designer how can you be sure there was one capable of what is being claimed? Setting aside the fact that the claim is not yet clear. ellazimm
Hello ellazimm, "If I thought something was designed then I would want to know things about the designer and there just is no evidence that there ever has been one." You mean, apart from all of the design in nature? Just because you don't know anything about the designer, it doesn't mean it wasn't designed. Far from it. Chris Doyle
F/N: I have added remarks on molecular phylogenies and the kangaroo here. kairosfocus
NZer: "As an engineer, it seems rather odd to me that you should almost uncritically support one (seemingly crazy) hypothesis while rejecting the much more obvious appearance-of-design hypothesis. Why is this not an a priori and irrational bias (Richard Lewontin)?" Speaking for myself I find the modern biological synthesis more parsimonious. I see no other evidence or indication of a designer and, it seems to me, hypothesising the existence of one asks more questions than it answers. I know that ID proponents are not pushing into that realm but I would. If I thought something was designed then I would want to know things about the designer and there just is no evidence that there ever has been one. Also, it's not clear what kind of design is being talked about. Even ID proponents do not necessarily agree on when or at what level the proposed designs were implemented. Is every 'random' mutation not really random? Does the designer let things go for a while and then decide to tweak some plants and animals, creating new body parts and plans and then let them go again to see how it plays out? Was the genome of all living things top-loaded from the very start and then life has been allowed to expand out from there? If humans were always the goal of a great design project then why does it look like it took a few billion years for us to show up? What's the agenda in the design implementation? As I said, the design inference brings up a lot of questions for me. And I don't see a way to answer those questions. ellazimm
OT: Quantum action confirmed in DNA by direct empirical research (aka: another day, another extremely bad day for neo-Darwinists). DNA Can Discern Between Two Quantum States, Research Shows. Excerpt: DNA can discern between quantum states known as spin. The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team's results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property. These findings imply that the ability to pick and choose electrons with a particular spin stems from the chiral nature of the DNA molecule, which somehow "sets the preference" for the spin of electrons moving through it. http://www.sciencedaily.com/releases/2011/03/110331104014.htm bornagain77
Meleagar: I am not "dissembling", and, actually, don't. I am capable of being wrong, dense, and biased, but I do not knowingly tell untruths. I'm not interested in untruths. They seem very tedious things to me. No offence taken (well, only a teensy bit) but I do want to make that clear :) And I obviously need to make my point clearer than I did: I do not think that evolutionary processes are intentional. If our definition of "intelligent" incorporates "intention" then evolutionary processes are not intelligent. However, if our definition excludes intention (and, interestingly, William Dembski's own does, explicitly) then I would argue that by that definition, both human design processes and evolutionary processes are intelligent: both are processes that involve deeply nested "decision trees". And it is this similarity, I submit, that is reflected in their "CSI", not the additional factor of "intention". And so, if we want to find the signature of "intentional" design, as opposed to merely "intelligent" design (where intelligence does not necessarily require intention), then I think we must look for something other than CSI. To others who have addressed questions to me: Thanks! I'll try to get to them later. I do appreciate the opportunity to take part in these discussions. Elizabeth Liddle
Two sales in one day? I should contact Stephen Meyer and ask him for a cut! When it comes to SITC, I tend to go on like a broken record but, as someone who has been following this debate for 15 years, reading it left a very strong impression on me. We can endlessly debate about fossils, embryos, peppered moths, panda's thumbs, etc. But it turns out that it is the cell - the most astoundingly sophisticated and complicated thing in existence - that has the final word. Chris Doyle
Heh, I just bought SITC too, for Kobo, for about $5 USD. Excellent! Amazon was about twice that price for the same eBook. NZer
Hello again, Lizzie, “OK. I have just ordered the book.” Excellent! I’m sure I’m not alone here in very much looking forward to hearing what your reaction is to it once you’ve read it. One point that is hammered home by SITC regarding DNA is this: all tested hypotheses based on chance and/or necessity have failed. That said, a failed experiment still contributes to our scientific knowledge. The truth value of good experiments is that they are repeatable: and it has been repeatedly shown that chance and/or necessity cannot explain the concentrated information we find in the cell. The continued, albeit demoralised, search for such an explanation therefore can only be due to the non-scientific commitments that are being brought to the table. Given that what we are trying to explain is either a product of Accident or Design – there is no third way – on what grounds do you not rule out alternatives? I’ve yet to hear of any explanation that offers an alternative to Accident or Design. Do you have one? Your comment about how people from each side of the debate “read the evidence and arguments differently” returns directly to the point that GilDodgen made at the outset. We are not suggesting that evolutionists have “failed to understand the argument”. Rather, that observational and experimental evidence is clearly not a decisive or even an important factor for them. They are bringing these non-scientific commitments to the table which are not deterred by contrary scientific facts. One of those commitments is to “scientific consensus”, for example. I put it to you, Lizzie, that your own evolutionist convictions make a significant appeal to that: “How can all those scientists have got it so wrong!?” Cheers, Chris PS. Thank-you very much for your detailed answer to the MMR question. Chris Doyle
Once again, the reason Darwinists cannot materially support their contention that evolutionary forces are in fact chance* and natural* is that the metric necessary for making that determination would be the same metric that would qualify ID as the better explanation should the examination go badly for Darwinists. They assert no such metric exists. Therefore, their claim that such forces are chance* and natural* is without basis, by their own words. Their view that evolutionary forces are Darwinistic (unintelligent) is not based on any rigorous methodology; it is sheer assumption. Dr. Liddle is assuming that evolution is an unintelligent process in the first place, then compares it to intelligent design and says that evolutionary processes look a lot like intelligent design, so we have to be careful in comparing the two. She hasn't demonstrated in the first place that evolutionary (Darwinian) processes can even reasonably be characterized as chance* and natural*, let alone be counted as "an adequate explanation"; it's nothing but a bare, a priori, ideological assumption. Meleagar
Thanks for the reply, Lizzie. I guess you are going to get worn out by all the questions getting directed at you :-) You wrote: '"...the genetic code couldn't have emerged from purely physical/chemical processes" isn't that it did but that it could. I am not claiming that it did, but that it could have.' Ok, fair call -- you say it "could" have rather than "did". But what does the Darwinist establishment say? I suspect they would go further and say that the genetic code *must have* arisen by physical/chemical processes. You then make an interesting point about how ID could be falsified by a plausible theory. I guess you are meaning that, for example, the spring in Behe's mousetrap could have been a part from another system, and thus it did not have to evolve significantly to fulfill its new purpose. But why not put the boot on the other foot and critique the Darwinist mechanism of NS acting upon RMs? Surely, given the complex molecular machinery, and the layers upon layers of complexity that science has only begun to touch upon, it is more reasonable to critique the (status quo) chance hypothesis. As an engineer, it seems rather odd to me that you should almost uncritically support one (seemingly crazy) hypothesis while rejecting the much more obvious appearance-of-design hypothesis. Why is this not an a priori and irrational bias (Richard Lewontin)? NZer
Dr Liddle, I believe now you are dissembling. You keep avoiding the salient points with diversions such as "I don't regard "evolutionary" and "unintelligent" as synonymous." They are not synonyms, but in this context you either believe evolution contained some intelligent direction, or you believe it did not, which makes in this context "unintelligent" synonymous with "evolution". You said: "I think evolutionary processes resemble intentional intelligent processes very closely. " Unless you are claiming that evolutionary processes are intelligent, you are indeed saying "I think unintelligent processes [at least those driving evolution - Meleagar] resemble intelligent processes very closely." What I'd be more interested in is for you to direct me to where it has been rigorously demonstrated that darwinian (unintelligent, non-artificial) forces are capable of producing what they are claimed to have produced (not just-so narratives that rely on unqualified chance. I'm sure you don't make claims of "adequate [scientific] explanation" based on nothing more than narratives depending on unqualified appeals to chance, I await your sources and quotes from published works that have vetted evolutionary forces as chance* and natural* in the first place. Meleagar
@Chris
Hi Lizzie, In most cases, I’d be happy to summarise any book that I’ve read. But SITC is such an important, game-changing, bar-raising book in this debate and critics of ID cannot really be taken seriously until they both read and fully engage with the central arguments advanced by Meyer in it.
OK. I have just ordered the book.
The remit of science is far too narrow to have any relevance in the Land of Hypotheticals. Beautiful hypotheses are regularly slain by ugly facts. We know we have certainly exhausted all known improbables (in terms of chance and/or necessity). Your appeal to unknown improbables has no basis in observational or experimental evidence and is therefore unscientific. Remember, there is no third way here.
No, indeed, and that is the difference between the Land of Make Believe and the Land of Hypothesis Testing, and is where science gets its rigor. A hypothesis is only as good as the data it fits, however glorious the hypothesis. But that shouldn't stop us deriving hypotheses from theories and testing them, and there are already a number of testable (and tested) hypotheses about the origins of the genetic code.
Your belief in the existence of “historical pre-cellular entities” is also unscientific in the absence of any observational or experimental evidence to support it. That you hold such a belief can only be because you are bringing non-scientific preconceptions and commitments to the table in the first place. It’d be interesting to know exactly what they are and why you hold them.
Well, I dispute your premise, for two reasons. The first is that I do not rule out alternatives just because none has been presented. That's why I mentioned the One Black Swan. Secondly, in this case, testable alternatives have been presented and have been subjected to testing. All scientific conclusions must be provisional, but it would be wrong to claim there is no alternative to ID as the origin of the genetic code, or that it cannot have had a physical/chemical origin. So there is no need to identify my "non-scientific commitments" because they are irrelevant :)
If the evidence for “Darwinian processes” was actually “compelling”, then there would be no debate. Those who claim that the evidence is compelling bear an uncanny resemblance to those who have made an a priori commitment to the explanatory power of “Darwinian processes”. There is nothing ‘reasoned’ about that particular conviction. Cheers, Chris
Well, that argument cuts both ways. Saying that an argument can't be compelling because otherwise there wouldn't be a debate would wipe out many past scientific arguments that are now accepted as standard! I'm not saying all are compelled by the arguments (clearly you, for example, are not) but there is no reason, I suggest, for either IDists to assume that those who disagree with them have simply failed to understand the argument, nor vice versa. Clearly we read the evidence and arguments differently. I'm interested in trying to find out exactly where those differences lie.
PS. Completely unrelated question that I’d like to ask you as someone who may have a professional interest: what do you make of the MMR vaccine controversy that was stirred up by Andrew Wakefield here in the UK? Specifically, is there any possibility that there is a link between an MMR vaccine and Autism Spectrum Disorders?
Yes (this is my answer as a statistical person, btw, not as a clinician, which I am not), there is, technically, a possibility. We are back to the One Black Swan problem in another guise! It is far more difficult to rule something out (there are no black swans; MMR does not cause autism) than rule it in. However, what we can say is that rigorous studies with large statistical power have failed to demonstrate a link. The best we can do with that kind of study is to say: if there is a link, the effect size is too small to be detected by a study with very large statistical power. We can also say that the effect size claimed by Andrew Wakefield has been falsified. That's a very careful statistical answer, I know! But sometimes the best we can do is quantify the risk that we are wrong, rather than quantify the probability that we are right. There appears to be only a very, very small probability that the claim that MMR can cause autism is correct. Moreover, even if correct, the additional risk can be no more than tiny. Elizabeth Liddle
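The logic of that last paragraph, that a null result from a high-powered study bounds the plausible effect size, can be sketched numerically. All figures below are hypothetical (a 0.5% baseline rate, a claimed doubling, 5,000 children per group), chosen only to show the calculation; they are not taken from any actual MMR study.

import math
from statistics import NormalDist

def power_two_proportions(p0, p1, n_per_group, alpha=0.05):
    # Approximate power of a two-sided two-proportion z-test (normal approximation).
    nd = NormalDist()
    p_bar = (p0 + p1) / 2
    se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)  # SE under H0
    se1 = math.sqrt(p0 * (1 - p0) / n_per_group + p1 * (1 - p1) / n_per_group)  # SE under H1
    z_crit = nd.inv_cdf(1 - alpha / 2)
    z = (abs(p1 - p0) - z_crit * se0) / se1
    return nd.cdf(z)

# Hypothetical: baseline rate 0.5%, claimed effect doubles it to 1.0%.
print(f"power = {power_two_proportions(0.005, 0.010, 5000):.2f}")

With roughly 80% power to detect a doubling on these made-up numbers, a null result makes an effect that large unlikely while leaving much smaller effects formally open, which is exactly the careful answer given above.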
Meleagar: I don't regard "evolutionary" and "unintelligent" as synonyms, so your substitution doesn't work :) But I'm happy to explain why I think that darwinian processes resemble intelligent processes, and where I think the difference lies, if you are interested. Elizabeth Liddle
Dr Liddle: Please note: design is a routinely and directly observed cause of functionally specific, complex organisation and associated information, of codes, of algorithms, and assembly lines etc. So, it is immediately reasonable as a candidate explanation when we observe such things. (Which we do in the living cell.) Now, these features are highly contingent, so of the three known broad causal factors, forces of mechanical necessity, chance circumstances, and choice or art or design, only the two capable of explaining contingency are relevant: choice or chance contingency. We have excellent reason to understand that codes, functionally specific and complex organisation etc. sit on isolated islands in the sea of possible -- but overwhelmingly non-functional -- configurations. This can be seen for instance from the easily observed chaotic and even disruptive impact of modest injections of random changes to sequences of code symbols, or to the way functionally organised things are "wired" together. Or, just from how specific the requirements are for replacement parts for a car or a complicated machine. That means that beyond a reasonable threshold of complexity in explicit or implicit information [the latter being in effect the structured set of yes/no answers required to specify the functional cluster of configs], on needle-in-the-haystack or infinite-monkeys grounds, it is unreasonable to expect the scope of resources in our solar system or the observed cosmos to get to such a special configuration. As has been repeatedly shown, that starts at 500 or 1,000 bits, i.e. 125 bytes or roughly 143 ASCII characters [20 typical English words] at the upper end. Which, for complex organised entities, is a trivial amount of information. In short, chance contingency is not a credible explanation of getting to shores of islands of function for the sort of complexity we see in living systems, whereby unicellular organisms will require 100 - 1,000 k bits or so of genetic information, and novel body plans will require 10 - 100+ million bits. (As was shown in previous threads when you challenged the idea of such a threshold.) We have a directly observed source of FSCO/I, vs a claimed source that is not analytically credible and has not been observed to act in the desired way. But what about Genetic Algorithms that show that chance and necessity acting together can cause hill climbing and improved function? Such GA's are optimisation algorithms, are intelligently designed, require a goal-seeking capacity based on a nice trend in a mapping from a "genome" string or the like to a so-called fitness function, and so operate WITHIN islands of function. To present them as an answer to the challenge of getting TO islands of function is to beg the decisive question. And indeed, the suggestion of, in effect, a continent of bio-functional forms traversable by a branching tree pattern of increasing complexity seems to be a way to try to divert the force of this point. Unfortunately, as shown already, the branching tree pattern is a construct, not an observed reality, and one that is challenged by the only actual facts from the world of the deep past: fossils. For 150+ years now, on the conventional timeline, the testimony of the fossils is: sudden appearances at body plan level, stasis in body plans [with variations on the basic plans at different levels], and then disappearance or continuity into the modern world. GEM of TKI kairosfocus
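The threshold arithmetic in that comment can be made explicit. The cosmological figures below are the round numbers these threads typically assume (about 10^80 atoms, about 10^45 Planck-time states per second, about 10^17 seconds); they are rough assumptions for a back-of-envelope bound, not measured constants.

# Back-of-envelope version of the 500-bit threshold argument.
atoms = 10 ** 80            # rough count of atoms in the observed cosmos (assumed)
states_per_sec = 10 ** 45   # ~ Planck-time state changes per second (assumed)
seconds = 10 ** 17          # ~ age of the cosmos in seconds (assumed)

max_events = atoms * states_per_sec * seconds   # ~ 10^142 possible events
space_500 = 2 ** 500                            # ~ 3.3 x 10^150 configurations

print(f"max events: ~10^{len(str(max_events)) - 1}")
print(f"2^500:      ~10^{len(str(space_500)) - 1}")
print(f"fraction of the space samplable: {max_events / space_500:.1e}")

# The 1,000-bit figure quoted above: 1,000 bits = 125 bytes, and at
# 7 bits per ASCII character, 1000 / 7 is roughly 143 characters.
print(1000 // 8, "bytes;", round(1000 / 7), "ASCII characters")

Whether biological function really sits on islands as isolated as the argument assumes is the empirical question the rest of the thread contests; the sketch only makes the counting explicit.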
Elizabeth Liddle said: "Exactly! I entirely agree! In fact it's a point I keep reiterating! I think evolutionary processes resemble intentional intelligent processes very closely. Which is precisely why I don't think we can rule out evolutionary processes just because something resembles the products of intelligent processes." Ms. Liddle, with all due respect, do you not even recognize you are again assuming your conclusion? When you say we "cannot rule out evolutionary processes" above, I presume you are saying "we cannot rule out unintelligent processes"; so you are in essence saying in the prior comment: "I think unintelligent processes resemble intentional intelligent processes very closely." Really? Care to explain that? Also, since you haven't vetted the evolutionary forces necessary to acquire macro-evolutionary targets AS unintelligent **in the first place**, how does your statement not simply **assume** that the necessary evolutionary forces and products are unintelligent? You must first vet some kind of evolutionary process and product **as** unintelligent (Darwinistic) before you can make any claim about how such "evolutionary" (unintelligent) process or product **is similar to** the product of intelligent design. I'm still waiting for any paper or research where evolutionary processes have been vetted as chance* or natural*. Meleagar
Hi Lizzie, In most cases, I’d be happy to summarise any book that I’ve read. But SITC is such an important, game-changing, bar-raising book in this debate, and critics of ID cannot really be taken seriously until they both read and fully engage with the central arguments advanced by Meyer in it. The remit of science is far too narrow to have any relevance in the Land of Hypotheticals. Beautiful hypotheses are regularly slain by ugly facts. We have certainly exhausted all known improbables (in terms of chance and/or necessity). Your appeal to unknown improbables has no basis in observational or experimental evidence and is therefore unscientific. Remember, there is no third way here. Your belief in the existence of “historical pre-cellular entities” is also unscientific in the absence of any observational or experimental evidence to support it. That you hold such a belief can only be because you are bringing non-scientific preconceptions and commitments to the table in the first place. It’d be interesting to know exactly what they are and why you hold them. If the evidence for “Darwinian processes” were actually “compelling”, then there would be no debate. Those who claim that the evidence is compelling bear an uncanny resemblance to those who have made an a priori commitment to the explanatory power of “Darwinian processes”. There is nothing ‘reasoned’ about that particular conviction. Cheers, Chris PS. Completely unrelated question that I’d like to ask you as someone who may have a professional interest: what do you make of the MMR vaccine controversy that was stirred up by Andrew Wakefield here in the UK? Specifically, is there any possibility that there is a link between an MMR vaccine and Autism Spectrum Disorders? Chris Doyle
As long as there is the bare chance that non-intelligence could have produced life or macro-evolutionary success, there will be materialists that are satisfied with Darwinian "explanations", which are nothing more than narrative appeals to unlimited (non-quantified) chance. ID theorists don't claim that it is not possible that a collection of miracles of chance could produce life and macro-evolutionary success; they just rightfully point out that such appeals are not scientific theories. Meleagar
Also, “treelike” and “stepwise” do not even address my question; both intelligent and non-intelligent processes can operate in a “treelike” and “stepwise” manner; just because variations occur in a “treelike” and “stepwise” manner doesn’t necessarily certify the processes as non-intelligent.
Exactly! I entirely agree! In fact it's a point I keep reiterating! I think evolutionary processes resemble intentional intelligent processes very closely. Which is precisely why I don't think we can rule out evolutionary processes just because something resembles the products of intelligent processes. However, I do think that strictly Darwinian processes have the boundaries that I mentioned, so anything that departs from those boundaries requires some additional explanation. And we already know that other factors are important. Elizabeth Liddle
@NZer:
Lizzie wrote: “What he seemed to be saying was that the genetic code couldn’t have emerged from purely physical/chemical processes.” I understand you are a biologist, so could you give an example from your training, reading, or experience of where the genetic code has emerged by purely physical/chemical processes? I’m looking for evidence, not speculation. Do you have such evidence? An example? Thanks.
I'm a neuroscientist rather than a biologist btw. As a matter of logic, I would point out that the opposite of "the genetic code couldn’t have emerged from purely physical/chemical processes" isn't that it did but that it could. I am not claiming that it did, but that it could have. In other words, to make, as apparently he does (though clearly I need to read the book in full), the claim that the code could NOT have yadda yadda is a negative claim, and negative claims are notoriously difficult to substantiate in science (it's the One Black Swan problem). And I think this is a fundamental problem for ID, actually, at least as I have seen it formulated. Simply saying that "this couldn't have happened without an ID" can be easily falsified by any plausible theory that says it could, whether or not that theory is actually correct. In order to test ID, it needs to make a specific positive differential prediction (which is tricky, but probably not impossible). There are a number of studies and ongoing investigations into the precursors of the genetic code, those of Michael Yarus et al being among the most widely cited. You may call this kind of work "speculation" but I would say it is only "speculative" in the sense that all science can and must be speculative - theories give rise to hypotheses which generate testable predictions. Simply stopping and saying "well, this looks impossible by any other means, so we must infer ID" isn't rigorous science. (On the other hand, saying "this looks like ID, in which case what we ought to see is..." would be.) Elizabeth Liddle
Elizabeth, you stated: "I am stating that it is an adequate explanatory theory", and yet the evidence states: The Capabilities of Chaos and Complexity - David L. Abel - 2009 Excerpt: "A monstrous ravine runs through presumed objective reality. It is the great divide between physicality and formalism. On the one side of this Grand Canyon lies everything that can be explained by the chance and necessity of physicodynamics. On the other side lies those phenomena that can only be explained by formal choice contingency and decision theory—the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used. Physical dynamics includes spontaneous non linear phenomena, but not our formal applied-science called “non linear dynamics” (i.e. language, information)." http://www.mdpi.com/1422-0067/10/1/247/pdf “The difference between a mixture of simple chemicals and a bacterium, is much more profound than the gulf between a bacterium and an elephant.” (Dr. Robert Shapiro, Professor Emeritus of Chemistry, NYU) But more important to us personally than the obvious fact that material processes will never bridge the 'monstrous ravine' between information and material processes is the fact that there is a 'universe-wide' spiritual chasm that man is utterly unable to bridge by his own 'good works'. That chasm is the separation of man from God. Flyleaf - Chasm http://www.youtube.com/watch?v=O-BvOuE7wfw bornagain77
kairosfocus, Liddle uses "treelike" and "stepwise", IMO, as a means of avoiding specificity and inviting biased imagination to fill in the gaps. IOW, whatever Darwinists "cannot imagine" to have been generated in a treelike, stepwise manner is the only thing that will be counted as evidence against Darwinism, but we have seen how elastic those concepts can be. Also, "treelike" and "stepwise" do not even address my question; both intelligent and non-intelligent processes can operate in a "treelike" and "stepwise" manner; just because variations occur in a "treelike" and "stepwise" manner doesn't necessarily certify the processes as non-intelligent. I'm still waiting for the research that has vetted such processes as chance* and natural*. Meleagar
F/N: A key observation can be had from Dr Liddle's:
I would say that the “limits” of evolutionary processes are that they are: 1) Tree-like 2) Stepwise. And in both these features they differ from human intentional design processes (we can transfer solutions to different lineages, and we can bypass tedious intermediate steps).
In short, we expect on such assumptions a [nearly] smoothly continuous gradation of life forms, from the original unicellular forms to the diversity we see. The only problem? What we ACTUALLY see is a discrete top-down, jump-wise pattern, with reuse of common themes in mosaic life forms (the platypus being the most obvious). As Meyer summarised in the PBSW article (that passed peer review by "renowned" scientists and then was made the subject of ideological thought-police tactics with Ms Forrest's NCSE in the lead): ________________ >> The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billion years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. 
In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur. [6] >> ________________ In fact, Darwin knew about the suddenness of the Cambrian fossil life revolution, but thought that since the fossil record was not fully explored, the gaps would vanish as further explorations exposed the world of the past by its traces in the present. But, after 150 years and billions in cumulative research efforts, millions of collected fossils (and billions of observed ones) with over 1/4 million fossil species, we have an almost unmanageably rich fossil record that tells the same story, not only about the Cambrian but in general. That is why Gould was moved to remark, quoting Darwin: "The absence of fossil evidence for intermediary stages between major transitions in organic design, indeed our inability, even in our imagination, to construct functional intermediates in many cases, has been a persistent and nagging problem for gradualistic accounts of evolution." [[Stephen Jay Gould (Professor of Geology and Paleontology, Harvard University), 'Is a new and general theory of evolution emerging?' Paleobiology, vol. 6(1), January 1980, p. 127.] "All paleontologists know that the fossil record contains precious little in the way of intermediate forms; transitions between the major groups are characteristically abrupt." [[Stephen Jay Gould 'The return of hopeful monsters'. Natural History, vol. LXXXVI(6), June-July 1977, p. 24.] "The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. The evolutionary trees that adorn our textbooks have data only at the tips and nodes of their branches; the rest is inference, however reasonable, not the evidence of fossils. Yet Darwin was so wedded to gradualism that he wagered his entire theory on a denial of this literal record:
The geological record is extremely imperfect and this fact will to a large extent explain why we do not find intermediate varieties, connecting together all the extinct and existing forms of life by the finest graduated steps [[ . . . . ] He who rejects these views on the nature of the geological record will rightly reject my whole theory.[[Cf. Origin, Ch 10, "Summary of the preceding and present Chapters," also see similar remarks in Chs 6 and 9.]
Darwin's argument still persists as the favored escape of most paleontologists from the embarrassment of a record that seems to show so little of evolution. In exposing its cultural and methodological roots, I wish in no way to impugn the potential validity of gradualism (for all general views have similar roots). I wish only to point out that it was never "seen" in the rocks. Paleontologists have paid an exorbitant price for Darwin's argument. We fancy ourselves as the only true students of life's history, yet to preserve our favored account of evolution by natural selection we view our data as so bad that we never see the very process we profess to study." [[Stephen Jay Gould 'Evolution's erratic pace'. Natural History, vol. LXXXVI(5), May 1977, p. 14.] [[HT: Answers.com] In fact, the case is worse. Perhaps 1/2 or more of the living forms are found in the fossil record, suggesting that the sample is now wide enough to capture a representative cross section. Similarly, we know that the relevant beds were able to capture soft-bodied organisms and even tiny organisms, as we have recovered fossils of such forms; indeed, even ephemera like footprints and raindrops are preserved. The fossil record, to high confidence, is overwhelmingly one of top-down, body-plan-first variation, suddenness of appearance, stasis, and disappearance or continuation into the modern world. A pattern that -- despite the impression presented by headlines -- far better fits design by the criteria cited above than incremental, branching development. And, as would be expected from the search space challenge that has been underscored here at UD in recent weeks. Loennig summarises that point aptly in his 2004 peer-reviewed paper, "Dynamic genomes, morphological stasis, and the origin of irreducible complexity":
examples like the horseshoe crab are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by 'living fossils' in the present world of organisms when applying the term more inclusively as "an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time" [85] . . . . Now, since all these "old features", morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . . A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." . . . Biochemist Michael J. Behe [5] has refined Darwin's statement by introducing and defining his concept of "irreducibly complex systems", specifying: "By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning" . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . . One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if "several well-matched, interacting parts that contribute to the basic function" are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because "the removal of any one of the parts causes the system to effectively cease functioning") such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process -- or perish . . . . According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski's criterion of specified complexity . . . . "For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity" [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234 [22].
So, the common perception promoted by the tree-of-life icon of evolution and by many a headline on found missing links is misleading. GEM of TKI kairosfocus
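[As a quick cross-check on the genome sizes quoted in the Meyer excerpt above, here is a minimal sketch -- an editorial illustration; the base-pair figures are the ones cited in the excerpt -- converting base pairs to raw informational capacity at log2(4) = 2 bits per base pair, a capacity bound rather than a measure of functional specificity:]

import math

BITS_PER_BASE_PAIR = math.log2(4)  # 4-letter DNA alphabet -> 2.0 bits per base pair

# Genome sizes in base pairs, as cited in the excerpt above.
genomes_bp = {
    "minimal single cell, low estimate (Koonin 2000)": 318_000,
    "minimal single cell, high estimate (Koonin 2000)": 562_000,
    "fruit fly Drosophila melanogaster": 180_000_000,
}

for name, bp in genomes_bp.items():
    # Raw storage capacity only; functional specificity is a separate question.
    print(f"{name}: {bp * BITS_PER_BASE_PAIR:,.0f} bits")

[The results -- roughly 0.6-1.1 million bits for a minimal single cell and about 360 million bits for Drosophila -- are of the same order as the "100 k - 1,000 k bits" and "10 - 100+ million bits" figures used earlier in the thread.]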
Hi Chris!
Hi Lizzie, I’m not Stephen, I’m Chris.
I'm so sorry!
Stephen Meyer has made his own case and, though it goes far beyond “the genetic code couldn’t have emerged from purely physical/chemical processes”, that is nonetheless exactly what he demonstrates in SITC. You’d have to read it all to appreciate how this has been substantiated. Although it goes beyond “an argument from lack of evidence/alternative model” (again, you need to actually read the book before dismissing it! :-) ) it is nonetheless sufficient to highlight this fact. “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” And there really are only two explanations: accident or design.
I'm actually not dismissing the book. That's why I asked you to summarise it. I'll try to get hold of another copy. Although I don't actually think Sherlock Holmes was correct! His adage only makes sense if you know you have exhausted all the improbables. In science we don't know that, which is why we keep looking, rather than infer a particular improbable.
Please can you provide observational or experimental evidence for the existence of “cells which are postulated to be the ancestors of both modern bacteria and multicellular organisms”? If not, you must agree we are entitled to conclude that such ancestors exist solely in a Land of Make Believe (I love that song!).
No, I don't think so, although I would happily agree that we are in the Land of Hypotheticals. But hypotheses are what science works with. As we speak, testable hypotheses are being devised, and new questions articulated. Clearly we are unlikely to find actual traces of historical pre-cellular entities (although I guess it's possible once we know more precisely what we are looking for). Instead the approach has to be to test possible mechanisms in the lab, and then look for evidence that those lab conditions might have pertained on early earth.
I do not doubt that opponents of ID sincerely want to “know how stuff happened”. The problem is, they bring non-scientific bias and commitments to the table. Mainly, these involve: 1. A commitment to atheism (and materialism) 2. A commitment to evolution (specifically, neo-darwinism)
I absolutely disagree with the first (with the caveat that I still don't exactly know what "materialism" is). As regards the second, I would agree that most biologists and other life scientists regard the evidence for Darwinian processes as compelling, but already other processes and factors have been identified (I'm not sure what "neo-Darwinism" is supposed to encompass). But being persuaded by prior evidence is not the same as a "commitment" to an explanation regardless of possible counter-evidence.
These two both serve to cloud judgement in the face of contrary scientific facts and certainly create the appearance of obtuseness.
Well, while I would never claim that all scientists have totally unclouded judgement, I think you are inferring clouds where there is, in fact, reasoned conviction that the "contrary facts" are not, in fact, "contrary facts". Obviousness isn't always obvious :)
I’m not trying to bait you, Lizzie. I was merely expressing surprise that the smiley worked: I didn’t expect it to! :-O
Ah! Anyway, I don't mind being baited, but here's a :) backatcha! Cheers Lizzie Elizabeth Liddle
This is what I find amazing about pro-Darwinists: they identify their evolutionary processes as chance* and natural*, which by definition exclude intelligent design; then claim that there is no metric by which ID could be identified. If there is no ID metric X that would validate the presence of ID (as best explanation), there cannot be a non-ID, chance* & natural* metric either (not-X), since it would be the same metric. Therefore, by the Darwinists' own mouth, they cannot have vetted any evolutionary process as being chance* or natural*, but then claim that they are satisfied that chance* and natural* explanations are sufficient. That simply isn't possible in logical terms; it can only be an a priori ideological bias at work. Meleagar
Elizabeth: In other words, you cannot direct me to where such limitations have been formally provided by pro-Darwinists, and you cannot provide any answer to the challenge of where evolutionary processes have been scientifically vetted as natural* or chance*. You are doing nothing but assuming your conclusion that such processes are unintelligent, however you narrate them as "stepwise" or "treelike". You are stating that Darwinism is "an adequate explanation" without even providing a rigorous explanation of the power (and limits of that power) of the chance* and natural* processes claimed to be "an adequate explanation". How can you claim those processes are adequate, if you cannot even direct me to where they have been vetted as adequate via a rigorous falsification metric? If there is no rigorous means to examine the computational and engineering limitations of the natural* and chance* processes claimed to be sufficient for producing macro-evolutionary successes (such as winged flight and stereoscopic, color vision), then how can one possibly be satisfied that such processes are "an adequate explanation"? Meleagar
Thanks, Kairosfocus, I will. No, I'm not a mathematician, unfortunately, though I do use a lot of math in my work. I'm a cognitive neuroscientist - I do neuroimaging and some cognitive modelling - I'm particularly interested in learning, and its application to mental disorders. Elizabeth Liddle
Dr Liddle: Perhaps you may wish to address the onward remarks here, which speak to many of your key concerns, noting as well the linked videos below. GEM of TKI PS: Am I correct to understand that you are a Mathematician, primarily? kairosfocus
Gil: It does seem that an ideology of evolutionary materialism has held science in increasing thralldom for generations, but that that thralldom -- despite much distractive, distorting and denigratory rhetoric as a main line of defense -- is slowly being broken because of the implications of the sophisticated information systems in the heart of cell-based life. In particular, and as Meyer has often pointed out, the way origins science can claim to be more than a glorified just-so story set up to fit whatever fashionable mythology of origins holds sway in a given day is that it is based on provisional, critically open-minded inference to best explanation on directly known effective causal mechanisms. For info systems, the answer to that challenge is increasingly obvious. GEM of TKI PS: Pardon, but I suggest irresistible:
irresistible [ˌɪrɪˈzɪstəbəl] adj 1. not able to be resisted or refused; overpowering: an irresistible impulse 2. very fascinating or alluring: an irresistible woman. irresistibility, irresistibleness n; irresistibly adv. Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003
kairosfocus
Pah, it didn't work! Chris Doyle
Hi Lizzie, I’m not Stephen, I’m Chris. Stephen Meyer has made his own case and, though it goes far beyond “the genetic code couldn’t have emerged from purely physical/chemical processes”, that is nonetheless exactly what he demonstrates in SITC. You’d have to read it all to appreciate how this has been substantiated. Although it goes beyond “an argument from lack of evidence/alternative model” (again, you need to actually read the book before dismissing it! :-)) it is nonetheless sufficient to highlight this fact. “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” And there really are only two explanations: accident or design. Please can you provide observational or experimental evidence for the existence of “cells which are postulated to be the ancestors of both modern bacteria and multicellular organisms”? If not, you must agree we are entitled to conclude that such ancestors exist solely in a Land of Make Believe (I love that song!) I do not doubt that opponents of ID sincerely want to “know how stuff happened”. The problem is, they bring non-scientific bias and commitments to the table. Mainly, these involve: 1. A commitment to atheism (and materialism) 2. A commitment to evolution (specifically, neo-darwinism) These two both serve to cloud judgement in the face of contrary scientific facts and certainly create the appearance of obtuseness. I’m not trying to bait you, Lizzie. I was merely expressing surprise that the smiley worked: I didn’t expect it to! :-O Chris Doyle
Lizzie wrote: "What he seemed to be saying was that the genetic code couldn’t have emerged from purely physical/chemical processes." I understand you are a biologist, so could you give an example from your training, reading, or experience of where the genetic code has emerged by purely physical/chemical processes? I'm looking for evidence, not speculation. Do you have such evidence? An example? Thanks. NZer
Meleagar @ 10 (sorry to take your posts out of order) I am not stating evolutionary processes "as fact" (although I think the theory is very well supported). I am stating that it is an adequate explanatory theory, and we therefore do not need to postulate additional "intentional" processes to account for the data. We could look for evidence of them, though, and increasingly we will find them as genetically engineered organisms work their way into the ecosystem. Elizabeth Liddle
Meleagar @ #11: Well, I would say that the "limits of evolution" are not longitudinal, but rather "lateral". Evolutionary processes cannot (easily) apply (please regard the teleological language as metaphorical!) a "solution" from one lineage to another. So we see bird lungs in one lineage and mammalian lungs in another. We see, in other words, "nested hierarchies" of characters. The other limit is in the size of step. The more complex a feature (in terms of its genetic specification) the less likely it is to have resulted from a single simultaneous set of fortuitous mutations. But there is no limit (or no logical limit) to the number of steps. Therefore I would say that the "limits" of evolutionary processes are that they are: 1) Tree-like 2) Stepwise. And in both these features they differ from human intentional design processes (we can transfer solutions to different lineages, and we can bypass tedious intermediate steps). Elizabeth Liddle
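[A minimal sketch of the "nested hierarchies" claim in the preceding comment -- an editorial illustration with hypothetical trait names, not anyone's model: strictly tree-like descent, with no cross-lineage transfer, forces characters into nested sets.]

# Each lineage inherits all of its ancestors' traits and adds one of its own;
# traits never jump across branches.
def descend(name, inherited, depth, out):
    traits = inherited | {"trait_" + name}   # hypothetical trait labels
    out[name] = traits
    if depth == 0:
        return
    for child in (name + "L", name + "R"):   # simple binary branching
        descend(child, traits, depth - 1, out)

lineages = {}
descend("root", set(), 2, lineages)

# By construction, the intersection of any two lineages' trait sets is exactly
# the trait set of their most recent common ancestor -- the nested pattern
# described above. A "solution" never appears in two separate branches unless
# an ancestor common to both already had it.
for name, traits in sorted(lineages.items()):
    print(name, sorted(traits))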
No, Stephen, I read more than the "contents pages". What he seemed to be saying was that the genetic code couldn't have emerged from purely physical/chemical processes. That seemed unsubstantiated to me, and, in any case, an argument from lack of evidence/alternative model rather than a positive argument. And, indeed, there is evidence supporting at least one alternative model. Yes, of course there is a "difference between a cell and the origins of life" (a category difference indeed!). A bacterium is indeed a cell - a unicellular organism. That doesn't mean that all cells are bacteria, of course, nor does it mean that the cells which are postulated to be the ancestors of both modern bacteria and multicellular organisms were much like modern bacteria. Nor does it mean that that cell did not have even simpler precursors - precursors that we might hesitate to call "alive". Re: Behe - actually some of my reasons are scientific, some simply mathematical. My more general point is that I think it is quite wrong to assume that those of us who do not embrace ID are motivated by anything other than a desire to know how stuff happened (the motivation of all science, pretty well). Nor that we are all obtuse. Oh, and what worked? I took the bait? I do tend to :) Cheers Lizzie Elizabeth Liddle
I would also like to challenge EL or any other Darwinism advocate to the following: If Behe didn't accurately define the limits of Darwinian processes in The Edge of Evolution, please tell us, or direct us to, where the limits of Darwinian evolution have been explained in any scientific sense. If Darwinism, the core of modern evolutionary theory, is essentially that mutation and selection undirected by intelligence can produce macro-evolutionary successes such as winged flight and stereoscopic color vision, and Darwinism is to be taken as a scientific theory, then surely there has been much written about the limitations of those processes offered as a means of falsifying the claims of Darwinism and defining its capabilities and parameters. Meleagar
Elizabeth Liddle states: "Evolutionary processes have a lot in common with intelligent processes, the difference being that evolutionary processes are not intentional." Please direct me to where the evolutionary processes have ever been vetted as chance (meaning unguided by intelligence) and natural (meaning unguided by intelligence). In fact, they have not (so one asks, why the qualifiers?). No mutation or selection activity has ever been vetted (that I'm aware of) in any formal sense as being what their characteristic qualifiers have claimed as scientific fact: random* and natural*. Furthermore, unless there is a "directed vs chance*" and "artificial vs natural*" metric that can determine whether or not the aggregate product of chance & nature can produce what Darwinism is claimed to have produced (a metric which mainstream evolutionary theorists deny exists), then there is simply no means by which to claim that such processes are chance* or natural*, let alone be satisfied that chance* and natural* processes can produce what they are claimed to have produced. Nor is there any way to claim (other than bald assertion) that such processes are "not intentional". IOW, your claim that evolutionary (which I take you to mean Darwinian) processes are not intentional can only be a baseless assumption on your part, which you are here stating as fact. Meleagar
Oh, it worked! :-) Chris Doyle
Hi Elizabeth, Which part did you read: the contents pages!? (insert 'happy, smiling face' here ;-)) Seriously, you need to read all of it in order to better understand where many ID proponents are coming from these days. Incidentally, there is a difference between a cell and the origins of life. A bacterium is a cell: and its existence demands an explanation based solely on Darwinian evolutionary processes. On the other hand, eukaryotic cells also demand such an explanation, starting with amoebas, for example, then working all the way up to human cells. I’d be surprised if you disagree with Behe for purely scientific reasons. Which I think is the point of this particular piece. Chris Doyle
Can you summarise what Stephen Meyer considers the signature of intentional design? I have read part of the book, but not all of it (it was a loan). I don't agree that we must eliminate Darwinian evolutionary processes on "purely scientific grounds" (your "furthermore" seems a little odd - presumably Meyer's grounds are also "purely scientific"? :)) with regard to the origin of the cell. Darwin specifically excluded the origins of life from his theory - his theory is on the origin of species, not the origin of life. As for the Edge of Evolution - I have read it, and I would disagree that Behe has demonstrated a "definite 'Edge'". Elizabeth Liddle
Good Morning Elizabeth, Have you read "Signature in the Cell" by Stephen Meyer? That work provides you with all the demonstrations you claim have not been made. Furthermore, surely we must eliminate Darwinian evolutionary processes on purely scientific grounds? After all, there is no observational or experimental evidence to show that the cell appeared as a result of them. Indeed, there is no scientific evidence that Darwinian evolutionary processes can do anything without pre-existing biological systems. Even then, there is a definite "Edge" to evolution (read Michael Behe's "Edge of Evolution" for more on that) that allows us to dismiss Darwinian evolutionary processes as trivial, at best. Chris Doyle
Well, my position is that IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes. Clearly, simply noting that things that we know are intentionally designed resemble things for which we don't know the provenance isn't enough to allow us to infer that the latter were intentionally designed. That would be the equivalent of concluding that because these mammals are cats, all mammals are cats. Evolutionary processes have a lot in common with intelligent processes, the difference being that evolutionary processes are not intentional. To determine whether living things were intentionally designed, we have to detect the signature of intention, not the signature of intelligence. IMO :) Elizabeth Liddle
You're welcome, paragwinn. I must say, if it really were "an excellent example of an indefensible testimonial" then you would easily be able to demonstrate that, rather than rely upon an unsubstantiated assertion. Chris Doyle
Thank you, Chris, for an excellent example of an indefensible testimonial. paragwinn
Spot on, GilDodgen. The vast majority of evolutionists cannot handle a forum like this. The problem they find is that they are obligated to confront actual scientific fact here (unlike other forums, where anything goes: which usually means the first things to go are evidence and reason). With each passing day, it is clearer and clearer that the objections to Intelligent Design are not scientific ones. Indeed, the empirical basis for design in nature is so overwhelming that "Biologists must constantly keep in mind that what they see was not designed, but rather evolved". The many, many threads on offer here build up the same picture: that the opponents of Intelligent Design have a commitment to neo-Darwinism and/or atheism that is strong enough to overcome anything that science can bring up to question that commitment. Even the best ID opponents here are merely skirting around the edges of irrefutable, central ID arguments. Chris Doyle
"[T]hey continue to mount what I perceive as increasingly indefensible assaults on the creative powers of the Darwinian mechanism of random errors filtered by natural selection." I affirm this statement. paragwinn
