Uncommon Descent | Serving The Intelligent Design Community

Andy McIntosh’s Peer-Reviewed ID Paper–Note the Editor’s Note!


Professor Andy McIntosh, an ID proponent in the UK, has a peer-reviewed paper on the thermodynamic barriers to Darwinian evolution:

A. C. McIntosh, “Information and Entropy—Top-Down or Bottom-Up Development in Living Systems?” International Journal of Design & Nature and Ecodynamics 4(4) (2009): 351-385

The Editor appends the following note:

Editor’s Note: This paper presents a different paradigm than the traditional view. It is, in the view of the Journal, an exploratory paper that does not give a complete justification for the alternative view. The reader should not assume that the Journal or the reviewers agree with the conclusions of the paper.  It is a valuable contribution that challenges the conventional vision that systems can design and organise themselves.  The Journal hopes that the paper will promote the exchange of ideas in this important topic.  Comments are invited in the form of ‘Letters to the Editor’.

Here is the abstract: 

Abstract: This paper deals with the fundamental and challenging question of the ultimate origin of genetic information from a thermodynamic perspective. The theory of evolution postulates that random mutations and natural selection can increase genetic information over successive generations. It is often argued from an evolutionary perspective that this does not violate the second law of thermodynamics because it is proposed that the entropy of a non-isolated system could reduce due to energy input from an outside source, especially the sun when considering the earth as a biotic system. By this it is proposed that a particular system can become organised at the expense of an increase in entropy elsewhere. However, whilst this argument works for structures such as snowflakes that are formed by natural forces, it does not work for genetic information because the information system is composed of machinery which requires precise and non-spontaneous raised free energy levels – and crystals like snowflakes have zero free energy as the phase transition occurs. The functional machinery of biological systems such as DNA, RNA and proteins requires that precise, non-spontaneous raised free energies be formed in the molecular bonds which are maintained in a far from equilibrium state. Furthermore, biological structures contain coded instructions which, as is shown in this paper, are not defined by the matter and energy of the molecules carrying this information. Thus, the specified complexity cannot be created by natural forces even in conditions far from equilibrium. The genetic information needed to code for complex structures like proteins actually requires information which organises the natural forces surrounding it and not the other way around – the information is crucially not defined by the material on which it sits. The information system locally requires the free energies of the molecular machinery to be raised in order for the information to be stored. Consequently, the fundamental laws of thermodynamics show that entropy reduction which can occur naturally in non-isolated systems is not a sufficient argument to explain the origin of either biological machinery or genetic information that is inextricably intertwined with it. This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.

Comments
Seversky (#43): Now, let's go to your specific questions: I would value clear definitions of all the words used in this context. Does “information”, for example, refer to teleo-semantic or Shannon information or Kolmogorov complexity or some other entity defined especially for the occasion? They are not all the same thing. Teleo-semantic information clearly requires intelligent agents as both sender and receiver but we can also acquire information from weather, rocks and tree-rings where we have no reason to think there is any intelligent agency involved – apart from the observer. As should be clear from my previous posts, the information debated in ID is CSI, and the subset of CSI defined and debated by me in all the previous contexts is dFSCI. You can find an explicit definition above. We can debate any pint which is not clear to you about that. It is in a sense a classical measure of complexity, in the same sense as Shannon information, and it is expressed in the same unit (bits). Shannon information is used more explicitly in the procedure elucidated by Durston in his fundamental paper, and the variation in Shannon information is used there to indirectly measure the functional complexity in sets of natural proteins with the same function. I refer you to that paper for that. Again, we can discuss these points in detail if you want. The requirement that dFSCI be "scarcely compressible" is necessary to exclude strings which can be the output of necessary algorithm, as debated in my previous posts. Protein sequences definitely satisfy that requirement, as you can see from the following non ID paper: http://www.bioinf.unisi.it/materiale/master2006/modelli/MASTER_SI_2005/BIBLIOGRAPHY/Herzel1.pdf Therefore, if strings which exhibit dFSCI have to be scarcely compressible by definition, their general complexity is approximately the same as their Kolmogorov complexity. (just a note here: I am not a mathematician, so if I make some formal errors in specific issues, I will be happy to be corrected). Regarding the information in simple data, like weather data, I agree that it is information, but it is not functional information: simple data in themselves do not convey any explicit function coded in their sequence which can be recognized ny a conscious observer. I have debated this point in some detail with KF on another thread. If you are interested, I will paste the link. What is meant by “complexity” and how does it differ from information, given that Kolmogorov complexity is treated as part of information theory? Complexity is a classical measure of information in bits. Functional complexity is the measure of information which derives from the ratio of the target space and the search space, in scarcely compressible strings, and is multiplied by a "specification coefficient" which can be binary (function present or absent), or can be a number from 0 to 1, expressing a quantitative assessment of the function. The definition of the function and its measure (both categoric or quantitative) depend on a conscious observer, but are made explicitly. I have already answered about Kolmogorov complexity. As for “specified”, does that refer to prior constraints on the range of behavior of which a given system night be capable, and can these restraints only come from a intelligent agent or might they originate in natural properties? Specified refers to the satisfaction by the system of an explicit function defined by a conscious observer, and measured according to an explicit procedure. 
It is, certainly, a "constraint on the range of behavior of which a given system might be capable". It is not necessarily "prior": the function is observed and recognized in the functional system (for instance, the protein), and explicitly defined. Then, it becomes a prior constraint for the evaluation of the functional (target) space. And yes, the function can "only come from an intelligent agent", in the sense that only an intelligent agent can define a function, because the concept of function implies purpose, and purpose is a conscious representation. Obviously, natural properties (in the sense of non designed systems) can exhibit "apparent" function: an intelligent agent could recognize a function in a system which was not designed by an intelligent agent (that would be a "false positive" specification). But those false positives are never complex beyond some definite threshold of complexity (let's say, for the moment, Dembski's UPB). All information which is apparently specified and complex is designed by an intelligent agent (leaving out biological information, which is the object of our discussion, and whose status is what has to be decided). My understanding of "digital" is that it derives from computing where the machines are founded on devices which can occupy one of two discrete states, 'on' or 'off', represented as '0' or '1'. Is it being argued that the component or functional structures of the genome are also strictly binary and that they either work or don't work? No, "digital" refers to the fact that the information in the string is written according to a numeric code, and is read according to it. Probably, it could be defined "digital and symbolic". I am not here to fight about words. The concept is simple. I am referring in particular to the information in protein coding genes. It is digital, because it is a sequence of 4 "letters" (the nucleotides), read in "words" of 3 nucleotides, according to a specific redundant code which is contained in a specific set of enzymes (aminoacyl tRNA synthetases). The information about protein sequence is symbolically coded in the gene sequence. It could certainly be expressed in classical binary code, through a simple conversion. The fact that the DNA information uses a quaternary code does not change anything in itself. I am using "digital" to exclude from the discussion analog information, which requires an analog-to-digital conversion to be expressed in digital form. I am doing that not because analog information cannot be CSI (it definitely can), but because digital information is easier to discuss, and because all information in protein genes is digital. I think there is a misleading tendency, which is not peculiar to ID proponents but is also prevalent in the evolutionist camp, to regard information as a property or constituent of the genome, where I would see it rather as a property of our model of said genome. It is confusing the map for the territory. Separating the map from the territory is one of my favourite principles (I am a partial fan of NLP). So, I really don't want to confuse them. And I don't think I have. You can see that I have always explicitly separated the points where the intervention of a conscious intelligent agent is required from the points which can be defined in an entirely "objective" way. One of my strong points in the above discussion is that no function can be defined in a purely "objective" way. Function is always defined by intelligent agents.
But the other important point is that function can be implemented by intelligent agents in objective systems, and can be recognized by intelligent agents in objective systems. In that case, there is the inference of purpose made by an intelligent observer about an intelligent agent who designed the system. Such an inference can be right or wrong. If it is wrong, we have recognized a "pseudo-function": the system was not designed, no purpose was implemented in it by any intelligent designer, and we have a false positive. If it is right, we have correctly recognized a conscious process through its intelligently designed output. Now, the whole point of ID is: complexity allows us to distinguish between true function and pseudo-function: no pseudo-function is ever complex, not beyond some appropriate conventional threshold. Complexity eliminates the problem of false positives. (False negatives cannot be eliminated: simple systems can be designed, but not recognizable as such with certainty.) So, functional information is always "a property of our models". That's the point. Our model is a way to recognize the model used by another intelligent agent in designing the objective system. Complexity excludes the possibility of error due to random occurrence of pseudo-function. In that sense, design recognition is a successful communication of meaning between two intelligent agents. The map is not the territory, but maps can definitely be shared.
gpuccio
June 5, 2010 at 10:00 PM PDT
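The comment above reduces to a simple decision rule: compute the functional complexity in bits and infer design only when it exceeds a conventional threshold, so that chance "pseudo-function" is excluded. Here is a minimal Python sketch of that rule, assuming a binary specification coefficient and a roughly 500-bit universal probability bound; the function names and the example fractions are purely illustrative, not taken from any published ID tool.

```python
import math

UPB_BITS = 500  # roughly Dembski's universal probability bound of 1 in 10^150

def functional_bits(target_space: float, search_space: float, spec: float) -> float:
    """Functional complexity in bits: -log2(target/search), scaled by the
    specification coefficient (1 = function present, 0 = absent)."""
    if spec == 0:
        return 0.0  # no recognized function, hence no functional complexity
    return -math.log2(target_space / search_space) * spec

def infer_design(bits: float, threshold: float = UPB_BITS) -> bool:
    """Infer design only above the threshold; simpler apparent functions are
    treated as possible false positives (chance 'pseudo-function')."""
    return bits > threshold

# Illustrative numbers only: a functional fraction of 10^-200 clears the bound,
# while a fraction of 10^-77 does not under the universal 500-bit threshold.
print(infer_design(functional_bits(1.0, 1e200, 1)))  # True  (about 664 bits)
print(infer_design(functional_bits(1.0, 1e77, 1)))   # False (about 256 bits)
```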
Seversky: This is my main post from the Ayala thread. I paste it here, even if the final part was already pasted in #32, because it includes an important premise which is specially relevant for your questions. I apologize for the partial repetition: Now, IMO what BA is saying here is: “you have no algorithmic mechanism to generate fucntionally specified complex information”, because FSCI can be generated only by a conscious agent. That’s exactly the point of ID. So, please let’s go back to the classical concept of FSCI, or if you prefer (I definitely do) to its subset of digital functionally specified complex information (from now on: dFSCI). I have recently posted about that in another thread, debating alsio its measure (so please, those who say that we never go quantitative about that, please read more carefully). I paste here some pertinent comments I posted elsewhere: “2) Consciousness and ID. I did not realize for a long time the importance given in ID to “consciousness”. Its hard to fathom how you believe that some process has to experience its environment the way people do (what else could “consciousness” mean) in order for it to create complex specified output. Even bodily organs do incredibly complex things, without having to sense or understand the world the way that you or I do. Of course consciousness in central in ID theory. ID is about detecting design in things. Design is a process which originates in conscious intelligent beings (the designers). ID affirms that designed objects are recognizable with certainty as such if they exhibit a specific property, CSI. CSI is the main idea in ID. It is objectively recognizable, and in the known world it is always the product of design by an intelligent conscious being (leaving apart biological information, which is the object of the discussion). A special subset of CSI, digital functionally specified complex information (or, if you want, dFSCI), is specially useful for the discussion. It is easily definable as any string of digital information with the following properties: complexity higher than 10^150 (that is, length of about 500 bits); non significant compressibility (it cannot be generated through laws of necessity from a simpler string); and a recognizable, objectively definable function. That definition is very strong and useful. According to that definition, dFSCI includes language, software and practically all relevant biological information (in particular, the sequences of protein coding genes and the primary sequences of proteins). It is easy to show that no example is known of dFSCI (apart from biological information, which is the object of the debate) whic does not originate from a cosncious intelligent being (humans). And our common experience is that consciousness and intelligence are exactly the faculties used by humans in producing dFSCI. Biological information is dFSCI (any functional protein is). That’s why ID, with very sound inference based on analogy, assumes that some conscious and intelligent designer is the origin of biological information. That is, very quickly, the main idea in ID. Neo-darwinism cannot explain the emergence of dFSCI in living beings. The work of a designer can. I would like to mention that dFSCI originates from conscious intelligent beings directly; ot indirectly, through some non conscious machine which has received from an intelligent conscious being the pertinent dFSCI. In other words, Hamlet is dFSCI. Hamlet can be outputted by a PC, but only if someone has inputted it in the software. 
No computing machine can create Hamlet (or anything equivalent). Specification, function and purpose are definable only in relation to consciousness. Only consciousness recognizes them actively. So, consciousness is central to ID. Without consciousness, no function can be recognized. With consciousness, function can be defined, recognized and measured. And function is the only relevant form of specification in biological information. To go to your examples, bodily organs do not output dFSCI, even if they do complex things. A mchine can do complex things according to the CSI which has been inputted in the machine, but it cannot generate new dFSCI. The human body as a whole can generate new dFSCI (speaking, writing, programming) only because it is an interface for a conscious intelligent being. 3) Types of digital information. But complex meaningful sequences will not be found in monotonic strings, only in the amount of variation provided by randomness. We have three types of digital information: a) highly compressible strings, like monotonic strings. These are not dFSCI. b) truly random strings (high complexity, no functional specification). These are not dFSCI. c) pseudo-random strings, where a recognizable meaning is superimposed to the random structure by an intelligent designer (Hamlet, any software, any long discourse). And, obviouisly, any functional protein. These are dFSCI. About that, I would suggest that you read the following paper: Three subsets of sequence complexity and their relevance to biopolymeric information by David L Abel and Jack T Trevors available at the following URL: http://www.ncbi.nlm.nih.gov/pm…..MC1208958/” And, about its quantitative measure, another post of mine from another thread: “As this is a fundamental issue, I will try to be more clear. There is a general concept of CSI, which refers to any information which is complex enough (in the usual sense) and specified. Now, while I think that we can all agree on the concept of complexity, some problems appear as soon as we try to define specification. There is no doubt that specification can come in many forms: you can have compressibility, pre-specification, functional specification, and probably others. And, in a sense, any true specification, coupled to high complexity, is a mark of design, as Dembski’s work correctly affirms. But the problem is, some kinds of specifications are more difficult to define universally, and in some of them the complexity is more difficult to evaluate. Let’s take compressibility, for instance. In a sense, true compressibility which cannot be explained in any other way is a mark of design. Take a string of 1000 letters, all of which are “a”. You can explan it in two different ways: 1) It is produced by a system which can only output the letter “a”:in other words, it is the product of necessity. No CSI here. 2) It is the output of a truly random system which can output any letter with the same probability, but the intervention of a conscious agent has “forced” an output which would be extremely rare and which is readily recognizable to consciousness. The string is designed to be highly compressible. In any case, you can see that using the nconcept of compressibility as a sign of specification is not without meaning, but creates many interpretational problems. Or, take the example of analog specified information, like the classic Mount Rushmore example. 
The specification is very intuitive, but you have two problems: 1) The boundary between true form and vague resemblance is difficult to quantify in analog realities. 2) It is difficult to quantitavely evaluate the complexity of an analog information. For all these reasons, I have chosen to debate only a very specific subset of CSI, where all these difficulties are easily overcome. That subset is dFSCI. A few comments about this particular type of CSI: 1) The specification has to be functional. In other words, the information is specified because it conveys the intructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. It is interesting to onserve that the concept of functional specification is earlier than Dembski’s work. 2) The information must be digital. Tha avoids all the problems with analo information, and allows an easy quantification of the search space and of the complexity. 3) The information must not be significantly compressible: in other words, it cannot be the output of an algorithm based on the laws of necessity. 4) If we want to be even more restrictive, I would say that the information must be symbolic. In other words, it has to be interpreted through a conventional code to convey its meaning. Now, in defining such a restricted subset of CSI, I am not doing anything arbitrary. I am only willfully restricting the discussion to a subset of objects which can be more easily analyzed. The discussion will be about these objects only, and any conclusion will be about these objects only. So, if we establish that objects exhibiting dFSCI are designed, I will not try to generalize that conclusion to any other type of CSI. Objects exhibiting analog specified information or compressible information can certainly be equally designed, but that’s not my problem, and others can discuss that. And do you know why it’s not my problem? Because my definition of that specific subset of CSI includes anything which interests me (and, I believe, all those who come to this blog). It includes all biological information in the genomes, and all linguistic information, and all software. That’s more than enough, for me, to go on in the discussion about ID. So, to answer explicitly your questions: 1) The presence of CSI is a mark of design certainly under the definition I have given here (dFSCI), and possibly under different definitions. I am not trying here to diminish in any way the importance of other definitions, indeed I do believe them to be perfectly valid, but here I will take care only of mine. 2) I have no doubt that, under my definition, there is no example known of CSI which is not either designed by humans or biological information. Nobody has ever been able to provide a single example which can falsify that statement. And yet even one example would do. 3) CSI in the sense I have given is certainly an objective measure. The measure only requires: a) an objective definition of a function, and an objective way to ascertain it. For an enzyme, that will be a clear definition of the enzymatic activity in standard conditions, and a threshold for that activity. The specification value will be binary (1 if present, 0 if not). b) A computation of the minimal search space (for a protein of length n, that would be at least 20^n). 
c) A computation, or at least a reasonable approximation, of the number of specific functional sequences: in other words, the number of different protein sequences of maximum length n which exhibit the function under the above definitions. The negative logarithm of (c/b) * a will be the measure of the specified complexity. It should be higher than a conventional threshold (a universal threshold of 10^150 is fine, but a biological threshold can certainly be much lower). For a real, published computation of CSI in proteins in the above sense with a very reasonable method, please see: Measuring the functional sequence complexity of proteins, by Durston KK, Chiu DK, Abel DL, Trevors JT, Theor Biol Med Model. 2007 Dec 6;4:47, freely available online at: http://www.ncbi.nlm.nih.gov/pm…..ool=pubmed” Finally, for those who ask about units, it should be obvious that the complexity is measured in a way which is similar to the way we measure Shannon’s entropy, in bits, with the difference that specification must be present (must have value 1), otherwise there is no functional complexity.
gpuccio
June 5, 2010 at 08:49 PM PDT
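To make the a/b/c measurement procedure quoted above concrete, here is a small Python sketch of the computation for a protein, worked in log space (length times log2(20) for the search space, minus log2 of the functional count, times the binary specification coefficient). The 300-residue length and the 10^40 functional-sequence count are assumed figures for illustration only; for actual measured values, see the Durston et al. paper cited above.

```python
import math

def protein_dfsci_fits(length: int, functional_sequences: float, spec: int = 1) -> float:
    """dFSCI per the a/b/c procedure quoted above:
    b) minimal search space = 20^length (20 amino acids per position),
    c) number of sequences that pass the functional threshold,
    result = -log2(c / b) * a, with a the binary specification coefficient.
    Worked in log space so that long proteins do not overflow floats."""
    search_bits = length * math.log2(20)            # log2 of the search space b
    target_bits = math.log2(functional_sequences)   # log2 of the target space c
    return (search_bits - target_bits) * spec

# Illustrative figures only: a 300-residue protein for which 10^40 sequences
# are assumed to exceed the activity threshold.
print(round(protein_dfsci_fits(300, 1e40), 1))  # about 1163.7 fits
```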
Seversky: As you seem new to this specific discussion, I am pasting here the essential from my posts in the Ayala thread, for your convenience. That is just to set the background. Then, in the next post, I will address specifically your questions, which are certainly pertinent. So, let's start: 1) For my definition of dFSCI and of its measure, please refer to my post #32 here. 2) Here is an important discussion about the evolution of protein domains (which was an answer to Petrushka): Petrushka: "I am not terribly surprised that most genes, most metabolic mechanisms and all body plans, seem to have been invented or discovered by microbes or very simple organisms, presumably having large numbers and short spans between generations. It is not surprising that few fundamental inventions have been made by larger and slower reproducing creatures." My answer: "Now, I will just give some data, to show that your affirmations are vague and inaccurate. I will refer to the paper “The Evolutionary History of Protein Domains Viewed by Species Phylogeny” , by Song Yang, Philip E. Bourne, freely available on the internet. This paper analises the distribution of single protein domains as derived from SCOP (the database of protein families) in the evolutionary tree, and even the distribution of their unique combinations. The total number of independent domains in the whole proteome is 3464 and the total number of combinations is 116,400. The first important point is that about half of the domain information was already present at OOL (or at least, at the level of LUCA, if you believe in a pre-LUCA life): Protein domains 1984; Combinations 4631. The mean domain content per protein, at this level, is 2.33 So, the first point you have to explain is how 1984 protein domains were already working at the time of our common ancestor (supposedly 3.5 – 3.8 billion years ago), while not one of them can be found by a random search. The remaining half of the domains was discovered in the course of evolution, with the following pattern of new domains and domain combinations: Archea: Protein domains 31; Combinations 323; Bacteria: Protein domains 467; Combinations 4537. Eukaryota: Protein domains 520; Combinations 7192. Fungi: Protein domains 56; Combinations 3089. Metazoa: Protein domains 209; Combinations 12304. So, the next point is: about 1313 new domains arose after LUCA, and of them only 31 and 467 arose in archea and bacteria respectively, while the rest was “discovered” by more complex organisms. So, again, how do you explain the 209 new domains in metazoa? Another point: in the final parts of evolution, the search for new domains seems to be almost completed: for example, only about ten new domains appear in mammalia. On the contrary, the number of new combinations and the average complexity and length of the single protein definitely increase. So, to sum up: 1) More than half of the information for the proteome is already present at the stage of LUCA (or, if you want, at OOL, unless you can explain how such a complex LUCA originated in a relatively short time from inorganic matter) 2) The remaining new information was “discovered” during evolution, but certainly not only at the time of bacteria: at least half of the new domains appear in organisms more complex than bacteria, and about one quarter appear in metazoa. 
So, if it is true that the search for new domains “slows down” after OOL, and almost stops in the last stages, the successful search for new domains in the ocean of the search space definitely goes on for the whole evolutionary span. 3) While the search for new domains slows down, the search for new combinations in more complex protein increases along the evolutionary tree. In other words, as the repertoire of elementary folds is almost completed (demonstrating that targets not only exist, but can be fully achieved), the search for function is moved to a higher logical level. 4) Finally, we must remember that this analysis is only acoomplished at the level of protein genes (1.5% of the genome in humans). The non coding part of the genome constantly increases during evolution, and most of us (and, today, I would say most of the biologists) are convinced that non coding DNA is one of the keys to understanding genome regulation, and therefore body plans and many oyther things. Another, more complex level of abstraction and regulatory function which darwinists will have to explain after they have at least tried to explain the previous, simpler levels." 3) This was an answer to a specific question from BA about the papaer quoted previously: This is the method used in the paper I quoted: "Mapping of domains and domain combinations to species trees is too time-consuming to do manually. Our approach (see methods), similar to the approach introduced by Snel et al. [30], aims to predict the presence or absence of protein domains in ancestor organisms based on their distribution in present day organisms. Four evolutionary processes govern the presence or absence of a domain at each node in the tree: vertical inheritance, domain loss, horizontal gene transfer (HGT) and domain genesis. (Domain duplication and recombination do not affect domain presence.) Each process is assigned an empirical score according to their estimated relative probability of occurring during evolution, and the minimum overall score depicts the most parsimonious evolutionary processes of each domain or combination (see methods)". As you can see, it is based on empirical evidence, and not on functional reasoning. Obviously, it is only an estimate, and different approaches could give different numbers. The authors are well aware of that: "Table 1 lists the predicted number of domains and domain combinations originated in the major lineages of the tree of life. 1984 domains (at the family level) are predicted to be in the root of the tree (with the ratio Rhgt = 12), accounting for more than half of the total domains (3464 families in SCOP 1.73). This prediction is significantly higher than what is generally believed [5,31,32]. There are several reasons to account for the discrepancy. First, previous attempts focused on universal and ubiquitous proteins (or domains) in LUCA [5], so one protein has to exist in the majority of species in each of the three superkingdoms (usually 70%–90%) to be considered as LUCA protein [32]. Second, the root of the tree is still not solved. Thus any domains that are shared by two superkingdoms are counted as originating in the LUCA. Endosymbiosis of mitochondria and chloroplasts and horizontal gene transfer across superkingdoms can result in the same effect, which is moving the origin of protein domains towards the root. Third is our limited knowledge of protein domains. On average nearly 40% of predicted ORFs in the genomes under study cannot be assigned to any known domain. 
When assigned in the future they may turn out to be species or lineage specific domains that emerged relatively late on the tree of life. There are also a significant number of domains which emerge at the root of bacteria and eukaryotes. Likewise, this can be explained by the unresolved early evolution at the origin of bacteria and eukaryotes." So, we are not taking these numbers as absolute, but it is perfectly reasonable that the general scenario will be something like that, even if the numbers can change. The conclusions of the authors appear reasonable: "Notwithstanding, these data suggest that a large proportion of protein domains were invented in the root or after the separation of the three major superkingdoms but before the further differentiation of each lineage. When tracing outward along the tree from the root, the number of novel domains invented at each node decreases (Figure 4A). Many branches, and hence species, apparently do not invent any domains. As previously discussed, this might be a result of the incomplete knowledge of lineage specific domains." A functional approach is certainly possible too. That impllies having a model of the simplest living cell, and trying to estimate the number of necessary proteins and of necessary domains. The approach, anyway, is more conceptual, and not necessarily connected to evidence. Moreover, the definition of simplest living cell can vary, and a strictly reductionist approach, a la Venter, is certainly cutting down many of the naturally occurring functions. Therefore, I think that the empiric approach based on the vurrent distribution of damains and sequences is preferable, more scientific, and, I would say, perfectly “darwinian” (so that, for once, we could agree with our adversaries at least about one methodology ). Two facts cannot be questioned: 1) A lot of the protein domains were “discovered” at the root of the evolutionary tree. So darwinists must not only find a vaguely credible theory for OOL, but also one which is extremely efficient in respect to time, much more efficient than all later darwinian evolution, in order to explain how approximately half of the basic protein information was avalable after, say, 200 – 300 My from the start (whatever the “start” was). 2) Basic protein domain information is only the start, and definitely not the biggest part of the functional information to be explained. Then you have: a) The space of different protein functions in the context of a same domain (let’s remember that the same fold can have many different functions, and different active sites). b) The space of multidomain complex proteins, which implies a search in the combinatorial space of all domains. c) The fundamental space of protein regulation, maybe the biggest of all, which certainly implies at leat gene sequence, non coding DNA and epigenetic mechanisms. d) The space of multicellular integration. e) The space of body plans, system plans, organ plans, tissue plans, and so on. f) The space of complex integration to environment and higher cognitive functions (immune system, nervous system). That’s only a brief and gross summary. Each of these levels poses insurmountable impossibilities to the model of darwinian evolution. Unfortunately, most of these levels cannot yet be treated quantitatively for two reasons: 1) They are too complex 2) We know too little about them So, for the moment, let’s wait for answers about the first level, protein domain information, which is much easier to analyze. But I am not holding my breath. 
I hope the citation tags work as they should (it's a problem when pasting previous posts). Anyway, the rest is in the next post.
gpuccio
June 5, 2010 at 08:42 PM PDT
Petrushka: First of all, I am not discussing entropy here. It's not a subject I understand well enough to be able to discuss it. Second, it seems strange that you have not read my posts in the Ayala thread, where the unit "has been objectively defined and measured", and not only by me. No problem, anyway. I am going to paste them here (see next post). But please, read them. By the way, the unit is functional bits (or, according to Durston, fits).
gpuccio
June 5, 2010 at 08:24 PM PDT
I would value clear definitions of all the words used in this context. Does "information", for example, refer to teleo-semantic or Shannon information or Kolmogorov complexity or some other entity defined especially for the occasion? They are not all the same thing. Teleo-semantic information clearly requires intelligent agents as both sender and receiver, but we can also acquire information from weather, rocks and tree-rings where we have no reason to think there is any intelligent agency involved - apart from the observer. What is meant by "complexity" and how does it differ from information, given that Kolmogorov complexity is treated as part of information theory? As for "specified", does that refer to prior constraints on the range of behavior of which a given system might be capable, and can these restraints only come from an intelligent agent or might they originate in natural properties? My understanding of "digital" is that it derives from computing where the machines are founded on devices which can occupy one of two discrete states, 'on' or 'off', represented as '0' or '1'. Is it being argued that the component or functional structures of the genome are also strictly binary and that they either work or don't work? I think there is a misleading tendency, which is not peculiar to ID proponents but is also prevalent in the evolutionist camp, to regard information as a property or constituent of the genome, where I would see it rather as a property of our model of said genome. It is confusing the map for the territory.
Seversky
June 5, 2010 at 08:01 PM PDT
I have no doubt that you wish to discuss the quantification of information or entropy or whatever. I'm wondering if you can provide a specific example where the unit has been objectively defined and measured. I also wonder what could possibly be a better candidate than reproductive success as a measure of functional information, or entropy.
Petrushka
June 5, 2010 at 05:38 PM PDT
Aleta, Cassandra, Petrushka and others interested in quantification of dFSCI: I suggest that my post #32 here, together with the final part of the discussion on the Ayala thread, could serve as a starting point for an explicit discussion about this fundamental issue, if you are really available for a pragmatic, and not ideological, confrontation on the subject. At least, let's not hear any more that nobody in ID wants to discuss these quantitative aspects: it's exactly the contrary.
gpuccio
June 5, 2010 at 03:07 PM PDT
Petrushka: Would reproductive success qualify as an objective measure of functionality? If we are evaluating a single protein, the function must be defined for that protein: for example, the specific enzymatic activity of that protein. We must then define a threshold to measure if the function is considered present or not: that could be a specific value of activity in standard lab conditions. Finally, if we are using the evaluation of dFSCI in the context of a model where the function must be selectable, and therefore visible to NS, we have to show that the new functional protein can confer an increase in reproductive success. But that is an indirect effect of the function. The function must be specific for the protein, and must depend on the specific informational sequence of the protein.
gpuccio
June 5, 2010 at 02:58 PM PDT
Now this may be interesting for you, Acipenser: Excerpt: "While the issue is controversial, there are groups of paleontologists who have found evidence suggesting some mass extinctions were gradual, lasting for hundreds of thousands of years," Kortenkamp said. http://www.sciencedaily.com/releases/1998/05/980511075850.htm Abrupt and Gradual Extinction Among Late Permian Land Vertebrates in the Karoo Basin, South Africa Excerpt: the vertebrate fossil data show a gradual extinction in the Upper Permian punctuated by an enhanced extinction pulse at the Permian-Triassic boundary interval, http://www.sciencemag.org/cgi/content/abstract/1107068 “We see a gradual extinction leading to a sharp increase at the P/T boundary. This is followed by a continued extinction after the boundary,” Ward says. The team writes that the pattern “is consistent with a long-term deterioration of the terrestrial ecosystem,” http://www.geotimes.org/apr05/NN_PTextinction.html I have more studies, trilobites and such, and quotes from leading paleontologists, that all consistently show sudden appearance, fairly rapid diversity and then gradual loss of morphological variability, and then finally, gradual extinction over long periods of time (save for catastrophic extinctions). Even the recent whale study that came out this last week fits this pattern of rapid diversity and long term stability. Acipenser, this pattern that is being found is consistent with what the Genetic Entropy model predicts.
bornagain77
June 5, 2010 at 12:20 PM PDT
...The specification has to be functional. In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold....
Would reproductive success qualify as an objective measure of functionality?
Petrushka
June 5, 2010 at 12:08 PM PDT
You know, Acipenser, after thinking it over, I've realized you do have a somewhat reasonable objection: the one study does not establish a solid basis from which to draw a firm conclusion about declining reproductive health over long periods of time. Thus I'm sorry for saying that you were not reasonable on that point. But as to the other point, I still hold that evolutionists have a huge elephant in the living room that they are refusing to deal with: the slightly detrimental mutation studies I cited.
bornagain77
June 5, 2010 at 11:20 AM PDT
Acipenser: look at this post: http://www.americanprogress.org/issues/2009/07/reproductive_roulette.html Go down to where it says Declining Reproductive Health, part 1; read it slowly, read it again if need be, then please justify this statement of yours: "You've presented no evidence for a steady increase in birth defects." Then go to this post: https://uncommondescent.com/intelligent-design/andy-mcintoshs-peer-reviewed-id-paper-note-the-editors-note/ and read where it says 100,000 detrimental mutations and 6,000 genetic disorders (as per John Sanford, PhD in genetics); pay attention to the last two peer-reviewed papers, in which I cite that the detrimental mutation rate in humans is above what even evolutionists agree is acceptable, then please explain to me how I have not made my case. You know, on second thought, don't even bother; I'm tired of the incoherence of your reasoning, and when corrected you will not listen anyway.
bornagain77
June 5, 2010 at 10:35 AM PDT
BA77: You are not even scientifically in the ballpark for making a case for the steady increase of birth defects with your cause. And to top it all off, in "disconnect", you are only arguing for "limited" collateral damage from chemicals! You've presented no evidence for a steady increase in birth defects. You've provided no baseline measures with which to compare rates of defects to determine if they are increasing, decreasing, or at steady-state. If I state defects are a million a year that sounds like a big number, but without a denominator it is a meaningless assertion. Much like what you have presented. I also haven't made any case for "limited" collateral damage from chemicals. I have pointed out that there are numerous other cellular targets that chemicals may interact with, and the likelihood that chemicals will interact with these targets before the chemical has a chance to transit a cell's cytoplasm and enter the nucleus. That chemicals can interact with DNA is not in question; what is in question is your erroneous assertion that the majority of intracellular chemical interactions occur with DNA. Try a bit of research and see what you come up with for formation of protein adducts. Have fun with the handwaving!
Acipenser
June 5, 2010 at 10:05 AM PDT
Acipenser, you state: "You also haven't made your case for a decline in reproductive health… for any species. Nor have you made any case for a 'widespread increase in birth defects'." Typical Darwinian disconnect with reality! That is the one thing that is certain; the question is whether you can establish your "cause" of purely chemical, and "other", mitigating factors as well as I have established the case for accumulating mutational load being the cause. You are not even scientifically in the ballpark for making a case for the steady increase of birth defects with your cause. And to top it all off, in "disconnect", you are only arguing for "limited" collateral damage from chemicals! This is ludicrous; exactly how in the world is this limited damage scenario going to mitigate the buildup of slightly detrimental mutations I clearly established? Even you neo-Darwinists argue that natural selection (death) is the ultimate culler of these detrimental mutations; thus even you must agree, according to the premises of your own philosophy, that birth defects must increase so as to "appear" so that natural selection can act on them to eliminate them. Or do you deny the studies I cited? If so, you have left the bounds of science and there is no point in reasoning with you, since you in fact are not reasoning but clinging to blind faith.
bornagain77
June 5, 2010 at 09:44 AM PDT
BA77: you neglect the mutagenic effect chemicals have on DNA. I've not neglected it at all, but I did point out that it is a relatively minor player in causing harm. There are many levels at which a chemical may act to cause harm, from overt cell death to protein adducts to enzyme agonists and antagonists. To support your assertions you will need to demonstrate that chemical damage is equivalent to DNA damage. Reams of toxicological data do not support your assertions in the least. You also haven't made your case for a decline in reproductive health… for any species. Nor have you made any case for a "widespread increase in birth defects". Good luck with that.
Acipenser
June 5, 2010 at 09:14 AM PDT
Petrushka (#5): I’m always confused about how information is quantified. For example, if you ran Shakespeare’s works through a spell checker and regularized the spelling, would the amount of information change? What if a typographical error in preparing a manuscript changed the spelling of a word to another variant used by the same author? Along those lines, under what circumstances would a copy error in biological reproduction change the quantity of information? I think you are repeating here more clearly a question you already made in the Ayala thread, and I would like to answer it here. To do that, I have to cite here again my definition of dFSCI amd of its measure (excuse me for continuosly quoting myself, it's just to avoid repeating each time anew things already clarified). So, here is my definition: For all these reasons, I have chosen to debate only a very specific subset of CSI, where all these difficulties are easily overcome. That subset is dFSCI. A few comments about this particular type of CSI: 1) The specification has to be functional. In other words, the information is specified because it conveys the intructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. It is interesting to onserve that the concept of functional specification is earlier than Dembski’s work. 2) The information must be digital. Tha avoids all the problems with analo information, and allows an easy quantification of the search space and of the complexity. 3) The information must not be significantly compressible: in other words, it cannot be the output of an algorithm based on the laws of necessity. 4) If we want to be even more restrictive, I would say that the information must be symbolic. In other words, it has to be interpreted through a conventional code to convey its meaning. And here is the definition of the measure: 3) CSI in the sense I have given is certainly an objective measure. The measure only requires: a) an objective definition of a function, and an objective way to ascertain it. For an enzyme, that will be a clear definition of the enzymatic activity in standard conditions, and a threshold for that activity. The specification value will be binary (1 if present, 0 if not). b) A computation of the minimal search space (for a protein of length n, that would be at least 20^n). c) A computation, or at least a reasonable approximation, of the number of specific functional sequences: in other words, the number of different protein sequences of maximum length n which exhibit the function under the above definitions. The negative logarithm of (c/b) * a will be the measure of the specified complexity. It should be higher than a conventional threshold (a universal threshold of 10^150 is fine, but a biological threshold can certainly be much lower). For a real, published computation of CSI in proteins in the above sense with a very reasonable method, please see: Measuring the functional sequence complexity of proteins. by Durston KK, Chiu DK, Abel DL, Trevors JT Theor Biol Med Model. 2007 Dec 6;4:47. freely available online at: http://www.ncbi.nlm.nih.gov/pm…..ool=pubmed” Finally, for those who ask about units, it should be obvious that the complexity is measure in a way which is similar to the way we measure Shannon’s entropy, in bits, with the difference that specification must be present (must have value 1), otherwise there is no functional complexity. 
So, let's start from that and try to apply it to Shakespeare's work. To make the discussion easier, let's speak just of Hamlet. First of all, let's start with point b), which is the easiest to calculate. What is the minimal search space of the text of Hamlet? I have pasted an electronic version of the text in Word, and counted the characters, including spaces: they are 172,309. Considering, for simplicity, the alphabet (including punctuatuion) at 30 characters (they are probably a little bit more), the whole search space would be 30^172309, which is about 2^844314. So the complexity of the minimal search space is about 844314 bits. Now, let's go to point a). Is Hamlet specified? Certainly yes, and so we can give a value of 1 to the specification coefficient. But, unfortunately, there are many different ways to define the function, which are all valid, but each of it will have different consequences on our computation of point c). For simplicity, I will use here a "weak" definition, which is maybe the simplest, coupled to a "strong" functional test: - Any string which conveys to a reader all the story of the drama, without losing any detail. We could have added that no emotional connotation, or artistic efficacy, or linguistic detail and connotation be lost, but that would be more difficult to define objectively. So, let's stick to our first definition. That implies also a possible (strong) test to verify function: in other words, if any reader with a modified version of the text is not able to answer correctly any explicit question about what happens in the drama, even about details, we will assume some loss of function and shift the value of the specification coefficient to 0. That is obviously rather strong. We could define a lower threshold, specifying that the reader must be able to answer correctly about all questions about the character's actions, but not necessarily about their exact words, or give any suitable definition we want. The possibility to give different definitions of the functional specification is implicit in the fact that the function is recognized and defined by a conscious observer. The explicit definition can vary, and indeed in the case of language, as we have seen, there are really many possibilities. That is not a problem, because the measure of dFSCI is in any case pertinent to a single explicit definition of function. So, the measure will vary according to the definition it is based upon, but will be objectively defined for each explicit definition. Any explicit definition of function can be used, provided it is consistent with the model one is deriving from it. This point is important, so let's see how it would apply in the case of proteins. Here the definition of function is easier: it is usually the recongized function of the protein, such as its enzynatic activity. It is usually specific and well known. But, to measure dFSCI, we still have to give a quantitative threshhold to ascertain the fucntion and give a value of 1 or 0 to the specification coefficient. That is somewhat arbitrary, but if our purpose is to measure function which would be selectable in vivo, than we can put the threshold at a value which is reasonably consistent with that point. In other words, we can use any functional threshold we want, but the important point is that the conclusions we will derive from the measurement in our model are consistent with the value we used. So, let's go back to Hamlet. 
We have our point b), the search space, and we have our point a), an operational definition and a quantitative test. According to a), the text of Hamlet, in its native form, is definitely specified (value = 1). How big is c), the size of the functional space, the target space? This is always the most difficult part. I would like anyway to mention here that our definitions give us an explicit way of measuring it: to test all the possible strings of that length (or less) and to define as functional all those which will satisfy our functional test. And just count them. Unfortunately, that computation, while perfectly possible in principle, is empirically impossible because of the huge size of the search space. That's why the computation of the target space must usually be approximated by some indirect method. Now, I have no idea of how to do that for Hamlet (while I have ideas for proteins, and I refer again to Durston's paper or to Axe's recent paper for examples). So, just to go on with our example, let's pretend that we have approximated our number of functional strings to 2^200000 (which is a big number indeed, and IMO should more than cover your examples of different spelling and similar variations). That said, with the definitions we gave and the values obtained, the dFSCI of Hamlet is: target space / search space = 2^200000 / 2^844314 = 2^-644314; negative logarithm: 644314 bits; multiplied by the specification factor: 644314 * 1 = 644314 fits. This measure does not change with any variation which keeps the text inside the functional island we defined. It drops to zero as soon as the modified text leaves the target space. Obviously, it is perfectly possible to define the specification coefficient so that it is not binary. In that case, we could define it as a percentage of some reference function, and then multiply the bits complexity by that coefficient to get the fits value of dFSCI. The point is, we can measure dFSCI, and if we are consistent with our definitions, the measure will be objective and useful.
gpuccio
June 5, 2010 at 08:51 AM PDT
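As a check on the arithmetic in the Hamlet example above, the same figures can be reproduced in a few lines of Python. The 2^200000 target space is gpuccio's stated assumption, and the small difference from his totals comes only from how log2(30) is rounded.

```python
import math

# gpuccio's Hamlet figures: 172,309 characters from a ~30-symbol alphabet,
# with an assumed target space of 2^200000 functional variants.
search_bits = 172309 * math.log2(30)   # log2 of 30^172309
target_bits = 200000                   # log2 of the assumed target space
spec = 1                               # the native text satisfies the functional test

dfsci_fits = (search_bits - target_bits) * spec
print(round(search_bits))  # ~845,501 bits (the comment's 844,314 uses log2(30) ~ 4.9
                           # rather than ~4.907)
print(round(dfsci_fits))   # ~645,501 fits, versus the 644,314 quoted above
```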
Acipenser, you state: "To think that chemical exposure automatically implicates genetic damage is naive at best." So, Acipenser, what do we know for sure? We know for a fact that reproductive health is steadily declining; whether this is ALL due to chemicals I really question, since it could just as well be reflective of the "natural" mutational load that has been accumulating. I.e., if chemicals are playing any significant role in such a nationwide catastrophe, I would say the most reasonable explanation is that the environmental chemicals, whatever they are, of national scale (they don't say for sure), are merely exacerbating (and reflecting) an already existent problem of a steady decline in birth rates. In fact, you have no nationwide chemical agent to blame for such a widespread increase in birth defects, that I know of, whereas I do have an effect that touches every single person in this nation which can explain the effect in question more satisfactorily. As well, though you allude to teratogenesis (which I believe is defects brought about during development due to exposure to chemicals), you neglect the mutagenic effect chemicals have on DNA. I.e., are the reproductive DNA molecules somehow immune from the detrimental effects of chemicals that cause gross deformities? Please tell me of this unknown barrier that gives the sperm and egg such added protection that is not visited on the embryo itself. The overall pattern of evidence itself is clearly in favor of the Genetic Entropy model: the evidence for the detrimental nature of mutations in humans is overwhelming, for scientists have already cited over 100,000 mutational disorders. Inside the Human Genome: A Case for Non-Intelligent Design - Pg. 57, by John C. Avise. Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens." I went to the mutation database website cited by John Avise and found: HGMD®: Now celebrating our 100,000 mutation milestone! http://www.biobase-international.com/pages/index.php?id=hgmddatabase I really question their use of the word "celebrating". (Of note: the number for Mendelian genetic disorders is quoted to be over 6000 by geneticist John Sanford in 2010.) "Mutations" by Dr. Gary Parker. Excerpt: human beings are now subject to over 3500 mutational disorders. (This 3500 figure is cited from the late 1980's.) http://www.answersingenesis.org/home/area/cfol/ch2-mutations.asp Human Evolution or Human Genetic Entropy? - Dr. John Sanford - video http://www.metacafe.com/w/4585582 This following study confirmed the "detrimental" mutation rate for humans per generation, of 100 to 300, estimated by John Sanford in his book "Genetic Entropy" in 2005: Human mutation rate revealed: August 2009. Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome.
(Of note: this number is derived after "compensatory mutations") http://www.nature.com/news/2009/090827/full/news.2009.864.html This mutation rate of 100 to 200 is far greater than even what evolutionists agree is an acceptable mutation rate for an organism: Beyond A 'Speed Limit' On Mutations, Species Risk Extinction Excerpt: Shakhnovich's group found that for most organisms, including viruses and bacteria, an organism's rate of genome mutation must stay below 6 mutations per genome per generation to prevent the accumulation of too many potentially lethal changes in genetic material. http://www.sciencedaily.com/releases/2007/10/071001172753.htmbornagain77
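Purely as illustrative arithmetic (a sketch of the comparison being drawn above, not the method of either cited study), the gap between the two figures can be written out directly:

per_generation = 100   # lower end of the 100-200 new mutations per generation cited above
speed_limit = 6        # mutations per genome per generation cited from the Shakhnovich study

print(per_generation / speed_limit)      # roughly 16.7 times the cited threshold

def accumulated(generations, rate=per_generation):
    # Naive running total, assuming (for illustration only) that selection removes none.
    return generations * rate

print(accumulated(10))                   # 1000 mutations after ten generations under that assumption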
June 5, 2010 at 07:38 AM PDT
BA77: They blame the cause on chemicals, which of course implicates detrimental mutations, which of course implicates genetic entropy.
DNA damage is only one of many possible endpoints as a result of exposure to xenobiotics. To think that chemical exposure automatically implicates genetic damage is naive at best. How have you ruled out all other possibilities, e.g., teratogenesis?
Acipenser
June 5, 2010 at 07:04 AM PDT
SCheesman, thanks for pointing me to this line of reasoning of using other metrics. And the second link I clicked on for declining birth rates revealed:

"Reproductive health in the United States is headed in the wrong direction on a host of indicators. Fertility problems, miscarriages, preterm births, and birth defects are all up. These trends are not simply the result of women postponing motherhood. In fact, women under 25 and women between 25 and 34 reported an increasing number of fertility problems over the last several decades. Nor are reproductive health problems limited to women. Average sperm count appears to be steadily declining, and there are rising rates of male genital birth defects such as hypospadias, a condition in which the urethra does not develop properly." http://www.americanprogress.org/issues/2009/07/reproductive_roulette.html

They blame the cause on chemicals, which of course implicates detrimental mutations, which of course implicates genetic entropy.
bornagain77
June 5, 2010 at 05:49 AM PDT
SCheesman:
For now, at least the fecundity of such populations makes loss due to mutational damage irrelevant.
Petrushka:
Can you name a population — any species — for which this is not true?
Well, if that is true of all species, then trying to measure the effect of mutational damage on populations is impossible. Impossible, that is, until the resiliency of the population to reproduce can no longer overcome the losses due to mutational entropy, and then you'd get population decline and extinction in rapid order. That, too, is simple mathematics. This doesn't mean that mutational entropy is not real; it just means you need a different metric to measure it than population growth/decline. Other factors are far more important in determining populations, except in the critical case.
SCheesman
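As a toy sketch of the threshold behaviour described here (all numbers are hypothetical, chosen only to illustrate the claimed dynamic; this is not a model taken from Sanford or anyone else in the thread):

def project_population(pop=10_000, surplus=0.02, cost_per_gen=0.001, generations=60):
    # surplus: per-generation reproductive excess; cost_per_gen: extra fitness cost
    # attributed each generation to accumulating mutational load (hypothetical values).
    history = []
    load = 0.0
    for _ in range(generations):
        load += cost_per_gen
        net_growth = surplus - load          # turns negative once load exceeds surplus
        pop = max(0.0, pop * (1.0 + net_growth))
        history.append(round(pop))
    return history

print(project_population())  # grows slightly at first, then declines at an accelerating rate

The only point of the sketch is that the head count can look perfectly stable right up to the crossover, which is why a metric other than population growth/decline would be needed before that point.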
June 5, 2010 at 05:31 AM PDT
Petrushka, you then state: "And yet they (bacteria) do not go extinct due to genetic entropy. In fact, a single microbe can give rise to a diverse population, even produce variations capable of exploiting a new source of food."

Actually there are ancient bacteria that can no longer be found on the earth that were present millions of years ago, i.e. thus they, as far as we can tell, are extinct:

World's Oldest Known DNA Discovered (419 million years old) - Dec. 2009 Excerpt: But the DNA (of the 250 million year old bacteria) was so similar to that of modern microbes that many scientists believed the samples had been contaminated. Not so this time around. A team of researchers led by Jong Soo Park of Dalhousie University in Halifax, Canada, found six segments of identical DNA that have never been seen before by science. "We went back and collected DNA sequences from all known halophilic bacteria and compared them to what we had," Russell Vreeland of West Chester University in Pennsylvania said. "These six pieces were unique..." http://news.discovery.com/earth/oldest-dna-bacteria-discovered.html

Vreeland was referencing this earlier work of his on ancient bacteria as to the accusations of contamination:

The Paradox of the "Ancient" Bacterium Which Contains "Modern" Protein-Coding Genes: "Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels." Heather Maughan, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig and Russell H. Vreeland; http://mbe.oxfordjournals.org/cgi/content/full/19/9/1637

Yet when I asked Vreeland about a fitness test on these ancient bacteria, he said "only a creationist would ask that question" and then he lectured me a little without ever giving me a straight answer to my question. I thought the question was a fairly important and straightforward question that was "scientifically neutral", i.e. did the bacteria gain or lose functional complexity? Seems important to me. Anyway, no luck with Vreeland, so I then asked Dr. Cano, who works with ancient bacteria that are amber-sealed, about a fitness test on the ancient bacteria, and he graciously replied to me. In reply to a personal e-mail from myself, Dr. Cano commented on the "Fitness Test" I had asked him about:

Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate." - Fitness test which compared the 30 million year old ancient bacteria to its modern day descendants, RJ Cano and MK Borucki

Thus, the most solid evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy: a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking about an impressive loss of protein complexity, and thus loss of functional information, from the ancient amber-sealed bacteria.

According to prevailing evolutionary dogma, there "HAS" to be "significant genetic/mutational drift" to the DNA of bacteria within 250 million years, even though the morphology (shape) of the bacteria can be expected to remain the same. In spite of their preconceived materialistic bias, scientists find there is no significant genetic drift from the ancient DNA. I find it interesting that the materialistic theory of evolution expects there to be a significant amount of mutational drift from the DNA of ancient bacteria to its modern descendants, while the morphology can be allowed to remain exactly the same. Alas for the materialist once again, the hard evidence of ancient DNA has fallen in line with the anthropic hypothesis.

Petrushka, you then allude to bacteria utilizing a "new food source". Since you, as usual, cited no source for your claim, I will guess you are talking of Nylonase. Yet:

Nylon Degradation - Analysis of Genetic Entropy Excerpt: At the phenotypic level, the appearance of nylon degrading bacteria would seem to involve "evolution" of new enzymes and transport systems. However, further molecular analysis of the bacterial transformation reveals mutations resulting in degeneration of pre-existing systems. http://www.answersingenesis.org/articles/aid/v4/n1/beneficial-mutations-in-bacteria

As well: The non-randomness and "clockwork" repeatability of the nylon adaptation clearly indicates a designed mechanism that fits perfectly within the limited "variation within kind" model of Theism, and stays well within the principle of Genetic Entropy, since the parent strain is still more fit for survival once the nylon is consumed from the environment. (Answers In Genesis) i.e. Evolutionists need to show a gain in functional complexity over and above what is already present in the parent strain, not just a variation from the parent kind that does not exceed functional complexity:

Is Antibiotic Resistance evidence for evolution? - "Fitness Test" - video http://www.metacafe.com/watch/3995248

As well, Petrushka, Lenski's work with "coddled" E. coli, when looked at closely, minus the evolutionary spin, clearly reveals genetic entropy. These following articles refute Lenski's supposed "evolution" of the citrate ability for the E. coli bacteria after 20,000 generations:

Multiple Mutations Needed for E. Coli - Michael Behe Excerpt: As Lenski put it, "The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions." (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn't tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski's mutant will turn out to be either this gene or another of the bacterium's citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn't yet tracked down the mutation.)... If Lenski's results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell. http://www.amazon.com/gp/blog/post/PLNK3U696N278Z93O

Lenski's e-coli - Analysis of Genetic Entropy Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less "fit" than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment. http://www.answersingenesis.org/articles/aid/v4/n1/beneficial-mutations-in-bacteria

Lenski's Citrate E-Coli - Disproof of "Convergent" Evolution - Fazale Rana - video http://www.metacafe.com/watch/4564682

Upon closer inspection, it seems Lenski's "coddled" E. coli are actually headed for "genetic meltdown" instead of evolving into something better.

New Work by Richard Lenski Excerpt: Interestingly, in this paper they report that the E. coli strain became a "mutator." That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal. http://www.evolutionnews.org/2009/10/new_work_by_richard_lenski.html

Further note: The Sheer Lack Of Evidence For Macro Evolution - William Lane Craig - video http://www.metacafe.com/watch/4023134
bornagain77
June 5, 2010 at 04:44 AM PDT
Petrushka, you state: "Genetic entropy seems to have originated as an axiom and it seems to survive despite being contradicted by evidence." But you provide no evidence; in fact, you never cite anything. You then state: "If the model reflects reality, it should be rather easy to point to populations that are declining due to infertility. Infertility caused by the accumulation of harmful mutations." When we look at the fossil record over long periods of time, so as to get a clear view of what is happening "in reality", we see:

The following article is important in that it shows the principle of Genetic Entropy being obeyed in the fossil record by trilobites, over the 270 million year history of their life on earth. (Note: Trilobites are one of the most prolific "kinds" found in the fossil record, with an extensive worldwide distribution. They appeared abruptly at the base of the Cambrian explosion with no evidence of transmutation from the "simple" creatures that preceded them, nor is there any evidence they ever produced anything else besides other trilobites during the entire time they were in the fossil record.)

The Cambrian's Many Forms Excerpt: "It appears that organisms displayed "rampant" within-species variation "in the 'warm afterglow' of the Cambrian explosion," Hughes said, but not later. "No one has shown this convincingly before, and that's why this is so important." "From an evolutionary perspective, the more variable a species is, the more raw material natural selection has to operate on,"... (Yet surprisingly)... "There's hardly any variation in the post-Cambrian," he said. "Even the presence or absence or the kind of ornamentation on the head shield varies within these Cambrian trilobites and doesn't vary in the post-Cambrian trilobites." University of Chicago paleontologist Mark Webster; article on the "surprising and unexplained" loss of variation and diversity for trilobites over the 270 million year time span that trilobites were found in the fossil record, prior to their gradual and total extinction from the fossil record about 250 million years ago. http://www.terradaily.com/reports/The_Cambrian_Many_Forms_999.html

In fact, the loss of morphological traits over time, for all organisms found in the fossil record, was so consistent that it was made into a scientific "law":

Dollo's law and the death and resurrection of genes: Excerpt: "As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo's law, after its propounder, and is taken as a general statement that evolution is irreversible." http://www.pnas.org/content/91/25/12283.full.pdf+html

A general rule of thumb for the "Deterioration/Genetic Entropy" of Dollo's Law, as it applies to the fossil record, is found here:

Dollo's law and the death and resurrection of genes ABSTRACT: Dollo's law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or "lost" developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints. http://www.pnas.org/content/91/25/12283.full.pdf+html

Dollo's Law was further verified down to the molecular level here:

Dollo's law, the symmetry of time, and the edge of evolution - Michael Behe Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo's law... Dr. Behe comments on the finding of the study: "The old, organismal, time-asymmetric Dollo's law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo's law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future." http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html
bornagain77
June 5, 2010 at 04:04 AM PDT
I won't be able to post on Saturday. I suppose you won't miss me.
Petrushka
June 4, 2010 at 09:09 PM PDT
The same problem of inbreeding in animal husbandry produces the same decline of fitness...
Inbreeding and low population numbers are a definite problem for a species. Microbes, however, are the ultimate inbreeders. They clone themselves. And yet they do not go extinct due to genetic entropy. In fact, a single microbe can give rise to a diverse population, even produce variations capable of exploiting a new source of food.
Petrushka
June 4, 2010 at 09:07 PM PDT
For now, at least the fecundity of such populations makes loss due to mutational damage irrelevant.
Can you name a population -- any species -- for which this is not true? In physics, entropy started as an observation, was tested for decades, and became a "Law" only after it was impossible to find an exception. Genetic entropy seems to have originated as an axiom, and it seems to survive despite being contradicted by evidence. The claim is that harmful mutations accumulate indefinitely. It is backed by a computer model. If the model reflects reality, it should be rather easy to point to populations that are declining due to infertility: infertility caused by the accumulation of harmful mutations.
Petrushka
June 4, 2010 at 09:03 PM PDT
And evolution is a gross violation of entropy, yet you accept evolution as true without so much as a ripple of doubt; interesting how selective your vision is. The slow accumulation of "slightly detrimental mutations" in humans, which are far below the power of natural selection to remove from our genomes, is revealed by the following fact:

"When first cousins marry, their children have a reduction of life expectancy of nearly 10 years. Why is this? It is because inbreeding exposes the genetic mistakes within the genome (slightly detrimental recessive mutations) that have not yet had time to "come to the surface". Inbreeding is like a sneak preview, or foreshadowing, of where we are going to be genetically as a whole as a species in the future. The reduced life expectancy of inbred children reflects the overall aging of the genome that has accumulated thus far, and reveals the hidden reservoir of genetic damage that have been accumulating in our genomes." - Sanford; Genetic Entropy; page 147

The same problem of inbreeding in animal husbandry produces the same decline of fitness. I would dig up the numbers, but it is late and I am fairly sure you wouldn't listen anyway. Bacteria have a much slower rate of decline, because of several intertwined reasons, but the decline is detectable by loss of fitness over millions of years.
bornagain77
June 4, 2010 at 08:48 PM PDT
Petrushka:
Feel free to publish a metric. The number of people sick from genetic disorders reflects our compassion and our developing medical technology, which enables very sick people to survive.
No, the length of people's survival may be extended by technology, but the number of people born ill is independent of it, unless you happen to blame technology for causing many such mutational illnesses in the first place. The sieve of survival works much more brutally and effectively on bacteria than on people. Bacteria don't protect their ill and build hospitals. That might explain the fact that they are still around despite the short generation time (and it is the generation time, not the population size, that is most significant). The eugenics movement would return us to the law of bacterial survival. And who is to say that genetic entropy does not severely affect bacterial populations? Do you know the percentage of bacteria that fail to develop due to mutational damage? For now, at least the fecundity of such populations makes loss due to mutational damage irrelevant.
SCheesman
June 4, 2010 at 08:45 PM PDT
The reason we don't see population declines is that the drive to reproduce is powerful, and still overcomes the "resistance". It doesn't mean, however, that the cost of mutations is not increasing, and it is in determining the cost that the objective reality of genetic entropy might be calculated
Feel free to publish a metric. The number of people sick from genetic disorders reflects our compassion and our developing medical technology, which enables very sick people to survive. If genetic entropy caused a general decline in population fitness, it would be reflected in declining population numbers throughout all living species. By mathematical logic, it should most severely affect populations that reproduce rapidly and have the most mutations. And that would be microbes. Insects should also be severely affected. If the cost of mutations is increasing, make some measurements in the most vulnerable populations. I hate to repeat, but the concept of entropy is not based on axiomatic reasoning or first principles. It is based on careful measurements.
Petrushka
June 4, 2010 at 08:21 PM PDT
Petrushka:
Mutations happen all the time. You probably carry several. And yet populations don’t seem to decline due to infertility. Hundreds of species are known to have gone extinct, but I can’t think of one that did so due to a population wide decline in fertility.
Think of a city with an increasing population. As the traffic increases, the percentage of people who make it to work still seems to stay around 100% of those attempting to do so. But the cost and difficulty increase, and people spend more and more time sitting in their cars. The reason we don't see population declines is that the drive to reproduce is powerful, and still overcomes the "resistance". It doesn't mean, however, that the cost of mutations is not increasing, and it is in determining the cost that the objective reality of genetic entropy might be calculated. How many are sick? How many die young? How many are infertile? Are these percentages stable or increasing? It may be difficult to determine these numbers, but it doesn't mean it cannot be done.
SCheesman
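One way the proposed tallies could be made concrete (a sketch only; the record fields and sample data below are hypothetical, not an actual dataset or an established epidemiological method): compute, per birth cohort, the fractions affected by genetic illness, early death, and infertility, and then check whether those fractions trend upward over time.

from collections import defaultdict

def cohort_rates(records):
    # records: iterable of (birth_year, genetic_illness, died_young, infertile) tuples
    # with boolean flags; all field names here are hypothetical.
    totals = defaultdict(lambda: [0, 0, 0, 0])   # per year: count, ill, died_young, infertile
    for year, ill, died_young, infertile in records:
        t = totals[year]
        t[0] += 1
        t[1] += ill
        t[2] += died_young
        t[3] += infertile
    return {year: (ill / n, dy / n, inf / n)
            for year, (n, ill, dy, inf) in sorted(totals.items())}

sample = [(1950, False, False, False), (1950, True, False, False),
          (1980, True, False, True), (1980, False, True, False)]
print(cohort_rates(sample))   # per-cohort (illness, early-death, infertility) fractions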
June 4, 2010 at 07:51 PM PDT
Petrushka:
If a putative change in fitness for a population doesn’t result in population decline, I fail to see how it can have any objective validity.
I expect the significant number of people suffering from genetic-based illnesses might disagree with your dismissal of their objective reality. There's more to existence than procreation.
SCheesman
June 4, 2010 at 07:36 PM PDT