Uncommon Descent Serving The Intelligent Design Community

Media Mum about Deranged Darwinist Gunman


John West of the Discovery Institute Reports:

But when a gunman inspired by Darwinism takes hostages at the offices of the Discovery Channel, reporters seem curiously uninterested in fully disclosing the criminal’s own self-described motivations. Most of yesterday’s media reports about hostage-taker James Lee dutifully reported Lee’s eco-extremism and his pathological hatred for humanity. But they also suppressed any mention of Lee’s explicit appeals to Darwin and Malthus as the intellectual foundations for his views. At least, I could find no references to Lee’s Darwinian motivations in the accounts I read by the New York Times, the Los Angeles Times, the Washington Post, ABC, CNN, and MSNBC.

Major Media Spike Discovery

Comments
Now now Indium, 'exploded' is such a real world term. Let's put your 'evolutionary mechanism' glasses on and see what happened: your IM probably had a random mutation that is right now, at this very Instant (pun intended), being selected for a new higher level function in your computer's program! In fact, in 'evolutionary mechanism land' you just got a free upgrade from Microsoft!!! 8) bornagain77
There, you did it. My IM exploded. Was to be expected I guess. Indium
Indium, you state: "Since your tornado approach is a straw man (see above) and does not take into account evolutionary mechanisms it is not worth discussing." Let me fix this statement for you, since you miswrote it: "Since your sure-footed approach is a valid example of reality (see above at your post) and does not take into account my imaginary evolutionary mechanisms that exist in my fantasy world it is not worth discussing." There you go Indium, all better now! 8) bornagain77
gpuccio: Oh, defensive mood? I did not want to ruin your day, my apologies! @151: Only when people want red herrings dragged across the track of truth and led out to ad hominem oil-soaked strawmen ignited to cloud, choke, confuse, poison and polarise the atmosphere. (Cheers to kairosfocus, too!) @ba77: Since your tornado approach is a straw man (see above) and does not take into account evolutionary mechanisms it is not worth discussing. Indium
Here is a direct reference for the e-coli quote: http://books.google.com/books?id=yNev8Y-xN8YC&pg=PA104&lpg=PA104&dq=a+one-celled+bacterium,+e.+coli,+is+estimated+to+contain+the+equivalent+of+100+million+pages+of+Encyclopedia+Britannica.+Expressed+in+information+in+science+jargon,+this+would+be+the+same+as+1012+bits&source=bl&ots=af595iZHH8&sig=uIFAYhd9WlHFybMNL4pyzYn4rn0&hl=en&ei=qjOOTJ7CNs_vngeeudWpCg&sa=X&oi=book_result&ct=result&resnum=2&ved=0CBgQ6AEwAQ#v=onepage&q=a%20one-celled%20bacterium%2C%20e.%20coli%2C%20is%20estimated%20to%20contain%20the%20equivalent%20of%20100%20million%20pages%20of%20Encyclopedia%20Britannica.%20Expressed%20in%20information%20in%20science%20jargon%2C%20this%20would%20be%20the%20same%20as%201012%20bits&f=false bornagain77
I don’t discuss things “in principle”.
Actually you do it all the time. You assert that evolution has been managed by design, that both mutations and selection have been influenced by design. You cannot cite any examples of this happening. No one has ever observed it happening. No one is willing to characterize the designer or the designer's methods or capabilities. So your argument boils down to saying that a process that can be observed and which can be subjected to controlled experimentation is less plausible than a process that has never been observed. Petrushka
Well Indium, you state my monkeys example is not 'how evolution works', but alas you provided zero evidence for evolution doing anything at all, besides your imaginative speculation for how it should work. Actually I felt my 'empirical test' of monkeys in a zoo was far closer to reality than any of the 'possible' speculations you have ever presented. Do you disagree that you are making the absurd claim that blind processes have produced more information than the Library of Congress?
"Again, this is characteristic of all animal and plant cells. Each nucleus ... contains a digitally coded database larger, in information content, than all 30 volumes of the Encyclopaedia Britannica put together. And this figure is for each cell, not all the cells of a body put together. ... When you eat a steak, you are shredding the equivalent of more than 100 billion copies of the Encyclopaedia Britannica." (Dawkins R., "The Blind Watchmaker" [1986], Penguin: London, 1991, reprint, pp. 17-18. Emphasis in original) http://members.iinet.net.au/~sejones/PoE/pe08clml.html
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world - The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library - have about 10 million volumes or 10^12 bits.” - R. C. Wysong http://www.why-the-bible.com/origins.htm
“an attempt to explain the formation of the genetic code from the chemical components of DNA… is comparable to the assumption that the text of a book originates from the paper molecules on which the sentences appear, and not from any external source of information.” Dr. Wilder-Smith
Indium, that you would argue about whether it is possible that a protein could have perhaps, maybe, possibly, changed into another protein illustrates how completely disconnected you are from the real world, for material processes have NEVER demonstrated the ability to produce ANY information, much less endless library shelves full of information that exceeds man's ability to encode information. notes:
The Coding Found In DNA Surpasses Man's Ability To Code - Stephen Meyer - video http://www.metacafe.com/watch/4050638
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 Excerpt: To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf
Can We Falsify Any Of The Following Null Hypotheses (For Information Generation)? 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag
Stephen C. Meyer - The Scientific Basis For Intelligent Design - video http://www.metacafe.com/watch/4104651
The DNA Enigma - Where Did The Information Come From? - Stephen C. Meyer - video http://www.metacafe.com/watch/4125886
The Capabilities of Chaos and Complexity - David L. Abel - 2009 Excerpt: "A monstrous ravine runs through presumed objective reality. It is the great divide between physicality and formalism. On the one side of this Grand Canyon lies everything that can be explained by the chance and necessity of physicodynamics. On the other side lies those phenomena that can only be explained by formal choice contingency and decision theory—the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used. Physical dynamics includes spontaneous nonlinear phenomena, but not our formal applied science called “nonlinear dynamics” (i.e. language, information)." http://www.mdpi.com/1422-0067/10/1/247/pdf
The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video http://www.metacafe.com/watch/4060532
etc.. etc.. etc..
Indium, it is completely ludicrous for you to pretend you are being reasonable by saying material processes can generate information when the fact is that you clearly dwell in a self-imposed fantasy land in which reality is never allowed to ruin your delusions. bornagain77
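[Editor's note: the order-of-magnitude figures quoted above (10^12 bits for "100 million pages of Encyclopedia Britannica") can be sanity-checked with a rough back-of-the-envelope calculation. The characters-per-page and bits-per-character values below are illustrative assumptions, not figures taken from the quoted sources.]

```python
import math

# Assumed figures for a rough order-of-magnitude check:
PAGES = 100_000_000     # "100 million pages", as quoted
CHARS_PER_PAGE = 2_500  # assumed characters on a typical printed page
BITS_PER_CHAR = 4.7     # ~log2(26) bits per letter, ignoring redundancy

total_bits = PAGES * CHARS_PER_PAGE * BITS_PER_CHAR
print(f"~10^{round(math.log10(total_bits))} bits")  # on the order of 10^12
```

Under these assumptions the total lands near the quoted 10^12 bits; different assumed page densities shift the result by well under an order of magnitude.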
Indium: As usual, you jump to conclusions. For my beers, I am more picky. gpuccio
So, everybody is happy and we can finally have a beer together! Cheers! Indium
Indium #149: Those are classical scenarios which have been used in discussions. And I agree that "it is just not the way evolution works". Evolution (the darwinian type) simply does not work at all :) . gpuccio
Indium: I am happy that you are happy about the exchange. You can remain with your happiness. I will remain with my well argued convictions. Lupas has shown a pathway? Would that be: "The similarity means that it is likely that the type I and type II phosphatases share a common ancestor. One possibility is that tandem duplication of an ancestral phosphatase domain and subsequent N- and C-terminal truncation lead to a permuted variant by a mechanism that has been well described elsewhere"? Thank God you are not in my position and I am not in yours. So, each one of us can be happy, even if for very different motives. I wish you the best, my friend. After all, "in principle" you could even be right... gpuccio
ba77: You are very fond of the tornado-in-the-junkyard scenario (or monkeys and typewriters). Sorry, nobody else is, since by definition it is just not the way evolution works. Indium
Indium, you state: "I have met your demand for a possible pathway from one group in the SCOP database to another". And it's possible an infinite number of monkeys banging away at typewriters produced the entire Library of Congress, but can you empirically demonstrate it to be plausible?
Monkey Theory Proven Wrong: Excerpt: A group of faculty and students in the university's media program left a computer in the monkey enclosure at Paignton Zoo in southwest England, home to six Sulawesi crested macaques. Then, they waited. At first, said researcher Mike Phillips, “the lead male got a stone and started bashing the hell out of it. Another thing they were interested in was in defecating and urinating all over the keyboard,” added Phillips, who runs the university's Institute of Digital Arts and Technologies. Eventually, monkeys Elmo, Gum, Heather, Holly, Mistletoe and Rowan produced five pages of text, composed primarily of the letter S. Later, the letters A, J, L and M crept in — not quite literature. http://www.arn.org/docs2/news/monkeysandtypewriters051103.htm
The Universal Plausibility Metric (UPM) & Principle (UPP) - Abel - Dec. 2009 Excerpt: Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes.",,,
cΩu = Universe = 10^13 reactions/sec X 10^17 secs X 10^78 atoms = 10^108
cΩg = Galaxy = 10^13 X 10^17 X 10^66 atoms = 10^96
cΩs = Solar System = 10^13 X 10^17 X 10^55 atoms = 10^85
cΩe = Earth = 10^13 X 10^17 X 10^40 atoms = 10^70
http://www.tbiomed.com/content/6/1/27
Probability's Nature and Nature's Probability: A Call to Scientific Integrity - Donald E. Johnson Excerpt: "one should not be able to get away with stating “it is possible that life arose from non-life by ...” or “it's possible that a different form of life exists elsewhere in the universe” without first demonstrating that it is indeed possible (non-zero probability) using known science. One could, of course, state “it may be speculated that ... ,” but such a statement wouldn't have the believability that its author intends to convey by the pseudo-scientific pronouncement." http://www.amazon.com/Probabilitys-Nature-Natures-Probability-Scientific/dp/1439228620
Could Chance Arrange the Code for (Just) One Gene? "our minds cannot grasp such an extremely small probability as that involved in the accidental arranging of even one gene (10^-236)." http://www.creationsafaris.com/epoi_c10.htm bornagain77
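[Editor's note: the "universal plausibility" bounds quoted from Abel above are simple products of three exponents, and can be reproduced directly. This is a minimal sketch of that arithmetic using the figures as quoted (10^13 interactions per second, 10^17 seconds, and the per-context atom counts); it works in exponents to keep the numbers readable.]

```python
# Probabilistic resources = (interactions/sec) * (seconds) * (atoms),
# using the exponents quoted in the Abel excerpt above.
RATE_EXP = 13   # ~10^13 fastest physical interactions per second
TIME_EXP = 17   # ~10^17 seconds of cosmic history

CONTEXTS = {"universe": 78, "galaxy": 66, "solar_system": 55, "earth": 40}

def probabilistic_resources_exp(atoms_exp):
    """Exponent of the total number of physical events available in a context."""
    return RATE_EXP + TIME_EXP + atoms_exp

for name, atoms_exp in CONTEXTS.items():
    print(f"{name}: 10^{probabilistic_resources_exp(atoms_exp)}")
```

Running this reproduces the quoted bounds: 10^108 (universe), 10^96 (galaxy), 10^85 (solar system), 10^70 (earth).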
Lupas explains a rather simple evolutionary pathway between proteins which are in distinct groups in the SCOP database, exactly what you wanted. This is not a "mere logical deduction", it is a clear and relatively simple evolutionary pathway, demonstrating that there are indeed evolutionary bridges between the SCOP groups, exactly what you denied. I agree that Lupas in other parts of the paper tries to speculate a bit, yes. So what? So far, I am very happy about this exchange: We have established that evolution can add information to genomes. We have seen that microevolutionary changes can lead to completely different changes in protein function and 3D structure, which makes microevolution extremely powerful, almost macroevolutionary... ;-) I have met your demand for a possible pathway from one group in the SCOP database to another, clearly demonstrating that the task for evolution can be much simpler than finding precise sequences of amino acids by random chance (which is basically the Axe thesis). This is also why I am not convinced by the Axe paper: he doesn't cite the relevant literature which demonstrates that nature does not have to search for new protein domains on an AA-per-AA basis. To be honest, if I were in your position now, I would not go the "This is not evidence!" and "Were you there?" way. I don't think that you would be in a scientifically interesting position once you go there... Indium
Indium: how different folds in the SCOP database can in principle (which is what you deny!) be reached by evolutionary processes? I have never affirmed that something cannot happen "in principle". I don't discuss things "in principle". I have specified hundreds of times that my discussion is about empirical evidence, not logical deductions. So, you are mistaken. From my post #112, which answered your post #110: "3) “In principle”, stranger things are possible. ID is not about what is impossible “in principle” (ID theory is not a mathematical deduction). ID, and empirical science, is instead about what is “empirically” impossible (or possible, or likely). If you are not interested in empirical science, it's your option." Why do darwinists start imagining things when they have no more arguments? And I state again that IMO the paper you linked gives no convincing empirical arguments. You are free to think differently. And I would be happy if you argued about the Axe paper for what it says, instead of criticizing the journal or invoking peer-review censorship. I am for ideas, not for authority. gpuccio
BTW, as a reviewer I wouldn't have let Axe publish his paper without citing Lupas (cited over 100 times, which is a LOT in this field), Taylor etc. It is a mystery how this could happen. What kind of journal is this? It seems there are only two articles published in the whole history of Bio-Complexity!? Indium
gpuccio, in which way does “Our inspection confirms a report that type I and type II phosphatases, partitioned into different folds in SCOP, are in fact related through a circular permutation (Fauman et al., 1998; Grishin, 2001b).” not constitute evidence for how different folds in the SCOP database can in principle (which is what you deny!) be reached by evolutionary processes? Is this now a kind of "Were you there?" argument? Indium
Indium: the paper you link is pure wishful thinking. It proves absolutely nothing. It is a vague theory, with no evidence at all. It does not support any credible origin of protein domains, any more than all the papers about an RNA world demonstrate that such a thing ever existed. Do you really believe that a theory is credible only because someone publishes it and supports it with vague abstract considerations? The difficulties I have pointed to are objectively there, and unexplained by current models. "Arguments" such as: "One possibility is that tandem duplication of an ancestral phosphatase domain and subsequent N- and C-terminal truncation lead to a permuted variant by a mechanism that has been well described elsewhere" or: "The numerous instances of local sequence and structure similarities within different protein folds, together with evidence from proteins containing sequence and structure repeats, argues in favor of the evolution of modern single polypeptide domains from ancient short peptide ancestors (antecedent domain segments (ADSs)). In this model, ancient protein structures were formed by self-assembling aggregates of short polypeptides. Subsequently, and perhaps concomitantly with the evolution of higher fidelity DNA replication and repair systems, single polypeptide domains arose from the fusion of ADSs genes. Thus modern protein domains may have a polyphyletic origin." are the usual fairy tales to which we are unfortunately accustomed. The truth is: the RNA world, self-assembling aggregates of short polypeptides, and similar speculations are entities which have never been observed, and which, for all we know, have never existed and never will exist. gpuccio
gpuccio, I hate to repeat myself, but what you ask can be found in the articles I linked to. For example: "Our inspection confirms a report that type I and type II phosphatases, partitioned into different folds in SCOP, are in fact related through a circular permutation (Fauman et al., 1998; Grishin, 2001b)." http://www.sdsc.edu/~shindyal/ejc020204.pdf I will have a look at the Axe paper again, thank you for the link. Indium
Indium: I think you misunderstand what I say. Proteins are classified in fundamental, separate 3D foldings. In the SCOP classification, as I mentioned, the grouping according to "folds" includes at present 1195 different groups. My number of 6000, also derived from the SCOP classification, was based on a less than 10% homology between the groups. I believe that both criteria can point to isolated islands of functionality. For now, just for clarity, let's stick to the more restrictive one: folds according to the SCOP classification.
My point is very simple, and I am surprised that you still don't get it: when I speak of attaining a new fold, I mean attaining a transition to a different fundamental group. If we use the "folds" classification, from one of the 1195 groups to another. If we use the 6000 sequence-based classification, from one of the 6000 groups to another. Is that clear?
Your examples are examples where the 3D fold changes a little, or a little more, but certainly does not deviate substantially from the initial fold, or if it does, it certainly does not achieve the functional fold in a new group, nor can it at any level be considered "a step" in that direction. So, in no way do your examples represent a transition, even partial, from one known functional fold grouping to another, unrelated one. Which was what I was requesting in my scenario.
And why am I requesting that? It's simple: my argument is that we have many fundamental islands of functionality in the existing proteome. Those islands are those fundamental folds, or superfamilies, or unrelated families, according to the modality of grouping we choose. Always according to the grouping, the number of these fundamental functional units in the proteome is at present about 1195 (folds) to 6000 (groups unrelated at primary sequence level). This is not an upper limit to anything: it is however the result of the search for functional structures in the 4 billion years of evolution, whatever the mechanism (darwinian or designed) by which those results were obtained. Moreover, there are many aspects of the modality of appearance of those functional units which suggest (at least IMO) that most of the functional structures useful in the biological context have probably already been found. That's an interesting aspect, and we can discuss it in more detail if you want.
So, none of your examples is relevant to the discussion I have been having with you (and which, it seems, you have not followed). I am not interested in evidence that mutations can change the folding of a protein: that's obvious. I state that simple random mutations cannot effect the transition from one folding functional group to another. And anyway, just to understand better what I mean, why don't you read the paper by Axe, "The Case Against a Darwinian Origin of Protein Folds", which is exactly about this topic? Here is the link: http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1 gpuccio
gpuccio, before I enjoy the rest of my weekend, a quick reply, because there are a few things I don't understand about your answers: In which way is the number of protein domains an upper limit to the number of possible fitness-positive changes to the genome of an organism? Secondly, in the examples discussed in the papers I linked to, all of the evolved proteins with a new folding have a function. Did you really read the papers? You can find exactly what you asked for: new folding domains combined with new functions and possible paths of evolution. And finally: You seem to think that evolution cannot achieve changes to the genome that enable it to generate a completely new folding of a resulting protein. Therefore it is a bit of a mystery to me that you don't see the relevance of my first example: Sometimes evolution doesn't even have to evolve anything to generate a new protein folding! A few changes in solution parameters are enough. With these findings (combined with the other papers), your position that the barriers between different protein foldings are uncrossable by evolution is untenable. Indium
Yours is a gross representation...
The words I used are yours. There is no way to reconcile the published articles with your interpretation except to conclude that the authors are seriously wrong, or you mischaracterize them. The only difference between artificial and natural selection is that natural selection operates on all traits simultaneously, favoring the best balance for survival and reproduction. Artificial selection, as with animal and plant breeding, is often narrowly focused and produces individuals that would not survive without human assistance. I believe you suggested that the designer may tweak mutations as well as selection. I find this to be silly. No one has ever observed mutations that anticipate need, and plenty of researchers have looked. It would be an interesting research project for ID advocates, but I haven't seen any such research proposed. Petrushka
Yours is a gross representation, not so much of ID, but certainly of the personal arguments which I have many times expressed to you.
Perhaps it is, but I think you misrepresent the papers you interpret. You essentially characterize Nobel Prize winners as either liars or incompetents. You characterize the people who publish research on protein evolution as either dishonest or unable to correctly interpret their own work. That's a pretty serious allegation. Petrushka
Petrushka: Yours is a gross representation, not so much of ID, but certainly of the personal arguments which I have many times expressed to you. I have many times defined and discussed in detail the differences between natural selection and artificial selection, and it was certainly not the gross silliness that you state. And when I have said that a certain paper seemed to me abstract or inconclusive, it was because that was exactly my judgment of that specific paper. Good night to you. gpuccio
It strikes me that the most cogent argument made so far for ID amounts to declaring that anything learned about mutation and selection in a laboratory setting is either abstract and inconclusive, or an instance of design because the selection is artificial. It would seem that evolution is the first branch of science for which research is, by definition, impossible. Petrushka
MathGrrl (#130): I agree with this post, with the following specifications: 1) I maintain that all empirical sciences are about what is empirically possible, and not about what is possible "in principle". 2) I maintain that the computation of the functional information of a protein is the only way to know how much functional information must be created by the proposed models for its emergence, be they purely random, part chance and part necessity, or design based. What you call "the evolutionary history" of the protein must necessarily be part of the model, and explicitly. And the model, including the "evolutionary history", must be causally explanatory.
That said, let's go to your #131: First, the reason for my use of scare quotes around "functional information". While I'm getting a better feel for what you mean by this term, you have still not provided a mathematical definition that will allow me to calculate it on my own for a particular biological system. You've told me that it isn't Shannon Information nor is it Kolmogorov Complexity, but you haven't actually said what it is. I don't think that's true. I have said it many times, to you and others, and with various details. Briefly: a) I have never referred to the CSI of a whole system. In my examples, I have always referred to the CSI of single proteins, indeed of single protein domains. I have explained many times why. Essentially, it's easier and sufficient for our purposes. b) I use the specific subset of CSI which I call dFSCI: digital strings bearing information which is functionally specified. Genes and proteins are of that kind, so nothing is lost in the restriction. c) dFSCI is the non-compressible (or scarcely compressible) information in the functionally specified string, computed as the ratio between the functional space and the search space. A practical method to approximate that ratio is to compute the uncertainty reduction in a protein family vs a random sequence, according to the Durston method, which computes the difference in Shannon entropy H between the functional sequences in a family and a random sequence. Please see the Durston paper for the details. The method can also be applied to transitions.
You say: Contributing to the definition problem is the ambiguity surrounding the "specification" issue. The amount of "functional information" in a system can vary greatly depending on how the function is specified. There seem to be no clear, unambiguous guidelines on what constitutes a valid specification. If the specification is subjective, CSI is useless as a scientific concept. I have discussed this point many times. Functional specification is not subjective, but it can be recognized and defined only by a conscious intelligent observer. For the specification to be valid, it must however be defined objectively, and a method must be given to measure whether it is present or not. If different specifications can be given for some object or system, the value of CSI will refer to the explicit specification which is given. There is nothing subjective in that: we measure a property in relation to another property. Anyway, for proteins the specification is usually absolutely obvious, if known, and corresponds to the biochemical function of the protein. Although the function of the single protein is almost always part of a higher level network of functions, for a primary analysis we must stick to the lower level function, the biochemical activity. In most cases, we can fix a conventional threshold of minimal activity, which can be measured in any lab. The definition of the function of many proteins can easily be found in protein databases. For the proteins we find in the proteome, the specification is not a problem at all: they are specified. We know their function, and we know that that function is necessary in their biological context. So, the only problem is to measure the functional complexity. Therefore, CSI is certainly not "useless as a scientific concept".
A step-by-step rigorous calculation of CSI for a real world biological system would eliminate these questions and let us get to the next stage of actually testing your claims regarding CSI. A "real world biological system" is the existing proteome. We can calculate the CSI in the existing protein superfamilies. Durston has done that. And we can calculate the CSI in any proposed transition in any explicit model for the emergence of any part of the proteome. Obviously, if darwinists do not give any explicit model for the emergence of a protein domain, we cannot calculate the CSI of an undefined transition. But for all proposed models of which we know enough detail, that can be done. It's a pity that the models darwinists propose explicitly are microevolutionary, and that their inherent functional information is always really low.
Regarding the immune system and the proposed literature, I have begged for time, and I do that again. I have been rather busy here, as you can see, and my time resources are not infinite. But please believe that I am really interested in that subject, and that I have myself many times suggested some aspects of the immune system as very good examples of intelligently designed algorithms. Regarding frameshift mutations, my point is very simple: the only frameshift mutation proposed in detail as a source of new functional information is, as far as I know, Ohno's model for nylonase. As it was detailed and explicit from the beginning (a great merit), it has in time been subjected to verification, and found false. gpuccio
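[Editor's note: the Durston-style measurement described above (uncertainty reduction in an aligned protein family versus a uniformly random sequence) can be sketched in a few lines. The tiny four-residue "family" below is hypothetical, purely for illustration; a real calculation would use a large curated alignment, as in the Durston paper.]

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
H_NULL = math.log2(len(AMINO_ACIDS))  # entropy of a uniformly random site, ~4.32 bits

def site_entropy(column):
    """Shannon entropy H (in bits) of one aligned column of residues."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment):
    """Summed per-site reduction in uncertainty (H_random - H_observed)
    across an aligned family of equal-length sequences."""
    return sum(H_NULL - site_entropy(col) for col in zip(*alignment))

# toy family of aligned 4-residue sequences (hypothetical)
family = ["ACDA", "ACDG", "ACEA"]
print(round(functional_bits(family), 2))  # -> 15.45
```

Fully conserved columns contribute the maximum ~4.32 bits each; variable columns contribute less, so the total rises with conservation across the family.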
Arthur Hunt: I know well who you are, and I am honored that you are back here. I have great esteem for your competence, even if I may disagree with you on many points. Regarding what you say, you are obviously right, but I have probably expressed my thought without due precision, trusting that the context of the discussion would make my point clear enough. I apologize if I have not been completely precise. I said: "A “transition between beta-strand and alpha-helical conformation” is not a change of folding, but just a change in local secondary structure." Obviously, I did not mean that a change in secondary structure does not affect tertiary structure at all. I have clarified that also in my further answer to Indium at #133. I should have said, more explicitly: "A “transition between beta-strand and alpha-helical conformation” is not a transition from one functional folding to another, unrelated folding, but just a change in local secondary structure which can affect, usually in a negative way, the existing functional folding." I had made very clear in all my discussion with Indium that what I was speaking of was the emergence of new fundamental functional folds or superfamilies in the proteome, and Indium countered that argument with links showing that single mutations could affect secondary structure. I was therefore stating that such a concept, while obvious, was in no way a response to my argument. As I have said many times, this is a blog, and we all often write in a hurry, trusting that the general context, repeated many times, will clarify our meaning. But I am happy that you have again taken part in our discussions. You are always welcome. gpuccio
Indium: Have a nice weekend you too! And, if you want, reflect on the following: 1) Secondary structure is local and relatively simple to compute. 2) Tertiary structure and folding are another matter altogether. While the secondary structure of the various parts of the sequence is certainly one of the components, protein folding is still a vastly non-computable problem (with present resources). 3) There is no doubt that a change in local secondary structure in one place can influence the general folding. Sometimes it does, usually compromising more or less the existing function. Sometimes even a single amino acid change can completely abolish the function through serious conformational deformations. Mendelian diseases are a well known example of that. 4) What you apparently don't understand is that all the above has nothing to do with finding a new functional folding starting from an existing, unrelated functional folding. Which is the problem I have posed, and that you are still ignoring. You can lose a functional folding through one or a few appropriate mutations. But you cannot find a new, unrelated folding through one or a few random mutations, which is what you seem not to understand. 5) It is clear that, under different environmental conditions, proteins fold differently. And so? 6) My number of 6000 is the number of protein groups unrelated at primary structure level which you get from the SCOP database. The same database also gives the following numbers for slightly different groupings: Folds: 1195 Superfamilies: 1962 Families: 3902 I have used the 6000 number to emphasize the separation at primary level. Therefore, IMO you have none of the things you say you have established. I agree with only one phrase: "There is good science being done on the origin of protein domains." And nothing else. gpuccio
As to the paper for solution changing 3D structures, this finding actually argues very forcefully against the molecular reductionist (materialist) model of neo-Darwinism as is clearly illustrated here: The Case Against Molecular Reductionism - Rupert Sheldrake and Bruce Lipton - video http://www.metacafe.com/watch/4899469 further notes against genetic reductionism: New Insights Into How (Adult) Stem Cells Determine What Tissue to Become - August 2010 Excerpt: Within 24 hours of culturing adult human stem cells on a new type of matrix, University of Michigan researchers were able to make predictions about how the cells would differentiate, or what type of tissue they would become.,,, "Our research confirms that mechanical factors are as important as the chemical factors regulating differentiation," Fu said. "The mechanical aspects have, until now, been largely ignored by stem cell biologists." http://www.sciencedaily.com/releases/2010/08/100801190257.htm Electricity Forms Your Heart - July 2010 Excerpt: “The direction of growth and orientation of various cell types in tissue culture can be influenced by externally applied electric fields.” They added, “Furthermore, endogenous [inside organism] electric currents exist in a variety of tissues and have been hypothesized to influence cell migration and shape.” http://www.creationsafaris.com/crev201007.htm#20100731a The Gene Myth, Part II - August 2010 Excerpt: So even with the same sequence a given protein can have different shapes and functions. Furthermore, many proteins have no intrinsic shape, taking on different roles in different molecular contexts. 
So even though genes specify protein sequences they have only a tenuous influence over their functions.,,, So, to reiterate, the genes do not uniquely determine what is in the cell, but what is in the cell determines how the genes get used.,,, Only if the pie were to rise up, take hold of the recipe book and rewrite the instructions for its own production, would this popular analogy for the role of genes be pertinent. http://darwins-god.blogspot.com/2010/08/gene-myth-part-ii.html Cortical Inheritance: The Crushing Critique Against Genetic Reductionism - Arthur Jones - video http://www.metacafe.com/watch/4187488 ,,,Thus Indium, please tell me exactly how these findings are not absolutely crushing against the required molecular reduction of neo-Darwinism. Are you completely impervious to this shattering disconnect in your theory? bornagain77
gpuccio, Now, onward to hopefully reach further understanding and agreement! First, the reason for my use of scare quotes around "functional information". While I'm getting a better feel for what you mean by this term, you have still not provided a mathematical definition that will allow me to calculate it on my own for a particular biological system. You've told me that it isn't Shannon Information nor is it Kolmogorov Complexity, but you haven't actually said what it is. Contributing to the definition problem is the ambiguity surrounding the "specification" issue. The amount of "functional information" in a system can vary greatly depending on how the function is specified. There seem to be no clear, unambiguous guidelines on what constitutes a valid specification. If the specification is subjective, CSI is useless as a scientific concept. A step-by-step rigorous calculation of CSI for a real world biological system would eliminate these questions and let us get to the next stage of actually testing your claims regarding CSI. One particular area that I think would be interesting to test your claims on is the evolution of the immune system. From our earlier posts:
I have repeatedly suggested a context which you have always refused to comment upon: the emergence of new protein domains, of new protein superfamilies, which we know has happened repeatedly in natural history.
This is an area of very active research, based on the predictions of modern evolutionary theory, in particular the nested hierarchy. Given your interest in ID, you may be familiar with this literature: http://www.nature.com/ni/journal/v7/n5/full/ni0506-433.html This link shows the literature on the evolution of the immune system (surely a specified function by your definition) presented to Behe at the Dover trial. This is only a small subset of the information available via Pubmed and other sources. Similar amounts of data are available on the evolution of other functional systems.
I cannot answer you about the immune system now, because I have not the time. I will try to get back on that later.
Another promising area for testing is that of frameshift mutations mentioned by Petrushka above. Those might show the creation of more "functional information" than simple point mutations. Let's get quantifiable! MathGrrl
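MathGrrl's request for a mathematical definition is worth making concrete. One published formalization that could anchor the discussion is the "functional information" of Hazen, Griffin, Carothers and Szostak (PNAS, 2007): I(Ex) = −log2 F(Ex), where F(Ex) is the fraction of all possible configurations achieving at least the specified degree of function. A minimal sketch of that definition follows; the sequence counts are invented for illustration, and `functional_information` is a hypothetical helper, not anyone's published code:

```python
import math

def functional_information(n_functional, n_total):
    """Hazen-style functional information: -log2 of the fraction of all
    possible configurations that meet or exceed the functional threshold."""
    return -math.log2(n_functional / n_total)

# Toy example: a 10-residue peptide (20**10 possible sequences) in which
# a hypothetical 10**6 sequences meet the functional threshold.
bits = functional_information(10**6, 20**10)  # roughly 23.3 bits
```

Note that under this definition the result depends entirely on how the functional threshold is specified, which is exactly the ambiguity MathGrrl raises above.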
gpuccio, I'm back from my travels and glad to see the discussion is still ongoing. It seems like a good point to summarize where we've reached agreement and what points are still under contention. We seem to agree that evolutionary mechanisms, particularly mutation and differential reproductive success (which results in what is sometimes too loosely called "natural selection") can generate "functional information" (more on my reason for using scare quotes will follow). We also seem to agree that the amount of "functional information" created by evolutionary mechanisms is additive. That is, the "functional information" created by a series of mutations that become fixed in a population is equal to the sum of the "functional information" of each mutation. From the above two points of agreement, we seem to agree that evolutionary mechanisms could, in principle, create sufficient "functional information" to cross the boundary to "Complex Specified Information". I recognize that you do not believe this to be possible in practice, but mathematically there is nothing preventing it. Another point of apparent agreement is that any calculation of "functional information" must take into account the history of the changes between the initial state and the final state of the system being measured. You yourself made this point in the discussion of the evolution of citrate digestion in Lenski's experiment. Even though citrate digestion is a completely new function and therefore, according to your original definition of CSI, meets the criteria of specification, you noted (correctly, in my view) that we should only measure the "functional information" in the mutations that created the function, not for all the components of the genome that support it. This does, however, have consequences for some of your previous CSI calculations. When you have come up with large numbers for CSI of certain proteins, you have not taken into consideration their evolutionary history. 
Simply computing four to the power of the length of the genome, or twenty to the power of the number of amino acids in the protein, is mathematically equivalent to asserting that those biological components appeared complete and de novo. I believe we both agree that such an assertion is not aligned with empirical observations of real biological systems. No biologist claims that such structures arise instantly, so demonstrating that it is unlikely that they could do so does not pose a problem for modern evolutionary theory. To keep this at a manageable length, I'll discuss what I see as our currently open issues in a separate post. I am interested to know if you agree with me on our points of agreement thus far. MathGrrl
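The "four to the power of the length" computation MathGrrl describes corresponds, in bit terms, to L·log2(4) for a nucleotide sequence or L·log2(20) for a protein: the cost of specifying every position in one shot. A quick illustration (the sequence lengths are arbitrary, and `de_novo_bits` is a hypothetical helper named here for clarity):

```python
import math

def de_novo_bits(length, alphabet_size):
    # Bits needed to specify one exact sequence out of alphabet_size**length
    # possibilities, i.e. the implicit "appeared complete and de novo" model.
    return length * math.log2(alphabet_size)

protein_bits = de_novo_bits(300, 20)  # a 300-aa protein: about 1297 bits
dna_bits = de_novo_bits(900, 4)       # its 900-nt coding sequence: 1800 bits
```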
So, gpuccio, I will now stop throwing citations at you about topics I barely understand. ;-) In any case I think I have established a few things: 1. In principle, evolution can add information to the genome. I think you knew and accepted this from the beginning, but it is still good to have established this again, because some ID guys think it is impossible. 2. Even very small changes to the genome can result in completely different protein folds, something you thought would be impossible. 3. Even if I accept your number of 6000 (which I don't), the Durston paper is wrong because it doesn't take these additional ways to improve fitness into account. In any case his approach is of no use for determining the evolvability of anything, since he doesn't take evolutionary mechanisms into account at all. 4. There is good science being done on the origin of protein domains. While this is a very difficult topic, the examples that are discussed clearly demonstrate that your claim that evolution cannot in principle cross from one domain into another is false. Have a nice weekend! Indium
gpuccio, I think you underestimate the importance of the secondary structure: such a substantial change as the one observed in the article I linked always also results in a change of the tertiary structure. So even your new demand has been met. How did the protein domains evolve? That is a very hard question indeed, and most of this happened in the extremely distant past. There is some work being done, however. It has been found that identical sequences, under different conditions of the solution, can lead to completely distinct 3D structures. http://www.jbc.org/content/277/20/17863 It has been shown that in principle a change in 3D structure can occur without loss of function. Link 1 Link 2 People even write overviews about different scenarios for the evolution of new protein domains: Link 3 Link 4 So, as expected, there is no fundamental reason why evolution should not be able to reach different families of 3D configurations of proteins. Regarding Durston: I don't know how you end up with only 6000 ways to improve the fitness of any given organism by changing its genome. Could you elaborate? Indium
gpuccio@125:
As for your new example: transitions between beta-strand and alpha-helical conformations Again you are not a biologist and you cannot know. A “transition between beta-strand and alpha-helical conformation” is not a change of folding, but just a change in local secondary structure. No harm done, anyway.
I'm sorry, but this is just wrong. A change in "local secondary structure", from beta-strand to alpha helix, is by definition a change in a structural fold. And it most certainly causes a dramatic leap in overall three-dimensional structure. On this point, Indium is spot-on correct. For the record, I am a biochemist who works on matters related to structure and function of proteins. Arthur Hunt
Indium: English is not my first language either. I am Italian. As for your new example: transitions between beta-strand and alpha-helical conformations. Again, you are not a biologist and you cannot know. A "transition between beta-strand and alpha-helical conformation" is not a change of folding, but just a change in local secondary structure. No harm done, anyway. Regarding Durston, what he has done is extremely valuable: he has measured functional information in different protein families in a reliable way. Nobody else had done that so brilliantly before. And you are wrong again when you say: Nobody thinks that as a general rule new configurations poof into existence from random sequences. As I have shown, if a new domain emerges from a pre-existing unrelated domain, the starting state is de facto random in relation to the new function which will emerge, because it has no information about that function, it has a different folding, and it is unrelated at the primary structure level (less than 10% homology). Now, according to what we know from natural history, those transitions must have taken place some way. If darwinian mechanisms are not able to explain them, other explanations must be offered. We in ID are offering a very good one. I believe there are 10^35 possible positive reconfigurations which can be reached by Darwinian processes. Again, I will not force your beliefs. Beliefs are personal. But what a pity that the whole process of evolution, in 4 billion years, and whatever its causal mechanisms may be (darwinian or design related), has found only 6000 of them. Well, at least we have great expectations for the future! :) gpuccio
Maybe you will never follow, but I have explained many times, especially to you, that artificial selection is a form of design, and that it is completely different, and vastly more powerful, than natural selection.
In the same way that laboratory chemistry is more powerful than "natural" chemistry. I think I understand. Petrushka
BTW, thanks for your patience so far. English is not even my first language and in addition the topic at hand is complicated enough to make articulation of my thoughts difficult at times. Indium
gpuccio, so, macroevolution in your sense of the word can be observed when the evolution of a new protein folding is observed? You seem to think that's impossible to generate with a small change to the genome. Again, I am not a biologist, but a quick search turns up quite a few articles. Please have a look at this Overview
In addition, in a few cases significant transitions in structure have been demonstrated following one or a few amino acid mutations in a protein sequence. Examples include transitions between beta-strand and alpha-helical conformations in mutants of the Arc repressor [7] and in the Kazal-type serine protease inhibitor domain [8].
References 7 and 8 can be found in the linked article. So, your demand for a case where a simple change in the genome can lead to completely different folding types can be met. Now, for your statistics and Durston. I can see two problems there. First of all, I don't believe your number of 10, 100 or 1000 possible positive changes to the genome. I believe there are 10^35 possible positive reconfigurations which can be reached by Darwinian processes. Now both of us have guessed a number. How can we decide who is right? Until we have come to such a decision, is it correct to take just *one* function into account, like Durston does? Secondly, his calculations are not useful anyway: nobody thinks that as a general rule new configurations poof into existence from random sequences. All he really does is put some numbers on the tornado-in-the-junkyard scenario, neglecting evolutionary mechanisms. Indium
Petrushka: Maybe you will never follow, but I have explained many times, especially to you, that artificial selection is a form of design, and that it is completely different from, and vastly more powerful than, natural selection. If you don't follow, or just don't agree, let's leave it at that. But please remember, in the future, that that is my conviction, and that it is therefore useless to quote to me new papers describing the potentialities of artificial selection. I agree with that. gpuccio
Indium: I am happy that we can go on in our conversation in a spirit of serene confrontation: that's all I expect from my interlocutors. As you say that you are not a biologist, I may perhaps clarify some further aspects for you (after all, I am a medical doctor, which could be considered a "lesser form" of biologist :) ). I will start from the article you quote. The fact is that in that example a single mutation brings a change in the biochemical activity of the molecule. That's again a very good example of microevolution. As you can see if you look at the paper, especially at figure 5, the mutation certainly modifies the active site, but the 3D structure of the molecule, and its fold, remain almost the same (you can see the slight variations in different colors in the figure, orange and blue). That's why I have kept my discussion at the level of protein domains and isolated superfamilies. Each superfamily may have lots of different proteins with different functions, sometimes slightly different, sometimes very different, but the general fold and the "general function" are the same. Take, for instance, the case of nylonase, which derives from the penicillinase domain through, probably, a couple of mutations. The biological function of nylonase is very different from that of penicillinases: the first has a digestive role for nutrition, the second serves to protect bacteria from penicillins produced by other bacteria. But, biochemically, both are esterases, and they share exactly the same fold and the same biochemical function. The small variation at the level of the active site modifies the affinity for specific substrates (nylon or penicillin), but the enzymatic activity is anyway of the same biochemical type. And the structure is the same. We have to distinguish between the general fold of a domain, which usually defines its general function, and the specific active site, usually determined by a much smaller number of AAs. 
Small variations in the active site can substantially modify the final effect of the protein in a biological context, but they don't substantially modify the folding and general biochemical characterization of the protein. That's why the paper you quote is good and interesting, but again it is only a description of a microevolutionary event, and in no way does it show any progression toward a new, different, isolated fold or superfamily, which was my example and context. You say: My point is that evolution routinely finds *many* new ways to improve the fitness of organisms in different situations. From your arguments, I think that you are not a statistician either. No problem with that. Let's clarify. Let's suppose that in a system 10 different solutions which can improve fitness are potentially available, and that each of these solutions has a probability of being found, through random search, of 10^-45. The probability of finding at least one of the solutions is then approximately (I am not being necessarily precise here; if there is any statistician out there, he can correct me): (10^-45)*10, that is, 10^-44. Which is an improvement of only one order of magnitude, and not a great consolation. To significantly improve the probability, you need "at least", say, 10^10 different functional solutions for that context (which I believe to be absolutely unrealistic), and even that would leave the probability at 10^-35, which is not a joke at all. Please note that, at the level I have suggested of fundamental functionality (protein domains with less than 10% homology isolation), we are aware at present of only about 6000 (6*10^3) different superfamilies/families in the global proteome. That means that evolution, in all its history, has found only that number of fundamental protein structures. Moreover, the rate of appearance of new protein domains at that level has constantly decreased in natural history. 
Do you really believe that in the search space 10^10 or more functional structures still lie undiscovered, neither by nature nor by us? Somebody has to win the lottery! No. That's simply wrong. Or rather, it is true only for real lotteries. That's another of the shamefully wrong statistical arguments that darwinists love to use against ID. In lotteries, as one ticket is sorted out of all those which were sold, someone must necessarily win. But in a random search where the probability is extremely low, nobody will win even after billions of years. Think of it this way: we have a lottery with 10^153 tickets. 10^3 tickets are sold. A winning number is extracted out of the original repertoire of 10^153. The probability that one of those who bought a ticket may win is 10^-150, which is Dembski's UPB. That means that nobody would realistically ever win, even if one number were extracted each Planck time by each fundamental particle in the universe, for 14 billion years. So, no, it's not true that somebody has to win the lottery. You say: Organisms have the ability to evolve in *some* direction, or not at all. The probability of each direction might be very small but sometimes the organism *will* evolve and change over time in *some* direction even if the probability of each direction is small. No. Not if it is so small. Even considering the sum of the probabilities for all directions. Finally, I can agree with you about the possibilities of dimerisation, or simply of exon shuffling, or of sexual allele shuffling: these "modular" reorganisations of existing domains certainly have an important role. I have never dealt with them, mainly because in those cases it is IMO much more difficult to compute the search space and the probabilities of a functional result. In principle, I can agree that at least some of these variations could be in the potential range of a random search. Others certainly are not. 
But anyway, they don't explain anything about how the basic information units (protein superfamilies) originated. That's why I stick to that model. I could certainly discuss higher levels of organization (regulatory networks, the immune system, the nervous system, or simply the genetic code) where the design is even more obvious. But for many reasons, the rigorous treatment of those contexts is much more difficult. Therefore, I stick to single proteins, where quantitative analyses are much more within our realistic range. gpuccio
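The two pieces of arithmetic gpuccio appeals to here, the union bound over multiple rare targets and the 10^3-tickets-in-10^153 lottery, can be checked exactly with a few lines of Python. The numbers are the ones quoted in his comment, not new data:

```python
from fractions import Fraction

# Union bound: with k equally likely rare targets, each of probability p,
# P(at least one found) <= k * p, and approximately equals k * p when p is tiny.
p = Fraction(1, 10**45)   # probability of hitting one specific target
k = 10                    # number of distinct acceptable targets
union_bound = k * p       # reduces to 1/10**44: one order of magnitude gained

# Lottery variant: 10^3 tickets sold out of a 10^153-ticket space.
sold, space = 10**3, 10**153
p_someone_wins = Fraction(sold, space)  # reduces to 1/10**150
```

Using `Fraction` keeps the arithmetic exact; ordinary floats would also work here, since none of these values underflow double precision.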
gpuccio, first of all, you are right about the citrate digestion. That makes this example a not-so-good illustration of IC! Well, in a way it still is. I think that most of the time parts of IC systems have or had other functions than the IC one... Regarding Durston: I did not say that evolution can bring about *any* kind of function. My point is that evolution routinely finds *many* new ways to improve the fitness of organisms in different situations. Retrospectively looking only at the one that evolved and wondering about the small probability does not make much sense when the organisms could have evolved in completely different directions (or not at all). If you win the lottery it also doesn't make much sense to attribute this to divine intervention because the probability of this event is so small. Somebody has to win the lottery! Organisms have the ability to evolve in *some* direction, or not at all. The probability of each direction might be very small, but sometimes the organism *will* evolve and change over time in *some* direction even if the probability of each direction is small. Also, it seems from your comment that you might accept that the global information content of organisms/genomes can increase. That is a good basis for future discussions about this topic, I guess! Oh, and here is an example of how islands of functionality might be crossed. Since I am not a biologist the details are hard for me to understand, however! ;-)
Our study emphasizes how single point mutations can engender unexpected leaps in protein function thus enabling the appearance of new functionalities in proteins without the need for promiscuous intermediates.
http://nar.oxfordjournals.org/content/36/13/4390.full At least one other way to get new functionality without functional intermediates is by reactivating pseudogenes. This might be a rare situation, however. On the other hand, intermediates might also be normal genes which undergo fitness-neutral changes until a new function emerges, again without intermediates with real function. Thirdly, sometimes a simple dimerisation can lead to new functions. Bovine seminal ribonuclease seems to be such a case; biologists might correct me if I am wrong. Indium
They are certainly not darwinian mechanisms.
I'm not sure I follow. A discussion of artificial selection makes up a huge portion of Origin of Species. It's one of his main lines of evidence. Petrushka
Petrushka (#117): You surprise me now. You should know quite well that I have no doubts about the huge possibilities of directed evolution and artificial selection. We have long discussed that issue, and you should remember that I am absolutely convinced that directed evolution and artificial selection are among the best scenarios of intelligent design. They are certainly not darwinian mechanisms. gpuccio
Directed evolution circumvents our profound ignorance of how a protein's sequence encodes its function by using iterative rounds of random mutation and artificial selection to discover new and useful proteins. Proteins can be tuned to adapt to new functions or environments by simple adaptive walks involving small numbers of mutations. Directed evolution studies have shown how rapidly some proteins can evolve under strong selection pressures and, because the entire 'fossil record' of evolutionary intermediates is available for detailed study, they have provided new insight into the relationship between sequence and function. Directed evolution has also shown how mutations that are functionally neutral can set the stage for further adaptation.
http://www.nature.com/nrm/journal/v10/n12/abs/nrm2805.html Petrushka
gpuccio: Here's some more on the same subject: http://scholar.google.com/scholar?hl=en&lr=&q=related:snFAUWZhkIsJ:scholar.google.com/&um=1&ie=UTF-8&ei=WCuJTJy2MJSg8AT5-5TfDg&sa=X&oi=science_links&ct=sl-related&resnum=1&ved=0CCIQzwIwAA Petrushka
Petrushka: I had already seen the link, but I need time to read the paper and understand it well. At first sight, it seems rather abstract and inconclusive, but give me time: if you have read the paper, you will have seen that it is rather complex. Anyway, I am happy that darwinists are going on in their efforts to falsify ID, while declaring that it is not a scientific theory. That's scientific debate... gpuccio
gpuccio:
it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence
See the link at post #105 Petrushka
Indium (#111): I do want. gpuccio
Indium: Thank you for your comments. I respect your opinions; I will just explain briefly why I don't think they are pertinent: 1) Your only explicit objection to Durston's metric is the old (and IMO completely inconsistent) concept that evolution can produce "any kind of function". That's false. As I have argued many times, complex systems cannot do with "any kind of function". They pose severe restrictions on what can be useful and what cannot. Protein folds and active sites must be very specific to provide those biochemical properties which can be integrated into the complex network which already exists, not to speak of regulatory networks and procedures, and so on. So, in each defined context, evolution can really take only a few specific directions, even for a conscious engineer, even more so for mere chance and NS. And anyway, the vast majority of protein sequences remain non-functional in any given biological context. 2) You can believe what you like. That's not the same as showing that it is a credible scientific hypothesis. Anyway, I never question people's faith. 3) "In principle", stranger things are possible. ID is not about what is impossible "in principle" (ID theory is not a mathematical deduction). ID, and empirical science, is instead about what is "empirically" impossible (or possible, or likely). If you are not interested in empirical science, that is your option. 4) Look, you say that because of general probability estimations evolution can *in principle* not add, say, 150 bits of information. I never, never said that. I say that a function which requires more than 150 new bits of functional information beyond a starting state cannot empirically be found through pure random variation. I don't believe it's the same thing. This is easily refuted: When a process can add 10 bits, by repetition it can add 150 bits or 1000 bits. You are easily refuting what you had easily imagined I had said. My compliments. 5) Finally, Lenski again. 
You say: Let's go back to the Lenski case: The new citrate permeability is only useful because these organisms can already digest citrate, which in itself is a complex function, right? How many bits do we need for citrate digestion? 10? 20? 50? So, how is the evolution of citrate permeability unrelated to the previous evolution of citrate digestion? Why shouldn't we add the 10 bits to the previous information to get the information content for the full system? What happens if a subsequent evolutionary step makes these processes 100 times more efficient? Another 10 bits? And so on. Let's imagine Lenski had not done this work. I am sure at some point some ID guy would have come along, seen this system, and decided that this is a (small) unevolvable IC system: How useful is citrate digestion without permeability, and vice versa? I think you are confused here. Living beings use citrate all the time. It is the essential component of the Krebs cycle, which is universal in all aerobic living cells. Again I quote Behe: "Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions." So, your question "How useful is citrate digestion without permeability, and vice versa?" is easily answered: very useful indeed! The problem is only that E. coli cannot use exogenous citrate, for lack of permeability to it. Your comments about possible wrong inferences by IDists about IC are therefore completely out of order. 
(And anyway, I have never used the concept of IC in my discussions with you, because it was not necessary for my argument here.) Moreover, I thought I had made it clear that nowhere in my argument was I measuring the global information content of a whole organism. So, why do you continue to argue in that sense? I have discussed the difficulties in the evolution of protein domains by darwinian mechanisms, applying the concept of CSI to single proteins, and never to more complex systems. Therefore, I consider your objections about the total information content of an organism absolutely irrelevant. gpuccio
Oh, btw, if you want I can dig up a paper where it is shown that single point mutations can indeed generate a large jump in binding properties, which I think would invalidate all your comments about uncrossable islands of functionality. Indium
gpuccio, thanks again for your explanation. I think I now have to divide my answer into some kind of bullet points: 1) No, I don't accept Durston's metric at all. We can go into a detailed discussion of this if you like, but you will also find effective rebuttals on the web. One of the basic problems is that he is subject to some kind of lottery fallacy: he just checks one potential outcome or function. Evolution could have resulted in an incredibly large set of different functions, and also in completely different realizations of the same function, and each time Durston retrospectively would calculate the amazingly low probability that exactly THIS outcome is observed. 2) I believe that very simple mutation events can open roads to completely new and unrelated protein-protein binding sites and therefore to new functions. Also, large unselectable areas of sequence space can still be crossed by neutral steps or by pseudogenes which are later reactivated. 3) If evolution can add 10 bits of information to the genome of an organism, it can also add 150 bits or 1000 bits. It just takes time. Nothing that you say changes this fact *in principle*. Since the general principle is all I am interested in at the moment, this is enough for me. Look, you say that because of general probability estimations evolution can *in principle* not add, say, 150 bits of information. This is easily refuted: when a process can add 10 bits, by repetition it can add 150 bits or 1000 bits. If you now say that the organisms might somehow lose information, that is a completely different argument! Let's go back to the Lenski case: the new citrate permeability is only useful because these organisms can already digest citrate, which in itself is a complex function, right? How many bits do we need for citrate digestion? 10? 20? 50? So, how is the evolution of citrate permeability unrelated to the previous evolution of citrate digestion? 
Why shouldn't we add the 10 bits to the previous information to get the information content for the full system? What happens if a subsequent evolutionary step makes these processes 100 times more efficient? Another 10 bits? And so on. Let's imagine Lenski had not done this work. I am sure at some point some ID guy would have come along, seen this system, and decided that this is a (small) unevolvable IC system: how useful is citrate digestion without permeability, and vice versa? Indium
zeroseven: I think that in this thread (and maybe also in others) I have answered many of your questions in some detail, and I hope with some clarity (for instance, in 85 and 88). You have given not one word of comment about my answers. But, fortunately, you have now creatively managed a new question about chihuahuas. Is that your usual epistemological approach to discussions? I hope you are enjoying yourself, my friend... :) gpuccio
Indium: First of all, the Szostak paper. I have given a very detailed analysis of it and of its credibility and meaning on this blog. I have also debated my analysis on the same thread with one very competent and thoughtful interlocutor who did his best to defend the paper (almost certainly a very good biologist). At the end of the discussion, I am still absolutely convinced that the paper is seriously flawed in its conclusions, and that I have given very good evidence of that. At this moment I don't remember which thread it was (maybe someone can provide a link, or I will have to look for it later and give you the link). You can read the whole discussion and judge for yourself. Let's go to your questions: What stops an organism from adding this kind of information (like in Lenski's lab) over time? What stops these evolution events from being dependent on each other or from being related, for example in a way that the combined change leads to a completely new effect? These are two different questions. My answer to the first is: nothing stops an organism from undergoing microevolutionary events which are compatible with its probabilistic resources for the random part, and which can be selected in the context of its environment. That happens, even if it usually requires, at least in the observed cases, a very high reproductive rate, big populations, very small complexity of the transition (one or two AAs), and a very strong environmental selection (think of antibiotic resistance) to happen in an observable time (which, however, can be many years, especially for supposed two-AA mutations: selectable one-AA mutations can be achieved in a bacterial culture in a short time). You can find a very good discussion of those empirically observed facts in Behe's TEOE. What you can have, in the end, is a certain number of microevolutionary events, bearing small unrelated tweakings of existing functions, in the organism. 
To the second question, my answer is: it is not a question of being "stopped": functions either are related, or they are not. Let's be more clear: at a certain time in natural history, a new protein domain representing a completely new protein superfamily appears, in a new species. That has happened thousands of times, according to our knowledge of the proteome and of natural history. Let's call this new protein domain B, and let's assume that its length is about 130 AAs (a very reasonable length for a protein domain). To be more precise, let's assume that we can apply the Durston method to compute the real functional information in that domain: let's say it is 350 Fits (a reasonable value: in Durston's paper, protein families of that length have approximately that functional information). So, it is perfectly legitimate to ask: how did that new protein domain arise? The darwinist answer will probably be that it was the result of gradual mutations, possibly in a duplicated gene of some pre-existing protein domain. Let's call that "precursor" A. So, A exists before B, and B derives from A, in our model. For simplicity, we can think of A and B as approximately of the same length. But, as they are domains in two different protein superfamilies, by definition they are unrelated at the primary sequence level: we can safely assume that they present less than 10% homology, which is the same as a completely random level of homology. IOWs, A is a sequence completely isolated from the sequence of B in the search space, and is in no way "near" B. These are all things we know. So, the transition from A to B, if it is completely random, must create 350 bits of functional information by a random walk in the search space: that is empirically impossible. You say: but a series of related small functional variations could bring us from A to B. I ask you: why should that be the case? 
You should give me at least one of two kinds of arguments to make me believe that such an assumption is credible, at least as a hypothesis: 1) Give me some logical reason why it should be the case: I can't see any. We have two long sequences, totally unrelated and isolated in the search space, with two completely different folds and functions. What reason in the world can you suggest for them being connected by a series of small functional states, each one selectable? There is nothing in what we know of protein folding, of protein function, and of biochemical laws which can justify that. And that strange property should be true not only in one case, but in thousands of unrelated cases. 2) You could give me empirical evidence. You could say: look, I have this example, or at least this model, where I have detailed the 50 or 70 intermediate functional states and shown why they are selectable, and how they realize a series which "builds" the new sequence for the new final function. Can you, or can anyone else? Or, to put it differently: Based on the ability to add small amounts of information, can we agree that the information content of an organism is not constant and can be increased through evolution? You have really said something different here. There is no doubt that the information content of an organism is not constant. Each new genetic disease is due to the loss of some functional information. In rare cases, and in specific contexts, microevolutionary events create a few bits of new functional information. We agree on that. What we probably don't agree about is that the complex individual functions that we observe today in the proteome could have originated sequentially from existing precursors by a darwinian mechanism. I deny that model for two important reasons: a) Each single transition needed to generate a new protein domain/superfamily is too complex to be in the range of purely random variation. 
b) There is absolutely no rational motive and no empirical evidence that complex functions, such as protein domains, can be achieved through a sequence of small functional and selectable variations. Indeed, all we know and observe is absolutely against that. gpuccio
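The 350-Fit figure in the argument above can be turned into concrete numbers. A toy calculation, assuming (as in the Durston framework) that a Fits value translates into a hit probability of 2^-Fits per blind trial; the resource estimate is a deliberately generous illustrative assumption, not data from any paper:

```python
import math

# If a target domain needs 350 Fits, a blind search hits it with
# probability 2**-350 per trial.
fits = 350
p_per_trial = 2.0 ** -fits          # roughly 4.4e-106

# A deliberately generous resource estimate (assumed for illustration):
# 10**30 bacteria, 10**4 generations/year, 4*10**9 years
trials = 10**30 * 10**4 * 4 * 10**9

# For tiny p, probability of at least one success ~ trials * p
p_total = trials * p_per_trial
print(math.log10(trials))   # about 43.6 orders of magnitude of trials
print(p_total < 1e-50)      # still astronomically unlikely
```

Even granting tens of orders of magnitude more trials than any realistic biological scenario, the product stays vanishingly small, which is the quantitative content of the "empirically impossible" claim.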
Ok, it doesn't! ;-) http://www.nature.com/nature/journal/v410/n6829/full/410715a0.html Indium
gpuccio, thanks for the explanation. You certainly have a point: calculations of the probabilities are difficult, and it is probably not always warranted to just add the different bits. Still, in a way that doesn't save your argument. If, repeatedly, new functions with small amounts of information can arise in an organism, it will over time become more complex (and contain more information, right?). And since there is no reason why subsequent evolutionary changes might not be dependent on previous ones, this will in retrospect give the impression of irreducible complexity, or at least of a very improbable event. In a way, that is exactly what seems to have happened in Lenski's lab. I have to ask again: What stops an organism from adding this kind of information (like in Lenski's lab) over time? What stops these evolution events from being dependent on each other or from being related, for example in a way that the combined change leads to a completely new effect? Or, to put it differently: Based on the ability to add small amounts of information, can we agree that the information content of an organism is not constant and can be increased through evolution? Do you know ? (Hope the link works!) Indium
The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×10^4-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0000096 Petrushka
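The n-k model described in the abstract above can be sketched in a few lines. This is a toy version with a binary alphabet and small N and K chosen for speed (the paper works with 20 amino acids and estimates k ≈ 18–24 for the 139-residue polypeptide); the adaptive walk mimics the paper's mutation-plus-selection cycles:

```python
import random

random.seed(0)

N, K = 10, 3  # N sites; each site's contribution depends on itself plus K neighbours

# Lazily filled random contribution tables, one per site
table = [{} for _ in range(N)]

def site_contribution(i, genotype):
    # The contribution of site i depends on its own state and its K neighbours
    key = tuple(genotype[(i + j) % N] for j in range(K + 1))
    if key not in table[i]:
        table[i][key] = random.random()
    return table[i][key]

def fitness(genotype):
    # Mean of per-site contributions, each in [0, 1)
    return sum(site_contribution(i, genotype) for i in range(N)) / N

# Adaptive walk by single random substitutions, accepted when not worse
g = [random.randrange(2) for _ in range(N)]
for _ in range(300):
    i = random.randrange(N)
    g2 = list(g)
    g2[i] ^= 1
    if fitness(g2) >= fitness(g):
        g = g2

print(0.0 <= fitness(g) <= 1.0)  # True
```

Raising K makes the landscape more rugged (more local peaks), which is exactly the regime the paper finds above relative fitness 0.4.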
zeroseven, you ask: Does a chihuahua have a different amount of CSI than a wolf? Yes: Phylogenetic Relationships, Evolution, and Genetic Diversity of the Domestic Dog Excerpt: The Xoloitzculntli or Mexican hairless dog also has gone through population contraction followed, presumably, by close inbreeding for several hundred generations. Thus it is likely to have reduced genetic variation,,,,,,the mean sequence divergence in dogs, 2.06, was almost identical to the 2.10 (sequence divergence) found within wolves. (please note: the sequence divergence is slightly smaller for the entire spectrum of dogs than for the 'parent' wolves) http://jhered.oxfordjournals.org/cgi/reprint/90/1/71.pdf But this is not surprising, zeroseven, in that we find that Natural Selection reduces genetic diversity: it 'sifts' preexisting genetic information without ever replenishing the genetic diversity that is lost: "...but Natural Selection reduces genetic information and we know this from all the Genetic Population studies that we have..." Maciej Marian Giertych - Population Geneticist - member of the European Parliament - EXPELLED Another strong piece of genetic evidence for the recent origin of man is that scientists find the 'younger' human races (Chinese, Europeans, American Indians, etc.) are losing genetic information when compared to the original race of humans, which is thought to have migrated out of east Africa some 50,000 years ago. "We found an enormous amount of diversity within and between the African populations, and we found much less diversity in non-African populations," Tishkoff told attendees today (Jan. 22) at the annual meeting of the American Association for the Advancement of Science in Anaheim. "Only a small subset of the diversity in Africa is found in Europe and the Middle East, and an even narrower set is found in American Indians." 
Tishkoff; Andrew Clark, Penn State; Kenneth Kidd, Yale University; Giovanni Destro-Bisol, University "La Sapienza," Rome, and Himla Soodyall and Trefor Jenkins, WITS University, South Africa, looked at three locations on DNA samples from 13 to 18 populations in Africa and 30 to 45 populations in the remainder of the world.- I wonder what Hitler would have thought of that study? This following study is interesting in that it shows the principle of Genetic Entropy being obeyed for the estimated 60,000 year old anatomically modern humans found in Australia: Ancient DNA and the origin of modern humans: John H. Relethford Excerpt: Adcock et al. clearly demonstrate the actual extinction of an ancient mtDNA lineage belonging to an anatomically modern human, because this lineage is not found in living Australians. Although the fossil evidence provides evidence of the continuity of modern humans over the past 60,000 years,,, http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=33358 EXPELLED - Natural Selection And Genetic Mutations - video http://www.metacafe.com/watch/4036840 Natural Selection Reduces Genetic Information - Dr. Georgia Purdom - video http://www.metacafe.com/watch/4036808 Natural Selection Reduces Genetic Information - No Beneficial Mutations - Spetner - Denton - video http://www.metacafe.com/watch/4036816 Darwinism’s Last Stand? - Jonathan Wells Excerpt: Despite the hype from Darwin’s followers, the evidence for his theory is underwhelming, at best. Natural selection - like artificial selection - can produce minor changes within existing species. But in the 150 years since the publication of Darwin’s Origin of Species by Means of Natural Selection, no one has ever observed the origin of a new species by natural selection - much less the origin of new organs and body plans. 
http://www.evolutionnews.org/2009/06/junk_dna_darwinisms_last_stand.html#more further note; This following paper, though of evolutionary bent, offers a classic example of the effects of Genetic Entropy over deep time of 270 million years: A Cambrian Peak in Morphological Variation Within Trilobite Species; Webster Excerpt: The distribution of polymorphic traits in cladistic character-taxon matrices reveals that the frequency and extent of morphological variation in 982 trilobite species are greatest early in the evolution of the group: Stratigraphically old and/or phylogenetically basal taxa are significantly more variable than younger and/or more derived taxa. http://www.sciencemag.org/cgi/content/abstract/317/5837/499 bornagain77
Indium: maybe I have not been clear enough. In Lenski's work, always according to our assumptions, a couple of mutations changed some existing protein a little, allowing permeability to citrate. Microevolutionary changes usually don't change the protein substantially; they just "tweak" an existing protein in its island of functionality. Again, I don't know what the molecular basis of Lenski's function is. So all this discussion is highly speculative, but it can be valid as a general model. So, let's say that protein A changes a little because of a coordinated mutation of two aminoacids, which bears a new function from its existing basic structure and fold (usually, the easiest way for that to happen is a change at the active site). Well, we have a transition which has given a new function. So, the transition is functional, specified. Is it an example of dFSCI? No, because the complexity of the transition is, at most, 8.65 bits (it could be less). As you say, the mutation, at least in Lenski's case, has been fixed. That's because it gives a reproductive advantage (at least in Lenski's environment). As I have already mentioned, that process requires two events: 1) The original bacterium where the double mutation occurred must expand to all, or most, of the population, thanks to its reproductive advantage. 2) In that process, the mutations must be preserved by negative selection of new mutations at those sites. Point 1 is certainly the most important in a short-term scenario, but both are anyway in the range of NS, because the double mutation has given a true advantage to its carrier, at least in that context. I think we can agree on that. That's what is called a microevolutionary event. Now, you say: After that fixation nothing prevents these new strains from undergoing another round of the same mutation and fixation cycle, correct? 
Yes, nothing prevents the new population, with its citrate permeability, from undergoing some other microevolutionary event of one or two mutations in the same individual, bearing some other new tweaked function. And so? And then they would have increased the information content by 20 bits. No, that is a common mistake. A transition of 20 bits is a transition where 20 new bits of functional information are necessary to achieve a new function from the initial state. It is a transition where at least 5 coordinated mutations must occur randomly in the same individual, to bring about the functional variation which can be selected. That is not the sum of two independent, and unrelated, microevolutionary events of 10 bits each. The probability of each 10-bit event is 1:1024. But the probability of a single 20-bit event is 1:1048576. It's about 1000 times more unlikely. Functional bits are an exponential measure. So, if 20 bits are necessary to achieve a new function, that event will occur in about 1000 times the time in which a single 10-bit event can occur, if all the other variables (population size, mutation rate, etc.) remain the same. That's why my threshold for single functional random variations is 150 bits (about 35 AAs necessary for the function). That is a very good threshold to make a random event completely unlikely in a conceivable biological context. While I have lowered the value from Dembski's original 500 bits, it is still an extreme threshold in our context. So, how could a 4-AA variation (about 17-18 bits) be achieved in a reasonable time? There is only one possibility. Let's say that A must become B to give a new function. And let's say that the difference is 4 AAs. The times implied would be really long. 
But if a state exists, let's call it A1, which is intermediate between A and B and differs from each of them by only two AAs, and that state is functional and selectable, then the transition A - B can be deconstructed into two independent transitions, A - A1 and A1 - B, each of two AAs. A1, when achieved in one individual of the population, would expand and become a new population, where the second part of the variation could happen with the same probability as the first: IOWs, the total probability of the event would be much higher than in the case where all four mutations must be achieved randomly in the same individual for a new function to arise. Is that clear? That's why I have written, many times, that: "more complex mutations must be deconstructed into simple selectable steps for the darwinian model to work". Now, you can believe that such a deconstruction can be done in all cases. I don't. And there are many reasons not to believe that. The first is that there is no logical reason to assume that. We know that complex functions are not the passive sum of simpler functions, but require higher levels of organization to work. That is generally true in all fields. In the biological field, in particular, there are a lot of reasons to understand that the darwinian myth of function deconstruction is, indeed, a myth. Let's go back to our example. There are different ways in which our A1 can be selectable and expandable. And please remember that, to be expanded, A1 has to be more functional than A, because it's exactly A that it must expand against. So, the possibilities: 1) A1 can have the same function as A, but at a higher grade. This is simple enough, and the easiest "tweak" which can be realized. If B too has the same function at a higher level than A1, the whole transition is simple enough. But the problem is that it does not create a real new function: it just improves what already exists. 
To be more clear, as we are speaking of proteins, that case is a case of moving inside a functional island, and a specific fold, at most improving the affinity of an existing molecule for its substrate, or shifting to a similar substrate of the same kind, but slightly different. Note that most of the few examples we have of really understood microevolution are of this kind (and nylonase would be one of them). 2) A1 can have a completely different function, and fold. I don't believe that any of that can be achieved with a two-AA mutation, certainly not starting from a totally unrelated different protein superfamily. But if it could, two possibilities remain: 2a) The transition from A1 to B remains in the same new functional island. Then the really important transition is only the first; the second, again, is only a tweak of what already exists. 2b) The second transition again finds a new functional configuration and fold with only two more AAs changed. Wonderful. What a pity that it does not happen, neither for the first nor for the second transition. What really has happened, and we don't know how (at least according to the darwinian model), is that new protein domains and superfamilies have constantly appeared in natural history, that each of them is totally unrelated to the others, and that the transition from one to the other would imply a jump in functional information well above my threshold of 150 bits. And no functional intermediates are known, in any case of that kind of transition: they are not in the proteome, and they have not been proposed by darwinists in any detailed molecular model. They exist only in their imagination. A myth, as I said. In the meantime, darwinists play with fairy tales of small steps which nobody has ever really conceived, least of all observed. Of neutral mutations which become fixed and, hey, they are magically just those we needed to "help" the tired mechanism of NS. 
Of proteins which traverse the almost infinite search space of protein sequences as though they could proudly swim in it and gain the distant shore. And so on. While the facts are: more than 6000 protein domains totally unrelated at the sequence level (less than 10% homology), each of them functional and represented by multiple functional proteins in the proteome, and no intermediate between them ever observed. 35 protein families analyzed by Durston for their functional information content, which is in the range of 46 (ankyrin) to 2416 (Flu PB2) Fits, with 28 of them above my threshold of 150 Fits. And so on. gpuccio
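The exponential arithmetic in the comment above can be checked in a few lines. A minimal sketch, assuming the usual convention of log2(20) ≈ 4.32 bits per specified amino-acid position (so two coordinated AAs come to roughly the 8.65 bits quoted earlier):

```python
import math

BITS_PER_AA = math.log2(20)  # ~4.32 bits: one specific residue out of 20

def prob_of_jump(bits):
    """Probability of hitting one specific target in a space of 2**bits states."""
    return 2.0 ** -bits

# Two coordinated AA substitutions, as in the Lenski discussion
print(round(2 * BITS_PER_AA, 2))   # 8.64 (rounded to 8.65 in the comment)

# One 10-bit event versus one 20-bit event
p10 = prob_of_jump(10)   # 1/1024
p20 = prob_of_jump(20)   # 1/1048576
print(p20 / p10)         # 0.0009765625, i.e. 2**-10: ~1000x less likely
```

This is the whole point of "functional bits are an exponential measure": adding 10 bits to the target does not add to the difficulty, it multiplies it by 2^10.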
gpuccio, Does a chihuahua have a different amount of CSI than a wolf? zeroseven
gpuccio, in Lenski's work the change you quantified as roughly 10 bits was fixed in populations of bacteria. After that fixation, nothing prevents these new strains from undergoing another round of the same mutation and fixation cycle, correct? And then they would have increased the information content by 20 bits. Or is there anything that prevents the new strains from evolving after the first 10 bits? A hidden switch? Indium
Re:
I have not personally seen an application of Dembski’s Explanatory Filter to a real biological system. Do you have a reference?
Try explaining the origin of the digitally coded information system joined to metabolism for first life, then again for body-plan level diversity. Have a look at Signature in the Cell -- on the merits, not the strawman caricatures -- for starters. And of course the issues of bias in the media, and of the significance of neo-malthusianism in the context of darwinism, still go a-begging; much less the question of a real alternative. GEM of TKI kairosfocus
But frankly, that paper is highly speculative, abstract, and does not prove anything.
Well you're the expert on the explanatory filter. What are the odds that 470 sequences that look like frameshifts are not? Perhaps you could take one of the numerous examples from the paper and explain the author's error. Petrushka
Not that it is any surprise, but my question to mathgrrl now stands. There is not a shred of evidence that physics alone can accomplish the symbol systems necessary for evolution to even occur in the first place. This little tidbit may now go back to being ignored by materialists of every stripe. Upright BiPed
Indium: Yes to all, but only if you in some way select the mutated organism and expand it. Let's be more clear. If you obtain a 10-bit mutation of some kind in a time t in a population of, say, 10^9 bacteria, then if you want to have the same probability of obtaining the next 10-bit mutation in the same time t in the second round, you have to select the single mutated individual and expand it to a population of 10^9 individuals bearing the first mutation. That, in the darwinist model, is one of the tasks assigned to the necessity part, NS (the other being the fixation of the mutation by negative selection of further mutations). That's why I say that more complex mutations must be deconstructed into simple selectable steps for the darwinian model to work. And as I believe that that is in general impossible, that's why I don't believe that the darwinian model is a credible model. It's as simple as that. Alternatively, in designed protein engineering, the same expansion and fixation can be done by artificial means (artificial selection), which don't require the single steps to be "naturally selectable" (that is, to bear a reproductive advantage). It is only required that they can be recognized by the designer. That's why ID is a credible model, and that's why I believe in it. It's as simple as that. gpuccio
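The "deconstruction into selectable steps" argument above has a simple back-of-the-envelope version. A toy sketch (my own illustrative arithmetic, not from any of the cited papers): compare the expected number of random trials to hit a 4-AA target in one coordinated jump against two 2-AA steps, each fixed by selection before the next begins, ignoring mutation rate, drift, and real fixation dynamics:

```python
AA = 20  # amino-acid alphabet size

def expected_trials(n_coordinated_sites):
    # One specific residue required at each of n sites simultaneously
    return AA ** n_coordinated_sites

one_jump  = expected_trials(4)                       # all 4 AAs at once
two_steps = expected_trials(2) + expected_trials(2)  # 2 AAs, fix, then 2 more

print(one_jump)               # 160000
print(two_steps)              # 800
print(one_jump // two_steps)  # 200: selection between steps collapses the search
```

This is why the whole debate turns on whether a functional, selectable intermediate like A1 actually exists: if it does, the search cost adds; if it does not, it multiplies.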
gpuccio: You seem to agree that biological objects can increase their "information content" by 10 bits, let's say by a couple of mutations. Let's also say you take the "new", mutated organisms and do a similar experiment, and again find an information increase of 10 bits. With respect to the original organism, the final one has increased its information content by 20 bits, correct? Indium
Petrushka: I apologize, but I answered your question about the frameshift mutations in the above post to MathGrrl. Would you be so kind to read it there? It's the final part. gpuccio
MathGrrl: Briefly: Okay, your definition is evolving (pardon the pun). It seems that CSI is a subset of something called “functional information”, where “functional information” becomes CSI when it reaches a certain number of bits. Correct? Correct! Let's say you got it at last. Sometimes repetition is useful. You still haven’t demonstrated with an example taken from a real biological system how, exactly, to calculate “functional information”. Doing so would clarify your terms much more than additional verbal descriptions. Please read in this thread my post #53 (to you), and the related links, read the Durston paper: http://www.tbiomed.com/content/4/1/47 and my posts #64 (to you), #67 (to KF), #74 (to you), #88 (to zeroseven), #89 (to Petrushka). I think I have been pretty active, in this thread (and elsewhere, with you too), giving "examples taken from real biological systems". Maybe here too repetition would help, but frankly I am tired. It appears that we are in agreement that evolutionary mechanisms can generate “functional information”, correct? Correct. In simple forms, they can. And they do. “Necessity” is another term of art requiring definition. Do you mean “any mechanism that is a result of known chemistry or physics”? I mean any mechanism that is strictly and explicitly algorithmic. That would include the working of the laws of physics, at least at the non-quantum level. And any model which describes each step of the algorithm without shifting to probabilistic inferences. What is the difficulty? In all ID literature, starting with Dembski, "necessity" is used for all algorithmic explanations which do not include a probabilistic description and the use of random events in the model. I thought that was clear. That's why RV is not a necessity algorithm, and NS is. I have not personally seen an application of Dembski’s Explanatory Filter to a real biological system. Do you have a reference? 
All my examples given above are applications of the filter to real biological systems. Some parts of the filter are usually taken for granted by me, such as the assumption that there is no known necessity mechanism, apart from the suggested NS (which I take into account explicitly in the discussion), which can determine the specific sequence of proteins. But if you have different views about that point, please express them. We may be reaching a point of quantifiability! Is “functional information” equivalent to Kolmogorov complexity? If so, why not use the common term? First of all, functional information is a measurement of the complexity which is necessary to achieve the function (IOWs, of the specified complexity). That's why we use a specific term. Neither Kolmogorov complexity nor Shannon's H implies any reference to function. If you read Durston's paper, you will see that he uses Shannon's H variations to measure functional complexity. I agree with that approach. Anyway, it is true that the complexity we measure must be scarcely compressible, otherwise it could be generated by a simpler algorithm. That is usually taken for granted for proteins, because it is universally recognized that their sequences are scarcely compressible, so that requisite is already satisfied. Actually, in a population, random mutation and selection are taking place in parallel across a large number of individuals. That may be true, but for different mutations in different genes. If we are discussing the "evolution" of a specific gene, it would start with one mutation (or anyway a few mutations, if we want to stretch probabilities to the extreme) in one individual. Unless that mutation is fixed and expanded by positive NS from that specific individual, it cannot contribute to future events in a non-random way; IOWs, we are still in the purely random model, for which ID computations apply. 
So you’re saying that the vast majority of random mutations that are preserved by natural selection result in only a small change to both the genotype and the phenotype? That is completely in line with the predictions of modern evolutionary theory. Well, I am happy about that. That means that there are at least small fragments of modern evolutionary theory which are not pure folly :) That is not actually correct. The importance of neutral and even slightly deleterious mutations has been identified as extremely important. It is correct. Neutral mutations can be as important as you want, but they can contribute to the final event only in random ways. Therefore, they are not a necessity model, and the ID computations still apply. That's a point surprisingly misunderstood by darwinists, in their epistemological distraction. Neutral mutations can become fixed in a population. Yes, but randomly. They are by definition neutral, therefore they cannot be selected. Why should a neutral mutation which is in the long range useful for the final event be fixed better than one which is useless or negative? Again, we are in a purely random system. (I would really appreciate an explicit comment on this point, instead of the usual iterations of "you have not given any real example". Thank you.) “Easily” may be overstating the case, but “plausibly” certainly applies, especially when neutral mutations are taken into account. Wrong. I had said: "So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage." For the reasons stated above, neutral mutations cannot contribute to selectable steps any more than the original random mutations, so they do not change anything. And let's say that you have to show that "any transition of 35 AAs or more can be deconstructed". I happily take away the "easily". 
I will applaud if you succeed, however difficult that may be. I cannot answer you about the immune system now, because I do not have the time. I will try to get back on that later. Finally, about frameshift mutations: The only "real example" (you see, I learn quickly) of frameshift mutation I am aware of is Ohno's theory about nylonase in his 1984 paper: Susumu Ohno, Birth of a unique enzyme from an alternative reading frame of the preexisted, internally repetitious coding sequence, Proc. Natl. Acad. Sci. USA Vol. 81, pp. 2421-2425, April 1984. Ohno was a brilliant scientist, but this particular theory has been thoroughly proven false. And with it, the associated rhetoric of darwinists against ID. I am well aware of the Okamura paper suggested by Petrushka, because it is cited in the Wikipedia page which gives the "disclosure" about the Ohno theory (I suppose it is meant as some form of consolation :) ). But frankly, that paper is highly speculative, abstract, and does not prove anything. There is certainly not one "real example" of frameshift mutation bearing functional results in it. I am not aware of any more realistic follow up to it. Therefore, I believe that no example of frameshift mutation bearing a new functional protein is known. If you know differently, please let us know (and above all, let Wikipedia and all darwinists know: they will be very happy to have a new "nylonase" argument, after the sad destiny of the original one). gpuccio
gpuccio, If you're only interested in single mutations that result in a significant increase in "functional information" (however you measure it), perhaps we should be discussing the frameshift events mentioned by Petrushka:
You are aware that there are many proposed frameshift events? http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WG1-4KJV32X-2-C&_cdi=6809&_user=10&_pii=S0888754306001807&_origin=search&_coverDate=12%2F31%2F2006&_sk=999119993&view=c&wchp=dGLbVlW-zSkWb&md5=edfb0857c096cf57efa86fce8eab7c6b&ie=/sdarticle.pdf
A worked, step-by-step example of how to calculate CSI for one of these mutations would be very helpful in understanding your terms and thereby evaluating your claims. Could you please provide one? MathGrrl
gpuccio, My apologies for the delay in replying. I'm currently in exotic Des Moines (pronounced, of course, DAY MWAH) and finally settled after a day of meetings.
the variation in Lenski’s case, if it is of two AAS, is of about 10 bits of functional information (something less, indeed). It is specified, but not complex. So, it is not a variation which can be defined as CSI, whatever threshold you fix for CSI.
Okay, your definition is evolving (pardon the pun). It seems that CSI is a subset of something called "functional information", where "functional information" becomes CSI when it reaches a certain number of bits. Correct?
Functional information is the amount of information necessary for the function to emerge. In a transition, it is the amount of variation which is necessary for the new function to emerge. Functional information can be simple (below a threshold, which I have suggested at 150 bits for biological systems), or complex (above that threshold). An object, or a transition, which is specified (a function, or a new function), and complex, exhibits CSI, and is empirically found to be designed in all known cases. Is that clear now?
It's getting clearer. Basically you seem to be saying that CSI is "functional information" over a certain number of bits. You still haven't demonstrated with an example taken from a real biological system how, exactly, to calculate "functional information". Doing so would clarify your terms much more than additional verbal descriptions. It appears that we are in agreement that evolutionary mechanisms can generate "functional information", correct?
In evaluating if an object, or a transition, exhibits CSI, we always have to exclude any known mechanism based on necessity which can have generated the result.
"Necessity" is another term of art requiring definition. Do you mean "any mechanism that is a result of known chemistry or physics"?
This is one of the fundamental principles of ID, and of Dembski’s filter.
I have not personally seen an application of Dembski's Explanatory Filter to a real biological system. Do you have a reference?
Indeed, the functional information we measure must be really “pseudo-random” information, scarcely compressible. IOW, we must look at the true Kolmogorov complexity of our result.
We may be reaching a point of quantifiability! Is "functional information" equivalent to Kolmogorov complexity? If so, why not use the common term?
The darwinian mechanism is made of two different components, applied repeatedly: RV + NS. Now, RV is the random part, while NS is a necessity mechanism. The two parts act sequentially, one after the other, and then the cycle is repeated.
Actually, in a population, random mutation and selection are taking place in parallel across a large number of individuals.
I want to state clearly here that any computation of the functional information in a system must be applied only to the random part, that is to the ability of RV to generate a specific result without any help from a necessity mechanism.
So you're saying that the vast majority of random mutations that are preserved by natural selection result in only a small change to both the genotype and the phenotype? That is completely in line with the predictions of modern evolutionary theory.
Yes, you have forgotten a very important if: that each of the “steps” of two AAs mutations must be functional and selectable. IOWs, each step must be visible to NS (which is not a trifle: it must confer a reproduction advantage).
That is not actually correct. The importance of neutral and even slightly deleterious mutations has been identified as extremely important.
Why? Because the two AAs mutation (about 10 bits, in the range of what a random biological system like RV can achieve) must be functional, must be positively selected, and must expand in the population and be fixed against further variation.
Neutral mutations can become fixed in a population.
So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage.
"Easily" may be overstating the case, but "plausibly" certainly applies, especially when neutral mutations are taken into account.
Interesting indeed. While I see no logical reason why that should be generally true, I would certainly be very happy to analyze any specific darwinian model for such a deconstruction of any of the complex transitions we know must have happened. I have repeatedly suggested a context which you have always refused to comment upon: the emergence of new protein domains, of new protein superfamilies, which we know has happened repeatedly in natural history.
This is an area of very active research, based on the predictions of modern evolutionary theory, in particular the nested hierarchy. Given your interest in ID, you may be familiar with this literature: http://www.nature.com/ni/journal/v7/n5/full/ni0506-433.html This link shows the literature on the evolution of the immune system (surely a specified function by your definition) presented to Behe at the Dover trial. This is only a small subset of the information available via Pubmed and other sources. Similar amounts of data are available on the evolution of other functional systems. If you don't think Lenski's experiment provides enough "functional information" to constitute CSI, I'd be very interested in seeing a worked example of your calculation for one of the immune system functions referenced in the above link. MathGrrl
I believe there may be a way to rigorously settle the fact that parent bacteria are losing information in beneficial adaptations, besides mathematically. It is fairly well known that it is only when a computer erases information that the second law is obeyed for its computation: Landauer's principle Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle Also of interest is that a cell apparently seems to be successfully designed along the very stringent guidelines laid out by Landauer's principle of 'reversible computation' in order to achieve such amazing energy efficiency, something man has yet to accomplish in any meaningful way for computers: Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon - Charles H. Bennett Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. 
Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,, http://www.hep.princeton.edu/~mcdonald/examples/QM/bennett_shpmp_34_501_03.pdf Thus I hold that it may be possible to measure a precise heat release for 'beneficial adaptations', since I hold that it will always fall in accord with Genetic Entropy and the 'beneficial adaptation' will always be the result of a loss of information from the original optimal information that was in the parent bacteria: This study agrees with the reasonableness of the proposition: Functional Information and Entropy in living systems - Andy McIntosh Excerpt: There has to be previously written information or order (often termed “teleonomy”) for passive, non-living chemicals to respond and become active. Thus the following summary statement applies to all known systems: Energy + Information equals Locally reduced entropy (Increase of order) (or teleonomy) with the corollary: Matter and Energy alone does not equal a Decrease in Entropy http://www.heveliusforum.org/Artykuly/Func_Information.pdf As well, a point that seems to get lost in the details of elucidating how much functional information is in a molecular string, is the fact that information is shown to be its own unique entity that is completely transcendent and separate from matter and energy by quantum teleportation as well as the refutation of the hidden variable argument in quantum entanglement. This is no small thing to consider! "Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Wiener - MIT Mathematician - Father of Cybernetics Information and entropy – top-down or bottom-up development in living systems? A.C.
McINTOSH Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/pages/papers.asp?iID=47&in=4&vn=4&jID=19 etc.. etc.. bornagain77
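As a side note on the Landauer figure cited above: the minimum heat per erased bit, kT ln 2, is straightforward to compute. A minimal sketch, assuming room temperature (300 K is an assumption for illustration, not a value from the papers quoted):

```python
import math

# Boltzmann's constant in joules per kelvin (CODATA exact SI value)
k_B = 1.380649e-23
T = 300.0  # assumed room temperature in kelvin

# Landauer bound: minimum heat released per erased bit is kT ln 2
E_per_bit = k_B * T * math.log(2)
print(f"{E_per_bit:.3e} J per erased bit")  # roughly 2.9e-21 J
```

This tiny number is why, as the Bennett excerpt notes, macroscopic computers dissipate energy far in excess of the bound, while some biomolecular processes come much closer to it.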
You are aware that there are many proposed frameshift events? http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WG1-4KJV32X-2-C&_cdi=6809&_user=10&_pii=S0888754306001807&_origin=search&_coverDate=12%2F31%2F2006&_sk=999119993&view=c&wchp=dGLbVlW-zSkWb&md5=edfb0857c096cf57efa86fce8eab7c6b&ie=/sdarticle.pdf Petrushka
Petrushka: They are neither added nor changed. A new function is obtained through a transition from an existing state to a final state. If the transition generates a new function, it is specified. If the transition is of two AAs, and if those two AAs are absolutely necessary for the new function in their unique form, then the complexity of the transition corresponds to a search space of 20^2, that is about 8.6 bits. If the transition is of three AAs, with the same assumptions as above, then the complexity corresponds to 20^3, that is about 13 bits. In general, each AA site which has to have a unique value to ensure the function contributes 4.32 Fits of functional complexity. In the case that more than one AA in that site could give the new function, we can apply the Durston method, which takes into account "how much" a single mutation has to be specific for the function by using the concept of Shannon's H to compute the change in functional uncertainty between the two states. In that case, each AA in the transition will contribute less than 4.32 Fits to the total complexity, according to how "unspecific" it is. I hope that clarifies some important concepts about quantifying a functional transition. gpuccio
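The arithmetic in the comment above can be made concrete. The sketch below computes the per-site maximum of log2(20) ≈ 4.32 bits and, as a rough stand-in for the Durston-style discount (an assumption, not the full published method), reduces a site's contribution by log2 of the number of residues it tolerates; the example transitions are hypothetical:

```python
import math

AA_ALPHABET = 20  # the twenty standard amino acids
MAX_FITS_PER_SITE = math.log2(AA_ALPHABET)  # about 4.32 bits

def site_fits(allowed_residues: int) -> float:
    """Functional bits contributed by one AA site.

    A site tolerating only 1 residue contributes the full
    log2(20) ~ 4.32 bits; a site tolerating k residues contributes
    log2(20) - log2(k), a simplified Durston-style discount.
    """
    return MAX_FITS_PER_SITE - math.log2(allowed_residues)

# Fully constrained 2-AA and 3-AA transitions (cf. 20^2 and 20^3 above)
print(round(2 * site_fits(1), 2))  # 8.64 bits
print(round(3 * site_fits(1), 2))  # 12.97 bits

# A hypothetical 3-AA transition where one site tolerates 4 residues
print(round(2 * site_fits(1) + site_fits(4), 2))  # 10.97 bits
```

As the comment says, any tolerance at a site lowers its contribution below 4.32 Fits; only fully constrained sites reach the maximum.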
zeroseven: Absolutely not. First of all, Lenski will probably detail (maybe he has already done so) the mutations in his experiment. I am quite sure they will be shown to be very simple. In another famous case, that of nylonase, darwinists have believed for decades that it emerged through a frameshift mutation of an existing protein gene. That would mean the generation of true new CSI, because a frameshift mutation transforms all the existing codons in a completely random way. A functional result of such an event would be against all the ID theory. Obviously, now we know that the darwinist theory was completely wrong, and that nylonase originated through a couple of mutations in the existing penicillinase domain, whose fold and esterase function it keeps, with a shift in target affinity at the active site. That is exactly another case of microevolution, and no new CSI is implied. As you can see, the concept of CSI can be applied (to exclude that new CSI has been generated) to all cases of supposed "evolution" of which we know the molecular basis. In the cases I quoted, it has shown, or will show very soon, that no new CSI generation is implied, and that they are cases of microevolution, involving new functional information in very simple form (few Fits). But another important application of CSI is to apply it to existing models of evolution. For instance, as I have said lots of times in this and in other threads, without ever receiving any answer from darwinists, the darwinian theory implies that new protein superfamilies which emerge in natural history must have come from existing, different superfamilies through the traditional process of RV + NS. Well, according to ID theory, and to the application of the concept of CSI, that is impossible through random variation alone.
Please notice that in this case, if we choose any specific superfamily, of which the time of appearance is more or less known (according to standard darwinian methods, such as molecular clocks, nested hierarchies, homologies and so on), and if darwinists could make the concession of proposing a model of how that new protein superfamily emerged (that is, from what different precursor it derived), then we have a situation where we know with reasonable precision: a) The original state (the sequence of the ancestor superfamily), with its function and fold. b) The final state (the sequence of the new superfamily, with its new function and new fold). c) The transition (how many AAs have changed). d) The functional complexity in Fits of the transition (which can be estimated by the Durston method). So, if we compute d), and it is above our threshold (for the moment we can assume mine of 150 Fits), the hypothesis of a purely random transition is invalidated. It is obviously possible that the transition happened in steps, each of them selectable. But then it is the duty of those who propose the model to show that those steps exist, and not only in their imagination, and that they are selectable. After all, we are speaking of a transition from one fold and one sequence to another fold and another sequence. Remember also that the primary sequences of the initial state and of the final state are by definition totally unrelated (less than 10% homology is a good threshold to define completely isolated protein families, and according to the SCOP database we have at present 6258 different genetic domain sequence subsets with that property). That means that in such a model the initial state is totally neutral, in the search space, in relation to the final state. Therefore, there is no guarantee that functional intermediates exist between the two states.
Indeed, the contrary is true: it is extremely likely, and completely reasonable, that such "selectable functional intermediates" don't exist at all. However, darwinists have certainly never shown those intermediates in such a transition model. Indeed, as far as I know, darwinists have never shown any transition model of that kind. So, what would you call a theory which proposes a causal mechanism (RV + NS) and has no model of how that mechanism could explain the thousands of macroevolutionary molecular events that must have occurred? Indeed, no reasonable model of macroevolution at the molecular level? And whose only real models are cases of microevolution, involving simple tweaking of existing functions, through one or two random mutations, always fully inside an existing island of functionality? gpuccio
because possibly he has not yet detailed the mutations implied at the molecular level...
Suppose it turns out that the Lensky result depends on two or three point mutations. Is that three bits added, or just three bits changed? Petrushka
gpuccio, Thanks for the explanation. But with reference to your penultimate paragraph, will this always be a problem in real world biological examples? That is, that we will not have detailed enough information to run the calculations? zeroseven
zeroseven: I agree with you: you are confused. But nothing bad in that. I will try to clarify. Usually, quantities are very precise, eg the boiling point of a substance, the energy required to break an electron off an atom etc. There is universal agreement as to amounts and quantities as they are based on observation and then measured in experiments. You are giving examples from physics. Have you any acquaintance with sciences such as biology, medicine, psychology? Be sure that the scenario is very different. Aren't they sciences? Yes, they are. The only thing which is inappropriate here is your epistemology. But with CSI this seems not to be the case. You have given 3 different “thresholds” that various people have adopted beyond which we can take it that CSI has been generated. I am afraid you have not followed the discussion well. The threshold of complexity in the evaluation of CSI has only one purpose: to avoid false positives. If you know something of statistics, you can appreciate that conventional thresholds are used all the time in statistical inference. For instance, the threshold of alpha error in hypothesis testing is usually set at 0.05, but many prefer to set it at 0.01 to reduce the 5% of errors which is implied by such a high threshold. There is nothing vague or non-scientific in that. Moreover, I have specified in my posts that the different thresholds I quoted are appropriate in different contexts. Dembski's UPB of 500 bits has the purpose of avoiding any possible false positive in the whole system of the universe, and of all its computational resources. KF and others have sometimes raised that threshold to 1000 bits just to be even more sure that no possible false positive can ever happen. But my personal threshold of 150 bits has a very specific meaning, which I have stated very clearly in my posts: it is a "biological probability bound".
IOWs, it does not refer to the whole universe, but to a specific system: our planet, with its 4 billion years of existence, and to the realistic probabilistic resources of bacterial reproductive rate and of the rate of mutations in that system. So, my threshold is not only a conventional value, but is based on a specific evaluation of a realistic and well defined system, which is, I believe, well appropriate for a discussion about the origin of life and of species. And in no way is its setting "based on the desired outcome". What gave you this strange idea? Finally, what do you mean with the following? Surely the only way to say that Lenski’s experiment does not produce new CSI is to measure it before and after? But you seem to be saying you can’t measure it because we don’t know enough about what is occurring? In that case what use is it? But it is simple: the only way to say that Lenski’s experiment does not produce new CSI is to measure the new information (necessary for the new function to emerge) generated from the initial state to the final state, what I have called "the transition". I don't know for sure how many mutations were involved in the case of Lenski, and I have reasonably assumed that they were probably a couple. I have invited anyone who has more detailed information about that to provide it. But it is true that, if we do not have detailed enough information about a transition, we cannot say whether that transition implied the generation of new CSI. Why do you think that such a fact implies that "CSI is no use"? It is absolutely normal and obvious that, if we do not have the information necessary to apply a mathematical concept, we can't apply it in that case. We can obviously apply it in all other cases where that information is available.
I understand your esteem for Lenski, but why do you think that the fact that we cannot apply a concept to his experiment (only because possibly he has not yet detailed the mutations implied at the molecular level) makes the concept "not useful"? Epistemology has become very strange, these days... gpuccio
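For what it's worth, the "biological probability bound" described in the two comments above can be sanity-checked with back-of-the-envelope arithmetic: multiply an assumed number of bacterial replication events by the assumed time span and convert the total trials to bits. The population and generation figures below are illustrative assumptions, not measured values, and are only meant to show how a threshold in that neighborhood could be derived:

```python
import math

# Illustrative assumptions (not measured values):
cells = 1e30           # assumed global bacterial population at any time
gens_per_year = 1e4    # assumed bacterial generations per year
years = 4e9            # approximate span of life on Earth

# Total replication events, treated as probabilistic "trials"
total_trials = cells * gens_per_year * years

# Convert trials to a bit threshold: events rarer than 1 in 2^bits
# would not be expected to occur even once in the whole system
bound_bits = math.log2(total_trials)
print(round(bound_bits, 1))  # about 144.8 bits
```

Under these assumed inputs the figure lands in the general vicinity of the 150-bit threshold discussed in the thread; different population or rate assumptions would shift it by some tens of bits either way.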
PS: UB, at this time we can look back and see that at no 1, MG's initial objection for this thread was that ideas are not responsible for the people who follow them, joined to the inevitable turnabout allusion to Christian atrocities. The issues of media bias and malthusian influences in darwinist thought feeding into a nihilistic desperation and hysteria were then taken up. After that we see the injection of a distraction on defining and quantifying information, then a refusal to be responsive when this was addressed. That adds up, but not to a happy sum by any means. kairosfocus
07: You and others are again invited to read the weak argument correctives, especially 26 - 28; as you were already invited in this thread. You will immediately see that in fact the simplest metric of FSCI, functionally specific bits, is a commonplace of information systems such as the PC you are using. And, that in every case where we directly and independently know the source of FSCI, it is intelligently caused. That is, FSCI is an empirically reliable sign of causation by directed contingency. You are further invited to examine the measures of functional sequence complexity of thirty-five protein families that were published in the peer-reviewed literature, here. The fallacy of setting up and knocking over a strawman you just indulged at 80 above therefore stands exposed as irresponsible, willfully obtuse and materially untruthful; as well as disrespectful. After that, it is time to get back to the serious issue of media bias and the even more serious issue of neo-malthusian nihilism connected to today's darwinist outlook; which this thread is being diverted from. Kindly, do better than that next time. Good day GEM of TKI kairosfocus
I hope everyone had a safe extended-holiday weekend… Mathgrrl, Your claim that I am “uncivil” is a turn of events I am more than prepared to live with. However, in your reasoning you stated that my comments were “baseless”. I challenge you on that. You made a comment regarding an ID proponent who would dare to question evolutionary theory (on an ID blog) while “ignoring a truly phenomenal amount of scientific research over the past century and a half.” In return I asked a simple question. “What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?” The words you and I selected for our individual posts are easily accessible to the average reader. (That is fair to say isn’t it?) The positions are hardly in question. The average person might look at this and come to the conclusion that I think there is more to the story of Life than you, and that you consider the level of explanation to be sufficient to your taste – so much so that apparently anyone on the Internet making certain statements should be reminded they are carelessly speaking outside the consensus. The casual observer might also conclude I am suggesting you personally reconsider the repeated observation of symbolic information processing inside the cell. All of it whirling along in the orchestrated harmony granted to it by the “frozen accident” - as it was first referred to. That functioning thing which evolution needs in order to work at all. The point in the causal chain (where no matter what else we may believe) we both know it all comes together and works; the core act of a dividing cell, to copy the information. That information is recorded in a symbolic format. Physics can’t explain it any more than physics can explain the existence of a red plastic ball.
I noticed previously that you have given several strong opinions in defense of materialistic thought, and therefore could assume you knew something about it. In particular I assumed you knew that symbolic information was being processed inside the cell. After all, these topics are hardly hidden from view within the ID debate. However, in my response back to you (#27) I simply answered your question, “Could you expand upon it a bit, please?” I offered a very straightforward answer and gave you two examples of the chemical relationships to which I was referring, one of them was an example of the use of symbols in structural information (like the formation of proteins from DNA), and the other was an example of symbols used in bio regulation (like that observed in second messengers or cAMP). I wrote:
The entire body is made up of context-specific reactions and interplay between chemical constituents which have nothing whatsoever to do with each other outside of the context of the system they are coordinated within. cAMP has absolutely nothing to do with glucose. Cytosine-Thymine-Adenine is a chemical symbol mapped to Leucine based upon an arbitrary rule.
…and I also offered two quotes from a respected research biologist which directly supports the comments I had made.
”The second noteworthy aspect is that the computation involves the use of chemical symbolism as information is transmitted…whole cell involvement and transient chemical symbols are typical of cellular computation…These chemical forms act as symbols that allow the cell to form a virtual representation of its functional status…any successful 21st century description of biological functions will include control models that incorporate cellular decisions based on symbolic representations”
Your response was to simply ignore the examples I had given. You made absolutely no comment whatsoever about protein synthesis, second messengers, cAMP, glucose, adenine, information transfer, regulation networks - nothing at all. Instead, you implied that the quotations I had offered lent no support to my claim, and then turned around and asked again for examples - which you just ignored. Clearly, the question (What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?) is not one you intend to address. It is just as clear that, despite not allowing yourself to be open to question, you intend to continue being confrontational to ID concepts. Coincidentally, I hardly think that being confrontational with you is a “baseless” response. Your contrived protestations for descriptive clarity are then seen for what they are. Upright BiPed
MG: It is a little disappointing that this thread continues to be tangential to the major issues raised in the original post. However, it does seem that some further remarks need to be made on the tangential matter. First, I find it sadly disappointing that, again, you have chosen to ignore where there is a specific response to your request on definition, both from BA77 and the undersigned. So, it is simply false that there is a "resistance" to provision of definition, though there is a recognition of the limitations of definition, and a pointing out that a great many things we work with are not subject to the sort of definition you a priori demand. On wider questions you posed, the basic problem with the incrementalist model of origin of bio-information is that it assumes a particular structure to the configuration space of biological systems that is most definitely unwarranted. Namely, that there is a vast, easily accessible continent of function, which leads to easy progress step by step. You have no right to assume such a model, as a moment's reflection on how codes work will tell you: by far and away most at-random complex symbol strings will be non-functional. And DNA stores a code, actually it seems several codes; which are central to life's function. Starting at 100+ k bits of stored information, and ranging upwards of billions. Now, the point of the issue of the threshold of functionally specific complex organisation and associated digitally coded information for bio-function, is that this implies that instead we have deeply isolated islands of function in vast seas of non-function. Consequently, the first challenge is to get to the first viable life forms, where until we have coded stored information joined to metabolic nanotechnology, self-replicating life does not exist and there is no reproduction for variations and environmental culling pressures to shift populations. 
In short -- and this has been so ever since Darwin truncated his theory at this strategic point -- there is a gap at the root of the tree of life so-called. And, hypothetical replicator molecules in Darwin's warm little pond or the modern equivalent, do not account for the origin of the observed von Neumann replicator, tied to metabolism, in cell based life. Then, when we come to novel body plans, we see that we are not "merely" looking at needing to account for the suggested spontaneous origin of ~ 100 k bits of initial bio-information, the codes and the machines that make those codes work, joined to the metabolic systems that turn environmental resources into cell components and energy. Instead, dozens of times over, we have to account for the spontaneous origin of embryologically feasible new architectures for living organisms, with upwards of 10 million bits apiece. Just 1,000 bits is far beyond the credible threshold for spontaneous information generation of the resources of our observed cosmos. Of course, if one has arrived on a beach of function, then one can plausibly discuss how one may by random small variation and differential performance, move towards peaks of performance. But that is not where the issue lives. In short, some big questions on origin of complex biological information are routinely being begged in how we are taught biology and related disciplines. That is, once the complex, functionally specific coded information threshold issue is on the table, macroevolution cannot properly be claimed to be simply accumulated microevolution. And indeed, for a generation, it has been widely known that the fossil record bears this point out: it shows sudden appearances, stasis and disappearance of basic forms, as the overwhelming pattern. There is no empirically well founded smoothly varying tree of life, starting with the gap where there should be a root, and going on to the origins of major body plans.
The so-called Cambrian life revolution is the capital illustration of this pattern, but the pattern is the overwhelming one in the fossil record, headlines about missing links notwithstanding. The real question, instead, is how to get to the shores of function that are on islands deeply isolated in vast configuration spaces. And that question points to the only empirically well founded source of complex coded information: intelligence. GEM of TKI kairosfocus
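As a side note for readers, the needle-in-a-haystack arithmetic behind the 500- and 1,000-bit thresholds discussed in this thread can be sketched in a few lines of Python. This is my own toy illustration, not anything from the comment above: it bounds the chance of hitting one specific configuration in a space of 2^bits states, given some number of random trials.

```python
def odds_of_hitting_target(bits, searches):
    # Union-bound estimate: probability that `searches` independent
    # uniform draws from a space of 2**bits configurations hit one
    # specific target configuration at least once.
    return min(1.0, searches / 2 ** bits)

# 500 bits vs. the ~10^150 states often cited as the universe's total
# search resources: the bound sits right at the boundary.
print(odds_of_hitting_target(500, 10 ** 150))   # ≈ 0.31

# 1,000 bits: the same resources leave the target effectively unreachable.
print(odds_of_hitting_target(1000, 10 ** 150))  # ≈ 9.3e-152
```

The point of the sketch is only that the bound falls off exponentially in the bit count, which is why the argument above fixes thresholds in bits rather than raw probabilities.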
BA, so I assume the answer is "no". So then what's the point of it if you can't make measurements in the real world? zeroseven
zeroseven, evolve some new function that exceeds the parent strain in the fitness test. Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video http://www.metacafe.com/watch/3995248 The test is blatantly clear. You are the one who believes the absurd position that bacteria evolved into all life we see on earth. If this position of yours is true, you should be able to point me to thousands upon thousands of examples of the fitness test being passed, complete with list upon list of new functional proteins being generated, as well as a fairly long list of protein machinery being originated by material processes. Yet you cannot even cite one single protein originating by purely material processes. Shoot, man, using all of his technology and lab equipment, will never find a novel functional protein, seeing that they exceed 1 in 10^77 in rarity. The following, if you care anything about the truth, which I highly doubt, shows one of the most crushing problems against neo-Darwinian evolution ever producing any trivial amount of functional information whatsoever: Poly-Functional Complexity equals Poly-Constrained Complexity. The primary problem that poly-functional complexity presents for neo-Darwinism is this: to put it plainly, the finding of a severely poly-functional/polyconstrained genome by the ENCODE study has put the odds, of what was already astronomically impossible, to what can only be termed fantastically astronomically impossible. 
To illustrate the monumental brick wall any evolutionary scenario (no matter what “fitness landscape”) must face when I say genomes are poly-constrained to random mutations by poly-functionality, I will use a puzzle. If we were to actually get a proper “beneficial mutation” in a polyfunctional genome of say 500 interdependent genes, then instead of the infamous “Methinks it is like a weasel” single element of functional information that Darwinists pretend they are facing in any evolutionary search, with their falsified genetic reductionism scenario I might add, we would actually be encountering something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford:

S A T O R
A R E P O
T E N E T
O P E R A
R O T A S

which translates as: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other 3 readings of the message with the new mutation. This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations. The puzzle I listed is only poly-functional to 4 elements/25 letters of interdependent complexity; the minimal genome is poly-constrained to approximately 500 elements (genes) at a minimum approximation of polyfunctionality. For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order! Notes: Simplest Microbes More Complex than Thought - Dec. 
2009 Excerpt: PhysOrg reported that in a species of Mycoplasma, “The bacteria appeared to be assembled in a far more complex way than had been thought.” Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes. http://www.creationsafaris.com/crev200912.htm#20091229a First-Ever Blueprint of 'Minimal Cell' Is More Complex Than Expected - Nov. 2009 Excerpt: A network of research groups approached the bacterium at three different levels. One team of scientists described M. pneumoniae's transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation. "At all three levels, we found M. pneumoniae was more complex than we expected." http://www.sciencedaily.com/releases/2009/11/091126173027.htm Scientists Map All Mammalian Gene Interactions – August 2010 Excerpt: Mammals, including humans, have roughly 20,000 different genes. They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome. http://www.sciencedaily.com/releases/2010/08/100809142044.htm bornagain77
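For what it is worth, the four-way symmetry of the Sator square quoted above can be checked mechanically. Here is a small Python sketch of my own (not from Sanford's book) that verifies the square reads identically in all four directions and that a single "mutation" breaks every reading at once:

```python
rows = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(rows):
    # The four readings: rows left-to-right, rows right-to-left,
    # columns top-to-bottom, and columns bottom-to-top.
    cols = ["".join(r[i] for r in rows) for i in range(len(rows))]
    forward = "".join(rows)
    down = "".join(cols)
    return forward, forward[::-1], down, down[::-1]

original = readings(rows)
print(len(set(original)))  # 1: all four readings are the same string

# Mutate a single letter and every one of the four readings changes.
mutated = readings(["SATOR", "AREPO", "TENET", "OPERA", "ROTAX"])
print(sum(a != b for a, b in zip(original, mutated)))  # 4
```

This is just the poly-constraint point in executable form: one change to the grid perturbs several overlapping "functions" simultaneously.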
BA77@76, Ok BA, can you use these equations and tell me what the CSI of Lenski's bacteria is before and after? zeroseven
as well, the fact is that evolutionists must pass the fitness test before they can even claim that new complex functionality/information exists which was not present in the parent species. For evolutionists to try to claim this for a sub-species which has lost robustness for survivability, as Lenski's 'coddled' E. coli clearly has when compared to its parent wild strain, is to ignore the main point that evolutionists need to establish in the first place. To play semantics with a devolved strain, to see if 'new' functional information has 'evolved', is an exercise in futility, for the first step in assessing a gain in functional information/complexity (the fitness test) has not even been passed. bornagain77
zeroseven you state: And despite mathgrrls efforts, and referring to BA’s notes, I haven’t seen a precise application of it? excuse me but this video, which is listed in my notes, shows a precise application of Szostak's equation for functional information: Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160 as well I listed this paper in which functional information was calculated for 35 protein families,,, Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47 ,,, thus zeroseven you expose yourself for being thoroughly disingenuous with the evidence I provided,,, bornagain77
gpuccio, I am not a mathematician or information theorist, and so the technical discussions confuse me. But as a layperson's observation, the way you are using maths and calculations seems very different to how it is usually done in science. Usually, quantities are very precise, e.g. the boiling point of a substance, the energy required to break an electron off an atom, etc. There is universal agreement as to amounts and quantities, as they are based on observation and then measured in experiments. But with CSI this seems not to be the case. You have given 3 different "thresholds" that various people have adopted, beyond which we can take it that CSI has been generated. It just all seems very ad-hoc and non precise. Why 500 bits, why 1,000, why 150? So 149 would not do it? Do you not think there is a danger of setting thresholds based on the desired outcome? And despite MathGrrl's efforts, and referring to BA's notes, I haven't seen a precise application of it. Surely the only way to say that Lenski's experiment does not produce new CSI is to measure it before and after? But you seem to be saying you can't measure it because we don't know enough about what is occurring? In that case what use is it? Yours, confused zeroseven
MathGrrl: I am afraid there is some confusion about terms here (I don't know if I may have been imprecise in some phrase; if so, I apologize). The variation in Lenski's case, if it is of two AAs, is of about 10 bits of functional information (something less, indeed). It is specified, but not complex. So, it is not a variation which can be defined as CSI, whatever threshold you fix for CSI. If you are aware that the number of functional mutations in Lenski's case is different, please let me know, and we will update the computation.

Functional information is the amount of information necessary for the function to emerge. In a transition, it is the amount of variation which is necessary for the new function to emerge. Functional information can be simple (below a threshold, which I have suggested at 150 bits for biological systems), or complex (above that threshold). An object, or a transition, which is specified (a function, or a new function), and complex, exhibits CSI, and is empirically found to be designed in all known cases. Is that clear now?

Let's go to your other point. I have already debated it many times with others, and maybe also with you (I can't remember), but here we are again. In evaluating if an object, or a transition, exhibits CSI, we always have to exclude any known mechanism based on necessity which can have generated the result. This is one of the fundamental principles of ID, and of Dembski's filter. Indeed, the functional information we measure must be really "pseudo-random" information, scarcely compressible. IOW, we must look at the true Kolmogorov complexity of our result.

Now, you say: "If my understanding of your argument is correct, you're claiming that evolutionary mechanisms cannot generate more than a certain amount of CSI as a single mutation becomes fixed in a population." First of all, the correct form is: "evolutionary mechanisms cannot generate more than a certain amount of functional information". Let's be clear about that. 
The darwinian mechanism is made of two different components, applied repeatedly: RV + NS. Now, RV is the random part, while NS is a necessity mechanism. The two parts act sequentially, one after the other, and then the cycle is repeated. I want to state clearly here that any computation of the functional information in a system must be applied only to the random part, that is to the ability of RV to generate a specific result without any help from a necessity mechanism.

You go on: "However, evolution proceeds by many small changes. If each mutation generates 10 bits of CSI, it only takes 15 mutations to hit your 150 bit boundary." (Well, just to be precise, it would require about 17.3 mutations of two AAs, because each of them is 8.65 bits; IOWs, my threshold corresponds to about 35 coordinated mutations, as I believe I had said.) But, in essence, that's perfectly correct. I agree with you. That is a perfectly reasonable model. If...

Yes, you have forgotten a very important if: each of the "steps" of two-AA mutations must be functional and selectable. IOWs, each step must be visible to NS (which is not a trifle: it must confer a reproduction advantage). Why? Because the two-AA mutation (about 10 bits, in the range of what a random biological system like RV can achieve) must be functional, must be positively selected, and must expand in the population and be fixed against further variation.

So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage. Interesting indeed. While I see no logical reason why that should be generally true, I would certainly be very happy to analyze any specific darwinian model for such a deconstruction of any of the complex transitions we know must have happened. 
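For readers who want to check the arithmetic above, the figures follow from treating each amino-acid position as a choice among 20 alternatives. A back-of-envelope Python sketch, under that assumption:

```python
from math import log2

bits_per_aa = log2(20)          # ≈ 4.32 bits for one position in a 20-letter alphabet
two_aa_step = 2 * bits_per_aa   # ≈ 8.64 bits for a coordinated two-AA mutation

threshold = 150                 # the proposed biological threshold, in functional bits
print(threshold / two_aa_step)  # ≈ 17.4 selectable two-AA steps
print(threshold / bits_per_aa)  # ≈ 34.7, i.e. about 35 coordinated AAs
```

The small discrepancy with the 8.65 and 17.3 quoted in the comment is just rounding; the substance of the 35-AA figure is unchanged.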
I have repeatedly suggested a context which you have always refused to comment upon: the emergence of new protein domains, of new protein superfamilies, which we know has happened repeatedly in natural history. Well, we know that each protein superfamily is isolated from the others at the level of primary and tertiary structures. Very simply, the primary sequence of each superfamily bears no similarity to the sequence of others (that's how they are defined), and the folding is different. Well, we have hundreds of examples where a new superfamily appears at some point of natural history. That has happened hundreds of times. I ask: how?

Please, show a model of that emergence from some pre-existing, different superfamily in a different species, where a transition of "at least" 35 functional AAs has been achieved through single steps of two-AA transitions, each of them bearing a functional gain selectable by NS through a reproductive advantage. Please, show one such example from the rich darwinian literature, with those minimal 17 functional steps molecularly verified, and then we can really discuss your beautiful model. Then you will have demonstrated that that transition implied no real emergence of CSI, because it can be obtained through a mechanism where the RV part is in the range of what randomness can achieve, and the NS part is well described and documented in detail. That will be a good start. Then you have only to show that the same model can work for all the thousands of protein superfamilies we know. But don't worry: a good start is half the battle. gpuccio
Warmabomber is but fulfilling views of Energy Sec. Holdren etc. Who is responsible for Warmabomber's violent agenda? By: Glenn Harlan Reynolds
Seeing humanity as destructive, Holdren wrote in favor of forced abortion and putting sterilizing agents in the drinking water, and in particular of sterilizing people who cause “social deterioration.” . . . In contemporary America, no respectable person would advocate, say, the involuntary sterilization of blacks or Jews. Why, then, should it be any more respectable to advocate the involuntary sterilization of everyone? Or even of those who cause “social deterioration?” Likewise, references to particular ethnic or religious groups as “viruses” or “cancers” in need of extirpation are socially unacceptable, triggering immediate thoughts of genocide and mass murder. Why, then, should it be acceptable to refer to all humanity in this fashion? Does widening the circle of eliminationist rhetoric somehow make it better?
DLH
Mathgrrl, you keep saying that no one has provided you with a mathematical definition of functional information, yet I have given you a working definition, and gpuccio and kairos have given you a much more detailed definition. It is simply ludicrous for you to state this: 'If there is no mathematical definition of "functional information", for example, then tgpeeler's claims about how much of it can be generated by evolutionary mechanisms are simply meaningless. Frankly, I'm very surprised to see the resistance to providing detailed definitions.' Yet here is the definition again: Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160 Functional information and the emergence of bio-complexity: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex) = -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function. 
Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47 bornagain77
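The Hazen/Szostak definition quoted above, I(Ex) = -log2[F(Ex)], is straightforward to implement. Here is a minimal Python sketch; the toy letter-sequence example is mine, not from the paper:

```python
from math import log2

def functional_information(functional_configs, total_configs):
    # Hazen et al.: I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of
    # all possible configurations achieving at least the degree of function Ex.
    return -log2(functional_configs / total_configs)

# Toy example: 3-letter sequences over a 26-letter alphabet.
total = 26 ** 3  # 17,576 possible sequences
print(functional_information(1, total))    # one functional target: ≈ 14.1 bits
print(functional_information(100, total))  # a looser function with 100 solutions: ≈ 7.5 bits
```

Note how the measure depends on the size of the functional target set, not just the sequence length: the more configurations satisfy the function, the fewer functional bits it carries.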
CannuckianYankee,
I think the point we’re all attempting to make here is that scientific questions often do not lend themselves to rigorous mathematical equations and definitions as you seem to require of ID.
The terms CSI, functional information, FSCI, and others are used very frequently on this site and are often associated with quantitative values and claims about real world scenarios. I don't believe it is unreasonable to expect rigorous definitions of those terms. If there is no mathematical definition of "functional information", for example, then tgpeeler's claims about how much of it can be generated by evolutionary mechanisms are simply meaningless. Frankly, I'm very surprised to see the resistance to providing detailed definitions. ID makes claims about the real world. How do you propose to test those claims without knowing how to measure the quantities that you are discussing? MathGrrl
gpuccio, Thank you for another detailed reply. Here, from that reply, is where I think we can have a productive conversation about quantifying CSI:
Now, if we are speaking of a transition where the change in functional information is, say, of two AAs, as it is likely in the Lenski model or in the nylonase model, we are in the range of less than 10 bits. I think nobody in his mind has ever suggested that kind of threshold for CSI. So, it is very simple: a transition between two states where the change is of less than 10 bits can never be defined as a transition which has generated new CSI.
I'm somewhat confused by this because you first estimate that the changes in Lenski's experiment constitute about 10 bits of CSI, then you claim that it is not CSI. It seems like you are using the term in two different ways. If my understanding of your argument is correct, you're claiming that evolutionary mechanisms cannot generate more than a certain amount of CSI as a single mutation becomes fixed in a population. However, evolution proceeds by many small changes. If each mutation generates 10 bits of CSI, it only takes 15 mutations to hit your 150 bit boundary. You seem to be saying that evolutionary mechanisms can't generate large amounts of CSI in one fell swoop. Assuming that CSI can be rigorously defined, that would be an expected prediction of modern evolutionary theory. Evolution deals with incremental change based on what already exists. Your previous calculations that came out with large CSI values for certain proteins do so only by ignoring evolutionary mechanisms and assuming that the protein appeared complete and whole in its current form. That is not a biologically realistic scenario. I would very much like to drive this quantification discussion to a conclusion. Could you please take me step-by-step through the calculation of CSI for Lenski's citrate-eating E. coli? There is more than enough information about that experiment online for us to make some reasonable estimates about the number of mutations required. This exercise will also allow us to address the ambiguity around what constitutes a valid specification. MathGrrl
F/N: Pardon, but to return to the main area of focus for the thread, I think 28's alternatives to the desperate darwinism tinged neo-malthusianism that helped Mr Lee go off the rails [cf 19] is a point to begin finding ways forward; and it allows us to get out of the poisonous atmosphere fostered by the sort of media bias the original post complains of. So, pardon my putting the following back on the table:

==================== >> Instead of a sterile debate in a poisonous rhetorical atmosphere, let us instead discuss possibilities for a positive future that uses the human capacity to intelligently analyse the possibilities of the forces and materials of the world, to create opportunities for a future that gets us out of the neo-Malthusian trap. As sparkers for thought:

1 –> Energy is the key resource for everything else. So, long term, fusion; shorter term, I think new wave fission such as developments of pebble bed technology offers us a way forward.

2 –> Information technologies, though they are rooted in some of the dirtiest industries of all [look up what happens with Si chip fabrication . . . ], are a key intellectual power multiplier, so this technology should be given a priority, on both hard and soft sides.

3 –> The modularity principle would allow things to be localised, reducing the need for massive conurbations, that seem to have largely become ungovernable. Technologies should be as modular as possible, and as networked as possible, to take advantage of network economics.

4 –> Timber is the major construction resource, so we should look to sustainable timbers, especially the potential of processed bamboos based on species such as Guadua angustifolia [100', 5 - 7 y, higher growth density than pine forests]. Bamboo and the like can also make paper.

5 –> A lot of construction of relatively light structures can move to technologies such as bamboo bahareque, through a modern version of wattle-daub. 
6 –> The automotive industry needs to go fuel cell long run; shorter run, I like things like algae oil [couple coal plants to feed bio oils grown by algae, cutting emissions 50%], and I think if we can do butanol in a fermenter cost effectively, we are looking at 1:1 for gasoline for Otto cycle engines.

7 –> That brings up biotech. We need a big thrust to get cells to manufacture as much of our chemistry as we can, industrial and pharmaceutical. Bugs will do it for us, on the cheap, once they are reprogrammed. (Remember, they are existing Von Neumann replicator technologies.)

8 –> Wind and solar will probably remain fringe but useful technologies. With one major exception: we need to look back to sailing schooners as small regional carriers in a post oil world.

9 –> Rail is the most inherently efficient bulk mover land transportation system, so we need to look to how that could be regenerated — subsidies, overt and hidden, killed rail.

10 –> We need to look to aquaculture and high tech agriculture to feed ourselves.

11 –> We need to break out of Terra, using our moon as staging base — 1/6 gravity makes everything so much easier to do, with Mars as the first main target. Beyond Mars, the Asteroid belt.

12 –> Transform these and we are looking at real estate to absorb a huge onward population.

13 –> As a long shot, high risk high payoff investment, physics needs to look at something that can get us an interstellar drive and transportation system.

14 –> So, investment in high energy accelerators and related physics and astronomy should be seen as a long term investment of high risk but potentially galaxy-scale [or bigger?] payoff.

15 –> Settling the Solar system takes the potential human population to the dozens of billions.

16 –> If we can break out and find terraformable star systems beyond, the sky is literally the limit, even if we are restricted to a habitable zone band in our galaxy. (For, we are dealing with potentially millions of star systems.) 
_____________________ Now, wouldn’t it have made a big difference if we had been discussing these sorts of possibilities instead of the eco-collapse, climate collapse and over-population themes that serve little purpose but to drive people to desperation — and into the arms of those who offer convenient “solutions”? >> ==================== I think finding a way forward is what we really need to discuss. And, by putting up something serious and discussing it, we can move beyond the limits set by media biases. GEM of TKI kairosfocus
GP: You are also very right. One way to see that is to look at the plausibility thresholds identified by Abel in his recent paper, for different scales:
The UPM from both the quantum (qΩ) and classical molecular/chemical (cΩ) perspectives/levels can be quantified by Equation 1. This equation incorporates the number of possible transitions or physical interactions that could have occurred since the Big Bang. Maximum quantum-perspective probabilistic resources qΩu were enumerated above in the discussion of a UPB [6,7] [[8] (pg. 215-217)]. Here we use basically the same approach with slight modifications to the factored probabilistic resources that comprise Ω. Let us address the quantum level perspective (q) first for the entire universe (u) followed by three astronomical subsets: our galaxy (g), our solar system (s) and earth (e). Since approximately 10^17 seconds have elapsed since the Big Bang, we factor that total time into the following calculations of quantum perspective probabilistic resource measures. Note that the difference between the age of the earth and the age of the cosmos is only a factor of 3. A factor of 3 is rather negligible at the high order of magnitude of 10^17 seconds since the Big Bang (versus age of the earth). Thus, 10^17 seconds is used for all three astronomical subsets:

Universe: qΩu = 10^43 trans/s * 10^17 s to date * 10^80 p,n,e = 10^140
Galaxy: qΩg = . . . * 10^67 = 10^127
Solar system: qΩs = . . . * 10^57 = 10^117
Earth: qΩe = . . . * 10^42 = 10^102

These above limits of probabilistic resources exist within the only known universe that we can repeatedly observe--the only universe that is scientifically addressable. Wild metaphysical claims of an infinite number of cosmoses may be fine for cosmological imagination, religious belief, or superstition. But such conjecturing has no place in hard science. Such claims cannot be empirically investigated, and they certainly cannot be falsified. They violate Ockham's (Occam's) Razor [40]. No prediction fulfillments are realizable. They are therefore nothing more than blind beliefs that are totally inappropriate in peer-reviewed scientific literature. 
Such cosmological conjectures are far closer to metaphysical or philosophic enterprises than they are to bench science.
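The quoted figures are simple exponent sums (a 10^43 per-second transition rate, times 10^17 seconds, times a particle count for each scope). A couple of lines of Python confirm the arithmetic; the dictionary below just restates the exponents from the excerpt:

```python
RATE_EXP, TIME_EXP = 43, 17  # 10^43 transitions/s, 10^17 s since the Big Bang

# Particle-count exponents (protons, neutrons, electrons) per scope, from the excerpt.
particle_exp = {"universe": 80, "galaxy": 67, "solar system": 57, "earth": 42}

for scope, p in particle_exp.items():
    print(f"qOmega ({scope}) = 10^{RATE_EXP + TIME_EXP + p}")
# → 10^140, 10^127, 10^117, 10^102, matching the quoted table
```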
For chemical reactions with 10^-13 s as the speed limit, these fall to 10^108, 10^96, 10^85, and 10^70 respectively. Thus, the scope of search resources for chemistry is well within the limit you are proposing, even for the whole universe, much less my monkeys, keyboards and banana plantations model for digital text strings -- an upper limit on keyboarding would be like 10^-3 s, generous given the realities of keyboard bounce and the need for debouncing. [Even the use of a cross-coupled RS latch with a changeover switch, or a latching JK flipflop where the o/p switches the f/f to the store state on first contact in ~ 10 ns depending on IC technology, will not get us beyond that.] So, 150 functional bits [2^150 ~ 1.43*10^45 . . . get XCalc, folks; my favourite convenience RPN simple calc] is fairly safe for slow reactions relative to long chain organic, endothermic molecules that have to be chaperoned step by step in observed cases. Note how there are hundreds of ribosomes typically, using parallel processing to get cumulative speed.

The Durston paper then puts in the knife, starting with just the individual proteins, not even the level of organising the life form, getting metabolism demanding many times over multiplied dozens of proteins, and self replication. Then, to innovate new body plans . . . In short, there are very good reasons why we do not see the Darwinian mechanisms doing much more than Behe's edge of evo observations from Malaria parasites. Organisms do adapt and vary, but the mechanisms by which the organisms come about and their body plans come about are far beyond what it is credible that chance variations and mechanical necessity can do, whether with chemicals in warm little ponds, or with genetic accidents. And we have just one empirically known mechanism that can routinely exceed random walk searches: active information injected by intelligences. GEM of TKI kairosfocus
KF: thank you for your always precise contributions. There is no doubt that 500 or 1000 bits is a very reasonable threshold for a universal case. My reason for proposing a lower threshold of 150 bits in the specific context of proteins and biological information is that here we are not in a general case, but in a very particular one. And we have a good knowledge of the available probabilistic resources, which, even in an extreme and very safe calculation, are much lower than those of the universe, taking into consideration a reasonable estimate of the most favourable case (bacteria), with the highest population number and replication rate, and as a timeframe the estimated useful age of earth, let's say 4 billion years. Moreover, when we apply the calculation to protein families, we can try a reasonable estimate of the functional space, and so we need not push our threshold higher for safety reasons.

For instance, if we look at the estimates of FSCI in Durston's paper, we can see that 28 protein families out of the 35 he has analyzed are well beyond my threshold of 150 functional bits. And 12 of them are beyond the 500 bit threshold, with 6 beyond 1000 bits. The 7 which are below the 150 bit threshold are very short proteins, of less than 100 AAs. But it is interesting to observe that, out of 12 protein families below 100 AAs in length, 5 do have a functional complexity higher than 150 bits, starting with insulin, which is 66 AAs long and has a functional complexity of 156 Fits. Therefore, the important point is that functional complexity is a function of length, but not only of length. The Durston method measures very well, and very naturally, how much of the AA length is really necessary for the function, which, together with length, is a very good estimate of the target space. 
It is obvious, at least for me, that even the worst case of ankyrin, with its 33 AAs and 46 Fits, could not emerge in a random system (the probability for that to happen still being 1 : 7x10^13), which is reassuring enough. But we have to fix a threshold somewhere, and for protein complexity I believe that 150 Fits is very appropriate, because it can be derived from reasonable and very generous assumptions about possible random biological systems, and empirically it seems to detect almost all the cases of complexity in the existing proteome, leaving out only a minority of probable false negatives. gpuccio
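To make the Fits-to-odds conversion used above explicit: F functional bits correspond to a target-space ratio of roughly 1 in 2^F. A quick Python check against the two figures cited from Durston's table:

```python
def fits_to_odds(fits):
    # One functional bit halves the chance of a random hit, so F Fits
    # correspond to odds of roughly 1 in 2**F.
    return 2 ** fits

print(f"ankyrin, 46 Fits:  1 in {fits_to_odds(46):.1e}")   # 1 in 7.0e+13
print(f"insulin, 156 Fits: 1 in {fits_to_odds(156):.1e}")  # 1 in 9.1e+46
```

The 46-Fit figure reproduces the 1 : 7x10^13 quoted for ankyrin, which is the sense in which Fits are just log-odds against a random hit.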
PS: GP [and MG], the reason for going from 500 bits -- the number of Planck time quantum states the atoms of the observed universe run through in the thermodynamically credible lifespan of the cosmos -- is that it is often hard to specify how many states will be functional. So, taking the number of states of the observed universe as the practical upper limit on an island of function -- it is the largest functional object we observe! -- we isolate that to the degree of 1 in 10^150, by squaring this number. (Practically speaking, start with 500 bits and double it. The number of possible states for B bits is 2^B. Doubling B gives a squared number of states. 16 bits gives a 64 k address space, and 32 bits gives 4 gigs.) At 1,000 bits, an island of function the scope of our whole universe will be utterly isolated. And, no practically feasible search will be able to use up even a small fraction of 10^150 random walk search-steps. That means that we are utterly unlikely to get to the shores of an island of function by sheer dumb luck. Where, as Marks and Dembski show us, on average, search algorithms will be no better than random walk searches. [Unless the algorithm is very well matched to the space based on intelligent, active info, it will typically be WORSE than random search: if you take a multiple choice test based on misinformation, you are MORE likely to pick wrong items than if you picked at random.]

In the real world -- as opposed to the world of selectively hyperskeptical objections -- we routinely recognise intelligent configuration of symbols from their complexity and organised functionality. That is why we know the posts above are intelligently designed, not random bursts of noise on the Internet. We have an Internet full of examples of FSCI being the product of intelligence, and ZERO cases of FSCI being produced by lucky noise. 
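The doubling-squares point in the parenthesis above is easy to confirm with Python's arbitrary-precision integers (a quick check, nothing more):

```python
def states(bits):
    # Number of distinct configurations of a B-bit register.
    return 2 ** bits

assert states(32) == states(16) ** 2   # doubling the bits squares the state count
print(states(16), states(32))          # 65536 4294967296
print(len(str(states(500))))           # 151 digits, i.e. ~10^150
print(len(str(states(1000))))          # 302 digits, i.e. ~10^301
```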
That is why objections are getting into ever more esoteric issues on what intelligence is, whether free will exists, and whether the matter can be quantified -- never mind that the quantifications are easily accessible [cf. no 27, WACs top right at UD for summaries]; they too will be objected to in turn, again and again. That is why I am picking the simplest, as it is what we routinely implicitly use when we go buy a hard drive or more memory for our PCs, or when we send out a file as an e-mail attachment and notice its size. In short, when we see objections to the sort of simple metric above, X = F*C*B, we have good reason to know the objections are selectively hyperskeptical, and we can see why relative to very familiar examples. Indeed, as UB and TGP keep on pointing out, the objectors must intelligently produce samples of FSCI to make the objection to FSCI, underscoring that the objections are self-referentially incoherent. As in, reductio ad absurdum. kairosfocus
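Kairosfocus's point that doubling the bit count squares the number of configurations can be checked directly; a small sketch under the usual 2^B counting (illustrative only):

```python
def states(bits: int) -> int:
    """Number of distinct configurations a string of `bits` bits can take."""
    return 2 ** bits

# 16 bits give the classic 64 k address space:
print(states(16))                     # 65536
# Doubling the bit count squares the number of states (32 bits -> "4 gigs"):
print(states(32) == states(16) ** 2)  # True
```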
MathGrrl: Pardon. Above, you interjected a question on the quantification of CSI/[d]FSCI, in a thread which is primarily about dealing with media bias and manipulation. (And since in comment no 1 you made an objection on the subject, which was answered, you are aware of the main subject.) By 41 - 46, you had several responses, INCLUDING DISCUSSIONS OF THE QUANTIFICATION OF CSI/[d]FSCI AND RELATED MATTERS, WITH ONWARD LINKS THAT DISCUSS THE GENERAL CONCEPT OF INFORMATION AND GO SO FAR AS ONWARD ISSUES IN THERMODYNAMICS. In response, you picked on 46, which was a "sauce for the goose" turnabout intended to show the inadequacy of objections on definition that demand mathematicisation where it is inappropriate. By 53 - 55, GP and the undersigned repeated the exercise, and UB pointed to the underlying incoherence again; GP pointing to a previous case where you apparently dropped out of a discussion when he answered the same basic question/objection. In addition, in my own responses, I highlighted the variety, limitations and problems associated with definitions. In your latest responses, I find little responsiveness to this substantial body of discussion. In particular, your repetition of demands on definitions does not seem to reflect any serious engagement with the issue of definition and the linked one of quantification. I will go so far here as to point out that if you call for an inspection of the properties of your PC's hard drive [or even of a document written in Word or the like], you will see a listing of xxx bits. That listing gives the number of bits at work to store the functionally specific information that makes your PC work, or allows you to present the document in question. Shannon information is in effect a raw capacity or transmission measure, with some adjustment based on the redundancy of the alphabet of symbols used and some further allowance for the effects of noise, as is discussed in Section A of the always linked through my handle in the LH column.
(As a rule, some symbols from a set of such will be more often used, e.g. e vs x in English. [And that in turn answers the question of what symbols are, by pointing to a live example: the ASCII set, or the old-fashioned glyphs we studied in our first classes in school. Also, given long experience at UD, when we see one definition simply being the occasion for more and more demands for further definitions, in a context that refuses to engage the already linked serious discussions on the nature and limits of definition -- either you go circular [a dictionary] or you have primitive concepts that are not further defined than by pointing to cases and using family resemblance [the sciences, Mathematics] -- then the implication is that the objections are rhetorical and distractive, not substantial.]) However, the information listed for your hard drive is not simply Shannon info; it is information that functions, and in specific ways. As this PC reminded me the hard way a scarce week back, a very few corrupted bits in Config Sys are quite enough to kill function, and lose you months of stored info on a hard drive when the drive has to be wiped and reloaded with the OS. So, we can see just how specific functionality can be. Believe you me, the "islands of function in a wider sea of non-functional configs" is NOT a mere empty metaphor. (Advice: back up any files you would regret losing [at minimum, email them to yourself on an Internet mail account] . . . I got caught getting lazy on that, again. "It's a new PC . . . " and "I gotta rush off . . . " OUCH.) So also, as noted, recognising that we can identify adequate vs inadequate function -- does it WORK? -- we may define a dummy variable on go/no-go: F = 1/0.
Similarly, on applying a threshold for complexity -- more or less a measure of how isolated and hard to find at random the target zones will be in configuration spaces, as stipulated by the size of the body of information and the scope of the set of symbols -- we can have a go/no-go on being complex enough that random walks are unlikely to encounter the island of function: 500 - 1,000 bits is more than good enough; usually we just use 1,000 bits and again do C = 1/0. The FSCI simple metric is one step away: take the product of the number of bits at work [B] with F and C: X = F*C*B. (NB: This is actually presented in Section A, the always linked, and in a simple form in the Weak Argument Correctives, 25 - 28.) A bit rough and ready, but good enough for a lot of practical work. (BTW, grades in school are assessed and measured in a very similar way, as are job performance ratings. In short, the approach is quite good enough for serious real-world contexts.) In short, pardon again: your behaviour (given several on-point responses that you did not address) raises a serious question: are you raising a serious burning question, or are you dragging a distractive red herring across the track of a subject that is going where you do not wish to see it go? In this case, highlighting how the major media [including nominally "conservative" houses] are so often dominated by an evolutionary materialistic agenda that turns that ideology into a sacred cow. So, hard questions that need to be asked about what evolutionary materialism is doing to us as a culture are being persistently ducked by those who man our microphones, cameras and editorial staff-rooms. Not to mention, our classrooms. It is therefore highly relevant to cite some choice words from Lord Keynes' peroration to his famous General Theory:
. . . Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.
And so, MG, it is not so much that Mr Lee was insane, but that the madman is the canary in the mine, informing us of the poisonous currents in the air by his unbalanced ravings. As was pointed out in 19 - 21 and 36 above. If we are to take this thread in a positive direction, why not pick up on the positive responses in 28 above, which suggest a way to solutions, short and long term, including a short discussion on why even investment in some truly esoteric particle/high-energy physics research may have long-term, incalculably positive payoffs? I think this can provide a mutually beneficial and relatively uncontentious focus for discussion that could help us turn a sad tragedy into a possibility for hope. Can we go for that? GEM of TKI kairosfocus
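The simple X = F*C*B metric kairosfocus describes above can be written out in a few lines; a sketch assuming the 1,000-bit threshold he says is usually used (note that the go/no-go judgments F and C are supplied by the observer, not computed):

```python
def fsci_metric(bits: int, functional: bool, threshold: int = 1000) -> int:
    """Simple FSCI metric X = F*C*B:
    F = 1 if the configuration is judged to work, else 0;
    C = 1 if the bit count meets the complexity threshold, else 0;
    B = number of functionally specific bits at work."""
    F = 1 if functional else 0
    C = 1 if bits >= threshold else 0
    return F * C * bits

print(fsci_metric(300_000, True))    # 300000: functional and past threshold
print(fsci_metric(500, True))        # 0: functional but below threshold
print(fsci_metric(300_000, False))   # 0: complex but judged non-functional
```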
MathGrrl: I still don’t see how, according to your own definitions, the mutations that arose during Lenski’s long running experiment did not create CSI. I want to answer this point simply and explicitly, so that it is really clear. You certainly know that the role of complexity in any CSI definition is to rule out simple changes whose complexity is in the range of a random-model explanation. That requires a threshold of complexity to define CSI. The threshold is usually taken at a very high level of complexity (or at a very low level of probability, which is the same), just to be "on the safe side". The threshold can vary according to the context, and therefore to the possible probabilistic resources of the model we are considering. Dembski has spoken of a UPB, at about 500 bits of complexity. That's because he is reasoning at the level of the whole known universe, and even in that context he wants to stay "very safe", to avoid any possible false positive. Others, even more generously, have pushed that threshold to 1000 bits. I maintain that, for any realistic model of biological information, where we certainly have potential probabilistic resources in a much lower range, a threshold of, say, 150 bits is much more appropriate. Now, if we are speaking of a transition where the change in functional information is, say, of two AAs, as is likely in the Lenski model or in the nylonase model, we are in the range of less than 10 bits. I think nobody in his right mind has ever suggested that kind of threshold for CSI. So, it is very simple: a transition between two states where the change is of less than 10 bits can never be defined as a transition which has generated new CSI. As far as I know, the threshold I have suggested (150 bits) for biological models is the lowest which has ever been suggested. You can stick to that in your discussions with me, if you want, but not to any lower threshold value.
So, just to be clear: to show me that any transition has generated CSI, it is not only necessary that the transition has generated a new function (which in the case of Lenski and nylonase can be reasonably assumed), but also that the transition itself implied a functional complexity of at least 150 bits. IOW, that at least 35 independent AA mutations were necessary for the new function to appear. As you can see, I am keeping the threshold very low. I like risk. But two AAs have never been a complexity threshold for anyone. gpuccio
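Gpuccio's equivalence between the 150-bit threshold and "at least 35 independent AA mutations" follows from counting log2(20) ≈ 4.32 bits per fully constrained amino-acid position; a quick sketch (illustrative only):

```python
import math

BITS_PER_AA = math.log2(20)  # ~4.32 bits per fully constrained AA position

def aa_changes_to_bits(n_aa: int) -> float:
    """Functional bits implied by n_aa independent, fully constrained
    amino-acid changes, each selected from 20 possibilities."""
    return n_aa * BITS_PER_AA

print(round(aa_changes_to_bits(35), 1))  # 151.3 -- just over the 150-bit line
print(round(aa_changes_to_bits(2), 1))   # 8.6 -- the "less than 10 bits" case
```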
MathGrrl: Please explain better what is not clear to you about the quantification of dFSCI. In the two posts I linked above, I have answered this point in detail. Would you please comment on them in some detail? Again, you speak of citrate digestion. I quote from my first post: "Yes, but it is a multiple function, a system, and it depends on many different simple functions. Most of those functions have not varied in Lenski’s experiment (probably). If Behe is right, the only function which has varied is in the transport system, and that variation is probably due to a tweaking of an existing transport protein. And that tweaking is probably due to very few mutations. We are, IOW, in the same scenario as nylonase: microevolution. In this scenario, we can reason about CSI/dFSCI only after we know in detail what has changed at the molecular level, what biochemical function exactly has appeared which was not present before, and in what protein. This is an important point. We cannot reason about CSI if we do not know the details of what we are observing (or, at least, the details of the model we are evaluating: more on that after). CSI is not philosophy: it is science." So, again, to compute the variation in functional complexity in a transition, you must know in detail what has changed at the molecular level. Even if we probably don't know that in the case of Lenski's system, it is extremely likely that the mutations are only a few (maybe even one or two). That rules out that new CSI has been generated. And again, the new function which has emerged is probably permeability to citrate, and not the ability to digest it. To compute CSI in a transition, we must know in some detail the starting state and the final state at the molecular level. That's why I ask you again what your model is of the emergence of a new protein domain in natural history, an event which certainly occurred at least a few thousand times independently.
If you would offer at least one hypothetical model, such as: protein domain superfamily A emerged from the existing, different protein domain superfamily B at this point of natural history, in approximately this time, we can make a rough computation of the dFSCI variation implied in the transition. By the way, an explicit method to compute the functional information variation in a transition is outlined in the Durston paper, many times linked, on which I don't recall you ever commenting. So, welcome to the discussion again, but please be specific and answer our specific comments. In detail. gpuccio
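For readers unfamiliar with the Durston paper mentioned here: its functional-bits measure works, per aligned site, as the ground-state entropy log2(20) minus the Shannon entropy observed across functional sequences, summed over sites. A rough per-column sketch, assuming an equiprobable ground state (the example columns are hypothetical, for illustration only):

```python
import math
from collections import Counter

def column_fits(column: str, alphabet_size: int = 20) -> float:
    """Durston-style functional bits for one alignment column:
    log2(alphabet_size) minus the observed Shannon entropy."""
    counts = Counter(column)
    total = len(column)
    h_obs = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return math.log2(alphabet_size) - h_obs

# A fully conserved column carries the maximum ~4.32 Fits:
print(round(column_fits("LLLLLLLL"), 2))  # 4.32
# A column split evenly between two residues carries exactly one bit less:
print(round(column_fits("LLLLVVVV"), 2))  # 3.32
```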
MathGrrl, I think the point we're all attempting to make here is that scientific questions often do not lend themselves to the rigorous mathematical equations and definitions you seem to require of ID. As I pointed out in an earlier post, Darwinists do not entertain that level of rigor on many (or most) of their arguments. FSCI is a useful term to a point, but perhaps not to the point you require. That does not render it scientifically useless. At the same time, neither is "evolution" or even "RV + NS," or most of the terminology by which Darwinists maintain the science of ToE. Earth and biological science is not necessarily mathematical. Math is a much more rigorous and exact discipline than science. Science attempts to arrive at the best explanation (especially when we're dealing with events which neither you nor I have personally witnessed). When math can be applied to scientific arguments, we are all the better for it, but quite often phenomena do not currently lend themselves to such precision. They may eventually, and we should attempt to be as precise as we can, but in order for you to understand ID's basic arguments, such precision is not "essential." I will give you this much: quantification may be useful, as some of the current ID research attempts to do just that. It may be useful to be able to quantify the level of FSCI in DNA, but at this point, I don't believe it is "essential," as you seem to believe it is. We don't quantify the exact amount of FSCI in the posts we offer on this board, yet I don't think any of us would doubt that the FSCI is present. From the upthread discussion: BA @ 4 “The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!” TGP: "This is true and I’ll take a second to tell Mathgrrl why it will always be true." MG: "from your post upthread are meaningless. We need a referent to objective reality for that claim to be assessed."
We have that referent objective reality available to us. BA stated exactly what that reality is. The problem we face is when the question is begged regarding the ability of RV + NS to produce CSI. The typical Darwinian answer is contrary to our experience; thus, it is reasonable to object. It seems to me that many Darwinists attempted to attack Behe's very basic definition of "irreducible complexity" not long ago with the same sort of rhetoric: "IC can't be quantified, therefore it is useless." Then they attempted to demonstrate that Behe's mousetrap is not irreducibly complex, thus negating their earlier objection. If IC can't be quantified, then their attempt to refute it is meaningless. Of course IC can be quantified. It is quantified in the precise definition Behe supplied for it. But they failed to apply Behe's precise definition in their attempted refutation, preferring to redefine what he meant so they could attack it. Then they applied a similar argument to the "objective reality" of the bacterial flagellum, and failed once again. I understand your caution in wanting to know exactly what we mean by CSI, or further, FSCI, but those terms are defined as rigorously as can be expected for the moment. And if you look at the history of ID terminology, proponents redefine those terms more rigorously as reasonable objections occur. This is why we now speak of FSCI as more quantifiable than simply CSI. These terms may not be quantifiable in terms of precise numeric equations, but they are quantifiable insofar as FSCI can be recognized and distinguished from non-FSCI (Shannon information, for example). Here's a thought experiment for you: What is complex information? What sort of information fits that definition? Well, Shannon information fits as well as CSI fits. Both are forms of complex information. What is Complex Specified Information? Well, Shannon information does not fit, because it is not specified.
The information in this post fits because it meets all three criteria; it is information, it is complex, and it is specified. What is functionally specified complex information? Shannon? No. The information in this sentence? Yes. All printed information that forms sentences? Not necessarily. Not all sentences are necessarily functional. For example: "Judge bananas shut smoothly on mountain oceans." It's a complete sentence, is complex and specified, but it forms no function relevant to reality. Well, you could argue that it serves a function as an example of what I mean, but by itself, without reference to my argument here, it serves no function. You see then how the definition is quantified? It weeds out certain types of information. That may be as quantifiable as we presently can be. But the definition is useful because it relates to our experience. In our experience, we encounter FSCI only in reference to purposeful conscious applications, and not in reference to non-purposeful, non-conscious applications such as RV + NS, or any sort of chance happening, such as throwing magnetic letters at a refrigerator, or my cat walking across my computer keyboard and producing Shannon information. Any argument that we encounter RV + NS producing FSCI in biological systems is simply question begging at this point. This is not to say that you couldn't demonstrate how RV + NS can produce FSCI, but in our present experience, it has not been demonstrated. CannuckianYankee
Mathgrrl, why do you not address this post here?,,,, https://uncommondescent.com/darwinism/media-mum-about-deranged-darwinist-gunman/#comment-363043 ,,,In which the mathematical definition of functional information, and how it relates to molecular biology, is laid out in detail? That is exactly what you asked for, is it not? Do you truly believe Lenski's 'coddled' E. coli, which are kept isolated from the wild strain, are proof of a gain in functional information? It is not even close, Mathgrrl, even though it is sold as supposedly irrefutable proof of evolution! The following articles refute Richard E. Lenski's 'supposed evolution' of the citrate ability for the E-Coli bacteria after 20,000 generations of the E-Coli from his 'Long Term Evolution Experiment' (LTEE), which has been going on since 1988: Multiple Mutations Needed for E. Coli - Michael Behe Excerpt: As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.),,, If Lenski’s results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell. http://www.amazon.com/gp/blog/post/PLNK3U696N278Z93O Lenski's e-coli - Analysis of Genetic Entropy Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less “fit” than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment. http://www.answersingenesis.org/articles/aid/v4/n1/beneficial-mutations-in-bacteria Lenski's work actually did do something useful in that it proved that 'convergent evolution' is impossible, because it showed that evolution is 'historically contingent'. The following video and article make this point clear: Lenski's Citrate E-Coli - Disproof of Convergent Evolution - Fazale Rana - video http://www.metacafe.com/watch/4564682 The Long Term Evolution Experiment - Analysis Excerpt: The experiment just goes to show that even with historical contingency and extreme selection pressure, the probability of random mutations causing even a tiny evolutionary improvement in digestion is, in the words of the researchers who did the experiment, “extremely low.” Therefore, it can’t be the explanation for the origin and variety of all the forms of life on Earth. http://www.scienceagainstevolution.org/v12i11f.htm Upon closer inspection, it seems Lenski's 'coddled' E. coli are actually headed for genetic meltdown instead of evolving into something better. New Work by Richard Lenski: Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
http://www.evolutionnews.org/2009/10/new_work_by_richard_lenski.html Is this your ace in the hole Mathgrrl? And if your supposedly strongest piece of evidence falls completely apart upon cursory examination, what does this say about all the other evidence you have been brainwashed with? further notes: In fact, trying to narrow down an actual hard number for the truly beneficial mutation rate, that would actually explain the massively integrated machine-like complexity of proteins we find in life, is what Dr. Behe did in this following book: The Edge Of Evolution - Michael Behe - Video Lecture http://www.c-spanvideo.org/program/199326-1 A review of The Edge of Evolution: The Search for the Limits of Darwinism The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). Dr. Behe states in The Edge of Evolution on page 135: "Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite." That order of difficulty is put at 10^20 replications of the malarial parasite by Dr. Behe. This number comes from direct empirical observation. Richard Dawkins’ The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe - Oct. 2009 Excerpt: The rarity of chloroquine resistance is not in question. 
In fact, Behe’s statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact. http://www.evolutionnews.org/2009/10/richard_dawkins_the_greatest_s.html Thus, the actual rate for 'truly' beneficial mutations, which would account for the staggering machine-like complexity we see in life, is rarer than one in a hundred-billion-billion mutational events. So this one in a thousand, to one in a million, number for 'truly' beneficial mutations is actually far, far too generous an estimate for evolutionists to use for beneficial mutations. In fact, from consistent findings such as these, it is increasingly apparent that the principle of Genetic Entropy is the overriding foundational rule for all of biology, with no exceptions at all, and belief in 'truly' beneficial mutations is nothing more than wishful speculation on the materialists' part which has no foundation in empirical science whatsoever. Evolution vs. Genetic Entropy - video http://www.metacafe.com/watch/4028086 The foundational rule of Genetic Entropy for biology, which can draw its foundation in science from the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (Dembski, Marks, Abel), can be stated something like this: "All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species genome." bornagain77
Upright Biped,
Mathgirl, your response to TGPeeler in 51 is so completely self serving that it’s a little difficult to see it as anything but a matter of pure rhetoric. Which it is.
Asking for definitions in order to understand someone's argument is "pure rhetoric"? Actually, I am doing tgpeeler the courtesy of taking his points seriously and spending my time and intellectual effort to understand them better. You, on the other hand, are being remarkably uncivil by casting baseless aspersions on my intentions.
You are more than welcome to remedy that situation by applying your own objection to your own objection, and simply answering the question upthread from CY. Perhaps that will provide a certain amount of perspective to the rhetoric. “Could you please provide a mathematical definition of “definition,” so that any interested observer can objectively measure it, and thus know exactly what you’re referring to?” This question, by your own standards, is a completely legitimate question, and should be answered. So please answer it.
I could go down the rathole you're attempting to dig by explaining how mathematicians define terms, but that would simply distract from the main point of the discussion (which, I suspect, is your goal). The point is that terms of art like "functional information" must be rigorously defined before they are used. The inability of tgpeeler or yourself to do so means that any claims about functional information are quite literally meaningless. If you would like to respond to my courtesy with courtesy, I would be delighted to continue the discussion with you. If, instead, you want to persist in your attempts at distraction, I'll spend my time with the more civil participants here. MathGrrl
gpuccio! Hello again! I did indeed drop out of our previous discussion due to the pressures of the first weeks of the new semester, for which I hope you will accept my apologies.
I think we had some discussions about functional information recently. I would suggest you look at my last answers to you here: https://uncommondescent.com.....ent-362414 and here: https://uncommondescent.com.....ent-362415 which, I believe, you have never answered.
Thank you for the links and for the time you spent to write those posts.
And if there is any other aspect of the quantification of functional information in proteins which you would like to discuss, let’s do it.
Great! After reading your linked posts, it seems to me that quantification is exactly what is missing. I still don't see how, according to your own definitions, the mutations that arose during Lenski's long-running experiment did not create CSI. In fact, we seem to agree that understanding the evolutionary history of a particular function is essential to determining the amount of CSI in the underlying proteins and genome. That suggests that we should be in agreement that simply computing four to the power of the length of the genome, or twenty to the power of the length of the protein, is not relevant to real-world biological systems. However, we seem not to be in agreement on that point. Quantification is, therefore, essential to making any progress in our discussion. If you could walk me through a step-by-step calculation of CSI for a real-world biological function (e.g. citrate digestion in E. coli or the ability to digest nylon), I think we could both learn a lot. That would enable me to independently calculate the CSI of other biological systems and determine whether your claims about the inability of evolutionary mechanisms to create CSI are correct. Thanks again for coming back into the discussion. MathGrrl
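The "four to the power of the genome length, twenty to the power of the protein length" calculation MathGrrl refers to is easy to state explicitly; a sketch with illustrative lengths (not figures from the thread):

```python
import math

def naive_target_bits(length: int, alphabet_size: int) -> float:
    """Bits needed to specify one exact sequence out of
    alphabet_size ** length -- the naive 'whole sequence is the target'
    calculation whose biological relevance is disputed above."""
    return length * math.log2(alphabet_size)

print(naive_target_bits(1000, 4))         # 2000.0 bits for a 1 kb DNA stretch
print(round(naive_target_bits(300, 20)))  # 1297 bits for a 300-AA protein
```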
Parts 6 and 7 of the Norman Geisler video may be very interesting to many on UD, because he adds to the C.S. Lewis argument that God exists because there is an inherent need in man for God, by quoting the leading atheists themselves, as was amply demonstrated by the deranged Darwinist: Norman Geisler - The New Atheism (6/8) http://www.youtube.com/watch?v=d_8aGYz6PS8 Brooke Fraser- "C S Lewis Song" http://www.youtube.com/watch?v=GHpuTGGRCbY bornagain77
Cabal,
By replacing “Evolutionary theory” with “Intelligent Design”, his statement becomes more relevant with respect to reality.
By replacing "reality" with "fantasy" your statement becomes more relevant to reality. In my opinion, as far as the level of discourse in your comment goes, if that is the best evolutionists have to offer, I don't see any future for evolution. I don't even see the value of evolution in the present either. Except of course for its evolved purpose of being a wedge. Clive Hayden
This video is very informative about 'New Atheists', which may be of interest to some, since we have dealt with so many Darwinists who are 'New Atheists' here on UD. Norman Geisler - The New Atheism (1/8) http://www.youtube.com/watch?v=fS6UL5BvIC0 bornagain77
Mathgirl, your response to TGPeeler in 51 is so completely self serving that it's a little difficult to see it as anything but a matter of pure rhetoric. Which it is. You are more than welcome to remedy that situation by applying your own objection to your own objection, and simply answering the question upthread from CY. Perhaps that will provide a certain amount of perspective to the rhetoric. "Could you please provide a mathematical definition of “definition,” so that any interested observer can objectively measure it, and thus know exactly what you’re referring to?" This question, by your own standards, is a completely legitimate question, and should be answered. So please answer it. Upright BiPed
MG: I suggest that after you look at GP's links, you may find it helpful to then scroll back up to 42 - 45 above, where several posts did respond to your request. My own simple note on the easiest metric of functional information and its significance in light of your own earlier post is in that cluster. (But you will need to think a bit on the meaning of "definition." On this one Wiki does a reasonable job here. NWE improves it, here. Pay particularly close attention to the concept of ostensive definition, as this comes closest to capturing how we form a concept by abstracting from exemplars, then refine its boundaries through precising descriptions.) Thereafter I suggest 25 - 28 in the Weak Argument Correctives and relevant entries in the glossary. My own note, as linked in the LH column through my handle, will speak to several aspects of the information issues at an initial undergraduate level, and will also in Appendix 1 tie in classical and statistical thermodynamics. Beyond that, you may wish to follow the links from App 1 in that note to the TMLO discussion. (Somewhere there is also a link to the whole of the Thaxton et al book online.) These will also direct you onward to some fairly serious discussions. And, the publications at the Evo Info Lab by Marks and Dembski will be of help. GEM of TKI kairosfocus
MathGrrl: I think we had some discussions about functional information recently. I would suggest you look at my last answers to you here: https://uncommondescent.com/intelligent-design/intelligent-design-and-the-demarcation-problem/#comment-362414 and here: https://uncommondescent.com/intelligent-design/intelligent-design-and-the-demarcation-problem/#comment-362415 to which, I believe, you have never replied. And if there is any other aspect of the quantification of functional information in proteins which you would like to discuss, let's do it. gpuccio
tgpeeler,
Mathgrrl @ 39 “Could you please provide a mathematical definition of “functional information” so that any interested observer can objectively measure it?” No. I can’t. Regrets. You are free to try
It's your term, so it's up to you to define it. Until you do, statements like this one
BA @ 4 “The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!” This is true and I’ll take a second to tell Mathgrrl why it will always be true.
from your post upthread are meaningless. We need a referent to objective reality for that claim to be assessed. I'm genuinely interested in understanding the information theory based arguments that are seen frequently here on UD. If you decide to define your terms, I'd love to continue the discussion. MathGrrl
Cabal wrote: In my opinion, as far as the level of discourse in this thread goes; if that is the best ID proponents have to offer, I don’t see any future for ID. I don’t even see any value of ID in the present either.
First of all, thank you for your participation. Although I intensely disagree, I think it is valuable for readers at UD to hear the best arguments the anti-ID side has to offer. I point out the irony of this claim:
Cabal wrote: In my opinion, as far as the level of discourse in this thread goes; if that is the best ID proponents have to offer, I don’t see any future for ID. I don’t even see any value of ID in the present either.
This thread is not the best ID has to offer. But the irony is that you say you don't see any value in ID in the present day, which implicitly suggests you see value in Darwinism. The irony is that if Darwinism (as defined by Dawkins and Coyne) is true, then there is no value in anything. To quote Dawkins, "there is no good, no evil, nothing but pointless indifference". Darwinists defend their view as if the universe will be better if Darwinism is true and if humanity accepts Darwinism as true. That is the height of non-sequiturs! If Darwinism is true, then there is inherently no metric for what is better or worse, and thus there is no logical reason to defend Darwinism. That's what I find astonishing about Dawkinsian Darwinists: they are not able to logically demonstrate why acceptance of Darwinism is good, since, by definition, Dawkinsian Darwinism implies the notion of "good" is only an illusion. And it is amazing to me that a philosophy premised on the pointlessness of human existence should be defended with such vigor as if it were the holy grail. This is the height of irrationality. The value of ID is that if it is true it opens the possibility that there may be an Intelligent Designer, and though Intelligent Design does not necessarily imply God's existence, it certainly makes the possibility compelling. To quote Dawkins:
the presence of a creative deity in the universe is clearly a scientific hypothesis. Indeed, it is hard to imagine a more momentous hypothesis in all of science.
Even if the chance of ID being true is remote, the payoff could be infinite. If Darwinism is true, then humanity is screwed for sure, and there is no logical way to demonstrate that acceptance of Darwinism is a "good" thing! And that is the lack of logic the Deranged Darwinist Gunman demonstrated. If Darwinism is true, and even if the human race were to go extinct, how is this logically a bad thing, since extinction in the Darwinist world is a good thing? The lack of rationality by Lee is displayed in various incarnations by Darwinists like Dawkins defending the "value" of Darwinism. Finding value in Darwinism is like searching for square circles in Euclidean geometry. It is a logically contradictory concept; thus it is ironic that a Darwinist attacks ID as having no value, since Darwinism on its own terms is demonstrably valueless. One might appeal to Theistic evolution as a compromise. But as Coyne pointed out, Theistic evolution is really another form of creationism, and thus doesn't really help the cause of pure Darwinism. So, to hear a Darwinist use teleological terms like "value" is like Gunman Lee thinking he's doing "good" by taking people hostage to prevent human extinction. On what Darwinist grounds is any form of extinction a bad thing (even human extinction)? In Darwinism, extinction is a good thing! scordova
btw, The issue with Wikipedia is that it purports to be an encyclopedia of factual information (which quite often it is, and quite often not, depending on the level of controversy on a subject). The problem is that the "factual information" can change from one day to the next. Anytime an ID advocate attempts to update an article concerning ID, it usually changes that very day to reflect the POV of Darwinists. So you're not getting all the facts on ID, nor on evolution. All you're getting is opinion and interpretation of facts. Hence, an extremely biased approach to fact finding and reporting. I like Wiki for certain information, such as music history and the biographies of classical composers, for example, but when it comes to the life sciences, and the biographies of persons involved in controversial issues, I need to be much more cautious. In fact, I have two family members (a brother and an uncle) of some note, who have Wiki articles on them. I've contributed to those articles. Not once has any of my information been changed or disputed. Yet I can imagine that Dr. Dembski, for example, can't get a "fair and balanced" biographical article written about him without the nasty biases of Darwinists creeping in. CannuckianYankee
mathgrrl (and tg, stephenb), "ID advocates want clarity; Darwinist partisans want confusion. Clarity is Darwinism’s greatest enemy. Once everyone understands that there is some evidence for evolution but no evidence at all for Darwinism, Darwinists will be out of business. Thus, they must be dishonest to survive. Wikipedia, which also seeks to obfuscate, can hardly be trusted to illuminate the issue. When in doubt, read our “frequently raised objections” section to understand what ID does and does not argue." This is exactly right. mathgrrl, since even young earth creationists accept "evolution," perhaps you could be as demanding about definitions when it comes to "evolution" as you seem to be about ID concepts such as "functional information." I mean really, ID advocates are much more precise in what they mean than Darwinists, who think "evolution" can be used loosely to show "change over time," and that simply because organisms change over time, this means that such change occurred through unplanned natural processes. Are you satisfied that when Darwinists refer to "evolution" they mean the same thing as when young earth creationists refer to "evolution?" If not, then I think you can see why we use the terms "Darwinist" and "Darwinism" to refer to something specific; something more "mathematically definitional," to use your term. So we have two terms: "evolution" = biological change over time. "Darwinism" = evolution via random variation + natural selection. The two are not the same. Please be consistent when you demand that we be precise and clear. CannuckianYankee
---mathgirl: "Wikipedia summarizes the problems with the term here [Darwinism]." You do not seem to appreciate what all the fuss is about. ID does not challenge [a] macro-evolution but argues that [b] unguided, naturalistic processes such as random variation and natural selection did not drive the process. Darwinists make claims for [b] but provide evidence only for [a], hoping that no one will notice the difference. Indeed, they have no evidence at all to support [b]. In order to promote this farce and obfuscate the issue, they purposely use the imprecise word “evolution,” which can be taken either way. ID holds that, if macro-evolution occurred, it was, at least in part, designed or programmed to unfold according to the prior intent of a designer—that it had “man [forgive the gender reference] in mind.” Darwinism insists that no intent, design, or program is necessary—“that evolution is a purposeless, mindless process that did NOT have man in mind.” Thus, ID advocates use the word “Darwinism” to refer to the stronger claim of unguided evolution as opposed to the weasel word “evolution,” which can be shifted and morphed as needed. ID advocates want clarity; Darwinist partisans want confusion. Clarity is Darwinism’s greatest enemy. Once everyone understands that there is some evidence for evolution but no evidence at all for Darwinism, Darwinists will be out of business. Thus, they must be dishonest to survive. Wikipedia, which also seeks to obfuscate, can hardly be trusted to illuminate the issue. When in doubt, read our "frequently raised objections" section to understand what ID does and does not argue. StephenB
Mathgrrl @ 39 "Could you please provide a mathematical definition of “functional information” so that any interested observer can objectively measure it?" No. I can't. Regrets. You are free to try. To take CY one step further, while you're at it, perhaps you could also provide a mathematical definition of each word in the phrase "Could you please provide ... so that any interested observer can objectively measure it?" That way I might be able to understand your question. Because right now I think it's irrelevant. Thanks. tgpeeler
Mathgrrl, as kairos pointed out, you have, by your own intelligence, generated far more information on this blog than can reasonably be expected from the material processes of the entire universe over the entire history of the universe. You may say, "well, given enough time evolution can reach unmatched levels of functional information/complexity in a small step by small step fashion". Yet the small steps that you are trying to traverse, in your evolutionary scenario, are shown to be anything but 'small'; Evolution vs. Functional Proteins - Doug Axe - Video http://www.metacafe.com/watch/4018222 Estimating the prevalence of protein sequences adopting functional enzyme folds: Doug Axe: Excerpt: Starting with a weakly functional sequence carrying this signature, clusters of ten side-chains within the fold are replaced randomly, within the boundaries of the signature, and tested for function. The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences. http://www.ncbi.nlm.nih.gov/pubmed/15321723 Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. 
Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail. 
http://www.fourmilab.ch/documents/reading_list/indices/book_726.html If this isn't enough to make you doubt the sufficiency of the power of "almighty" evolution to produce life on earth, Mathgrrl, the fact is that genetic reductionism is not even true in the first place, i.e. mutations to DNA do not solely determine body-plan morphogenesis: Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681 The Origin of Biological Information and the Higher Taxonomic Categories - Stephen Meyer: "Neo-Darwinism seeks to explain the origin of new information, form, and structure as a result of selection acting on randomly arising variation at a very low level within the biological hierarchy, mainly, within the genetic text. Yet the major morphological innovations depend on a specificity of arrangement at a much higher level of the organizational hierarchy, a level that DNA alone does not determine. Yet if DNA is not wholly responsible for body plan morphogenesis, then DNA sequences can mutate indefinitely, without regard to realistic probabilistic limits, and still not produce a new body plan. Thus, the mechanism of natural selection acting on random mutations in DNA cannot in principle generate novel body plans, including those that first arose in the Cambrian explosion." http://eyedesignbook.com/ch6/eyech6-append-d.html bornagain77
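As a back-of-the-envelope check, the round numbers in the review quoted above (a roughly one-megabit genome, about 500 bits reachable by random search) can be reproduced in a few lines of Python. This is only a sketch of the review's own arithmetic; the 10^80 particles, one-trial-per-Planck-time rate, and ~4 x 10^17 second cosmic age are the review's generous round assumptions, not precise physics:

```python
import math

# Genome size cited above for the simplest known free-living organism.
base_pairs = 582_970
genome_bits = base_pairs * 2       # two bits per nucleotide (four possibilities)
print(genome_bits)                 # 1165940 -- "about one megabit"

# Generous upper bound on random-search trials, per the review's assumptions:
# ~10^80 particles, each sampling one configuration per Planck time
# (~10^43 per second), over ~4 x 10^17 seconds of cosmic history.
trials = 1e80 * 1e43 * 4e17        # ~4 x 10^140 configurations sampled
searchable_bits = math.log2(trials)
print(round(searchable_bits))      # 467 -- the review's "about 500 bits"
```

The exact bit count depends on which round numbers one plugs in, which is presumably why the review hedges with "about 500 bits"; the point survives any reasonable choice, since the gap between ~500 bits and ~10^6 bits is astronomical on a log scale.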
PS: oops, here kairosfocus
MG: When you wrote the above post you supplied a sample of digitally coded, linguistically functional, complex information. That is an instance in point. 600 128-state ASCII characters worth, or 128^600 possible configs, about 2.118*10^1264. That is vastly more states than the number of Planck-time states scannable by the roughly 10^80 atoms of our observed cosmos across its working life. In other words, if the universe were converted into impossibly fast monkeys, keyboards and banana plantations to support them, it could not produce the text once in the thermodynamically reasonable lifespan of the cosmos. But you tossed it off -- by using intelligently directed contingency -- in a few minutes. The specific function was recognisable and could be assigned a dummy variable of binary state. The complexity threshold, 1,000 bits of storage capacity, is easier to assign a similar dummy variable. Once complex and specific function is identified, we multiply the two dummies by the bit storage and we have a simple metric. A similar metric would obtain for, say, the discs of data I just used to continue reloading this PC after a config sys wipeout. But the function there would be algorithmic. A similar metric would easily extend to integrated network systems, as is discussed here. Beyond that simple metric, you can look at various metrics that have been developed in recent years. But functional bits is as familiar as the PC industry and the internet. As to the definitionitis game, we simply note that life itself -- a condition of our doing physics or information theory or whatever, and the subject of study of biology -- has no clean-cut definition. I suspect you remember the way basic physics definitions run in circles after a certain point [I recall my exchange with my 4th form physics teacher on that], until you resolve the circle by pointing to specific cases, describing them, and then saying: if it is sufficiently close to that, we accept it. 
Such ostensive definition by example and family resemblance is the basis of practical definitions. Your last post is of course a case in point. Having put this tangential issue -- if these keep up they become a red herring rhetorical tactic -- to bed, we need to return to the main issue for the thread: media bias that is revealing of ideological commitments and issues. Issues that invite us to look into the Malthusian roots of Darwinism, and to examine why it would be helpful to deal with those issues so we can get on with real solutions -- why not look at 27 above -- instead of playing somebody's create-a-perceived-crisis-to-get-an-agenda-mainstreamed game. G'day GEM of TKI kairosfocus
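For what it's worth, the config-space figure in the comment above can be reproduced exactly with Python's arbitrary-precision integers; this minimal sketch simply evaluates 128^600 (the 600-character length and 128-symbol ASCII alphabet are the numbers used above):

```python
# All possible 600-character strings over a 128-symbol (7-bit ASCII) alphabet.
configs = 128 ** 600      # exact: Python integers are arbitrary precision

digits = str(configs)
print(len(digits))        # 1265 digits, i.e. on the order of 10^1264
print(digits[:4])         # 2118 -- matching the 2.118 * 10^1264 figure above
```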
Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160 Functional information and the emergence of bio-complexity: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions. http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Hazen_etal_PNAS_2007.pdf Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. 
To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families. ... http://www.tbiomed.com/content/4/1/47 Here is the fitness test that you (or any evolutionist) must pass to concretely ascertain that the new functionality (complexity/information), which evolved, by supposedly purely material processes, was in fact not a beneficial adaptation that was derived from preexisting functional information that was already inherent in the genome; i.e., you must prove that the new functionality of a beneficial adaptation did in fact violate the principle of genetic entropy. For a broad outline of the 'Fitness test', required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles: Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248 Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008 http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html bornagain77
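The Hazen et al. definition quoted above, I(Ex) = -log2[F(Ex)], is simple enough to illustrate on a toy letter-sequence system of the kind their abstract mentions. The following sketch uses an invented example (the "degree of function" being the number of positions agreeing with a 3-letter target word over a made-up 4-letter alphabet), and exhaustively enumerates all configurations rather than estimating F(Ex) as one would have to for real biopolymers:

```python
import math
from itertools import product

def functional_information(score, configs, threshold):
    """I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible
    configurations whose degree of function meets or exceeds Ex."""
    meeting = sum(1 for c in configs if score(c) >= threshold)
    if meeting == 0:
        return math.inf            # no configuration achieves the function
    return -math.log2(meeting / len(configs))

# Toy system: every 3-letter string over a 4-letter alphabet (64 configs);
# degree of function = number of positions matching the target "cat".
alphabet = "acts"
target = "cat"
configs = ["".join(p) for p in product(alphabet, repeat=3)]
score = lambda s: sum(a == b for a, b in zip(s, target))

print(functional_information(score, configs, 3))  # 6.0 bits: only 1 of 64 strings
print(functional_information(score, configs, 1))  # ~0.79 bits: 37 of 64 strings
```

Note how rarer functions carry more functional information: matching all three positions is a 6-bit specification, while matching at least one position is nearly free. That dependence of I(Ex) on the degree of function is the qualitative point behind the "steps" in Hazen et al.'s information-versus-degree-of-function plots.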
sorry, I meant to say "mathematical definition of 'mathematical definition.'" :) CannuckianYankee
mathgirl, Could you please provide a mathematical definition of "definition," so that any interested observer can objectively measure it, and thus know exactly what you're referring to? CannuckianYankee
tgpeeler,
BA @ 4 “The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!” This is true and I’ll take a second to tell Mathgrrl why it will always be true.
Thanks for replying. Before going further, though, in my experience it is essential to understand exactly what we mean by our terms when discussing information theory. Could you please provide a mathematical definition of "functional information" so that any interested observer can objectively measure it? With a definition in hand, we can look at your claim in detail. MathGrrl
Upright Biped, Perhaps I'm missing something, but I don't see how your quotations in 26 make your original statement:
What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?
any more clear. Could you please explain what "symbol system" you mean in more detail? Thanks. MathGrrl
JB: Right as rain. The Other Uses section in that Wiki article linked by MG begins . . . .
The term "Darwinism" is often used in the United States by promoters of creationism, notably by leading members of the intelligent design movement, as an epithet to attack evolution as though it were an ideology (an "ism") of philosophical naturalism, or atheism
. . . and goes downhill from there. It is a slanderous, truth-evasive hit piece -- the very Creationists themselves mark a significant difference with design thought [cf the Weak Argument Correctives MG] -- not a serious objective discussion; as was noted in 24. On why "darwinism" or "the neo-darwinian/modern synthesis" are legitimate and apt terms, cf. 24:
The Neo-Darwinian synthesis, often shortened to “Darwinism,” as of last count, was still the predominant school in evolutionary biology, and as a core level paradigm it embeds a great many worldview level elements. . . .
For those who do not realise the predominant worldviews context and ideological agenda for Darwinian evolutionism and broader origins science schools of thought (what Wiki was trying desperately to obscure), it is worth excerpting Lewontin's 1997 NYRB article yet again: _____________________ >> . . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth . . . . To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis added. (NB: The key part of this quote comes after some fairly unfortunate remarks where Mr Lewontin gives the "typical" example -- yes, we can spot a subtext -- of an ill-informed woman who dismissed the Moon landings on the grounds that she could not pick up Dallas on her TV, much less the Moon. This is little more than a subtle appeal to the ill-tempered sneer at those who dissent from the evolutionary materialist "consensus," that they are ignorant, stupid, insane or wicked. 
Sadly, discreet forbearance on such is no longer an option: it has to be mentioned, as some seem to believe that such a disreputable "context" justifies the assertions and attitudes above!)] >> ______________________ Unfortunately, there is abundant evidence that this summary is precisely correct; it is not just an idiosyncrasy. That is why Johnson -- a design thinker as opposed to a creationist -- was quite correct to rebut thusly:
For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
GEM of TKI PS: More details here. kairosfocus
MathGrrl - Would you feel better about things if we used the term "the modern synthesis" instead of Darwinism? They mean the exact same thing. The difference is that people know what you are talking about if you say "Darwinism" and they don't know what you're talking about if you say "modern synthesis". So it is pretty obvious which term is preferable in public conversation. The only people who object are those trying to obscure the debate. We are being quite precise when we say that our objection is with Darwinism, as many in the ID movement are neo-Lamarckians. The only thing contained in that wikipedia article was innuendo - there is no difference in meaning for the term as used by IDers or creationists. johnnyb
Cabal: Sorry, but in light of what I can see in the Lee manifesto [cf. 18 supra] vs what I am hearing from your remarks (and those of others of like ilk), I am led to infer that you are indulging the rhetoric of distancing of the inconvenient. First of all, kindly observe the actual key protest in the original post:
when a gunman inspired by Darwinism takes hostages at the offices of the Discovery Channel, reporters seem curiously uninterested in fully disclosing the criminal’s own self-described motivations. Most of yesterday’s media reports about hostage-taker James Lee dutifully reported Lee’s eco-extremism and his pathological hatred for humanity. But they also suppressed any mention of Lee’s explicit appeals to Darwin and Malthus as the intellectual foundations for his views . . .
That is, the objection is that there is a sacred cow, where darwinist and associated ideas are effectively immunised from critical examination when it comes to adverse societal implications and consequences. By contrast, had the crazed gunman been associated with the Christian faith or other ideas that are similarly out of favour with the media elites, that context would indubitably have been trumpeted. (Indeed, the recent cabbie stabbing incident in NY shows how such will be suggested, even where the evidence points in other directions.) In short, fundamentally, this is a protest about media [and power elite] bias and agendas. Now, Mr Lee's context was plainly shaped by a neo-Malthusian apocalypticism that is a commonplace of environmental radicalism, which it so happens is also rooted in the pool of ideas that shaped the darwinian frame of thought, right from the outset. In the introduction to Origin [1872 Edn], we may simply and directly read:
. . . In the next chapter the Struggle for Existence amongst all organic beings throughout the world, which inevitably follows from the high geometrical ratio of their increase, will be considered. This is the doctrine of Malthus, applied to the whole animal and vegetable kingdoms. As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form. This fundamental subject of Natural Selection will be treated at some length in the fourth chapter; and we shall then see how Natural Selection almost inevitably causes much Extinction of the less improved forms of life, and leads to what I have called Divergence of Character . . .
Now, today, we will see Natural selection typically defined thusly:
Natural selection is the process by which traits become more or less common in a population due to consistent effects upon the survival or reproduction of their bearers. It is a key mechanism of evolution. The natural genetic variation within a population of organisms may cause some individuals to survive and reproduce more successfully than others in their current environment . . . [Wiki]
In short, while duly noting the more developed ideas on engines of variation that Darwin did not know about, we can plainly see the continuity in the ideas. The intellectual stronghold of Darwinism is therefore a key underpinning to the plausibility of today's neo-Malthusianism. So, it is utterly unsurprising to see Mr Lee demanding . . .
“forums of leading scientists who understand and agree with the Malthus-Darwin science and the problem of human overpopulation . . . ”
. . . in his opening, theme-setting remarks, as I pointed out in 18 above. In short, Lee did accurately transmit the ideas he was taught and became obsessed over, but failed to govern himself morally in his means of advocacy, resorting to violence. His derangement then led him to imagine that his favourite TV channel could -- by promoting the views he espoused -- transform the course of policy and behaviour of the public. His final act of derangement was in how he acted out: grab hostages and try to get things his way at gunpoint. In turn, this brings to the surface something that is a key point of contrast between the Judaeo-Christian, theistic worldview and the radically secularist evolutionary materialism that would replace it: core ethical principles. Moshe and Jesus jointly teach us that we should govern ourselves by neighbour love rooted in the fact that we are equally created in God's image and have a dignity such that to abuse a fellow human being is to offend the One who made us all. The often "misunderestimated" Paul therefore summarised the ethical import of this view thusly, in Rom 13:8b - 10:
Rom 13:8 . . . he who loves his fellowman has fulfilled the law. 9The commandments, "Do not commit adultery," "Do not murder," "Do not steal," "Do not covet,"[a] and whatever other commandment there may be, are summed up in this one rule: "Love your neighbor as yourself."[b] 10Love does no harm to its neighbor. Therefore love is the fulfillment of the law.
So, while indeed the deranged and the just plain ordinary fallible, fallen, and ill-willed do defy such principles, the principles are there as major resources that are often underscored and serve as key societal restraints. Consequently, they have played a pivotal role in the history of the rise of liberty, as say the US Declaration of Independence shows in its crucial 2nd paragraph, that begins with the affirmation of Creation-rooted equality as the ground for unalienable rights to be secured by just government.

At worldview level, since the Judaeo-Christian tradition is rooted in the inherently good Creator-God, it has a foundational is that grounds ought. It also calls on us to turn to the transforming power of that Transcendent, to find the way towards walking in the light, not living in the dark. We have significant freedom, meaning and hope, anchored in him who rose from Death, with 500 witnesses.

By contrast, when we consult Cornell professor William Provine in his well-known 1998 Darwin Day address at U of Tennessee (a highly significant venue, given what happened in that state in the 1920's), we hear:
Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . . The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them . . . . How can we have meaning in life? When we die we are really dead; nothing of us survives. Natural selection is a process leading every species almost certainly to extinction . . . [Evolution: Free Will and Punishment and Meaning in Life, Second Annual Darwin Day Celebration Keynote Address, University of Tennessee, Knoxville, February 12, 1998 (abstract).]
In short, Provine here brings out the inherent amorality, determinism and despair of evolutionary materialism with great force. He tries to put a brave face on it, arguing that even though we have no meaning, we can in effect invent a meaning for ourselves. Thus, he inadvertently reveals that he cannot live with meaninglessness and malthusian doom [to be culminated in the immolation of the solar system as our sun ages]; a hint that we are more, much more than he can admit on his worldview. Is it any wonder that such hopeless despair and doom can lead an unbalanced person to foolishly desperate action? And, it is therefore no wonder that the wedge of critical analysis and truth would be driven in between the tentacles of such despair and our civilisation, in a rescue attempt. For:
1 --> The underlying Malthusian vision ignores the power of creative intelligence to transform the resource base of our world.
2 --> So, instead of the inevitably doomed contention between geometrically increasing populations and arithmetically increasing productivity with static technology, since the 1800's we have dramatically increased life expectancy and standards of living while the population has soared [in large part due to improved public health, so that children survive to reach reproductive age].
3 --> Today, the ordinary person in many countries lives at a level undreamed of by kings and the richest merchants of but a few centuries past.
4 --> This, because of the power of scientifically based technological development.
5 --> We should celebrate that, instead of so over-emphasising the challenge posed by problems as to create a crisis mentality; the precise desperation-driven mindset that led Mr Lee to snap.
6 --> Yes, there are stresses on resources, and there are serious questions on environmental degradation, pollution etc [exaggerations and over-reading of dubious computer models and timeline massaging by questionable statistical processing notwithstanding], but we have long known where solutions will come from: progress in understanding our world and progress in applying that understanding to create new opportunities.
7 --> So, we should cultivate a mindset of hope, mutual respect and restraint, dedication to sustainable progress, and mutual support, not a crisis mentality that only serves to empower those with agendas that they want to "mainstream," too often without any truly balanced assessment.
8 --> And, we should be sufficiently mature that we can fairly and squarely address the issues of evolutionary materialism for society, without needing to suppress the ways in which malthusian-Darwinian thought has been used to push agendas that have turned out to be ill-informed or abusive or both.
___________________
I trust the suggestions in 27 above will help us move in that direction. GEM of TKI kairosfocus
Clive Hayden:
The media … don’t report that it was a Darwinist that did so and so motivated by his/her belief. Nothing intellectually dishonest here.
Really? IMHO it is even worse than that: not just the usual slanted media reporting, but from the Parnassos of Intelligent Design itself. Maybe not intellectual dishonesty, just the common denigration of anything that may reflect unfavourably on evolution. From scordova's OP:
John West of the Discovery Institute Reports: But when a gunman inspired by Darwinism takes hostages at the offices of the Discovery Channel...
scordova continues further down in the thread with:
Evolutionary theory is not objective and verifiable. It is speculative and frequently refuted and without scientific merit.
By replacing “Evolutionary theory” with “Intelligent Design”, his statement becomes more relevant with respect to reality. In my opinion, as far as the level of discourse in this thread goes, if that is the best ID proponents have to offer, I don’t see any future for ID. I don’t see any value of ID in the present either. Except, of course, for its designed purpose of being a wedge. Cabal
MathGrrl: You're new. Just a few things.

First, from your screen name, and your reference to physics, I suspect that your field is in astrophysics, or something similar. This is pertinent since even biologists presume that Darwinism is firmly grounded, and, if your area is outside of biology, you might be even more sure that a solid evidentiary foundation exists. The fact is, this foundation does not exist. That is the basis of the ID challenge.

Second, over the years here at UD, Darwinists have objected to our use of terms as innocuous as the Modern Synthesis, neo-Darwinism, and Darwinism. So, it really doesn't matter what term we use, they'll object---and fail to provide a term that can be used. Using Darwinism is a useful shorthand for this blog. Generally it refers to the historical line connecting Darwin's speculations, the neo-Darwinian adaptation of Mendelian genetics, and, lastly, the Modern Synthesis.

Third, Darwinism does contain within itself the seeds of social applications. It has, at times, spilled over into mainstream culture with identifiable by-products; namely, eugenics, German/Hitlerian anti-Semitism, and, of course, brands of atheism/materialism whose leading icon is Richard Dawkins. This particular thread deals with this latter understanding. So you, and all of us, have to distinguish which meaning we're referring to when we use the term Darwinism. (Generally the context makes clear the particular use we're applying.)

If you want to provide us with the abundance of evidence that supports Darwin's musings, then please feel free to do so; and, likewise, I'll feel free to respond. PaV
SC: Good to see you bring our attention back to the Klinghoffer article. Any takers? G kairosfocus
Breaking News: David Klinghoffer adds more at EvolutionNews. See: More on the Darwin (and Obama) Angles in the Discovery Channel Hostage Episode scordova
Talk about Evolution. Talk about Malthus and Darwin until it sinks into the stupid people's brains until they get it!! Darwinist Gunman Lee
That could be the slogan for PandasThumb, the NCSE, Nick Matzke, TalkOrigin.org, and Wes Elsberry's ATBC! scordova
Thanks tgpeeler that was directly to the point,, I think Perry Marshall is also fairly to the point about the 'monstrous ravine' that runs between material processes and functional information:

The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video
http://www.metacafe.com/watch/4060532

further notes:

"A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter." Werner Gitt 1997, In The Beginning Was Information, pp. 64-67, 79, 107. (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), and Head of the Department of Information Technology.)

"Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible." Donald E. Johnson - Bioinformatics: The Information in Life

Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, The Cell's Design, 2008, page 177)

Ode to the Code - Brian Hayes
Excerpt: The few variant codes known in protozoa and organelles are thought to be offshoots of the standard code, but there is no evidence that the changes to the codon table offer any adaptive advantage. In fact, Freeland, Knight, Landweber and Hurst found that the variants are inferior or at best equal to the standard code. It seems hard to account for these facts without retreating at least part of the way back to the frozen-accident theory, conceding that the code was subject to change only in a former age of miracles, which we'll never see again in the modern world.
https://www.americanscientist.org/issues/pub/ode-to-the-code/4

Deciphering Design in the Genetic Code
Excerpt: When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code's capacity occurred outside the distribution. Researchers estimate the existence of 10 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This finding means that of the 10 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally in nature.
http://www.reasons.org/biology/biochemical-design/fyi-id-dna-deciphering-design-genetic-code

DNA - The Genetic Code - Optimal Error Minimization & Parallel Codes - Dr. Fazale Rana - video
http://www.metacafe.com/watch/4491422

Nick Lane Takes on the Origin of Life and DNA - Jonathan McLatchie - July 2010
Excerpt: It appears then, that the genetic code has been put together in view of minimizing not just the occurrence of amino acid substitution mutations, but also the detrimental effects that would result when amino acid substitution mutations do occur.
http://www.evolutionnews.org/2010/07/nick_lane_and_the_ten_great_in036101.html

etc.. etc.. etc.. bornagain77
BA @ 4: "The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!"

This is true, and I'll take a second to tell Mathgrrl why it will always be true. Mathgrrl, to keep this really simple, it goes like this. In order to create information, there are at least three necessary conditions that must exist. There must be:
1) a symbol set (an alphabet, say),
2) a set of rules for arranging those symbols in certain ways so as to encode meaning (vocabulary and grammar), and
3) a means of freely manipulating those symbols according to the rules, to generate information (a mind).

If you are a naturalist/materialist/physicalist then you realize that your explanations for everything start and end with the laws of physics. That's what naturalism means, after all. See the "causal closure of nature." But the laws of physics fail to account for all three of these necessary conditions, so the problem is simply beyond the explanatory powers of naturalism.

Physics can never speak to the set of symbols: why certain letters, pictograms, scents, chemicals, dots (braille), and so on mean what they do. Physics can never speak to the rules for the arrangement of those symbols: why do the words "dog" and "Hund" refer to the same kind of mammal? Physics has nothing to say about this, nor will it ever. Physics also cannot explain the "free" will required to select the symbols and arrange them according to the rules, for if the symbols were selected on the basis of some law-based algorithm then one would be presented with either a string of AAAAAAAAAAAAAAAAA (for example) or gibberish.

So this is the problem in a nutshell. Naturalists, although they claim to stand on the intellectual high ground, actually reject reason, and for this reason alone can safely be ignored if the conversation is about anything that matters to human beings. It's quite funny in one way and very sad in another. tgpeeler
F/N: Instead of a sterile debate in a poisonous rhetorical atmosphere, let us instead discuss possibilities for a positive future that uses the human capacity to intelligently analyse the possibilities of the forces and materials of the world, to create opportunities for a future that gets us out of the neo-Malthusian trap. As sparkers for thought:
1 --> Energy is the key resource for everything else. So, long term, fusion; shorter term, I think new wave fission such as developments of pebble bed technology offers us a way forward.
2 --> Information technologies, though they are rooted in some of the dirtiest industries of all [look up what happens with Si chip fabrication . . . ], are a key intellectual power multiplier, so this technology should be given a priority, on both hard and soft sides.
3 --> The modularity principle would allow things to be localised, reducing the need for massive conurbations, which seem to have largely become ungovernable. Technologies should be as modular as possible, and as networked as possible to take advantage of network economics.
4 --> Timber is the major construction resource, so we should look to sustainable timbers, especially the potential of processed bamboos based on species such as Guadua angustifolia [100', 5 - 7 y, higher growth density than pine forests]. Bamboo and the like can also make paper.
5 --> A lot of construction of relatively light structures can move to technologies such as bamboo bahareque, a modern version of wattle-daub.
6 --> The automotive industry needs to go fuel cell long run; shorter run, I like things like algae oil [couple coal plants to feed bio oils grown by algae, cutting emissions 50%], and I think if we can do butanol in a fermenter cost effectively, we are looking at 1:1 for gasoline for Otto cycle engines.
7 --> That brings up biotech. We need a big thrust to get cells to manufacture as much of our chemistry as we can, industrial and pharmaceutical. Bugs will do it for us, on the cheap, once they are reprogrammed. (Remember, they are existing Von Neumann replicator technologies.)
8 --> Wind and solar will probably remain fringe but useful technologies. With one major exception: we need to look back to sailing schooners as small regional carriers in a post-oil world.
9 --> Rail is the most inherently efficient bulk-mover land transportation system, so we need to look at how it could be regenerated -- subsidies, overt and hidden, killed rail.
10 --> We need to look to aquaculture and high tech agriculture to feed ourselves.
11 --> We need to break out of Terra, using our moon as staging base -- 1/6 gravity makes everything so much easier to do -- with Mars as the first main target. Beyond Mars, the Asteroid belt.
12 --> Transform these and we are looking at real estate to absorb a huge onward population.
13 --> As a long-shot, high-risk, high-payoff investment, physics needs to look at something that can get us an interstellar drive and transportation system.
14 --> So, investment in high energy accelerators and related physics and astronomy should be seen as a long term investment of high risk but potentially galaxy-scale [or bigger?] payoff.
15 --> Settling the Solar system takes the potential human population to the dozens of billions.
16 --> If we can break out and find terraformable star systems beyond, the sky is literally the limit, even if we are restricted to a habitable zone band in our galaxy. (For, we are dealing with potentially millions of star systems.)
_____________________
Now, wouldn't it have made a big difference if we had been discussing these sorts of possibilities instead of the eco-collapse, climate collapse and over-population themes that serve little purpose but to drive people to desperation -- and into the arms of those who offer convenient "solutions"? GEM of TKI kairosfocus
math,

The entire body is made up of context-specific reactions and interplay between chemical constituents which have nothing whatsoever to do with each other outside of the context of the system they are coordinated within. cAMP has absolutely nothing to do with glucose. Cytosine-Thymine-Adenine is a chemical symbol mapped to Leucine based upon an arbitrary rule.

"Two aspects of this particular genomic computation deserve special mention. The first aspect is that the computation involves many molecules and compartments of the cell, not just DNA and DNA binding proteins. For example, the membrane transport proteins LacY and IIAGlc are essential. The second noteworthy aspect is that the computation involves the use of chemical symbolism as information is transmitted. Thus, the presence of allolactose inducer represents the availability of lactose and the ability of the cell to synthesize functional LacY and LacZ. Similarly, the concentrations of IIAGlc-P and cAMP represent the availability of glucose to the cell. Both whole cell involvement and transient chemical symbols are typical of cellular computation and signal transduction in general." - James Shapiro, University of Chicago Biology Dept

"In its information processing, the cell makes use of transient information about ambient conditions and internal operations. This information is carried by environmental constituents and signals received from other cells and organisms. The cell’s receptors and signal transduction networks transform this transient information into various chemical forms (second messengers, modified proteins, lipids, polysaccharides and nucleic acids) that feed into the operation of cell proliferation, checkpoints, and cellular or multicellular developmental programs. These chemical forms act as symbols that allow the cell to form a virtual representation of its functional status and its surroundings. My argument here is that any successful 21st century description of biological functions will include control models that incorporate cellular decisions based on symbolic representations." - James Shapiro, University of Chicago Biology Dept

Off for the day... later, all. Upright BiPed
PS: I forgot: a return to a hunter-gatherer [and, I assume, subsistence farming] culture would probably collapse the global population to the order of hundreds of millions. Consider the moral issues connected to that. kairosfocus
MG: Unfortunately, Wikipedia on topics like this is a particularly unreliable and one-sided source. As are too many others.

The Neo-Darwinian synthesis, often shortened to "Darwinism," as of last count, was still the predominant school in evolutionary biology, and as a core level paradigm it embeds a great many worldview level elements. In particular, in the dominant forms, it is strongly associated with evolutionary materialism [a descriptive term for a worldview and associated knowledge claims dating back to before Plato, but given prominence again over the past 150 years, under a scientific rubric], and also with both the metaphysically a priori materialism a la Lewontin et al, and the methodological "soft form" often used in debates when objections are made.

It is that cluster of associations that makes the term controversial, but the term plainly has merit; it is not essentially corrupted into the sort of smear-by-label that terms like "fundamentalism" and "[Biblical] Creationist" have largely become. And this last, at the hands of secularists and their publicists. So, the term "Darwinist" is not inherently abusive.

It so happens that evolutionary materialism -- the a priori that drives much current hydrogen-to-humans evolutionary thought, including much of the claimed account for body plan level biodiversity by and large on chance variation and natural selection that is the specific concern of NDT and its minor competitors -- is inherently amoral, having no bridge to cross the gap between is and ought. That makes it persistently controversial, and the ways in which it and its linear ancestors historically have come to serve as ideological rationales or roots for associated movements that have been harmful raise serious questions that need to be faced, not ducked or dismissed. I suggest you look at the Weak Argument Correctives on this and related terms.
Then, I suggest the thread may profit from your more specific examination of the thought of the more serious people Mr Lee was reading, like Daniel Quinn in My Ishmael. As a Kirkus review summarises:
Another irresistible rant from Quinn, a sequel to his Turner Tomorrow Fellowship winner, Ishmael (1992), concerning a great, telepathic ape who dispenses ecological wisdom about the possible doom of humankind. Once more, Quinn focuses on the Leavers and Takers, his terms for the two basic, warring kinds of human sensibility. The planet's original inhabitants, the Leavers, were nomadic people who did no harm to the earth. The Takers, who have generally overwhelmed them, began as aggressive farmers obsessed with growth, were the builders of cities and empires, and have now, in the late 20th century, largely run out of space to monopolize. Quinn's books have not featured many memorable characters, aside from Ishmael. This time out, though, he invents a lively figure, 12-year-old Julie Gerchak, who is tough and wise beyond her years, having had to deal with a self-destructive, alcoholic mother. Julie responds to Ishmael's ad seeking a pupil with an earnest desire to save the world (a conceit carried over from the earlier novel). Once again, the gentle ape shares his wisdom in a series of questions and answers that resemble, in method, a blend of the Socratic dialogues and programmed learning. Moving beyond his theories about Leavers and Takers, Ishmael presents a detailed critique of educational systems around the world, suggesting that their function is not to usefully educate but to regulate the flow of workers into a Taker society. This is all very well, but what does Ishmael/Quinn suggest be done to redeem the Takers, and to save the earth? Quinn seems to want to sketch out how change might come about, but it's never fully explored. Instead, the novel is increasingly taken up with the mysteries surrounding Ishmael's travels and fate. This is the weakest of Quinn's novels, but his ideas are as thought-provoking as ever, even so.
There is now a trend in which many major issues are being fought out at popular level through slanted movies, popular books/novels [Dan Brown . . . ], and "news," rather than being seriously grappled with on the merits. Ideas have consequences, especially when they become the dominant forces of key institutions in a culture. So, we would be a lot more comfortable if there were more frequent signs in these pages of a serious grappling with the moral-cultural issues connected to the presence of Darwinist thought in our civilisation over these past 150 years. GEM of TKI kairosfocus
johnnyb,
Darwinism is a well-established term in molecular biology, with a well-defined meaning. Why do you have trouble with the term?
Wikipedia summarizes the problems with the term here: http://en.wikipedia.org/wiki/Darwinism in the section "Other uses". MathGrrl
Upright Biped,
What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?
This sounds like you might have an interesting question, but I'm having difficulty understanding exactly what problem you are articulating. Could you expand upon it a bit, please? MathGrrl
Pardon, Klinghoffer. kairosfocus
PPS: Kinkghoffer has a more elaborate analysis that makes the same basic point [though I am not interested in the Obama connexion he also makes]. kairosfocus
PS: The insane are often extremely logical. It is their premises and perceptions that cause them to lose contact with reality and to forfeit common-sense restraints. kairosfocus
F/N: The efforts at distancing, sadly, are more revealing than the actual case of a man so ideologised he lost moral balance. A glance at the manifesto shows:
1: eco-extremism -- "save the planet"
2: Point no 1, setting his theme, proclaims inter alia a call for "forums of leading scientists who understand and agree with the Malthus-Darwin science and the problem of human overpopulation."
3: Thus, he correctly understands that Darwin's theory is rooted in a particular view of Malthus' views on population and resource crowding out; multiplied by survival of the fittest and unlimited variation leading to preservation of favoured races. (The connexions to eco-extremism from Darwinism are obvious in that light.)
4: The difference of Lee's eco-extremism from, say, Galtonian eugenicism or Social Darwinism or aryan racial superiority nazism, is that in Mr Lee's view the whole human race is unfavoured and should be eliminated or at least drastically curtailed from being a pollutant/filth: "programs on Discovery Health-TLC must stop encouraging the birth of any more parasitic human infants and the false heroics behind those actions . . . All former pro-birth programs must now push in the direction of stopping human birth, not encouraging it"
5: His ire at civilisation -- and by fairly direct implication at "greed" as a code-word [yup, even fundy yahoos can read subtexts too] for Western, Capitalist socio-economic systems -- is particularly revealing, given the connexion between morality and worldview roots: "Civilization must be exposed for the filth it is. That, and all its disgusting religious-cultural roots and greed."
6: It turns out that his anchor baby objection is a spin on the population-increase-as-pollutant/filth theme that drives so much of the rant: "Find solutions FOR these countries so they stop sending their breeding populations to the US and the world to seek jobs and therefore breed more unwanted pollution babies. FIND SOLUTIONS FOR THEM TO STOP THEIR HUMAN GROWTH AND THE EXPORTATION OF THAT DISGUSTING FILTH!"
7: He brings his eco-extremism to the focus, demanding -- he has absorbed the "solutions" marketing buzz word: "solutions for Global Warming, Automotive pollution, International Trade, factory pollution, and the whole blasted human economy. Find ways so that people don't build more housing pollution which destroys the environment to make way for more human filth! Find solutions so that people stop breeding as well as stopping using Oil in order to REVERSE Global warming and the destruction of the planet!"
8: He then returns to the Malthus-Darwin connexion: "Develop shows that mention the Malthusian sciences about how food production leads to the overpopulation of the Human race. Talk about Evolution. Talk about Malthus and Darwin until it sinks into the stupid people's brains until they get it!!"
9: He calls for the end of humanity: "Saving the Planet means saving what's left of the non-human Wildlife by decreasing the Human population. That means stopping the human race from breeding any more disgusting human babies!"
10: His misanthropy then culminates: "Humans are the most destructive, filthy, pollutive creatures around and are wrecking what's left of the planet with their false morals and breeding culture . . . . Children represent FUTURE catastrophic pollution whereas their parents are current pollution. NO MORE BABIES!"
______________________
In short, it is pretty clear that the popularisation of evolutionary materialistic thought, in a context of radical environmentalism -- cf Pianka et al on this -- and amorality, triggered this unstable man into an act of madness that cost him his life and could have cost much more. We need to think very soberly on the implications of commonly promoted origins-science ideas for society. And, despite mikev6's attempts to deflect the focus, the issue of media bias in this case will not go away.

Mr Lee was deranged, but -- as Keynes warned ever so long ago now -- the madman distilling fantasies out of the trends of the times [Hitler was another] is the canary in the mine, warning us of the poisonous currents in the air, including those tracing to evolutionary materialism and its fellow-traveller speculations. So, the sharp contrast between ever-so-sharply headlined right-wing fundy capitalistic crazies and silence on the motivations of malthusian darwinist eco-socialist global radicalist ones is telling. GEM of TKI kairosfocus
James Lee was a nut, so it's hard to decipher his thinking processes. However, looking at his manifesto, the only mentions of Darwin or evolution are:
...agree with the Malthus-Darwin science and the problem of human overpopulation.
and
Develop shows that mention the Malthusian sciences about how food production leads to the overpopulation of the Human race. Talk about Evolution. Talk about Malthus and Darwin until it sinks into the stupid people's brains until they get it!!
It's hard to extrapolate from only two samples, but it would seem that Lee was mostly concerned with population pressures - he didn't mention Darwin or evolution in any separate context apart from Malthus or 'overpopulation'. Certainly, population pressures have a part to play in evolution, but that's a longer-term impact that doesn't seem to agree with the rest of his statements, which seem more related to what happens when a population grows beyond the available food supply. It doesn't appear that the bulk of what constitutes the ToE had much to do with his 'thinking', such as it was. He could have left out any mention of Darwin/evolution and it wouldn't have changed the overall statement of his views one bit. From what I've read in the media, they've tended to use words like "radical environmentalist", etc. That seems a little more accurate, but misses the mark too IMHO. About the only really accurate word is "deranged", period. mikev6
I was moved by this paragraph on the deranged Darwinist at Crevo: Malthusian Maniac Killed Before Killing Hostages Excerpt: One of Lee’s statements demanded saving the lions, tigers, giraffes, elephants, ants, beetles and other animals, but then he said, “The humans? The planet does not need humans.” This shows he was a nutcase, because he could not think logically about evolution. If humans arose by a Darwinian process, then they are just as much a part of nature as beetles, and whatever they do is just as amoral, meaningless and purposeless as any other part. If humans wipe out all other life, so what? Why would Lee care? His anguish is a desperate cry from his soul. Despite his love for Darwin, he could not extricate himself from the image of God imprinted in his being. http://www.creationsafaris.com/crev201009.htm#20100902a bornagain77
AMEN. The media has an opinion and agenda on how the American people should think. Therefore they would not undercut evolutionism by disclosing this as a motive. Just as they would bang a drum if it had been a creationist with a creationist agenda behind the crimes. The establishment, and so its media, is a team player for the great ideas of modern America. The list is long on how and what they mislead and withhold from the people. It is a conspiracy of an elite establishment determined to impose its will and whim on American thought and life. Why else be in journalism? Robert Byers
NZer@13, Please explain how you know what mathgrrl's "world view" is, what the world view is, and why this world view would prevent her from making an assessment that someone is mentally ill? zeroseven
MathGrrl - Darwinism is a well-established term in molecular biology, with a well-defined meaning. Why do you have trouble with the term? johnnyb
Sounds like Mathgrrl has been well programmed by the establishment. She/he is just regurgitating the same old same old liberal spiel that we hear over and over again. Yawn... Mathgrrl wrote: "...are so eager to smear evolutionary biologists by association with the mentally ill individual who took the hostages..." Well, given your worldview Mathgrrl, how do you know that the individual in question was mentally ill? Perhaps it is you that suffers so, and the now deceased is (was) the one actually living consistently with the worldview he espoused. NZer
"It is simply not possible to make a statement like that without ignoring a truly phenomenal amount of scientific research over the past century and a half." Your statement carries a lot of punch. Surely it is loaded with the requisite backup to enforce its certainty, correct? What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place? Upright BiPed
Mathgrrl you state: 'If you reject science a priori because of your religious beliefs, you should at least be up front about it.'

Glad you agree so wholeheartedly, so will you now be up front about your religion?

"Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion—a full-fledged alternative to Christianity, with meaning and morality. . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today." - Michael Ruse - Darwinian atheist and eminent philosopher of science

William Provine Lays Out The True Implications Of Evolution - video http://www.metacafe.com/watch/4109249

----

But this is all beside the point Mathgrrl. I'm still waiting for you to point me to the proof of material processes generating functional information?!? Or are you holding out because you want the million dollar prize yourself???

The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009
To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf

Can We Falsify Any Of The Following Null Hypotheses (For Information Generation)?
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
http://mdpi.com/1422-0067/10/1/247/ag

bornagain77
Those of you in this thread continuing to use the words "Darwinism" and "Darwinist" as if they had any more meaning than "Einsteinist" or "Mendeleevism" are so eager to smear evolutionary biologists by association with the mentally ill individual who took the hostages that you fail to recognize that nothing in his ranting manifesto is supported at all by modern evolutionary theory (or, indeed, even Darwin's original hypotheses). Is this really the level of discourse that you wish UD to be known for? MathGrrl
scordova,
Evolutionary theory is not objective and verifiable. It is speculative and frequently refuted and without scientific merit.
It is simply not possible to make a statement like that without ignoring a truly phenomenal amount of scientific research over the past century and a half. If you reject science a priori because of your religious beliefs, you should at least be up front about it. MathGrrl
Also, Lee rants against immigrants and the "anchor baby filth" that they bring. But I only noticed a passing mention of immigration at one of the articles cited (the one at CNN), and none gave the "anchor baby" quote. Besides that, even Fox News didn't seem to think Lee's affinity for Darwin was worth mentioning. AMW
Maybe they don't talk about it because the word "Darwin" shows up twice (and "evolution" once) in an 1,100+ word rant. And in no instance does he bother to explain why Darwin or evolution would imply that humanity needs to be wiped out. AMW
MathGrrl - I think you misunderstand the point of John West's post. It isn't that Darwinism makes you do X or Y, it's that the media skewers all right wingers on behalf of any single person who misbehaves, but is completely silent when it comes to even discussing this guy's motivations. That creates an unbalanced picture, because, if you get your information from the news, you see crazy right-wingers, and crazy people whose leanings are unreported, but no crazy left-wingers. Isn't that funny? If a fundamentalist Christian had done this, what would the media reporting be about? In addition, they would be blaming Bush, Beck, Hannity, and others. But since it is a Darwinist who is doing it, the fact that he did it on the basis of Darwinism isn't even worth reporting, much less the media circus that would ensue had he been of the opposite persuasion. johnnyb
An idea is not responsible for the people who believe in it. The evidence for modern evolutionary theory (which is no more “Darwinism” than modern physics is “Einsteinism”) is objective and verifiable.
Evolutionary theory is not objective and verifiable. It is speculative, frequently refuted, and without scientific merit. See: Third Member Of National Academy of Sciences to Criticize Darwinism Also Trashes Dawkins

So the questions are:
1. Is Darwinism true?
2. Is it beneficial to society?

The answer to 1 is No. The answer to 2 is a matter of opinion, but I see little benefit to humanity in this speculative (and erroneous) idea. So I say no to 2. Darwinism is not beneficial to society. Consider what one evolutionist had to say about natural selection:
murder is the product of evolutionary forces and that the homicidal act, in evolutionary terms, conveys advantages to the killer. David Buss
scordova
Mathgirl I also notice you stated "a successful scientific theory". Maybe you are the one who can point me to the evidence of material processes generating functional information. Notes from this morning:

The fact is that there is not one single instance of purely material processes creating any functional information whatsoever!!! (and what is Darwinian evolution save for material processes with replication thrown in!?!) Would you like to be the first Darwinist in the entire world to demonstrate that purely material processes can produce any functional information whatsoever? If you do so, there is a million dollar prize waiting for you:

"The Origin-of-Life Prize" ® (hereafter called "the Prize") will be awarded for proposing a highly plausible mechanism for the spontaneous rise of genetic instructions in nature sufficient to give rise to life. http://www.us.net/life/index.htm

Perhaps you think that the origin of life is too much to ask of a Darwinist, so I will settle for you showing me just one example of enough 'trivial' functional information being generated to pass the fitness test:

Is Antibiotic Resistance evidence for evolution? – 'The Fitness Test' – video http://www.metacafe.com/watch/3995248

Testing the Biological Fitness of Antibiotic Resistant Bacteria – 2008 http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore

Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236

and this paper:

Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007
Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families. http://www.tbiomed.com/content/4/1/47

Thus zeroseven and Mathgirl, despite you saying this,

"the only thing we know of that is capable of producing the complexity we see in the eye is an evolutionary process" ... "a successful scientific theory"

with absolutely no evidence to back your claim up, the truth of the matter is that the only thing we know of that is capable of producing functional information is intelligence:

Stephen C. Meyer – The Scientific Basis For Intelligent Design – video http://www.metacafe.com/watch/4104651

etc… etc… etc…

potpourri: Ideas do indeed have consequences, far more consequences than many people realize right now. Flyleaf - Cassie http://www.youtube.com/watch?v=S5X0cWsC8rY bornagain77
Mathgirl, ideas have consequences, and if you deny that God gives you your transcendent rights of worth, then you by default grant the state you are living under the sole right to determine what you are worth:

"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles,,,"

So Mathgirl, do you think it befitting that a state that deemed you a 'worthless eater', as Nazism deemed the 'sub-races', should have the right to kill you, or would you be morally indignant that a state would choose to do such? If you were morally indignant that they would seek to kill you, what right would you have to say that your morals for worth are any more valid than the state's morals saying you are worthless, since you have denied the worth that God places on your life?

notes:

How Darwin's Theory Changed the World: Excerpt: "Only in the late nineteenth and especially the early twentieth century did significant debate erupt over issues relating to the sanctity of human life, especially infanticide, euthanasia, abortion, and suicide. It was no mere coincidence that these contentious issues emerged at the same time that Darwinism was gaining in influence. Darwinism played an important role in this debate, for it altered many people's conceptions of the importance and value of human life, as well as the significance of death" - Richard Weikart http://www.gnmagazine.org/issues/gn85/darwin-theory-changed-world.htm

From Darwin To Hitler - Richard Weikart - video http://www.youtube.com/watch?v=w_5EwYpLD6A

Hitler's Ethic: The Nazi Pursuit of Evolutionary Progress - Richard Weikart http://www.amazon.com/Hitlers-Ethic-Pursuit-Evolutionary-Progress/dp/0230618073

The Dark Legacy Of Charles Darwin - 150 Years Later - video http://www.metacafe.com/watch/4060594

bornagain77
MathGrrl, The media does report that it was a Christian that did so and so motivated by his/her belief, they don't report that it was a Darwinist that did so and so motivated by his/her belief. Nothing intellectually dishonest here. Clive Hayden
An idea is not responsible for the people who believe in it. The evidence for modern evolutionary theory (which is no more "Darwinism" than modern physics is "Einsteinism") is objective and verifiable. Do you blame Christianity for the atrocities committed by a small subset of Christians? Attempting to blame the actions of a mentally disturbed individual on a successful scientific theory is not just ridiculous and insensitive to the real issues and real person, but grossly intellectually dishonest. MathGrrl
