Uncommon Descent Serving The Intelligent Design Community

Media Mum about Deranged Darwinist Gunman


John West of the Discovery Institute reports:

But when a gunman inspired by Darwinism takes hostages at the offices of the Discovery Channel, reporters seem curiously uninterested in fully disclosing the criminal’s own self-described motivations. Most of yesterday’s media reports about hostage-taker James Lee dutifully reported Lee’s eco-extremism and his pathological hatred for humanity. But they also suppressed any mention of Lee’s explicit appeals to Darwin and Malthus as the intellectual foundations for his views. At least, I could find no references to Lee’s Darwinian motivations in the accounts I read by the New York Times, the Los Angeles Times, the Washington Post, ABC, CNN, and MSNBC.

Major Media Spike Discovery

Comments
Re:
I have not personally seen an application of Dembski’s Explanatory Filter to a real biological system. Do you have a reference?
Try explaining the origin of the digitally coded information system joined to metabolism for first life, then again for body plan level diversity. Have a look at Signature in the Cell -- on the merits, not the strawman caricatures -- for starters. And of course the issues of bias in the media, and of the significance of neo-malthusianism in the context of darwinism, still go a-begging; much less the question of a real alternative. GEM of TKI
kairosfocus
September 8, 2010, 05:47 AM PDT
But frankly, that paper is highly speculative, abstract, and does not prove anything.
Well, you're the expert on the explanatory filter. What are the odds that 470 sequences that look like frameshifts are not? Perhaps you could take one of the numerous examples from the paper and explain the author's error.
Petrushka
September 8, 2010, 05:33 AM PDT
Not that it is any surprise, but my question to MathGrrl now stands. There is not a shred of evidence that physics alone can accomplish the symbol systems necessary for evolution to even occur in the first place. This little tidbit may now go back to being ignored by materialists of every stripe.
Upright BiPed
September 8, 2010, 05:23 AM PDT
Indium: Yes to all, but only if you in some way select the mutated organism and expand it. Let's be more clear. If you obtain a 10-bit mutation of some kind in a time t in a population of, say, 10^9 bacteria, then if you want to have the same probability of obtaining the next 10-bit mutation in the same time t at the second round, you have to select the single mutated individual and expand it to a population of 10^9 individuals bearing the first mutation. That, in the darwinist model, is one of the tasks assigned to the necessity part, NS (the other being the fixation of the mutation by negative selection of further mutations). That's why I say that more complex mutations must be deconstructed into simple selectable steps for the darwinian model to work. And as I believe that that is in general impossible, that's why I don't believe that the darwinian model is a credible model. It's as simple as that. Alternatively, in designed protein engineering, the same expansion and fixation can be done by artificial means (artificial selection), which don't require the single steps to be "naturally selectable" (that is, to bear a reproductive advantage). It is only required that they be recognizable by the designer. That's why ID is a credible model, and that's why I believe in it. It's as simple as that.
gpuccio
September 8, 2010, 04:04 AM PDT
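To make the population arithmetic in gpuccio's comment above concrete, here is a minimal sketch; the per-individual rate is an assumed, illustrative number, not a measured one:

    import math

    # Assumed per-individual, per-generation chance of one specific
    # "10-bit" double substitution (illustrative only).
    p = 1e-10

    for n in (10**9, 10**6, 1):
        # P(at least one individual produces the target variant)
        p_hit = -math.expm1(n * math.log1p(-p))
        print(f"population {n:>10}: P(target variant) = {p_hit:.2e}")

With 10^9 carriers the target variant has appreciable odds per round; with a single mutated founder it is vanishingly unlikely, which is why the comment insists the mutant must first be re-expanded to ~10^9 before the next step has comparable odds.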
gpuccio: You seem to agree that biological objects can increase the "information content" by 10 bits, let's say by a couple of mutations. Let's also say you take the "new", mutated organisms and do a similar experiment and again find an information increase of 10 bits. With respect to the original organism, the final one has increased its information content by 20 bits, correct?
Indium
September 8, 2010, 02:51 AM PDT
Petrushka: I apologize, but I answered your question about the frameshift mutations in the above post to MathGrrl. Would you be so kind as to read it there? It's the final part.
gpuccio
September 8, 2010, 01:09 AM PDT
MathGrrl: Briefly:
Okay, your definition is evolving (pardon the pun). It seems that CSI is a subset of something called "functional information", where "functional information" becomes CSI when it reaches a certain number of bits. Correct?
Correct! Let's say you got it at last. Sometimes, repetition is useful.
You still haven't demonstrated with an example taken from a real biological system how, exactly, to calculate "functional information". Doing so would clarify your terms much more than additional verbal descriptions.
Please, read in this thread my post #53 (to you), and the related links, read the Durston paper: http://www.tbiomed.com/content/4/1/47 and my posts #64 (to you), #67 (to KF), #74 (to you), #88 (to zeroseven), #89 (to Petrushka). I think I have been pretty active, in this post (and elsewhere, with you too) giving "examples taken from real biological systems". Maybe here too repetition would help, but frankly I am tired.
It appears that we are in agreement that evolutionary mechanisms can generate "functional information", correct?
Correct. In simple forms, they can. And they do.
"Necessity" is another term of art requiring definition. Do you mean "any mechanism that is a result of known chemistry or physics"?
I mean any mechanism that is strictly and explicitly algorithmic. That would include the working of laws of physics, at least at the non-quantum level. And any model which describes each step of the algorithm without shifting to probabilistic inferences. What is the difficulty? In all ID literature, starting with Dembski, "necessity" is used for all algorithmic explanations which do not include a probabilistic description and the use of random events in the model. I thought that was clear. That's why RV is not a necessity algorithm, and NS is.
I have not personally seen an application of Dembski's Explanatory Filter to a real biological system. Do you have a reference?
All my examples given above are applications of the filter to real biological systems. Some parts of the filter are usually taken for granted by me, such as the assumption that there is no known necessity mechanism, apart from the suggested NS (which I take into account explicitly in the discussion), which can determine the specific sequence of proteins. But if you have different views about that point, please express them.
We may be reaching a point of quantifiability! Is "functional information" equivalent to Kolmogorov complexity? If so, why not use the common term?
First of all, functional information is a measurement of the complexity which is necessary to achieve the function (IOWs, of the specified complexity). That's why we use a specific term. Neither Kolmogorov complexity nor Shannon's H implies any reference to function. If you read Durston's paper, you will see that he uses Shannon's H variations to measure functional complexity. I agree with that approach. Anyway, it is true that the complexity we measure must be scarcely compressible, otherwise it could be generated by a simpler algorithm. That is usually taken for granted for proteins, because it is universally recognized that their sequences are scarcely compressible, so that requisite is already satisfied.
Actually, in a population, random mutation and selection are taking place in parallel across a large number of individuals.
That may be true, but for different mutations in different genes. If we are discussing the "evolution" of a specific gene, it would start with one mutation (or anyway a few mutations, if we want to stretch probabilities to the extreme) in one individual. Unless that mutation is fixed and expanded by positive NS from that specific individual, it cannot contribute to future events in a non-random way; IOWs, we are still in the purely random model, for which ID computations apply.
So you're saying that the vast majority of random mutations that are preserved by natural selection result in only a small change to both the genotype and the phenotype? That is completely in line with the predictions of modern evolutionary theory.
Well, I am happy of that. That means that there are at least small fragments of modern evolutionary theory which are not pure folly :)
That is not actually correct. The importance of neutral and even slightly deleterious mutations has been identified as extremely important.
It is correct. Neutral mutations can be as important as you want, but they can contribute to the final event only in random ways. Therefore, they are not a necessity model, and the ID computations still apply. That's a point surprisingly misunderstood by darwinists, in their epistemological distraction.
Neutral mutations can become fixed in a population.
Yes, but randomly. They are by definition neutral, therefore they cannot be selected. Why should a neutral mutation which is in the long range useful for the final event be fixed better than one which is useless or negative? Again, we are in a purely random system. (I would really appreciate an explicit comment on this point, instead of the usual iterations of "you have not given any real example". Thank you.)
"Easily" may be overstating the case, but "plausibly" certainly applies, especially when neutral mutations are taken into account.
Wrong. I had said: "So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage." For the reasons stated above, neutral mutations cannot contribute to selectable steps any more than the original random mutations, so they do not change anything. And let's say that you have to show that "any transition of 35 AAs or more can be deconstructed". I happily take away the "easily". I will applaud if you succeed, even if that was difficult.
I cannot answer you about the immune system now, because I do not have the time. I will try to get back on that later.
Finally, about frameshift mutations: the only "real example" (you see, I learn quickly) of frameshift mutation I am aware of is Ohno's theory about nylonase in his 1984 paper: Susumu Ohno, Birth of a unique enzyme from an alternative reading frame of the preexisted, internally repetitious coding sequence, Proc. Natl. Acad. Sci. USA Vol. 81, pp. 2421-2425, April 1984. Ohno was a brilliant scientist, but this particular theory has been vastly proven false. And with it, the annexed rhetoric of darwinists against ID. I am well aware of the Okamura paper suggested by Petrushka, because it is cited in the Wikipedia page which gives the "disclosure" about the Ohno theory (I suppose it is meant as some form of consolation :) ). But frankly, that paper is highly speculative, abstract, and does not prove anything. There is certainly not one "real example" of frameshift mutation bearing functional results in it. I am not aware of any more realistic follow-up to it. Therefore, I believe that no example of a frameshift mutation bearing a new functional protein is known. If you know differently, please let us know (and above all, let Wikipedia and all darwinists know: they will be very happy to have a new "nylonase" argument, after the sad destiny of the original one).
gpuccio
September 8, 2010, 01:07 AM PDT
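Since the Durston method comes up repeatedly in this exchange, a minimal sketch of its per-site logic may help; the four-sequence "alignment" below is invented purely for illustration:

    import math
    from collections import Counter

    # Toy alignment of functional sequences (invented data).
    alignment = ["MKVLA", "MKVIA", "MRVLA", "MKVLG"]

    def site_entropy(column):
        # Shannon H of the residues observed at one aligned site.
        counts = Counter(column)
        n = len(column)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # Durston-style Fits: per site, log2(20) minus the observed
    # uncertainty of the functional family at that site.
    h_null = math.log2(20)
    fits = sum(h_null - site_entropy(col) for col in zip(*alignment))
    print(f"{fits:.2f} Fits")

A fully conserved site contributes the maximum 4.32 Fits; a site tolerating several residues contributes less, which is the "unspecific" discount discussed in this thread.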
gpuccio, If you're only interested in single mutations that result in a significant increase in "functional information" (however you measure it), perhaps we should be discussing the frameshift events mentioned by Petrushka:
You are aware that there are many proposed frameshift events? http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WG1-4KJV32X-2-C&_cdi=6809&_user=10&_pii=S0888754306001807&_origin=search&_coverDate=12%2F31%2F2006&_sk=999119993&view=c&wchp=dGLbVlW-zSkWb&md5=edfb0857c096cf57efa86fce8eab7c6b&ie=/sdarticle.pdf
A worked, step-by-step example of how to calculate CSI for one of these mutations would be very helpful in understanding your terms and thereby evaluating your claims. Could you please provide one?
MathGrrl
September 7, 2010, 08:27 PM PDT
gpuccio, My apologies for the delay in replying. I'm currently in exotic Des Moines (pronounced, of course, DAY MWAH) and finally settled after a day of meetings.
the variation in Lenski's case, if it is of two AAs, is of about 10 bits of functional information (something less, indeed). It is specified, but not complex. So, it is not a variation which can be defined as CSI, whatever threshold you fix for CSI.
Okay, your definition is evolving (pardon the pun). It seems that CSI is a subset of something called "functional information", where "functional information" becomes CSI when it reaches a certain number of bits. Correct?
Functional information is the amount of information necessary for the function to emerge. In a transition, it is the amount of variation which is necessary for the new function to emerge. Functional information can be simple (below a threshold, which I have suggested at 150 bits for biological systems), or complex (above that threshold). An object, or a transition, which is specified (a function, or a new function), and complex, exhibits CSI, and is empirically found to be designed in all known cases. Is that clear now?
It's getting clearer. Basically you seem to be saying that CSI is "functional information" over a certain number of bits. You still haven't demonstrated with an example taken from a real biological system how, exactly, to calculate "functional information". Doing so would clarify your terms much more than additional verbal descriptions. It appears that we are in agreement that evolutionary mechanisms can generate "functional information", correct?
In evaluating if an object, or a transition, exhibits CSI, we always have to exclude any known mechanism based on necessity which can have generated the result.
"Necessity" is another term of art requiring definition. Do you mean "any mechanism that is a result of known chemistry or physics"?
This is one of the fundamental principles of ID, and of Dembski’s filter.
I have not personally seen an application of Dembski's Explanatory Filter to a real biological system. Do you have a reference?
Indeed, the functional information we measure must be really “pseudo-random” information, scarcely compressible. IOW, we must look at the true Kolmogorov complexity of our result.
We may be reaching a point of quantifiability! Is "functional information" equivalent to Kolmogorov complexity? If so, why not use the common term?
The darwinian mechanism is made of two different components, applied repeatedly: RV + NS. Now, RV is the random part, while NS is a necessity mechanism. The two parts act sequentially, one after the other, and then the cycle is repeated.
Actually, in a population, random mutation and selection are taking place in parallel across a large number of individuals.
I want to state clearly here that any computation of the functional information in a system must be applied only to the random part, that is to the ability of RV to generate a specific result without any help from a necessity mechanism.
So you're saying that the vast majority of random mutations that are preserved by natural selection result in only a small change to both the genotype and the phenotype? That is completely in line with the predictions of modern evolutionary theory.
Yes, you have forgotten a very important if: that each of the “steps” of two AAs mutations must be functional and selectable. IOWs, each step must be visible to NS (which is not a trifle: it must confer a reproduction advantage).
That is not actually correct. The importance of neutral and even slightly deleterious mutations has been identified as extremely important.
Why? Because the two AAs mutation (about 10 bits, in the range of what a random biological system like RV can achieve) must be functional, must be positively selected, and must expand in the population and be fixed against further variation.
Neutral mutations can become fixed in a population.
So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage.
"Easily" may be overstating the case, but "plausibly" certainly applies, especially when neutral mutations are taken into account.
Interesting indeed. While I see no logical reason why that should be generally true, I would certainly be very happy to analyze any specific darwinian model for such a deconstruction of any of the complex transitions we know must have happened. I have repeatedly suggested a context which you have always refused to comment upon: the emergence of new protein domains, of new protein superfamilies, which we know has happened repeatedly in natural history.
This is an area of very active research, based on the predictions of modern evolutionary theory, in particular the nested hierarchy. Given your interest in ID, you may be familiar with this literature: http://www.nature.com/ni/journal/v7/n5/full/ni0506-433.html This link shows the literature on the evolution of the immune system (surely a specified function by your definition) presented to Behe at the Dover trial. This is only a small subset of the information available via Pubmed and other sources. Similar amounts of data are available on the evolution of other functional systems. If you don't think Lenski's experiment provides enough "functional information" to constitute CSI, I'd be very interested in seeing a worked example of your calculation for one of the immune system functions referenced in the above link.
MathGrrl
September 7, 2010, 08:23 PM PDT
I believe there may be a way to rigorously settle the fact that parent bacteria are losing information in beneficial adaptations, besides mathematically. It is fairly well known that it is only when a computer erases information that the second law is obeyed for its computation: Landauer's principle. Of note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of a (specific) amount (at least kT ln 2) of heat.,,, Landauer's Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008)." http://en.wikipedia.org/wiki/Landauer%27s_principle Also of interest is that a cell seems to be successfully designed along the very stringent guidelines laid out by Landauer's principle of 'reversible computation' in order to achieve such amazing energy efficiency, something man has yet to accomplish in any meaningful way for computers: Notes on Landauer's principle, reversible computation, and Maxwell's Demon - Charles H. Bennett Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer's principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,, http://www.hep.princeton.edu/~mcdonald/examples/QM/bennett_shpmp_34_501_03.pdf Thus I hold that it may be possible to measure a precise heat release for 'beneficial adaptations', since the 'beneficial adaptation' will always fall in accord with Genetic Entropy and will always be the result of a loss of information from the original optimal information that was in the parent bacteria. This study agrees with the reasonableness of the proposition: Functional Information and Entropy in living systems - Andy McIntosh Excerpt: There has to be previously written information or order (often termed "teleonomy") for passive, non-living chemicals to respond and become active. Thus the following summary statement applies to all known systems: Energy + Information equals Locally reduced entropy (Increase of order) (or teleonomy) with the corollary: Matter and Energy alone does not equal a Decrease in Entropy http://www.heveliusforum.org/Artykuly/Func_Information.pdf As well, a point that seems to get lost in the details of elucidating how much functional information is in a molecular string is the fact that information is shown to be its own unique entity, completely transcendent and separate from matter and energy, by quantum teleportation as well as by the refutation of the hidden variable argument in quantum entanglement. This is no small thing to consider! "Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Wiener - MIT Mathematician - Father of Cybernetics Information and entropy - top-down or bottom-up development in living systems? A.C. McIntosh Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/pages/papers.asp?iID=47&in=4&vn=4&jID=19 etc., etc.
bornagain77
September 7, 2010, 07:21 PM PDT
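For what it is worth, the Landauer bound quoted in the comment above is easy to evaluate; the temperature is just an assumed round value:

    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 310.0                   # assumed ~body temperature, K
    # Landauer: erasing one bit dissipates at least k*T*ln(2) of heat.
    print(f"{k_B * T * math.log(2):.2e} J per erased bit")   # ~3e-21 J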
You are aware that there are many proposed frameshift events? http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WG1-4KJV32X-2-C&_cdi=6809&_user=10&_pii=S0888754306001807&_origin=search&_coverDate=12%2F31%2F2006&_sk=999119993&view=c&wchp=dGLbVlW-zSkWb&md5=edfb0857c096cf57efa86fce8eab7c6b&ie=/sdarticle.pdf
Petrushka
September 7, 2010, 05:06 PM PDT
Petrushka: They are neither added nor changed. A new function is obtained through a transition from an existing state to a final state. If the transition generates a new function, it is specified. If the transition is of two AAs, and if those two AAs are absolutely necessary for the new function in their unique form, then the complexity of the transition is 20^2, that is 8.65 bits. If the transition is of three AAs, with the same assumptions as above, then the complexity is 20^3, that is about 13 bits. In general, each AA site which has to have a unique value to ensure the function contributes 4.32 Fits of functional complexity. In the case that more than one AA in that site could give the new function, we can apply the Durston method, which takes into account "how much" a single mutation has to be specific for the function by using the concept of Shannon's H to compute the change in functional uncertainty between the two states. In that case, each AA in the transition will contribute less than 4.32 Fits to the total complexity, according to how "unspecific" it is. I hope that clarifies some important concepts about quantifying a functional transition.
gpuccio
September 7, 2010, 04:39 PM PDT
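The arithmetic gpuccio gives above reduces to log2 of the number of sequence possibilities; a short check (numbers rounded as in the thread):

    import math

    # Each amino-acid site forced to one unique value out of 20
    # contributes log2(20) ~ 4.32 bits (Fits) of functional complexity.
    for n in (1, 2, 3, 35):
        print(f"{n:>2} fully specified site(s): {n * math.log2(20):6.2f} bits")
    # 2 sites -> ~8.6 bits, 3 sites -> ~13 bits, 35 sites -> ~151 bits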
zeroseven: Absolutely not. First of all, Lenski will probably detail (maybe he has already done that) the mutations in his experiment. I am quite sure they will be shown to be very simple. In another famous case, that of nylonase, darwinists believed for decades that it emerged through a frameshift mutation of an existing protein gene. That would mean the generation of true new CSI, because a frameshift mutation transforms all the existing codons in a completely random way. A functional result of such an event would be against all the ID theory. Obviously, now we know that the darwinist theory was completely wrong, and that nylonase originated through a couple of mutations in the existing penicillinase domain, whose fold and esterase function it keeps, with a shift in target affinity at the active site. That is exactly another case of microevolution, and no new CSI is implied. As you can see, the concept of CSI can be applied (to exclude that new CSI has been generated) to all cases of supposed "evolution" of which we know the molecular basis. In the cases I quoted, it has shown, or will show very soon, that no new CSI generation is implied, and that they are cases of microevolution, involving new functional information in very simple form (a few Fits). But another important application of CSI is to apply it to existing models of evolution. For instance, as I have said lots of times in this and in other threads, without ever receiving any answer from darwinists, the darwinian theory implies that new protein superfamilies which emerge in natural history must have come from existing, different superfamilies through the traditional process of RV + NS. Well, according to ID theory, and to the application of the concept of CSI, that is impossible through random variation alone. Please notice that in this case, if we choose any specific superfamily, of which the time of appearance is more or less known (according to standard darwinian methods, such as molecular clocks, nested hierarchies, homologies and so on), and if darwinists could make the concession of proposing a model of how that new protein superfamily emerged (that is, from what different precursor it derived), then we have a situation where we know with reasonable precision:
a) the original state (the sequence of the ancestor superfamily), with its function and fold;
b) the final state (the sequence of the new superfamily), with its new function and new fold;
c) the transition (how many AAs have changed);
d) the functional complexity in Fits of the transition (which can be estimated by the Durston method).
So, if we compute d), and it is above our threshold (for the moment we can assume mine of 150 Fits), the hypothesis of a purely random transition is invalidated. It is obviously possible that the transition happened in steps, each of them selectable. But then it is the duty of those who propose the model to show that those steps exist, and not only in their imagination, and that they are selectable. After all, we are speaking of a transition from one fold and one sequence to another fold and another sequence. Remember also that the primary sequences of the initial state and of the final state are by definition totally unrelated (less than 10% homology is a good threshold to define completely isolated protein families, and according to the SCOP database we have at present 6258 different genetic domain sequence subsets with that property).
That means that in such a model the initial state is totally neutral, in the search space, in relation to the final state. Therefore, there is no guarantee that functional intermediates exist between the two states. Indeed, the contrary is true: it is extremely likely, and completely reasonable, that such "selectable functional intermediates" don't exist at all. However, darwinists have certainly never shown those intermediates in such a transition model. Indeed, as far as I know, darwinists have never shown any transition model of that kind. So, what would you call a theory which proposes a causal mechanism (RV + NS) and has no model of how that mechanism could explain thousands of macroevolutionary molecular events that must have occurred? Indeed, no reasonable model of macroevolution at the molecular level? And whose only real models are cases of microevolution, involving simple tweaking of existing functions, through one or two random mutations, always fully inside an existing island of functionality?
gpuccio
September 7, 2010, 04:25 PM PDT
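A sketch of the evaluation steps a) through d) above, in my own paraphrase rather than gpuccio's; the sequences are invented, and the per-site figure is the 4.32-bit upper bound rather than a full Durston calculation:

    import math

    THRESHOLD_FITS = 150.0                 # the thread's biological bound
    MAX_BITS_PER_SITE = math.log2(20)      # upper bound per changed site

    def evaluate_transition(ancestor, descendant):
        # c) count the changed AAs, d) bound the transition's Fits,
        # then compare against the threshold.
        changed = sum(a != b for a, b in zip(ancestor, descendant))
        max_fits = changed * MAX_BITS_PER_SITE
        return changed, max_fits, max_fits > THRESHOLD_FITS

    print(evaluate_transition("ACDEFGHIKL" * 5, "ACDEFGHIKM" * 5))
    # (5 changed sites, ~21.6 Fits, False): within the stated reach of RV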
because possibly he has not yet detailed the mutations implied at the molecular level...
Suppose it turns out that the Lenski result depends on two or three point mutations. Is that three bits added, or just three bits changed?
Petrushka
September 7, 2010, 03:11 PM PDT
gpuccio, Thanks for the explanation. But with reference to your penultimate paragraph, will this always be a problem in real world biological examples? That is, that we will not have detailed enough information to run the calculations?
zeroseven
September 7, 2010, 02:33 PM PDT
zeroseven: I agree with you: you are confused. But nothing bad in that. I will try to clarify. Usually, quantities are very precise, e.g. the boiling point of a substance, the energy required to break an electron off an atom, etc. There is universal agreement as to amounts and quantities, as they are based on observation and then measured in experiments. You are giving examples from physics. Have you any acquaintance with sciences such as biology, medicine, psychology? Be sure that the scenario is very different. Aren't they sciences? Yes, they are. The only thing which is inappropriate here is your epistemology. But with CSI this seems not to be the case. You have given 3 different "thresholds" that various people have adopted beyond which we can take it that CSI has been generated. I am afraid you have not followed well the discussion. The threshold of complexity in the evaluation of CSI has only one purpose: to avoid false positives. If you know something of statistics, you can appreciate that conventional thresholds are used all the time in statistical inference. For instance, the threshold of alpha error in hypothesis testing is usually set at 0.05, but many prefer to set it at 0.01 to reduce the 5% of errors which is implied by such a high threshold. There is nothing vague or non-scientific in that. Moreover, I have specified in my posts that the different thresholds I quoted are appropriate in different contexts. Dembski's UPB of 500 bits has the purpose of avoiding any possible false positive in the whole system of the universe, and of all its computational resources. KF and others have sometimes elevated that threshold to 1000 bits just to be even more sure that no possible false positive can ever happen. But my personal threshold of 150 bits has a very specific meaning, which I have stated very clearly in my posts: it is a "biological probability bound". IOWs, it is not referred to the whole universe, but to a specific system: our planet, with its 4 billion years of existence, and to the realistic probabilistic resources of bacterial reproductive rate and of the rate of mutations in that system. So, my threshold is not only a conventional value, but is based on a specific evaluation of a realistic and well defined system, which is, I believe, well appropriate for a discussion about the origin of life and of species. And in no way is its setting "based on the desired outcome". What gave you this strange idea? Finally, what do you mean with the following? Surely the only way to say that Lenski's experiment does not produce new CSI is to measure it before and after? But you seem to be saying you can't measure it because we don't know enough about what is occurring? In that case what use is it? But it is simple: the only way to say that Lenski's experiment does not produce new CSI is to measure the new information (necessary for the new function to emerge) generated from the initial state to the final state, what I have called "the transition". I don't know for sure how many mutations occurred in the case of Lenski, and I have reasonably assumed that they were probably a couple. I have invited anyone who has more detailed information about that to provide it. But it is true that, if we do not have detailed enough information about a transition, we cannot say if that transition implied the generation of new CSI. Why do you think that such a fact implies that "CSI is no use"?
It is absolutely normal and obvious that, if we do not have the information necessary to apply a mathematical concept, we can't apply it in that case. We can obviously apply it in all other cases where that information is available. I understand your esteem for Lenski, but why do you think that the fact that we cannot apply a concept to his experiment (only because possibly he has not yet detailed the mutations implied at the molecular level) makes the concept "not useful"? Epistemology has become very strange, these days...
gpuccio
September 7, 2010, 02:02 PM PDT
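The "biological probability bound" gpuccio describes can be framed as log2 of the total number of replication events available; every input below is an assumed round number, not a measured one:

    import math

    population = 1e30        # assumed global prokaryote population
    gens_per_year = 1000     # assumed generations per year
    years = 4e9              # assumed duration, ~age of life on Earth

    trials = population * gens_per_year * years
    print(f"~2^{math.log2(trials):.0f} replication events")   # ~2^142

With these assumptions the planet's trials come to roughly 2^142, so a 150-bit threshold sits a little above anything such a system could plausibly sample.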
PS: UB, at this time we can look back and see that at no. 1, MG's initial objection for this thread was that ideas are not responsible for the people who follow them, joined to the inevitable turnabout allusion to Christian atrocities. The issues of media bias and malthusian influences in darwinist thought feeding into a nihilistic desperation and hysteria were then taken up. After that we see the injection of a distraction on defining and quantifying information, then a refusal to be responsive when this was addressed. That adds up, but not to a happy sum by any means.
kairosfocus
September 7, 2010, 01:58 PM PDT
07: You and others are again invited to read the weak argument correctives, especially 26 - 28, as you were already invited in this thread. You will immediately see that in fact the simplest metric of FSCI, functionally specific bits, is a commonplace of information systems such as the PC you are using. And, that in every case where we directly and independently know the source of FSCI, it is intelligently caused. That is, FSCI is an empirically reliable sign of causation by directed contingency. You are further invited to examine the measures of functional sequence complexity of thirty-five protein families that were published in the peer-reviewed literature, here. The fallacy of setting up and knocking over a strawman you just indulged at 80 above therefore stands exposed as irresponsible, willfully obtuse and materially untruthful; as well as disrespectful. After that, it is time to get back to the serious issue of media bias and the even more serious issue of neo-malthusian nihilism connected to today's darwinist outlook, which this thread is being diverted from. Kindly, do better than that next time. Good day GEM of TKI
kairosfocus
September 7, 2010, 01:53 PM PDT
I hope everyone had a safe extended-holiday weekend… Mathgrrl, Your claim that I am "uncivil" is a turn of events I am more than prepared to live with. However, in your reasoning you stated that my comments were "baseless". I challenge you on that. You made a comment regarding an ID proponent who would dare to question evolutionary theory (on an ID blog) while "ignoring a truly phenomenal amount of scientific research over the past century and a half." In return I asked a simple question. "What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?" The words you and I selected for our individual posts are easily accessible to the average reader. (That is fair to say, isn't it?) The positions are hardly in question. The average person might look at this and come to the conclusion that I think there is more to the story of Life than you, and that you consider the level of explanation to be sufficient to your taste – so much so that apparently anyone on the Internet making certain statements should be reminded they are carelessly speaking outside the consensus. The casual observer might also conclude I am suggesting you personally reconsider the repeated observation of symbolic information processing inside the cell. All of it whirling along in the orchestrated harmony granted to it by the "frozen accident" - as it was first referred to. That functioning thing which evolution needs in order to work at all. The point in the causal chain (where no matter what else we may believe) we both know it all comes together and works; the core act of a dividing cell, to copy the information. That information is recorded in a symbolic format. Physics can't explain it any more than physics can explain the existence of a red plastic ball. I noticed previously that you have given several strong opinions in defense of materialistic thought, and therefore could assume you knew something about it. In particular I assumed you knew that symbolic information was being processed inside the cell. After all, these topics are hardly hidden from view within the ID debate. However, in my response back to you (#27) I simply answered your question, "Could you expand upon it a bit, please?" I offered a very straightforward answer and gave you two examples of the chemical relationships to which I was referring: one of them was an example of the use of symbols in structural information (like the formation of proteins from DNA), and the other was an example of symbols used in bio regulation (like that observed in second messengers or cAMP). I wrote:
The entire body is made up of context-specific reactions and interplay between chemical constituents which have nothing whatsoever to do with each other outside of the context of the system they are coordinated within. cAMP has absolutely nothing to do with glucose. Cytosine-Thymine-Adenine is a chemical symbol mapped to Leucine based upon an arbitrary rule.
…and I also offered two quotes from a respected research biologist which directly supports the comments I had made.
”The second noteworthy aspect is that the computation involves the use of chemical symbolism as information is transmitted…whole cell involvement and transient chemical symbols are typical of cellular computation…These chemical forms act as symbols that allow the cell to form a virtual representation of its functional status…any successful 21st century description of biological functions will include control models that incorporate cellular decisions based on symbolic representations”
Your response was to simply ignore the examples I had given. You made absolutely no comment whatsoever about protein synthesis, second messengers, cAMP, glucose, adenine, information transfer, regulation networks - nothing at all. Instead, you implied that the quotations I had offered lent no support to my claim, and then turned around and asked again for examples - which you just ignored. Clearly, the question (What research over the past century and a half indicates that inanimate matter can establish symbol systems so evolution (in whatever and any form you wish to believe in it) can even occur in the first place?) is not one you intend to address. It is just as clear that, despite not allowing yourself to be open to question, you intend to continue being confrontational to ID concepts. Coincidentally, I hardly think that being confrontational with you is a "baseless" response. Your contrived protestations for descriptive clarity are then seen for what they are.
Upright BiPed
September 7, 2010, 08:43 AM PDT
MG: It is a little disappointing that this thread continues to be tangential to the major issues raised in the original post. However, it does seem that some further remarks need to be made on the tangential matter. First, I find it sadly disappointing that, again, you have chosen to ignore where there is a specific response to your request on definition, both from BA77 and the undersigned. So, it is simply false that there is a "resistance" to provision of definition, though there is a recognition of the limitations of definition, and a pointing out that a great many things we work with are not subject to the sort of definition you a priori demand. On wider questions you posed, the basic problem with the incrementalist model of origin of bio-information is that it assumes a particular structure to the configuration space of biological systems that is most definitely unwarranted. Namely, that there is a vast, easily accessible continent of function, which leads to easy progress step by step. You have no right to assume such a model, as a moment's reflection on how codes work will tell you: by far and away most at-random complex symbol strings will be non-functional. And DNA stores a code -- actually, it seems, several codes -- which are central to life's function. Starting at 100+ k bits of stored information, and ranging upwards of billions. Now, the point of the issue of the threshold of functionally specific complex organisation and associated digitally coded information for bio-function, is that this implies that instead we have deeply isolated islands of function in vast seas of non-function. Consequently, the first challenge is to get to the first viable life forms, where until we have coded stored information joined to metabolic nanotechnology, self-replicating life does not exist and there is no reproduction for variations and environmental culling pressures to shift populations. In short -- and this has been so ever since Darwin truncated his theory at this strategic point -- there is a gap at the root of the tree of life so-called. And, hypothetical replicator molecules in Darwin's warm little pond or the modern equivalent, do not account for the origin of the observed von Neumann replicator tied to metabolism in cell-based life. Then, when we come to novel body plans, we see that we are not "merely" looking at needing to account for the suggested spontaneous origin of ~ 100 k bits of initial bio-information, the codes and the machines that make those codes work, joined to the metabolic systems that turn environmental resources into cell components and energy. Instead, dozens of times over, we have to account for the spontaneous origin of embryologically feasible new architectures for living organisms, with upwards of 10 million bits apiece. Just 1,000 bits is far beyond the credible threshold for spontaneous information generation on the resources of our observed cosmos. Of course, if one has arrived on a beach of function, then one can plausibly discuss how one may by random small variation and differential performance, move towards peaks of performance. But that is not where the issue lives. In short, some big questions on origin of complex biological information are routinely being begged in how we are taught biology and related disciplines. That is, once the complex, functionally specific coded information threshold issue is on the table, macroevolution cannot properly be claimed to be simply accumulated microevolution.
And indeed, for a generation, it has been widely known that the fossil record bears this point out: it shows sudden appearances, stasis and disappearance of basic forms, as the overwhelming pattern. There is no empirically well founded smoothly varying tree of life, starting with the gap where there should be a root, and going on to the origins of major body plans. The so-called Cambrian life revolution is the capital illustration of this pattern, but the pattern is the overwhelming one in the fossil record, headlines about missing links notwithstanding. The real question, instead, is how to get to the shores of function that are on islands deeply isolated in vast configuration spaces. And that question points to the only empirically well founded source of complex coded information: intelligence. GEM of TKI
kairosfocus
September 7, 2010, 12:50 AM PDT
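The 1,000-bit figure in the comment above is a search-space claim, and the ratio is quick to compute; the 10^150 operations count is the commonly assumed upper bound in this literature:

    import math

    space = 2.0 ** 1000        # configurations of a 1000-bit string
    searches = 10.0 ** 150     # assumed max events in the observed cosmos
    print(f"searchable fraction: 10^{math.log10(searches / space):.0f}")
    # -> 10^-151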
BA, so I assume the answer is "no". So then what's the point of it if you can't make measurements in the real world?
zeroseven
September 6, 2010, 08:28 PM PDT
zeroseven, evolve some new function that exceeds the parent strain in the fitness test. Is Antibiotic Resistance evidence for evolution? – 'The Fitness Test' – video http://www.metacafe.com/watch/3995248 the test is blatantly clear,,, You are the one that believes the absurd position that bacteria evolved into all life we see on earth. If this position of yours is true, you should be able to point me to thousands upon thousands of examples of the fitness test being passed, complete with list upon list of new functional proteins being generated, as well as a fairly long list of protein machinery being originated by material processes. Yet you cannot even cite one single protein originating by purely material processes. Shoot, man, using all his technology and lab equipment, will never find a novel functional protein, seeing that they exceed 1 in 10^77 in rarity. The following, if you care anything about the truth, which I highly doubt, shows one of the most crushing problems against neo-Darwinian evolution ever producing even trivial amounts of functional information: Poly-Functional Complexity equals Poly-Constrained Complexity. The primary problem that poly-functional complexity presents for neo-Darwinism is this: To put it plainly, the finding of a severely poly-functional/polyconstrained genome by the ENCODE study has put the odds, of what was already astronomically impossible, to what can only be termed fantastically astronomically impossible. To illustrate the monumental brick wall any evolutionary scenario (no matter what "fitness landscape") must face when I say genomes are poly-constrained to random mutations by poly-functionality, I will use a puzzle: If we were to actually get a proper "beneficial mutation" in a polyfunctional genome of say 500 interdependent genes, then instead of the infamous "Methinks it is like a weasel" single element of functional information that Darwinists pretend they are facing in any evolutionary search, with their falsified genetic reductionism scenario I might add, we would actually be encountering something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford:
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S
Which is translated: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other 3 readings of the message with the new mutation. This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations. The puzzle I listed is only poly-functional to 4 elements/25 letters of interdependent complexity; the minimum genome is poly-constrained to approximately 500 elements (genes) at a minimum approximation of polyfunctionality. For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order! Notes: Simplest Microbes More Complex than Thought - Dec. 2009 Excerpt: PhysOrg reported that a species of Mycoplasma,, "The bacteria appeared to be assembled in a far more complex way than had been thought." Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes. http://www.creationsafaris.com/crev200912.htm#20091229a First-Ever Blueprint of 'Minimal Cell' Is More Complex Than Expected - Nov. 2009 Excerpt: A network of research groups,, approached the bacterium at three different levels. One team of scientists described M. pneumoniae's transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation. "At all three levels, we found M. pneumoniae was more complex than we expected," http://www.sciencedaily.com/releases/2009/11/091126173027.htm Scientists Map All Mammalian Gene Interactions – August 2010 Excerpt: Mammals, including humans, have roughly 20,000 different genes.,,, They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome. http://www.sciencedaily.com/releases/2010/08/100809142044.htm
bornagain77
September 6, 2010, 06:50 PM PDT
BA77@76, Ok BA, can you use these equations and tell me what the CSI of Lenski's bacteria is before and after?
zeroseven
September 6, 2010, 05:28 PM PDT
as well, the fact is that evolutionists must pass the fitness test before they can even claim that new complex functionality/information exists which was not present in the parent species. For evolutionists to point to a sub-species which has lost robustness for survivability, as Lenski's 'coddled' E. coli clearly has when compared to its parent wild strain, is to ignore the main point that evolutionists need to establish in the first place. To play semantics with a devolved strain, to see if 'new' functional information has 'evolved', is an exercise in futility, for the first step in assessing a gain in functional information/complexity (the fitness test) has not even been passed.,,,,
bornagain77
September 6, 2010, 04:41 PM PDT
zeroseven, you state: And despite MathGrrl's efforts, and referring to BA's notes, I haven't seen a precise application of it? Excuse me, but this video, which is listed in my notes, shows a precise application of Szostak's equation for functional information: Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160 As well, I listed this paper in which functional information was calculated for 35 protein families: Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47 ,,, thus, zeroseven, you expose yourself as thoroughly disingenuous with the evidence I provided,,,
bornagain77
September 6, 2010, 04:33 PM PDT
gpuccio, I am not a mathematician or information theorist, and so the technical discussions confuse me. But as a layperson's observation, the way you are using maths and calculations seems very different to how it is usually done in science. Usually, quantities are very precise, e.g. the boiling point of a substance, the energy required to break an electron off an atom, etc. There is universal agreement as to amounts and quantities, as they are based on observation and then measured in experiments. But with CSI this seems not to be the case. You have given 3 different "thresholds" that various people have adopted beyond which we can take it that CSI has been generated. It just all seems very ad hoc and imprecise. Why 500 bits, why 1,000, why 150? So 149 would not do it? Do you not think there is a danger of setting thresholds based on the desired outcome? And despite MathGrrl's efforts, and referring to BA's notes, I haven't seen a precise application of it? Surely the only way to say that Lenski's experiment does not produce new CSI is to measure it before and after? But you seem to be saying you can't measure it because we don't know enough about what is occurring? In that case what use is it? Yours, confused
zeroseven
September 6, 2010, 03:23 PM PDT
MathGrrl: I am afraid there is some confusion about terms here (I don't know if I may have been imprecise in some phrase; in case, I apologize). The variation in Lenski's case, if it is of two AAs, is of about 10 bits of functional information (something less, indeed). It is specified, but not complex. So, it is not a variation which can be defined as CSI, whatever threshold you fix for CSI. If you are aware that the number of functional mutations in Lenski's case is different, please let me know, and we will update the computation. Functional information is the amount of information necessary for the function to emerge. In a transition, it is the amount of variation which is necessary for the new function to emerge. Functional information can be simple (below a threshold, which I have suggested at 150 bits for biological systems), or complex (above that threshold). An object, or a transition, which is specified (a function, or a new function), and complex, exhibits CSI, and is empirically found to be designed in all known cases. Is that clear now? Let's go to your other point. I have already debated it many times with others, and maybe also with you (I can't remember), but here we are again. In evaluating if an object, or a transition, exhibits CSI, we always have to exclude any known mechanism based on necessity which can have generated the result. This is one of the fundamental principles of ID, and of Dembski's filter. Indeed, the functional information we measure must be really "pseudo-random" information, scarcely compressible. IOW, we must look at the true Kolmogorov complexity of our result. Now, you say: If my understanding of your argument is correct, you're claiming that evolutionary mechanisms cannot generate more than a certain amount of CSI as a single mutation becomes fixed in a population. First of all, the correct form is: "evolutionary mechanisms cannot generate more than a certain amount of functional information". Let's be clear about that. The darwinian mechanism is made of two different components, applied repeatedly: RV + NS. Now, RV is the random part, while NS is a necessity mechanism. The two parts act sequentially, one after the other, and then the cycle is repeated. I want to state clearly here that any computation of the functional information in a system must be applied only to the random part, that is to the ability of RV to generate a specific result without any help from a necessity mechanism. You go on: However, evolution proceeds by many small changes. If each mutation generates 10 bits of CSI, it only takes 15 mutations to hit your 150 bit boundary. (Well, just to be precise, it would require about 17.3 mutations of two AAs, because each of them is 8.65 bits; IOWs, my threshold corresponds to about 35 coordinated mutations, as I believe I had said). But, in essence, that's perfectly correct. I agree with you. That is a perfectly reasonable model. If... Yes, you have forgotten a very important if: that each of the "steps" of two AAs mutations must be functional and selectable. IOWs, each step must be visible to NS (which is not a trifle: it must confer a reproductive advantage). Why? Because the two AAs mutation (about 10 bits, in the range of what a random biological system like RV can achieve) must be functional, must be positively selected, and must expand in the population and be fixed against further variation.
So, your model seems to be: any transition of 35 AAs or more can easily be deconstructed into small transitions of two AAs, each of them functional and selectable thanks to a specific reproductive advantage. Interesting indeed. While I see no logical reason why that should be generally true, I would certainly be very happy to analyze any specific darwinian model for such a deconstruction of any of the complex transitions we know must have happened. I have repeatedly suggested a context which you have always refused to comment upon: the emergence of new protein domains, of new protein superfamilies, which we know has happened repeatedly in natural history. Well, we know that each protein superfamily is isolated from the others at the level of primary and tertiary structures. Very simply, the primary sequence of each superfamily bears no similarity to the sequence of others (that's how they are defined), and the folding is different. Well, we have hundreds of examples where a new superfamily appears at some point of natural history. That has happened hundreds of times. I ask: how? Please, show a model of that emergence from some pre-existing, different superfamily in a different species, where a transition of "at least" 35 functional AAs has been achieved through single steps of two-AA transitions, each of them bearing a functional gain selectable by NS through a reproductive advantage. Please, show one such example from the rich darwinian literature, with those minimal 17 functional steps molecularly verified, and then we can really discuss your beautiful model. Then you will have demonstrated that that transition implied no real emergence of CSI, because it can be obtained through a mechanism where the RV part is in the range of what randomness can achieve, and the NS part is well described and documented in detail. That will be a good start. Then you have only to show that the same model can work for all the thousands of protein superfamilies we know. But don't worry: a good start is half the battle.
gpuccio
September 6, 2010, 01:44 PM PDT
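Checking the step arithmetic in gpuccio's reply above:

    import math

    step_bits = 2 * math.log2(20)     # one fully specified 2-AA step
    print(f"{step_bits:.2f} bits per step")           # ~8.64 ("8.65" above)
    print(f"{150 / step_bits:.1f} steps to 150 bits") # ~17.4, i.e. ~35 AAs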
Warmabomber is but fulfilling the views of Energy Sec. Holdren, etc. Who is responsible for Warmabomber's violent agenda? By Glenn Harlan Reynolds:
Seeing humanity as destructive, Holdren wrote in favor of forced abortion and putting sterilizing agents in the drinking water, and in particular of sterilizing people who cause “social deterioration.” . . . In contemporary America, no respectable person would advocate, say, the involuntary sterilization of blacks or Jews. Why, then, should it be any more respectable to advocate the involuntary sterilization of everyone? Or even of those who cause “social deterioration?” Likewise, references to particular ethnic or religious groups as “viruses” or “cancers” in need of extirpation are socially unacceptable, triggering immediate thoughts of genocide and mass murder. Why, then, should it be acceptable to refer to all humanity in this fashion? Does widening the circle of eliminationist rhetoric somehow make it better?
DLH
September 6, 2010, 09:19 AM PDT
Mathgrrl, you keep saying that no one has provided you with a mathematical definition of functional information, yet I have given you a working definition, and gpuccio and kairos have given you a much more detailed definition. It is simply ludicrous for you to state this: 'If there is no mathematical definition of "functional information", for example, then tgpeeler's claims about how much of it can be generated by evolutionary mechanisms are simply meaningless. Frankly, I'm very surprised to see the resistance to providing detailed definitions.' ,,, Yet here is the definition again,,,, Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160 Functional information and the emergence of bio-complexity: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions. Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47
bornagain77
September 6, 2010, 09:16 AM PDT
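The Hazen/Szostak definition quoted above can be evaluated directly; the fractions below are made-up inputs, not measured values:

    import math

    def functional_information(f_ex):
        # I(Ex) = -log2 F(Ex), with F(Ex) the fraction of configurations
        # achieving at least the degree of function Ex.
        return -math.log2(f_ex)

    for f in (0.5, 1e-6, 1e-45):
        print(f"F(Ex) = {f:g} -> I(Ex) = {functional_information(f):.1f} bits")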
CannuckianYankee,
I think the point we’re all attempting to make here is that scientific questions often do not lend themselves to rigorous mathematical equations and definitions as you seem to require of ID.
The terms CSI, functional information, FSCI, and others are used very frequently on this site and are often associated with quantitative values and claims about real world scenarios. I don't believe it is unreasonable to expect rigorous definitions of those terms. If there is no mathematical definition of "functional information", for example, then tgpeeler's claims about how much of it can be generated by evolutionary mechanisms are simply meaningless. Frankly, I'm very surprised to see the resistance to providing detailed definitions. ID makes claims about the real world. How do you propose to test those claims without knowing how to measure the quantities that you are discussing?
MathGrrl
September 6, 2010, 08:36 AM PDT