Uncommon Descent Serving The Intelligent Design Community

Evolutionist: You’re Misrepresenting Natural Selection


How could the most complex designs in the universe arise all by themselves? How could biology’s myriad wonders be fueled by random events such as mutations?

Comments
Elizabeth: Well, I am happy that I can make my essay without losing too many marks :)
gpuccio
January 2, 2012 at 12:31 AM PDT
Right. I'm glad you agree with me that macroevolution, by gpuccio's definition, doesn't exist. So there's no point in his claiming that it can't be explained by evolutionary theory, is there? (Did you inadvertently back the wrong horse there, Joe? ;))
Elizabeth Liddle
January 1, 2012 at 11:57 AM PDT
Probability isn't especially mysterious; you just need to be clear what you mean by it in any given context. Sometimes it's used as a frequency estimate, sometimes as a measure of uncertainty. The two aren't the same, though, mathematically, so it's important to distinguish, and to interpret the word appropriately. And the reason I prefer "stochastic" to "random" is that it has a much more precise meaning. "Random" is a disaster :)
Elizabeth Liddle
January 1, 2012 at 11:54 AM PDT
I'll have to respond in pieces, gpuccio, but let me start at the end:
Well, I was not speaking of a logical falsification, but of an empirical falsification. The reasoning is simple. You propose a model. You empirically define some effect that derives from the model, and a minimal threshold of that effect that is empirically interesting. Then you run the experiment, dimensioning your samples so that the power of the test will be 95%. Then you find a p value of 0.30. You don't reject the null hypothesis. At the same time, with a power of 95% and a beta error of 5%, you can affirm that it is very unlikely that your model is good, or at least that it is good enough to produce an effect of the size you had initially assumed. Empirically, that means that your model is not only not a good explanation, but realistically not a useful explanation at all.
I agree that you can falsify the hypothesis that an effect is greater than some threshold effect size. Cool :)
Elizabeth Liddle
January 1, 2012 at 11:27 AM PDT
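The power/beta-error reasoning quoted in the comment above can be made concrete with a minimal sketch. The numbers (a standardized effect size of 0.5, alpha = 0.05, target power = 0.95) and the two-sample z-approximation are illustrative assumptions, not anyone's actual study design.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.95):
    """Approximate sample size per group (two-sided, two-sample z test)
    needed to detect a standardized effect of at least `effect_size`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Hypothetical design: the smallest effect considered interesting is 0.5 SD.
n = n_per_group(0.5)
print(f"about {n:.0f} subjects per group")  # roughly 104 per group

# With that n, beta = 1 - power = 0.05: a true effect of at least 0.5 SD
# would be missed only about 5% of the time. So a clearly non-significant
# result (say p = 0.30) makes an effect of that size very unlikely -- the
# "empirical falsification" described in the comment above.
```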
Elizabeth: Only now have I read your post 19.2. Here are my comments. Well, I do believe that you don't understand statistics well. Let's see:

"Again, that word “random”. You really need to give a tight definition of it, because it has no accepted tight definition in English."

I have given it. I quote from my post 13: "a) A “random system” is a system whose behaviour cannot be described in terms of necessity laws, usually because we cannot know all the variables and/or all the laws, but which can still be described well enough in terms of some probability model. The tossing of a coin is a random system. Genetic variation, if we don't consider the effects of NS (differential reproduction), is a random system. NS introduces an element of necessity, due to the interaction of reproductive functions and the environment." I can be more clear, and say that a system is not "random" in the sense of not being governed by laws of necessity. I have always been clear about that. It's our model of the system that is probabilistic, because that's the best model that can describe it. Much of your confusion in the following derives from your misunderstanding of my point about that.

"And we simply do not know whether gravity is subject to quantum uncertainty or not."

Again confusion. I have never spoken of "gravity", but of "Newton's law of gravity". Again you misunderstand: I am speaking of the scientific model, and you answer about the noumenon!

"Newton's law is indeed deterministic, but it is only a law, not an explanation, not a theory. And all a law is, is a mathematical description that holds broadly true. That doesn't make it a causal relation."

From Wikipedia: "A scientific law is a statement that explains what something does in science just like Newton's law of universal gravitation. A scientific law must always apply under the same conditions, and implies a causal relationship between its elements. The law must be confirmed and broadly agreed upon through the process of inductive reasoning. As well, factual and well-confirmed statements like 'Mercury is liquid at standard temperature and pressure' are considered to be too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing scientific laws from principles that arise merely accidentally because of the constant conjunction of one thing and another." A law, like a theory, is about causal relations. A theory is usually wider than a simple law, but essentially they are the same thing: an explanatory model of reality, based on logical and mathematical relations.

"We do not know whether mass is the cause of gravitational force. It may be that gravitational force is the cause of mass. Or it could be that “cause” is itself merely a model we use to denote temporal sequence, and ceases to make sense when considering space-time. But I'm no physicist, so I can't comment further except to say that you are making huge and unwarranted assumptions here."

And you are completely wrong here. Philosophically, you can dispute whether we can ever assess a true causal relation (Hume tried to do exactly that). But science is all about assumed causal relations. Newton's model does assume that mass is the cause of gravitational force, and not the other way round. Again, here, you make a terrible confusion between scientific methodology and modeling, and mere statistical analysis of data.
While statistics can never tell us which is the cause in a relation, methodology can (with all the limits of scientific knowledge, obviously). For instance, if there is a statistical correlation between two variables, that's all we can say at the statistical level. But, if one variable precedes the other in time, methodology tells us that a causal relation, if assumed, can only be in one direction. That's why a scientific model is much more than the description of statistical relations. A model makes causal assumptions, and tries to explain what we observe. Obviously, we well know that scientific models are never absolute, and never definitive. But they can be very good just the same.

"Depends what you mean by “nothing to do with probability”."

I thought it was clear. It means that the evolution of a wave function is mathematically computed, in a strictly deterministic way. There is no probability there. I am talking about scientific (explanatory) models.

"I am too. There is always unmodelled variance, if only experimental error."

Now, I will be very clear here, because this is the single point where you bring in most of the confusion. Any system, except for quantum measurements implying a collapse of the wave function, is considered to be deterministic in physics. Therefore, in principle, any system could be modeled with precision if we knew all the variables, and all the laws implied. That would leave no space for probability, exactly the opposite of what you state. But, obviously, there is ample space for probability in science. So, in science, we model those parts of the system that we understand according to causal relations (the "necessity" parts), and we describe probabilistically those parts that we cannot model that way (the "random" parts). If you compute a regression line, you are creating a necessity model (the regression line is a mathematical object), assuming a specific mathematical relationship between two variables. If your model is good, it will explain part of the variance you observe, and if you are happy with that, you can make methodological assumptions, propose a theory and a causal relationship. That is a methodological activity, and it is supported by statistical analysis, but not in any way determined by it. And you will have residuals, obviously. Still unexplained variance. What are they? The effects of other variables, obviously, including sampling error, measurement errors, and whatever else. (A short numerical sketch of this point follows after this comment.) It is obvious that, if we could model everything, we would have a strict necessity system (unless agents endowed with free will are implied :) ). But we treat that part as random variance, exactly because we can't model it. If we can model part of that residual variance by some new necessity relation, then we can refine our model, which will become better. That's what you seem not to understand. The model is explanatory, and whenever possible it is based on necessity and assumed causal relations. The data will always have some random component, because usually we cannot model all the necessity interactions, even if only because of measurement errors. Quantum mechanics, for its probabilistic part, is the only model I know that is supposedly based on intrinsic randomness (and even that is controversial).

"It is often possible to use deterministic models, even when the underlying processes are indeterminate."

What does that mean? All processes, in principle, are determinate (except for what I said about quantum processes).
If we use a deterministic model, it's because we believe that it describes well, to a point, what really happens. We may be wrong, obviously. But that's the idea, when we do science.

"Similarly we often have to use stochastic models even when the underlying processes are determinate."

Wrong. We use probabilistic models when we have no successful model based on necessity. And the underlying processes are always determinate; it's our ability to describe them that determines the use of a necessity model or of a probabilistic model.

"I think a big problem (and I find it repeatedly in ID conversations) concerns the word “probability” itself, which is almost as problematic as “random”. Sometimes people use it as a substitute for “frequency” (as in probability distributions which are based on observed frequency distributions). At other times they use it to mean something closer to “likelihood”. And at yet other times they use it as a measure of certainty. We need to be clear as to which sense we are using the word, and not equivocate between mathematically very different usages, especially if the foundation of your argument is probabilistic (as ID arguments generally are)."

That again shows great confusion. There is no doubt that the nature of probability is a very controversial, and essentially unsolved, philosophical problem. If you have time to spend, you can read about that here: http://plato.stanford.edu/entries/probability-interpret/ But the philosophical difficulties in interpreting probabilities have never prevented scientists from using the theory of probability efficiently, any more than the controversial interpretations of quantum mechanics have prevented its efficient use in physics. You seem to have strange difficulties with words like probability and random. Only "stochastic" seems to reassure you, for reasons that frankly I cannot grasp :) For your convenience,

"No. Pretty well all sciences, and certainly life sciences, use models in which the error term is extremely important. And biology is full of stochastic models. In fact I simply couldn’t do my job without stochastic models (and I work in life-science, but in close collaboration with physicists)."

Again, a model can be a necessity model, and still take into account error terms, which will be treated probabilistically. The word itself, "error", refers to a concept of necessity. Measurement errors, for instance, create random noise (unless they are systematic) that makes the detection of the necessity relation more difficult.

"I think you are confusing a law with a model. A law, generally, is an equation that seems to be highly predictive in certain circumstances, although there are always residuals – always data points that don’t lie on the line given by the equation, and these are not always measurement error. We often come up with mathematical laws, even in life sciences, but that doesn’t mean that the laws represent some fundamental “law of necessity”. It just means that, usually within a certain data range (as with Newton, and Einstein, whose laws break down beyond certain data limits) relationships can be summarised by a mathematical function fairly reliably – perhaps very reliably sometimes."

????? What does that mean? Laws, models and theories are essentially the same kind of thing. OK, laws are more restricted, and usually more strongly supported by data. But the principle is the same: we create logical and mathematical models to explain what we observe. The residuals are not necessarily evidence that the law is wrong.
As already said, they can often be explained by measurement errors, or by the interference of unknown variables. And of course, sometimes they do show that the law is wrong. And what do you mean when you say "but that doesn't mean that the laws represent some fundamental “law of necessity”"? Laws are laws of necessity. How fundamental they are depends only on how well they explain facts. And obviously, if by "fundamental" you mean absolute, then no scientific law, or model, or theory will ever be "absolute". But they can be very good and very important.

"This is a false distinction in my opinion. You can describe the results of a single coin toss by a probabilistic model just as well as you can describe the results of repeated tosses."

???? What do you mean? How can probability help you describe "the results of a single coin toss"? What can you say? Maybe it will be heads? Is that a description of the event? But if you toss the coin many times, you can observe mathematical regularities. For instance, you can say that the percentage of heads will become nearer to 50% as the number of tosses increases. That is a mathematical relation, but as you can see it is not a necessity one: it is based on a probability distribution, a different mathematical object.

"But if you want to predict the results of an individual toss, as opposed to the aggregate results of many tosses, you need to build a more elaborate model that takes into account all kinds of extra data, including the velocity, spin, distance and angle etc of the coin."

OK. As I have said from the beginning, each single toss is determined. Completely determined.

"And you cannot possibly know all the factors, so there will still be an error term in your equation."

Yes, but most times the error term can be made small enough that the prediction is empirically accurate. Otherwise we could never compute trajectories, orbits, and so on.

"In other words, predictive models always have error terms; sometimes these can be practically ignored; at other times, you need to characterise the distribution of the error terms and build a full stochastic model."

As already said, the fact that a data analysis includes a probabilistic evaluation of errors does not mean that the explanatory model is not a necessity model.

"I agree that characterising uncertainty is fundamental to scientific methodology. I disagree that stochastic and non-stochastic models are “deeply different”. In fact I’d say that a non-stochastic model is just a special case of a stochastic model where the error term is assumed to be zero."

Again you stress the error term unduly. Random errors are an empirical problem, but they do not imply that a necessity theory is wrong. The evaluation of a theory implies much more than the error term. It means evaluating how well it explains observed facts, whether it contains internal inconsistencies, and whether there are better necessity theories that can explain the same data.

"No. That is not the purpose of Fisher's test, which has nothing to do with “causal necessity” per se (although it can be used to support a causal hypothesis)."

Please, read again what I wrote: "Take Fisher's hypothesis testing, for instance, which is widely used as research methodology in biological sciences. As you certainly know, the purpose of the test is to affirm a causal necessity, or to deny it." Well, I could have repeated: the purpose of the test as used in biological sciences. Must you always be so fastidious, and without reason?
You know as well as I do that Fisher's hypothesis testing is used methodologically to affirm, or deny, or just leave undetermined, some specific theories. As I have already said, it is never the statistical analysis in itself that affirms or denies: it's the methodological context, supported by the statistical analysis.

"Nor can you use Fisher's test to “deny” a “causal necessity”."

Why not? If you compute the beta error and power, you can deny a specific effect of some predefined minimal size with a controlled error risk, exactly as you do when you affirm the effect with the error risk given by the alpha error. Otherwise, why should medical studies have sufficient statistical power?

"However, if Fisher's test tells you that your observed data are quite likely to be observed under the null, you cannot conclude that your hypothesis is false, merely that you have no warrant for claiming that it is true."

I don't agree. If the power of your study is big enough, you can conclude that if a big enough effect compatible with your model had been present, your research should have detected it (always with a possibility of error measured by the beta error).

"Right. Except that a good scientist will then attempt to devise an alternative hypothesis that could also account for the observed data."

Alternative hypotheses must always be considered. That is a fundamental of methodology.

"If you reject the null hypothesis, that does not automatically validate your model."

I am very well aware of that. Let's say that, if you reject the null hypothesis, you usually propose your model as the best explanation, after having duly considered all other explanations you are aware of.

"But if you “retain the null”, by Fisher's test, you cannot conclude that your hypothesis is false. Fisher's test cannot be used to falsify any hypothesis except the null. It cannot be used in the Popperian sense of falsification, in other words."

Well, I was not speaking of a logical falsification, but of an empirical falsification. The reasoning is simple. You propose a model. You empirically define some effect that derives from the model, and a minimal threshold of that effect that is empirically interesting. Then you run the experiment, dimensioning your samples so that the power of the test will be 95%. Then you find a p value of 0.30. You don't reject the null hypothesis. At the same time, with a power of 95% and a beta error of 5%, you can affirm that it is very unlikely that your model is good, or at least that it is good enough to produce an effect of the size you had initially assumed. Empirically, that means that your model is not only not a good explanation, but realistically not a useful explanation at all. That is empirical falsification, not in the Popper sense, but in the sense that counts in biological research. More in another post.
gpuccio
January 1, 2012 at 11:15 AM PDT
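The regression point made earlier in the comment above (a fitted line as a "necessity" model, with the residuals treated as random variance) can be illustrated with simulated data; the slope, intercept and noise level below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(scale=2.0, size=x.size)  # a "law" plus unmodelled noise

# The "necessity" part: fit a straight line y = a*x + b.
a, b = np.polyfit(x, y, 1)
residuals = y - (a * x + b)

# How much variance the line explains, and how much is left as "random".
r_squared = 1 - residuals.var() / y.var()
print(f"slope = {a:.2f}, intercept = {b:.2f}, R^2 = {r_squared:.2f}")
# The fitted line accounts for most of the variance; the residuals are the
# part described probabilistically because it is not (yet) modelled.
```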
No one can provide an example of something that doesn't exist. That's the point: your position doesn't have any examples to call upon.
Joe
January 1, 2012 at 10:51 AM PDT
Umm the whole point is there ain’t any examples of macroevolution so defined.
Exactly Joe. That is precisely the point I'm making. So until you or someone provide one, there's nothing that needs to be explained.
Elizabeth Liddle
January 1, 2012 at 10:29 AM PDT
Umm the whole point is there ain't any examples of macroevolution so defined. And conferring greater fitness is not an added function.
Joe
January 1, 2012 at 09:53 AM PDT
But, gpuccio, the more you tell me about your dFSCI, the more useless it seems to me as a metric! It doesn't seem to be highly correlated with usefulness. And useless information isn't really information, is it? What good is information if it doesn't tell you what you need to know? That's why I asked you whether the val allele would still have 4.3 bits more information than the met allele if the environment changed so that the met allele becomes the allele that confers greater fitness. That seems to me a really crucial question.
Meanwhile, I would say that I generally reason on the AA sequence, assuming that synonymous mutations do not affect the protein function (I know that is not always true; let's say it is an approximation). It is probably possible to reason on the nucleotide sequence, but Durston works at the AA level, and so do I.
But val to met is not a synonymous mutation and it does affect function. And you are right, even "silent" mutations (GUA to GUC for instance) can apparently affect function. Which is where your definition of function starts to fall apart, or rather, where we have to distinguish carefully between function as in "helps the organism reproduce" and function as in "has some biochemical properties". It is relatively easy to measure the functional value of a variant in my sense (by measuring relative fitness; a short worked example follows after this comment); I have no idea how you measure functional value simply from the biochemistry. That's why I keep accusing you of forgetting the phenotype! Surely it is the effect of a sequence on the phenotype that determines its functional value? And shouldn't that be a major input into your information metric?
Regarding the question about the environment change, we have to define the added function at the biochemical level, whatever it is. The change of 4.3 bits is tied to adding the biochemical difference. Whether it is useful or not in the environment, nothing changes. We define the function as “locally” as possible, usually in terms of biochemical properties. It is obviously possible to define the function more generically, such as “adding reproductive power in this specific environment”, but that would be scarcely useful, and anyway tied to a particular environment. As I have tried to repeat many times, dFSCI works if it describes the information needed to obtain some very well defined, objective property. Biochemical activities in a specified context are very good examples of a well specified function.
But this makes no sense to me. I stipulated that, originally, the val allele provided "added function", i.e. conferred greater fitness, and you said that it represented an additional 4.3 bits of information. Now, that same allele, in a different environment, confers less fitness than the met allele. So are you saying it never did add 4.3 bits of information? Or are you saying that any change of one AA adds one bit of information, even if that change makes the phenotype less fit? Or are you, as you seem to be, saying that dFSCI can only be measured with a well-defined function in a specific context? If so, it seems entirely irrelevant to evolutionary theory, which is all about adaptation to context! And yet you say that only if there is a step-change increase of > 150 bits in dFSCI can we call a change "macroevolution", right? So can you give an actual example of macroevolution, as thus defined, and say how you computed the size of the step-change?
Elizabeth Liddle
January 1, 2012 at 09:44 AM PDT
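A minimal sketch of what "measuring relative fitness" can look like in practice, as mentioned in the comment above; the offspring counts are invented, and a real estimate would of course need proper sampling and statistics.

```python
# Hypothetical offspring counts per individual carrying each allele.
offspring_val = [2, 3, 2, 4, 3, 3]
offspring_met = [2, 2, 1, 3, 2, 2]

mean_val = sum(offspring_val) / len(offspring_val)
mean_met = sum(offspring_met) / len(offspring_met)

# Relative fitness of the met allele, taking val as the reference.
w_met = mean_met / mean_val
print(f"relative fitness of met vs val: {w_met:.2f}")
# A value below 1 means lower reproductive success for met carriers in this
# (hypothetical) environment; in a different environment the same comparison
# could come out reversed.
```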
Elizabeth: Well, an answer to this: "It depends. You must tell me why the two alleles are associated with different reproduction. If the difference in the two alleles is the cause of different reproduction, then I suppose that the function cannot be exactly the same, or that there must be some indirect mechanism that connects the difference to the reproductive effect. Another possibility is that the two alleles are only linked to other proteins that influence different reproduction."

Meanwhile, I would say that I generally reason on the AA sequence, assuming that synonymous mutations do not affect the protein function (I know that is not always true; let's say it is an approximation). It is probably possible to reason on the nucleotide sequence, but Durston works at the AA level, and so do I.

Regarding the question about the environment change, we have to define the added function at the biochemical level, whatever it is. The change of 4.3 bits is tied to adding the biochemical difference. Whether it is useful or not in the environment, nothing changes. We define the function as "locally" as possible, usually in terms of biochemical properties. It is obviously possible to define the function more generically, such as "adding reproductive power in this specific environment", but that would be scarcely useful, and anyway tied to a particular environment. As I have tried to repeat many times, dFSCI works if it describes the information needed to obtain some very well defined, objective property. Biochemical activities in a specified context are very good examples of a well specified function.

About the repeat allele, I really need more details about the function. I can only say that, in general, a repetition is a compressible feature, and it implies few bits of information. Indeed, a simple repetition can probably be explained algorithmically, and I don't believe it contributes much to dFSCI. dFSCI is high in pseudorandom sequences, which cannot be efficiently compressed but still convey information.
gpuccio
January 1, 2012 at 08:45 AM PDT
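The remark in the comment above that a repetition is "a compressible feature" can be illustrated crudely; zlib-compressed length is only a rough stand-in for algorithmic complexity, and both sequences below are made up.

```python
import random
import zlib

random.seed(0)
unit = "".join(random.choice("ACGT") for _ in range(48))    # one 48-bp repeat unit
repeat_7 = unit * 7                                          # tandem-repeat "allele"
pseudorandom = "".join(random.choice("ACGT") for _ in range(len(repeat_7)))

for name, seq in [("7x tandem repeat", repeat_7), ("pseudorandom", pseudorandom)]:
    print(f"{name}: {len(seq)} bases -> {len(zlib.compress(seq.encode()))} bytes compressed")
# The repeated sequence compresses much better, because a short description
# ("write the 48-base unit seven times") reproduces it; a pseudorandom
# sequence of the same length has no comparably short description.
```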
OK, thanks!
So, let’s assume that protein A and protein B differ by one AA. Let’s say that protein B has some added function that determines better reproduction vs the replicators with protein A. We can evaluate the dFSCI of the transition from A to B, for the added function.
Let's say the sequence is identical except that in the met allele there is an AUG whereas in the val allele there is a GUG.
If only that specific AA substitution confers that added function (IOWs, if val and only val confers the added function when substituted at that site), then the dFSCI of the transition is 4.3 bits.
Would the answer be the same if the met allele was AUG and the val allele was GUC? Or is it 4.3 bits regardless of how many differences there are between the two triplets? And two more questions: If the environment now changes, and the met allele becomes the more advantageous of the two, does the val allele still have 4.3 bits more dFSCI than the met allele, or does the met allele get those bits back, as it were? And the second question is about my second pair of alleles. The 7 repeat has 7 repeats of a 48 base pair section and the 9 repeat 9 repeats. The 7 repeat allele is more advantageous. What is the dFSCI of the difference? Thanks :)
Elizabeth Liddle
January 1, 2012 at 08:19 AM PDT
Sorry, this new format is difficult to track. What answers are you waiting for?
Elizabeth Liddle
January 1, 2012 at 08:09 AM PDT
It depends. You must tell me why the two alleles are associated with different reproduction. If the difference in the two alleles is the cause of different reproduction, then I suppose that the function cannot be exactly the same, or that there must be some indirect mechanism that connects the difference to the reproductive effect.
But we can observe that the two alleles ARE associated with different reproduction rates, without knowing why, simply by measuring reproduction rates in two populations, one of which has one allele and one of which has the other. Are you saying that if you observe that an allele is associated with increased fitness that you cannot estimate the dFSCI of the increase unless you know the mechanism? Can you even say for sure whether it is an increase or decrease?
Another possibility is that the two alleles are only linked to other proteins that influence different reproduction.
Let's say that we've done an experiment and actually inserted the relevant allele into the genomes of experimental animals. Could you do it then?
You see, only darwinists can be content with a statement such as “the val allele was associated with greater reproductive success than the met allele”. All normal people would simply ask: why?
Who says this darwinist is content? Darwinists are normal people and indeed ask "why?" What I am asking you is how you compute the dFSCI of the change, because unless you can give an exemplar of a step-change of > bits then there is nothing that demands explanation! And this (partly hypothetical) case is simple: we have two pairs of proteins, slightly different, whose sequence is known, and which produce different reproduction rates.
It’s about causes, remember. About necessity models, and cause-effect relationships. That’s what science is about. Science is not only descriptive. It tries to explain things.
Yes, of course. I'm not disputing that (did you respond to my response to your post on Fisher? I'm not sure I checked). But it's also about measuring things, and in this instance I would like to know how you would measure the difference in dFSCI between two alleles that result in two slightly different proteins, one associated with greater reproductive success than the other. Let's say that we think that the difference is due to more efficient dopamine function in the allele associated with greater fitness.
Elizabeth Liddle
January 1, 2012 at 08:07 AM PDT
Elizabeth: While waiting for your answers, I will give you a general answer to your question, with a few assumptions: So, let's assume that protein A and protein B differ by one AA. Let's say that protein B has some added function that determines better reproduction vs the replicators with protein A. We can evaluate the dFSCI of the transition from A to B, for the added function. If only that specific AA substitution confers that added function (IOWs, if val and only val confers the added function when substituted at that site), then the dFSCI of the transition is 4.3 bits.
gpuccio
January 1, 2012 at 08:06 AM PDT
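The 4.3-bit figure in the comment above corresponds to -log2(1/20), i.e. exactly one functional residue out of the 20 possible amino acids at that site; a one-line check:

```python
import math

amino_acids = 20
# If only one of the 20 possible residues at the site confers the added
# function, the functional fraction of the search space at that site is 1/20.
bits = -math.log2(1 / amino_acids)
print(f"{bits:.1f} bits")  # 4.3 bits
```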
Elizabeth: For the "advantageousness of the variant", please answer my post 25.1.1.1.1
gpuccio
January 1, 2012 at 08:01 AM PDT
Elizabeth: The Durston method applies to protein families, to hundreds of sequences that have the same function. The function is considered a constraint on uncertainty in the protein sequence. Therefore, those AAs that have never varied contribute 4.3 bits to the total dFSCI, while those that can freely change contribute 0 bits. All intermediate situations contribute correspondingly to the reduction of uncertainty given by the function for each site, computed according to Shannon's formula. Please, review Durston's paper for more information.
gpuccio
January 1, 2012 at 08:00 AM PDT
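A simplified sketch of the per-site calculation described in the comment above: each aligned site contributes log2(20) minus the Shannon entropy of the residues observed at that site, so invariant sites give about 4.3 bits and freely varying sites approach 0. The toy alignment is invented, and this omits the refinements in Durston's actual method.

```python
import math
from collections import Counter

def site_bits(column):
    """Functional bits for one alignment column: log2(20) minus the Shannon
    entropy of the observed amino-acid frequencies at that site."""
    counts = Counter(column)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return math.log2(20) - entropy

# Toy alignment of four sequences, three sites (columns read vertically).
alignment = ["MKV", "MRV", "MKL", "MDV"]
columns = ["".join(seq[i] for seq in alignment) for i in range(3)]

for i, col in enumerate(columns):
    print(f"site {i}: residues {col} -> {site_bits(col):.2f} bits")
print(f"total: {sum(site_bits(c) for c in columns):.2f} bits")
# Site 0 (always M) contributes the full 4.32 bits; the variable sites less.
```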
Well, can you calculate it for the two pairs of alleles I've given you? I can hunt out the actual sequences, but for now, perhaps you could just give me the formula with the parameters, and I will try to supply the parameters? But I'm not clear where in your definition you plug in the advantageousness of the variant. If you are going to measure the increase in dFSCI of a sequence that results in increased fitness, or some potentially macroevolutionary change, how do you evaluate the increase, and where does it go in your dFSCI calculation?
Elizabeth Liddle
January 1, 2012 at 07:55 AM PDT
Elizabeth: It depends. You must tell me why the two alleles are associated with different reproduction. If the difference in the two alleles is the cause of different reproduction, then I suppose that the function cannot be exactly the same, or that there must be some indirect mechanism that connects the difference to the reproductive effect. Another possibility is that the two alleles are only linked to other proteins that influence different reproduction. You see, only darwinists can be content with a statement such as "the val allele was associated with greater reproductive success than the met allele". All normal people would simply ask: why? It's about causes, remember. About necessity models, and cause-effect relationships. That's what science is about. Science is not only descriptive. It tries to explain things.
gpuccio
January 1, 2012 at 07:54 AM PDT
Elizabeth: I paste here some recent short definitions of dFSCI I have given here. I have given much more detailed definitions, but at the moment I don't know how to retrieve them:

"Just to try some “ID for dummies”: Functionally specified information. It’s not difficult. I need a string of bits that contains the minimal information necessary to do something. The information that is necessary to do something is functionally specified. Isn’t that simple? Complexity: How many bits do I need to achieve that “something”? It’s not difficult. It is simple. Programmers know very well that, if they want more functions, they have to write more code. Let’s take the minimal code that can do some specific thing. That is the specified complexity for that function."

And about complexity:

"My definition of complexity in dFSCI is very simple: given a digital string that carries the information for an explicitly defined function, complexity (expressed in bits as -log2) is simply the ratio between the number of functional states (the number of sequences carrying the information for the function) and the search space (the number of possible sequences). More in detail, some approximations must be made. For a protein family, the search space will usually be calculated for the mean length of the proteins in that family, as 20^length. The target space (the number of functional sequences) is the most difficult part to evaluate. The Durston method gives a good approximation for protein families, while in principle it can be approximated even for a single protein if enough is known about its structure-function relationship (that at present cannot be easily done, but knowledge is growing rapidly in that field). This ratio expresses well the probability of finding the target space by a random search or a random walk from an unrelated state. dFSCI can be categorized in binary form (present or absent) if a threshold is established. The threshold must obviously be specific for each type of random system, and take into account the probabilistic resources available to the system itself. For a generic biological system on our planet, I have proposed a threshold of 150 bits (see a more detailed discussion here): https://uncommondescent.com.....ent-410355 As already discussed, the measurement of dFSCI applies only to a transition or search or walk that is reasonably random. Any explicitly known necessity mechanism that applies to the transition or search or walk will redefine the dFSCI for that object. Moreover, it is important to remember that the value of dFSCI is specific for one object and for one explicitly defined function."
gpuccio
January 1, 2012 at 07:49 AM PDT
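The complexity definition quoted in the comment above (-log2 of the ratio between functional states and the search space, with the search space taken as 20^length) can be written out directly; the protein length and the count of functional sequences below are invented placeholders, not measured values.

```python
import math

def dfsci_bits(length, functional_sequences):
    """-log2(functional states / search space), with search space = 20^length,
    following the definition quoted in the comment above."""
    search_space_log2 = length * math.log2(20)  # log2 of 20^length
    return search_space_log2 - math.log2(functional_sequences)

# Hypothetical 150-AA protein with an assumed 1e40 functional sequences.
bits = dfsci_bits(150, 10 ** 40)
threshold = 150  # the threshold proposed above for a generic biological system
verdict = "above" if bits > threshold else "below"
print(f"{bits:.0f} bits, {verdict} the {threshold}-bit threshold")
```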
So if I told you that in the first case, the val allele was associated with greater reproductive success than the met allele, and in the second case that the 7 repeat allele was associated with greater reproductive success than the 9 repeat, how would that affect your calculation of dFSCI? Or would it make no difference?
Elizabeth Liddle
January 1, 2012 at 07:45 AM PDT
Elizabeth: a) They contain the same dFSCI. The variation is neutral. In Durston's method, those variations in a protein family are exactly the way we compute the dFSCI of the family. b) Please, tell me more about the gene and its function, otherwise I cannot answer. In general, at least for a protein-coding gene, those variations that do not affect function do not contribute to dFSCI.
gpuccio
January 1, 2012 at 07:40 AM PDT
Unfortunately the explanations lack evidentiary support. You can't even refute Dr Behe's claim that two new protein-to-protein binding sites are beyond the capability of stochastic processes. Maybe in the New Year....
Joe
January 1, 2012 at 07:31 AM PDT
Hey Elizabeth: Strange that you call ID arguments untenable, yet you cannot offer anything in support of the claims for your position. The fallacy of your position appears to be that there is supporting evidence for it.
Joe
January 1, 2012 at 07:29 AM PDT
But those are not definitions! Do you have actual definitions somewhere?
Elizabeth Liddle
January 1, 2012 at 07:10 AM PDT
I mean, absolutely not true! The biological literature is full of studies addressing those questions.
Elizabeth Liddle
January 1, 2012 at 06:05 AM PDT
You and darwinists seem not to be interested in asking those questions
Absolutely not. Just check the literature.
Elizabeth Liddle
January 1, 2012 at 06:04 AM PDT
OK, let's step back a bit then (literally, heh). Can I ask you a couple of questions? First: Let's take a gene for some neuromodulatory protein. And let's say there are two alleles for this protein, and they differ in one amino acid. Where in one allele there is methionine, in the other there is valine. Both proteins perform the same function in the phenotype. Which, if either, sequence contains more dFSCI? Second: take another gene, with two alleles. In both there are tandem repeats of a 48 base pair sequence. However in one allele there are seven repeats, and in the other, nine. Again, both proteins perform the same function in the phenotype. Which, if either, sequence contains more dFSCI? If these questions are not answerable, can you explain why?
Elizabeth Liddle
January 1, 2012 at 06:03 AM PDT
Elizabeth:

"By defining “function” so idiosyncratically, you are, IMO, tying yourself in a knot."

It's a knot I am perfectly comfortable in.

"Either “function”, as you define it, applies to any compound with any biochemical effect"

Yes, it does. A compound with a biochemical effect can be defined as functional, for that specific effect. But if the existence of that compound, and its presence in the context where it reacts, can be easily explained, and require no functional information to exist, then there is no reason to infer design just because a chemical reaction happens somewhere on our planet. But if I observe a lot of chemical reagents arranged in an ordered way, so that they can react one with the other, in order and in the exact proportions, so that some rare chemical result may be obtained, it is perfectly reasonable to ask whether that context is designed. Maybe it is not designed. Maybe the configuration I observe could easily happen in a non-design context. That's why I have to analyze the "null hypothesis" of a random result in a random system. But take an enzyme. It accelerates a chemical reaction that would not occur, except at trivial rates, in the absence of that enzyme. So I define a function. And what allows the function to work? It is the specific sequence of amino acids (let's say 300) in that enzyme. And is there an explanation for that specific sequence being there? Those are the questions that we in ID want to answer. Logical, reasonable, scientific questions. You and darwinists seem not to be interested in asking those questions. Again, it's your choice. But we will go on asking them, and answering them.

"or you need to unpack “non trivial”, which, it seems to me, lands you back in phenotypic effects."

No. It lands me back in functional complexity.
gpuccio
January 1, 2012 at 04:53 AM PDT
Elizabeth:

"And many biological catalysts are not proteins at all, yet by your definition have a “function”. And some proteins with “trivial biochemical activity” nonetheless have a “function” (in terms of effect on the phenotype)."

OK. And there is no reason to think those things are designed, if only a few bits of information can give us that function.
gpuccio
January 1, 2012 at 04:42 AM PDT
Elizabeth:

"Do you see the problem?"

No. There is no problem at all. Please, go back to my definition of dFSCI and of function, and you will find the answers. I sum them up here, for your convenience :) :

a) To evaluate dFSCI, an observer can objectively define any function he likes. A defined function does not imply that the observed object is designed. I have also made the example of a stone that can be defined as implementing the function of a paperweight. I am not implying by that that the stone is designed for that function.

b) A complex function, one that needs many bits (for instance, at least 150 for a biological context) of functional information to work, allows us to make a design inference. That's what I meant by "non trivial". Trivial functions are often not designed, but they can well be defined.

Is that clear?
gpuccio
January 1, 2012 at 04:40 AM PDT