Uncommon Descent Serving The Intelligent Design Community

Two forthcoming peer-reviewed pro-ID articles in the math/eng literature


The publications page at EvoInfo.org has just been updated. Two forthcoming peer-reviewed articles that Robert Marks and I wrote are now up online (both should be published later this year).*

——————————————————-

“Conservation of Information in Search: Measuring the Cost of Success”
William A. Dembski and Robert J. Marks II

Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.

[ pdf draft ]
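The three information measures named in the abstract can be illustrated with a small worked example. This is my own sketch; the function names and the numbers are illustrative, not taken from the paper itself.

```python
# Toy illustration of the abstract's three measures (my own naming).
import math

def endogenous_info(target_size, space_size):
    # Endogenous information I_Omega = -log2(p), where p is the chance
    # that a single blind (uniform random) query hits the target.
    return -math.log2(target_size / space_size)

def exogenous_info(q):
    # Exogenous information I_S = -log2(q), where q is the success
    # probability of the assisted, problem-specific search.
    return -math.log2(q)

def active_info(target_size, space_size, q):
    # Active information I_+ = I_Omega - I_S: the contribution of the
    # problem-specific information to finding the target.
    return endogenous_info(target_size, space_size) - exogenous_info(q)

# Example: a 1-in-1024 target that an assisted search finds half the time.
print(endogenous_info(1, 1024))   # 10.0 bits of difficulty
print(active_info(1, 1024, 0.5))  # 9.0 bits supplied by the assistance
```

On these numbers, the assistance accounts for 9 of the 10 bits of difficulty; only 1 bit of "remaining" (exogenous) difficulty is left to chance.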

——————————————————-

“The Search for a Search: Measuring the Information Cost of Higher Level Search”
William A. Dembski and Robert J. Marks II

Abstract: Many searches are needle-in-the-haystack problems, looking for small targets in large spaces. In such cases, blind search stands no hope of success. Success, instead, requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level "search for a search" is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds that of unassisted or blind searches. (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially compared to the difficulty of the original search.

[ pdf draft ]
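A crude way to see why a "search for a search" is daunting is simply to count: the space of possible search strategies dwarfs the original space. This back-of-the-envelope count is my own illustration, not the paper's measure-theoretic argument.

```python
# Counting sketch (my own): even restricting to deterministic,
# non-repeating searches that make q queries in a space of n points,
# the number of distinct search sequences is n!/(n-q)!, which already
# dwarfs the n candidate targets of the original search.
from math import perm

n, q = 100, 5
print(n)           # 100 candidate targets in the original space
print(perm(n, q))  # 9034502400 distinct 5-query search sequences
```

The space of randomized searches (probability measures over such sequences) is larger still, which is the setting the paper actually analyzes.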

—————

*For obvious reasons I’m not sharing the names of the publications until the articles are actually in print.

Comments
"the usual assumptions about uniform probability distributions"

How so, pray tell?

tribune7
January 28, 2009 at 12:02 PM PDT
Re #185: I suspect you have better things to do with your time. The speaker is described as "Biophysics PhD candidate, University of Guelph." If he gets his PhD, then I will never trust anything from the University of Guelph again. E.g.:

- the usual assumptions about uniform probability distributions
- confusing relative likelihood with relative probability
- quoting papers such as Axe's, which have been thoroughly debunked in the literature, without mentioning that they are even contentious

and so on. Basically there is nothing new in it.

Mark Frank
January 28, 2009 at 11:38 AM PDT
And my motto is never trust a patrician :-) Just want to see if it will change your mind just a tad :-)

tribune7
January 28, 2009 at 10:34 AM PDT
Hey tribune, protector of the people! No, I haven't. I'll take a look when time permits. Thank you.

Prof_P.Olofsson
January 28, 2009 at 10:28 AM PDT
Professor O, have you seen UD's most recent post? https://uncommondescent.com/intelligent-design/mathematically-defining-functional-information-in-biology/

tribune7
January 28, 2009 at 10:19 AM PDT
CJY, also see my post [155]. You can beat chance by chance without mysterious explanations.

Prof_P.Olofsson
January 28, 2009 at 10:12 AM PDT
CJY[180],

"From what I understand, NFLT has already proven that in order to increase the probability of search, the search procedure needs to be matched to the correct search space."

That seems to be part of the ID folklore, but it is not what the NFLT (Wolpert & Macready 1997) says. It states that all search algorithms are equally good/bad if the fitness function is chosen uniformly. So, if the conclusion of the NFLT is false, all we can conclude is that the uniformity assumption is not satisfied. In evolutionary biology, it is quite obvious that the conclusion does not hold (for example, the Darwinian search clearly beats random search in cases we can all agree upon, such as chloroquine resistance, per Michael Behe) and that the assumptions are not met (for example, the fitness function that makes all genotypes equally fit is not very likely); hence, the NFLT simply does not apply. The ID folklore now seems to be that (a) either the NFLT applies and Darwin loses (we all agree about that), or (b) the NFLT does not apply and Darwin loses, which is completely unsubstantiated.

"Thus, when Dembski assumes uniformity, he takes the perfect non-teleological non-biased starting point and begins from there."

More ID folklore. Assuming uniformity means that you make an assumption. The only way to possibly make such an assumption valid is to state it as a prior distribution in the sense of Bayesian inference, and update it in the presence of data.

Prof_P.Olofsson
January 28, 2009 at 9:59 AM PDT
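Olofsson's point that the NFLT averages over fitness functions, and says nothing about any single structured landscape, can be seen in a toy simulation. This is my own sketch: "count the ones" is a stand-in smooth fitness function, not a biological model.

```python
# Sketch (my own): on a smooth landscape, a hill-climber beats blind
# search, because the NFLT's uniform averaging over fitness functions
# does not describe this one landscape.
import random

random.seed(1)
N = 20  # bit-string length; optimum is the all-ones string

def fitness(s):
    # Smooth "count the ones" landscape: neighbors differ by exactly 1.
    return sum(s)

def random_search(queries):
    # Blind search: best fitness seen over uniformly random strings.
    best = 0
    for _ in range(queries):
        best = max(best, fitness([random.randint(0, 1) for _ in range(N)]))
    return best

def hill_climb(queries):
    # Assisted search: flip one random bit, keep the flip if fitness
    # does not drop.
    s = [random.randint(0, 1) for _ in range(N)]
    for _ in range(queries):
        t = s[:]
        t[random.randrange(N)] ^= 1
        if fitness(t) >= fitness(s):
            s = t
    return fitness(s)

print(random_search(200), hill_climb(200))
```

With a 200-query budget the hill-climber typically reaches the optimum of 20, while blind search typically tops out in the mid-teens; the gap is exactly the "problem-specific information" the exchange above is arguing about.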
Rob: "Informal claims such as 'it is as difficult to find a search as it is to find a target' are not supported by the paper unless you make a lot of arbitrary assumptions. One of the arbitrary assumptions involved in measuring active information even violates Dembski's own repeated warning. He has told us several times that 'how we measure information needs to be independent of whatever procedure we use to individuate the possibilities under consideration.' And yet, the active information measure depends very much on how we individuate the possibilities."

First, those assumptions could be part of a hypothesis which incorporates the math within these papers. As such, until the assumptions are falsified or the math is shown to be incorrect, we have a standing scientific hypothesis. Second, what are those arbitrary assumptions? You seem to be saying that these assumptions have to do with how we measure active information by individuating the possibilities, yet I don't see this as a problem at all, since the probability of a pattern is measured against a non-arbitrary uniform probability distribution. Breaking up the pattern and convoluting it into a couple of different searches would be the arbitrary action, and I'm not sure that would even return a different measurement when those separate searches are each measured against a uniform probability distribution. As far as I understand, active info = the probability associated with the bit operations taken to find the pattern minus the probability of the pattern measured against a uniform distribution. Neither the number of bit operations nor the uniform probability distribution is an arbitrary figure, so neither is their difference (active info) arbitrary.

Rob: "The recurring use of the uniform distribution in many attempts to apply math to biology is a model assumption that must be argued just like any other model assumption. Indeed, Olle Haggstrom's response to the active info approach is titled 'Uniform distribution is a model assumption.' The question asked by Haggstrom and others is, why should we always expect uniform randomness? In fact, it's an impossible expectation. If everything were characterized by a uniform distribution, then that would be a non-uniform distribution of distributions."

Actually no, that would not be a non-uniform distribution of distributions, if *everything* is characterized by a uniform distribution. If you actually mean *everything*, then that would include the distribution of distributions. Simply put, the foundational randomness and chaos from which everything arises would be uniform and would not be biased toward any specific outcome. Thus, when Dembski assumes uniformity, he takes the perfect non-teleological, non-biased starting point and begins from there. However, there is no need to expect "uniform randomness" in order to make sense of these papers. Olle Haggstrom seems to be missing the point in that one option of the papers is that there can be an infinite regress of active information and thus no true uniform search space exists "outside" of our universe. The implication would be that there has always been an infinite regress of bias in the foundational search space to produce our universe, life, evolution, and intelligence. However, that infinite regress of active info is only one of the options which I have discussed in my comment #122, and if there is indeed "uniform randomness," then the ID position becomes the more obvious and accurate choice.

One way to begin to answer whether the preceding assumptions are justified is to "test randomness" and see if it has any tendency to match non-uniform search spaces with search algorithms to generate active information. As long as no active information is generated, we can conclude that a random source (background noise and a set of laws put together with no consideration for future results) will remain uniform with respect to randomly generated search procedures. Any other assumption is merely that: an arbitrary, unnecessary, and unfounded assumption. Excellent ID research, eh?

CJYman: "The only formal proof presented is that CSI won't generate itself through only chance and law and this is implicit within these two papers."

Rob: "Just for clarification, are you saying that this formal proof is implicit in these two papers? (An implicit formal proof sounds like an oxymoron to me.) If not, where is this formal proof to be found? Thanks."

Sorry for not being more clear, but it is the fact that CSI (as a measurement of better-than-chance performance) necessarily requires active info which is implicit in these papers. The formal proof that chance and law (barring previous active info) won't generate active info is what these papers seem to produce. Thus, the formal proof that active info is not generated via chance and law also formally proves that CSI won't be generated from merely chance and law.

CJYman
January 28, 2009 at 9:28 AM PDT
Prof. Olofsson: "Let me clarify: I am questioning the claims that the 'search for a search' paper is pro-ID as was claimed in the introduction. In order to be considered pro-ID, it would have to have some implications for biology."

It only shows that random search won't match a search algorithm to a search space in order to increase the probability of finding a given target. From what I understand, the NFLT has already proven that in order to increase the probability of a search, the search procedure needs to be matched to the correct search space. The NFLT could be presented with Dembski and Marks's paper as a hypothesis with ID connections, as I've previously explained in my comments #121 and #122, and could be falsified as shown in my comment #162. Simply, if biological evolution can be modeled as a search, and if our set of natural laws can be modeled as a search (which many physicists seem to do in discussing the values of our laws in relation to all possible mathematical universes), then the implications of the paper apply to all aspects of nature, including biology.

Prof. Olofsson: "I ask, for example, (1) what is the rationale behind claiming that search algorithms are chosen according to probability distributions, in particular the K-W distribution [154]? and (2) if they are, why do these distributions have to be uniform [159]? and point out (3) even if they are uniform, they are still quite likely to beat random search [155]."

1) It seems that the only assumption necessary is that the laws generating the search algorithm and search space are one out of many possible mathematical configurations of those laws, and that whatever produces the search algorithm and search space matches those laws in a fashion which is blind to future results. Thus, the matching of (values for) laws = a search through a probability of all possible (values for) laws. I don't see how this assumption could be controversial unless the universe and its laws were eternal (without beginning) and the only possibility.

2) They don't have to be. Don't the papers merely show that the degree to which a search space is non-uniform is the same degree to which the higher-order search for that search space is non-uniform? As I have explained previously (in #121 and/or #122), this allows us the option of an infinite regress of active information.

3) Are you saying that any type of search through a uniform search space will return better-than-chance performance? Doesn't that directly contradict the NFLT? Are you saying that searching for an evolutionary algorithm through unguided processes (the matching of search procedure to search space with no consideration for future results) will stand a better chance of being found than attempting to locate, at better-than-chance results, the original pattern that the search algorithm is to find? It seems that you are saying that it is easier for non-foresighted processes to find the matching of algorithm and search space responsible for finding an optimized antenna than it is to find that optimized antenna by random search. Can you provide any evidence for this? If that were true, then I ask again: if what you are stating is true and actually has practical effect (i.e., actually happens in the real world), why doesn't anyone just show that background noise (chance) and an arbitrary set of laws (a set of laws collected without any consideration for future results, absent foresight) will produce systems which process signs/units and evolve into greater and greater specified complexity, and just falsify ID theory and be done with it? If evolution is so powerful absent previous foresight, why do programmers bother programming boundary conditions for future known targets into the evolutionary algorithms used to solve problems (i.e., a max-efficiency antenna shape)?

CJYman
January 28, 2009 at 9:22 AM PDT
Gpuccio, that was very clear and informative, thanks. There were a number of things I did not understand, both about antibody maturation and about what you believe. I realise now that you proposed antibody maturation only as an analogy of the supposed design process and not as an example.

Mark Frank
January 28, 2009 at 8:22 AM PDT
ROb[165], Good point! :)

Prof_P.Olofsson
January 28, 2009 at 7:25 AM PDT
rna: Thank you for your very correct post, with which I agree completely. What you say is true, but I am afraid you have misunderstood my point (I must not be in good form these days). In the above discussions, I was not arguing about how rich the general space of proteins is in functional islands, or about how big those islands of functionality are. I have often discussed that problem elsewhere, and although I do believe that the islands of functionality are really distant and interspersed, I am perfectly aware that the problem is not at all solved, and that it is very important. All the facts that you cite are correct, even if there are other aspects, which I will not detail here. Mark can witness how, even on his blog, I have often pointed to the measurement of the size of the functional space as one of the fundamental problems, and like you I expect important clarifications from ongoing research in that field.

But in my posts I was just responding to the statements made by others that all functional proteins would lie in a "sweet spot" of the search space, and that therefore no search was really necessary. That is simply false. Now, to be more clear, let's take all the known proteome from any database of functional proteins. Let's forget, for a moment, the relationship with function, and let's look simply at their primary structures. What I was saying is that those primary structures are absolutely interspersed in the space of possible sequences (obviously, with similar proteins sharing various levels of homology). That is absolutely true, and requires a search to get to the different islands of functional structures. We may discuss how easy it could be to get to some island in a search (in other words, we could discuss whether the search space is more like the coast of Maine or like the Pacific Ocean), but it is absolutely true that it is an ocean with a lot of islands.

The things you say confirm my point: it is true that two proteins may have similar 3D structure and function, and completely different primary structure (although that's more the exception than the rule). But you see, random variation (the search) is supposed to work on the primary structure, knowing nothing of the function. It is only the selection part which is interested in the function, and with all the restraints which I have already discussed in this thread. So, those two proteins are distant islands for the search, because of their different primary structures. Therefore, while you say "Thus, the assumption you made of functional proteins being remote islands in sequence space is maybe a little premature," I have to counter that I made no assumption: the functional proteins we know *are* remote islands in sequence space. You may argue that they could be very big islands, and not so distant as I believe. That can be discussed. But there is no doubt that they are islands, and that they are interspersed in the ocean. The "sweet spot" is only a sweet invention.

gpuccio
January 28, 2009 at 7:14 AM PDT
Mark: Now, that's where you have misunderstood my point: "You are positing that an intelligence intervenes every time an antigen enters an individual vertebrate to direct the search for an antibody. The evidence for this intervention is the improbability of all current non-design solutions."

No, I was not saying that. I don't believe that a conscious intelligence (the designer) is directly active in the process of the immune response. What I believe is that the process of immune response, in all its parts, is a very intelligent algorithmic procedure embedded in the immune system, probably in that part of the genome which controls procedures, and which we don't understand very well. For the rest, it works like any other software: a complex algorithm which produces intelligent results mechanically, because it has been programmed to do so. So, the designer does not intervene directly, but he does intervene indirectly, through the programmed procedure. Now, you will have noticed that the procedure is very complex, and that it requires the careful interaction of many different cell types. So, it is certainly a very good example of a complex reality which suggests design. But that was not the reason why I cited it in our discussion. As you remember, our discussion was about the modalities by which specific proteins could be obtained in a design perspective. I quote from my post:

"Let's take an example. In the case of antibody maturation, which I have cited, the intelligent procedure embedded in the immune system realizes a very significant increase of the affinity of the primary antibody response in just a few months. As I have said, that is attained through targeted hypermutation and intelligent selection through direct measurement of the affinity of the new clones to the antigen. And still, that is a rather indirect method, because the system does not know in advance the sequence to be found, but only the function to be measured."

So, my reasoning is as follows. We don't know the modalities by which the designer implements the information "directly," when he designs the genome or its variations, but we have at least this interesting model where the designer "indirectly" designs proteins, through an embedded procedure. And this model is interesting because it uses partial random search, coupled to function measurement for selection. That is also the main method used by protein engineers today. So, as we know that the designer uses that kind of algorithm in the model of the immune system, it could be reasonable to hypothesize that he could use it also in building the genome information, but this time "directly," for instance through targeted, guided random variation (think of some possible hypermutation process on a duplicated gene), followed by function measurement and direct selection (which need not pass through the long process of genome expansion through reproductive advantage; the "good" results could just be kept, and the bad results passed again through hypermutation). Alternatively, I have also offered the possibility that the designer may work knowing the solution already: in that case, he could still use targeted hypermutation, but just "keep" the correct nucleotides, without any preliminary function measurement. The third alternative is direct, intelligent mutation. That would obviously be the easiest way. That was simply my argument. I hope it is clear now.

gpuccio
January 28, 2009 at 6:45 AM PDT
gpuccio: I would be really careful with any statements regarding the possible distribution of functional proteins in sequence space. This is an area under intense experimental investigation, and I don't think a consensus view has emerged yet. Let me give some examples.

First, consider DNA or RNA in vitro selections. This is of course an artificial and simplified example, but it gives an idea of the distribution of functional solutions to a chemical problem (enzyme activity, binding of a ligand, regulation) in sequence space. In vitro selection experiments typically start with a pool of random RNA sequences 40 to 70 nucleotides in length, e.g. with a sequence space of 4 to the power of 40, or ~10 to the power of 24. Due to technical limitations, normally only 10 to the power of 14 molecules can be synthesized and investigated (more do not fit in a test tube :-)). Thus only a very tiny fraction of the possible sequence space can be explored in these experiments. Yet in these experiments normally not just one but multiple solutions to a given problem are found, and normally one must restrict oneself to looking only at the most abundant solutions. This seems to indicate that the sequence space is rather rich in possible solutions (functional sequences) and, moreover, that for any given problem multiple structurally and sequentially unrelated solutions can be found. The functional RNAs found in such experiments do not only do trivial things: there have been molecules working, e.g., as RNA polymerases, catalyzing peptide-bond formation, or catalyzing chemical reactions that are not even possible with naturally occurring protein enzymes.

Second, many examples are known where proteins with totally different folds and unrelated sequences catalyze exactly the same reaction using the same chemistry, such as the proteases subtilisin and trypsin, but there are many more examples.

Third, the same reaction can be catalyzed by enzymes unrelated not only in structure and sequence but also using very different chemistry, such as the metalloproteases and the cysteine proteases, and examples from many other enzyme classes can be found. If anything, this argues for a sequence space rich in functional proteins.

Fourth, there are examples where the exchange of a single amino acid, or very few amino acids, leads to a novel stable three-dimensional fold of a protein, a change in fold being a prerequisite for a possible novel function. On the other hand, each functional protein sequence seems to be surrounded in sequence space by similar sequences with the same fold and function, such as the functional myoglobins from different species.

Thus, the assumption you made of functional proteins being remote islands in sequence space is maybe a little premature.

rna
January 28, 2009 at 6:34 AM PDT
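The pool-size arithmetic in rna's first example checks out; here is a quick verification of the figures quoted above.

```python
# Verifying the numbers in the comment above: a random RNA pool of
# length 40 has 4**40 possible sequences, of which only ~10**14
# fit in a test tube.
space = 4 ** 40           # = 2**80
sampled = 10 ** 14
print(space)              # 1208925819614629174706176 (~1.2e24)
print(sampled / space)    # ~8.3e-11: a vanishing fraction is explored
```

So the experiments rna describes sample fewer than one in ten billion of the possible sequences, which is why finding multiple functional hits in such pools is taken as evidence that solutions are not vanishingly rare.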
Mark: Well, we have seen how the primary antibody response takes place. I would like to remark again that the primary repertoire is built without any information from outside antigens, and that its production can already be considered a highly engineered procedure, using random variation in a strictly controlled scenario and attaining the best blind defense against any (or most) possible antigens. But once an antigen really enters the body, information is inputted from the environment in the form of the epitopes, and is processed and stored in the antigen-presenting cells (APCs). It's that information which allows the specific selection of active antibodies from the pool of the repertoire, and their amplification through proliferation of the appropriate B cells. Let's remember also that this process is strictly controlled by the T compartment.

Then, in the following months (3-6), ensues the very interesting process of antibody maturation. That is less well understood, but we know many things about it. The process involves two different mechanisms:

a) Somatic hypermutation, probably due to specific mutating enzymes, strictly targeted to the small region of DNA coding for the active site recognizing the antigen (about 2000 bp).

b) Selection of the resulting mutated clones according to their affinity to the antigen: the clones with higher affinity are stimulated, and those with lower affinity are suppressed.

The process is cyclically repeated in the germinal center of the lymph node, until high affinity is achieved. Future, secondary responses to the same antigen are based on those high-affinity antibodies, and are much more efficient.

A few comments. The hypermutation is the random part of the process, but again it is highly engineered: it takes place only in a very short segment of DNA (the appropriate one). It is probably well controlled as to its rate and modality, and is actively accomplished through appropriate enzymes. And it takes place in a very precise window of the reproductive cycle of the lymphocyte. Now, let's go to the selection. It is almost certainly based upon exposure of each mutated clone to the antigen stored on APCs, and the subsequent inhibition or stimulation of the clones is probably effected by the usual regulatory T lymphocytes, certainly in a very precise manner, very strictly dependent on the measured affinity. So, in this second process we see again the application of targeted and controlled random variation, as in the building of the primary repertoire, but this time it:

a) starts with a pool of antibodies which are already a functional island of the space (they have been selected from the primary repertoire by exposure to the antigen);

b) is followed by measurement of the resulting affinity after the variation, and the consequent stimulation or inhibition of the clone.

In other words, here the information inherent in the antigen plays a fundamental role and "guides" the process towards its goal. I hope that's clear enough. If you have any doubts, please ask. Now, in the following post, my comments and my point.

gpuccio
January 28, 2009 at 6:18 AM PDT
Mark: My compliments, you have understood much about the immune system, but you are still missing some key points (not your fault). And I am afraid you are also missing my point in citing that system as an example (again, not your fault: I wrote very quickly, taking too many things for granted). So I will proceed this way: in this post I will try to give a general review of how the immune system works, staying as simple as possible (not an easy task). And in my next post I will elaborate better on my point about that system in our discussion.

First of all, it should be clear that for simplicity we will talk only about the B cell response (antibody mediated). So, B cells are specialized lymphocytes present in the body wherever there is lymphatic tissue, and in the blood. They derive from hematopoietic stem cells, which become committed to maturation as B lymphocytes. During the first part of life (fetal life and the first few years), the immune system undergoes a process of ontogenesis and maturation which brings it to full functionality. During that process, mature B cells are formed, which we will call here "virgin" B cells (because they have never met the antigen). Now, a very important thing happens during the maturation of virgin B cells. In the single clones of B cells, a recombination happens at the level of DNA, in a series of genes which are implicated in antibody production. As a consequence of that DNA recombination, each final clone of B cells has a different DNA (in that tiny part), and produces a different antibody. Each of those clones is represented by a number of lymphocytes with the same DNA recombination, and is distributed in the body. So, that is what is called the primary antibody repertoire. It is through it that we can have the primary immune response to almost any antigen in nature.

Now, we have to notice some important points:

a) The targets of antibodies are mainly small peptide segments, usually about ten amino acids long, called epitopes. While antigens are big molecules, usually proteins, epitopes are the real units recognized by antibodies. So, the combinatorial space of possible epitopes is big, but not huge (about 20^10).

b) The primary repertoire is achieved through a process of recombination which includes many random components, in a very controlled scenario. That's what allows the achievement of a repertoire which is blind, in the sense that it covers, in an interspersed, random way, the combinatorial space of possible epitopes. No information from outer antigens is used in that process. In the end, primary antibodies are numerous enough (nobody knows how many), and sufficiently interspersed in the space, that almost any possible antigen will meet one, and usually many, antibodies which can bind to it. Indeed, the primary immune response is polyclonal. But for the same reasons, the affinity of the primary antibodies to one specific antigen is almost always low.

c) B (and T) lymphocytes are practically the only cells in the body which undergo a somatic modification of their DNA. Their DNA is therefore slightly different from the DNA of all other cells.

Now, let's go to the primary response. When some antigen enters the organism, it is processed by specialized cells, the antigen-presenting cells, whose role is to expose the epitopes on their surface, correctly associated with other molecules, so that virgin B cells may be exposed to them, until the clones which have an antibody that can bind the epitope are found and their proliferation begins, stimulated and controlled by other lymphocytes, the regulatory T cells. The competent clones proliferate and start producing their antibodies, giving a first defense to the organism just a few days after exposure to the antigen. That is the primary response. It is specific, but at low levels of affinity.

Well, I think I will stop here and continue in the next post.

gpuccio
January 28, 2009 at 5:42 AM PDT
Gpuccio Re #169 Thanks to you I am learning a lot of biochemistry. Debate is very powerful learning process if properly facilitated – in this case we are doing well at facilitating ourselves. The example of antibody maturation is rather interesting. If I understand it correctly, it really shows up the difference between non-design approaches and design approaches to solving problems. In particular, because it is a restricted problem, it makes clear the negative nature of the evidence for the design – at least in this case. As I am not a biochemist I will check my understanding of some basics: 1) When an antigen enters a vertebrate system then after an initial generic response the vertebrate immune system creates an antibody that is specific to that antigen (it locks on to it and marks it for destruction by T cells) 2) This specific antigen is created by vastly increasing the usual rate of mutation in some specific genes and areas of genes (but not in the germ cells so the mutation is not inherited) 3) Once a successful antibody is created presumably there is then some feedback to the genes to increase production of the successful antibody I will also assume that 4) The target area (the combinations that will successfully lock onto the antigen) is so small and the domain so large that it is effectively impossible to reach the target by simply spinning the wheels and hoping to come down with the right combination. 5) There no value in “partially fitting” the antigen – a mutation either fits or it doesn’t Now we have an interesting scientific problem. A non-design approach is to explore how that search might work. 
Options include: * The antigen somehow causes mutations which are closer to the target * The target area of all possible antigens is not randomly distributed around the domain but is clustered - so if the starting position(s) for mutations were in the cluster, it would greatly increase the chances of hitting the target * Mutations are not random (random in the sense that any position in the relevant genes is equally likely to mutate and is equally likely to change to one of the other three bases), and this lack of randomness increases the chances of hitting a target. There may be other options to explore, but these exist and I understand they are currently being actively researched. Now what about the design approach? You are positing that an intelligence intervenes every time an antigen enters an individual vertebrate to direct the search for an antibody. The evidence for this intervention is the improbability of all current non-design solutions. Now here is the crunch. What evidence is there for this design explanation other than the failure of known non-design explanations? When talking of bacterial flagella etc. this point gets obscured. It is possible to argue for irreducible complexity, and talk about codes and symbols etc., as positive evidence for design, because you are talking about the mystery of the whole transcription system. But in this case things are simpler. We are taking most of biology and the transcription system for granted. As far as this issue is concerned, all the rest might be designed or not. We are only asking: how do mutations end up creating the appropriate antibodies? You can see the negative nature of the design explanation by imagining that someone comes up with a plausible non-design explanation. What evidence remains for a design explanation? None. By finding a plausible non-design explanation you have removed all evidence for design.
Ergo the only evidence for design was the lack of a plausible non-design explanation.
Mark Frank
January 28, 2009, 03:12 AM PDT
djmullen: How boring. Typical behavior once again. When the discussion is no longer rewarding, just: 1) Make extreme and false affirmations about what the other has said: "Let's see, all I have to do to convince you is to basically document every single evolutionary step from the first reproducing chemical to you, me and every other protein in every other organism on the planet?" I would have been happy with a single example for one protein, just as a start... 2) Shoot a series of irrelevant and rather silly questions as a form of personal attack: "Fine. Let's see you do the same thing. You say that some proteins could not possibly have been produced by a series of small evolutionary steps. OK. Which proteins?" Practically all of them. "If they weren't produced by evolution, then what steps were taken to produce them?" Please see my detailed answers to Mark at #134 and #169. In brief, possibly the same mutations and selections hypothesized by darwinists, but intelligently guided, and not random. "Was some mega-mutation necessary to produce some particular protein, changing hundreds or thousands of DNA bases at once?" Not necessarily. But it's always a possibility. "Which bases were changed?" Those which, according to the design, needed to be changed. "What were they before and after the change?" Before the change they were those of the ancestor, after the change those of the target protein. "When did this happen?" In the course of natural history, as new species appeared. But some of my IDist friends would prefer the theory of frontloading, which I personally don't like very much. You choose. "How was it done?" I don't know. That's open to research. But again, if you read my answers to Mark, you will find some suggestions. "How was the correct pattern found?" By design. Again, in my answers to Mark you will find two different possibilities: a) the designer knew the solution; b) the designer knew the searched-for function, and could measure it.
"Please document your claims. Saying 'The Designer' is not documenting anything unless you can tell me about The Designer." Then how is it that you are here, on an ID blog? Let me understand: you are listening to people who have always stated clearly that they have, at present, no scientific knowledge about the designer, and you go on discussing in detail many serious issues of the theory with them, and now suddenly you come out with such fundamental disbelief in its basic premises? Tell me, what's happened? "Who is he, where is he, how did he get this info?" My personal (non-scientific) opinion? He is a God, He is transcendent, but also very much present in His creation, He can easily get all the info He needs. But I would not discuss these non-scientific points here. "Do you have any independent evidence of his existence other than your protein sequences which you claim can't be produced by evolution?" A lot. But it's not scientific evidence, at least not yet. But, at the scientific level, I am very happy with my protein sequences which can't be produced by evolution. Very happy indeed. 3) Take the personal attack to another level, challenging the other's competence and doubting his "authority", possibly tying him to some other supposedly uncomfortable people: "While we're on this subject, what is your expertise in proteins? I've been reading your messages about BLASTing protein sequences and I'm uncomfortably reminded of Salvador Cordova's Avida fiasco. Have you had any training on proteins or the BLAST software?" First of all, my expertise in proteins is not your business. This is a blog, and not a scientific journal. And I, and all the other sincere people who come here, have never spoken "from authority", or requested any academic title or position from others. Second, I feel honored to be compared to Salvador Cordova. And third, as I have said many times here and it's not a secret, I am a Medical Doctor.
And finally, I am not really looking forward to your response.
gpuccio
January 28, 2009, 01:44 AM PDT
gpuccio: Let's see, all I have to do to convince you is to basically document every single evolutionary step from the first reproducing chemical to you, me and every other protein in every other organism on the planet? Fine. Let's see you do the same thing. You say that some proteins could not possibly have been produced by a series of small evolutionary steps. OK. Which proteins? If they weren't produced by evolution, then what steps were taken to produce them? Was some mega-mutation necessary to produce some particular protein, changing hundreds or thousands of DNA bases at once? Which bases were changed? What were they before and after the change? When did this happen? How was it done? How was the correct pattern found? Please document your claims. Saying "The Designer" is not documenting anything unless you can tell me about The Designer. Who is he, where is he, how did he get this info? Do you have any independent evidence of his existence other than your protein sequences which you claim can't be produced by evolution? While we're on this subject, what is your expertise in proteins? I've been reading your messages about BLASTing protein sequences and I'm uncomfortably reminded of Salvador Cordova's Avida fiasco. Have you had any training on proteins or the BLAST software?
djmullen
January 28, 2009, 12:39 AM PDT
Mark (#156): I come back to you with real pleasure. Your posts are always intelligent, sincere and creative. I agree with you that frameshift mutations as a source of functional variation should not be a classical darwinian concept. The fact that they are pursuing it so earnestly, apparently even after the debunking of nylonase, is just a sign, for me, of how desperate they are. Apparently, not all darwinist biologists share JayM's faith in an unfragmented proteome. "So if we come across a protein which is radically different from others and there is no evidence that frameshift was involved, then there is a problem knowing how that protein was generated." You bet! "But it is a problem for all theories that assume DNA is created by modification of parent DNA. Unless you believe God inserts complete DNA strings into cells from time to time, this is equally a problem for a non-Darwinian theory." Yes and no. The implementation of information remains a problem, but if you do not have to search for that information, or at least if you can search for it with intelligent means, the main obstacle is overcome. Let's take an example. In the case of antibody maturation, which I have cited, the intelligent procedure embedded in the immune system achieves a very significant increase in the affinity of the primary antibody response in just a few months. As I have said, that is attained through targeted hypermutation and intelligent selection through direct measurement of the affinity of the new clones to the antigen. And still, that is a rather indirect method, because the system does not know in advance the sequence to be found, but only the function to be measured. But let's imagine that an intelligent force, which knows the sequence to be obtained, like the weasel algorithm, could just "fix", by some biochemical, or quantum, method which we presently don't know, any random variation which is correct.
Or, better still, just induce the right variation directly, "guiding" the apparently random events at the molecular level. Wouldn't that make the implementation of the information far easier? You see, the real problem is the search. It's the search which is completely out of any possibility without any intelligence and any information about the result to be attained. But, as GAs show, if you have a good oracle, you are in an altogether different situation. And, in a sense, God could well insert complete DNA strings into cells from time to time. After all, that's what plasmids and transposons, and even ERVs, seem to do. Why couldn't God do the same, directly or indirectly? Frankly, frameshift mutations are so problematic that I don't believe that even the Designer would use them. But I think there is some literature about genes which can be read in two different ways, both functional, through a different (frameshift) start of transcription (but I can't remember the details now). Well, that would be tremendously smart design! Like some brilliant enigmas, or some paintings by Escher.
gpuccio
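The "fix whatever is correct" oracle described above is essentially a partitioned (latching) Weasel search. A minimal sketch, assuming Dawkins's original 28-character target phrase, shows how completely such an oracle collapses the search compared with blind sampling of the ~27^28 sequence space:

```python
import random

# A sketch of the "fix what's correct" oracle discussed in the comment:
# a Weasel-style partitioned search in which every character that already
# matches the known target is frozen, and only wrong positions re-mutate.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space

def partitioned_search(target, rng):
    """Return the number of generations until the target is matched,
    latching (freezing) every position that is already correct."""
    phrase = [rng.choice(ALPHABET) for _ in target]
    generations = 0
    while "".join(phrase) != target:
        for i, ch in enumerate(phrase):
            if ch != target[i]:            # only wrong positions mutate
                phrase[i] = rng.choice(ALPHABET)
        generations += 1
    return generations

rng = random.Random(0)  # fixed seed so the run is reproducible
gens = partitioned_search(TARGET, rng)
print(f"matched in {gens} generations")  # typically on the order of 100
```

Blind search over the same space would face about 27^28 (roughly 10^40) candidates, which is exactly why an oracle that knows the target, or can measure progress toward it, changes the situation entirely.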
January 27, 2009, 03:52 PM PDT
CJYman:
Since CSI is a measurement of finding a specified or pre-specified pattern at better than chance performance, problem specific information is required to find it (according to NFLT). Problem specific information is measured as active information within these two papers. Do you see the connection?
One problem with the connection that you're drawing is equivocation on the word "chance". Under the active info framework, chance means a uniform distribution. Under the old CSI framework, it meant any distribution, including the distribution conferred by RM+NS. Dembski in NFL: "Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination." Dembski in his Specification paper: "Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms." So products of evolution have no CSI, by definition, in spite of the fact that evolutionary processes have boatloads of active information.
R0b
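R0b's point about the baseline can be made concrete. In the first paper, active information is the difference between endogenous and exogenous information, i.e. I_+ = log2(q/p) for baseline success probability p and assisted success probability q. The numbers below are illustrative assumptions, not values from the thread:

```python
import math

# Active information per Dembski & Marks: I_plus = log2(q / p),
# where p is the success probability of the baseline ("chance") search
# and q the success probability of the assisted search.
def active_information(p_baseline, q_assisted):
    return math.log2(q_assisted / p_baseline)

q = 1 / 2 ** 5    # assisted search succeeds 1 time in 32 (assumed)

# Uniform baseline: target has probability 1 in 2^20 under blind search.
print(active_information(1 / 2 ** 20, q))  # 15.0 bits

# Non-uniform baseline: if the "chance" hypothesis (e.g. RM+NS) already
# assigns the target probability 1 / 2^8, the same assisted search
# carries far less active information relative to that baseline.
print(active_information(1 / 2 ** 8, q))   # 3.0 bits
```

The same assisted search yields 15 bits against a uniform baseline but only 3 bits against the non-uniform one, which is the equivocation at issue: the measure is only defined relative to a chosen chance hypothesis.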
January 27, 2009, 03:43 PM PDT
JayM (# 157): I appreciate your efforts, but I am afraid I can only repeat for you what I have already said about djmullen: your characterization is inaccurate. "The underlying partial homologies are evidence that supports MET mechanisms of incremental change." Absolutely not. There is nothing in the homologies which shows incremental change in the sense of a gradual modification of the function. The conservation of domains has a functional basis, but sometimes the same domains and 3D structures are obtained, in different species, with very different primary sequences. And there are a lot of different domains, and different foldings, whose biochemical nature is completely different. If you were familiar with the biology of proteins, you would never say the things you say. It is true that darwinists affirm that proteins have formed by incremental change in function, but that is only one of the many just-so stories. There is no empirical confirmation for that, and a lot of theoretical impossibilities. Why do you think that ID exists at all? "You can't consider just the full protein as though it came into being all at once." I am not saying that it came into being at once, I am saying that it was designed. But you seem to ignore that parts of one protein are not functional. You need a whole protein, with its correct folding and domain, to get a function. Very big proteins can be multidomain, and can be deconstructed into different functional portions, but proteins like myoglobin are very compact globular proteins, and the whole sequence must be there. That's why myoglobin has approximately the same length in most species, with only minor variations. That is true of many important proteins. "An evolutionary biologist would probably say it came from a protein that was about 153 amino acids long. Or from a very slightly different protein that was about 154 amino acids long.
Unless ID researchers can show this is a mathematical impossibility, the MET mechanisms remain scientifically credible." We all know what evolutionary biologists would say, and that's why we are here in ID. A 153-amino-acid myoglobin would still be a myoglobin, with the same 3D structure and the same function. Not all myoglobins are "exactly" 154 aa (that is the length of the human form, and of most examples). Some are 146, 147, 149 aa, and so on. But those are minor differences, in part probably due to neutral evolution. The protein is essentially the same. But there is no model which can explain the step-by-step derivation of myoglobin from some other simpler protein with a different function. And remember, such a model should explain not only the variations in aa sequence, but also the variations in function which allow the selection and expansion of each single intermediate. Again, that's why I am in ID, and not in the darwinian field. You can stay where you like, or just go on observing and posting. "Could you point to some literature cites that support this claim? I have looked for such information for quite some time, because it seems that this is a very promising area for ID research. I haven't found any clear results that show that genome space is fragmented in this way." There is no need to support my claim. The data are there, for anybody to look at. I have just showed you a few examples. Go to SCOP, do some BLAST homework, and you will see for yourself. Everybody who works with proteins knows what I am saying. "While I appreciate your effort, that isn't a particularly rigorous process for tracking homologies. You might identify a new homology with that approach, but failure to find one after a handful of searches doesn't invalidate the entire corpus of evolutionary molecular biology." Blastp is one of the most commonly used tools to look for homologies. I have also used ClustalX, with the same results.
Do you really believe that there is a way to find significant homologies between two proteins like dinG and clpA? Not even darwinists are so smart. And the entire corpus of evolutionary molecular biology is based on false assumptions, and that's what invalidates it. "Taking humans as an example, if MET mechanisms are sufficient you should see similar molecular constructs, for a certain level of complexity, in chimpanzees." Again you miss the point. There are proteins which are almost identical in extremely different species. That is easily explained as functional conservation, but tells us nothing about the emergence of the function. Neutral evolution, and especially synonymous mutations, are interesting tools to try to understand variations due to time and random errors, but tell us nothing of the mechanism of function generation. "MET does not claim that the whole protein came into being in one fell swoop. We have to address what MET actually says, not what we wish it said." I am still waiting for them to say how that happened. Can you help? "I will be delighted if you have really identified the disconnected islands that disprove MET, and if you have you should definitely publish your results, but I don’t see support for that in what you’ve presented so far." If you can't see what is under the eyes of all, I really can't help you. "ID opponents make some strained arguments, but one that is valid is that anyone who discovered what you claim here would be world famous. It stretches credibility to think that everyone with access to the same tools you used could be Expelled and prevented from publishing." I would be very pleased if I could become famous so easily. Unfortunately, one does not become famous for saying what is known to all, and is easily retrieved in public databases. I am just discussing the interpretation of what is known to all. 
The only reason why I spent all this time giving you some real examples is that you (and djmullen) were saying completely wrong things with a certainty which, I hope, could derive only from your ignorance of the matter.
gpuccio
January 27, 2009, 03:15 PM PDT
CJYman:
The only formal proof presented is that CSI won’t generate itself through only chance and law and this is implicit within these two papers.
Just for clarification, are you saying that this formal proof is implicit in these two papers? (An implicit formal proof sounds like an oxymoron to me.) If not, where is this formal proof to be found? Thanks.
R0b
January 27, 2009, 02:38 PM PDT
Prof_P.Olofsson:
The recurring use of the uniform distribution in many attempts to apply math to biology is a model assumption that must be argued just like any other model assumption.
Indeed, Olle Haggstrom's response to the active info approach is titled "Uniform distribution is a model assumption". The question asked by Haggstrom and others is, why should we always expect uniform randomness? In fact, it's an impossible expectation. If everything were characterized by a uniform distribution, then that would be a non-uniform distribution of distributions.
R0b
January 27, 2009, 02:02 PM PDT
Prof_P.Olofsson:
Informal claims such as “it is as difficult to find a search as it is to find a target” are not supported by the paper unless you make a lot of arbitrary assumptions.
One of the arbitrary assumptions involved in measuring active information even violates Dembski's own repeated warning. He has told us several times that "how we measure information needs to be independent of whatever procedure we use to individuate the possibilities under consideration." And yet, the active information measure depends very much on how we individuate the possibilities.
R0b
January 27, 2009, 01:21 PM PDT
CJYman[162], Let me clarify: I am questioning the claim, made in the introduction, that the "search for a search" paper is pro-ID. In order to be considered pro-ID, it would have to have some implications for biology. I ask, for example, (1) what is the rationale behind claiming that search algorithms are chosen according to probability distributions, in particular the K-W distribution [154]? and (2) if they are, why do these distributions have to be uniform [159]? and point out (3) that even if they are uniform, they are still quite likely to beat random search [155].
Prof_P.Olofsson
January 27, 2009, 01:16 PM PDT
Hello Prof. Olofsson, I'm not sure that I follow you. However, if what you are stating is true and actually has practical effect (i.e., actually happens in the real world), why doesn't someone just show that background noise (chance) and an arbitrary set of laws (a set of laws collected without any consideration for future results - absent foresight) will produce systems which process signs/units and evolve into greater and greater specified complexity, and thereby falsify ID theory and be done with it? If evolution is so powerful absent previous foresight, why do programmers bother programming boundary conditions for future known targets into the evolutionary algorithms used to solve problems (e.g., a maximum-efficiency antenna shape)?
CJYman
January 27, 2009, 12:36 PM PDT
JayM: "That's great, but it still seems to cede the field of biology to modern evolutionary theory." It cedes biology to evolution and it takes evolution for itself. JayM: "An aside due to my inner geek: Is there a formal proof that CSI cannot be generated from evolution simulations?" The only formal proof presented is that CSI won't generate itself through only chance and law, and this is implicit within these two papers. Since CSI is a measurement of finding a specified or pre-specified pattern at better than chance performance, problem-specific information is required to find it (according to NFLT). Problem-specific information is measured as active information within these two papers. Do you see the connection? CSI can be transferred through evolution, but will never be generated from scratch by evolution. Why? Because these two papers show that active information continually regresses to higher and higher levels of search. So, if active information is necessary for the transferring of CSI, then evolution can't generate CSI. It can only transform active information into CSI. JayM: "I don't believe that 'all simulations of evolution require foresight.' We've discussed several different genetic algorithms in recent threads here and Dawkins's Weasel is the exception rather than the rule." My apologies for not being clear. If we are discussing any simulation relevant to biology, then evolution is the discovery and increasing of CSI. Can you provide any simulation of biological evolution which discovered CSI from initial conditions that were not programmed with any consideration for future results? JayM: "I'm not sure how this is relevant to the two papers, though."
It's more relevant to the NFLT, where it is important that problem-specific information be incorporated into the behavior of the algorithm (almost an exact quote); thus searching for a given pattern becomes as difficult as searching for the problem-specific information necessary to discover that pattern at better than chance performance. These two papers merely provide a metric for measuring problem-specific information and showing that it regresses to higher and higher level searches. The relevance to ID is that we seem to have only three choices, which I've already discussed: an infinite regress of active info, a falsification of the paper by showing active info being generated through background noise and an arbitrary collection of laws, or a foresighted system used to program the initial conditions - actually incorporating problem-specific information through knowledge of the future problem to be solved.
CJYman
January 27, 2009, 12:34 PM PDT
And, finally, I know that my comments [154,155,159], although directly related to the Dembski & Marks paper, are aside from the main topics of the thread. If somebody is interested, we may take it off the air as well.
Prof_P.Olofsson
January 27, 2009, 12:07 PM PDT
And, to [154] and [155], I'd like to add a third question: Even if we adopt the model that searches are being searched for via a probability distribution, how can we argue that the distribution must be uniform? Even if you don't believe the Darwinian search can be successful, you have to admit that it exists and is reasonably well understood: DNA is replicated, mutations cause imperfections which lead to new genotypes (vastly simplified). How would you argue that this search is no more likely than one that is uniform over the entire sequence space ("blind search"), which cannot even be given a reasonable biological or physical meaning? The recurring use of the uniform distribution in many attempts to apply math to biology is a model assumption that must be argued just like any other model assumption.
Prof_P.Olofsson
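The intuition behind the Horizontal NFL claim can be checked on a toy scale. If every deterministic search of a small space is modeled as an ordering of its elements, then weighting all orderings uniformly reproduces exactly the blind-search average of (n + 1) / 2 queries; Olofsson's question above is precisely whether that uniform weighting over searches is itself defensible. A minimal sketch:

```python
from itertools import permutations

# Treat every deterministic search of an n-element space as an ordering
# of its elements, and average the number of queries needed to find a
# fixed target over ALL orderings. Under uniform weighting of searches,
# this equals blind search without replacement: (n + 1) / 2 on average.
n = 5
target = 2  # arbitrary fixed element of {0, ..., n-1}

costs = [order.index(target) + 1 for order in permutations(range(n))]
average = sum(costs) / len(costs)

print(average)  # 3.0 == (5 + 1) / 2
```

A weighting concentrated on "Darwinian-like" orderings would give a different average, which is why the choice of distribution over searches is a model assumption that has to be argued, not assumed.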
January 27, 2009, 11:55 AM PDT
