Uncommon Descent Serving The Intelligent Design Community

Evolution driven by laws? Not random mutations?

Categories: Evolutionary biology, Intelligent Design, News

So claims a recent book, Arrival of the Fittest, by Andreas Wagner, professor of evolutionary biology at the University of Zurich in Switzerland (also associated with the Santa Fe Institute). He lectures worldwide and is a fellow of the American Association for the Advancement of Science.

From the book announcement:

Can random mutations over a mere 3.8 billion years solely be responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?

In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory. Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take.

From a review (which is careful to note that it is not a religious argument):

The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, proteins, or metabolic reactions, is so large that the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) is too low to have occurred within the time span the universe has been around.

He then shows some of the fundamental hidden principles that can actually make innovations possible for natural selection to then select and preserve those innovations.

Like interacting parallel worlds, this would be momentous news if it were true. But someone is going to have to read the book and assess the strength of the laws advanced.

One thing is for sure: if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever. But we knew that already, of course, when Nature published an article on the growing dissent within the ranks about Darwinism.

In origin-of-life research, there has long been a law vs. chance controversy: for example, “Does nature just ‘naturally’ produce life?” vs. “Maybe if we throw enough models at the origin of life… some of them will stick?”

Note: You may have to apprise your old schoolmarm that Darwin’s theory* is “natural selection acting on random mutations,” not “evolution” in general. It is the only theory that claims sheer randomness can lead to creativity, in conflict with information theory. See also: Being as Communion.

*(or neo-Darwinism, or whatever you call what the Darwin-in-the-schools lobby is promoting or Evolution Sunday is celebrating).*

Follow UD News at Twitter!

Comments
DNA_Jock: Just to understand your position about the TS fallacy, I would like to ask you for an explicit answer to the following hypothetical, extreme scenario: You land on a new planet, of which you know very little. Apparently, there are no living inhabitants. On the mountain walls, you can observe two kinds of marks, sometimes arranged linearly. Both marks can be explained by some environmental process on the planet, and even their linear arrangement can be explained rather easily. In general, the sequences of the two marks appear aspecific, as you would expect. But you arrive at a particular wall, where there are only two long linear sequences of marks. You are a good mathematician, and after a while something in the two sequences disturbs you a little. After some further reflection, you notice that the two marks, if interpreted as binary symbols, correspond in the first sequence to the first 10^6 bits of pi, and in the second sequence to the first 10^6 bits of e. After a long consideration, you conclude that with all your knowledge of physical laws in the universe, there is no natural process which can explain those specific sequences. They could be random outcomes, like all the other similar apparently random sequences you observed on the planet. Or they could be designed by some alien visitor or old inhabitant, of which you have no other trace. So, my question, and please answer it explicitly. Is there a problem of design inference, or is your recognition of the correspondence of the two sequences to two important mathematical constants only an example of the Texas Sharpshooter fallacy? Just to understand your position.gpuccio
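[Editorial note: to make the hypothetical concrete, here is a minimal sketch, not part of gpuccio's comment, of how the observed mark sequences could be checked against the binary expansions of pi and e. The helper names and the 64-bit example length are assumptions chosen for demonstration; the scenario itself specifies 10^6 bits.]

    from mpmath import mp, floor, pi, e

    def leading_bits(const, n_bits):
        """First n_bits of the binary expansion of const (integer part included)."""
        mp.prec = n_bits + 64                      # enough working precision, plus guard bits
        shifted = int(floor(const * 2 ** n_bits))  # shift n_bits fractional bits left of the point
        return bin(shifted)[2:][:n_bits]

    def matches_constant(marks, const):
        """Treat a '0'/'1' mark sequence as binary and compare it to const's expansion."""
        return marks == leading_bits(const, len(marks))

    # Example with a short 64-bit sequence standing in for the marks on the wall
    observed = leading_bits(pi, 64)
    print(matches_constant(observed, pi))   # True
    print(matches_constant(observed, e))    # False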
November 7, 2014 at 12:12 AM PDT
kairosfocus:
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum.
KF, Learned Hand asked for an "explicit calculation", not an "outline". You say the calculation is "absolutely routine". Then perform it! Surely you are competent enough to perform an "absolutely routine" calculation, aren't you? Show us your explicit and "absolutely routine" calculation of how much "functionally specific info" is contained in the bacterial flagellum. P.S. Dembski's CSI argument is circular, as I explained above. If you disagree, you need to show where my argument fails, rather than tossing out distractive red herring talking points designed to polarise and confuse the atmosphere. Please do better.keith s
November 6, 2014 at 11:42 PM PDT
DNA_Jock: You say: "I should expand on why Durston’s assumption is invalid: He is ignoring the effect of purifying selection." I will comment on your brief comments about Durston later, but for the moment I would like to understand what you mean here. Negative selection (purifying selection) is exactly the reason why function constrains sequence. IOWs, we observe different possible functional sequences with the same function because neutral selection allows them, while negative selection eliminates all the rest of variation. That's how proteins traverse their functional space. In what sense is that an argument against Durston? Please, explain.gpuccio
November 6, 2014 at 10:52 PM PDT
DNA_Jock: "This IS the Texas Sharpshooter fallacy. He characterizes the activity, THEN writes the specification. By way of illustration, you have specified ATP synthase. You have never specified adenosine pentaphosphate synthase. Why? Because you have never observed it. You and your biochemist are saying “Look at this protein; how unlikely is that?” It’s a post-hoc specification." I that all you can say? Have you understood my point? The specification is made post-hoc, in the sense that the description of the function is given after we observe it, but the function is not post-hoc: the function exists independently. I think that you are strangely mixing two different problems. One is that the definition of the function of a protein is done from the observation of the protein. As I have said, this is post-hoc only in a chronological sense, not in a logic sense: we are not imagining the functionality because we see it. We realize that the functionality exists because we see it. There is an absolute objectivity in the ability of an enzyme to accelerate a reaction, like there is an absolute objectivity in the ability of a text to covey meaning. These things are not "painted" post hoc. So, your interpretation of the need to explain them as a fallacy is really a fallacy. The second aspect is what I call "the problem of all possible functions". That is the problem I have discussed in my post which I had referred you to. It is also the main line of "defense" which is used by neo darwinists to criticize ID. Very simply, it is the attempt to show that the functional space is so filled with function that it is extremely easy to find function by RV. That is the purpose of the wrong Szostak paper. That is the purpose of the few similar papers which try, without succeeding, to make the point, and that is probably the purpose of the Wagner "arguments". Nothing of those attempts, as far as I know, even goes near to starting to show what it tries to show. That the functional space is not filled with function in general is well shown by the simple fact that a long enough string of text with good meaning in English can invariably be distinguished by any random or algorithmic string of text by dFSCI. This is a fact, and I have many times challenged anyone to give any counterexample. How do you explain that? Isn't that the Texas Sharpshooter fallacy? Isn't the English meaning of a text, according to your argument, only a target painted around the string? How can you use such wrong arguments only to deny the value of functional information? This is a very serious fallacy: this is cognitive bias of the worst species. My explicit point in defining dFSCI is that any partition which generates a target space whose probability is extremely small and which cannot be explained by necessity can be empirically used to infer design with 100% specificity. This is empirically true. How could it be empirically true, if it were only a false idea due to a logic fallacy? The argument that the functional space of proteins, in particular, could be so connected that a simple algorithm like NS can explain it has nothing to do with the Sharpshooter fallacy: it is an attempt to explain functional information algorithmically. That algorithmic explanation must be supported by facts to be accepted as credible. Her is what I have written about tha "any possible function" reasoning.
I usually call this objection the “any possible function” argument. In brief, it says that it is wrong to compute the probability of a specific function (which is what dFSCI does, because dFSCI is specific for a defined function), when a lot of other functional genes could arise. IOWs, the true subset of which we should compute the probability is the subset of all functional genes, which is much more difficult to define. You add the further argument that the same gene can have many functions. That would complicate the computation even more, because, as I have said many times, dFSCI is computed for a specific function, explicitly defined, and not for all the possible functions of the observed object (the gene). I don’t agree that these objections, however reasonable, are relevant. For many reasons, that I will try to explain here. a) First of all, we must remember that the concept of dFSCI, before we apply it to biology, comes out as a tool to detect human design. Well, as I have tried to explain, dFSCI is defined for a specific function, not for all possible functions, and not for the object. IOWs, it is the complexity linked to the explicitly defined function. And yet, it can detect human design with 100% specificity. So, when we apply it to the biological context, we can reasonably expect a similar behaviour and specificity. This is the empirical observation. But why does that happen? Why doesn’t dFSCI fail miserably in detecting human design? Why doesn’t it give a lot of false positives, if the existence of so many possible functions in general, and of so many possible functions for the same object, should be considered a potential hindrance to its specificity? The explanation is simple, and it is similar to the reason why the second law of thermodynamics works. The simple fact is, if the ratio between specified states and non specified states is really low, no specified state will ever be observed. Indeed, no ordered state is ever observed in the molecules of a gas, even if there are potentially a lot of ordered states. The subset of ordered states is however trivial if compared to the subset of non ordered states. That’s exactly the reason why dFSCI, if we use an appropriate threshold of complexity, can detect human design with 100% specificity. The functionally specified states are simply too rare, if the total search space is big enough. I will give an example with language. If we take one of Shakespeare’s sonnets, we are absolutely confident that it was designed, even if after all it is not a very long composition, and even if we don’t make the necessary computations of its dFSCI. And yet, we could reason that there are a lot of sequences of characters of the same length which have meaning in English, and would be specified just the same. And we could reason that there are certainly a lot of other sequences of characters of the same length which have meaning in other known languages. And certainly a lot of sequences of characters of the same length which have meaning in possible languages that we don’t know. And that the same sequence, in principle, could have different meanings in other unknown languages, on other planets, and so on. Does any of those reasonings lower our empirical certainty that the sonnet was designed? Not at all. Why? Because it is simply too unlikely that such a specific sequence of characters, with such a specific, and beautiful meaning in English, could arise in a random system, even if given a lot of probabilistic resources.
And how big is the search space here? My favourite one, n. 76, is 582 characters long, including spaces. Considering an alphabet of about 30 characters, the search space, if I am not wrong, should be about 2800 bits. And this is the search space, not the dFSCI. If we define the function as “any sequence which has good meaning in English”, the dFSCI is certainly much lower. As I have argued, the minimal dFSCI of the ATP synthase alpha+beta subunit is about 1600 bits. Its search space is about 4500 bits, much higher than the Shakespeare sonnet’s search space. So, why should we doubt that the ATP synthase alpha+beta subunit was designed? For lack of time, I will discuss the other reasons against this argument, and the other arguments, in the following posts. By the way, here is Shakespeare’s sonnet n. 76, for the enjoyment of all! Why is my verse so barren of new pride, So far from variation or quick change? Why with the time do I not glance aside To new-found methods, and to compounds strange? Why write I still all one, ever the same, And keep invention in a noted weed, That every word doth almost tell my name, Showing their birth, and where they did proceed? O! know sweet love I always write of you, And you and love are still my argument; So all my best is dressing old words new, Spending again what is already spent: For as the sun is daily new and old, So is my love still telling what is told.
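[Editorial check on the arithmetic quoted above, using only the figures given in the comment; the residue count for the ATP synthase alpha+beta subunits is back-calculated from the stated ~4500-bit figure and is an assumption:]

    \text{Sonnet 76: } 582 \cdot \log_2 30 \approx 582 \times 4.91 \approx 2856 \text{ bits} \quad (\text{``about 2800 bits''})

    \text{ATP synthase } \alpha{+}\beta\text{: } L \cdot \log_2 20 \approx 4500 \text{ bits} \;\Rightarrow\; L \approx 4500 / 4.32 \approx 1040 \text{ residues}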
And:
Some further thoughts on the argument of “any possible function”, continuing from my previous post. b) Another big problem is that the “any possible function” argument is not really true. Even if we want to reason in that sense (which, as explained in my point a, is not really warranted), we should at most consider “any possible function which is really useful in the specific context in which it arises”. And the important point is, the more a context is complex, the more difficult it is to integrate a new function in it, unless it is very complex. In a sense, for example, it is very unlikely that a single protein, even if it has a basic biochemical function, may be really useful in a biological context unless it is integrated in what already exists. That integration usually requires a lot of additional information: transcriptional, post transcriptional and post translational regulation, transport and localization in the correct cellular context and, usually, coordination with other proteins or structures. IOWs, in most cases we would have an additional problem of irreducible complexity, which should be added to the basic complexity of the molecule. Moreover, in a being which is already efficient (think of prokaryotes, practically the most efficient reproducers in the whole history of our planet), it is not likely at all that a single new biochemical function can really help the cell. That brings us to the following point: c) Even the subset of useful new functions in the context is probably too big. Indeed, as we will discuss better later, if the neo darwinian model were true, the only functions which are truly useful would be those which can confer a detectable reproductive advantage. IOWs, those which are “visible” to NS. Even if we do not consider, for the moment, the hypothetical role of naturally selectable intermediates (we will do that later), still a new single functional protein which is useful, but does not confer a detectable reproductive advantage, would very likely be lost, because it could not be expanded by positive selection (be fixed in the population) nor be conserved by negative selection. So, even if we reason about “any possible function”, that should become “any possible function which can be so useful in the specific cellular context in which it arises, that it can confer a detectable, naturally selectable reproductive advantage”. That is certainly a much smaller subset than “any possible function”. Are you sure that 2^50 is still a reasonable guess? After all we have got only about 2000 basic protein superfamilies in the course of natural history. Do you think that we have only “scratched the surface” of the space of possible useful protein configurations in our biological context? And how do you explain that about half of those superfamilies were already present in LUCA, and that the rate of appearance of new superfamilies has definitely slowed down with time? d) Finally, your observation about the “many different ways that a gene might perform any of these functions”. You give the example of different types of flagella. But flagella are complex structures made of many different parts, and again a very strong problem of irreducible complexity applies. Moreover, as I have said, I have never tried to compute dFSCI for such complex structures (OK, I have given the example of the alpha-beta part of ATP synthase, but that is really a single structure that is part of a single multi-chain protein).
That’s the reason why I compute dFSCI preferably for single proteins, with a clear biochemical function. If an enzyme is conserved, we can assume that the specific sequence is necessary for the enzymatic reaction, and not for other things. And, in general, that biochemical reaction will be performed only by that structure in the proteome (with some exceptions). The synthesis of ATP from a proton gradient is accomplished by ATP synthase. That is very different from saying, for example, that flight can be accomplished by many different types of wings.
And:
My aim is not to say that all proteins are designed. My aim is to make a design inference for some (indeed, many) proteins. I have already said that I consider differentiation of individual proteins inside a superfamily/family as a “borderline” issue. It has no priority. The priority is, definitely, to explain how new sequences emerge. That’s why I consider superfamilies. Proteins from different superfamilies are completely unrelated at sequence level. Therefore, your argument is indeed in favor of my reasoning. As I have said many times, assuming a uniform distribution is reasonable, but is indeed optimistic in favor of the neo darwinian model. There is no doubt that related or partially related states have a higher probability of being reached in a random walk. Therefore, their probability is higher than 1/N. That also means, obviously, that the probability of reaching an unrelated state is certainly lower than 1/N, which is the probability of each state in a uniform distribution. For considerations similar to some that I have already made (the number of related states is certainly much smaller than the number of unrelated states), I don’t believe that the difference is significant. However, 1/N is an upper bound for the probability of reaching an unrelated state, which is what the dFSCI of a protein family or superfamily is measuring.
And:
dFSCI is a tool which works perfectly even if it is defined for a specific function. The number of really useful functions, that can be naturally selected in a specific cellular context, is certainly small enough that it can be overlooked. Indeed, as we are speaking of logarithmic values, even if we considered the only empirical number that we have: 2000 protein superfamilies that have a definite role in all biological life as we know it today, that is only 11 bits. How can you think that it matters, when we are computing dFSCI in the order of 150 to thousands of bits? Moreover, even if we consider the probability of finding one of the 2000 superfamilies in one attempt, the mean functional complexity in the 35 families studied by Durston is 543 bits. How do you think that 11 bits more or less would count? And there is another important point which is often overlooked. 543 bits (mean complexity) means that we have a 1:2^543 probability of finding one superfamily in one attempt, which is already well beyond my cutoff of 150 bits, and also beyond Dembski’s UPB of 520 bits. But the problem is, biological beings have not found one protein superfamily once. They have found 2000 independent protein superfamilies, each with a mean probability of being found of 1:2^543. Do you want to use the binomial distribution to compute the probability of having 2000 successes of that kind? Now, some of the simplest families could have been found, perhaps. The lowest value of complexity in Durston’s table is 46 bits (about 10 AAs). It is below my threshold of 150 bits, so I would not infer design for that family (Ankyrin). However, 10 AAs are certainly above the empirical thresholds suggested by Behe and Axe, from different considerations. But what about Paramyx RNA Polymerase (1886 bits), or Flu PB2 (2416 bits), or Usher (1296 bits)? If your reasoning about “aggregating” all useful functional proteins worked, we should at most find a few examples of the simplest ones, which are much more likely to be found, and not hundreds of complex ones, which is what we observe.
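[Editorial restatement of the orders of magnitude invoked above, using only figures already quoted there; the joint-probability line treats the 2000 finds as independent single attempts, which is the simplification the comment itself gestures at:]

    \log_2 2000 \approx 11 \text{ bits}

    2^{-543} \approx 3.5 \times 10^{-164} \quad (\text{mean single-attempt probability for one superfamily})

    \left(2^{-543}\right)^{2000} = 2^{-1\,086\,000} \quad (\text{2000 independent successes, one attempt each})

    46 / \log_2 20 \approx 46 / 4.32 \approx 10.6 \text{ residues} \quad (\text{the ``about 10 AAs'' Ankyrin figure})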
So, I will sum up my arguments with a simple question to you: How is it that the perfect ability to distinguish a piece of text in good English 600 characters long from any randomly generated string of characters is empirically valid? Why, for example, does the common objection made here many times (that a random system could generate strings in any language, not only in English, or encrypted strings) not make our ability to distinguish English strings from random strings any less valid?gpuccio
November 6, 2014 at 10:47 PM PDT
KS: Kindly cf the just above, as a FYI. KFkairosfocus
November 6, 2014 at 10:33 PM PDT
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum. There is nothing mysterious or difficult there providing you have basic familiarity with how info is routinely measured and reported in say file sizes, preferably in bits. I see no good reason to make an imaginary hyperskeptical mountain out of a mole hill, when a world of technology out there routinely does what I said, just look at a folder window on your PC, in details mode if you doubt me -- it's that commonplace. There is no in principle difference between ASCII strings, binary digit -- bit -- strings, and R/DNA strings, with protein AA strings just expressing the implicit coded in functionality in the R/DNA strings. If you need a 101, kindly go here on in context, in my always linked, to see the basic reasoning behind info measurement. If you want basic logic behind a metric of FSCO/I as a beyond threshold concept that effectively gives the per aspect explanatory filter in an equation, try here. Of course that simple expression: Chi_500 = I*S - 500, functionally specific bits beyond the solar system threshold . . . is rather like the notorious summary table in a report document: a small amount of table can take a lot of work on the ground to properly fill in. KF PS: And, KS, that metric is not circular. As for Dembski's metric MODEL of CSI, kindly note that specified complexity was first observed and stated on the record by Orgel in 1973, 32 years before Dembski developed his model. Which turns out to be an info beyond a threshold metric, the above expression gives a boil 'er down form. CSI is observable and objectively recognisable as a target zone that is separately independently specifiable and deeply isolated to 1 in 10^150 or a similar scale of a config space. Once such has been specified, it is maximally unlikely that blind explorations traceable to chance and mechanical necessity will find it. But, especially in the relevant case where specification is based on observable function, there are trillions of cases in point that show the reliable pattern that FSCO/I and wider CSI come about by design. Thus such is a reliable sign of design. Per inductive inference to best, observationally anchored explanation backed up by needle in haystack search challenge analysis. Induction on trillions of cases in point with no clear counter-instances is NOT question begging. Which, should be patent. Save to the selectively hyperskeptical.kairosfocus
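[Editorial sketch of the threshold expression quoted in the comment above, Chi_500 = I*S - 500; the function name and the 1000-bit example value are illustrative assumptions, only the formula itself comes from the comment:]

    def chi_500(info_bits, is_functionally_specific):
        """Functionally specific bits beyond the 500-bit solar-system threshold.

        info_bits : I, the measured information content in bits (e.g., a file size in bits)
        is_functionally_specific : S, a 0/1 dummy variable set to 1 only when the
            configuration is observed to be functionally specific
        """
        S = 1 if is_functionally_specific else 0
        return info_bits * S - 500

    # Example: a 1000-bit functionally specific configuration clears the threshold by 500 bits
    print(chi_500(1000, True))    # 500
    print(chi_500(1000, False))   # -500 (no specification, no inference)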
November 6, 2014 at 10:32 PM PDT
D-J: n = 1 is a strawman, as you know. What you substituted is, there is one world of life, when what is relevant is that for protein families there have been many types of organisms and a lot more individuals, allowing chance driven random walks around the AA-sequence space. Relevant proteins have some variability but not indefinite plasticity. That is, we do see island of function patterns. When it comes to Monte Carlo runs, it seems to me that we do not invalidate such because the sims are programmed and set up then run by presumably intelligent programmers, once the dynamics are reasonable and appropriate randomness [or often pseudo- . . . ] is injected. Such are capable of exploring a space of stochastic possibilities and displaying the pattern of likely enough outcomes to show up in a set of runs. Which is the point. Utterly remote possibilities do not usually show up in such searches of a space of possibilities. Which is the further point. Next, the relevant issue within living forms is origin of novel body plans by blind chance and/or mechanical necessity from some ancestral form, credibly involving -- you can calc on back of envelope or look at genome sizes etc -- 10 - 100+ mn new bases. The possibilities space involved, multiplied by the known isolation of clusters of proteins, further multiplied by the challenge that to get new cell types, tissues, organs and integrated systems requires creating multiple, well-matched correctly arranged parts that interact to achieve relevant config-specific function, point to a sharp limitation of the ability to explore the relevant spaces in ways that would make blind discovery of novel islands of function plausible. On either solar system or observed cosmos scope resources. That is the context in which the simple, easily verified pattern that functionally specific, complex organisation and linked information [FSCO/I for short] is observable, with trillions of cases we have seen arise. In every observed case, with reliability, the cause involves intelligently directed configuration, aka design. Next, you managed to point to a key aspect of the point of islands of function without recognising it: purifying selection is the selective removal of alleles [= one of a number of alternative forms of the same gene or same genetic locus . . . ] that are deleterious. This can result in stabilizing selection through the purging of deleterious variations that arise. Yes, some muts are directly lethal from early embryological stages, others later on. Yet others result in inability to compete with normal forms and -- save for the sort of artificial intervention such as with so-called fancy goldfish -- would die out, stabilising the general pop pattern -- Blythe's emphasis on what natural selection, so-called, would do. Under certain other circumstances -- I have in mind caves in Mexico -- normal function in the form of eyes is disadvantageous and loss of eyes in fish resulted. Note, loss of function. In every one of these and other cases, natural selection serves as a subtracter of information, a culler not a creative adder. That addition comes from somewhere else, per evo mat assumptions, from a non-foresighted, blind, happenstance process, aka chance variation. By any number of possible mechanisms. Thus we run right into the search limitations of such blind processes. But then we do need to go back to the n = 1 issue.
On evo mat abiogenesis models, we are looking at Darwin's warm ponds, comet bodies, oceans, gas giant moons and the like, across the Sol system and the wider cosmos. We have known thermodynamic forces, known chemistry, known multitudes of venues, a known reasonable fastest state-change time for atomic level processes of about 10^-13 - 10^-15 s. That does not boil down to n = 1 for sim runs. Nor does it substantiate the question-begging assertion or assumption that there was a lucky breakthrough of blind forces that did create gated, encapsulated, code-using, metabolising, von Neumann self replicator using, cell based life. Just the opposite. Forces of diffusion and breakdown of energetically uphill molecules and competing cross-reactions alone point strongly against abiogenesis. That's why Orgel and Shapiro had the following sharp exchange some years ago, on the utter implausibility of metabolism and genes first models -- in a context where the common factor is origin of requisite FSCO/I -- that resulted in mutual ruin:
[[Shapiro:] RNA's building blocks, nucleotides contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [[Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
In short, there is no justification for n = 1. From the plausibility perspective we have no good (non ideological, question begging) reason to hold that blind watchmaker abiogenesis occurred even once. From the opportunity perspective we have had endless numbers of opportunities across an observable cosmos, so the number of trial runs is very large. Of course, to make my "search the config space" point I have set up a toy case of 10^57 sol system atoms each observing a tray of 500 coins flipped and examined every 10^-14 s. That is the extreme for our sol system, and parallels a similar case with 10^80 atomic observers for the observed cosmos. The result is, the degree of exploration of even such a toy space of possibilities using up sol system resources is so tiny that we have no good reason to expect blind discovery of anything but the bulk of the possibilities, near 50:50 H-T, in no particular order. The point of the sim parable is obvious, save to those committed not to see it. And in that case, 10^57 runs for 10^17 s is not exactly n = 1. The overall point is plain. There is no good reason to believe that either OOL or origin of major body plans occurred by blind watchmaker thesis type mechanisms. And, every reason to see that known stochastic processes would cause exploration of AA space, culled for function, among protein families. But of course survival of a family of proteins across life forms or exploration of the resulting islands of function across time are not the real issue; it is blind watchmaker mechanism arrival at the many molecular level islands implicit in viable body plans. Dozens of them. But, in the end, the bottomline for UD is simple: for two years running, there has been an open invitation challenge to provide an essay in support of the blind watchmaker type thesis for the tree of life, from root to twigs. Over two years; a year past, I had to cobble together a composite answer that was most unsatisfactory, and since then there has been no interest in making the case. I see plenty of interest in attacking and discrediting design thought and even outright enthusiasm to attack or ridicule design supporters, but very little sign of interest in actually making the blind watchmaker case. When, in fact, a readable 6,000 word or so essay that can use multimedia, infographics, and onward links etc to heart's content would be immediately devastating to design theory. Methinks the dog that refused to bark is the most telling argument of all on the real balance of the matter on the merits. If you doubt me, simply go here. I'se be waiting for the woof-woof . . . KFkairosfocus
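[Editorial restatement of the coin-tray toy calculation, combining only the figures quoted in the comment above:]

    N_{\text{obs}} \approx 10^{57} \times \frac{10^{17}\ \text{s}}{10^{-14}\ \text{s per observation}} = 10^{88} \text{ observations}

    |\Omega| = 2^{500} \approx 3.3 \times 10^{150} \text{ configurations of 500 coins}

    \frac{N_{\text{obs}}}{|\Omega|} \approx \frac{10^{88}}{3.3 \times 10^{150}} \approx 3 \times 10^{-63}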
November 6, 2014 at 10:07 PM PDT
Mung: Then perhaps wd400 can explain just how magical and mysterious and yes, miraculous, it is that such diversity of life can come about through exploration of only a tiny and closely related area of the space of all possible genomes. We're waiting.Mung
November 6, 2014 at 07:41 PM PDT
keiths:
The most laughable thing about Dembski’s CSI is that even if it were measurable, it would still be useless.
lol. pathetic. really pathetic. laughable. poor keiths.Mung
November 6, 2014 at 07:33 PM PDT
DNA_Jock:
Obviously, this problem is made worse by people who start new threads because they feel like it, or who post supposedly seminal, comments-closed pontifications.
:-)keith s
November 6, 2014 at 06:19 PM PDT
I should expand on why Durston's assumption is invalid: He is ignoring the effect of purifying selection.DNA_Jock
November 6, 2014 at 06:13 PM PDT
Silver Asiatic,
You appear to be quoting from Dembski. Could you refer me to the pages that contain that summary?
It's on p. 18 of Specification: The Pattern That Signifies Intelligence.keith s
November 6, 2014 at 06:10 PM PDT
Gpuccio:
For example, a biochemist can study a system and find some new enzymatic activity. Let’s say that he isolates the protein, and verifies that it is really responsible for the enzymatic activity. So he defines the activity, and says that the protein is functional, and can do that particular thing.
This IS the Texas Sharpshooter fallacy. He characterizes the activity, THEN writes the specification. By way of illustration, you have specified ATP synthase. You have never specified adenosine pentaphosphate synthase. Why? Because you have never observed it. You and your biochemist are saying “Look at this protein; how unlikely is that?” It’s a post-hoc specification.
I don’t think so. The main assumption in Durston’s method is that the functional space has been mostly traversed during evolution by neutral variation. IOWs, that the variety of sequences we observe for a function is a reliable sample of the target space.
A representative sample of the entire target space? That is a terrible, completely unsupported assumption. Also note that it isn’t relevant to kairosfocus’s problem, which is that “each site in an amino acid protein sequence is assumed to be independent”, of which Durston himself says “In reality, we know that this is not the case”. If you are going to make an approximation, you have to be able to support the claim that it is fit-for-purpose.
Other methods will be developed, as our understanding of the functional space of protein improves. The point is: functional complexity is a true and important dimension, it can be analyzed, and it is extremely relevant for the problem of the origin of biological information. Denying this is denying science itself.
On this we agree. But color me underwhelmed with the efforts to date of Durston and Axe etc.DNA_Jock
November 6, 2014 at 05:52 PM PDT
DNA_Jock: "Finally, I don’t see the probabilities that arise as being of much practical use, since Durston analyzed the observed sequence variation in extant, optimized sequences. This tells you very, very little about the size of the target space, and (because of the way he did the analysis) absolutely nothing about the existence of correlations between positions." I don't think so. The main assumption in Durston's method is that the functional space has been mostly traversed during evolution by neutral variation. IOWs, that the variety of sequences we observe for a function is a reliable sample of the target space. As Durston attributes a probability of change to each position given the functional restraint, it is extremely likely that variation which use correlations between different positions are included in the sample. After all, he analyzes very old molecules, and those molecules have been the target of a lot of neutral variation in the course of evolution. Would any neo darwinist believe that the same principle which is supposed to generate all new functional proteins has not been able to test most or all functional sequences of a same function, with the help of negative selection? Obviously, Durston's method is an approximation. I have also given here an argument about why it should in general underestimate functional complexity. But it is measuring functional complexity. Maybe the measure is not precise. Maybe it is biased. But it is the simplest method we have at present. Other methods will be developed, as our understanding of the functional space of protein improves. The point is: functional complexity is a true and important dimension, it can be analyzed, and it is extremely relevant for the problem of the origin of biological information. Denying this is denying science itself.gpuccio
November 6, 2014 at 04:12 PM PDT
DNA_Jock: "It’s your choice, but I honestly believe you would be happier if you moved." No. But thank you for believing it, I consider it as an expression of affection. To be fair, I will give it back to you: I honestly believe that, if you really looked at the ID arguments without any bias, you could seriously consider to move. I don't know if you would be happier or not (although I suspect you would), but your intellectual honesty would certainly prompt, or at least tempt you to do that. :)gpuccio
November 6, 2014 at 04:00 PM PDT
DNA_Jock at #474: In my cited comments I was essentially answering the objection that computing dFSCI should take into account all possible functions. If your problem is about post-specification, here is my view, which is very simple. It is completely false that functional post-specification is an example of the Texas sharpshooter fallacy. Here is the reason. First of all, I take from Wikipedia a very simple description of the essential fallacy:
The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the biggest cluster of hits and claims to be a sharpshooter.
Now I quote here my definition of functional specification, from my OP on the subject:
So, the general definitions: c) Specification. Given a well defined set of objects (the search space), we call “specification”, in relation to that set, any explicit objective rule that can divide the set in two non overlapping subsets: the “specified” subset (target space) and the “non specified” subset. IOWs, a specification is any well defined rule which generates a binary partition in a well defined set of objects. d) Functional Specification. It is a special form of specification (in the sense defined above), where the rule that specifies is of the following type: “The specified subset in this well defined set of objects includes all the objects in the set which can implement the following, well defined function…” . IOWs, a functional specification is any well defined rule which generates a binary partition in a well defined set of objects using a function defined as in a) and verifying if the functionality, defined as in b), is present in each object of the set. It should be clear that functional specification is a definite subset of specification. Other properties, different from function, can in principle be used to specify. But for our purposes we will stick to functional specification, as defined here.
Now, if we see someone shooting, say, at a distant wall, then we go to the wall, and paint targets around each shot, and then we say that the shots were targeted and well shot, then we are fully in the fallacy. But if we see someone shooting at a distant wall, then we go to the wall, and see that there were targets painted there, and that they were there before the shooting, then we can well infer that the shooter is very good. Even if we observe the targets only after the shots have been fired. This is the difference between invalid post-specification and valid post-specification. Invalid post-specification is the trick used by many neo darwinists to criticize the concept of CSI. Even Mark has used it, although in perfect good faith. It goes this way. You take a string that has come out of a random system. We know that, like any single string of that length, that particular string has probability 1/n of being "extracted", if the system is fair and has a uniform probability distribution. But the point is, the string has no special property which identifies it, except for its specific sequence. So, I can take that string, and say: "See, I got a very unlikely result. That happens all the time!" This is exactly the infamous "deck of cards" fallacy, which many neo darwinists regularly use against ID. The point is, what I got is an extremely likely result: a random string with no special property, except its specific sequence. Can I use that sequence to specify a function? Sure. I can define my function as "any string which has that particular sequence". Is that a valid specification? Yes, but only if I use it as a pre-specification, because the probability of getting that particular string again is extremely low. Mark tried something like that when he tried to define a function for some random numbers, saying that they pointed to specific items in some catalogue (I don't remember exactly what). In this way, he was trying to use the sequence already obtained to specify something after having obtained the string. He was making the sequence functional after having obtained it. The correct way to deal with the problem, instead, is: what is the probability of getting a random number (not too big) which points to some item in some catalogue? And the answer is obviously: "very high". Obtaining the same number again, instead, has a very low probability. But functional specification is completely different. If a protein coding gene codes for a very efficient protein, let's say an enzyme, which can accelerate a biochemical reaction beyond any natural rate, that is not a target which I am painting after the protein has been observed. It is a target that I am observing after the protein has been observed. I see the protein working, and I know that the target has been found. But the target exists independently. For example, a biochemist can study a system and find some new enzymatic activity. Let's say that he isolates the protein, and verifies that it is really responsible for the enzymatic activity. So he defines the activity, and says that the protein is functional, and can do that particular thing. Note that, to do that, IOWs to define the function, even to establish how to measure it, and possibly useful thresholds of activity for a biological context, the researcher has no need to know the AA sequence of the protein. Why?
Because the observation and definition of the function is completely independent from any knowledge of the digital sequence which implements it. IOWs, the researcher is not painting a target around the shot. He is only observing that the shot has hit a well defined target. That's why functional specification can perfectly and validly be used as a post-specification. The function objectively generates a binary partition in the search space. That partition is independent from any knowledge of what sequences can implement it. Getting a sequence from the functional partition, if it is really small, is always unlikely. Observing unexpected hits of such extremely small target spaces is something which needs an explanation, and cannot be explained as a reasonable effect of random variation. I hope that answers your "Texas sharpshooter" objection.gpuccio
November 6, 2014 at 03:55 PM PDT
Keith s 469 You appear to be quoting from Dembski. Could you refer me to the pages that contain that summary?Silver Asiatic
November 6, 2014 at 02:58 PM PDT
KF, Thanks for the response. A couple of thoughts in response. First, would you mind giving us your explicit calculation? What's the equation? I ask because yours seems quite different from Dembski's, and the specific point I was exploring was gpuccio's assertion that CSI is simple and consistent. The fact that no two people seem to be using the same formula (or acronym!) suggests to me that it is neither. In particular, your usage doesn't seem to address P(T|H), which is obviously a significant part of the CSI calculation. I could be wrong about that. My math skills are quite poor, so I may be missing an important part of your explanation. If you are using P(T|H), how are you calculating H? If you are not using P(T|H), why not? Second, thank you for pointing out the Orgel quote. I saw it before, and frankly I suspect it's quote-mining. He uses the word "complexity," but I don't think it supports the desired inference that he was talking about the same kind of "complexity" you are. The language he uses suggests very strongly that he does not define "complex" in the same way as Dembski. He seems to be using a fairly conventional definition, in which a simple and homogeneous object is not "complex." Dembski's probability-based definition is very different. I think the assertion that Orgel's work is substantially in accordance with Dembski's is therefore wrong and misleading. Barry Arrington seems to have been misled, for one; he asserted very confidently that Orgel "uses the terms complex and specified in exactly the sense Dembski uses the terms." The language you quoted doesn't support that belief. But perhaps you've read the Orgel paper, which I have not. Does he, in fact, use a different definition of "complex"?Learned Hand
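[Editorial reference for the P(T|H) exchange: as best recalled from Dembski's 2005 paper "Specification: The Pattern That Signifies Intelligence" (readers should verify against the paper itself), the measure under discussion has the form]

    \chi = -\log_2\!\left[\,10^{120} \cdot \varphi_S(T) \cdot P(T \mid H)\,\right]

[where phi_S(T) counts the specificational resources (patterns at least as simple to describe as T), H is the relevant chance hypothesis (including, per Dembski, "Darwinian and other material mechanisms"), and a design inference is claimed when chi exceeds 1.]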
November 6, 2014 at 02:46 PM PDT
kf, I am entertained that you view Durston et al as using "the world of life as a long running Monte Carlo that susses out what works, what is flexible, what isn’t". As I noted to you on your 'elephant in the room' thread, YOU should be concerned at this usage because, to be a valid Monte Carlo run, there must be no intervention. You and I should both be concerned that n=1 is a [cough] rather low n for Monte Carlo. Finally, I don't see the probabilities that arise as being of much practical use, since Durston analyzed the observed sequence variation in extant, optimized sequences. This tells you very, very little about the size of the target space, and (because of the way he did the analysis) absolutely nothing about the existence of correlations between positions. This is what the author had to say on that subject: “…as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case.” [cited by bornagain77, emphasis added] As I have pointed out to you previously, any bit-counting method (including BTW every one of your examples) assumes that there is no correlation between the positions, that is , it assumes independence. I note, in this regard, that you have never responded to my question (posted on the elephant thread), viz: You have already admitted that you assume independence and that this assumption is incorrect (“in info contexts reduce info capacity”), but you have asserted that this error is “not material”. How big is the error? How do you know? Please be as precise and concise as you can.DNA_Jock
November 6, 2014 at 02:41 PM PDT
Gpuccio, Thank you for the kind words. Of course, I too am one of the unbanned. I have my own theory about what precipitates banning here at UD; as you note, there is something rather capricious about it. I think pretty much all chat boards suffer from the “more heat than light” problem and the irrelevant clutter problem, to a greater or lesser degree. TSZ is far from perfect in this regard, but UD is far worse: witness vishnu’s reported failure to notice, a mere 8 comments up-thread, that his interlocutor had been gagged. Obviously, this problem is made worse by people who start new threads because they feel like it, or who post supposedly seminal, comments-closed pontifications. In our previous conversation (which I guess has now migrated here, mea culpa), you stated that you had addressed the Texas Sharpshooter problem. But reviewing your cited comments (146 and 149 on that thread), I find that you do not address the fallacy of the post-hoc specification, but rather you embrace it, writing “dFSCI is defined for a specific function, not for all possible functions, and not for the object.” On this thread you repeat this position, stating “I compute dFSI only for an explicitly defined function. If I don’t recognize a function, I cannot compute dFSI nor make a design inference.” This IS the Texas Sharpshooter fallacy. I commend you for your honesty, and for being smart enough to see that encryption completely defeats design detection. Many design-proponents have yet to cotton on to this one. Unfortunately, encryption is merely an extreme example of a more general problem for DD: context is an essential input. Although I disagree with your conclusions, I applaud your effort to try to understand the complex role of RM+NS in the evolution of biological sequences. You, at least are making the effort. It’s your choice, but I honestly believe you would be happier if you moved.DNA_Jock
November 6, 2014 at 02:39 PM PDT
Adapa, You are incorrect: Joe was not banned from TSZ for posting links to porn. He was placed in moderation for posting a link to a close-up of female genitalia; he was banned when he refused to promise not to do it again. Nothing if not classy, that Joe.DNA_Jock
November 6, 2014 at 02:35 PM PDT
gpuccio: Why is everyone so eager to invite us to TSZ? Adapa: Because unlike UD posters at TSZ don’t get silently banned merely for presenting dissenting opinions. I’m sure you and the rest of the ID proponents are quite content in your safe snuggly little pillow fort here. But the real scientific world is out there
Bwahahaha! Thanks for that one! That made my day. The delusions of some people.Vishnu
November 6, 2014 at 01:04 PM PDT
Great, keith s pollutes this thread with his tripe. CSI is not circular. Detecting design is not useless. Just because keith s can twist reality doesn't mean reality is twisted.Joe
November 6, 2014 at 12:25 PM PDT
Adapa:
They can’t make a case here when they’ve been banned.
LoL! They were banned for not making a case, duh.
Hundreds of ID critics have been banned at UD over the years and now it’s started again.
For a good reason
You are the only person in the history of TSZ to be banned there
That is because most people avoid it or don't even know it exists.
and that was for posting links to porn.
Liar. I guess lying makes you feel better, though.Joe
November 6, 2014 at 12:23 PM PDT
Cross-posting this from the other thread:
Joe G:
Asking for help from IDists- Richie has been booted but his ghost spews on. Richie sed:
In the discussion* it shown that Barry wanted a demonstration of CSI being made by natural forces, whilst Demski defines CSI as only to be ‘counted’ in the absence of them.
Can anyone reference Denmbski saying that or anything like that?
Joe, you really should learn more about ID. Yes, Dembski says exactly that. His CSI equation contains a P(T|H) term, and he describes H as follows:
Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.
Silver Asiatic:
I have never seen that from Dembski and I suspect its a misreading. By taking his conclusion “only intelligent agency is known to produce it” as if it was the premise, he is criticized as giving a circular argument.
It’s not a misreading, and yes, Dembski’s argument is hopelessly circular. This is why scientists laugh at ID. One of its leading lights, the so-called “Isaac Newton of information theory”, is making a freshman logic mistake. That makes his concept of CSI useless. Pitiful, isn’t it?
keith s
November 6, 2014 at 11:48 AM PDT
Joe If the TSZ ilk can’t make their case here why should anyone believe they can fare any better over there They can't make a case here when they've been banned. Hundreds of ID critics have been banned at UD over the years and now it's started again. You are the only person in the history of TSZ to be banned there and that was for posting links to porn.Adapa
November 6, 2014 at 10:54 AM PDT
Adapa- If the TSZ ilk can't make their case here why should anyone believe they can fare any better over there? Their garbage is garbage regardless of the forum. SheeshJoe
November 6, 2014 at 10:49 AM PDT
keith s- Your cartoon version of CSI in your hands is totally useless and circular. ID's version of CSI is also useless in your hands but it is far from circular. CSI is a hallmark of intelligent design because all observations and experiences demonstrate only intelligent agencies can produce it. And if we ever observe some other process producing it then CSI will cease to be a hallmark of design.Joe
November 6, 2014 at 10:47 AM PDT
gpuccio Why is everyone so eager to invite us to TSZ? Because unlike UD posters at TSZ don't get silently banned merely for presenting dissenting opinions. I'm sure you and the rest of the ID proponents are quite content in your safe snuggly little pillow fort here. But the real scientific world is out there, in scientific journals and laboratories and even in uncensored science blogs. If your goal is only to backslap other ID proponents you're doing great. If your goal is to present a positive case for ID to others you're not in the game, not even on the bench. You're hiding in the darkest corner of the locker room.Adapa
November 6, 2014 at 10:42 AM PDT
The most laughable thing about Dembski's CSI is that even if it were measurable, it would still be useless. The argument from CSI is hopelessly circular.keith s
November 6, 2014 at 10:38 AM PDT