
Axe on specific barriers to macro-level Darwinian Evolution due to protein formation (and linked islands of specific function)


A week ago, VJT put up a useful set of excerpts from Axe's 2010 paper on proteins and the barriers they pose to Darwinian (blind watchmaker) evolution. During onward discussions, it proved useful to focus on some excerpts where Axe spoke to numerical considerations and the linked idea of islands of specific function deeply isolated in amino acid (AA) sequence and protein fold domain space, though he did not use those exact terms.

I think it worthwhile to headline the clips for reference (instead of leaving them deep in a discussion thread):

_________________

ABSTRACT: >> Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence . . . >>

Pp 5 – 6: >> . . . we need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontaneous mutations to produce any new species-wide trait, meaning a trait that is fixed within the population through natural selection (i.e., selective sweep). Bacterial species are most conducive to this because of their large effective population sizes [note 3]. So let us assume, generously, that an ancient bacterial species sustained an effective population size of 10^10 individuals [26] while passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5 × 10^23 (= 5 × 10^9 × 10^4 × 10^10) cells that happen (by chance) to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5 × 10^23 ‘lucky survivors’ are the cells available for spontaneous mutations to accomplish whatever will be accomplished in the species. This number, then, sets the maximum probabilistic resources that can be expended on a single adaptive step. Or, to put this another way, any adaptive step that is unlikely to appear spontaneously in that number of cells is unlikely to have evolved in the entire history of the species.
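[–> For readers who want to check the arithmetic, a minimal Python sketch (mine, not from Axe's paper) reproducing the bound:

# Axe's generous upper bound on a bacterial species' probabilistic resources:
# effective population size x generations per year x years.
pop_size = 1e10       # effective population size [26]
gens_per_year = 1e4   # generations per year
years = 5e9           # five billion years
lucky_survivors = pop_size * gens_per_year * years
print(f"{lucky_survivors:.1e}")   # 5.0e+23, i.e. 5 x 10^23 cells

Any adaptive step rarer than about 1 in 5 × 10^23 is, on these generous assumptions, beyond the species' reach.]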

In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300 [27]). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions [note 4], making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^23. And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure.

Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as 20^300 / (5 × 10^23) ≈ 10^366. In other words, we are supposing that particular functions requiring a 300-residue structure are realizable through something like 10^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5 × 10^23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suitable on average at any given position? The answer is calculated as the 300th root of (5 × 10^23)^-1, which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoiding three or so unacceptable amino acids at each position along their lengths.
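[–> The 83% figure is easy to reproduce; a minimal sketch using only the quantities named in the clip:

# Average per-position tolerance f implied if 1 in 5 x 10^23 length-300
# sequences performed the function: solve f^300 = (5e23)^-1 for f.
f = (5e23) ** (-1 / 300)   # the 300th root of (5 x 10^23)^-1
print(f)                   # ~0.833, i.e. about 83%
print(20 * f)              # ~16.7, i.e. roughly 17 of the 20 amino acids

That is the degree of indifference to sequence the supposition would require.]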

No study of real protein functions suggests anything like this degree of indifference to sequence. In evaluating this, keep in mind that the indifference referred to here would have to characterize the whole protein rather than a small fraction of it. Natural proteins commonly tolerate some sequence change without complete loss of function, with some sites showing more substitutional freedom than others. But this does not imply that most mutations are harmless. Rather, it merely implies that complete inactivation with a single amino acid substitution is atypical when the starting point is a highly functional wild-type sequence (e.g., 5% of single substitutions were completely inactivating in one study [28]). This is readily explained by the capacity of well-formed structures to sustain moderate damage without complete loss of function (a phenomenon that has been termed the buffering effect [25]). Conditional tolerance of that kind does not extend to whole proteins, though, for the simple reason that there are strict limits to the amount of damage that can be sustained.

A study of the cumulative effects of conservative amino acid substitutions, where the replaced amino acids are chemically similar to their replacements, has demonstrated this [23]. Two unrelated bacterial enzymes, a ribonuclease and a beta-lactamase, were both found to suffer complete loss of function in vivo at or near the point of 10% substitution, despite the conservative nature of the changes. Since most substitutions would be more disruptive than these conservative ones, it is clear that these protein functions place much more stringent demands on amino acid sequences than the above supposition requires.

Two experimental studies provide reliable data for estimating the proportion of protein sequences that perform specified functions [–> note the terms]. One study focused on the AroQ-type chorismate mutase, which is formed by the symmetrical association of two identical 93-residue chains [24]. These relatively small chains form a very simple folded structure (Figure 5A). The other study examined a 153-residue section of a 263-residue beta-lactamase [25]. That section forms a compact structural component known as a domain within the folded structure of the whole beta-lactamase (Figure 5B). Compared to the chorismate mutase, this beta-lactamase domain has both a larger size and a more complex fold structure.

In both studies, large sets of extensively mutated genes were produced and tested. By placing suitable restrictions on the allowed mutations and counting the proportion of working genes that result, it was possible to estimate the expected prevalence of working sequences for the hypothetical case where those restrictions are lifted. In that way, prevalence values far too low to be measured directly were estimated with reasonable confidence.

The results allow the average fraction of sampled amino acid substitutions that are functionally acceptable at a single amino acid position to be calculated. By raising this fraction to the power l, it is possible to estimate the overall fraction of working sequences expected when l positions are simultaneously substituted (see reference 25 for details). Applying this approach to the data from the chorismate mutase and the beta-lactamase experiments gives a range of values (bracketed by the two cases) for the prevalence of protein sequences that perform a specified function. The reported range [25] is one in 10^77 (based on data from the more complex beta-lactamase fold; l = 153) to one in 10^53 (based on the data from the simpler chorismate mutase fold, adjusted to the same length: l = 153). As remarkable as these figures are, particularly when interpreted as probabilities, they were not without precedent when reported [21, 22]. Rather, they strengthened an existing case for thinking that even very simple protein folds can place very severe constraints on sequence. [–> Islands of function issue.]
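[–> To illustrate the method just described: inverting the reported prevalences recovers the implied average per-site tolerance. The back-calculated figures below are my own arithmetic, implied by the reported numbers rather than quoted from the paper:

# Prevalence of working sequences = f^l, with f the average per-site
# acceptable fraction. Invert the reported prevalences at l = 153.
l = 153
for prevalence in (1e-77, 1e-53):   # beta-lactamase domain; chorismate mutase
    f = prevalence ** (1 / l)
    print(f"prevalence {prevalence:.0e}: f ~ {f:.2f} ({20 * f:.1f} of 20 AAs)")
# prevalence 1e-77: f ~ 0.31 (6.3 of 20)
# prevalence 1e-53: f ~ 0.45 (9.0 of 20)

Both figures sit far below the 17-of-20 tolerance that the sampling supposition above would demand.]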

Rescaling the figures to reflect a more typical chain length of 300 residues gives a prevalence range of one in 10^151 to one in 10^104. On the one hand, this range confirms the very highly many-to-one mapping of sequences to functions. The corresponding range of m values is 10^239 (= 20^300 / 10^151) to 10^286 (= 20^300 / 10^104), meaning that vast numbers of viable sequence possibilities exist for each protein function. But on the other hand it appears that these functional sequences are nowhere near as common as they would have to be in order for the sampling problem to be dismissed. The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5 × 10^23). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.>>
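The rescaled figures and the 80-to-127-orders-of-magnitude shortfall can be checked in a few lines; a minimal Python sketch (mine, not Axe's), working in exponents to stay within floating-point range:

import math

# Rescale the l = 153 prevalences to a 300-residue chain and compare with
# the 1-in-5x10^23 resource bound from pp. 5-6.
for p153 in (77, 53):                     # prevalence = 10^-p153 at l = 153
    p300 = p153 * 300 / 153               # ~151 and ~104
    log_m = 300 * math.log10(20) - p300   # log10 of m: ~239 and ~286
    shortfall = p300 - math.log10(5e23)   # ~127 and ~80 orders of magnitude
    print(round(p300), round(log_m), round(shortfall))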

Pp 9 – 11: >> . . . If aligned but non-matching residues are part-for-part equivalents, then we should be able to substitute freely among these equivalent pairs without impairment. Yet when protein sequences were even partially scrambled in this way, such that the hybrids were about 90% identical to one of the parents, none of them had detectable function. Considering the sensitivity of the functional test, this implies the hybrids had less than 0.1% of normal activity [23]. So part-for-part equivalence is not borne out at the level of amino acid side chains.

In view of the dominant role of side chains in forming the binding interfaces for higher levels of structure, it is hard to see how those levels can fare any better. Recognizing the non-generic [–> that is, specific and context sensitive] nature of side chain interactions, Voigt and co-workers developed an algorithm that identifies portions of a protein structure that are most nearly self-contained in the sense of having the fewest side-chain contacts with the rest of the fold [49]. Using that algorithm, Meyer and co-workers constructed and tested 553 chimeric proteins that borrow carefully chosen blocks of sequence (putative modules) from any of three natural beta-lactamases [50]. They found numerous functional chimeras within this set, which clearly supports their assumption that modules have to have few side chain contacts with exterior structure if they are to be transportable.

At the same time, though, their results underscore the limitations of structural modularity. Most plainly, the kind of modularity they demonstrated is not the robust kind that would be needed to explain new protein folds. The relatively high sequence similarity (34–42% identity [50]) and very high structural similarity of the parent proteins (Figure 8) favors successful shuffling of modules by conserving much of the overall structural context. Such conservative transfer of modules does not establish the robust transportability that would be needed to make new folds. Rather, in view of the favorable circumstances, it is striking how low the success rate was. After careful identification of splice sites that optimize modularity, four out of five tested chimeras were found to be completely non-functional, with only one in nine being comparable in activity to the parent enzymes [50]. In other words, module-like transportability is unreliable even under extraordinarily favorable circumstances [–> these are not, generally speaking, standard bricks that will freely fit together in any plug-in compatible pattern to assemble a new structure] . . . .

Graziano and co-workers have tested robust modularity directly by using amino acid sequences from natural alpha helices, beta strands, and loops (which connect helices and/or strands) to construct a large library of gene segments that provide these basic structural elements in their natural genetic contexts [52]. For those elements to work as robust modules, their structures would have to be effectively context-independent, allowing them to be combined in any number of ways to form new folds. A vast number of combinations was made by random ligation of the gene segments, but a search through 10^8 variants for properties that may be indicative of folded structure ultimately failed to identify any folded proteins. After a definitive demonstration that the most promising candidates were not properly folded, the authors concluded that “the selected clones should therefore not be viewed as ‘native-like’ proteins but rather ‘molten-globule-like’” [52], by which they mean that secondary structure is present only transiently, flickering in and out of existence along a compact but mobile chain. This contrasts with native-like structure, where secondary structure is locked in to form a well-defined and stable tertiary fold . . . .

With no discernible shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how pervasive this problem is . . . Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of sequence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype. [–> Recall, courtesy Wiki, phenotype: “the composite of an organism’s observable characteristics or traits, such as its morphology, development, biochemical or physiological properties, phenology, behavior, and products of behavior (such as a bird’s nest). A phenotype results from the expression of an organism’s genes as well as the influence of environmental factors and the interactions between the two.”]

As a rough way of gauging how many new domains are typically required for new adaptive phenotypes, the SUPERFAMILY database [54] can be used to estimate the number of different protein domains employed in individual bacterial species, and the EcoCyc database [10] can be used to estimate the number of metabolic processes served by these domains. Based on analysis of the genomes of 447 bacterial species [note 11], the projected number of different domain structures per species averages 991 [note 12]. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli [note 13], provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway [note 14]. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities [note 15], something the neo-Darwinian model falls short of by a very wide margin. >>
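The closing range appears to be the per-domain prevalences compounded over the estimated number of new domains; a quick check on my reading of how the 10^159 and 10^308 arise (assuming the domains are treated as independent, which makes the exponents match exactly):

# One in 10^53 to one in 10^77 per domain, and three to four new domains
# per new metabolic pathway; multiplying independent prevalences adds
# their exponents.
best = 53 * 3    # = 159 -> one in 10^159 (simpler folds, three domains)
worst = 77 * 4   # = 308 -> one in 10^308 (complex folds, four domains)
print(best, worst)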
____________________

Those who argue for incrementalism, or exaptation and fortuitous coupling, or Lego brick-like modularity, or the like, need to address these and similar issues. END

PS: Just for the objectors eager to queue up: remember, the Darwinism support essay challenge, on actual evidence for the tree of life from the root up to the branches and twigs, is still open after more than two years. The following revealing Smithsonian Institution diagram shows the first reason why, right at the root of the tree of life:

[Figure: Smithsonian Institution tree of life diagram (Darwin-ToL-full-size-copy)]

No root, no shoots, folks. (Where the root must include a viable explanation of gated encapsulation, protein-based metabolism and cell functions, code-based protein assembly, and the von Neumann self-replication facility keyed to reproducing the cell.)

Comments
Me_Think: Wagner's Arrival of the Fittest - LIBRARY. Dembski's Being as Communion - INFORMATION. Information comes before Library. That is the gist of Dembski's new book. It is a profound book. It destroys Naturalistic Materialism. A Daisy Cutter of a bomb. The improbabilities of ID no longer matter. Information preceding Matter matters. As Dembski writes, "Matter is a myth." Kaboom. Read the book.
ppolish, November 16, 2014 at 07:05 PM (PDT)
jstanley01: "KF 'Boiling down we are back at front loading here, with a switch waiting to be flipped.' Looks like it to me. The fans of the book in question evidently think not. I'm popping popcorn."
KF is too invested in FSCO/I, so I don't think he will ever change his stance, but perhaps you may. A genotype network is not 'front loading'. Imagine your Facebook network. Your immediate friends will have interests similar to yours. As you venture out in your network and traverse your friend's friend network, or further on to your friend's friend's friend network, you will encounter someone with a totally different interest (akin to a new function; this may or may not help in generating a new phenotype). Now imagine this Facebook network of yours balled up. It took many steps to reach the node where the person with a totally different interest exists on a 2D network. In your balled network, you have to travel at most half of those steps to reach the same person. Now imagine this in higher and higher dimensions. You will find you have to travel not even a fraction of 1 step to reach the person, and you will find a huge number of persons with dissimilar interests (akin to new functions, many of them helping to build a new phenotype, or at least helping this generation survive better to start the search all over again with the advantage of the new phenotype) in just that fraction of 1 step. And that's the reason the improbabilities of ID don't matter.
Me_Think, November 16, 2014 at 06:43 PM (PDT)
ppolish: "And my read on the book is if that Source older than time wanted to design a Universal Library that enabled Free Will, the library imagined by Wagner would be pretty cool :)"
This is your take after reading the entire book? You didn't form an opinion on the robustness of genotypes, or the vanishing improbabilities of finding new functions at hyper dimensions? Amazing! If you choose to admire the frame of the Mona Lisa rather than the Mona Lisa herself, what can I say?
Me_Think, November 16, 2014 at 06:19 PM (PDT)
Z, Reality check. Please, tell me whether or not just about any arrangement of 6500 C3 parts will serve as a first class fishing reel. Patently not. There is a wiring diagram specificity involved that selects a narrow circle of functional states from the possible clumped or scattered ones; that is, "islands of function" is a term for an undeniable fact . . . one that does not go away because the tech involved is molecular, not clanking metal etc. parts. One I would like to see some current objectors simply have the responsiveness to evidence to acknowledge. Likewise for contextually relevant English language comments vs oceans of possible gibberish. Likewise for interwoven codes in old microcontrollers in the days when memory was much harder to come by. Likewise, the FSCO/I in D/RNA specifying proteins etc. In short, it seems that an evident reality is giving you oceans of trouble: islands of function, which are in action all around you. I note too, FSCO/I, as you know or should know, is relevant to config spaces that start at 500 bits and up. That's why complexity is there as part of the core description. The attempted dismissals are utterly tangential to the point. KF
kairosfocus, November 16, 2014 at 05:26 PM (PDT)
Spoiler Alert . . . finished Arrival of the Fittest and this is the last paragraph: "When we begin to study nature's libraries we aren't just investigating life's innovability or that of technology. We are shedding new light on one of the most durable and fascinating subjects in all of philosophy. And we learn that life's creativity draws from a source that is older than life, and perhaps older than time." And my read on the book is if that Source older than time wanted to design a Universal Library that enabled Free Will, the library imagined by Wagner would be pretty cool :) Dembski's latest book, Being as Communion, lays a wonderful Theistic foundation to the libraries described by Wagner. ". . . perhaps older than time." Yes, perhaps.
ppolish, November 16, 2014 at 03:34 PM (PDT)
MT: It seems to me that describing how "networks" and "neighborhoods" in "hyper dimensions" operate within the constraints of natural laws + time + chance is going to be a lot more complicated than describing how natural selection does so via differential survival rates. Lack of help along this line could explain why I'm not as scared as, perhaps, I should be. KF: "Boiling down we are back at front loading here, with a switch waiting to be flipped." Looks like it to me. The fans of the book in question evidently think not. I'm popping popcorn.
jstanley01, November 16, 2014 at 09:42 AM (PDT)
kairosfocus: "the relevant threshold is 500 bits, 72 ASCII characters, and change processes are driven by chance and necessity without intelligent configuration." Your claim was that they were isolated islands. If we can walk from one to the other, even intelligently, then they are not isolated islands.
Zachriel, November 16, 2014 at 09:13 AM (PDT)
Zachriel, the relevant threshold is 500 bits, 72 ASCII characters, and change processes are driven by chance and necessity without intelligent configuration. One Swiss Army knife has many uses; there are programs for microcontrollers that have interwoven multifunctional code that is instructions one way and data the next. Such are actually even more tightly constrained, for the obvious reason. KF
kairosfocus, November 16, 2014 at 08:24 AM (PDT)
kairosfocus: "Why, when on evidence FSCO/I will naturally come in islands of function — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design?" We can provide a couple of counterexamples. In another thread, Me_Think discussed Schultes & Bartel, "One Sequence, Two Ribozymes: Implications for the Emergence of New Ribozyme Folds," Science 2000, who showed a pathway from one functional fold to another functional fold even while maintaining the original function. This shows that the so-called islands are connected. Language is often said to have FSCO/I. We can show a pathway from a single-letter word through longer words and phrases to a complete poem in rhyme, again showing that the so-called islands are connected. Sea of Beneficence: http://www.zachriel.com/mutagenation/Sea.htm Beware a War of Words: http://www.zachriel.com/mutagenation/Beware.htm
Zachriel, November 16, 2014 at 07:18 AM (PDT)
KS: I draw your attention:
"FSCO/I will naturally come in islands of function in much larger config spaces — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design . . ."
I can back that up on billions of cases in point. Can you show why any old parts can be oriented any old how, put any old where, and connected any which way, and will readily achieve interactive complex function? Where we deal with at least 500 bits of complexity? Do you see why I am having a serious problem with your rejection of the commonplace fact that functionality dependent on interacting multiple parts depends crucially on how they are wired up? KF
kairosfocus, November 16, 2014 at 04:21 AM (PDT)
Rex, sadly, yes. KF
kairosfocus, November 16, 2014 at 04:11 AM (PDT)
KS, 62:
"You're bluffing, KF. Selection makes all the difference in the world, and you know it. (Try running Weasel — the non-latching variety :-) — without selection sometime. Make sure you have a few quintillion lifetimes to spare. You'll need them.)"
Natural selection, so called, subtracts heritable variations through culling; it patently does not innovate and add in the info in the first place. That's the job of chance variation on an already functioning life form. Where did that come from? Another of same. And that . . . oh, OOL. Where did that come from by blind watchmaker processes, on what observed capacity to do such? Oh, it must have been that. Why? Well, the alternative, we must not allow to put his unwelcome Foot on the doorstep of our temple of a priori materialism controlled Science. Why? Oh, this God idea is irrational, demon-like fairy tales that we have to get the people to grow up from. Why? Well, God is nonsense! Why? . . . well, in any case we rule a datum line that excludes OOL from our theory. Why? Because we have no robust theory. Why, when on evidence FSCO/I will naturally come in islands of function -- because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design? There you go bringing the supernatural in again! But, isn't design a process of intelligently directed configuration as we routinely see designers carrying out all around, rather than an empty declaration that here is a gap for God to fit into? That is, isn't evidence of such evidence of design, which we can inspect, whoever did it? There you go bringing the supernatural in again! (See the ideological question begging and agenda? Notice how KS has tried to dismiss the implications of correctly arranged parts working together to achieve function, pivoting on specified wiring diagram complexity, without facing the issue head-on in cases such as the 6500 C3 reel and many others? Notice the suspicious vagueness of his dismissals, and the lack of concrete cases of definitively observed evidence of FSCO/I from lucky noise leading to novel body plans with co-ordinated functionality, from OOL on up through the tree of life?) Okay, he did give us Weasel, having already argued that it is not a good example of evo by CV + NS. Take Weasel, start with the initial phrase, which Dawkins admits is a nonsense phrase. Non-functional. Subtract -- culled out by the powers of NS. Poof, no Weasel phrase left. Start again with another nonsense phrase. Subtract again. Repeat . . . Fail. As for non-latching varieties of Weasel, KS, you full well know that (a) the results published by Dawkins showed latching behaviour, (b) you full well know that by adjusting parameters latching will show up in many runs of supposed non-latching reconstructions, where (c) quite conveniently the original code is nowhere to be found. Also, you full well know (d) Weasel is a targeted search that rewards non-functional proximity to a target. That is, as usual, we have a case of intelligent design being used in an argument that tries to undermine intelligent design. Fail. Please think again. KF
kairosfocus, November 16, 2014 at 04:03 AM (PDT)
KF, all great comments but I think this one is worth repeating:
"Axe's remarks in the OP (which are being of course sidelined and ignored as usual in the haste to get back to favourite question begging talking points and personalities such as 'you're bluffing' etc.)."
RexTugwell, November 16, 2014 at 03:47 AM (PDT)
KS, 76:
"It's a disaster for ID, because the genotype networks show how easy it is for unguided evolution to move through the library and gain access to new functions. Please explain to kairosfocus that the 'islands of function' objection is toast, but everything's okay, because 'information is ID cement.'"
Have you ever spoken to a librarian about (a) what it takes for contents to be there to go in the library, and (b) what it takes to see to it that co-ordinated access is feasible without unacceptably large search costs? In short, you are back at: we got the information from nowhere for nothing, and the handy control switches are just waiting to be flipped. Please see the just above to JS for more on the specific case of flight. KF
kairosfocus, November 16, 2014 at 03:34 AM (PDT)
JS: From that SFI review:
". . . very small genetic changes can radically alter the phenotype. Some such alterations portend certain death, but a few lead to powerful new innovations: the ability to fly, for example, or the first light-sensitive cells eventually leading to photosynthesis. Searching all the genetic possibilities at random would take forever, but a species — with all the same functions, but widely varying genes — can search millions of genetic options all at once, dramatically increasing evolution's efficiency. Robustness itself is a response to environmental complexity, Wagner argues. To withstand heat, cold, moisture, and dryness, living things developed a modular toolset of molecules such as amino acids, which combined in complex ways to produce a range of innovations in response to any given problem."
The only realistic way that small shifts in genomes could create the musculature, wings, nervous controls etc. to fly would be for them to throw a switch or bank of switches in a control routine. In short, we are here begging the question: where did the info to be in the right switch position, all co-ordinated and properly arranged to be expressed embryologically and in real world environments, come from? Silence. Apart from: oh, we get something for and from nothing all the time in evolution, the magic of chance. Did these folks ever talk with an airplane -- better yet, a drone -- designer about the number of challenges to be solved? All at once, or fail. Boiling down, we are back at front loading here, with a switch waiting to be flipped. Front loading is a design hypothesis. KF
kairosfocus, November 16, 2014 at 03:27 AM (PDT)
PPS: Clipping 22, just to underscore how repeated corrections on the merits have little impact on those whose habitual rhetorical tactic is to drum out talking points over and over again regardless of cogent concerns and correctives. So, let us see if at long last they will now actually address the concerns:
>> It has long since been pointed out that config spaces are multidimensional, and that representation on coords giving degrees of freedom per component brings in, for each: location relative to an origin, 3 degrees of freedom (x, y, z), plus yaw, pitch, roll (we can use the ox axis as polar axis to define the equivalent of North). Six dimensions per part. Next, we have n parts, n being about 60 for the Abu 6500 C3, i.e. we see 360 dimensions to its config space. For a body of gas, n is of order 10^20 or better, etc. Now, what MT (who has been previously corrected but has ignored it) is raising is effectively that once we have an initial location in the config space and undertake a random walk with drift, we go to a neighbourhood ball of other points, which as the space becomes arbitrarily large becomes an ever smaller (eventually effectively vanishingly small) fraction of the space. This allows us to see how MT has begged the key questions and has as a result handed back the problem as though it were the solution, strawmannising and begging the question:
1 –> WLOG, we can discuss on digital strings, in effect chains of structured y/n questions that, taken together, specify the overall specific config. (That's how AutoCAD etc. work.)
2 –> For a space of possibilities for 500 bits, we easily see 2^500 = 3.27 × 10^150 possibilities, while at typical fast chem rxn rates, the 10^57 atoms of the sol system could only undertake about 10^87 or so states. The ratio of possible search to space of possibilities is about as a one-straw-sized blindly chosen sample to a cubical haystack as thick as our galaxy. This is the needle-in-haystack, sparse-search problem.
3 –> Now, as the Abu 6500 C3 shows, when functionality depends on specific organised interaction of many correctly located, oriented, matching, coupled parts, it sharply confines functionality to isolated islands in the config space. That is, we face the problem of deeply isolated islands of function as the needles in the haystack. (There are vastly more clumped but non-functional ways to arrange the parts [shake the reel parts up in a bag], or even more ways to have them scattered about, than ways consistent with functionality.)
4 –> Whether a blind watchmaker chance-plus-necessity search is a finely dispersed dust in the config space, or a connected dynamic-stochastic random walk with drift [think: air molecules moving around within an air mass at random, while the body as a whole drifts as part of a wind], or a combination of the two or the like, we are looking at sparse blind search in a space utterly dominated by non-functional configs.
5 –> This implies the challenge of a search for a golden search [S4GS] that puts one in an extraordinarily lucky state, on or just conveniently next to an island of function. Where, as searches of a space of cardinality W cells are subsets, the set of searches is the power set, of cardinality 2^W. And higher order searches are even more deeply exponential.
6 –> S4GS is exponentially harder than direct blind search. So, a simple reasonably random (not too far off from a flat random sample) sample is a reasonable estimator of likelihood of success. Where the very name, needle in haystack, points out how unlikely such would be to succeed. Thus, the strawman problem.
7 –> Also, implicit in the notion that a sparse search gets out of the config space challenge is the notion of a vast continent of closely connected functional states that is easily accessible from plausible initial conditions. The case of the 6500 C3 reel and things like protein assembly in the cell or the complex integrative flow network of cellular metabolism should serve to show how this begs the question.
8 –> In reply, we say: show us this sort of config space topology. Where, as just one case, the freshly dead show us already just how close to functional non-functional states can be. >>
kairosfocus, November 16, 2014 at 03:18 AM (PDT)
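[–> A quick numerical check on the sparse-search ratio in the clip above; a minimal Python sketch (mine, not part of the original comment):

import math

# ~10^87 atom-scale events available in the solar system vs 2^500
# configurations for a 500-bit specification.
log_space = 500 * math.log10(2)    # ~150.5, i.e. 2^500 ~ 3.27 x 10^150
log_events = 87
print(f"2^500 = 10^{log_space:.1f}; searchable fraction ~ 10^{log_events - log_space:.1f}")
# searchable fraction ~ 10^-63.5, the one-straw-to-a-galaxy-thick-haystack ratio.]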
PS: Just to flesh out, here is my reply to MT, from 28 above:
>> I comment on points interwoven with your argument:
[MT:] >> A ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube's volume as dimensions increase. >>
a: Yes, the neighbourhood [mathematical senses are intended, extending Hamming distance] of a point in a config space of large dimensionality and range of possible configs will increasingly be a tiny fraction of the space.
b: Mix in sharply restricted resources of about 10^87 possible atomic-event-scale moves in the sol system [10^111 for the cosmos as a whole as observed], which will be a vanishingly small fraction of at least 3.27 × 10^150 to 1.07 × 10^301 possibilities for just 500 – 1,000 bits to specify cells in the space, i.e. as many dimensions.
c: FSCO/I, for reasons already pointed out, will be deeply isolated, and you have a blind, no-steering-intelligence search on chance plus necessity, a dynamic-stochastic process.
d: Sampling theory will rapidly tell you that under such circumstances you have little or no warrant for hoping to find zones of interest X that are isolated in the space, where the set of clusters of cells z1, z2, . . . zn (the islands of function collectively) is a very small fraction, for reasons based on constraints on configs imposed by interactive functionally specific organisation.
e: Blind chance and mechanical necessity is not a reasonable search paradigm. Intelligent design routinely produces FSCO/I.
[MT:] >> I will quote Wagner himself: This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions. >>
f: As the proportion of searchable cells relative to the possibilities W falls away exponentially with number of bits, the search becomes ever more sparse and likely to be unfruitful. Beyond 500 – 1,000 bits of space (and bits is WLOG) it is patently futile. Matters not if you have a dust or a random walk with drift or whatever combination of the two or whatever.
g: You are inadvertently confirming the empirical strength of the logic of the design inference explanatory filter.
[MT:] >> What this means: In a network of N nodes and N-1 neighbors, if in 1 dimension 10 steps are required to discover a new genotype/procedure, in higher dimensions these 10 steps reduce drastically to a fraction of 1 step! >>
h: Again, restating the problem of sparse blind search for needles in a vast haystack as if that were the solution.
i: The implicit assumption, in the context of the Tree of Life model, is that you are already on an imagined vast continent of function, with nicely behaved fitness functions that allow near-neighbourhood searches to branch on up to the twigs such as we are on.
j: That is why I first put up the Smithsonian TOL, to remind us that all of this has to start with blind watchmaker mechanisms in Darwin's pond or the like, and you have to find the shoreline of function in a context of gated, encapsulated, self-assembling metabolic automata that use codes to control assembly machines to make the vital proteins, which are needed in the hundreds for just the first relevant cell.
k: Where there is zero reason to believe on evidence that the sort of islands of function imposed by interactive functional organisation vanish for ribosomes or embryologically and ecologically feasible body plans.
l: So, the issue of resolving the blind watchmaker thesis on empirical evidence and evident reason — not imposed a priori Lewontin-Sagan style materialist ideology — remains. Perhaps, you too would wish to take a serious try at the two-year-long TOL challenge? >>
Merely dressing the sparse search for needles in a large haystack problem up in somewhat different terms about neighbourhoods in n dimensions does not change the problem. Again, MT has been restating the problem and presenting the problem as the solution, with the underlying questions being begged: first, getting to the first life in cells from Darwin's pond and the like; and second, getting to major body plans requiring 10 – 100+ million base increments in DNA. With, as a side serving, the point that AA chains in proteins are not merely like Lego brick modules; as folding-functioning interactions occur all along the chains, there is a wholeness issue that means that the modularity hope fails . . . cf. Axe's remarks in the OP (which are being of course sidelined and ignored as usual in the haste to get back to favourite question-begging talking points and personalities such as "you're bluffing" etc.).
kairosfocus, November 16, 2014 at 03:15 AM (PDT)
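[–> Wagner's shrinking-ball figures quoted in the clip can be reproduced with the standard n-ball volume formula; a sketch assuming his 75% starting ratio refers to a disc occupying 75% of a square in two dimensions (my assumption, though it matches his 49 / 28 / 14.7 percent sequence exactly):

import math

def ball_fraction(d, r):
    # volume of a d-ball of radius r divided by the volume of a unit cube
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d

r = math.sqrt(0.75 / math.pi)   # radius giving a 75% fraction in 2 dimensions
for d in range(2, 7):
    print(d, round(100 * ball_fraction(d, r), 1))
# 2: 75.0, 3: 48.9, 4: 28.1, 5: 14.7, 6: 7.0 -- a fixed-radius neighbourhood
# occupies an ever-smaller fraction of the space as dimension rises.]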
F/N: Again, as shown in 22 and 28 above, mere repetition of assertions that repackage the problem and present it as if sparse blind search for needles in haystacks were the solution only reveals the lack of real answers to the challenge. And BTW, that is the exact reason why the challenge to address the tree of life from the root up is so pivotal. If Darwinists cannot freely address this and matter-of-factly point to the empirical evidence that shows how OOL is in reach of Darwin's pond or the like, and how major body plans are in reasonable reach of incremental blind chance variation and culling by differential reproductive success in ecological environments, they do not have answers apart from ideological question-begging. Notice: no serious and solid answers forthcoming to that challenge for over two years now. It remains the truth that the only observed source of FSCO/I is intelligently directed configuration (backed up by the sparse blind search for needles in a very large haystack issue), and so we have excellent reason to infer that the copious FSCO/I in life forms, from the cell up, strongly indicates design as material cause. KF
kairosfocus, November 16, 2014 at 03:02 AM (PDT)
jstanley01 @ 79:
"Actually, whether species can overcome the odds against innovation via a random search of 'millions of genetic options all at once' ought to be a point that is empirically testable. Does Wagner report any?"
'All at once' refers to the fraction of 1 step needed to reach a new function in a hyper dimension network. So yes, it is proven. You all are stuck on general probabilities. What Wagner describes is networks and neighborhoods at hyper dimensions, where your probabilities have little value.
Me_Think, November 15, 2014 at 09:30 PM (PDT)
Actually, whether species can overcome the odds against innovation via a random search of "millions of genetic options all at once" ought to be a point that is empirically testable. Does Wagner report any?
jstanley01, November 15, 2014 at 09:14 PM (PDT)
jstanley01, That's the best criticism you could come up with? The use of the word "millions" in a review of the book?
keith s, November 15, 2014 at 08:43 PM (PDT)
From Paleolibrarian's review, it sounds to me like Wagner's Arrival of the Fittest represents an update to Punctuated Equilibrium's just-so yarns. Not that there's anything wrong with that, sez he. The Santa Fe Institute's review, I'd guess, nutshells the core of Wagner's argument with pith, writing:
"Wagner shows how ... [s]earching all the genetic possibilities at random would take forever, but a species — with all the same functions, but widely varying genes — can search millions of genetic options all at once, dramatically increasing evolution's efficiency."
Millions? Really? Wow. That's a lot! Behold Your Doom, Intelligent Design Ad-vo-cates! Ooo. Scary. But if you thought that was horrific, just you wait until somebody publishes Arrival of the Library. Then your cheap tuxedos are really gonna bust a seam, buckaroos!
jstanley01, November 15, 2014 at 08:30 PM (PDT)
ppolish:
"Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID."
Are you kidding? It's a disaster for ID, because the genotype networks show how easy it is for unguided evolution to move through the library and gain access to new functions. Please explain to kairosfocus that the 'islands of function' objection is toast, but everything's okay, because "information is ID cement." If "information is ID cement", then genotype networks are ID cement shoes.
keith s, November 15, 2014 at 08:09 PM (PDT)
me-think, in case you do not know, Neo-Darwinism is about 'creationism' too. More specifically, it is about how desperately atheists want Theism/Creationism to not be true: "Instead of presenting scientific evidence that shows atheism to be true (or probable), the neo-atheists moralize about how much better the world would be if only atheism were true. Far from demonstrating that God does not exist, the neo-atheists merely demonstrate how earnestly they desire that God not exist.8 The God of Christianity is, in their view, the worst thing that could befall reality. According to Richard Dawkins, for instance, the Judeo-Christian God “is arguably the most unpleasant character in all of fiction. Jealous and proud of it; a petty, unjust unforgiving control-freak; a vindictive, bloodthirsty ethnic-cleanser; a misogynistic homophobic racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully.”9 Dawkins’s obsession with the Christian God borders on the pathological. Yet, he underscores what has always been the main reason people reject God: they cannot believe that God is good. Eve, in the Garden of Eden, rejected God because she thought he had denied her some benefit that she should have, namely, the fruit from the Tree of the Knowledge of Good and Evil. 10 Clearly, a God who denies creatures benefits that they think they deserve cannot be good. Indeed, a mark of our fallenness is that we fail to see the irony in thus faulting God. Should we not rather trust that the things God denies us are denied precisely for our benefit? Likewise, the neo-atheists find lots of faults with God, their list of denied benefits being much longer than Eve’s—no surprise here since they’ve had a lot longer to compile such a list!" William Dembski - pg. 10-11 - Finding a Good God in an evil World - design inference http://designinference.com/documents/2009.05.end_of_xty.pdf Charles Darwin's use of theology in the Origin of Species - STEPHEN DILLEY Abstract This essay examines Darwin's positiva (or positive) use of theology in the first edition of the Origin of Species in three steps. First, the essay analyses the Origin's theological language about God's accessibility, honesty, methods of creating, relationship to natural laws and lack of responsibility for natural suffering; the essay contends that Darwin utilized positiva theology in order to help justify (and inform) descent with modification and to attack special creation. Second, the essay offers critical analysis of this theology, drawing in part on Darwin's mature ruminations to suggest that, from an epistemic point of view, the Origin's positiva theology manifests several internal tensions. Finally, the essay reflects on the relative epistemic importance of positiva theology in the Origin's overall case for evolution. The essay concludes that this theology served as a handmaiden and accomplice to Darwin's science. http://journals.cambridge.org/action/displayAbstract;jsessionid=376799F09F9D3CC8C2E7500BACBFC75F.journals?aid=8499239&fileId=S000708741100032X Methodological Naturalism: A Rule That No One Needs or Obeys - Paul Nelson - September 22, 2014 Excerpt: It is a little-remarked but nonetheless deeply significant irony that evolutionary biology is the most theologically entangled science going. Open a book like Jerry Coyne's Why Evolution is True (2009) or John Avise's Inside the Human Genome (2010), and the theology leaps off the page. 
A wise creator, say Coyne, Avise, and many other evolutionary biologists, would not have made this or that structure; therefore, the structure evolved by undirected processes. Coyne and Avise, like many other evolutionary theorists going back to Darwin himself, make numerous "God-wouldn't-have-done-it-that-way" arguments, thus predicating their arguments for the creative power of natural selection and random mutation on implicit theological assumptions about the character of God and what such an agent (if He existed) would or would not be likely to do.,,, ,,,with respect to one of the most famous texts in 20th-century biology, Theodosius Dobzhansky's essay "Nothing in biology makes sense except in the light of evolution" (1973). Although its title is widely cited as an aphorism, the text of Dobzhansky's essay is rarely read. It is, in fact, a theological treatise. As Dilley (2013, p. 774) observes: "Strikingly, all seven of Dobzhansky's arguments hinge upon claims about God's nature, actions, purposes, or duties. In fact, without God-talk, the geneticist's arguments for evolution are logically invalid. In short, theology is essential to Dobzhansky's arguments.",, http://www.evolutionnews.org/2014/09/methodological_1089971.html Nothing in biology makes sense except in light of theology? - Dilley S. - 2013 Abstract This essay analyzes Theodosius Dobzhansky's famous article, "Nothing in Biology Makes Sense Except in the Light of Evolution," in which he presents some of his best arguments for evolution. I contend that all of Dobzhansky's arguments hinge upon sectarian claims about God's nature, actions, purposes, or duties. Moreover, Dobzhansky's theology manifests several tensions, both in the epistemic justification of his theological claims and in their collective coherence. I note that other prominent biologists--such as Mayr, Dawkins, Eldredge, Ayala, de Beer, Futuyma, and Gould--also use theology-laden arguments. I recommend increased analysis of the justification, complexity, and coherence of this theology. http://www.ncbi.nlm.nih.gov/pubmed/23890740
bornagain77, November 15, 2014 at 07:54 PM (PDT)
ppolish @ 70:
"Keith, I think I was a bit misleading. As a raging Christian, my job/joy is to convert you - not vice versa. Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID. A proof of ID? I did not even think that would be possible."
Great! Can you please explain to other ID proponents how easy it is to traverse the network of 'libraries' (it is a metaphor for a genotype network) and that all those ID improbabilities are absurd?
Me_Think, November 15, 2014 at 07:51 PM (PDT)
bornagain77, Thank you for accepting that ID is about Creationism.
Me_Think, November 15, 2014 at 07:45 PM (PDT)
"don’t bring in God as an explanation, unless you think God is the ID agent." ha ha ha,,,, If you haven't noticed by now, I do think 'God is the ID agent', and I think that our present science is definitely strong enough to make that inference,,, In fact, as illustrated in post 58, I think the evidence is overwhelming towards the inference towards God as the 'ID agent'.bornagain77
November 15, 2014
November
11
Nov
15
15
2014
07:36 PM
7
07
36
PM
PDT
Keith, I think I was a bit misleading. As a raging Christian, my job/joy is to convert you - not vice versa. Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID. A proof of ID? I did not even think that would be possible.
ppolish, November 15, 2014 at 07:29 PM (PDT)
