Uncommon Descent Serving The Intelligent Design Community

Axe on specific barriers to macro-level Darwinian Evolution due to protein formation (and linked islands of specific function)


A week ago, VJT put up a useful set of excerpts from Axe’s 2010 paper on proteins and barriers they pose to Darwinian, blind watchmaker thesis evolution. During onward discussions, it proved useful to focus on some excerpts where Axe spoke to some numerical considerations and the linked idea of islands of specific function deeply isolated in AA sequence and protein fold domain space, though he did not use those exact terms.

I think it worth the while to headline the clips, for reference (instead of leaving them deep in a discussion thread):

_________________

ABSTRACT: >> Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a miniscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence . . . >>

Pp 5 – 6: >> . . . we need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontaneous mutations to produce any new species-wide trait, meaning a trait that is fixed within the population through natural selection (i.e., selective sweep). Bacterial species are most conducive to this because of their large effective population sizes.3 So let us assume, generously, that an ancient bacterial species sustained an effective population size of 10^10 individuals [26] while passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5 × 10^23 (= 5 × 10^9 × 10^4 × 10^10) cells that happen (by chance) to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5 × 10^23 ‘lucky survivors’ are the cells available for spontaneous mutations to accomplish whatever will be accomplished in the species. This number, then, sets the maximum probabilistic resources that can be expended on a single adaptive step. Or, to put this another way, any adaptive step that is unlikely to appear spontaneously in that number of cells is unlikely to have evolved in the entire history of the species.
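The arithmetic behind the 5 × 10^23 figure is easy to check; a minimal sketch of the multiplication quoted above, using the excerpt's own numbers:

```python
# Reproducing the 'lucky survivors' count from the excerpt above,
# using the generous assumptions Axe states for an ancient bacterial species.
effective_pop_size = 10**10     # individuals sustained at any one time [26]
generations_per_year = 10**4
years = 5 * 10**9               # five billion years

lucky_survivors = years * generations_per_year * effective_pop_size
print(f"{lucky_survivors:.1e}")  # 5.0e+23, the paper's probabilistic-resources bound
```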

In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300 [27]). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions,4 making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^23. And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure.

Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as 20^300 / 5×10^23 ~ 10^366. In other words, we are supposing that particular functions requiring a 300-residue structure are realizable through something like 10^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5×10^23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suitable on average at any given position? The answer is calculated as the 300th root of (5×10^23)^-1, which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoiding three or so unacceptable amino acids at each position along their lengths.
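The "300th root" step above can be reproduced directly; a quick sketch of the arithmetic Axe describes (not his code):

```python
# If 1 in 5e23 length-300 sequences worked, the per-position acceptable
# fraction f would have to satisfy f**300 = 1/5e23; solve for f.
L = 300
prevalence = 1 / (5 * 10**23)

f = prevalence ** (1 / L)   # per-position acceptable fraction
print(f"{f:.3f}")           # ~0.834, i.e. about 83%
print(f"{20 * f:.1f}")      # ~16.7 of the 20 amino acids acceptable per position
```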

No study of real protein functions suggests anything like this degree of indifference to sequence. In evaluating this, keep in mind that the indifference referred to here would have to characterize the whole protein rather than a small fraction of it. Natural proteins commonly tolerate some sequence change without complete loss of function, with some sites showing more substitutional freedom than others. But this does not imply that most mutations are harmless. Rather, it merely implies that complete inactivation with a single amino acid substitution is atypical when the starting point is a highly functional wild-type sequence (e.g., 5% of single substitutions were completely inactivating in one study [28]). This is readily explained by the capacity of well-formed structures to sustain moderate damage without complete loss of function (a phenomenon that has been termed the buffering effect [25]). Conditional tolerance of that kind does not extend to whole proteins, though, for the simple reason that there are strict limits to the amount of damage that can be sustained.

A study of the cumulative effects of conservative amino acid substitutions, where the replaced amino acids are chemically similar to their replacements, has demonstrated this [23]. Two unrelated bacterial enzymes, a ribonuclease and a beta-lactamase, were both found to suffer complete loss of function in vivo at or near the point of 10% substitution, despite the conservative nature of the changes. Since most substitutions would be more disruptive than these conservative ones, it is clear that these protein functions place much more stringent demands on amino acid sequences than the above supposition requires.

Two experimental studies provide reliable data for estimating the proportion of protein sequences that perform specified functions [–> note the terms]. One study focused on the AroQ-type chorismate mutase, which is formed by the symmetrical association of two identical 93-residue chains [24]. These relatively small chains form a very simple folded structure (Figure 5A). The other study examined a 153-residue section of a 263-residue beta-lactamase [25]. That section forms a compact structural component known as a domain within the folded structure of the whole beta-lactamase (Figure 5B). Compared to the chorismate mutase, this beta-lactamase domain has both larger size and a more complex fold structure.

In both studies, large sets of extensively mutated genes were produced and tested. By placing suitable restrictions on the allowed mutations and counting the proportion of working genes that result, it was possible to estimate the expected prevalence of working sequences for the hypothetical case where those restrictions are lifted. In that way, prevalence values far too low to be measured directly were estimated with reasonable confidence.

The results allow the average fraction of sampled amino acid substitutions that are functionally acceptable at a single amino acid position to be calculated. By raising this fraction to the power l, it is possible to estimate the overall fraction of working sequences expected when l positions are simultaneously substituted (see reference 25 for details). Applying this approach to the data from the chorismate mutase and the beta-lactamase experiments gives a range of values (bracketed by the two cases) for the prevalence of protein sequences that perform a specified function. The reported range [25] is one in 10^77 (based on data from the more complex beta-lactamase fold; l = 153) to one in 10^53 (based on the data from the simpler chorismate mutase fold, adjusted to the same length: l = 153). As remarkable as these figures are, particularly when interpreted as probabilities, they were not without precedent when reported [21, 22]. Rather, they strengthened an existing case for thinking that even very simple protein folds can place very severe constraints on sequence. [–> Islands of function issue.]

Rescaling the figures to reflect a more typical chain length of 300 residues gives a prevalence range of one in 10^151 to one in 10^104. On the one hand, this range confirms the very highly many-to-one mapping of sequences to functions. The corresponding range of m values is 10^239 (= 20^300/10^151) to 10^286 (= 20^300/10^104), meaning that vast numbers of viable sequence possibilities exist for each protein function. But on the other hand it appears that these functional sequences are nowhere near as common as they would have to be in order for the sampling problem to be dismissed. The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10^23). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling.>>
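The rescaling from l = 153 to a 300-residue chain follows from the same per-position extrapolation; a small sketch of how the quoted exponents relate (my paraphrase of the method described in the excerpt, not Axe's code):

```python
# Prevalence 10**e at length l_from implies a per-position log10 fraction
# of e / l_from; a different length scales the exponent proportionally.
def rescale_exponent(e, l_from, l_to):
    return e * l_to / l_from

print(round(rescale_exponent(-77, 153, 300)))   # -151 (beta-lactamase fold)
print(round(rescale_exponent(-53, 153, 300)))   # -104 (chorismate mutase fold)

# The corresponding m values: 20**300 is about 10**390.3, so m runs from
# roughly 10**(390 - 151) ~ 10**239 up to 10**(390 - 104) ~ 10**286.
```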

Pp 9 – 11: >> . . . If aligned but non-matching residues are part-for-part equivalents, then we should be able to substitute freely among these equivalent pairs without impairment. Yet when protein sequences were even partially scrambled in this way, such that the hybrids were about 90% identical to one of the parents, none of them had detectable function. Considering the sensitivity of the functional test, this implies the hybrids had less than 0.1% of normal activity [23]. So part-for-part equivalence is not borne out at the level of amino acid side chains.

In view of the dominant role of side chains in forming the binding interfaces for higher levels of structure, it is hard to see how those levels can fare any better. Recognizing the non-generic [–> that is specific and context sensitive] nature of side chain interactions, Voigt and co-workers developed an algorithm that identifies portions of a protein structure that are most nearly self-contained in the sense of having the fewest side-chain contacts with the rest of the fold [49]. Using that algorithm, Meyer and co-workers constructed and tested 553 chimeric proteins that borrow carefully chosen blocks of sequence (putative modules) from any of three natural beta-lactamases [50]. They found numerous functional chimeras within this set, which clearly supports their assumption that modules have to have few side chain contacts with exterior structure if they are to be transportable.

At the same time, though, their results underscore the limitations of structural modularity. Most plainly, the kind of modularity they demonstrated is not the robust kind that would be needed to explain new protein folds. The relatively high sequence similarity (34–42% identity [50]) and very high structural similarity of the parent proteins (Figure 8) favors successful shuffling of modules by conserving much of the overall structural context. Such conservative transfer of modules does not establish the robust transportability that would be needed to make new folds. Rather, in view of the favorable circumstances, it is striking how low the success rate was. After careful identification of splice sites that optimize modularity, four out of five tested chimeras were found to be completely non-functional, with only one in nine being comparable in activity to the parent enzymes [50]. In other words, module-like transportability is unreliable even under extraordinarily favorable circumstances [–> these are not generally speaking standard bricks that will freely fit together in any freely plug-in compatible pattern to assemble a new structure] . . . .

Graziano and co-workers have tested robust modularity directly by using amino acid sequences from natural alpha helices, beta strands, and loops (which connect helices and/or strands) to construct a large library of gene segments that provide these basic structural elements in their natural genetic contexts [52]. For those elements to work as robust modules, their structures would have to be effectively context-independent, allowing them to be combined in any number of ways to form new folds. A vast number of combinations was made by random ligation of the gene segments, but a search through 10^8 variants for properties that may be indicative of folded structure ultimately failed to identify any folded proteins. After a definitive demonstration that the most promising candidates were not properly folded, the authors concluded that “the selected clones should therefore not be viewed as ‘native-like’ proteins but rather ‘molten-globule-like’” [52], by which they mean that secondary structure is present only transiently, flickering in and out of existence along a compact but mobile chain. This contrasts with native-like structure, where secondary structure is locked in to form a well-defined and stable tertiary fold . . . .

With no discernable shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how pervasive this problem is . . . Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of sequence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype. [–> Recall, courtesy Wiki, phenotype: “the composite of an organism’s observable characteristics or traits, such as its morphology, development, biochemical or physiological properties, phenology, behavior, and products of behavior (such as a bird’s nest). A phenotype results from the expression of an organism’s genes as well as the influence of environmental factors and the interactions between the two.”]

As a rough way of gauging how many new domains are typically required for new adaptive phenotypes, the SUPERFAMILY database [54] can be used to estimate the number of different protein domains employed in individual bacterial species, and the EcoCyc database [10] can be used to estimate the number of metabolic processes served by these domains. Based on analysis of the genomes of 447 bacterial species,11 the projected number of different domain structures per species averages 991.12 Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli,13 provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway.14 In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities,15 something the neo-Darwinian model falls short of by a very wide margin. >>
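A hedged note on the final range: the quoted one-in-10^159 to one-in-10^308 figures appear consistent with simply compounding the earlier single-fold prevalences over three to four new domains. This is my inference from the numbers in the excerpt, not Axe's footnoted derivation:

```python
# Compounding single-fold prevalence over several required domains:
# k independent domains at prevalence 10**e gives roughly 10**(k * e).
# (Inferred reading of the quoted range; Axe's own footnote 15 is not shown.)
best_case = 3 * -53    # three domains at the chorismate-mutase figure
worst_case = 4 * -77   # four domains at the beta-lactamase figure
print(best_case, worst_case)  # -159 -308, matching the quoted range
```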
____________________

Those who argue for incrementalism or exaptation and fortuitous coupling or Lego brick-like modularity or the like need to address these and similar issues. END

PS: For the objectors eager to queue up, just remember: the Darwinism support essay challenge, on actual evidence for the tree of life from the root up to the branches and twigs, is still open after over two years, with the following revealing Smithsonian Institution diagram showing the first reason why, right at the root of the tree of life:

[Image: Smithsonian Institution tree of life diagram (file: Darwin-ToL-full-size-copy)]

No root, no shoots, folks.  (Where, the root must include a viable explanation of gated encapsulation, protein based metabolism and cell functions, code based protein assembly and the von Neumann self replication facility keyed to reproducing the cell.)

Comments
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
When? I honestly believe I have provided more quotes from the book than you have. Am I wrong? Mung
ppolish: MeThink, make a denominator gigantic and you end up with a really small fraction.
Obviously. At higher dimensions, not only is the cube volume greater, the hypercube is clustered too. I will give an example:
Imagine your Facebook network. Your immediate friends will have interests similar to yours. As you venture out in your network and traverse your friend's friend network, or further on to your friend's friend's friend network, you will encounter someone with totally different interests (akin to a new function - this may or may not help in generating a new phenotype). Now imagine this Facebook network of yours balled up. It took many steps to reach the node where the person with a totally different interest exists on a 2D network. In your balled network, you have to travel at most half the steps to reach the same person. Now imagine this in higher and higher dimensions (perhaps like crushing the balled Facebook network further). You will find you have to travel not even a fraction of 1 step to reach the person, and you will find a huge number of persons with dissimilar interests (akin to new functions - many of them helping to build a new phenotype, or at least helping this generation survive better to start the search all over again with the advantage of the new phenotype) in just that fraction of 1 step, and that's the reason the improbabilities of ID don't matter.
I don't think you are familiar with 'search'. You should read up on your own ID concepts like 'No Free Lunch' and 'Conservation of Information' (yes, it is about search, though oddly named), which discuss search. If you want to read more about landscape search, read Axe's papers, which talk of sparse landscapes.
ppolish: And the "10 steps reduces drastically to a fraction of 1 step!" - How do you take a tiny fraction of 1 step? Just think about stepping? I think Wagner was referring to distance, not tiny fractions of 1 step.
I was talking of a random walk (which is a stochastic process) - it is not a literal walk, of course, so although not exactly 1 step, it will be close to 1 step. Of course, this may not be true for every hypercube network; the steps required may vary, and yes, they can be correlated to distance in the network.
ppolish: BTW, is the programming code behind Wagner's hyperastronomical library simulation public and subject to peer review? Just curious.
Not sure about that, but both Wagner and the University of Zurich have a lot of related software and material: publications, software and data you can search for on the University of Zurich website too. Me_Think
MeThink, make a denominator gigantic and you end up with a really small fraction. And the "10 steps reduces drastically to a fraction of 1 step!" - How do you take a tiny fraction of 1 step? Just think about stepping? I think Wagner was referring to distance, not tiny fractions of 1 step. BTW, is the programming code behind Wagner's hyperastronomical library simulation public and subject to peer review? Just curious. ppolish
ppolish @124
If there are random walkers, they are cheating by following the paths laid down for the guided walkers by the mathematically designed Hypercube. The random walkers have stumbled upon a secret of ID.
Where did guided walkers come from? What path are you talking about? The network search is still random; it's just that, since the dimensions are high, the search space is reduced drastically. I think that concept has been discussed above in this thread. Here: @ 11
Imagine a solution circle (the circle within which the solution exists) of radius 10 cm inside a 100 cm square search space. The area which needs to be searched for the solution is π × 10^2 ≈ 314.16. The total search area is 100 × 100 = 10,000, so the % of area to be searched is (314.16/10,000) × 100 ≈ 3.14%. In 3 dimensions, the solution volume will be 4/3 × π × 10^3 ≈ 4,188.79, and the space to search is now a cube (because of 3 dimensions): 100^3. Thus the % of volume to be searched falls to just 4,188.79/100^3 ≈ 0.42%. In general, the hypervolume of a sphere of dimension d and radius r is (π^(d/2) × r^d)/Γ(d/2 + 1), and the hypervolume of a cube of side s is s^d. At 10 dimensions, the fraction of the volume to search reduces to just 0.0000000255%. But in nature, the actual search area is incredibly small. As Wagner points out in Chapter Six: "In the number of dimensions where our circuit library exists—get ready for this—the sphere contains neither 0.1 percent, 0.01 percent, nor 0.001 percent. It contains less than one 10^-100th of the library."
@ 26
The concept is quite simple: A ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube's volume as the number of dimensions increases. I will quote Wagner himself: This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions. What this means: In a network of N nodes and N-1 neighbors, if in 1 dimension 10 steps are required to discover a new genotype/procedure, in higher dimensions these 10 steps reduce drastically to a fraction of 1 step!
HTH. I don't think I can explain any better. Me_Think
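Wagner's shrinking-volume figures quoted above can be reproduced with the standard d-dimensional ball-volume formula; a sketch assuming the radius is fixed so that the 2-D ratio starts at 75 percent:

```python
import math

def ball_volume(d, r):
    # Volume of a d-dimensional ball: pi**(d/2) * r**d / gamma(d/2 + 1)
    return math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)

# Fix r so the 2-D ball fills 75% of its bounding square (side 2) ...
r = math.sqrt(0.75 * 4 / math.pi)
# ... then watch the filled fraction fall as the dimension grows.
for d in range(2, 6):
    fraction = ball_volume(d, r) / 2 ** d
    print(d, f"{fraction:.1%}")   # 75.0%, 48.9%, 28.1%, 14.7%
```

The 49, 28, and 14.7 percent figures in the quote come out directly, which suggests this sphere-in-cube reading of Wagner's example is the intended one.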
Thank you MeThink, that makes sense. It appears a random walker would not need to travel much further than a guided walker to reach the destination. If there are random walkers, they are cheating by following the paths laid down for the guided walkers by the mathematically designed Hypercube. The random walkers have stumbled upon a secret of ID. Unless the mathematically elegant Hypercube structure just poofed into existence. Not. MeThink, I'll understand if you need to do a face palm:) ppolish
ppolish @ 122 The network is based on real genotype and metabolism data. You have to use computers to do a random walk because there is no other way. E.g.: take the 5000 metabolisms required for life; the number of vertices of the hypercube graph (which is the representation of the network at 5000 dimensions) will be 2^n = 2^5000 ≈ 1.4 x 10^1505, and the number of edges of the graph will be 5000 x 2^(5000-1) ≈ 3.5 x 10^1508, so you need a cluster of computers to do all the network and random walk calculations - based on real data. Me_Think
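The vertex and edge counts quoted follow from the standard hypercube-graph formulas (Q_n has 2^n vertices and n·2^(n-1) edges); a quick check in log space, since the numbers dwarf floating-point range:

```python
import math

n = 5000  # one dimension per metabolic reaction, as in the comment above
log10_vertices = n * math.log10(2)                     # 2**n vertices
log10_edges = math.log10(n) + (n - 1) * math.log10(2)  # n * 2**(n-1) edges

def sci(log10_x):
    # Render 10**log10_x as 'm x 10^e' without ever forming the huge number.
    e = int(log10_x)
    return f"{10 ** (log10_x - e):.1f} x 10^{e}"

print(sci(log10_vertices))  # 1.4 x 10^1505
print(sci(log10_edges))     # 3.5 x 10^1508
```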
MeThink, Wagner discovered and/or invented the hyper dimensional library in his high powered computer lab. Did he test his idea in an actual Bio Lab? Is it even testable, or maybe more like a "multiverse" or "many worlds" kind of idea. ppolish
Me_Think, I'm just trying to fill in the gaps that keiths promised to fill. In case you missed it: keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
Feel free to quote from the book and help fill in the gaps. Mung
Since keiths can't be bothered, I'll continue:
But the biggest mystery about evolution eluded his [Darwin's] theory, and he [Darwin] couldn't even get close to solving it.
Nothing new here. Nothing non-Darwinian. And certainly nothing anti-Darwinian. Move along. Mung
Mung @ 188 Do you really think 'intelligence' in Wagner's book refers to Intelligent Designer ? Can you explain how you came to that astonishing conclusion ? Me_Think
Andreas Wagner presents a compelling, authoritative, and up-to-date case for bottom up intelligence in biological evolution. - George Dyson
Mung
Mung, Of course it reveals the hyperdimensional structure and how it can help in reducing the improbabilities of new phenotype 'search'. Me_Think
A radical departure from the mainstream perspective on Darwinian evolution. - Rolf Dobelli
But not non-Darwinian. And certainly not anti-Darwinian. keiths sez so. Mung
..reveals the astonishing hidden structure of evolution, long overlooked by biologists... - Philip Ball
Mung
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
When? keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
Well gee, since I got tired of waiting for you to do more than posture:
...contains brand new scientific insights... - Matt Ridley
Nothing new here. Move along. Mung
ppolish: maybe “oasis” instead of “island” is better metaphor. Step in the direction of one grain of sand amidst the Innumerable grains of hyperastronomical sand. That's not just a different metaphor, but a different claim, which was that there exists no pathway. Your new metaphor is interesting, but we're considering selectable pathways, not neutral evolution, so we know which way to step. If there are only a few selectable paths, then evolution will eventually hit upon them. On the other hand, if there are a great multitude of pathways, it's possible that evolution could stall on local peaks; however, recombination allows jumping between local peaks. Zachriel
Zachriel, maybe "oasis" instead of "island" is better metaphor. Step in the direction of one grain of sand amidst the Innumerable grains of hyperastronomical sand. Sure, the oasis is just a step away. One really small step. But which step? ppolish
kairosfocus: Z, Reality check. You didn't respond, but just repeated your claim. If we can walk between the purported islands without getting our feet wet, then they aren't islands — by definition. Zachriel
franklin, see my response here Mung
mung
When do you plan to begin quoting from the book?
When do you plan on returning to our conversation to defend your assertions about hemoglobin? Never? In case you've forgotten, you abandoned your claims in this thread: https://uncommondesc.wpengine.com/intelligent-design/denying-the-truth-is-not-the-same-as-not-knowing-it franklin
keiths:
Natural Selection can preserve innovations, but it cannot create them.
This is from the book? Which page? not keiths:
Natural Selection can eliminate innovations, but it cannot create them.
This is from the book? Which page? If natural selection can preserve innovations why can't it eliminate innovations? Mung
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
When do you plan to begin quoting from the book? Mung
keiths:
You’re bluffing, KF. Selection makes all the difference in the world, and you know it. (Try running Weasel — the non-latching variety) — without selection sometime. Make sure you have a few quintillion lifetimes to spare. You’ll need them.)
laughable. really. hilarious. pathetic. keiths appeals to the weasel algorithm, one in which the desired outcome is programmed in from the beginning, along with a "fitness function" that ensures the desired outcome. who is doing the bluffing here? Sure, if we know what we want we can devise an algorithm to get there. But that requires a pre-specified target and a designed fitness function. If that's what keiths means by "selection" making "all the difference in the world" I don't think any ID proponent would disagree. Mung
keiths:
My friends at AtBC would never forgive me if I didn’t egg you on.
troll Mung
kairosfocus:
KS: I draw your attention:
FSCO/I will naturally come in islands of function in much larger config spaces — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design . . .
I can back that up on billions of cases in point. Can you show why any old parts can be oriented any old how, put any old where and can be connected any which ways, and will readily achieve interactive complex function? Where we deal with at least 500 bits of complexity? Do you see why I am having a serious problem with your rejection of the commonplace fact that functionality dependent on interacting multiple parts depends crucially on how they are wired up?
KF, please read Arrival of the Fittest so that the rest of us no longer have to listen to your inane arguments about fishing reels and "islands of function". Your Designer is shivering in the cold. Let him retreat to the next gap, out of kindness if nothing else. keith s
kairosfocus,
As for non-latching varieties of Weasel, KS you full well know that (a) the results published by Dawkins showed latching behaviour, (b) you full well know that by adjusting parameters latching will show up in many runs of supposed non latching reconstructions, where (c) quite conveniently the original code is nowhere to be found.
My friends at AtBC would never forgive me if I didn't egg you on. As you full well know or should know, latching is a red herring, led away from the path to truth and then carried away to strawman caricatures soaked in toxic oil of ad hominems and set alight to cloud, poison, polarise and confuse the atmosphere. Non-latching versions of Weasel work just fine, but remove selection and you'll be waiting quintillions of lifetimes for convergence. Your arguments fail utterly because they do not take selection into account. Please do better. keith s
Ok. Metaphysics comes under Philosophy. I understand why Dembski would say things like 'Matter is a Myth' in that book. Me_Think
MeThink, Wagner imagines multiple Libraries, metabolic etc. But even a metaphorical Library contains information. Information first, a hyperastronomical library second. Dembski's book is classified on Amazon under "Logic and Language", currently #12:) He himself refers to the book as metaphysical. His book describes the underpinnings of Science. Would that be considered a Science Book? ppolish
ppolish @ 99 You missed one basic point in the book. The Library is a metaphor for the genotype network. Matter is a myth? Are the elements that make up everything a myth? Is Dembski's book making a scientific argument or a philosophical argument? Me_Think
MeThink, Wagner's Arrival of the Fittest - LIBRARY. Dembski's Being as Communion - INFORMATION. Information comes before the Library. That is the gist of Dembski's new book. It is a profound book. It destroys Naturalistic Materialism. A Daisy Cutter of a bomb. The improbabilities of ID no longer matter. Information preceding Matter matters. As Dembski writes, "Matter is a myth." Kaboom. Read the book. Dembski's book is classified on Amazon under "Logic and Language", currently #12:) He himself refers to the book as metaphysical. His book describes the underpinnings of Science. Would that be considered a Science Book? ppolish
KF is too invested in FSCO/I so I don't think he will ever change his stance, but perhaps you may. A genotype network is not 'front loading'. Imagine your Facebook network. Your immediate friends will have interests similar to yours. As you venture out in your network and traverse your friend's friend network, or further on to your friend's friend's friend network, you will encounter someone with totally different interests (akin to a new function - this may or may not help in generating a new phenotype). Now imagine this Facebook network of yours balled up. It took many steps to reach the node where the person with a totally different interest exists on a 2D network. In your balled-up network, you have to travel at most half of the steps to reach the same person. Now imagine this in higher and higher dimensions. You will find you have to travel not even a fraction of 1 step to reach the person, and you will find a huge number of persons with dissimilar interests (akin to new functions - many of them helping to build a new phenotype, or at least helping this generation survive better, so the search starts all over again with the advantage of the new phenotype) in just that fraction of 1 step, and that's the reason the improbabilities of ID don't matter. Me_Think
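The neighborhood point being argued here can be made concrete with sequence space standing in for the Facebook analogy (the alphabet size and lengths below are my illustrative choices). For sequences of length L over an alphabet of size a, each genotype has L*(a-1) one-step neighbours, a number growing only linearly with L, while the whole space a^L grows exponentially. So a single mutational step reaches an ever larger absolute number of genotypes even as it covers an ever tinier fraction of the space, which is exactly where the two sides of this thread part company.

```python
# One-mutant neighbourhood of a length-L sequence over an alphabet of size a
def neighbours(L, a):
    return L * (a - 1)

def space(L, a):
    return a ** L

a = 4  # e.g. a nucleotide alphabet
for L in (10, 50, 100):
    n, w = neighbours(L, a), space(L, a)
    print(f"L={L}: {n} neighbours out of {w} genotypes "
          f"({n / w:.3e} of the space)")
```

Whether "many neighbours" or "tiny fraction" is the operative fact is precisely what the islands-of-function dispute is about.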
And my read on the book is if that Source older than time wanted to design a Universal Library that enabled Free Will, the library imagined by Wagner would be pretty cool:)
This is your take after reading the entire book? You didn't form an opinion of the robustness of genotypes, or of the vanishing improbabilities of finding new functions at hyper dimensions? Amazing! If you chose to admire the frame of the Mona Lisa rather than the Mona Lisa itself, what can I say? Me_Think
Z, Reality check. Please, tell me whether or not just about any arrangement of 6500 C3 parts will serve as a first-class fishing reel. Patently not. There is a wiring diagram specificity involved that selects a narrow circle of functional states from the possible clumped or scattered ones; that is, "islands of function" is a term for an undeniable fact . . . one that does not go away because the tech involved is molecular, not clanking metal etc parts. One, I would like to see some current objectors have the responsiveness to evidence to simply acknowledge. Likewise for contextually relevant English language comments vs oceans of possible gibberish. Likewise for interwoven codes in old microcontrollers in the days when memory was much harder to come by. Likewise, the FSCO/I in D/RNA specifying proteins etc. In short, it seems that an evident reality is giving you oceans of trouble, islands of function, which are in action all around you. I note too, FSCO/I as you know or should know, is relevant to config spaces that start at 500 bits and up. That's why complexity is there as part of the core description. The attempted dismissals are utterly tangential to the point. KF kairosfocus
Spoiler Alert....finished Arrival of Fittest and this is the last paragraph: "When we begin to study nature's libraries we aren't just investigating life's innovability or that of technology. We are shedding new light on one of the most durable and fascinating subjects in all of philosophy. And we learn that life's creativity draws from a source that is older than life, and perhaps older than time." And my read on the book is if that Source older than time wanted to design a Universal Library that enabled Free Will, the library imagined by Wagner would be pretty cool:) Dembski's latest book "Being as Communion" lays a wonderful Theistic foundation to the libraries described by Wagner. "...perhaps older than time." Yes, perhaps. ppolish
MT It seems to me that describing how "networks" and "neighborhoods" in "hyper dimensions" operate within the constraints of natural laws + time + chance is going to be a lot more complicated than describing how natural selection does so via differential survival rates. Lack of help along this line could explain why I'm not as scared as, perhaps, I should be. KF "Boiling down we are back at front loading here, with a switch waiting to be flipped." Looks like it to me. The fans of the book in question evidently think not. I'm popping popcorn. jstanley01
kairosfocus: the relevant threshold is 500 bits, 72 ASCII characters, and change processes are driven by chance and necessity without intelligent configuration. Your claim was that they were isolated islands. If we can walk from one to the other, even intelligently, then they are not isolated islands. Zachriel
Zachriel, the relevant threshold is 500 bits, 72 ASCII characters, and change processes are driven by chance and necessity without intelligent configuration. One Swiss Army knife has many uses; there are programs for microcontrollers that have interwoven multifunctional code that is instructions one way and data the next. Such are actually even more tightly constrained for the obvious reason. KF kairosfocus
kairosfocus: Why, when on evidence FSCO/I will naturally come in islands of function — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design? We can provide a couple of counterexamples. In another thread, Me_Think discussed Schultes & Bartel, One Sequence, Two Ribozymes: Implications for the Emergence of New Ribozyme Folds, Science 2000, who showed a pathway from one functional fold to another functional fold even while maintaining the original function. This shows that the so-called islands are connected. Language is often said to have FSCO/I. We can show a pathway from a single-letter word through longer words and phrases to a complete poem in rhyme, again showing that the so-called islands are connected. Sea of Beneficence http://www.zachriel.com/mutagenation/Sea.htm Beware a War of Words http://www.zachriel.com/mutagenation/Beware.htm Zachriel
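The word-game argument here, that functional sequences can be connected by single-letter steps through other functional sequences, can be sketched with a breadth-first search over a toy dictionary (the word list below is mine, chosen purely for illustration, not taken from the linked pages):

```python
from collections import deque

# Tiny illustrative dictionary; every entry counts as a "functional" state
WORDS = {"CAT", "COT", "COG", "DOG", "DOT", "BAT", "BOG"}

def one_step(a, b):
    # True if the words differ at exactly one position
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def ladder(start, goal):
    # Breadth-first search for a path of single-letter changes
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for w in sorted(WORDS - seen):  # sorted for a deterministic path
            if one_step(path[-1], w):
                seen.add(w)
                queue.append(path + [w])
    return None

print(ladder("CAT", "DOG"))  # -> ['CAT', 'COT', 'COG', 'DOG']
```

Whether real protein folds are connected the way a curated word list is, is of course the very question in dispute between the parties above.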
KS: I draw your attention:
FSCO/I will naturally come in islands of function in much larger config spaces — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design . . .
I can back that up on billions of cases in point. Can you show why any old parts can be oriented any old how, put any old where and can be connected any which ways, and will readily achieve interactive complex function? Where we deal with at least 500 bits of complexity? Do you see why I am having a serious problem with your rejection of the commonplace fact that functionality dependent on interacting multiple parts depends crucially on how they are wired up? KF kairosfocus
Rex, sadly, yes. KF kairosfocus
KS, 62:
You’re bluffing, KF. Selection makes all the difference in the world, and you know it. (Try running Weasel — the non-latching variety :-) — without selection sometime. Make sure you have a few quintillion lifetimes to spare. You’ll need them.)
Natural selection, so called, subtracts hereditable variations through culling; it patently does not innovate and add in the info in the first place. That's the job of chance variation on an already functioning life form. Where did that come from? Another of same. And that . . . oh, OOL. Where did that come from by blind watchmaker processes, on what observed capacity to do such? Oh, it must have been that. Why? Well, the alternative, we must not allow to put his unwelcome Foot on the doorstep of our temple of a priori materialism controlled Science. Why? Oh, this God idea is irrational, demon-like fairy tales that we have to get the people to grow up from. Why? Well God is nonsense! Why? . . . well, in any case we rule a datum line that excludes OOL from our theory. Why? Because we have no robust theory. Why, when on evidence FSCO/I will naturally come in islands of function -- because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design? There you go bringing the supernatural in again! But, isn't design a process of intelligently directed configuration as we routinely see designers carrying out all around, rather than an empty declaration here is a gap for God to fit into? That is, isn't evidence of such evidence of design, which we can inspect, whoever did it? There you go bringing the supernatural in again! (See the ideological question begging and agenda? Notice, how KS has tried to dismiss the implications of correctly arranged parts working together to achieve function pivoting on specified wiring diagram complexity without facing the issue head on on cases such as the 6500 c3 reel and many others? 
Notice, the suspicious vagueness of his dismissals, and the lack of concrete cases on definitively observed evidence of FSCO/I from lucky noise leading to novel body plans with co-ordinated functionality, from OOL on up through the tree of life?) Okay, he did give us Weasel, having already argued that it is not a good example of evo by CV + NS. Take Weasel, start with the initial phrase which Dawkins admits is a nonsense phrase. Non functional. Subtract -- culled out by the powers of NS. Poof, no Weasel phrase left. Start again with another nonsense phrase. Subtract again. Repeat . . . Fail. As for non-latching varieties of Weasel, KS you full well know that (a) the results published by Dawkins showed latching behaviour, (b) you full well know that by adjusting parameters latching will show up in many runs of supposed non latching reconstructions, where (c) quite conveniently the original code is nowhere to be found. Also, you full well know (d) Weasel is a targeted search that rewards non-functional proximity to a target. That is, as usual, we have a case of intelligent design being used in an argument that tries to undermine intelligent design. Fail. Please think again. KF kairosfocus
KF, all great comments but I think this one is worth repeating:
Axe’s remarks in the OP (which are of course being sidelined and ignored as usual in the haste to get back to favourite question-begging talking points and personalities such as “you’re bluffing” etc.).
RexTugwell
KS, 76:
It’s a disaster for ID, because the genotype networks show how easy it is for unguided evolution to move through the library and gain access to new functions. Please explain to kairosfocus that the ‘islands of function’ objection is toast, but everything’s okay, because “information is ID cement.”
Have you ever spoken to a librarian about (a) what it takes for contents to be there to go in the library? (b) what it takes to see to it that co-ordinated access is feasible without unacceptably large search costs? In short, you are back at we got the information from nowhere for nothing and the handy control switches are just waiting to be flipped. Please see the just above to JS for more on the specific case of flight. KF kairosfocus
JS: From that SFI review:
. . . very small genetic changes can radically alter the phenotype. Some such alterations portend certain death, but a few lead to powerful new innovations: the ability to fly, for example, or the first light-sensitive cells eventually leading to photosynthesis. Searching all the genetic possibilities at random would take forever, but a species — with all the same functions, but widely varying genes — can search millions of genetic options all at once, dramatically increasing evolution’s efficiency. Robustness itself is a response to environmental complexity, Wagner argues. To withstand heat, cold, moisture, and dryness, living things developed a modular toolset of molecules such as amino acids, which combined in complex ways to produce a range of innovations in response to any given problem.
The only realistic way that small shifts in genomes could create the musculature, wings, nervous controls etc to fly would be for it to throw a switch or bank of switches in a control routine. In short, we are here begging the question: where did the info to be on the right switch position, all co-ordinated and properly arranged to be expressed embryologically and in real world environments, come from? Silence. Apart from, oh we get something for and from nothing all the time in evolution, the magic of chance. Did these folks ever talk with an airplane -- better yet a drone -- designer about the number of challenges to be solved? All at once or fail. Boiling down we are back at front loading here, with a switch waiting to be flipped. Front loading is a design hypothesis. KF kairosfocus
PPS: Clipping 22, just to underscore how repeated corrections on the merits have little impact on those whose habitual rhetorical tactic is to drum out talking points over and over again regardless of cogent concerns and correctives. So, let us see if at long last they will now actually address the concerns: >> It has long since been pointed out that config spaces are multidimensional, and that representation on coords giving degrees of freedom per component brings in for each: location relative to an origin, 3 degrees of freedom (x,y,z), plus yaw, pitch, roll (we can use the ox axis as polar axis to define the equivalent of North). Six dimensions per part. Next, we have n parts, n being about 60 for the Abu 6500 C3, i.e. we see 360 dimensions to its config space. For a body of gas, n is of order 10^20 or better, etc. Now, what MT (who has been previously corrected but has ignored it) is raising is effectively that once we have an initial location in the config space and undertake a random walk with drift, we go to a neighbourhood ball of other points, which as the space becomes arbitrarily large becomes an ever smaller (eventually effectively vanishingly small) fraction of the space. This allows us to see how MT has begged the key questions and has as a result handed back the problem as though it were the solution, strawmannising and begging the question: 1 –> WLOG, we can discuss on digital strings, in effect chains of structured y/n q's that, taken together, specify the overall specific config. (That's how AutoCAD etc work.) 2 –> For a space of possibilities for 500 bits, we easily see that 2^500 = 3.27*10^150 possibilities, while at typical fast chem rxn rates, the 10^57 atoms of the sol system could only undertake about 10^87 or so states. The ratio of possible search to space of possibilities is about that of a one-straw-sized, blindly chosen sample to a cubical haystack as thick as our galaxy. This is the needle-in-haystack, sparse-search problem.
3 –> Now, as the Abu 6500 c3 shows, when functionality depends on specific organised interaction of many correctly located, oriented, matching, coupled parts it sharply confines functionality to isolated islands in the config space. That is, we face the problem of deeply isolated islands of function as the needles in the haystack. (There are vastly more clumped but non-functional ways to arrange the parts [shake the reel parts up in a bag] or even more ways to have them scattered about, than ways consistent with functionality.) 4 –> Whether a blind watchmaker chance plus necessity search is a finely dispersed dust in the config space, or it is a connected dynamic-stochastic random walk with drift [think, air molecules moving around within an air mass at random, but the body as a whole is drifting as part of a wind], or a combination of the two or the like, we are looking at sparse blind search in a space utterly dominated by non-functional configs. 5 –> This implies the challenge of a search for a golden search [S4GS] that puts one in an extraordinarily lucky state, on or just conveniently next to an island of function. Where as searches of a space of cardinality W cells are subsets, the set of searches is the power set of cardinality 2^W. And higher order searches are even more deeply exponential. 6 –> S4GS is exponentially harder than direct blind search. So, a simple reasonably random (not too far off from a flat random sample) sample is a reasonable estimator of likelihood of success. Where the very name, needle in haystack, points out how unlikely such would be to succeed. Thus, the strawman problem. 7 –> Also, implicit in the notion that a sparse search gets out of the config space challenge, is the notion of a vast continent of closely connected functional states, that is easily accessible from plausible initial conditions.
The case of the 6500 c3 reel and things like protein assembly in the cell or the complex integrative flow network of cellular metabolism should serve to show how this begs the question. 8 –> In reply, we say, show us this sort of config space topology. Where as just one case the freshly dead show us already just how close to functional, non functional states can be.>> kairosfocus
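The arithmetic in point 2 of the clip above is easy to check with exact integers (the 10^57 atoms and 10^87 events figures are taken as given from the comment, not independently derived here):

```python
# Sparse-search arithmetic from the comment above, with exact integers
config_space = 2 ** 500   # cells in a 500-bit config space
events = 10 ** 87         # the comment's upper bound on sol-system events

print(f"2^500 ~ {config_space:.3e}")  # ~3.273e+150, i.e. the 3.27*10^150 cited
print(f"cells per available event ~ {config_space // events:.3e}")
```

The second figure, roughly 10^63 cells per available event, is the "one straw to a galaxy-scale haystack" ratio the clip gestures at; the disputed question in the thread is not this arithmetic but whether blind search must in fact be flat and sparse over such a space.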
PS: Just to flesh out here is my reply to MT, from 28 above: >> I comment on points interwoven with your argument: [MT:] >> A ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube's volume as dimensions increase. >> a: Yes, the neighbourhood [Mathematical senses are intended, extending Hamming distance] of a point in a config space of large dimensionality and range of possible configs will increasingly be a tiny fraction of the space. b: Mix in sharply restricted resources, about 10^87 possible atomic event scale moves in the sol system [10^111 for the cosmos as a whole as observed], and any search will be a vanishingly small fraction of the at least 3.27*10^150 to 1.07*10^301 possibilities for just 500 – 1,000 bits to specify cells in the space, i.e. as many dimensions. c: FSCO/I for reasons already pointed out will be deeply isolated and you have a blind, no-steering-intelligence search on chance plus necessity, a dynamic-stochastic process. d: Sampling theory will rapidly tell you that under such circumstances you have little or no warrant for hoping to find zones of interest X that are isolated in the space, where the set of clusters of cells z1, z2, . . . zn (the islands of function collectively) is a very small fraction, for reasons based on constraints on configs imposed by interactive functionally specific organisation. e: Blind chance and mechanical necessity is not a reasonable search paradigm. Intelligent design routinely produces FSCO/I. >> I will quote Wagner himself: This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions. >> f: As the proportion of searchable cells relative to the possibilities W falls away exponentially with number of bits, the search becomes ever more sparse and likely to be unfruitful.
Beyond 500 – 1,000 bits of space (and bits is WLOG) it is patently futile. Matters not if you have a dust or a random walk with drift or whatever combi of the two or whatever. g: You are inadvertently confirming the empirical strength of the logic of the design inference explanatory filter. >> What this means: In a network of N nodes and N-1 neighbors, if in 1 dimension, 10 steps are required to discover a new genotype/procedure, in higher dimensions these 10 steps reduce drastically to a fraction of 1 step! >> h: Again, restating the problem of sparse blind search for needles in a vast haystack as if that were the solution. i: The implicit assumption in the context of the Tree of Life model is that you are already on an imagined vast continent of function, with nicely behaved fitness functions that allow near-neighbourhood searches to branch on up to the twigs such as we are on. j: That is why I first put up the Smithsonian TOL to remind us that all of this has to start with blind watchmaker mechanisms in Darwin's pond or the like, and you have to find the shoreline of function in a context of gated, encapsulated self-assembling metabolic automata that use codes to control assembly machines to make the vital proteins, which are needed in the hundreds for just the first relevant cell. k: Where there is zero reason to believe on evidence that the sort of islands of function imposed by interactive functional organisation vanish for ribosomes or embryologically and ecologically feasible body plans. l: So, the issue of resolving the blind watchmaker thesis on empirical evidence and evident reason — not imposed a priori Lewontin-Sagan style materialist ideology — remains. Perhaps you too would wish to take a serious try at the 2-year long TOL challenge? >> Merely dressing the sparse search for needles in a large haystack problem up in somewhat different terms about neighbourhoods in n dimensions does not change the problem.
Again, MT has been restating the problem and presenting the problem as the solution, with the first question being begged being how to get to the first life in cells from Darwin's pond or the like, and the second major one being how to get to major body plans requiring 10 - 100+ mn base increments in DNA. As a side serving, there is the point that AA chains in proteins are not merely like lego brick modules: as folding-functioning interactions occur all along the chains, there is a wholeness issue that means that the modularity hope fails . . . cf Axe's remarks in the OP (which are of course being sidelined and ignored as usual in the haste to get back to favourite question-begging talking points and personalities such as "you're bluffing" etc.). kairosfocus
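The Wagner figures quoted in the exchange above can be reproduced numerically. This sketch fixes a disc whose area is 75 percent of a unit square (his "ratio as high as 75 percent") and tracks the ball/cube volume ratio as the dimension grows, using the standard n-ball volume formula V_n(r) = pi^(n/2) r^n / Gamma(n/2 + 1):

```python
from math import pi, gamma

def ball_volume(n, r):
    # Volume of an n-dimensional ball of radius r
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

# Radius chosen so the 2-D ball (a disc) fills 75% of the unit square
r = (0.75 / pi) ** 0.5

for n in range(2, 7):
    print(f"dimension {n}: ball/cube ratio = {ball_volume(n, r):.1%}")
# Falls from 75.0% at n=2 through roughly 48.9%, 28.1%, 14.7%,
# matching the 49 / 28 / 14.7 percent figures in the quoted passage
```

Note both sides can claim this calculation: the shrinking ratio is Wagner's point about high-dimensional neighborhoods, and it is also the "vanishingly small fraction of the space" point in the reply it is quoted in.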
F/N: Again, as shown in 22 and 28 above, mere repetition of assertions that repackage the problem and present it as if sparse blind search for needles in haystacks were the solution only reveals the lack of real answers to the challenge. And BTW, that is the exact reason why the challenge to address the tree of life from the root up is so pivotal. If Darwinists cannot freely address this and matter-of-factly point to the empirical evidence that shows how OOL is in reach of Darwin's pond or the like, and how major body plans are in reasonable reach of incremental blind chance variation and culling by differential reproductive success in ecological environments, they do not have answers apart from ideological question-begging. Notice: no serious and solid answers forthcoming to that challenge for over two years now. It remains the truth that the only observed source of FSCO/I is intelligently directed configuration (backed up by the sparse blind search for needles in a very large haystack issue), and so we have excellent reason to infer that the copious FSCO/I in life forms from the cell up strongly indicates design as material cause. KF kairosfocus
jstanley01 @ 79
Actually, whether species can overcome the odds against innovation via a random search of “millions of genetic options all at once” ought to be a point that is empirically testable. Does Wagner report any?
'All at once' refers to the fraction of 1 step needed to reach a new function in a hyper-dimensional network. So, yes, it is proven. You are all stuck on general probabilities. What Wagner describes is networks and neighborhoods at hyper dimensions, where your probabilities have little value. Me_Think
Actually, whether species can overcome the odds against innovation via a random search of "millions of genetic options all at once" ought to be a point that is empirically testable. Does Wagner report any? jstanley01
jstanley01, That's the best criticism you could come up with? The use of the word "millions" in a review of the book? keith s
From Paleolibrarian's Review, it sounds to me like Wagner's Arrival of the Fittest represents an update to Punctuated Equilibrium's just-so yarns. Not that there's anything wrong with that, sez he. The Santa Fe Institute's review, I'd guess, nutshells the core of Wagner's argument with pith, writing:
Wagner shows how ... [s]earching all the genetic possibilities at random would take forever, but a species — with all the same functions, but widely varying genes — can search millions of genetic options all at once, dramatically increasing evolution’s efficiency.
Millions? Really? Wow. That's a lot! Behold Your Doom Intelligent Design Ad-vo-cates! Ooo. Scary. But if you thought that was horrific, just you wait until somebody publishes Arrival of the Library. Then your cheap tuxedos are really gonna bust a seam, buckaroos! jstanley01
ppolish,
Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID.
Are you kidding? It's a disaster for ID, because the genotype networks show how easy it is for unguided evolution to move through the library and gain access to new functions. Please explain to kairosfocus that the 'islands of function' objection is toast, but everything's okay, because "information is ID cement." If "information is ID cement", then genotype networks are ID cement shoes. keith s
me-think, in case you do not know, Neo-Darwinism is about 'creationism' too. More specifically, it is about how desperately atheists want Theism/Creationism to not be true: "Instead of presenting scientific evidence that shows atheism to be true (or probable), the neo-atheists moralize about how much better the world would be if only atheism were true. Far from demonstrating that God does not exist, the neo-atheists merely demonstrate how earnestly they desire that God not exist.8 The God of Christianity is, in their view, the worst thing that could befall reality. According to Richard Dawkins, for instance, the Judeo-Christian God “is arguably the most unpleasant character in all of fiction. Jealous and proud of it; a petty, unjust unforgiving control-freak; a vindictive, bloodthirsty ethnic-cleanser; a misogynistic homophobic racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully.”9 Dawkins’s obsession with the Christian God borders on the pathological. Yet, he underscores what has always been the main reason people reject God: they cannot believe that God is good. Eve, in the Garden of Eden, rejected God because she thought he had denied her some benefit that she should have, namely, the fruit from the Tree of the Knowledge of Good and Evil. 10 Clearly, a God who denies creatures benefits that they think they deserve cannot be good. Indeed, a mark of our fallenness is that we fail to see the irony in thus faulting God. Should we not rather trust that the things God denies us are denied precisely for our benefit? Likewise, the neo-atheists find lots of faults with God, their list of denied benefits being much longer than Eve’s—no surprise here since they’ve had a lot longer to compile such a list!" William Dembski - pg. 
10-11 - Finding a Good God in an evil World - design inference http://designinference.com/documents/2009.05.end_of_xty.pdf Charles Darwin's use of theology in the Origin of Species - STEPHEN DILLEY Abstract This essay examines Darwin's positiva (or positive) use of theology in the first edition of the Origin of Species in three steps. First, the essay analyses the Origin's theological language about God's accessibility, honesty, methods of creating, relationship to natural laws and lack of responsibility for natural suffering; the essay contends that Darwin utilized positiva theology in order to help justify (and inform) descent with modification and to attack special creation. Second, the essay offers critical analysis of this theology, drawing in part on Darwin's mature ruminations to suggest that, from an epistemic point of view, the Origin's positiva theology manifests several internal tensions. Finally, the essay reflects on the relative epistemic importance of positiva theology in the Origin's overall case for evolution. The essay concludes that this theology served as a handmaiden and accomplice to Darwin's science. http://journals.cambridge.org/action/displayAbstract;jsessionid=376799F09F9D3CC8C2E7500BACBFC75F.journals?aid=8499239&fileId=S000708741100032X Methodological Naturalism: A Rule That No One Needs or Obeys - Paul Nelson - September 22, 2014 Excerpt: It is a little-remarked but nonetheless deeply significant irony that evolutionary biology is the most theologically entangled science going. Open a book like Jerry Coyne's Why Evolution is True (2009) or John Avise's Inside the Human Genome (2010), and the theology leaps off the page. A wise creator, say Coyne, Avise, and many other evolutionary biologists, would not have made this or that structure; therefore, the structure evolved by undirected processes. 
Coyne and Avise, like many other evolutionary theorists going back to Darwin himself, make numerous "God-wouldn't-have-done-it-that-way" arguments, thus predicating their arguments for the creative power of natural selection and random mutation on implicit theological assumptions about the character of God and what such an agent (if He existed) would or would not be likely to do.,,, ,,,with respect to one of the most famous texts in 20th-century biology, Theodosius Dobzhansky's essay "Nothing in biology makes sense except in the light of evolution" (1973). Although its title is widely cited as an aphorism, the text of Dobzhansky's essay is rarely read. It is, in fact, a theological treatise. As Dilley (2013, p. 774) observes: "Strikingly, all seven of Dobzhansky's arguments hinge upon claims about God's nature, actions, purposes, or duties. In fact, without God-talk, the geneticist's arguments for evolution are logically invalid. In short, theology is essential to Dobzhansky's arguments.",, http://www.evolutionnews.org/2014/09/methodological_1089971.html Nothing in biology makes sense except in light of theology? - Dilley S. - 2013 Abstract This essay analyzes Theodosius Dobzhansky's famous article, "Nothing in Biology Makes Sense Except in the Light of Evolution," in which he presents some of his best arguments for evolution. I contend that all of Dobzhansky's arguments hinge upon sectarian claims about God's nature, actions, purposes, or duties. Moreover, Dobzhansky's theology manifests several tensions, both in the epistemic justification of his theological claims and in their collective coherence. I note that other prominent biologists--such as Mayr, Dawkins, Eldredge, Ayala, de Beer, Futuyma, and Gould--also use theology-laden arguments. I recommend increased analysis of the justification, complexity, and coherence of this theology. http://www.ncbi.nlm.nih.gov/pubmed/23890740 bornagain77
ppolish @ 70,
Keith, I think I was a bit misleading. As a raging Christian, my job/joy is to convert you – not vice versa. Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID. A proof of ID? I did not even think that would be possible.
Great ! Can you please explain to other ID proponents how easy it is to traverse the network of 'libraries'(it is a metaphor for genotype network) and that all those ID improbabilities are absurd ? Me_Think
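[Editor's note: Wagner's "library" metaphor turns on a simple combinatorial fact that readers can check for themselves. The sketch below is an editorial illustration, not code or figures from Wagner's book: each genotype has only linearly many one-mutation neighbours, while the library as a whole grows exponentially with sequence length.]

```python
# Combinatorics behind the "library" metaphor (illustration only).
# A genotype of length L over an alphabet of A letters has L*(A-1)
# one-mutation neighbours, but the library holds A**L volumes.

def one_step_neighbours(L, A):
    """Genotypes reachable by a single point mutation."""
    return L * (A - 1)

def library_size(L, A):
    """All possible genotypes of length L."""
    return A ** L

# A 100-residue protein over the 20 amino acids:
print(one_step_neighbours(100, 20))  # 1900 immediate neighbours
print(library_size(100, 20))         # 20**100, about 1.3e130 volumes
```

Whether those ~1,900 doors per "room" connect up into traversable networks spanning the 10^130 volumes is precisely the point in dispute between the commenters here.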
bornagain77, Thank you for accepting that ID is about Creationism. Me_Think
"don’t bring in God as an explanation, unless you think God is the ID agent." ha ha ha,,,, If you haven't noticed by now, I do think 'God is the ID agent', and I think that our present science is definitely strong enough to make that inference,,, In fact, as illustrated in post 58, I think the evidence is overwhelming towards the inference towards God as the 'ID agent'. bornagain77
Keith, I think I was a bit misleading. As a raging Christian, my job/joy is to convert you - not vice versa. Look, a hyperastronomical library like Wagner describes is full of information. Information is ID cement. A hyperastronomical library is a place where ID folk would feel comfortable? Might be a breakthrough for ID. A proof of ID? I did not even think that would be possible ppolish
ppolish
MT, the question of “How did God do it” has driven spectacular Science throughout the ages. Don’t diss the strength of the HDGDI? question.
bornagain
actually me-think, contrary to what you believe, without God as the basis for reasoning, there could be no arguing whether a point is valid or not in the first place,,,
That's true and I am not contesting that point at all but please don't bring in God as an explanation, unless you think God is the ID agent. Me_Think
keith s, you live in a fantasy land. Advances in molecular biology continue to reveal deeper, and deeper, levels of integrated functional complexity in life, that our best engineers and computer programmers can only dream of imitating, and Darwinian processes have yet to change even one bacteria, or fruit fly, into another bacteria, or fruit fly, despite decades of trying to force bacteria, or fruit flies, to change into something different. ========= Scant search for the Maker Excerpt: But where is the experimental evidence? None exists in the literature claiming that one species has been shown to evolve into another. Bacteria, the simplest form of independent life, are ideal for this kind of study, with generation times of 20 to 30 minutes, and populations achieved after 18 hours. But throughout 150 years of the science of bacteriology, there is no evidence that one species of bacteria has changed into another, in spite of the fact that populations have been exposed to potent chemical and physical mutagens and that, uniquely, bacteria possess extrachromosomal, transmissible plasmids. Since there is no evidence for species changes between the simplest forms of unicellular life, it is not surprising that there is no evidence for evolution from prokaryotic to eukaryotic cells, let alone throughout the whole array of higher multicellular organisms. - Alan H. Linton - emeritus professor of bacteriology, University of Bristol. http://www.timeshighereducation.co.uk/story.asp?storycode=159282 'No matter what we do to a fruit fly embryo there are only three possible outcomes, a normal fruit fly, a defective fruit fly, or a dead fruit fly. 
What we never see is primary speciation much less macro-evolution' – Jonathan Wells Peer-Reviewed Research Paper on Plant Biology Favorably Cites Intelligent Design and Challenges Darwinian Evolution - Casey Luskin December 29, 2010 Excerpt: Many of these researchers also raise the question (among others), why — even after inducing literally billions of induced mutations and (further) chromosome rearrangements — all the important mutation breeding programs have come to an end in the Western World instead of eliciting a revolution in plant breeding, either by successive rounds of selective “micromutations” (cumulative selection in the sense of the modern synthesis), or by “larger mutations” … and why the law of recurrent variation is endlessly corroborated by the almost infinite repetition of the spectra of mutant phenotypes in each and any new extensive mutagenesis experiment instead of regularly producing a range of new systematic species… (Wolf-Ekkehard Lönnig, “Mutagenesis in Physalis pubescens L. ssp. floridana: Some Further Research on Dollo’s Law and the Law of Recurrent Variation,” Floriculture and Ornamental Biotechnology Vol. 4 (Special Issue 1): 1-21 (December 2010).) http://www.evolutionnews.org/2010/12/peer-reviewed_research_paper_o042191.html Dr. Wolf-Ekkehard Lönnig, (retired) Senior Scientist (Biology), Max Planck Institute for Plant Breeding Research, Emeritus, Cologne, Germany. Dr. Wolf-Ekkehard Lönnig on the Law of Recurrent Variation, pt. 1 - podcast http://intelligentdesign.podomatic.com/entry/2013-12-09T17_31_28-08_00 "Dr. Wolf-Ekkehard Lönnig on the Law of Recurrent Variation, pt. 2" - podcast http://intelligentdesign.podomatic.com/entry/2013-12-11T15_59_50-08_00 "Dr. Wolf-Ekkehard Lönnig on the Law of Recurrent Variation, pt.3" - podcast http://intelligentdesign.podomatic.com/entry/2013-12-13T16_47_09-08_00 bornagain77
actually me-think, contrary to what you believe, without God as the basis for reasoning, there could be no arguing whether a point is valid or not in the first place,,, The Atheist's Guide to Intellectual Suicide - James N. Anderson, PhD - video https://vimeo.com/75897668 bornagain77
ppolish,
Keith, what with all the “ID is in trouble”? In your book, weren’t they always in trouble?
Yes, but a) it keeps getting worse, and b) you ID folks need to be reminded of it. Due to the historically heavy censorship at UD, a lot of you have been sheltered and seem to think that ID is still viable. My job is to help disabuse you of that notion. :-)
In a nutshell, why are they in trouble?
Are you asking generally, or specifically with respect to Wagner's book? keith s
MT, the question of "How did God do it" has driven spectacular Science throughout the ages. Don't diss the strength of the HDGDI? question. ppolish
bornagain77 @ 63 Please don't bring God into threads for non-theological topics. A simple statement that ' Since God is omnipotent, He would have done this' makes all other arguments- both ID and nonID - invalid. Me_Think
so keith s, you do not contest my 15 points and then accuse me of not defending them??? Being consistent not a strong point for you is it! bornagain77
Keith, what with all the "ID is in trouble"? In your book, weren't they always in trouble? In a nutshell, why are they in trouble? And don't say "Read the Book" again. You're sounding like a fundamentalist preacher man:) ppolish
KF throws word salad at the problem:
MT, I have the distinct impression you have largely repeated yourself. That’s unfortunate given the cautions in 22 and 28 above. In sum, you cannot repackage the problem of sparse needle in haystack search as limited by available atomic and temporal resources and present it as the solution. And, failing to reckon with the implications of multi-part organised interaction to achieve function thus sharply constraining clusters of configs that work relative to clumped or scattered non-functional configs (i.e. islands of function) does not move your case forward. As to, the working configs not existing in some sort of platonic forms space ahead of time, no-one has argued for that, the issue is, if a functional config is possible, it is possible; perhaps best seen with digital text strings which in principle could be counted up from 000 . . . 0 to 111 . . . 1; though in praxis atomic and temporal resources would long have been exhausted. Blind Watchmaker mechanisms simply lack the capability to blindly sample such spaces to find clusters of configs that work for much the same reason. Intelligent designers use skill etc to synthesise configs that will work, getting around the sparse search challenge. KF
You're bluffing, KF. Selection makes all the difference in the world, and you know it. (Try running Weasel -- the non-latching variety :-) -- without selection sometime. Make sure you have a few quintillion lifetimes to spare. You'll need them.) To argue against the effectiveness of selection, you need to demonstrate, rather than merely assert, that "islands of function" are an effective barrier to evolution. You can't do that, and everyone knows it. Wagner's book demonstrates that the reality is exactly the opposite of what you've been hoping for. Read it and discover how much trouble ID is in. keith s
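[Editor's note: for readers who have never run the Weasel program keith s mentions, here is a minimal non-latching version in Python. This is a common textbook reconstruction, not Dawkins's original code; parameter values are illustrative. With cumulative selection it converges in hundreds of generations; without the selection step it is a blind search over 27^28 ≈ 1.2e40 strings, which is the "quintillion lifetimes" point.]

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.04):
    # Non-latching: correct letters are as likely to mutate as wrong ones.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s):
    # Number of positions matching the target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(offspring=100, max_gens=10_000):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(max_gens):
        if parent == TARGET:
            return gen
        # Cumulative selection: the fittest child becomes the next parent.
        parent = max((mutate(parent) for _ in range(offspring)), key=fitness)
    return None  # did not converge within max_gens

print(weasel())  # with selection: typically a few hundred generations
```

Deleting the `max(..., key=fitness)` line and keeping a random child instead turns the run into unaided sampling, which does not find the target in any feasible number of generations.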
ba77,
Actually I am very, very, comfortable in my Theistic Christian beliefs...
Oh, I know you're comfortable in your beliefs -- just not in your ability to defend them. Hence the spam reflex. keith s
BA, this book reads like a Michio Kaku book. Speculative Science. Reads nothing like "Darwin's Doubt" to me, which is so very well constructed and written. And Dembski's new book changed the way I look at stuff in a profound way. The Wagner book is revealing complexities built on complexities built on more complexities that makes me appreciate Dembski even more:) And yes, Keith, I know that Darwin himself knew "Origin" of the species was beyond his understanding. That whole "Creator's Breath" thing. Except it's not a thing lol:) ppolish
"Wagner has you running scared, doesn’t he, BA?" LOL, Project your own deep-seated feelings and insecurities much??? Actually I am very, very, comfortable in my Theistic Christian beliefs and how science has confirmed those beliefs beyond any reasonable doubt. But I can see where, because of un-dealt with sin, unrepentant sinners would be 'running scared', hiding from what our scientific evidence is now clearly telling us. ,,, For instance,,,
1. Naturalism/Materialism predicted time-space energy-matter always existed. Whereas Theism predicted time-space energy-matter were created. Big Bang cosmology now strongly indicates that time-space energy-matter had a sudden creation event approximately 14 billion years ago.
2. Naturalism/Materialism predicted that the universe is a self sustaining system that is not dependent on anything else for its continued existence. Theism predicted that God upholds this universe in its continued existence. Breakthroughs in quantum mechanics reveal that this universe is dependent on a ‘non-local’, beyond space and time, cause for its continued existence.
3. Naturalism/Materialism predicted that consciousness is an ‘emergent property’ of material reality and thus should have no particularly special position within material reality. Theism predicts consciousness precedes material reality and therefore, on that presupposition, consciousness should have a ‘special’ position within material reality. Quantum Mechanics reveals that consciousness has a special, even a central, position within material reality.
4. Naturalism/Materialism predicted the rate at which time passed was constant everywhere in the universe. Theism predicted God is eternal and is outside of time. Special Relativity has shown that time, as we understand it, is relative and comes to a complete stop at the speed of light. (Psalm 90:4 – 2 Timothy 1:9)
5. Naturalism/Materialism predicted the universe did not have life in mind and that life was ultimately an accident of time and chance. Theism predicted this universe was purposely created by God with man in mind. Scientists find the universe is exquisitely fine-tuned for carbon-based life to exist in this universe. Moreover it is found, when scrutinizing the details of physics and chemistry, that not only is the universe fine-tuned for carbon based life, but is specifically fine-tuned for life like human life (R. Collins, M. Denton).
6. Naturalism/Materialism predicted complex life in this universe should be fairly common. Theism predicted the earth is extremely unique in this universe. Statistical analysis of the hundreds of required parameters which enable complex organic life to be possible on earth gives strong indication the earth is extremely unique in this universe (Gonzalez).
7. Naturalism/Materialism predicted it took a very long time for life to develop on earth. Theism predicted life to appear abruptly on earth after water appeared on earth (Genesis 1:10-11). Geo-chemical evidence from the oldest sedimentary rocks ever found on earth indicates that complex photo-synthetic life has existed on earth as long as water has been on the face of earth.
8. Naturalism/Materialism predicted the first life to be relatively simple. Theism predicted that God is the source for all life on earth. The simplest life ever found on Earth is far more complex than any machine man has made through concerted effort. (Michael Denton PhD)
9. Naturalism/Materialism predicted the gradual unfolding of life would (someday) be self-evident in the fossil record. Theism predicted complex and diverse animal life to appear abruptly in the seas in God’s fifth day of creation. The Cambrian Explosion shows a sudden appearance of many different and completely unique fossils within a very short “geologic resolution time” in the Cambrian seas.
10. Naturalism/Materialism predicted there should be numerous transitional fossils found in the fossil record. Theism predicted sudden appearance and rapid diversity within different kinds found in the fossil record. Fossils are consistently characterized by sudden appearance of a group/kind in the fossil record (disparity), then rapid diversity within that group/kind, and then long term stability and even deterioration of variety within the overall group/kind, and within the specific species of the kind, over long periods of time. Of the few dozen or so fossils claimed as transitional, not one is uncontested as a true example of transition between major animal forms out of millions of collected fossils.
11. Naturalism/Materialism predicted animal speciation should happen on a somewhat constant basis on earth. Theism predicted man was the last species created on earth. Man (our genus ‘modern homo’ as distinct from the highly controversial ‘early homo’) is the last generally accepted major fossil form to have suddenly appeared in the fossil record. (Tattersall; Luskin)
12. Naturalism/Materialism predicted much of the DNA code was junk. Theism predicted we are fearfully and wonderfully made. ENCODE research into the DNA has revealed a “biological jungle deeper, denser, and more difficult to penetrate than anyone imagined.”
13. Naturalism/Materialism predicted an extremely beneficial and flexible mutation rate for DNA which was ultimately responsible for all the diversity and complexity of life we see on earth. Theism predicted only God created life on earth. The mutation rate to DNA is overwhelmingly detrimental. Detrimental to such a point that it is seriously questioned whether there are any truly beneficial, information building, mutations whatsoever. (M. Behe; JC Sanford)
14. Naturalism/Materialism predicted morality is subjective and illusory. Theism predicted morality is objective and real. Morality is found to be deeply embedded in the genetic responses of humans. As well, morality is found to be deeply embedded in the structure of the universe. Embedded to the point of eliciting physiological responses in humans before humans become aware of the morally troubling situation and even prior to the event even happening.
15. Naturalism/Materialism predicted that we are merely our material bodies with no transcendent component to our being, and that we die when our material bodies die. Theism predicted that we have minds/souls that are transcendent of our bodies that live past the death of our material bodies. Transcendent, and ‘conserved’, (cannot be created or destroyed), ‘non-local’, (beyond space-time matter-energy), quantum entanglement/information, which is not reducible to matter-energy space-time, is now found in our material bodies on a massive scale.
As you can see when we remove the artificial imposition of the materialistic philosophy, from the scientific method, and look carefully at the predictions of both the materialistic philosophy and the Theistic philosophy, side by side, we find the scientific method is very good at pointing us in the direction of Theism as the true explanation. - In fact it is even very good at pointing us to Christianity:
General Relativity, Quantum Mechanics, Entropy & The Shroud Of Turin - (video) http://vimeo.com/34084462
bornagain77
ba77:
pp you sound like you are recommending a science fiction novel rather than a nose to the grindstone theory with empirical support
Wagner has you running scared, doesn't he, BA? keith s
ppolish:
BA, Wagner does not as such throw Natural Selection under the bus – but moves it to the back of the bus.
ppolish, Your bias is getting the best of you. Wagner isn't relegating selection to "the back of the bus". Darwin himself would have agreed with Wagner's statement:
Natural Selection can preserve innovations, but it cannot create them.
In the Origin, Darwin wrote something practically identical:
...unless profitable variations do occur, natural selection can do nothing.
keith s
Well, that's my story, KF, and I'm sticking to it! Axel
pp you sound like you are recommending a science fiction novel rather than a nose to the grindstone theory with empirical support bornagain77
BA, Wagner does not as such throw Natural Selection under the bus - but moves it to the back of the bus. On page 7: "The power of Natural Selection is beyond dispute, but this power has limits. Natural Selection can PRESERVE innovations, but it cannot CREATE them. And calling the change that creates them random is just another way of admitting our ignorance about it. Nature's many innovations - many uncannily perfect - call for natural principles that accelerate life's ability to innovate, its INNOVABILITY. ...... What we have found so far already tells us that there is much more to evolution than meets the eye. It tells us the principles of innovability are concealed, even beyond the molecular architecture of DNA, in a hidden world of life with an otherworldly beauty. These principles are the subject of this book." ppolish
Axel, yikes! KF kairosfocus
Well, I believe it happened to be true, when they used the old leather footballs, particularly when waterlogged; and many an old footballer has ended up with some form of dementia as a direct result of the consequent brain injuries. Likewise with cricket balls: it's by no means unknown for blokes to get a bone in their hand broken, I believe. Not a lot of margin for error, and those balls, as you know, would not exactly be soft at 70 or 80 mph or whatever. I suspect my autonomic intelligence is brighter than my conscious brain, as I would really like to have been able to catch a fast ball, but my hand just said, 'No!'; and it would appear to have total authority on such occasions! Axel
as to: "Natural Selection comes AFTER Innovation. Wagner makes that clear at the start.. Arrival of Fittest comes before Survival of Fittest. Hence the name of the book." So basically Wagner throws Natural Selection under the bus? But alas, as kf points out, without guidance, as grossly inadequate as Natural Selection was/is, you are left with merely an origin of life scenario on steroids. i.e. Basically Wagner, by throwing NS to the curb, is relying on miracle after miracle after miracle instead of just one miracle at the origin of life. bornagain77
Axel, I always thought heading a ball was an invitation to headaches and cumulatively, worse. But then, I was disinclined. KF kairosfocus
PP: The old ocean vents scenario, a version on Darwin's pond along with comets (in the news these days), gas giant moons, clay beds etc. That too is old hat and a fail I am afraid. There is a reason why the metabolism and genes first schools have come to mutual ruin. KF kairosfocus
MT, I have the distinct impression you have largely repeated yourself. That's unfortunate given the cautions in 22 and 28 above. In sum, you cannot repackage the problem of sparse needle in haystack search as limited by available atomic and temporal resources and present it as the solution. And, failing to reckon with the implications of multi-part organised interaction to achieve function thus sharply constraining clusters of configs that work relative to clumped or scattered non-functional configs (i.e. islands of function) does not move your case forward. As to, the working configs not existing in some sort of platonic forms space ahead of time, no-one has argued for that, the issue is, if a functional config is possible, it is possible; perhaps best seen with digital text strings which in principle could be counted up from 000 . . . 0 to 111 . . . 1; though in praxis atomic and temporal resources would long have been exhausted. Blind Watchmaker mechanisms simply lack the capability to blindly sample such spaces to find clusters of configs that work for much the same reason. Intelligent designers use skill etc to synthesise configs that will work, getting around the sparse search challenge. KF kairosfocus
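[Editor's note: the "atomic and temporal resources" ceiling KF invokes is simple arithmetic to reproduce. The three inputs below are the conventional round order-of-magnitude figures used in these threads, not measured values, and the 500-bit space stands in for KF's "000 . . . 0 to 111 . . . 1" example.]

```python
# Conventional round figures (order-of-magnitude estimates only):
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80
PLANCK_TIMES_PER_SECOND = 10 ** 45      # ~1 / (5.4e-44 s)
SECONDS_OF_COSMIC_HISTORY = 10 ** 17    # ~13.8 billion years

# Generous ceiling on atomic-scale events ever available as "samples":
max_events = (ATOMS_IN_OBSERVABLE_UNIVERSE
              * PLANCK_TIMES_PER_SECOND
              * SECONDS_OF_COSMIC_HISTORY)   # 10**142

# The space of 500-bit configurations, "000...0" through "111...1":
configs = 2 ** 500                           # about 3.3e150

print(configs > max_events)    # True: the space dwarfs the event count
print(max_events / configs)    # ~3e-9: at most this fraction is samplable
```

The arithmetic itself is uncontroversial; the dispute in this thread is over whether functional configurations are sparse and isolated in such spaces (KF's "islands of function") or densely networked (Wagner's claim).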
Natural Selection comes AFTER Innovation. Wagner makes that clear at the start.. Arrival of Fittest comes before Survival of Fittest. Hence the name of the book. Chapter One (What Darwin Didn't Know) basically concludes that Darwin would have to study hard just to pass a modern day High School Biology class. He'd do OK when they got to The Finches part, but years away from understanding Post Doc Behe stuff. ppolish
The next time you see a Hyper dimensional structure walking around with your eyeballs and not your overactive imagination, you let me know will ya?!? :) As well, Natural Selection, to the extent it does do anything, is grossly inadequate to do the work required of it because of what is termed ‘the princess and the pea’ paradox. The devastating ‘princess and the pea’ paradox is clearly elucidated by Dr. John Sanford, at the 8:14 minute mark, of this following video,,, Genetic Entropy – Dr. John Sanford – Evolution vs. Reality – video http://vimeo.com/35088933 Dr. Sanford points out, in the preceding video, that Natural Selection acts at the coarse level of the entire organism (phenotype) and yet the vast majority of mutations have effects that are only ‘slightly detrimental’, and have no noticeable effect on phenotypes, and are thus far below the power of Natural Selection to remove from genomes before they spread throughout the population. Here is a peer-reviewed paper by Dr. Sanford on the subject: “Selection Threshold Severely Constrains Capture of Beneficial Mutations” - John Sanford - September 6, 2013 Excerpt of concluding comments: Our findings raise a very interesting theoretical problem — in a large genome, how do the millions of low-impact (yet functional) nucleotides arise? It is universally agreed that selection works very well for high-impact mutations. However, unless some new and as yet undiscovered process is operating in nature, there should be selection breakdown for the great majority of mutations that have small impact on fitness.,,, We show that selection breakdown is not just a simple function of population size, but is seriously impacted by other factors, especially selection interference. We are convinced that our formulation and methodology (i.e., genetic accounting) provide the most biologically-realistic analysis of selection breakdown to date. 
http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0011 Here are a few more notes on this insurmountable ‘princess and the pea’ paradox: Evolution vs. Genetic Entropy - Andy McIntosh - video https://vimeo.com/91162565 The GS Principle (The Genetic Selection Principle) – Abel – 2009 Excerpt: The GS Principle, sometimes called “The 2nd Law of Biology,” states that selection must occur at the molecular/genetic level, not just at the fittest phenotypic/organismic level, to produce and explain life.,,, Natural selection cannot operate at the genetic level. http://www.bioscience.org/2009/v14/af/3426/fulltext.htm bornagain77
Hypergeometric dimensions ARE fascinating. Not just the actual hypergeometric dimensions but also the idea of them. It's like part of Nature (Man) has become self aware. It's like Nature created Man in its own image. Of course Nature is not fundamental. It emerged. Created. Nature is doomed of course, but the Creator will remain. Always has, always will. ppolish
bornagain77 @ 42
And he definitely is not referring to the 3 dimensions that natural selection is dealing with!
What do you mean ? Wagner's arguments are based on Hyper dimensions which are structural dimensions. Please see wiki page for better understanding. Me_Think
And he definitely is not referring to the 3 dimensions that natural selection is dealing with! With apologies to C.S. Lewis,,,
If I find in myself [a desire] 4-dimensional quarter-power scaling which no [experience] 3-Dimensional materialistic process in this world can [satisfy] explain, the most probable explanation is that I was made for another world. C.S. Lewis (Mere Christianity, Bk. III, chap. 10, “Hope”)
A few notes in regards to the claim that we were made for a higher dimension,,,
"Regardless, it is impossible for me to adequately describe what I saw and felt. When I try to recount my experiences now, the description feels very pale. I feel as though I'm trying to describe a three-dimensional experience while living in a two-dimensional world. The appropriate words, descriptions and concepts don't even exist in our current language. I have subsequently read the accounts of other people's near-death experiences and their portrayals of heaven and I [am] able to see the same limitations in their descriptions and vocabulary that I see in my own." Mary C. Neal, MD - To Heaven And Back pg. 71 Dr. Mary Neal's Near-Death Experience (Sept. 2014) - video https://www.youtube.com/watch?v=as6yslz-RDw "I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven." Barbara Springer - Near Death Experience - The Tunnel - video https://vimeo.com/79072924
It is also very interesting to point out that the 'light at the end of the tunnel', reported in many Near Death Experiences(NDEs), is also corroborated by Special Relativity when considering the optical effects for traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as a 'hypothetical' observer moves towards the ‘higher dimension’ of the speed of light, with the ‘light at the end of the tunnel’ reported in very many Near Death Experiences: (Of note: This following video was made by two Australian University Physics Professors with a supercomputer.)
'Seeing Relativity' - Approaching The Speed Of Light Optical Effects - video https://www.youtube.com/watch?v=JQnHTKZBTI4
bornagain77
“[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, …materialism is not.” Eugene Wigner Don't be nasty, Eugene... But seriously, does anyone else double up with laughter when they see an absolutely foundational and seminal philosophical point, nevertheless of enormous controversy on here, settled with unambiguous finality, in a few words ... almost as if in passing? Quantum mechanics is the ultimate game-changer, but they always draw stumps and take off home with their bat and ball. They're like me trying to catch a very fast cricket ball: my mind tells me to stick out my mitt, but a subliminal voice tells me, 'Don't be daft,' after I've stuck it out to within a few inches of it. Likewise heading a ball! Axel
bornagain77 @ 38
Although Wagner and me-think appeal to higher dimensions in order to solve the insurmountable barrier imposed by a blind search in the real world, it is interesting to note that higher dimensions, specifically the quarter-power scaling which is ubiquitous through biology
Wagner is referring to Hypergeometric dimensions - not dimensions of space as in, say string theory. Me_Think
Moreover, quantum entanglement/information, which Einstein termed 'spooky action at a distance', has been verified to be 'non-local' to an almost absurd level of precision, (70 standard deviations):
Closing the last Bell-test loophole for photons - Jun 11, 2013 Excerpt:– requiring no assumptions or correction of count rates – that confirmed quantum entanglement to nearly 70 standard deviations.,,, http://phys.org/news/2013-06-bell-test-loophole-photons.html Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php
That quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc..), should be found in molecular biology on such a massive scale is a direct empirical falsification of Darwinian claims, for how can the beyond space and time, ‘non-local’, quantum entanglement effect in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? Appealing to the probability of various 'random' configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put even more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! And although Naturalists have proposed various, far fetched, naturalistic scenarios to try to get around the Theistic implications of quantum non-locality, none of the ‘far fetched’ naturalistic solutions, in themselves, are compatible with the reductive materialism that undergirds neo-Darwinian thought.
"[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, ...materialism is not." Eugene Wigner Quantum Physics Debunks Materialism - video playlist https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism. http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952939
Thus, as far as empirical science itself is concerned, Neo-Darwinism is falsified in its claim that information is ‘emergent’ from a materialistic basis. bornagain77
Although Wagner and Me_Think appeal to higher dimensions in order to solve the insurmountable barrier imposed by a blind search in the real world, it is interesting to note that higher dimensions, specifically the quarter-power scaling which is ubiquitous throughout biology and which operates as if it were 4-dimensional, provide their own unique falsification of neo-Darwinian claims:
The predominance of quarter-power (4-D) scaling in biology Excerpt: Many fundamental characteristics of organisms scale with body size as power laws of the form: Y = Yo M^b, where Y is some characteristic such as metabolic rate, stride length or life span, Yo is a normalization constant, M is body mass and b is the allometric scaling exponent. A longstanding puzzle in biology is why the exponent b is usually some simple multiple of 1/4 (4-Dimensional scaling) rather than a multiple of 1/3, as would be expected from Euclidean (3-Dimensional) scaling. http://www.nceas.ucsb.edu/~drewa/pubs/savage_v_2004_f18_257.pdf
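The power law in the excerpt, Y = Yo M^b, is easy to explore numerically. A minimal sketch (Python; the normalization constant and the masses are illustrative, not values from the paper):

```python
def allometric(mass_kg, y0=1.0, b=0.75):
    """Allometric scaling Y = Y0 * M**b; y0 is an illustrative
    normalization constant, not a value from the paper."""
    return y0 * mass_kg ** b

# Doubling body mass multiplies a quarter-power trait (b = 3/4) by
# 2**0.75 ~ 1.68, versus 2**(2/3) ~ 1.59 for Euclidean surface/volume
# scaling, and 2 for simple isometry.
print(round(allometric(2.0) / allometric(1.0), 3))                # 1.682
print(round(allometric(2.0, b=2/3) / allometric(1.0, b=2/3), 3))  # 1.587
```

The gap between the 3/4 and 2/3 exponents is the whole puzzle the excerpt describes: measured traits track the quarter-power line, not the Euclidean one.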
Jerry Fodor and Massimo Piatelli-Palmarini put the insurmountable problem that this higher 4-dimensional power scaling presents to Darwinian explanations as such:
“Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. Quarter-power scaling laws are perhaps as universal and as uniquely biological as the biochemical pathways of metabolism, the structure and function of the genetic code and the process of natural selection.,,, The conclusion here is inescapable, that the driving force for these invariant scaling laws cannot have been natural selection.” Jerry Fodor and Massimo Piatelli-Palmarini, What Darwin Got Wrong (London: Profile Books, 2010), p. 78-79
Here is, what a Darwinist termed, a ‘horrendously complex’ metabolic pathway (which operates as if it were ‘4-Dimensional’):
ExPASy - Biochemical Pathways - interactive schematic http://biochemical-pathways.com/#/map/1
And remember, Darwinian evolution has yet to explain a single gene/protein of those ‘horrendously complex’ metabolic pathways.
"Charles Darwin said (paraphrase), 'If anyone could find anything that could not be had through a number of slight, successive, modifications, my theory would absolutely break down.' Well that condition has been met time and time again. Basically every gene, every protein fold. There is nothing of significance that we can show that can be had in a gradualist way. It's a mirage. None of it happens that way. - Doug Axe PhD. - Nothing In Molecular Biology Is Gradual - video http://www.metacafe.com/watch/5347797/
The reason why a ‘higher dimensional’ 4-Dimensional structure, such as a ‘horrendously complex’ metabolic pathway, would be, for all intents and purposes, completely invisible to a 3-Dimensional process, such as Natural Selection, is best illustrated by ‘flatland’:
Flatland – 3D to 4D shift – Dr. Quantum – video http://www.youtube.com/watch?v=BWyTxCsIXE4
I personally hold that the reason why internal physiology and anatomy operate as if they were four-dimensional, instead of as if they were three-dimensional, is because of exactly what Darwinian evolution has consistently failed to explain the origination of, i.e. functional information. ‘Higher dimensional’ information, which is bursting at the seams in life, in every DNA, RNA and protein molecule, simply cannot be reduced to any 3-dimensional energy-matter basis. This point is easily demonstrated by the fact that the same exact information can be stored on an almost endless variety of material substrates. Moreover, Dr. Andy C. McIntosh, who is Professor of Thermodynamics and Combustion Theory at the University of Leeds (the highest teaching/research rank in the U.K. university hierarchy), has written a peer-reviewed paper in which he holds that it is 'non-material information' which is constraining the local thermodynamics of a cell to be in such an extremely high non-equilibrium state:
Information and Thermodynamics in Living Systems - Andy C. McIntosh - May 2013 Excerpt: The third view then that we have proposed in this paper is the top down approach. In this paradigm, the information is non-material and constrains the local thermodynamics to be in a non-equilibrium state of raised free energy. It is the information which is the active ingredient, and the matter and energy are passive to the laws of thermodynamics within the system. As a consequence of this approach, we have developed in this paper some suggested principles of information exchange which have some parallels with the laws of thermodynamics which undergird this approach.,,, http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0008
Dr. McIntosh's contention that 'non-material information' must be constraining life to be so far out of thermodynamic equilibrium has now been borne out empirically. i.e. It is now found that 'non-local', beyond space-time matter-energy, Quantum entanglement/information 'holds' DNA (and proteins) together:
Quantum entanglement holds together life’s blueprint - 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA - short video https://vimeo.com/92405752 Coherent Intrachain energy migration at room temperature - Elisabetta Collini and Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
bornagain77
I don't know about you all, but I find hyperbolic dimensions, the shrinking of the space to be searched, and the lowering of probabilities fascinating. Me_Think
"Perhaps, you too would wish to take a serious try at the 2-year long TOL challenge?" Wagner proposes ocean vents created first life. Lots of first life in many places, but one of those first lifes beat out the other first lifes ("through natural selection or chance") to become the only surviving first life. "It has to be true" says Wagner. Well then, there you have it. A cool story for sure. ppolish
P. falciparum should have read Wagner's book before making 10^20 attempts at finding a way to resist chloroquine. RexTugwell
Correction @ 33: so even if the entire search space is gargantuan, the space to be searched for a new phenotype is so tiny that all those improbable probabilities vanish. Me_Think
Lesia, KF, Joe There is no preexisting solution circle; the random walk through the network continues until a new genotype/phenotype/metabolism/process is found. In one dimension, this takes about 10 steps in a large network. In a hyper dimension, this is reduced to a fraction of one step, so even if the entire search space is gargantuan, the search space is so tiny that all those improbable probabilities vanish. Excerpt from Wagner:
Like the metabolic library, the protein library is a high-dimensional cube, with similar texts near one another. Each protein text perches on one vertex of this hypercube, and just like in the metabolic library, each protein has many immediate neighbors, proteins that differ from it in exactly one letter and that occupy adjacent corners of the hypercube. If you wanted to change the first of the amino acids in a protein comprising merely a hundred amino acids, you would have nineteen other amino acids to choose from, yielding nineteen neighbors that differ from the protein in the first amino acid. By the same process, the protein has nineteen neighbors that differ from it in the second amino acid, nineteen neighbors that differ from it in the third, the fourth, the fifth, and all the way through the hundredth amino acid. So all in all, our protein has 100 × 19 or 1,900 immediate neighbors. A neighborhood like this is already large, and it would be even larger if you changed not one but two or more amino acids. Clearly, this can’t be bad for innovation: With one or a few amino acid changes, evolution can explore many proteins.
I suggest you all read the book to understand Wagner's arguments. (Note: I am not asking you to buy - there are a myriad places from where you can borrow or download.) Me_Think
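Wagner's neighbor count in the excerpt above is easy to verify by brute force. A sketch over an arbitrary 100-residue toy sequence (the sequence itself is random, chosen only for illustration):

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def one_letter_neighbors(seq):
    """All sequences differing from seq at exactly one position."""
    out = []
    for i, aa in enumerate(seq):
        for sub in AMINO_ACIDS:
            if sub != aa:
                out.append(seq[:i] + sub + seq[i + 1:])
    return out

random.seed(0)  # arbitrary toy protein, 100 residues long
protein = "".join(random.choice(AMINO_ACIDS) for _ in range(100))
neighbors = one_letter_neighbors(protein)

print(len(neighbors))       # 1900 = 100 positions x 19 alternatives
print(len(set(neighbors)))  # 1900: all distinct
print(f"{20 ** 100:.2e}")   # ~1.27e+130 sequences in the whole library
```

The last line puts the 1,900 neighbors in context: the neighborhood is large in absolute terms but a negligible fraction of the full 20^100 library, which is the point both sides of the thread are arguing over.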
Joe, you have a point: just as natural selection is not a selection, only an analogy. However, we can use that to summarise dynamic-stochastic walks [oops] that are blind [oops, another personification] or non-foresighted [yikes, double oops] across abstract configuration spaces [oops, not observable like the space about us, another analogy], and having a challenge [oh, trouble again . . . ] to encounter islands of function [whoops, yet another analogy]. KF kairosfocus
BTW, unguided evolution is not a search Joe
keith s:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
And we know what you post will be pure speculation and will be evidence-free. So quote away. Joe
Me_Think @26 But what you say about the number of steps needed to find a new phenotype can only make sense granted you're already in your circle of solutions. That's the first thing. And the second is that it anyway doesn't make any sense, since why then are you comparing the volume of that circle to the volume of the configuration space? To make the numbers look more impressive? It's just a silly trick, it looks like. Lesia
MT, 26: Pardon, but repeating an error and not responding to its correction (which is in 22 above and is already a repeat of earlier corrections that you also ignored . . . ) is not helping to move things forward. What you are doing is repackaging the problem and presenting it back as the solution; no, the sparse search for needles in a haystack is the problem, not the solution, and whether you try random walks or dusts [scattered samples] makes little difference. I comment on points interwoven with your argument: >> A ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube’s volume as dimensions increase.>> a: Yes, the neighbourhood [Mathematical senses are intended, extending Hamming distance] of a point in a config space of large dimensionality and range of possible configs will increasingly be a tiny fraction of the space. b: Mix in sharply restricted resources, about 10^87 possible atomic-event-scale moves in the sol system [10^111 for the cosmos as a whole as observed], and any search will be a vanishingly small fraction of the at least 3.27 * 10^150 to 1.07*10^301 possibilities for just 500 - 1,000 bits to specify cells in the space, i.e. as many dimensions. c: FSCO/I, for reasons already pointed out, will be deeply isolated, and you have a blind, no-steering-intelligence search on chance plus necessity, a dynamic-stochastic process. d: Sampling theory will rapidly tell you that under such circumstances you have little or no warrant for hoping to find zones of interest X that are isolated in the space, where the set of clusters of cells z1, z2, . . . zn (the islands of function collectively) is a very small fraction, for reasons based on constraints on configs imposed by interactive functionally specific organisation. e: Blind chance and mechanical necessity is not a reasonable search paradigm. Intelligent design routinely produces FSCO/I. 
>>I will quote Wagner himself: This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions. >> f: As the proportion of searchable cells relative to the possibilities W falls away exponentially with the number of bits, the search becomes ever more sparse and likely to be unfruitful. Beyond 500 - 1,000 bits of space (and bits is WLOG) it is patently futile. It matters not if you have a dust or a random walk with drift or whatever combination of the two or the like. g: You are inadvertently confirming the empirical strength of the logic of the design inference explanatory filter. >>What this means: In a network of N nodes and N-1 neighbors, if in 1 dimension 10 steps are required to discover a new genotype/procedure, in higher dimensions these 10 steps reduce drastically to a fraction of 1 step! >> h: Again, restating the problem of sparse blind search for needles in a vast haystack as if that were the solution. i: The implicit assumption, in the context of the Tree of Life model, is that you are already on an imagined vast continent of function, with nicely behaved fitness functions that allow near-neighbourhood searches to branch on up to the twigs such as we are on. j: That is why I first put up the Smithsonian TOL, to remind us that all of this has to start with blind watchmaker mechanisms in Darwin's pond or the like, and you have to find the shoreline of function in a context of gated, encapsulated, self-assembling metabolic automata that use codes to control assembly machines to make the vital proteins, which are needed in the hundreds for just the first relevant cell. 
k: Where there is zero reason to believe on evidence that the sort of islands of function imposed by interactive functional organisation vanish for ribosomes or embryologically and ecologically feasible body plans. l: So, the issue of resolving the blind watchmaker thesis on empirical evidence and evident reason -- not imposed a priori Lewontin-Sagan style materialist ideology -- remains. Perhaps, you too would wish to take a serious try at the 2-year long TOL challenge? KF kairosfocus
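The orders of magnitude in points b and f above can be checked with arbitrary-precision integers. A sketch, taking the 10^87 and 10^111 event budgets as given from the comment:

```python
# Configuration counts for 500- and 1,000-bit spaces, as exact integers.
W500 = 2 ** 500
W1000 = 2 ** 1000
print(f"{float(W500):.3g}")   # ~3.27e+150, matching point b
print(f"{float(W1000):.3g}")  # ~1.07e+301

# Atomic-event budgets quoted in the comment (taken as given, not derived):
sol_system_events = 10 ** 87
cosmos_events = 10 ** 111

# Fraction of the 500-bit space a solar-system-scale search could sample:
frac = sol_system_events / W500
print(f"{frac:.1e}")  # ~3.1e-64: a vanishingly sparse sample
```

Whether one accepts the argument built on them or not, the numbers themselves check out: 2^500 ≈ 3.27 × 10^150 and 2^1000 ≈ 1.07 × 10^301.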
F/N: I responded to WJM vs Ewert and KS on alleged circularity in the understanding of CSI here. I clip: ___________ >> 1: FSCO/I — the operationally relevant thing — is observable as a phenomenon in and of itself. It depends on multiple, correctly arranged and coupled, interacting components to achieve said functionality. 2: That tight coupling and organisation with interaction sharply constrains the clusters of possible configs consistent with the functionality. Where, 3: There are a great many more clumped configs in the possibilities space that are non functional. (An assembled Abu 6500 C3 reel will work, you can shake up a bag of parts as long as you like, generating all sorts of clumped configs, which predictably will not.) 4: The number of ways to scatter the parts is even hugely more, and again, non functional. 5: The wiring diagram for the reel is highly informational, and the difference between scattered or clumped at random in a bag and properly assembled is manifest. That is, qualitatively observable. 6: The wiring diagram can be specified in a string of structured y/n q’s defining the functional cluster of states (there are tolerances, it is not a single point.) That allows us to quantify the info in bits, functionally specific info. 7: Now, let us define a world as a 1 m^3 cubic vat in which parts are floating around based on some version of Brownian motion, with maybe drifts, governed by let’s just use Newtonian dynamics. Blind chance and mechanical necessity. 8: It is maximally unlikely that under these circumstances a successful 6500 C3 will be assembled. 9: By contrast, feed in some programmed assembly robots, that find and clump parts then arrange in a complete reel per the diagram . . . quite feasible. And such would with high likelihood, succeed. 
10: So, we see that blind chance and mechanical necessity will predictably not find the island of function (it is highly improbable on such a mechanism) but is quite readily achieved on intelligently directed configuration. 11: Now, observe sitting there on your desk, a 6500 c3 reel. It is not known how it came to be, to you. But it exhibits FSCO/I . . . just the gear train alone is decisive on that, never mind the carbontex slipping clutch drag and other features such as the spool on bearings etc. 12: On your knowledge of config spaces, islands of function and the different capabilities of the relevant mechanisms, you would be fully entitled to hold FSCO/I is a reliable sign of design, and to — having done a back of envelope calc on the possibility space of configs and the search limitations of the sol system (sparse, needle in haystack search) — hold that it is maximally implausible that a blind dynamic-stochastic mechanism as described or similar could reasonably account for the reel. 13: Thus, the reasoning that infers design on FSCO/I is not circular, but is empirically and analytically grounded. 14: It extends to the micro world also. For, say the protein synthesis mechanism in the ribosome and associated things, is a case of an assembly work cell with tape based numerical control. There is no good reason to infer that such a system with so much of FSCO/I came about by blind chance and mechanical necessity on the gamut of the observable cosmos. But, assembly according to a plan, makes sense. 15: Some will object by inserting self replication and an imagined deep past. That simply inadvertently highlights that OOL is pivotal, as the ribosome system is key to the cell and proteins. 16: Where, the origin of the additional capacity of self replication becomes important, and brings to bear Paley’s thought exercise of the time keeping self replicating watch in Ch II of his 1804 Nat Theol. 
(Which, for coming on 160 years, seems to have been shunted to one side in haste to dismiss his watch vs stone in the field argument. And BTW, Abu started as a watch making then taxi meter manufacturing company, then turned to the SIMPLER machine, fishing reels, when WW II cut off markets. A desperation move that launched a legend.) 17: So, FSCO/I remains a pivotal issue, once we start from the root of the TOL. And, it allows us to see how it is that design is a better explanation for specified, functional complexity than blind chance and mechanical necessity. (Never mind side tracks on nested hierarchies and the like.) >> And with a follow up to MF on the relationship with Irreducible Complexity: >> IC entities are linked to FSCO/I, as in that case the interactive organised complex functionality includes a core of parts that are each necessary for the core functionality. IC is thus a subset of FSCO/I, which is the relevant form of CSI. By contrast dFSCI is another sub set of FSCO/I, but in many cases due to redundancies [error correcting codes come to mind], there will be no set of core parts in a data string such that if any one of such is removed function ceases. CSI is a superset that abstracts specification away from being strictly functional. >> ____________ KF kairosfocus
InVivoVeritas, KF and Lesia, The concept is quite simple: A ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube’s volume as the number of dimensions increases. I will quote Wagner himself:
This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions.
What this means: In a network of N nodes and N-1 neighbors, if in 1 dimension 10 steps are required to discover a new genotype/procedure, in higher dimensions these 10 steps reduce drastically to a fraction of 1 step! Me_Think
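Wagner's percentages follow from the standard volume formula for a d-dimensional ball, V_d(r) = pi^(d/2) r^d / Gamma(d/2 + 1). A sketch that fixes the 2-D ball/square volume ratio at 75 percent, as in his example, and carries the same radius into higher dimensions:

```python
import math

def ball_fraction(d, r_over_side):
    """Fraction of a d-cube's volume occupied by a d-ball whose radius
    is r_over_side times the cube's side length."""
    return math.pi ** (d / 2) * r_over_side ** d / math.gamma(d / 2 + 1)

# Pick the radius so the 2-D ball/square ratio is exactly 75 percent.
r = math.sqrt(0.75 / math.pi)

for d in range(2, 6):
    print(d, round(ball_fraction(d, r) * 100, 1))
# 2 75.0
# 3 48.9   (Wagner: "49 percent in three dimensions")
# 4 28.1   ("28 percent in four")
# 5 14.7   ("14.7 percent in five")
```

So the quoted figures are correct as geometry; the thread's dispute is over what shrinking fraction means for a blind search, not over the arithmetic.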
Me_Think @11, wait, what your calculations in there show is not that only 3.14% of the search space is to be searched for the solution, or I'm not getting you the right way. You say, for instance, >In 3 dimensions, the search area will be 4/3 x pi x 10^3 >Area to search is now a cube (because of 3 dimensions) = 100^3. >Thus the % of area to be searched falls to just 4188.79/100^3 = 0.42 % only. But that's your problem in fact! - You've just shown that under the described conditions the chance for a randomly picked point to be from your circle of solutions is 4188.79/100^3 - that's what you've shown (and, obviously, the greater the overall volume of the configuration space and the smaller the circle of solutions, the lower are the chances). And could you be more specific about these hypervolumes - it's not the maths but the idea behind these calculations which is not clear; it's just a maths trick of an illusionist, it seems, which in no way solves anything, because you just redefine the way you calculate probabilities (you take these geometric probabilities but use concepts valid for hyperbolic geometry instead of Euclidean, right?), but how does it help solve the real search issues? What is in there which is so devastating for ID? Lesia
F/N: Almost anything dissolved in water will reduce its freezing point, starting with common salt. A far better challenge to address is origin of the protein synthesis system and associated codes, as well as of the cluster of relevant proteins in AA chain space, as in the OP, and the similar challenge to explain body plans. As in codes + regulation --> proteins --> cell types (with self replication requiring codes etc in a vNSR) --> tissues --> organs --> systems --> organisms with body plans. In short, back to the challenge. KF kairosfocus
Mung, two years ago KS dodged the challenge to warrant the blind watchmaker claim from the root in OOL up, to try to get back to the favourite tactic of objecting to design theory, similar to the latest foray. On this one, he simply refuses to acknowledge the fatal flaws and his rhetorical black knight status per the skit. I still say, as my old Gramps used to: every tub must stand on its own bottom. KF kairosfocus
MT & IVV: IVV is right. It has long since been pointed out that config spaces are multidimensional, and that representation on coords giving degrees of freedom per component brings in, for each: location relative to an origin, 3 degrees of freedom (x,y,z), plus yaw, pitch, roll (we can use the ox axis as polar axis to define the equivalent of North). Six dimensions per part. Next, we have n parts, n being about 60 for the Abu 6500 C3, i.e. we see 360 dimensions to its config space. For a body of gas, n is of order 10^20 or better, etc. Now, what MT (who has been previously corrected but has ignored it) is raising is effectively that once we have an initial location in the config space and undertake a random walk with drift, we go to a neighbourhood ball of other points, which as the space becomes arbitrarily large becomes an ever smaller (eventually effectively vanishingly small) fraction of the space. This allows us to see how MT has begged the key questions and has as a result handed back the problem as though it were the solution, strawmannising and begging the question: 1 --> WLOG, we can discuss on digital strings, in effect chains of structured y/n q's that, taken together, specify the overall specific config. (That's how AutoCAD etc work.) 2 --> For a space of possibilities for 500 bits, we easily see that 2^500 = 3.27*10^150 possibilities, while at typical fast chem rxn rates, the 10^57 atoms of the sol system could only undertake about 10^87 or so states. The ratio of possible search to space of possibilities is about that of a one-straw-sized, blindly chosen sample to a cubical haystack as thick as our galaxy. This is the needle in haystack, vs sparse search problem. 3 --> Now, as the Abu 6500 C3 shows, when functionality depends on specific organised interaction of many correctly located, oriented, matching, coupled parts it sharply confines functionality to isolated islands in the config space. 
That is, we face the problem of deeply isolated islands of function as the needles in the haystack. (There are vastly more clumped but non-functional ways to arrange the parts [shake the reel parts up in a bag], or even more ways to have them scattered about, than ways consistent with functionality.) 4 --> Whether a blind watchmaker chance plus necessity search is a finely dispersed dust in the config space, or it is a connected dynamic-stochastic random walk with drift [think, air molecules moving around within an air mass at random, but the body as a whole is drifting as part of a wind], or a combination of the two or the like, we are looking at sparse blind search in a space utterly dominated by non-functional configs. 5 --> This implies the challenge of a search for a golden search [S4GS] that puts one in an extraordinarily lucky state, on or just conveniently next to an island of function. Where, as searches of a space of cardinality W cells are subsets, the set of searches is the power set of cardinality 2^W. And higher order searches are even more deeply exponential. 6 --> S4GS is exponentially harder than direct blind search. So, a simple reasonably random (not too far off from a flat random sample) sample is a reasonable estimator of likelihood of success. Where the very name, needle in haystack, points out how unlikely such would be to succeed. Thus, the strawman problem. 7 --> Also, implicit in the notion that a sparse search gets out of the config space challenge, is the notion of a vast continent of closely connected functional states, that is easily accessible from plausible initial conditions. The case of the 6500 C3 reel and things like protein assembly in the cell or the complex integrative flow network of cellular metabolism should serve to show how this begs the question. 8 --> In reply, we say, show us this sort of config space topology. 
Where as just one case the freshly dead show us already just how close to functional, non functional states can be. KF kairosfocus
Me_Think at #11, #14, #18 I doubt that you think correctly. Either you do not understand what Wagner says or both you and Wagner are wrong. If in a 1-dimension search context the Target (or solution) Space (i.e. TS) is 1/10 of the Search Space (SS), then the chance of finding a solution randomly is 1/10 (unguided search Success Ratio SR = 1/10). [This is a Very Generous Ratio]. As we move to higher dimensional Search Contexts we maintain On Each Dimension A Constant Ratio of Target Space/Search Space of 1/10. The operator “**” means “power”. For example: “10 ** 2” means: “10 power 2” Legend: SR = Success Ratio 1 Dimension Search Context: SR = 1/10 2 Dimensions Search Context: SR = 1/10 * 1/10 = (1/10) ** 2 = 1/100 3 Dimensions Search Context: SR = 1/10 * 1/10 * 1/10 = (1/10) ** 3 = 1/1000 ………………………. 5000 Dimensions Search Context: SR = (1/10) ** 5000 = 0.00000000000 ……..001 where the number of 0-es between the decimal point and the final “1” is 5000-1 = 4999 zeroes, i.e. SR = 1 / (10**5000). In other words this means that for a 5000 dimensional search context the chances of success in a blind (unguided) search are 1 in 10 ** 5000 which practically means NIL, NADA, ZERO. Me_Think_You_Are_Wrong Regards InVivoVeritas
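The success-ratio table above can be checked with exact rational arithmetic; note that (1/10)^5000 underflows ordinary floating point, so exact fractions (or logarithms) are needed. A minimal sketch:

```python
from fractions import Fraction

def success_ratio(dims, per_dim=Fraction(1, 10)):
    """Chance that one blind sample hits the target when each of `dims`
    coordinates independently has probability `per_dim` of landing
    inside the target interval."""
    return per_dim ** dims

# The low-dimensional rows of the table: 1/10, 1/100, 1/1000.
assert success_ratio(1) == Fraction(1, 10)
assert success_ratio(2) == Fraction(1, 100)
assert success_ratio(3) == Fraction(1, 1000)

# 5000 dimensions: exactly 1 / 10**5000, i.e. a decimal point followed
# by 4,999 zeroes and then a 1.
assert success_ratio(5000) == Fraction(1, 10 ** 5000)

# An ordinary float underflows to zero long before 5000 dimensions:
print(0.1 ** 5000)  # 0.0
```

This assumes, as the comment does, an independent per-dimension hit probability of 1/10; whether that model fits biological search is exactly what the thread is disputing.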
ppolish @ 19
Why haven’t fish come up with igloos?
Yeah, I have been wondering too. If igloos are so critical, the ID agent would have endowed fish with cool looking igloos without waiting for poor evolution to do its duty. Me_Think
Why haven't fish come up with igloos? They've been around since even before the whole feet thing. FCOL. ppolish
bornagain77 @ 15,
Darwinian debating tactic #19, ,,, when you have absolutely no observational evidence, use mathematical fantasy,,, also known as the ‘there’s a chance’ law of probability,,
As the hyperbolic dimensions increase, the search space decreases - isn't it true? And if only one 10^-100th of the space needs to be searched, obviously the probability of finding a new genotype or new process is very close to 1. I don't see any fantasy here - do you? Me_Think
Right you are keiths. It's only a scratch . http://en.wikipedia.org/wiki/Slow_slicing Yes, you may still have your arm, you may still have your hand, you may still have your finger, it's just a scratch, after all. Mung
Reading the author's interview, it sounds like evolution can find an anti-freeze protein if you assume there are kazillions of anti-freeze proteins to be found. I guess it's easier to believe there are kazillions of anti-freeze proteins than to believe the fish were created adapted to their environment. If his theory is true, I wonder why humans haven't found an anti-freeze protein. Smidlee
Darwinian debating tactic #19, ,,, when you have absolutely no observational evidence, use mathematical fantasy,,, also known as the 'there's a chance' law of probability,, Dumb and Dumber 'There's a Chance' https://www.youtube.com/watch?v=KX5jNnDMfxA Darwinism Not Proved Absolutely Impossible Therefore Its True - Plantinga http://www.metacafe.com/watch/10285716/ Perhaps Wagner can grace Chaitin with his mathematical wisdom??? Active Information in Metabiology – Winston Ewert, William A. Dembski, Robert J. Marks II – 2013 Excerpt, page 9: Chaitin states [3], “For many years I have thought that it is a mathematical scandal that we do not have proof that Darwinian evolution works.” In fact, mathematics has consistently demonstrated that undirected Darwinian evolution does not work.,, Consistent with the laws of conservation of information, natural selection can only work using the guidance of active information, which can be provided only by a designer. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2013.4/BIO-C.2013.4 Chaitin is quoted at the 10:00 minute mark of the following video in regard to Darwinism's lack of a mathematical proof - Dr. Marks also comments on the honesty of Chaitin in personally admitting that his long sought after mathematical proof for Darwinian evolution failed to deliver the goods. https://www.youtube.com/watch?v=No3LZmPcwyg&feature=player_detailpage#t=600 bornagain77
Smidlee @ 13
So it’s guided unguided evolution? It sounds like magic without a magician, engineering without an engineer, design without a designer.
Wagner shows that to find a new phenotype in a genotypic network, only less than one 10^-100th of 'the library' needs to be searched. Refer to comment #11 for more details or read Chapter Six of Wagner's book. Me_Think
"Wagner’s book is bad for ID and good for unguided evolution. Whether it qualifies as “non-Darwinian” isn’t going to change that." It sounds a lot like Shapiro's Natural Genetic Engineering. So it's guided unguided evolution? It sounds like magic without a magician, engineering without an engineer, design without a designer. Smidlee
Mung, What makes you singularly ineffective as an ID advocate is that you focus on minutiae while the ID battle is being lost elsewhere. Wagner's book is bad for ID and good for unguided evolution. Whether it qualifies as "non-Darwinian" isn't going to change that. Keep up the good work. keith s
ppolish @ 6
Keith, Arrival of the Fittest is a cool book so far for me. Just finished chapter 3 where Wagner proposes a 5000 Dimension Universal Library. And I thought 10 Dimension String Theory was wild:)
You are confusing hyperbolic geometry dimension with dimensions of universe space. You don't realize how devastating the hyperbolic geometric dimension is to ID's view that the search space is too large to search for new genotypes. I will explain: Imagine a solution circle (the circle within which the solution exists) of radius 10 cm inside a 100 cm square search space. The area which needs to be searched for the solution is pi x 10^2 = 314.15 The total search area is 100 x 100 = 10000. The % area to be searched is (314.15/10000) x 100 = 3.14% In 3 dimensions, the search volume will be 4/3 x pi x 10^3 = 4188.79. The region to search is now a cube (because of 3 dimensions) = 100^3. Thus the % of volume to be searched falls to just 4188.79/100^3 = 0.42 % only. The hypervolume of a sphere with dimension d and radius r is: (pi^(d/2) x r^d) / Gamma(d/2 + 1). The hypervolume of the cube is side^d. At 10 dimensions, the volume to search reduces to just: 0.0000000255 % But in nature, the actual search area is incredibly small. As Wagner points out in Chapter Six,
In the number of dimensions where our circuit library exists—get ready for this—the sphere contains neither 0.1 percent, 0.01 percent, nor 0.001 percent. It contains less than one 10^ -100th of the library
Me_Think
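The shrinking sphere-to-cube volume ratio sketched in the comment above can be checked numerically. Here is a minimal sketch in Python; the radius-10 sphere inside a side-100 cube is the commenter's own example, not anything taken from Wagner's book:

```python
import math

def sphere_fraction(d, r=10.0, side=100.0):
    """Fraction of a d-dimensional cube (edge length `side`) that is
    occupied by a d-ball of radius r: V_ball / V_cube."""
    v_ball = math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)
    v_cube = side ** d
    return v_ball / v_cube

# The fraction collapses as the dimension grows:
#   d = 2  -> about 3.14 %
#   d = 3  -> about 0.42 %
#   d = 10 -> about 0.0000000255 %
for d in (2, 3, 10):
    print(f"d={d}: {sphere_fraction(d) * 100:.3e} %")
```

The collapse only accelerates at higher dimensions, which is the point the comment is driving at: in very many dimensions, almost all of the cube's volume lies outside any fixed-radius ball.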
Sorry KF, you're going to have to read the book and then explain it to me. I can send you my copy when I'm done:) But author Wagner seems to stress his Universal Library is really really large. ppolish
It's always interesting to go back through previous threads to see what Darwinists argued then and what they say now. keiths @ 10:
There is nothing anti-Darwinian about Wagner’s thesis.
Mung @ 45:
That’s about as non-darwinian as you can get.
keiths @ 193:
No, Mung’s objective was to argue that Wagner’s book was non-Darwinian, when it clearly isn’t.
keiths @ 215:
There is nothing anti-Darwinian about Wagner’s thesis.
This provides a good glimpse into the psychology of visitors to UD such as keiths. I never said Wagner's book was anti-Darwinian, and keiths even acknowledges this to be the case (more than the one time I showed). According to keiths, the book is clearly not non-Darwinian because it is not anti-Darwinian, and therefore I must be wrong to say anything about it is non-Darwinian. keiths has not shown that the book is not non-Darwinian; in fact, he now wants to argue that the book is anti-ID, therefore it must be Darwinian. Right? Mung
KF, Perhaps keiths can answer "The Challenge." We're still waiting for Darwin's Champion to appear. Mung
pp: feel free to explain and expand. Is this a 5,000 dimension config space, which is rather small for such -- x1, x2 . . x5000? (Phase spaces routinely go over 10^20 or more dimensions.) KF kairosfocus
Keith, Arrival of the Fittest is a cool book so far for me. Just finished chapter 3 where Wagner proposes a 5000 Dimension Universal Library. And I thought 10 Dimension String Theory was wild:) ppolish
KS: I interpret your claims as an offer to answer based on observational evidence that warrants here and now the causal adequacy of blind watchmaker mechanisms from the root of the Darwinist Tree of Life up. Kindly cf. the PS I added to the OP, and the diagram from the Smithsonian. KF

PS: If you care to look here, you will see why it is a matter of fairly easily observed fact (commonplace all around us) that function dependent on specific configuration of interacting components imposes stringent limits on functional clusters of configs, relative to vastly many more that are clumped but non-functional, and even more that are scattered. If you cannot tell why that is when confronted by an Abu 6500 C3 reel and a bag of parts for same to be shaken up till they somehow assemble in functional form, then -- with all due respect -- you have a problem with patent facts to go with the longstanding one of denying self-evident first truths of reasoning. If you think that this does not relate to molecular processes in the cell, then ponder the NC (numerically controlled) machine that uses coded strings to control protein assembly. Selective hyperskepticism and/or denialism about the reality of islands of functional configs in the space of possible clumped/scattered configs is not a healthy sign. kairosfocus
keith s, I know you are overjoyed whenever you find anything that might cast doubt on the design inference, but, I hate to burst your nihilistic bubble, unsubstantiated criticism of Axe's work is a dime a dozen. Almost every claim that unguided evolution can produce functional proteins is based on 'assuming the conclusion', i.e. evolution is assumed as true throughout the process of investigation and is never allowed to be questioned:
Proteins Did Not Evolve Even According to the Evolutionist’s Own Calculations but so What, Evolution is a Fact - Cornelius Hunter - July 2011
Excerpt: For instance, in one case evolutionists concluded that the number of evolutionary experiments required to evolve their protein (actually it was to evolve only part of a protein and only part of its function) is 10^70 (a one with 70 zeros following it). Yet elsewhere evolutionists computed that the maximum number of evolutionary experiments possible is only 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude.
http://darwins-god.blogspot.com/2011/07/response-to-comments-proteins-did-not.html

Now Evolution Must Have Evolved Different Functions Simultaneously in the Same Protein - Cornelius Hunter - Dec. 1, 2012
Excerpt: In one study evolutionists estimated the number of attempts that evolution could possibly have to construct a new protein. Their upper limit was 10^43. The lower limit was 10^21. These estimates are optimistic for several reasons, but in any case they fall short of the various estimates of how many attempts would be required to find a small protein. One study concluded that 10^63 attempts would be required for a relatively short protein. And a similar result (10^65 attempts required) was obtained by comparing protein sequences. Another study found that 10^64 to 10^77 attempts are required. And another study concluded that 10^70 attempts would be required. In that case the protein was only a part of a larger protein which otherwise was intact, thus making the search easier. These estimates are roughly in the same ballpark, and compared to the first study giving the number of attempts possible, you have a deficit ranging from 20 to 56 orders of magnitude. Of course it gets much worse for longer proteins.
http://darwins-god.blogspot.com/2012/12/now-evolution-must-have-evolved.html?showComment=1354423575480#c6691708341503051454
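The "deficit ranging from 20 to 56 orders of magnitude" in the excerpt above is plain exponent arithmetic. A minimal sketch using only the figures quoted from Hunter's posts (the variable names are illustrative, not from the source):

```python
# Base-10 exponents quoted above: attempts required to find a protein in
# various studies, versus the bounds on attempts evolutionarily possible.
required_exponents = [63, 64, 65, 70, 77]   # attempts required: 10^63 .. 10^77
possible_low, possible_high = 21, 43        # attempts possible: 10^21 .. 10^43

# Most generous pairing: smallest requirement against largest resource.
best_case_deficit = min(required_exponents) - possible_high
# Least generous pairing: largest requirement against smallest resource.
worst_case_deficit = max(required_exponents) - possible_low

print(best_case_deficit, worst_case_deficit)  # 20 and 56 orders of magnitude
```

Because the quantities are powers of ten, subtracting exponents is equivalent to dividing the underlying counts, which is why the deficit is expressed in orders of magnitude.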
The following article exposes how Darwinists severely twist the data once it reaches the popular press:
The Hierarchy of Evolutionary Apologetics: Protein Evolution Case Study - Cornelius Hunter - January 2011 http://darwins-god.blogspot.com/2011/01/hierarchy-of-evolutionary-apologetics.html
i.e. Nobody, not even Darwinists themselves, has ever demonstrated that unguided Darwinian processes can produce functional proteins, yet they assume throughout the process of investigation that Darwinism has done so and argue vigorously from that perspective, especially once the research reaches the level of the popular press. But the question being asked all along is exactly that, i.e. 'can unguided processes generate functional proteins?' All the empirical evidence we have says that unguided Darwinian processes are grossly inadequate for the generation of novel proteins. If you disagree with that, then please post the exact peer-reviewed paper that refutes Dr. Behe's 'First Rule' paper, which examined four decades of laboratory evolution experiments and found not even a single novel protein, and found 'that even the great majority of helpful mutations degrade the genome to a greater or lesser extent':
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades... The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent... I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.
http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010 http://intelligentdesign.podomatic.com/player/web/2010-12-23T11_53_46-08_00
Moreover, Axe has defended his work on numerous occasions. The following is one of my favorite defences by him:
Show Me: A Challenge for Martin Poenie - Douglas Axe August 16, 2013
Excerpt: Poenie wants to be free to appeal to evolutionary processes for explaining past events without shouldering any responsibility for demonstrating that these processes actually work in the present. That clearly isn't valid. Unless we want to rewrite the rules of science, we have to assume that what doesn't work (now) didn't work (then). It isn't valid to think that evolution did create new enzymes if it hasn't been demonstrated that it can create new enzymes. And if Poenie really thinks this has been done, then I'd like to present him with an opportunity to prove it. He says, "Recombination can do all the things that Axe thinks are impossible." Can it really? Please show me, Martin! I'll send you a strain of E. coli that lacks the bioF gene, and you show me how recombination, or any other natural process operating in that strain, can create a new gene that does the job of bioF within a few billion years.
http://www.evolutionnews.org/2013/08/a_challenge_for075611.html
So basically, keith s, since we have all seen the over-the-top bluff and bluster of Darwinists before, if you truly want to falsify ID then show us the empirical evidence from Lenski's E. coli, or some other similar experiment, where a novel protein, or better yet a molecular machine, was generated by unguided Darwinian processes. bornagain77
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con.
You've already stated that there's nothing new in it. Now that you're actually reading it, have you changed your mind? Mung
Cross-posting this from vjtorley's new thread:

Vincent, Of all the points you raise in your OP, Axe’s argument is going to be the most fun for me to criticize, but also the most technically involved. I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.

Denyse did an OP on the book, thinking it was anti-Darwinian. Boy oh boy, was she ever wrong. This book is full of bad news for ID. It’s well-written and fascinating. I think that ID supporters will enjoy it, if they can get past the sinking feeling they’ll experience when they realize the dire implications for ID. The ‘islands of function’ argument for ID was already unsustainable, but this book nails the coffin lid shut.

Just thought I’d give readers advance notice in case they want to order the book or download it onto their e-readers.

PS Thanks again, Denyse, for bringing the book to my attention. :-) keith s
Hallelujah! A KF thread with open comments! keith s
