Uncommon Descent Serving The Intelligent Design Community

Sean Pitman on evolution of mitochondria

[Image: mitochondria / Louisa Howard]

From Detecting Design:

Now, it is true that mitochondrial organelles are quite unique and very interesting. Unlike any other organelle, except for chloroplasts, mitochondria appear to originate only from other mitochondria. They contain some of their own DNA, which is usually, but not always, circular – like circular bacterial DNA (there are also many organisms that have linear mitochondrial chromosomes with eukaryotic-style telomeres). Mitochondria also have their own transcriptional and translational machinery to decode DNA and messenger RNA and produce proteins. Also, mitochondrial ribosomes and transfer RNA molecules are similar to those found in bacteria, as are some of the components of their membranes. In 1970, these and other similar observations led Dr. Lynn Margulis to propose an extracellular origin for mitochondria in her book, Origin of Eukaryotic Cells (Margulis, 1970). However, despite having their own DNA, mitochondria do not contain anywhere near the amount of DNA needed to code for all mitochondria-specific proteins. Over 99% of the proteins needed for mitochondrial function are actually produced outside of the mitochondria themselves. The DNA needed to code for these proteins is located within the cell’s nucleus and the protein sequences are assembled in the cytoplasm of the cell before being imported into the mitochondria (Endo and Yamano, 2010). It is hypothesized that these necessary genes were once part of the mitochondrial genome, but were then transferred and incorporated into the eukaryotic nuclear DNA over time. Not surprisingly then, none of the initial mtDNAs investigated by detailed sequencing, including animal mtDNAs, look anything like a typical bacterial genome in the way in which genes are organized and expressed (Michael Gray, 2012).

It is interesting to note at this point that Margulis herself wasn’t really very Darwinian in her thinking. She opposed competition-oriented views of evolution and stressed the importance of symbiotic or cooperative relationships between species. She also argued that standard neo-Darwinism, which insists on the slow accrual of mutations by gene-level natural selection, “is in a complete funk” (Link).

But what about all of those similarities between mitochondria and bacteria? It would seem like these similarities should overwhelmingly support the theory of common ancestry between bacteria and mitochondria.

Well, the problem with Darwinian thinking in general is that too much emphasis is placed on the shared similarities between various creatures without sufficient consideration of the uniquely required functional differences. These required differences are what the Darwinian mechanism cannot reasonably explain beyond the lowest levels of functional complexity (or minimum structural threshold requirements). The fact of the matter is that no one has ever observed, nor has anyone ever published, a reasonable explanation for how random mutations combined with natural selection can produce any qualitatively novel protein-based biological system that requires more than a few hundred specifically arranged amino acid residues – this side of trillions upon trillions of years of time. Functionally complex systems that require a minimum of multiple proteins comprised of several thousand specifically coded amino acid residue positions, like a rotary flagellar motility system or ATP synthase (illustrated), simply don’t evolve. It just doesn’t happen, nor is it remotely likely to happen in what anyone would call a reasonable amount of time (Link). And, when it comes to mitochondria, there are various uniquely functional features required for successful symbiosis – features that bacteria simply do not have. In other words, getting a viable symbiotic relationship established to begin with isn’t so simple from a purely naturalistic perspective. More.

See also: Cells were complex even before mitochondria?: Researchers: Our work demonstrates that the acquisition of mitochondria occurred late in cell evolution; the host cell already had a certain degree of complexity

and Life continues to ignore what evolution experts say (symbiosis can happen)

Follow UD News at Twitter!

Comments
seanpit: It is the argument regarding the “degree” of structure that is key here.
Zachriel: That’s right. And the only way to answer that question is to examine the actual landscape. You claim that there is no significant structure with regards to long English texts. You proposed a process to test that claim, but you haven’t been able to provide an operational definition of “beneficial function”.
The actual landscape for protein-based systems has been examined in fair detail, and very clear patterns have emerged as higher and higher levels of functional complexity are evaluated. 1) There is an exponential decrease in the ratio of potentially beneficial vs. non-beneficial sequences. 2) The degree of structure or non-randomness or linearity in the arrangement of these targets does not increase as your claims would require. 3) The minimum Hamming distance between potential targets increases in a linear manner. These same features also exist within the English language system, computer programs, or any other system of meaningful information/function that is based on a specific arrangement of “characters” (i.e., letters, or amino acids, or 0s and 1s, etc.). This means, of course, that any mechanism like the Darwinian mechanism will experience an exponential decline in effectiveness with each step up the ladder of functional complexity. Your algorithms are no different. Even your word evolution algorithm, as non-Darwinian as it is, experiences an exponential increase in the time required to find longer defined sequences – without regard to a sequential increase in beneficial function. Adding this additional qualification would only enhance the exponential nature of the pattern. I’m not sure, then, why you are so convinced that the Darwinian mechanism is so clearly responsible for the origin of so many qualitatively novel high-level systems of function. Upon what is your faith based? Where is your own evidence along these lines which trumps the evidence I’ve provided here?

seanpit
March 24, 2016, 07:18 AM PDT
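The three claims above (exponential decay of the beneficial ratio, non-increasing structure, linearly growing minimum Hamming distance) all turn on measuring Hamming distances between sequences. As a neutral illustration of the metric itself, here is a minimal sketch; the 4-letter "target" set is invented purely for illustration and is not data from either side of the debate:

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

def min_distance_to_target(seq: str, targets) -> int:
    """Smallest Hamming distance from seq to any 'beneficial' target.

    This is the 'gap' a mutational process must cross to reach the
    nearest selectable steppingstone.
    """
    return min(hamming(seq, t) for t in targets)

# Hypothetical set of "beneficial" 4-letter sequences (illustrative only).
targets = {"word", "ward", "wart", "cart"}
print(hamming("word", "cart"))                  # 3
print(min_distance_to_target("wore", targets))  # 1 ("wore" -> "word")
```

Applied to a real word list or protein database, the disputed question is precisely how this minimum distance behaves as sequence length grows.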
seanpit: Beyond the demonstrated reality that the steppingstones in protein sequence space...

The stepping stones are in the lines of descent from the common ancestor. In any case, while you have retreated from your position concerning having to cross oceans of meaningless sequences to evolve 7-letter words, you still claim that longer sequences can't be similarly crossed. You proposed a process which entails determining the "beneficial function" of a letter sequence. Please provide an operational definition of "beneficial function" for letter sequences so that we can test your claim.

Zachriel
March 24, 2016, 07:16 AM PDT
Zachriel: Pointing out that there are stepping stones across a creek does not imply that the stepping stones were put there by design.

Beyond the demonstrated reality that the steppingstones in protein sequence space are no longer closely spaced (or significantly linear in their arrangement) beyond very low levels of functional complexity, it is mistaken to argue that if such a situation were ever observed it would be more consistent with a mindless cause than with an intelligent cause. If the phenomenon in question goes significantly beyond what the known powers of mindless natural processes are likely to generate, but remains within what the known powers of intelligent design are able to generate, the most rational hypothesis to explain the phenomenon is intelligent design. Here are a few examples along these lines:

http://cdn.wonderfulengineering.com/wp-content/uploads/2014/11/Gravity-Glue-%E2%80%93-Michael-Grab-Rock-Balancing-Art9.jpg
http://cdn.earthporm.com/wp-content/uploads/2014/11/gravity-stone-balancing-michael-grab-12.jpg
http://www.thisiscolossal.com/wp-content/uploads/2015/01/cover-1.jpg
https://katemckinnon.files.wordpress.com/2014/02/screen-shot-2014-02-23-at-8-23-44-pm.png

While it would be obvious to most people coming upon such scenes that intelligent design had been at work (Michael Grab in these cases), the question is how such a conclusion is so obvious when it comes to objects like stacked rocks or a nice path of closely-spaced steppingstones across a lake or ocean where there are no other steppingstones for vast distances all around:

http://www.educatetruth.com/wp-content/uploads/2014/01/stepping-stones.jpg
http://www.picturesofengland.com/img/X/1031263.jpg

Without a very compelling natural explanation, why would any reasonable person conclude that such extremely rare steppingstones were all lined up, as pictured in the links above, by random chance or some unknown mindless mechanism?

Yet this is what is often being done now by many evolutionists and methodological naturalists in general. During the latest debate between Krauss, Meyer, and Lamoureux, the now-popular standby multiple-universe or multiverse argument was put forward by Krauss to explain the extreme fine-tuning of our universe – a situation for which there is simply no compelling naturalistic explanation. What’s especially interesting about the multiverse argument, as pointed out by Meyer toward the end, is that it undermines the very basis of science itself. It can be used to explain anything and everything – and therefore nothing. It is essentially identical to the “God did it” hypothesis (which is different from the God-only hypothesis – that is, that only an intelligent designer could likely have produced the phenomenon in question), since it can be used to explain, without invoking intelligent manipulation, the stacked rocks or steppingstones pictured above – or something like Arnold Schwarzenegger winning the California lottery 10 times in a row (he just happened to be in the right universe at the time). This kind of desperation to avoid admitting that intelligent design, of any kind, could have been involved in the origin and/or the diversity of life on this planet, or the fine-tuned features of the universe, is not based on science, but upon a naturalistic philosophy that strongly resembles the blind-faith religion of various fundamentalist fanatics – fundamentalists who are bound to avoid any other conclusion regardless of the evidence presented.

Sean Pitman
seanpit
March 24, 2016, 07:04 AM PDT
bill cole: Where this gets foggy is when you say there are structures that guide that starts to cross over to the design inference.

Pointing out that there are stepping stones across a creek does not imply that the stepping stones were put there by design.

Zachriel
March 23, 2016, 03:22 PM PDT
Sean, thank you very much for the explanation. I think real progress in understanding is going on here, which is great. Zachriel, I see you understand the sequential-space problem, which is a good start, and you are proposing solutions, which is great. Where this gets foggy is when you say there are structures that guide – that starts to cross over to the design inference. We know that intelligence evolved on earth; the question is when and how. How was the incredible capability of cells inserted into, or generated by, cells that control the most precise nano-manufacturing processes in the world: cell metabolism and the cell cycle? I am OK with wherever the data takes us.

bill cole
March 23, 2016, 03:15 PM PDT
seanpit: It is the argument regarding the “degree” of structure that is key here.

That's right. And the only way to answer that question is to examine the actual landscape. You claim that there is no significant structure with regards to long English texts. You proposed a process to test that claim, but you haven't been able to provide an operational definition of "beneficial function".

Zachriel
March 23, 2016, 01:57 PM PDT
Bill Cole,

Zachriel is arguing that because there is some structure to the location of beneficial sequences at very low levels of sequence space (i.e., their location isn't entirely random relative to each other), such non-random patterns must exist at all levels of sequence space – to the same degree. It is the argument regarding the “degree” of structure that is key here. As best as I've been able to figure, evolutionists like Zachriel and many others consistently appear to argue that the degree of non-randomness actually increases with increasing levels of sequence space – to compensate for the exponential decrease in beneficial vs. non-beneficial options. The overall effect, according to those like Zachriel, seems to be an essentially linear increase in the average time required to achieve success with each step up the ladder of functional complexity – not an exponential increase. What's the evidence to support this notion, according to them? Well, it has to do with sequence homologies. The fact is that there is a certain degree of non-randomness to functional sequences at all levels of functional complexity. Larger protein-based systems, like sentences in the English language system, are usually comprised of fairly common subsystems and subdomains. The argument, then, is that more complex systems, like the hemoglobin molecule for example, can easily be evolved in a reasonable amount of time by simply linking up these smaller subsystems or subsequences that already exist as parts of other systems of function. The similarities, or homologies, between these subsystems are used as evidence of common evolutionary ancestry. As an even more striking example, consider the multipart flagellar motility system. This system of around 40 different specifically arranged structural protein parts supposedly evolved by simply linking up the individual pre-existing proteins to produce each steppingstone in the pathway toward full-blown flagellar motility.

After all, every single protein in the flagellar motility system, except for one, shares non-random homologies with some other protein in the gene pool that is part of a different system of function. It seems intuitively obvious, then, that these homologies strongly suggest common evolutionary ancestry. Never mind that such homologies also exist in systems created by intelligent design – like computer codes or even the works of Shakespeare. So, how does one tell if a given homology is the result of deliberate design (as in conservation of design) or the result of non-designed evolutionary ancestry? Well, it all depends on whether the homologies are homologous enough to cross the gaps in sequence space in a reasonable amount of time without the need to invoke intelligent design. For lower levels of functional complexity, requiring fewer than a few hundred specifically arranged characters, the homologies are significant enough that the hypothesis of intelligent design need not be invoked. However, the problem for the evolutionary perspective is that these homologies are not homologous enough beyond these very low levels. Beyond the level of 1000 specifically arranged characters, the needed homologies simply aren't there for a successful “swap” to be realized with a single mutation – or even several dozen mutations of any kind. Why not? Contrary to the claims of Zachriel, evolution at these higher levels would require numerous additional modification mutations to produce a successful concatenation mutation that is functionally beneficial to a selectable degree – because the minimum Hamming gap distances are simply far too large at these levels for the genome to maintain all of the required subsystems needed to realize a swap mutation that would be successful without any additional modifying mutations. That is why, in short, these evolutionary scenarios for how evolution must have produced these high-level systems are nothing more than just-so stories.

They are statistically untenable and they never happen in real life – not beyond the level of 1000 specifically arranged amino acid residues. There's not a single observable example of evolution in action at this level described in the literature. Not a single one of the flagellar steppingstones has ever been crossed in real life. There's just nothing supporting such notions beyond wishful thinking and a vivid imagination. In short, then, the science of detecting the activity of intelligent design (which is used all the time in mainstream sciences – such as forensic science, anthropology, and even SETI) is based on the idea that the phenomenon in question can only be reasonably explained by invoking intelligent design. For a detailed description of this problem, specifically dealing with the claims of Nick Matzke regarding flagellar evolution, see the following links:

http://www.detectingdesign.com/flagellum.html
http://www.detectingdesign.com/NickMatzke.html
http://www.detectingdesign.com/JasonRosenhouse.html

seanpit
March 23, 2016, 12:09 PM PDT
bill cole: Are you saying there is a cellular mechanism that can narrow the sequential space of the genome?

The distribution of words and phrases in sequence space is not random, but highly structured. By analogy, it shows how intuitive notions concerning high-dimension spaces are not necessarily accurate.

bill cole: This would support James Shapiro's theory of natural genetic engineering.

It just shows that organic molecules exist in a highly structured universe. So, for instance, small changes in amino acid sequence often result in small changes in the three-dimensional structure of the protein, meaning that there are selectable pathways to increased specificity.

Zachriel
March 23, 2016, 09:18 AM PDT
Zachriel
He suggested wordspace as a proxy. From our experience, the space is highly structured.
I don't understand this point. Are you saying there is a cellular mechanism that can narrow the sequential space of the genome? If so, can you support this with evidence of the mechanism? This would support James Shapiro's theory of natural genetic engineering.

bill cole
March 23, 2016, 08:53 AM PDT
bill cole: I understand this is your opinion but I think that Sean is right here.

Intuition is a valuable resource, but it can often mislead us.

bill cole: I understand in certain cases an adaptive change may take a few changes but at some point you need very different function and now Sean's ocean awaits you and that ocean is longer then our universe.

That's the claim. He suggested wordspace as a proxy. From our experience, the space is highly structured. He provided a process to test this proposition @36, but that entails selection for beneficial function, something for which he hasn't been able to provide an operational definition.

bill cole: Try to imagine designing the DNA sequences that produce hemoglobin.

We wouldn't know how to design hemoglobin. However, hemoglobin is composed of a number of protein subunits called globins, which form a family congruent with the phylogenetics of organisms, having diverged from a common ancestor.

https://www3.nd.edu/~aseriann/CHAP7B.html/img028.gif

Hemoglobin is an example of how a large protein can evolve by concatenating smaller proteins. 4^umpteenth means very little when discussing such a process.

http://antranik.org/wp-content/uploads/2011/12/hemoglobin-molecular-structure-alpha-beta-globin-chain-with-heme.jpg

Zachriel
March 23, 2016, 08:38 AM PDT
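Zachriel's hemoglobin point – that a large protein can arise by concatenating whole pre-existing modules rather than by searching character space letter by letter – can be sketched with a toy example. The module names below are invented stand-ins for globin-like subunits; this illustrates only the combinatorial claim, not real biochemistry:

```python
# Toy sketch: assembling longer sequences from whole pre-existing modules.
MODULES = ["alpha", "beta"]  # invented stand-ins for globin-like subunits

def concatenations(modules, k):
    """All sequences built from k modules drawn (with repetition) from `modules`."""
    if k == 0:
        return [""]
    return [m + rest for m in modules for rest in concatenations(modules, k - 1)]

tetramers = concatenations(MODULES, 4)
# The search space is 2**4 = 16 module combinations, not 26**18 letter strings.
print(len(tetramers))  # 16
print("alphabetaalphabeta" in tetramers)  # True: an assembly reusing each module twice
```

This is why, on the concatenation view, the raw size of character space ("4^umpteenth") overstates the search problem: the effective alphabet is the pool of existing modules.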
Zachriel
Sure. However, Sean Pitman’s claim about wordspace was that you had to cross oceans of meaningless sequences, which was not correct.
I understand this is your opinion, but I think that Sean is right here. I understand that in certain cases an adaptive change may take a few changes, but at some point you need very different function, and now Sean's ocean awaits you – and that ocean is larger than our universe. Sequential space is essentially infinite once sequences get over 100 aa long. The mechanism you are trying to support says you can mutate your way through infinity to find advantage. The other point is that protein sequences are very sophisticated in what they do. Try to imagine designing the DNA sequences that produce hemoglobin. How would you go about this? OK, what are the baby steps to get to hemoglobin? Try to imagine this.

bill cole
March 23, 2016, 08:03 AM PDT
seanpit: The question is, why is a particular “word” or “phrase” more functionally beneficial compared to what came before? – not compared to a random sequence.

Longer words were apparently more "beneficial" in your original statement. They are generally more complex, at the very least in terms of syllables, and often encapsulate multiple concepts. Longer snips of Shakespeare might reasonably be considered more "beneficial" than shorter snips of Shakespeare. But we're happy to defer to your own operational definition of "beneficial" with regards to phrases – once you provide one.

seanpit: I’m sorry you misunderstood what I was saying.

That's fine. As we have shown, you don't have to cross oceans of meaningless words to find seven-letter words, or ten-letter words, or twelve-letter words. Glad that's settled.

seanpit: Not beyond very low levels of functional complexity.

There are examples of increasing specificity, such as in Lenski's experiment, which involve multiple moving parts.

seanpit: Also, your algorithm does not allow for random walks – despite the fact that random walks are quite common in real life.

We'd be happy to include random walks, but it wasn't material to your original proposed process. Now, provide an operational definition of “beneficial function” for letter sequences per your proposed process @36.

seanpit: I explained in some detail why “nonhomologous” swapping of stable sequences would work for a while after individual point mutations no longer worked. Such swapping of larger sequences just moves the problem up a level is all.

You cited the paper which said, “More generally, our simulations demonstrate that the efficient search of large regions of protein space requires a hierarchy of genetic events, each encoding higher-order structural substitutions. We show how the complex protein function landscape can be navigated with these moves.” In any case, in all that you posted, we didn’t see an operational definition of “beneficial function” for letter sequences.

Zachriel
March 23, 2016, 07:47 AM PDT
Zachriel,
As word evolution is based on a word dictionary, per his original proposal, it would seemingly be reasonable to use a phrase dictionary for multi-word sequences. Indeed, a word is reasonably considered more “functional” than a random sequence of letters, and a sequence of words from Shakespeare would seemingly qualify as more “functional” than a random sequence of words. Sean Pitman rejected this proposal, but has been unable to offer any other way to directly test his claim. He’s left waving in the general direction of big numbers. It’s obvious you see.
The question is, why is a particular “word” or “phrase” more functionally beneficial compared to what came before? – not compared to a random sequence. Remember, for the Darwinian mechanism to work, the steppingstone sequence must be made up of sequences that each show improved beneficial function compared to the previous steppingstone in the sequence. Neither your word nor your phrase evolution program makes selections in this way. Your Phrasenation program, in particular, shows selected partial sentences and phrases that are clearly nonsensical – much less sequentially beneficial. In short, I was trying to get you to think about the nature of sequence space and the exponential decay of potential targets within that space. While I’m sorry that your programs do not truly reflect the limitations of the Darwinian mechanism, the evidence regarding the exponential decline of evolutionary potential with each step up the ladder of functional complexity is overwhelming.
Sean Pitman’s claim about wordspace was that you had to cross oceans of meaningless sequences, which was not correct.
Again, I’m sorry you misunderstood what I was saying. Why on Earth do you think that I always cited the limit of 1000 saars if I thought that evolutionary progress would actually stall out within sequence spaces of less than 10 characters? How does that make any sense to you, given your description of my position? Of course evolution works within such relatively small spaces – despite the exponentially increasing amounts of time required even at such low levels. It is this pattern of exponentially increasing amounts of time that is of primary importance here. I’m still mystified as to why you don’t see this pattern as relevant.
An increase in specificity is something that can evolve, as can be easily shown.
Not beyond very low levels of functional complexity. When you’re talking about systems of function that require a minimum of more than 1000 specifically arranged characters, such systems are so far apart in sequence space that getting from one to any other would require trillions upon trillions of years. And, natural selection cannot come to the rescue here because natural selection cannot work at all until the next beneficial sequence is first discovered. You think that such large gap distances can be crossed by single recombination mutations. As I’ve explained, at the level of 1000 saars or above, the odds that anything already exists within a given gene pool which could be successfully recombined with another sequence to produce a qualitatively novel functional system are essentially nil. It just doesn’t happen and statistically it is extremely unlikely to happen.
The website doesn’t say that, but refers to phrases. However, based on your statement about crossing oceans, your claim is that selectable transitions do not exist even for 7-letter words. That means they will *never* evolve, per the algorithm. Selection is absolute, per your original statement “Start with a short 2 or 3-letter word and see how many words you can evolve that require greater and greater minimum sequence requirements.” If the sequence doesn’t form a valid word, it never enters the population.
Again, I never said that “transitions” or the possibility for successful random mutations of various kinds do not exist for 7-character sequences. I’ve consistently explained that they do exist at such low levels – which is somewhat unusual for Intelligent Design proponents to argue. Most IDists argue that there are no examples at all of functional evolution. This isn’t true. There are many examples of true evolution in action. It is just that all of these examples are at very low levels of functional complexity and show an exponential stalling effect with each step up the ladder of functional complexity. So, clearly, your description of my position is a mischaracterization of my true position – a strawman. By the way, you do realize what a translocation mutation is, right? It is a large jump across “oceans” of sequence space. Sure, the “oceans” at such low levels are very small, relatively speaking, but the idea is the same. At such very low levels the odds of taking a large successful leap across sequence space aren’t too bad – especially for a larger population undergoing extremely high reproductive and mutation rates. Also, your algorithm does not allow for random walks – despite the fact that random walks are quite common in real life. Such random walks would also have fairly good success at these low levels of functional complexity – as I’ve explained many times already in this thread.
seanpit: Your mechanism appears to be largely based on random sampling of sequence space that heavily favors locations very close to the starting point location.
Yes. It’s called evolution.
Indeed. And, despite your extremely generous mutation and reproductive rates, such “evolution” shows an exponential increase in required time with each step up the ladder of functional complexity. Even your own word-evolution algorithm, which isn’t based on sequentially increasing beneficial function, shows this non-linear increase in average time with each increase in the minimum size requirement.
Immediately followed by “nonhomologous DNA ‘swapping’ of low-energy structures is a key step in searching protein space”. Word evolution involves non-homologous swapping. In any case, in all that you posted, we didn’t see an operational definition of “beneficial function” for letter sequences.
Of course! As I previously explained in some detail, “nonhomologous” swapping of stable sequences will work for a while after individual point mutations no longer work. Such swapping of larger sequences just moves the problem up a level is all – like using whole words instead of individual letters for the “alphabet” of options for various positions within a sequence. What is interesting here is that point mutations work very well at very low levels of functional complexity – like evolving short individual words of less than 5 or 6 letters in length. However, the ability of point mutations to find new targets very quickly drops off, exponentially in fact, with additional steps up the ladder of functional complexity. At this point, as previously explained, the only way forward, to cross the growing Hamming gap distances and the more and more uniform distribution of functionally beneficial islands, is to resort to recombination or “swap” mutations involving larger pre-existing sequences within the gene pool of options. While this does help for a little while, the success of these swap mutations also starts to drop off, exponentially, with each step up the ladder of functional complexity. Why? Because, in short, gene pools are limited. They can only store a limited number of sequences. As the minimum Hamming gap distance continues to increase, the odds that the gene pool contains what is needed to undergo a successful “swap” decrease in an exponential manner. That is why the average time to achieve success, even with swap mutations, continues to increase in an exponential manner until, beyond the level of 1000 saars, trillions of years of time isn’t enough.

seanpit
March 23, 2016, 07:26 AM PDT
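The dictionary-based word evolution both parties describe – point mutations with absolute selection, where a mutant that isn't a valid word never enters the population – can be sketched as follows. The mini-dictionary here is invented for illustration; a real run would use a full English word list:

```python
import string
from collections import deque

# Invented mini-dictionary standing in for a full English word list.
DICTIONARY = {"cat", "cot", "cog", "dog", "dot", "cart", "card"}

def point_mutants(word):
    """All single-letter substitutions of `word` (same length)."""
    for i in range(len(word)):
        for c in string.ascii_lowercase:
            if c != word[i]:
                yield word[:i] + c + word[i + 1:]

def selectable_neighbors(word, dictionary=DICTIONARY):
    """Selection is absolute: only mutants that are valid words survive."""
    return sorted(m for m in point_mutants(word) if m in dictionary)

def evolve(start, goal, dictionary=DICTIONARY):
    """Shortest chain of valid-word steppingstones, if one exists."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in selectable_neighbors(path[-1], dictionary):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no chain of selectable steppingstones exists

print(evolve("cat", "dog"))  # ['cat', 'cot', 'cog', 'dog']
```

Run against a real dictionary, the interesting measurement is how the count of selectable neighbors, and the length of the shortest chains, changes with word length – which is exactly the quantity the two sides dispute.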
bill cole: It is the current evolutionary claim that life’s diversity is created by partial stochastic mechanisms. Sure. However, Sean Pitman's claim about wordspace was that you had to cross oceans of meaningless sequences, which was not correct. bill cole: Sean and I are skeptical of this because of exponential growth of the sequential space of proteins. Skepticism is fine, however, it doesn't substitute for evidence. In the case of the wordscape, given a operational definition of "beneficial", we should be able to test your intuition. bill cole: We also know that the chance of proteins folding to function varies depending of function and nuclear proteins that work together with other proteins are highly sequence specific. An increase in specificity is something that can evolve, as can be easily shown. seanpit: On your website you’re specifically claiming that I said that evolution was impossible, would take “zillions of years”, at the level of 7-character sequence space. The website doesn't say that, but refers to phrases. However, based on your statement about crossing oceans, your claim is that selectable transitions do not exist even for 7-letter words. That means they will *never* evolve, per the algorithm. seanpit: Beyond this, are you telling me that even at 7-character sequence space your random search algorithm never makes a wrong choice? Selection is absolute, per your original statement "Start with a short 2 or 3-letter word and see how many words you can evolve that require greater and greater minimum sequence requirements." If the sequence doesn't form a valid word, it never enters the population. seanpit: Your mechanism appears to be largely based on random sampling of sequence space that heavily favors locations very close to the starting point location. Yes. It's called evolution. seanpit: Very quickly your mechanism will stall out – at very low levels of functional complexity. That's your claim, but something you haven't been able to show. 
seanpit: "We demonstrate further that even the DNA shuffling approach is incapable of evolving substantially new protein folds."

Immediately followed by "nonhomologous DNA 'swapping' of low-energy structures is a key step in searching protein space". Word evolution involves non-homologous swapping. In any case, in all that you posted, we didn't see an operational definition of “beneficial function” for letter sequences.

Zachriel
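[Ed.: The selection rule Zachriel describes above ("if the sequence doesn't form a valid word, it never enters the population") can be sketched in a few lines. This is a minimal illustration, not Zachriel's actual Mutagenation code; the six-word dictionary and the population/mutation scheme are stand-ins chosen only to make the sketch self-contained.]

```python
import random
import string

# Stand-in dictionary; real runs would use a full English word list (assumption).
DICTIONARY = {"a", "at", "rat", "rate", "crate", "create"}

def neighbors(word):
    """All sequences one point mutation (substitution, insertion, deletion) away."""
    letters = string.ascii_lowercase
    subs = {word[:i] + c + word[i + 1:] for i in range(len(word)) for c in letters}
    ins = {word[:i] + c + word[i:] for i in range(len(word) + 1) for c in letters}
    dels = {word[:i] + word[i + 1:] for i in range(len(word))} if len(word) > 1 else set()
    return (subs | ins | dels) - {word}

def evolve(start, generations, seed=1):
    """Absolute selection: a mutant survives only if it is a dictionary word."""
    rng = random.Random(seed)
    population = {start}
    for _ in range(generations):
        parent = rng.choice(sorted(population))
        child = rng.choice(sorted(neighbors(parent)))
        if child in DICTIONARY:  # invalid sequences never enter the population
            population.add(child)
    return population

# With enough generations the population climbs a -> at -> rat -> rate -> ...
print(sorted(evolve("a", 20000), key=len))
```

Every intermediate is itself a word, which is the point under dispute: no "ocean of meaningless sequences" is crossed so long as such stepping-stone chains exist in the dictionary.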
March 23, 2016, 06:24 AM PDT
Zachriel,
seanpit: Why then lie about what I actually said?
What you said was “If I want to evolve a new 7-letter word starting with meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” This statement is demonstrably false.
First off, that’s not what you’re claiming on your website. On your website you’re specifically claiming that I said that evolution was impossible, would take “zillions of years”, at the level of 7-character sequence space. That’s simply not true and you know it. Why else do you suppose I’ve always drawn the line at 1000, not 7, specifically arranged characters? Why the need to lie about where I’ve always drawn the line for the limits of evolutionary potential? Beyond this, are you telling me that even at 7-character sequence space your random search algorithm never makes a wrong choice? If you set up the parameters like I described above, your own algorithm would indeed take exponentially greater amounts of time to find targets at this very low level. Even with your own massive reproductive and mutation rates, the search takes longer at the 7-character level than at lower levels – a non-linear increase in time...
seanpit: Isn’t your own algorithm based on “random sampling”
Calling evolution random sampling is an equivocation. If you mean evolution, then use the term evolution, which we have defined as a population undergoing random mutation, random recombination, and selection.
I’m speaking specifically about the part of the Darwinian mechanism that takes place before selection takes place. Random mutations can come in the form of random sampling of sequence space or random walks. Your mechanism appears to be largely based on random sampling of sequence space that heavily favors locations very close to the starting point location. While this will indeed help your algorithms find closely-spaced steppingstones more readily, it will not help you as the minimum distance between the starting point and the next closest steppingstone increases in a linear manner with each step up the ladder of functional complexity. Very quickly your mechanism will stall out – at very low levels of functional complexity. Your “Phrasenation” algorithm isn’t based on sequentially increasing beneficial function, but on template matching to any portion of a pre-established template – without any regard to the actual function or meaning of the evolving sequence. Therefore, it fails to qualify as a valid example of “evolution” – at least in the Darwinian sense of the term. It is very much in line with Dawkins’ “Methinks it is like a weasel” algorithm – identical in fact.
seanpit: The fact is, there simply is no significant statistical difference for success between random sampling and random walks
A random walk or a random sampling of the entire space would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials.
And, I’ve already explained to you why this is:

1) Your “trials” are based on an extraordinarily high reproductive rate and mutation rate.
2) Your population size is very high considering the level of functional complexity under consideration.
3) Targets within sequence space are more clustered at these very low levels, with very short Hamming distances of 1 being much more likely.
4) Your own algorithm (as limited as it is) demonstrates a non-linear, even exponential, increase in the amount of time required to find targets at higher and higher levels.
5) Higher and higher level systems, beyond these extremely low levels of functional complexity, continue to show that exponentially more and more time is required until evolutionary progress completely stalls out, this side of trillions of years of time, shy of the level of 1000 saars.
6) Higher and higher levels of functional complexity continue to show a more and more randomly uniform distribution of functionally beneficial systems – or even stable proteins.
7) It’s an undeniable fact that these uniformly distributed targets (even when you’re only talking about stable proteins) are exponentially reduced in relative numbers with each step up the ladder of functional complexity.
seanpit: If you’re biasing your “random sampling” to positions located very short distances from your starting point, of course this would help find closely-spaced targets.
It’s called evolution.
Yes - based on random sampling with an emphasis on finding targets with a very small Hamming distance of 1 from the starting point – which only has any remote hope of success at very very low levels of functional complexity (even given the extremely high reproductive and mutation rates that you use in your algorithm). While true Darwinian evolution of protein-based systems in real living things isn’t quite as heavily dependent on this particular type of random mutation (or the extremely high reproductive/mutation rates you use), the same basic problem is realized in both cases – an exponential increase in the average time required to take the next step up the ladder of functional complexity.
Your claim concerned words. (The study you cite concerns sequences that share the same fold, comparable to a study of word synonyms, not the distribution of enzymes in general, or the distribution of enzymes in nature.) In any case, we’re still waiting for an operational definition of “beneficial function” for letter sequences.
The same situation is true for all systems of meaningful information based on a specific sequence of characters. It doesn’t matter if the sequences produce the same qualitative function: the sequences that produce these functions are essentially randomly distributed in sequence space – without significant clustering. The same is true for all other types of functional sequences as well - with more and more prominence at higher and higher levels. Additional information along these lines includes the finding that protein families are separated from each other by gaps of non-foldable / non-functional sequences. This is true even for very small sequence spaces comprised of only 16 binary characters per sequence, simulating certain features of protein folding: “This produces a frustration barrier, e.g., a region of frustrated sequences between each pair of minimally frustrated families. Any stepwise mutational path between one minimally frustrated sequence family and another must then visit a region of slow or nonfolding sequences… In the case of real proteins, the sequences in these high frustration regions are much less likely to meet physiological requirements on foldability (of course, real physiological requirements can be much more extensive than this). If the sequences in these regions do not meet the physiological criteria, then they cannot participate in biochemical processes, which means that they will be physiologically excluded. If the requirement is sufficient, the region between two families will be completely excluded, which cuts sequence space into separate fast-folding, stable parts.
This provides a mechanism for partitioning protein sequence information into evolutionarily stable, biochemically useful (foldable) subsets… Thus, because p(x) is the best path, a gap will occur, completely separating the sequence families… A spontaneous double or triple exchange mutation is required to mutate across the gap.” http://www.pnas.org/content/95/18/10682.full Such non-beneficial “gaps” in sequence space become more and more prominent at higher and higher levels of functional complexity. With each step up the ladder, the minimum Hamming “gap distances” between potentially beneficial islands grow in a linear manner. This is why such gaps become problematic by the time the level of 100aa sequences is considered. At this point, “point mutation alone is incapable of evolving systems with substantially new protein folds. We demonstrate further that even the DNA shuffling approach is incapable of evolving substantially new protein folds.” http://www.pnas.org/content/96/6/2591.full The authors of this particular article go on to argue that, because of the lack of usefulness of point mutations at the level of 100aa systems, evolution, at this point, became almost entirely dependent upon “nonhomologous DNA ‘swapping’ of low-energy structures.” In other words, if carefully selected stable protein folds are “swapped” at the 100aa level, novel stable proteins may be discovered, with some rare beneficial proteins realized. That is why, usually, when the successful evolution of qualitatively novel protein-based systems is realized at the level of 100aa systems, it isn’t based on point mutations, but on multi-character indel mutations that consist of stable protein folds. It becomes quite clear, then, that by the time the level of 100-character sequence space is being searched, point mutations become pretty much pointless when it comes to evolutionary progress – because of linearly expanding Hamming gap distances within such higher-level spaces.
Of course, as one keeps moving up the ladder of functional complexity, these minimum Hamming gap distances keep increasing in a linear manner. Fairly quickly these gap distances become so large that even the swapping of stable protein folds cannot cross the distance in a single bound. At this point, multiple specific swaps must be realized to achieve success. This means, of course, that the average time required increases exponentially, so that only rarely do we see real-time examples of evolutionary progress at the level of 200 or 300 saars. By the time the level of 1000 saars is reached (usually involving multiple specifically arranged proteins within a system), finding a qualitatively novel system requires numerous specific “swaps” of stable protein sequences, each consisting of dozens of specifically arranged amino acid residues. At this point, the statistical odds against success get so large that trillions upon trillions of years are required to overcome these odds, and “evolution” completely stalls out. And that, in short, is why I’ve always drawn the line of the limit to evolutionary potential at 1000 saars (not 7). In any case, I grow tired of your dishonest and very repetitive strawman misrepresentations. If you have nothing substantive to add to this discussion, beyond your very dishonest claim that I somehow said that evolution couldn't possibly work within the extremely low level of 7-character sequence space, I don't see the point of continuing to go round and round here...

seanpit
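[Ed.: The quantitative skeleton of this exchange, whatever one makes of its premises, is simple: if each specific required change arises with probability p per replication, crossing a gap of Hamming width d in one jump takes on the order of (1/p)^d replications. The sketch below shows the Hamming distance being argued over and that exponential scaling; the value of p is purely illustrative, not a measured rate from either party.]

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

print(hamming("rat", "rot"))  # 1: one selectable step away
print(hamming("rat", "cot"))  # 2: requires two specific changes

# If each specific substitution occurs with probability p per replication
# (p is illustrative), crossing a gap of width d in a single jump takes
# on the order of (1/p)**d replications: linear gap growth, exponential cost.
p = 1e-3
for d in range(1, 5):
    print(f"gap {d}: ~{(1 / p) ** d:.0e} replications")
```

This is the arithmetic behind both sides' positions; the dispute is an empirical one about whether real gap widths d actually grow with functional complexity.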
March 23, 2016, 01:08 AM PDT
Zachriel
As word evolution is based on a word dictionary, per his original proposal, it would seemingly be reasonable to use a phrase dictionary for multi-word sequences. Indeed, a word is reasonably considered more “functional” than a random sequence of letters, and a sequence of words from Shakespeare would seemingly qualify as more “functional” than a random sequence of words. Sean Pitman rejected this proposal, but has been unable to offer any other way to directly test his claim. He’s left waving in the general direction of big numbers. It’s obvious you see.
It is the current evolutionary claim that life's diversity is created by partial stochastic mechanisms. Sean and I are skeptical of this because of the exponential growth of the sequential space of proteins. We also know that the chance of proteins folding to function varies depending on function, and nuclear proteins that work together with other proteins are highly sequence specific. The bottom line is that evolutionary theory does not have a mechanism that passes the sniff test. How in the world can a stochastic process overcome the almost infinite statistical space of a long sequence?

bill cole
March 22, 2016, 06:06 PM PDT
seanpit: Beyond the fact that this isn’t true (a randomly located target within 7-character sequence space could quickly be discovered by the Darwinian mechanism in an evolving colony of organisms)

It's your claim that you have to cross an ocean of meaningless sequences to find a seven-letter word that is false.

seanpit: Why then lie about what I actually said?

What you said was “If I want to evolve a new 7-letter word starting with a meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” This statement is demonstrably false.

seanpit: Isn’t your own algorithm based on “random sampling”

Calling evolution random sampling is an equivocation. If you mean evolution, then use the term evolution, which we have defined as a population undergoing random mutation, random recombination, and selection.

seanpit: The fact is, there simply is no significant statistical difference for success between random sampling and random walks

A random walk or a random sampling of the entire space would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials.

seanpit: If you’re biasing your “random sampling” to positions located very short distances from your starting point, of course this would help find closely-spaced targets.

It's called evolution.

seanpit: For short RNA molecules ... For short proteins

Your claim concerned words. (The study you cite concerns sequences that share the same fold, comparable to a study of word synonyms, not the distribution of enzymes in general, or the distribution of enzymes in nature.) In any case, we're still waiting for an operational definition of "beneficial function" for letter sequences.

Zachriel
March 22, 2016, 03:09 PM PDT
Zachriel,
seanpit: I never said that evolution at the level of 7-character sequences would take “zillions of years”
Actually, if you have to cross oceans of meaningless sequences, then you can *never* evolve 7-letter words stepwise.
Beyond the fact that this isn’t true (a randomly located target within 7-character sequence space could quickly be discovered by random mutations in a colony of organisms), I never said that evolution at the level of 7-letter sequences was impossible – just the opposite in fact. You've known this for a long time - before you created your website in fact. Why then lie about what I actually said? Why not say that I actually drew the limit to evolutionary progress at 1000 saars – not at 7? Why not just present what I actually said? That evolution works at low levels, like 7-character sequences and the like, but then experiences an exponential decline until it completely stalls out at the level of functional/meaningful systems that require a minimum of at least 1000 specifically arranged characters? What advantage does it give you to lie and make it appear like I said something very different about the limit to evolutionary progress? You don't think it makes you look rather desperate?
seanpit: The odds of success are essentially the same
That’s demonstrably false. As noted above a random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials. That’s not “essentially the same”.
Isn't your own algorithm based on “random sampling”, Zachriel?! The difference is in the location of targets within sequence space - not the method of random sampling - right? The fact is, there simply is no significant statistical difference for success between random sampling and random walks, or any other random search algorithm, given a random location of a rare target.
Equivocation. Random sampling above referred to random sampling of the entire space, as contrasted with evolution and a random walk.
Huh? What do you mean by “evolution” here? How does "evolution" take place until either a random walk or random sampling succeeds?! Then, and only then, will there be evolution in the sense that the selectable target has been discovered - right?! Also, how are your algorithms not undergoing “random sampling of the entire space”? If you’re biasing your “random sampling” to positions located very short distances from your starting point, of course this would help find closely-spaced targets. However, as the minimum target distance increases, such a random search method will not help you. There will still be an exponential stalling effect with each step up the ladder of functional complexity – as your own algorithms demonstrate!
Which reference? [regarding random distributions of proteins etc.]
For short RNA molecules: “The sequences folding into a common structure are distributed randomly in sequence space. No clustering is visible.” http://www.sciencedirect.com/science/article/pii/S1359027897000370 For short proteins: “The data do not indicate a significant amount of clustering… We found essentially no homology between the inverse folded sequences and no discernible clustering. The distribution of amino acids in these sequences is essentially random.” http://www.sciencedirect.com/science/article/pii/S1359027897000370
If the sequences approach randomness, then they are not compressible. We know English is compressible, and we know that there are only a few tens-of-thousands of valid sequences that can be found between spaces.
You don’t get it. While protein sequences, like meaningful English sequences, are always compressible to some degree or another (because of the specific features I just described for you in my previous post), the degree of compressibility decreases with increasing minimum size and/or specificity requirements . . . quickly producing a fairly uniformly random appearance in the distribution (i.e., not really very predictable or significantly “compressible” beyond a certain point). And, the degree of compressibility is reduced with each increase in the minimum size and/or specificity requirement of the meaningful/beneficial sequence under consideration. After all, meaningful sequences, while not entirely random, are also not entirely predictable either. In other words, they cannot be compressed as a truly predictable sequence can be compressed – like the infinite number Pi, for example. This means that they are located between purely randomly generated sequences and those that are highly predictable or non-random. Meaningful/functional sequences therefore appear more and more randomly located within sequence space with each step up the ladder of functional complexity.

seanpit
March 22, 2016, 02:43 PM PDT
seanpit: Zachriel fails to recognize the pattern of declining evolutionary potential – a pattern that is non-linear, exponential in fact, with each level up the ladder of functional complexity.

We're willing to keep an open mind. What we have shown is that, contrary to your claim, you don't have to cross an ocean of meaningless sequences to evolve seven-letter or even longer words. We know that it is the structure of the landscape that determines whether evolutionary search will be effective. Having tested the wordscape extensively, your larger claim appears to be false. However, it's up to you to provide a rigorous definition of "beneficial function" per your own proposed process @36, "Select based on changes in beneficial function".

Zachriel
March 22, 2016, 02:02 PM PDT
bill cole: As the sequence increases in length 10^5 becomes just as big of a problem as 10^10.

Well, that's one of Sean Pitman's claims with regards to word evolution, but he can't provide a rigorous definition of "beneficial function" in order to test his claim. As word evolution is based on a word dictionary, per his original proposal, it would seemingly be reasonable to use a phrase dictionary for multi-word sequences. Indeed, a word is reasonably considered more "functional" than a random sequence of letters, and a sequence of words from Shakespeare would seemingly qualify as more "functional" than a random sequence of words. Sean Pitman rejected this proposal, but has been unable to offer any other way to directly test his claim. He's left waving in the general direction of big numbers. It's obvious you see. Now, you are making the claim, but have as little evidence as he does — other than waving in the general direction of big numbers. It's obvious you see.

seanpit: I never said that evolution at the level of 7-character sequences would take “zillions of years"

Actually, if you have to cross oceans of meaningless sequences, then you can *never* evolve 7-letter words stepwise.

seanpit: The odds of success are essentially the same

That's demonstrably false. As noted above, a random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials. That's not "essentially the same".

seanpit: How are your algorithms not based on random sampling of the surrounding search space?

Equivocation. Random sampling above referred to random sampling of the entire space, as contrasted with evolution and a random walk.

seanpit: I’ve already given you the relevant references regarding the more and more randomly uniform nature of potential targets within sequence space numerous times.

Which reference?

seanpit: It’s the overall pattern that’s important here – not the demonstration of some compressibility by itself.

If the sequences approach randomness, then they are not compressible. We know English is compressible, and we know that there are only a few tens-of-thousands of valid sequences that can be found between spaces.

Zachriel
March 22, 2016, 01:55 PM PDT
Bill Cole,
I think you are doing interesting work here but unfortunately not really validating current evolutionary mechanisms as viable including RMNS and neutral theory. As the sequence increases in length 10^5 becomes just as big of a problem as 10^10. If you jump out of the 50th floor of a building the results are generally the same as jumping out of the 100th floor.
Exactly! Zachriel fails to recognize the pattern of declining evolutionary potential - a pattern that is non-linear, exponential in fact, with each level up the ladder of functional complexity. He thinks that because there is a bit of non-randomness to the distribution of lower-level target sequences, this remains true - to the same degree - for higher and higher levels of functional complexity. This clearly isn't true. The degree of "structure" to the location of potentially beneficial sequences at high levels of functional complexity would have to be truly amazing indeed, even designed, if evolutionary progress were to be tenable at these high levels. The problem, of course, is that the needed degree of structure simply isn't there - not remotely. It's not even there at the relatively low level of just 1000 specifically arranged characters...

seanpit
March 22, 2016, 01:36 PM PDT
Zachriel,
What you said was “If I want to evolve a new 7-letter word starting with a meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” But you don’t have to swim through meaningless sequences to evolve 7-letter words. Your claim is false.
Yet again, I never said that evolution at the level of 7-character sequences would take “zillions of years” – which is what you claim on your website. That's a bold-faced lie - and you know it. As you know, I’ve always said that evolution completely stalls out, this side of a practical eternity of time, at the level of 1000 specifically arranged characters – not 7. You clearly know this - since 2004. So why do you claim something I never said on your website? Why misrepresent me like this? Why the need to lie and paint a false picture of my actual position? Too hard to deal with a limit of 1000 specifically arranged characters? Why not just be honest about what I'm really saying? Beyond this, yet again, even your own algorithm (with its sizable population and enormous reproductive and mutation rates) searches numerous 7-character sequences before it finds defined “words” – and we’re not even talking about beneficial function here. Yet again, just because a potential target is just 1 mutation away from the starting position in hyperdimensional sequence space does not mean that a random walker (especially a single random walker) doesn’t have to swim through quite a few non-target sequences. Your argument that “random sampling” somehow avoids this problem is nonsense. The odds of success are essentially the same and get exponentially worse with each step up the ladder of functional complexity.
That is false. There are many selectable pathways in the region of shorter (ten or fewer letter) words. There are stepping stones!
Again, while it’s true that at very low levels of functional complexity (requiring fewer than 10 specifically arranged characters) the odds are very good that the distance between a given starting point within a large population and the next closest potential target will be just one mutation, that doesn’t mean that these closely-spaced steppingstones are easy to find in higher dimensional space as compared with 2 or 3-character sequences – resulting in a non-linear increase in the average number of random walk or random sampling mutations needed from a given starting point. And, with each step up the ladder of functional complexity, these closely-spaced steppingstones become exponentially less and less common, and therefore much more difficult to find within a given span of time for a steady-state population. And, as you move farther and farther up the ladder of functional complexity, the odds of a single mutation of any kind finding a higher-level beneficial system drop off exponentially as well, until it is essentially impossible that such a situation exists – well before you reach the level of 1000 saars.
seanpit: Random walks or even random sampling, if you prefer, will be successful, very quickly in fact at such low levels (given a reasonably-sized population).
Random walks and random sampling are much slower than evolution in the region of shorter (ten or fewer letter) words. A random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials.
What? How are your algorithms not based on random sampling of the surrounding search space? What is your definition of “evolution”? The evolutionary mechanism in particular? After all, according to most evolutionists, “evolution” must be based on a random search algorithm. That’s how the Darwinian mechanism works – via random sampling or random walks into the surrounding search space. It isn’t until a target beneficial sequence is actually discovered that natural selection comes into play. Beyond this, even your own algorithms “accomplish the task” in far less than 1e5 trials when you’re talking 2 or 3-character sequences. Even as they currently stand, your algorithms take exponentially longer to evolve longer words. And, your 10-letter words would take significantly longer, exponentially so, if you reduced your population size to 2 or 3, your reproductive rate to 2 or 3 per individual per generation, and your mutation rate to one mutation per individual per generation. Why the non-linear increase in the time required by your own algorithms – even starting with a fairly large population, very high reproductive rates, and very high mutation rates?
seanpit: Beyond this, your assumption that meaningful/beneficial sequences will remain significantly clustered beyond the lowest levels of functional complexity is demonstrably false.
Then demonstrate it.
I’ve already given you the relevant references regarding the more and more randomly uniform nature of potential targets within sequence space numerous times. It’s not like this is some kind of secret.
Even simple statistical tests show that texts of English are not evenly distributed in sequence space. If so, English texts would not be compressible, when, in fact, they are highly compressible.
As I’ve explained before, protein-based systems are also compressible – but become less and less so with each step up the ladder of functional complexity. Again, the observation that beneficial protein-based systems (or even stable systems in general) take on a more and more randomly uniform distribution within higher and higher levels of sequence space has been published many times. You see, while words are often based on similar smaller clusters of “letters”, and while phrases are often based on similar underlying “words”, and while sentences are often based on similar “phrases”, such similarities start to break down, more and more, with each additional increase in the minimum size and/or specificity of a beneficial higher-level sequence. It's the overall pattern that's important here - not the demonstration of some compressibility by itself. What is the pattern of compressibility at various levels of functional complexity? The very same thing happens to protein-based systems. This is why non-beneficial gaps start to grow, in a linear manner, between potentially beneficial sequences with each step up the ladder of functional complexity. And, of course, with each linear increase in the minimum non-beneficial gap distance, the average time for a random search algorithm to achieve success increases exponentially.

seanpit
March 22, 2016, 01:24 PM PDT
Z
As noted above a random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials.
Interesting, and I think this is a good simulation of certain proteins. I think you are doing interesting work here, but unfortunately not really validating current evolutionary mechanisms as viable, including RMNS and neutral theory. As the sequence increases in length, 10^5 becomes just as big of a problem as 10^10. If you jump out of the 50th floor of a building the results are generally the same as jumping out of the 100th floor.

bill cole
March 22, 2016, 12:39 PM PDT
bill cole: Can you explain how you came up with this hypothesis?

The contrary hypothesis was by Sean Pitman: that evolving seven-letter words would require crossing oceans of meaningless sequences. The test of the hypothesis is empirical. Words can evolve (in a population subject to random point mutations and recombination) from short words to long words, with each intermediate being a word as found in the dictionary. As noted above, a random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials. http://www.zachriel.com/mutagenation/

Zachriel
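[Ed.: The 10^10 figure for blind sampling checks out as a rough order-of-magnitude estimate. The numbers below are ours, not from the thread: roughly 20,000 valid ten-letter English words, drawn uniformly over a 26-letter alphabet.]

```python
ALPHABET = 26
LENGTH = 10
TEN_LETTER_WORDS = 20_000  # rough dictionary count; an assumption

total = ALPHABET ** LENGTH        # 26**10, about 1.4e14 possible sequences
p_hit = TEN_LETTER_WORDS / total  # chance a uniform random draw is a word
expected_trials = 1 / p_hit       # mean of a geometric distribution

print(f"expected trials: {expected_trials:.1e}")  # on the order of 10^10
```

The 10^5 figure for evolutionary search, by contrast, is an empirical result of the Mutagenation runs, not something this back-of-envelope calculation can reproduce.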
March 22, 2016, 11:57 AM PDT
Zachriel
That is false. There are many selectable pathways in the region of shorter (ten or fewer letter) words. There are stepping stones!
Can you explain how you came up with this hypothesis?

bill cole
March 22, 2016, 11:48 AM PDT
seanpit: As already explained, you do have to “swim” through a small ocean of meaningless/non-beneficial sequences to find relatively rare targets – even at the level of 7-character sequences.

That is false. There are many selectable pathways in the region of shorter (ten or fewer letter) words. There are stepping stones!

seanpit: Random walks or even random sampling, if you prefer, will be successful, very quickly in fact at such low levels (given a reasonably-sized population).

Random walks and random sampling are much slower than evolution in the region of shorter (ten or fewer letter) words. A random sampling would take about 10^10 trials to find a ten-letter word, while evolution accomplishes the task in about 10^5 trials.

seanpit: Beyond this, your assumption that meaningful/beneficial sequences will remain significantly clustered beyond the lowest levels of functional complexity is demonstrably false.

Then demonstrate it. It's your claim, after all. You proposed a process to test your proposition, which hinges on "beneficial function". Your original claim seemed to take a dictionary as a test of "beneficial function" for words, so it's not clear why a dictionary of phrases isn't sufficient. Please provide an unambiguous measure of "beneficial function" for letter sequences.

seanpit: With each step up the ladder of functional complexity the distribution of potential targets takes on a more and more randomly uniform appearance.

Even simple statistical tests show that texts of English are not evenly distributed in sequence space. If so, English texts would not be compressible, when, in fact, they are highly compressible. In addition, we can simply inspect English texts and note that between spaces there are only a few tens-of-thousands of possible sequences.

Zachriel
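[Ed.: The compressibility point is easy to check directly with a general-purpose compressor. The sample texts below are ours, chosen only for illustration; nothing here depends on the particular passages used.]

```python
import random
import string
import zlib

def ratio(text):
    """Compressed size over original size; lower means more exploitable structure."""
    data = text.encode("ascii")
    return len(zlib.compress(data, 9)) / len(data)

english = (
    "we know that it is the structure of the landscape that determines "
    "whether evolutionary search will be effective having tested the "
    "wordscape extensively the claim can be examined directly "
) * 10

rng = random.Random(0)
gibberish = "".join(rng.choice(string.ascii_lowercase + " ")
                    for _ in range(len(english)))

print(f"English text:   {ratio(english):.2f}")   # well below 1: much structure
print(f"Random letters: {ratio(gibberish):.2f}") # near the entropy floor
```

English compresses far better than random letters, which is the observation Zachriel leans on; seanpit's counter-claim is that this advantage shrinks as minimum sequence size and specificity grow, a separate question this snippet does not settle.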
March 22, 2016 at 07:29 AM PDT
Zachriel,
seanpit: You falsely claim, here and on your website, that I drew the limit for evolutionary progress at the level of single words.
Zachriel: What you said was: “If I want to evolve a new 7-letter word starting with a meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” But you don’t have to swim through meaningless sequences to evolve 7-letter words. Your claim is false.
As already explained, you do have to “swim” through a small ocean of meaningless/non-beneficial sequences to find relatively rare targets – even at the level of 7-character sequences. Does this therefore mean that evolutionary progress is statistically impossible at the level of 7-character sequences? Of course not! Random walks, or even random sampling if you prefer, will be successful, very quickly in fact, at such low levels (given a reasonably-sized population). Why? Because the potential targets are relatively close together and because there is still some reasonable clustering at this very low level.

Yet you falsely claim, on your website, that I said it would take “zillions of years” for success at such low levels to be realized. That’s a deliberate lie, since you know full well that I’ve always said that evolution at such low levels happens all the time. You also know that I’ve always drawn the line for a complete stalling of evolutionary progress at the level of 1000 specifically arranged characters (amino acid residues, letters in the English language system, or characters in other systems such as computer code), which is a far cry from the very short sequences in your word evolution algorithm. Clearly, then, you’ve built a strawman misrepresentation that you know isn’t true. Why? What advantage is there for you in such an obvious misrepresentation of someone else’s position, beyond an effort to make it look worse than it really is, in an attempt to downplay something you think might be true?

Beyond this, your assumption that meaningful/beneficial sequences will remain significantly clustered beyond the lowest levels of functional complexity is demonstrably false. With each step up the ladder of functional complexity, the distribution of potential targets takes on a more and more randomly uniform appearance. This particular feature has been published numerous times in mainstream literature.
So, given the reality of this situation, what on Earth makes you think that the decline in evolutionary progress, with each step up the ladder of functional complexity, will follow anything other than an exponential decay pattern? How can you possibly believe that there will only be a linear decline in evolutionary potential, when your own algorithms suggest otherwise, given the parameters that I’ve suggested for you above?

Sean Pitman
DetectingDesign.com

seanpit
March 22, 2016 at 07:06 AM PDT
Me_Think,
Me_Think: First, points which lie on the edge in smaller dimensions will come nearer in higher dimensions – they will no longer be on the edge.
Again, you seem to be assuming that because the shortest possible linear distance (“as the crow flies”) between starting point and target does in fact decrease significantly in higher dimensions, the average number of random walk steps required to find the target must also decrease in the same manner. This simply isn’t true. You’d see why if you actually sat down and did the math for the average number of random walk steps required to reach a particular target within various dimensions of sequence space (given the same ratio of very rare targets vs. non-targets). You say that the relevant math for random walks “is not relevant”, but I fail to see how. It seems to me that you simply don’t want to calculate the odds of a random walk hitting a particular target within a given number of steps…
Me_Think: Second, you don’t search the space – you search the ‘search space’. If there are in total 5,000 metabolic pathways, that is the ‘search space’ in both lower and higher dimensions. In other words, 5,000 metabolic pathways are spread across a volume in higher dimensions. So, in a unit circle, those 5,000 pathways are spread across an area of π × 1² = 3.14159. In a 10-dimensional sphere, those 5,000 pathways are spread across a volume of (2π^(10/2)/Γ(10/2))/10 = 2.55016. In 20 dimensions, the 5,000 metabolic pathways are spread across a volume of just 0.0258069; in 30 dimensions, a volume of just 0.0000219154! It takes more than 4 steps to reach the edge of a unit circle, but in higher dimensions it is not even a single step. In fact, for every dimension after 5, the volume keeps decreasing, and hence the random walk steps needed to reach the edge, or anywhere inside, decrease too.
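The unit n-ball volumes quoted above can be verified numerically. A minimal sketch using the standard formula V_n = π^(n/2)/Γ(n/2 + 1), which equals the (2π^(n/2)/Γ(n/2))/n form written in the comment:

```python
# Numerical check of the unit n-ball volumes quoted above.
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the unit ball in n dimensions: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in (2, 10, 20, 30):
    print(f"n = {n:2d}: volume = {unit_ball_volume(n):.6g}")
# n =  2: volume = 3.14159     (the unit circle's area)
# n = 10: volume = 2.55016
# n = 20: volume = 0.0258069
# n = 30: volume = 2.19154e-05
```

The volume does peak at n = 5 and shrinks toward zero thereafter, as the comment says.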
It is very difficult for me to follow your argument here. First off, given a particular radius, the space or “volume” of potential options within various dimensions obviously increases exponentially. Of course, given a set number of potential options, the “radius” would in fact decrease exponentially with each increase in dimension. However, the volume or number of potential options or “positions” within sequence space would not decrease, but would remain the same. Likewise, the number of potential targets vs. non-targets would also remain the same. This also means that even though the shortest possible distance between the starting point and the target would decrease dramatically with each increase in dimensional space, the average number of random walk steps needed for a random walker to find the target would not decrease at all – not even a little bit.

https://en.wikipedia.org/wiki/N-sphere#Other_relations

Why might this be, given such a dramatic decrease in the linear distance with each increase in dimensional space? Because: 1) the number of options doesn’t change, 2) the ratio of target vs. non-target options doesn’t change, and 3) each option or location within sequence space maintains equal odds of being hit by a random walk step. That means, of course, that our rare target will not get hit by a random walk step any faster at higher vs. lower dimensions. It’s simple math.

Remember, what counts is the average number of steps required to hit a target – not the odds of getting from one side of sequence space to the other. That’s irrelevant. You can bounce all over sequence space, from one side to the other, but that doesn’t increase the odds of a random walker hitting the target. Now, please, sit down and do the math for the average number of random walk steps it takes to reach a specific target at higher and higher dimensions. It simply doesn’t change – as my previous illustrations highlighted for you.
But you need to do the math for yourself, and show your work as to how higher dimensions could possibly reduce the average number of random walk steps needed to find a very rare target…

seanpit
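The claim at issue (that for a fixed total number of sequences and a fixed target ratio, the mean number of random walk steps to hit a target does not shrink as dimensionality grows) can be tested empirically with a toy simulation. This sketch is my own construction, not from the thread: it holds the state-space size fixed at k^n = 1024 while varying the dimension n, and measures the mean number of single-site mutation steps before a random walk first hits one fixed target state.

```python
# Toy test: with the total number of sequences held constant, does the
# mean random-walk hitting time of a single target depend on dimension?
# Parameters (state-space size 1024, 300 trials) are illustrative only.
import random

random.seed(0)

def mean_hitting_steps(n: int, k: int, trials: int = 300) -> float:
    """Mean steps for a single-site mutation walk over {0..k-1}^n
    to first reach the all-zero target, from random starting states."""
    total = 0
    for _ in range(trials):
        state = [random.randrange(k) for _ in range(n)]
        steps = 0
        while any(state):  # target is the all-zero sequence
            pos = random.randrange(n)  # pick a site to mutate
            # replace the symbol at that site with a different one
            state[pos] = (state[pos] + random.randrange(1, k)) % k
            steps += 1
        total += steps
    return total / trials

# Same state-space size 2^10 = 4^5 = 32^2 = 1024, different dimensionalities:
for n, k in [(10, 2), (5, 4), (2, 32)]:
    print(f"n = {n:2d}, k = {k:2d}: mean steps = {mean_hitting_steps(n, k):.0f}")
```

In runs of this sketch the means all come out on the order of the state-space size (~1000) regardless of n, which is at least consistent with the dimension-independence argument for this simple setup.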
March 22, 2016 at 06:48 AM PDT
seanpit: You falsely claim, here and on your website, that I drew the limit for evolutionary progress at the level of single words.

What you said was: “If I want to evolve a new 7-letter word starting with a meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” But you don’t have to swim through meaningless sequences to evolve 7-letter words. Your claim is false.

Zachriel
March 20, 2016 at 08:41 AM PDT
seanpit: Your version of “selective evolution” appears to be nothing more than a random sampling of the sequence space surrounding a starting position until a target is found.

It's your version — evolving words one letter at a time.

seanpit: “When it comes to locating small targets in large spaces, random sampling and random walks are equally ineffective.”

Given two frisbees separated on a vast landscape, both random sampling and random walks will eventually find the other frisbee. Selective evolution will not.

seanpit: And, in real life, each step up the ladder of functional complexity (i.e., minimum structural threshold requirements) will continue to require exponentially greater and greater amounts of time to realize via the Darwinian mechanism of random mutation and function-based selection.

Repeatedly handwaving in the general direction of big numbers doesn't constitute an argument.

seanpit: You’re only using these “statistical tests” at very low levels of functional complexity within the English language system.

Statistical tests of an entire library show that English texts are not scattered randomly in sequence space. Otherwise, the text would not be compressible, which it is. For instance, there are only a few tens of thousands of possible sequences that can appear between spaces. (We call them "words".)

seanpit: As far as an “unambiguous measure” of a beneficial function, I’m not sure it is possible to be clearer than the definition used for biological systems – i.e., a system that produces a survival/reproductive advantage for the organism in a given environment vs. the rest of its peers.

You're just rewording the question. How do we test the "survival" of a sequence of letters? It's your claim, after all. Let's try this: a sequence of words from Shakespeare has obviously survived. So we might use the works of Shakespeare as a dictionary, or even just a single play, say Hamlet. If a word sequence matches something in Shakespeare, then it has met the test of survival.

seanpit: If we’re not talking about biological evolution, then our conversation is over.

YOU made the claim about words. Sure, it is meant to be representative of biological evolution, but it was specifically a claim about words. As such, either you need to support it or abandon it. At this point, we're still trying to apply some rigor to your notion of "2) Select based on changes in beneficial function".

Zachriel
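The stepping-stone idea behind Zachriel's word-evolution argument can be sketched in a few lines: every intermediate must itself be a dictionary word (the "selectable" condition), and mutations are single-letter substitutions, insertions, or deletions. The tiny inline dictionary below stands in for a full word list and is purely illustrative.

```python
# Sketch of word evolution with selectable intermediates: breadth-first
# search where every step must land on a dictionary word. The dictionary
# here is a toy stand-in for a real word list.
from collections import deque
from string import ascii_lowercase

DICTIONARY = {"cat", "cot", "coat", "goat", "groat", "coats", "oat", "at"}

def mutations(word: str):
    """All single-letter substitutions, insertions, and deletions."""
    for i in range(len(word)):
        yield word[:i] + word[i + 1:]                  # deletion
        for c in ascii_lowercase:
            yield word[:i] + c + word[i + 1:]          # substitution
    for i in range(len(word) + 1):
        for c in ascii_lowercase:
            yield word[:i] + c + word[i:]              # insertion

def evolve(start: str, target: str):
    """Shortest path from start to target through dictionary words only."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for m in mutations(path[-1]):
            if m in DICTIONARY and m not in seen:
                seen.add(m)
                queue.append(path + [m])
    return None  # no selectable pathway exists in this dictionary

print(evolve("cat", "groat"))  # every intermediate step is itself a word
```

With this toy dictionary a three-mutation pathway exists (e.g. via "oat" and "goat"), so the search succeeds without ever passing through a non-word; whether real fitness landscapes are this well connected is, of course, exactly what the two sides here dispute.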
March 20, 2016 at 06:49 AM PDT