
Sean Pitman on evolution of mitochondria

[Image: mitochondria. Credit: Louisa Howard]

From Detecting Design:

Now, it is true that mitochondrial organelles are quite unique and very interesting. Unlike any other organelle, except for chloroplasts, mitochondria appear to originate only from other mitochondria. They contain some of their own DNA, which is usually, but not always, circular – like circular bacterial DNA (there are also many organisms that have linear mitochondrial chromosomes with eukaryotic-style telomeres). Mitochondria also have their own transcriptional and translational machinery to decode DNA and messenger RNA and produce proteins. Also, mitochondrial ribosomes and transfer RNA molecules are similar to those found in bacteria, as are some of the components of their membranes. In 1970, these and other similar observations led Dr. Lynn Margulis to propose an endosymbiotic origin for mitochondria in her book, Origin of Eukaryotic Cells (Margulis, 1970). However, despite having their own DNA, mitochondria do not contain anywhere near the amount of DNA needed to code for all mitochondria-specific proteins. Over 99% of the proteins needed for mitochondrial function are actually produced outside of the mitochondria themselves. The DNA needed to code for these proteins is located within the cell's nucleus, and the protein sequences are assembled in the cytoplasm of the cell before being imported into the mitochondria (Endo and Yamano, 2010). It is hypothesized that these necessary genes were once part of the mitochondrial genome, but were then transferred and incorporated into the eukaryotic nuclear DNA over time. Not surprisingly, then, none of the initial mtDNAs investigated by detailed sequencing, including animal mtDNAs, look anything like a typical bacterial genome in the way in which genes are organized and expressed (Gray, 2012).

It is interesting to note at this point that Margulis herself wasn’t really very Darwinian in her thinking. She opposed competition-oriented views of evolution and stressed the importance of symbiotic or cooperative relationships between species. She also argued that standard neo-Darwinism, which insists on the slow accrual of mutations by gene-level natural selection, “is in a complete funk” (Link).

But what about all of those similarities between mitochondria and bacteria? It would seem like these similarities should overwhelmingly support the theory of common ancestry between bacteria and mitochondria.

Well, the problem with Darwinian thinking in general is that too much emphasis is placed on the shared similarities between various creatures without sufficient consideration of the uniquely required functional differences. These required differences are what the Darwinian mechanism cannot reasonably explain beyond the lowest levels of functional complexity (or minimum structural threshold requirements). The fact of the matter is that no one has ever observed, nor has anyone ever published, a reasonable explanation for how random mutations combined with natural selection can produce any qualitatively novel protein-based biological system that requires more than a few hundred specifically arranged amino acid residues – this side of trillions upon trillions of years of time. Functionally complex systems that require a minimum of multiple proteins comprised of several thousand specifically-coded amino acid residue positions, like a rotary flagellar motility system or ATP synthase, simply don't evolve. It just doesn't happen, nor is it remotely likely to happen in what anyone would call a reasonable amount of time (Link). And, when it comes to mitochondria, there are various uniquely functional features that are required for successful symbiosis – features that bacteria simply do not have. In other words, getting a viable symbiotic relationship established to begin with isn't so simple from a purely naturalistic perspective. More.
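The combinatorial framing behind this argument can be made concrete with a short calculation (an illustrative sketch added here, not part of the original article): the number of possible amino acid sequences of length N is 20^N, so the raw space grows astronomically with length. Whether, and how densely, that space is populated with functional sequences is exactly what the commenters below dispute.

```python
# Sketch: raw size of amino-acid sequence space as a function of length.
# This only illustrates the combinatorics being appealed to; it says
# nothing by itself about how many of these sequences are functional.

def sequence_space_size(length, alphabet=20):
    """Number of distinct sequences of a given length over an alphabet."""
    return alphabet ** length

for n in (100, 300, 1000):
    size = sequence_space_size(n)
    print(f"length {n:>4}: 20^{n} ~ 10^{len(str(size)) - 1}")
```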

See also: Cells were complex even before mitochondria? Researchers: Our work demonstrates that the acquisition of mitochondria occurred late in cell evolution; the host cell already had a certain degree of complexity

and Life continues to ignore what evolution experts say (symbiosis can happen)


Comments
seanpit: You admit, now, that if the targets are in fact randomly distributed, uniformly, within sequence space that there would be no advantage for recombination vs. point mutations?

Zachriel @25: Sure (density decreases). However, the space is not random, but highly structured. If words were random letters, then word evolution wouldn't work.

seanpit: the only argument you really have left is your argument that the targets are not randomly distributed, but are lined up and clustered in related groups – at all levels of functional complexity!

Of course they're not randomly distributed. Even a simple statistical test of the English language shows letter sequences are not evenly distributed.

seanpit: while this is true at low levels of functional complexity, it becomes less and less true with each step up the ladder of functional complexity.

That's your claim. Apparently one in 10^20 is not sparse enough for you.

seanpit: At this point, the gaps between one island and the next closest island start to grow, in a linear manner, with each additional step up the ladder.

That's your claim. We can use intelligence to test this claim, because your claim isn't that we can't find the link, but that such a link doesn't exist.

seanpit: The only argument you must present, at this point, is to continue to argue, in face of the overwhelming evidence ...

You haven't provided any evidence. That requires looking at the actual landscape in question, something you haven't been able to unambiguously define.

seanpit: Ruggedness is produced by sparseness.

That is incorrect. A vast flat zero landscape with a small gentle hill in the middle is sparse, but not rugged. A random landscape of zeros and ones is rugged, but not sparse.

seanpit: There is no contradiction to my position beyond very low levels of functional complexity – as I've already explained.

Of course it directly contradicts your claim that recombination is irrelevant. "Efficient structural exploration requires intermediate nonextreme ratios between point-mutation and crossover rates."

seanpit: in biology, as in the English language system, sequences can be functionally neutral, detrimental, or beneficial relative to what came before. Surely you understand that – right?

Sure. And that's what we're asking. In order to test your claim about word evolution, you need to "Select based on changes in beneficial function," that is, you need an unambiguous function that returns the difference in what you call beneficial function. You haven't been able to do that, so your claim is essentially undefined.

Zachriel
March 15, 2016 at 02:22 PM PDT
Zachriel,
seanpit: There simply is no advantage for one random search algorithm over any other when it comes to finding rare targets within unknown locations within a sequence space of options.
That’s assuming the targets are randomly distributed, which they clearly are not.
It seems like we’re finally getting somewhere. You admit, now, that if the targets are in fact randomly distributed, uniformly, within sequence space that there would be no advantage for recombination vs. point mutations? That’s good. Now, the only argument you really have left is your argument that the targets are not randomly distributed, but are lined up and clustered in related groups - - at all levels of functional complexity! As I’ve already pointed out in fair detail in this thread, while this is true at low levels of functional complexity, it becomes less and less true with each step up the ladder of functional complexity. With each step up the ladder, the bridges of closely-spaced steppingstones within higher and higher level sequence spaces become more and more narrowed until, not too far up the ladder, they start to snap and break down completely. At this point, the gaps between one island and the next closest island start to grow, in a linear manner, with each additional step up the ladder. By the time you get to the level of 1000 saars, the minimum likely non-beneficial gap distance is over 50 mutational changes wide. At this point there is in fact a fairly uniform distribution of potential target sequences throughout sequence space. While island clusters of sequences with a given function still exist within high-level sequence space, the distribution of these island clusters relative to each other takes on the appearance of a random uniform distribution. That is why, at this point, there really is no significant advantage to recombination vs. point mutations – as you yourself would agree given such a situation. The only argument you must present, at this point, is to continue to argue, in face of the overwhelming evidence, that regardless of the level of functional complexity, even beyond the level of 1000 saars, that potentially beneficial sequences within these higher level sequence spaces are still just as lined-up and clustered as they were at very low levels of sequence space. That’s really the only argument you have left in order for the odds to work out for continued evolutionary progress, at the same rate, without an exponential stalling effect while moving up the ladder. The problem is, this notion of yours simply doesn’t hold true in real life. There simply is no web of narrow bridges of nice lined-up steppingstones at these higher levels of functional complexity. Potentially beneficial targets really do have a uniform distribution within higher levels of sequence space. That’s why your algorithms cannot work beyond very low levels of functional complexity without resorting, as you have consistently done on your website, to either intelligent manipulation or template matching to some pre-determined target sequence where selections are made in each generation without respect to changes in function. However, if you base your selection on improvements in beneficial function, your Darwinian mechanism will in fact stall out at the lowest levels of functional complexity - because sequence space does in fact take on the appearance that I describe and does not resemble what you describe.
The quoted section referred to ruggedness, not sparseness.
Ruggedness is produced by sparseness. A nice tight sequence of steppingstones where the next steppingstone in the sequence is just a bit more beneficial than the one that came before would produce a nice smooth slope to the landscape. However, a landscape that is largely flat and scattered with numerous sinkholes, comprised primarily of neutral or detrimental sequences, with only occasional clusters of beneficial sequences with sharp peaks, would produce a very “rugged” landscape. You yourself have already agreed that such a situation would in fact make point mutations and recombination mutations essentially equivalent. So, I don’t understand why you’re even arguing this point?
“Efficient structural exploration requires intermediate nonextreme ratios between point-mutation and crossover rates,” directly contradicting your position, and directly supporting ours.
There is no contradiction to my position beyond very low levels of functional complexity – as I've already explained. The bridges break down very quickly as you move up the ladder of functional complexity until the islands in sequence space really do take on a random, uniform distribution within the otherwise flat vastness of the ocean of non-beneficial options within these higher-level spaces.
Given two sequences, you claim that we have to “Select based on changes in beneficial function.” If you can’t do that, then your claim is essentially undefined.
I'm sorry, but in biology, as in the English language system, sequences can be functionally neutral, detrimental, or beneficial relative to what came before. Surely you understand that – right? I'm not sure what you don't understand about this?

seanpit
March 15, 2016 at 02:02 PM PDT
seanpit: Not true.

That's your claim.

seanpit: What are the odds that a large random leap into sequence space will happen to land on a rare target vs. the odds that a small step into sequence space will happen to land on a rare target?

It depends on the fitness landscape, something you can't seem to be able to provide.

seanpit: What are the odds of winning the lottery by changing one number in your starting sequence of numbers vs. changing many of them at the same time? Nothing!

A sixteen-letter word is rarer than a lottery ticket (one in 10^20), yet evolution works quite efficiently at finding such words.

seanpit: There simply is no advantage for one random search algorithm over any other when it comes to finding rare targets within unknown locations within a sequence space of options.

That's assuming the targets are randomly distributed, which they clearly are not.

seanpit: As the ratio of targets vs. non-targets is significantly reduced

No. The quoted section referred to ruggedness, not sparseness.

seanpit: Yan Cui, Wing Hung Wong, Erich Bornberg-Bauer, Hue Sun Chan

"Efficient structural exploration requires intermediate nonextreme ratios between point-mutation and crossover rates," directly contradicting your position, and directly supporting ours.

seanpit: Two randomly generated sequences of characters (English letters or amino acid residues) are most likely to be equally meaningless with respect to function – and therefore produce a neutral selective advantage relative to each other. Selection between two such random sequences would therefore be random as well.

Probably true, but not an unambiguous measure. Given two sequences, you claim that we have to "Select based on changes in beneficial function." If you can't do that, then your claim is essentially undefined.

Zachriel
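Zachriel's "one in 10^20" figure for sixteen-letter words can be roughed out with a quick back-of-the-envelope calculation (a sketch added here for illustration; the assumed count of sixteen-letter English words is a placeholder, not a figure from the thread):

```python
# Rough estimate of how rare 16-letter English words are among all
# 16-letter strings over a 26-letter alphabet.

ALPHABET = 26
LENGTH = 16

total_strings = ALPHABET ** LENGTH   # all possible 16-letter strings
assumed_word_count = 5_000           # assumption: order-of-magnitude guess
                                     # for the number of 16-letter words

fraction = assumed_word_count / total_strings
print(f"total 16-letter strings: {total_strings:.3e}")
print(f"fraction that are words (assuming {assumed_word_count} words): "
      f"about 1 in {1 / fraction:.1e}")
```

With these placeholder numbers the ratio comes out around 1 in 10^19, consistent in order of magnitude with the figure quoted in the comment.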
March 15, 2016 at 12:56 PM PDT
Zachriel,
We’re talking about the wordscape. Recombination makes a significant difference in the behavior of evolutionary search of the wordscape.
Not true. Recombination makes no significant difference in finding target sequences more effectively (compared to point mutations) when targets are very rare relative to non-targets and there is a uniform distribution of targets within sequence space (i.e., a ratio of targets vs. non-targets of 1 in 1e500 sequences or so).
That’s your claim. Now how do you intend to support it?
Do the math yourself if you don't believe me. What are the odds that a large random leap into sequence space will happen to land on a rare target vs. the odds that a small step into sequence space will happen to land on a rare target? It's essentially the same odds of success. What are the odds of winning the lottery by changing one number in your starting sequence of numbers vs. changing many of them at the same time? There's no difference! There simply is no advantage for one random search algorithm over any other when it comes to finding rare targets within unknown locations within a sequence space of options. Imagine, if you will, a large flat square field measuring 1000 miles on each side. Now, say that there are 3 Frisbees measuring 1 foot in diameter randomly distributed out there somewhere in this large field. Now, you get 10 blindfolded men to search for these Frisbees. They have the option of taking small steps of 6 inches per step, larger steps of 3 feet per step, even larger jumps of 10 feet per jump, or even huge jumps of one mile per random jump. Which type of random walk, using small or larger steps, will be most effective in finding the Frisbees in this field? My conclusions here are also backed up by a paper by Cui et al. published in PNAS in 2002. Toward the end, Cui argues that:
"The benefit provided by nonhomologous recombinations [compared to point mutations] decreases as the ruggedness of the fitness landscape increases; and a very rugged landscape provides only marginal benefit compared to a less rugged landscape… When the landscape is rugged, the number of sequences explored by point mutations alone is comparable to that explored by point mutations plus [non-homologous] crossovers. This is because point mutations are more effective in finding a low-mortality area from an already well populated spot nearby, whereas when the landscape is rugged many crossover offspring are likely to end up at high-mortality spots." Yan Cui, Wing Hung Wong, Erich Bornberg-Bauer, Hue Sun Chan, "Recombinatoric exploration of novel folded structures: A heteropolymer-based model of protein evolutionary landscapes," PNAS, Vol. 99, Issue 2, 809–814, January 22, 2002.
So, there you have it. As the ratio of targets vs. non-targets is significantly reduced, the potential benefits of recombination mutations are also significantly reduced – to the point where there simply is no significant advantage between recombination vs. point mutations. Again, sit down and do the math for yourself. If you don't agree, actually show me why the math favors larger steps in sequence space vs. smaller steps… beyond very low levels of functional complexity.
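The Frisbee-field analogy above can be turned into a toy Monte Carlo check (a sketch added here for illustration; the scaled-down field size, target count, and step lengths are placeholder values, not the comment's figures). For uniformly scattered, very sparse targets, once the step length is at least comparable to a target's size, the mean number of steps to the first hit is governed mainly by target density rather than step length, which is the statistical point the comment appeals to; whether real fitness landscapes are in fact uniform in this way is exactly what the two commenters dispute.

```python
# Toy random-walk search for sparse, uniformly scattered circular targets
# on a square field with wrap-around edges. Compares short vs. long steps.
import math
import random

FIELD = 1000.0        # side length of the square field (arbitrary units)
N_TARGETS = 50        # number of targets
TARGET_R = 2.0        # target radius
MAX_STEPS = 200_000   # give up after this many steps
TRIALS = 20

def run_walk(step_len, targets, rng):
    """Random walk until the walker lands within TARGET_R of any target."""
    x, y = rng.uniform(0, FIELD), rng.uniform(0, FIELD)
    for step in range(1, MAX_STEPS + 1):
        angle = rng.uniform(0, 2 * math.pi)
        x = (x + step_len * math.cos(angle)) % FIELD
        y = (y + step_len * math.sin(angle)) % FIELD
        for tx, ty in targets:
            if (x - tx) ** 2 + (y - ty) ** 2 <= TARGET_R ** 2:
                return step
    return MAX_STEPS

rng = random.Random(0)
targets = [(rng.uniform(0, FIELD), rng.uniform(0, FIELD)) for _ in range(N_TARGETS)]

for step_len in (5.0, 50.0, 500.0):
    steps = [run_walk(step_len, targets, random.Random(trial)) for trial in range(TRIALS)]
    print(f"step length {step_len:>6}: mean steps to first hit ~ {sum(steps) / TRIALS:.0f}")
```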
That’s not an unambiguous measure of fitness [reproductive fitness in a given environment]. Given two arbitrary sequences, you have to provide their relative fitness as you made explicit in your statement to “Select based on changes in beneficial function”.
Two randomly generated sequences of characters (English letters or amino acid residues) are most likely to be equally meaningless with respect to function – and therefore produce a neutral selective advantage relative to each other. Selection between two such random sequences would therefore be random as well. As I explain on my website, what's the meaningful difference between "quiziligook" and "quiziliguck"? Nothing – right? Therefore, selection between two such sequences would not show any preference, but would be entirely random.

seanpit
March 15, 2016 at 11:11 AM PDT
Me_Think,
This particular RNA enzyme happens to have 129 neighbors, and because we can compute their shapes, we can determine that there are forty-six new shapes in this neighborhood. That’s the number of shapes evolution can explore without genotype networks. And with them? If we only step to the text’s neutral neighbors—those with the same hammerhead shape—and determine the shape of all their neighbors, we already find 962 new shapes. And if we just walk one step further, to those neighbors’ neutral neighbors, we find 1,752 new shapes. Just two steps along this ribozyme’s genotype network, we can access almost forty times more shapes than in its immediate vicinity. The genotype network of the hammerhead shape of course extends much further than just two steps, and it has more than 1e19 members
Unfortunately, this doesn't help solve the problem. Why not? Because it doesn't matter that the starting point in hyperdimensional sequence space is surrounded by large numbers of neighbors – or that these next-door neighbors are in turn surrounded by large numbers of neighbors (so that within a very short Levenshtein distance the total number of neighbors is absolutely enormous). None of that helps find a target sequence any faster via a random search algorithm given a particular ratio of uniformly distributed targets among non-target options. It doesn't change the fact that with each linear increase in the Levenshtein distance between the starting point and the target, the average time to success increases exponentially. Do the math. Set up a program and see that I'm correct here. The basic problem, you understand, is that as the minimum Levenshtein distance increases linearly, the total number of possibilities increases exponentially. That means, of course, that the total number of non-targets that could be searched increases at an exponentially greater rate compared to the potential target sequences. And, quite clearly, that means that the average time to successfully finding the target sequences, via any kind of random walk, increases exponentially. I'm sorry, but you haven't solved the problem here – not by a long shot.

seanpit
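For readers unfamiliar with the term, Levenshtein distance is the minimum number of single-character insertions, deletions, or substitutions needed to turn one string into another. The sketch below (added for illustration; the seed word and alphabet are placeholders) implements the standard dynamic-programming distance and counts how many distinct strings lie within edit distance 1 and 2 of a short word, showing how quickly the neighborhood, and with it the surrounding space, balloons as the radius grows.

```python
# Levenshtein (edit) distance and edit-distance neighborhoods.
import string

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def single_edit_neighbors(word, alphabet=string.ascii_lowercase):
    """All distinct strings one substitution, insertion, or deletion away."""
    out = set()
    for i in range(len(word)):
        out.add(word[:i] + word[i + 1:])              # deletions
        for c in alphabet:
            out.add(word[:i] + c + word[i + 1:])      # substitutions
    for i in range(len(word) + 1):
        for c in alphabet:
            out.add(word[:i] + c + word[i:])          # insertions
    out.discard(word)
    return out

seed = "cat"  # placeholder example word
ball1 = single_edit_neighbors(seed)
ball2 = set(ball1)
for w in ball1:
    ball2 |= single_edit_neighbors(w)
ball2.discard(seed)

print(f"distance from '{seed}' to 'cart':", levenshtein(seed, "cart"))
print(f"strings within edit distance 1 of '{seed}':", len(ball1))
print(f"strings within edit distance 2 of '{seed}':", len(ball2))
```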
March 15, 2016 at 10:35 AM PDT
seanpit: If the landscape were set up so that the potential targets were set up in nice little rows of closely-spaced steppingstones, you'd be right.

We're talking about the wordscape. Recombination makes a significant difference in the behavior of evolutionary search of the wordscape.

seanpit: So, at this point, at higher levels of functional complexity, do recombination mutations provide a substantive advantage over point mutations? – so that there is no exponential decline in the success rate over a given span of time? The answer to that question is no. There simply is no significant statistical advantage regardless of the type of random mutations employed.

That's your claim. Now how do you intend to support it?

seanpit: As I've already mentioned, for Darwinian evolution the answer is simple – an increase in relative reproductive fitness compared to one's peers.

That's not an unambiguous measure of fitness. Given two arbitrary sequences, you have to provide their relative fitness as you made explicit in your statement to "Select based on changes in beneficial function". It doesn't seem you can do this, so your claim is unsupported.

Zachriel
March 15, 2016 at 06:48 AM PDT
seanpit @ 41
This makes absolutely no difference for a random search algorithm as compared to a two or three dimensional search. The odds of success still decrease exponentially as the minimum Levenshtein distance increases linearly. Beyond this, none of the papers you reference explain how random search algorithms within hyperdimensional space can cover a linearly expanding Levenshtein distance between strings or character sequences without an exponential increase in required time…
This has been discussed long ago. Reposting from an old thread: Imagine a solution circle (the circle within which the solution exists) of radius 10 cm inside a 100 cm square search space. The area which needs to be searched for the solution is π × 10^2 = 314.15. The total search area is 100 × 100 = 10,000. The % area to be searched is (314.15/10,000) × 100 = 3.14%. In 3 dimensions, the search volume becomes (4/3) × π × 10^3, while the space to search is now a cube, 100^3. Thus the % of volume to be searched falls to just 4,188.79/100^3 = 0.41% only. In general, the hypervolume of a sphere of dimension d and radius r is π^(d/2) r^d / Γ(d/2 + 1), while the hypervolume of a cube of side s is s^d. At 10 dimensions, the volume to search reduces to just 0.000015608%. But in nature, the actual search area is incredibly small. As Wagner points out in Chapter Six of his book (Arrival of the Fittest):
In the number of dimensions where our circuit library exists—get ready for this—the sphere contains neither 0.1 percent, 0.01 percent, nor 0.001 percent. It contains less than one 10^-100th of the library
The library that Wagner talks about is based on actual metabolic pathways. There are 5,500 metabolic pathways. You can explore all the pathways at biocyc.org. These are represented by the hypothetical library (just like the landscapes in the Dembski and Axe papers). The library is an analogy – see the Chapter Three notes: "This analogy is inspired by a famous short story of the Argentine author Jorge Luis Borges entitled 'The Library of Babel' (Spanish original: 'La biblioteca de Babel'), published in English translation in Borges (1962). The idea behind this short story, however, predates Borges. It has been used by many other authors, including Umberto Eco and Daniel Dennett." Here's another example of how the search involving the RNA enzyme called the hammerhead ribozyme is made easy:
This particular RNA enzyme happens to have 129 neighbors, and because we can compute their shapes, we can determine that there are forty-six new shapes in this neighborhood. That's the number of shapes evolution can explore without genotype networks. And with them? If we only step to the text's neutral neighbors—those with the same hammerhead shape—and determine the shape of all their neighbors, we already find 962 new shapes. And if we just walk one step further, to those neighbors' neutral neighbors, we find 1,752 new shapes. Just two steps along this ribozyme's genotype network, we can access almost forty times more shapes than in its immediate vicinity. The genotype network of the hammerhead shape of course extends much further than just two steps, and it has more than 10^19 members
P.S.: Dimensions are mathematical representations of the structure/process features under study. They have nothing to do with spatial dimensions. I can represent the search hills in a search landscape in "height and coordinate dimensions" too. Note that polytopes naturally form networks (the hypercube is a family of polytopes).

Me_Think
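Me_Think's two- and three-dimensional figures are easy to check, and the same formula extends to any dimension (a sketch added here for illustration; it reproduces the 3.14% and 0.41% figures above, while the exact value quoted for ten dimensions depends on the radius-to-side ratio assumed, so treat the higher-dimensional output as illustrating the trend rather than confirming a specific figure):

```python
# Fraction of a d-dimensional hypercube (side s) occupied by a hypersphere
# of radius r, using V_sphere = pi^(d/2) * r^d / Gamma(d/2 + 1).
import math

def sphere_fraction_of_cube(d, r=10.0, s=100.0):
    """Hypersphere volume divided by hypercube volume in d dimensions."""
    v_sphere = math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)
    v_cube = s ** d
    return v_sphere / v_cube

for d in (2, 3, 5, 10):
    print(f"d = {d:>2}: sphere occupies {sphere_fraction_of_cube(d) * 100:.3e} % of the cube")
```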
March 14, 2016 at 07:53 PM PDT
Zachriel:
Then let’s avoid getting sidetracked on that issue again. Evolution, for our purposes, is defined as a population that undergoes random point-mutations and random recombination, with selection for properly spelled words.
That is artificial selection, which is very different from natural selection. So thank you for admitting that your model doesn't apply to what Sean is talking about.

Virgil Cain
March 14, 2016 at 06:47 PM PDT
Zachriel,
seanpit: In short, please do explain to me how completely random recombination mutations can somehow maintain the odds of success, without a significant decline in the success rate, for a population maintained at a constant size – all while the ratio of targets vs. non-targets decreases at an exponential rate?
It depends on the landscape. We know that for words evolution with recombination works much differently than evolution without recombination.
If the landscape were set up so that the potential targets were set up in nice little rows of closely-spaced steppingstones, you'd be right. If the Levenshtein distances were consistently small, it would make a big difference. However, if the landscape were one where the targets had an apparently random, essentially uniform distribution, you'd be wrong. And this is what the landscape of functional sequence space looks like, more and more, with each step up the ladder of functional complexity. So, at this point, at higher levels of functional complexity, do recombination mutations provide a substantive advantage over point mutations? – so that there is no exponential decline in the success rate over a given span of time? The answer to that question is no. There simply is no significant statistical advantage regardless of the type of random mutations employed. Statistically, the only difference is that point mutations cover regions closer to the starting point compared to recombination or indel mutations that take larger steps into the surrounding sequence space. That's the only real statistical difference. What this means is that if the targets are close to home, then they will be found more often by point mutations as compared to recombination mutations involving longer sequences. However, once you start talking about longer and longer Levenshtein distances to the next closest target, the odds of success start to become more and more similar between random walks based on point mutations and recombination mutations. Pretty soon, the ratio between targets and non-targets gets so low that there really is no statistical advantage between various kinds of random walks or search algorithms when it comes to finding such rare targets that are randomly distributed in an essentially uniform manner within a very large search space. It simply doesn't matter anymore, statistically, if you take large steps or small steps. The odds of success remain essentially the same.
seanpit: 2) Select based on changes in beneficial function
How do you intend to unambiguously define “beneficial function” for longer sequences of letters so as to test your claim?
As I've already mentioned, for Darwinian evolution the answer is simple – an increase in relative reproductive fitness compared to one's peers. The same could be true for any other system of functional information where a beneficial goal might be defined, where any improvement in achieving this goal would give an advantage and therefore be preferentially selected to populate the next generation. If you aren't modeling some kind of functional/meaningful advantage in your algorithms, you're not modeling the Darwinian mechanism. It's really as simple as that. And, so far, your algorithms are actually based on intelligent design or template matching – not the Darwinian mechanism where selection is only based on improved beneficial function.

seanpit
March 14, 2016 at 05:31 PM PDT
seanpit: In short, please do explain to me how completely random recombination mutations can somehow maintain the odds of success, without a significant decline in the success rate, for a population maintained at a constant size – all while the ratio of targets vs. non-targets decreases at an exponential rate?

It depends on the landscape. We know that for words evolution with recombination works much differently than evolution without recombination.

seanpit: 2) Select based on changes in beneficial function

How do you intend to unambiguously define "beneficial function" for longer sequences of letters so as to test your claim?

Zachriel
March 14, 2016 at 04:51 PM PDT
Zachriel,

The Odds of Success for Recombination Mutations: In short, please do explain to me how completely random recombination mutations can somehow maintain the odds of success, without a significant decline in the success rate, for a population maintained at a constant size – all while the ratio of targets vs. non-targets decreases at an exponential rate? Please explain the math behind this notion of yours and how it can actually work – given that the minimum Levenshtein distance between your starting point and the next closest target is getting longer and longer. Really, please explain the basis for your assumption of steady odds here and the lack of any significant corresponding increase in non-beneficial options. I'd be most interested. (Hint: To understand the math more easily, try reducing the steady-state population size to two or three and the reproductive rate to two per individual per generation. While increasing the population size and/or reproductive rate helps for a while, there is only so much that this can achieve before even very large populations with very high reproductive rates can no longer keep up with an additional step up the ladder of functional complexity. The pattern of an exponential increase in the time required will set in at this point. Not only that, but populations that have low reproductive rates and higher mutation rates will actually devolve from their starting-point fitness level. They will not even stay neutral, much less "evolve" higher-level systems of function over time.)

seanpit
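The "do the math" challenge in this comment can be sketched as a simple expected-waiting-time calculation (added here for illustration; the sequence length, per-site mutation rate, alphabet size, population size, and offspring count are placeholder assumptions, and the model assumes a single fixed target with no selectable intermediates – the "neutral gap" scenario seanpit describes – not a model of real biology). Under those assumptions the expected wait does grow roughly exponentially with the width of the gap:

```python
# Expected generations to cross a "neutral gap" of d specific changes,
# assuming a fixed-size population, no selectable intermediates, and
# independent per-site mutation. All parameters are illustrative.

L = 20          # sequence length
ALPHABET = 4    # characters per site
MU = 0.01       # per-site mutation probability per offspring
POP = 10        # steady-state population size
OFFSPRING = 2   # offspring per individual per generation

def p_offspring_hits_target(d):
    """Probability one offspring makes exactly the d required changes
    (each to the one correct character) and nothing else."""
    p_right = (MU / (ALPHABET - 1)) ** d      # the d sites change correctly
    p_rest_unchanged = (1 - MU) ** (L - d)    # remaining sites stay put
    return p_right * p_rest_unchanged

for d in range(1, 8):
    p = p_offspring_hits_target(d)
    births_per_gen = POP * OFFSPRING
    p_gen = 1 - (1 - p) ** births_per_gen     # some offspring hits it this generation
    expected_gens = 1 / p_gen
    print(f"gap width {d}: expected wait ~ {expected_gens:.3e} generations")
```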
March 14, 2016 at 04:29 PM PDT
Zachriel,
That is demonstrably incorrect. It [recombination mutations] makes a significant difference with word-evolution. You are simply wrong.
Where's your "demonstration" beyond very low levels of functional complexity? I'm telling you that the odds are not significantly improved by recombination mutations when it comes to finding potential targets at higher levels of functional complexity (i.e., when it comes to finding functional sequences that require at least 1000 specifically arranged characters to work in a selectably beneficial manner). The odds of success for recombination mutations at such higher levels are essentially the same as they are for point mutations alone. In other words, recombination mutations don't solve the problem of an exponential increase in the time required for success as one moves up the ladder of functional complexity. The very same problem remains regardless of the types of random mutations you're using. You simply have not "demonstrably solved" this problem! – not even close! If anything, your Phrasenation program proves my point here! Now, what you have very clearly demonstrated on your website is that evolution works just fine if you throw in a little intelligent design or a bit of template matching to help out your recombination mutations – beyond very low levels of functional complexity. That makes it very easy to get across very large gaps (Levenshtein distances) of non-beneficial function in short order. Of course, if you just stick with the Darwinian mechanism, things don't work so well beyond very low levels of functional complexity – regardless of the types of random mutations you decide to use in your search algorithms. As far as your other repeated questions, I've already responded in some detail. Why not go back and read what I wrote when you asked these questions the first time? Now, why not substantively address the questions I've presented to you? – instead of simply ignoring the main points I've presented regarding why your arguments and algorithms don't truly reflect the limitations of the Darwinian mechanism?

seanpit
March 14, 2016 at 04:01 PM PDT
seanpit: As I've already explained to you in fair detail (and even greater detail on my website), while genetic recombination is real ...

Great! Then let's avoid getting sidetracked on that issue again. Evolution, for our purposes, is defined as a population that undergoes random point-mutations and random recombination, with selection for properly spelled words.

seanpit: it doesn't help you solve the problem if you are using truly mindless random mutations and function-based selection. The odds of success are essentially unchanged regardless of the type of mutations you use.

That is demonstrably incorrect. It makes a significant difference with word-evolution. You are simply wrong.

---------------------

In any case, let's return to this:

seanpit: 1) Generate truly random mutations (point, indel, recombination, etc) that aren't limited to determining and clipping out intact words or select "phrases" (something that doesn't happen in real life).

Been there, done that.

seanpit: 2) Select based on changes in beneficial function – not template-matching which doesn't happen in real life.

How do you intend to do that? It's your claim, after all, you are trying to prove.

seanpit: 3) Have a reasonable maximum steady state population size with a reasonable reproductive rate and mutation rate. In other words, old sequences must "die off" as fast as new ones are "born" so that the overall population size remains the same.

Been there, done that.

seanpit: If you actually model how the Darwinian mechanism really works, you will quickly discover that your neat little pathways of shortly-spaced steppingstones break apart and become widely separated very quickly as you move up the ladder of functional complexity beyond your short little sequences.

How do you intend to do that? It's your claim, after all, you are trying to prove.

Zachriel
March 14, 2016 at 01:08 PM PDT
Virgil,
Sean, I am of the type that says organisms were designed to evolve and evolve by design. Meaning most genetic changes are directed by the organisms’ programming in response to some cue(s), environmental or internal.
The problem here is that there are very clear limitations to how much organisms can change in response to environmental changes via Mendelian variation or other forms of low-level evolution. However, Darwin argued along the lines of Zachriel that there are no such limitations to what the Darwinian mechanism can achieve. The problem, of course, is that the Darwinian mechanism is very clearly limited to very, very low levels of functional complexity. So, whatever design there may be that allows for variation, such design was very limited and does not allow for the Darwinian story of origins or for the development, within gene pools, of qualitatively novel functional systems (beyond very low levels of functional complexity) that were not already there – pre-created within the original parental gene pool.

seanpit
March 14, 2016 at 11:06 AM PDT
Sean, I am of the type that says organisms were designed to evolve and evolve by design. Meaning most genetic changes are directed by the organisms' programming in response to some cue(s), environmental or internal.

Virgil Cain
March 14, 2016 at 10:38 AM PDT
Virgil Cain,
And as I have explained to you no one can say if recombination is a happenstance occurrence. Most likely it is an intelligently designed feature that allows for genetic diversity in a short time.
This depends upon what type of genetic "recombination" you're talking about. If you're talking about meiotic recombination, it's true that this particular type of genetic recombination is highly constrained and controlled and only cuts and pastes in very specific pre-defined locations – and doesn't produce novel functionality that wasn't already there within the gene pool of options. Rather, this form of Mendelian variation simply allows for expression of various forms of pre-existing functionality that were already there within the gene pool of functional options for a particular location within the genome. However, there are in fact other far less common ways that genetic sequences can be "cut and pasted" together that are not so constrained, but are truly random in both the cutting and the pasting. Of course, the reason that I'm not making a big deal about Zachriel's use of such recombination methods is because, statistically, it really doesn't matter which method of "recombination" you're talking about when it comes to the problem of crossing larger and larger Levenshtein distances within sequence space. The odds of success remain essentially the same regardless of what types of truly random mutations are being considered.

seanpit
March 14, 2016 at 10:35 AM PDT
In this line, an interesting and relevant paper was once published by Lenski et al., entitled "The Evolutionary Origin of Complex Features," in the May 2003 issue of Nature. In this particular experiment the researchers studied 50 different populations, or "genomes," collectively comprised of 3,600 individuals. Each individual began with 50 lines of code and no ability to perform "logic operations." Those that evolved the ability to perform logic operations were rewarded, and the rewards were larger for operations that were "more complex." After 15,873 generations, 23 of the genomes yielded descendants capable of carrying out the most complex logic operation: taking two inputs and determining if they are equivalent (the "EQU" function). The lines of code that made up these individuals ranged from 49 to 356 instructions long. The ultimately dominant type of individual contained 83 instructions and the ability to perform all nine logic functions that allowed it to gain more computer time. In principle, 16 mutations (recombinations), coupled with the three instructions that were present in the original digital ancestor, could have combined to produce an organism that was able to perform the complex equivalence operation. In this particular experiment, the "recombinations" of code were limited to particular portions or lines of code, without random cuts anywhere within the lines of code or random pasting of lines of code just anywhere within the evolving sequences – but only in particular locations. Still, even at this relatively low level of functional complexity (requiring no more than 16 mutations to achieve success), evolution of novel function didn't happen.
"At the other extreme, 50 populations evolved in an environment where only EQU was rewarded, and no simpler function yielded energy. We expected that EQU would evolve much less often because selection would not preserve the simpler functions that provide foundations to build more complex features. Indeed, none of these populations evolved EQU, a highly significant difference from the fraction that did so in the reward-all environment (P = 4.3 x 10e-9, Fisher's exact test). However, these populations tested more genotypes, on average, than did those in the reward-all environment… because they tended to have smaller genomes, faster generations, and thus turn over more quickly. However, all populations explored only a tiny fraction of the total genotypic space. Given the ancestral genome of length 50 and 26 possible instructions at each site, there are ~5.6 x 10e70 genotypes; and even this number underestimates the genotypic space because length evolves."
Isn't that just fascinating? Even within a relatively small sequence space of just ~5.6 × 10^70 sequences (equivalent to no more than a 50-character sentence in English), evolution stalled out with a gap distance of just 16 "neutral" mutations wide. When the intermediate steppingstones were no longer defined as "beneficial," the relatively small neutral gap that was created successfully blocked the evolution of the EQU function (despite the hyperdimensionality of the sequence space here). Now, isn't this consistent with my predictions? This experiment was only successful when the intelligent designers were capable of defining what intermediate closely-spaced sequences or functions were "beneficial" for their evolving "organisms" (as in Zachriel's "Phrasenation" algorithm) and exactly how the random mutations would be able to cut and paste lines of code. Obviously, if enough sequences or functions are defined as beneficial, producing a short average Levenshtein distance between beneficial islands or steppingstones within sequence space, then certainly this situation will result in rapid evolution – as we saw here in Lenski's 2003 demonstration. However, when neutral gaps grow in a linear manner with each step up the ladder of functional complexity, this quickly becomes a real problem for evolutionary progress – as many of Lenski's other evolution "demonstrations" have shown over the years since (when it comes to both the evolution of functionally novel computer code and novel functionality in real organisms in real life).

seanpit
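The genotype-space figure quoted from the Lenski paper is easy to verify (a one-off check added here for illustration): 26 possible instructions at each of 50 sites gives 26^50 genotypes.

```python
# Quick check of the genotype-space size quoted from Lenski et al. (2003):
# 26 possible instructions at each of 50 sites.
size = 26 ** 50
print(f"26^50 = {size:.2e}")   # ~5.6e70, matching the figure in the quote
```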
March 14, 2016 at 10:27 AM PDT
Zachriel:
Recombination is an observed mechanism of biological evolution.
And as I have explained to you, no one can say if recombination is a happenstance occurrence. Most likely it is an intelligently designed feature that allows for genetic diversity in a short time. If you ignore that, then you are arguing from ignorance, which, as all observations and experiences have shown, is your favorite place to argue from.

Virgil Cain
March 14, 2016 at 10:08 AM PDT
Zachriel,
Recombination is an observed mechanism of biological evolution. If you ignore recombination, then your claim is irrelevant as an argument against evolution. However, you are correct that point-mutation alone cannot explain the evolution of complex structures.
As I've already explained to you in fair detail (and even greater detail on my website), while genetic recombination is real, it doesn't help you solve the problem if you are using truly mindless random mutations and function-based selection. The odds of success are essentially unchanged regardless of the type of mutations you use. The use of recombination mutations, or whatever other type of truly random mutation you wish to consider, simply doesn't solve the exponential problem at hand. How can this be true? Why don't recombination mutations help to solve the problem? Again, the ability to produce functionally-beneficial recombinations at higher and higher levels of functional complexity is based on the odds of several things, to include 1) the proper sequence existing pre-formed within your "pool" of options and 2) the proper cutting of just the right sequence and then pasting it into just the right location within a large pool of non-beneficial options. The problem for the Darwinian mechanism is that these problems become exponentially more and more problematic with each step up the ladder of functional complexity. You don't recognize these as problems on your website because you are easily able to circumvent them by intelligent design. You intelligently set up your "pool" of options and then intelligently pick just the right sequences to undergo recombination at just the right time and location. Of course it's great that you can demonstrate the effectiveness of intelligent design here, but what does this have to do with the Darwinian mechanism of truly random unguided mutations/recombinations and function-based selection where each step is sequentially beneficial compared to what came before? Your "Phrasenation" program, in particular, demonstrates my point here quite nicely. The steppingstones it selects are clearly not sequentially beneficial when it comes to their meaning/functionality. You see, without the input of your own intelligent design, a mindless search algorithm simply can't do the job beyond very, very low levels of functional complexity this side of a practical eternity of time. Why not? Because the number of non-beneficial options expands at an exponentially greater rate compared to those options that would actually be functionally beneficial – with each step up the ladder of functional complexity. While there might be the potential for finding a successful path, this potential becomes exponentially less and less likely with each linear increase in the minimum likely Levenshtein distance between the starting point and the next closest potentially beneficial target in sequence space. And there is nothing in the literature, absolutely nothing, that substantively addresses this problem for your Darwinian mechanism.

seanpit
March 14, 2016 at 09:53 AM PDT
Me_Think,
Representing biological landscape search in terms of Levenshtein distances is pretty archaic. Searches are multidimensional, which reduces the search space drastically.
While you're correct in pointing out that the "sequence spaces" in question here are hyperdimensional, you're mistaken to think that this reduces the distances or increases the odds of successfully finding novel beneficial target sequences over linearly expanding Levenshtein distances. You see, a linear expansion of a Levenshtein distance between a starting point and a potential target sequence will in fact decrease the odds of successfully finding this target sequence in an exponential manner – regardless of the fact that the space is hyperdimensional. This makes absolutely no difference for a random search algorithm as compared to a two- or three-dimensional search. The odds of success still decrease exponentially as the minimum Levenshtein distance increases linearly. Beyond this, none of the papers you reference explain how random search algorithms within hyperdimensional space can cover a linearly expanding Levenshtein distance between strings or character sequences without an exponential increase in required time… However, if you do have a real solution to this problem, by all means let me know. I'd be most interested indeed!

seanpit
March 14, 2016 at 09:34 AM PDT
seanpit: Just because the Levenshtein distance between one word and the next closest might be 1, that doesn't mean that there is no random walk involved to get across even this short Levenshtein distance – i.e., that you're guaranteed to "keep your feet dry".

You draw diagrams showing oceans, you use the word ocean. The question you raised was whether there are evolutionary steps connecting words. It's right up there @1.

seanpit: Beyond this, there is no 16-letter word (or longer), that I know of, that can be built upon a sequence of words that are each separated by a Levenshtein distance of 1.

Recombination is an observed mechanism of biological evolution. If you ignore recombination, then your claim is irrelevant as an argument against evolution. However, you are correct that point-mutation alone cannot explain the evolution of complex structures.

Zachriel
March 14, 2016 at 06:07 AM PDT
Representing biological landscape search in terms of Levenshtein distances is pretty archaic. Searches are multidimensional, which reduces the search space drastically. You can get more information at http://www.ieu.uzh.ch/wagner/publications.html and a MATLAB package for calculating hyper-dimensional searches can be found here: http://www.ieu.uzh.ch/wagner/publications-software.html

Me_Think
March 13, 2016 at 08:31 PM PDT
Zachriel:
There is no swimming with words. You can keep your feet dry from one-letter words up to 16-letter words and more. (The odds of randomly stumbling across a 16-letter word is about 1 in 10^20.)
Again, there is a "swim" or "random walk" even within sequence spaces where defined English words are the "targets" – and this is without even considering the concept of sequentially improved beneficial function. Just because the Levenshtein distance between one word and the next closest might be 1, that doesn't mean that there is no random walk involved to get across even this short Levenshtein distance – i.e., that you're guaranteed to "keep your feet dry". That notion is simply mistaken – as you very well know. Beyond this, there is no 16-letter word (or longer), that I know of, that can be built upon a sequence of words that are each separated by a Levenshtein distance of 1. In order to cross these Levenshtein distance gaps that are greater than 1, you resort to "recombination" or "concatenation" of pre-established sequences in your "pool" of options. http://www.zachriel.com/mutagenation/Pudding.htm This creates a problem as you move up the ladder of functional complexity where the ratio between targets and non-targets gets exponentially smaller and smaller – as I've already explained above and in significant detail on my own website: http://www.detectingdesign.com/flagellum.html#Calculation You don't recognize this problem because your algorithms are not based on sequentially beneficial function (and neither are your manually-generated sequences). There is also an additional problem with your manually-generated sequences that is common for evolutionists in general. That is, you assume that a sequence that would help you cross a larger Levenshtein distance already exists within your "gene pool" of options without actually considering the odds that such a sequence would actually exist in a particular pool just when it might be needed. You also don't consider the odds of precisely cutting out this sequence from its original location and then precisely pasting it into its new position to create the larger, more functionally complex sequence. This is another fundamental problem with your thought experiments along these lines as detailed on your own website. You argue:
We know that a path exists between the single-letter word "O" and "Beware a war of words ere you err", so it is only a matter of determining how many mutations are required to discover that path.
It’s not that simple. You see, it would be quite simple if you had an infinitely large “gene pool”. Unfortunately, however, your gene pool is not only finite, but quite small relatively speaking (as is the case in real Darwinian style evolution among living organisms). There are only so many sequences your small “gene pool” can store at any given point in time. How does it know which sequences to store? – sequences which might result in longer and more and more functionally complex sequences in the future? Outside of intelligent design, there is no way to know this. That is why you resort to “template matching” algorithms, rather than function-based selection, to keep your gene pool of options on the right path toward your pre-determined goals or “templates”.
Hold it now. Your claim was that there were no single-step pathways. If that is correct, intelligence won’t help.
I never made this claim. I never said that there were no possible single-step pathways at very high levels of functional complexity. What I said is that there are no series of closely-spaced steppingstones within sequence spaces beyond very low levels of functional complexity (i.e., steppingstone pathways separated by very short Levenshtein distances of just 1 or 2). The Levenshtein distances between potential "targets" within sequence spaces at higher and higher levels of functional complexity get longer and longer with each step up the ladder. At this point one is required to make large leaps across sequence space – leaps that cover large Levenshtein distances. Of course, it is quite possible to make large leaps into sequence space, covering large Levenshtein distances, with a single bound, and be successful in landing on a target sequence (as you have demonstrated quite nicely on your website). This is always possible. However, such leaps become exponentially less and less likely to be successful in one single step using a mindless search algorithm – with each step up the ladder of functional complexity. Intelligent design, on the other hand, can find such sequential single-step pathways where each step is indeed sequentially beneficial. You see, intelligent design can plan for the future – natural selection cannot. That is why intelligent design is able to set up systems in order to cross gaps in sequence space (as you have done with your illustrations on your own website) that natural selection cannot cross within a reasonable amount of time. By intelligent design I can pre-form two sequences that I know will, when merged together in just the right way, produce a larger, much more functionally complex system. Natural selection cannot plan ahead like this. Natural selection does not know what might work, ahead of time, if various sequences were concatenated this way or that way. This is the advantage that intelligent design has over natural selection… and it's a key advantage. What you have to demonstrate, now, is that a mindless search algorithm, like the Darwinian mechanism of random mutations and function-based selection, is actually up to the job of crossing large Levenshtein distances at these higher levels of functional complexity in a reasonable amount of time – like intelligent design can do. This you have yet to achieve because your algorithms don't use function-based selection.
How do you intend to do that? It’s your claim, after all, you are trying to prove.
I don't know how many times I have to refer you to my website before you'll actually read what I wrote and at least try to substantively address the problems for the Darwinian mechanism that I've listed: http://www.detectingdesign.com/flagellum.html#Calculation In short, it is quite clear that, beyond extremely low levels of functional complexity, there are no "networks" of potentially beneficial target steppingstones like you imagine, where each steppingstone is at a very short Levenshtein distance from the next in the pathway (your own arguments on your website illustrate this particular point quite nicely). So, by the time you get beyond the level of 1000 saars, the uniformly distributed potential target islands are extremely isolated from each other, completely surrounded, on all sides, by truly vast oceans of non-beneficial sequences. In order to cross these huge gaps between these island targets, you have to resort to multi-character mutations or the cutting and pasting of just the right large sequences in just the right positions – over and over again (as you have done on your own website – right in line with what I'm saying on my website). The odds of such successful recombinations drop off, exponentially, with each step up the ladder (if you're actually using the Darwinian mechanism). The multiple reasons for this exponential problem are listed in detail on my website – if you care to actually look. Suffice it to say that your own efforts to falsify my position have actually ended up supporting my own claims. You haven't even begun to address the problem because your algorithms and your assumptions are based on either intelligent design or template matching – not the actual Darwinian mechanism of random mutations and function-based selection… just like I've always predicted since I first ran into you on Talk.Origins back in 2004.

seanpit
March 13, 2016, 04:31 PM PDT
seanpit: The random walk or “swim” within such small spaces is of course relatively short indeed!

There is no swimming with words. You can keep your feet dry from one-letter words up to 16-letter words and more. (The odds of randomly stumbling across a 16-letter word are about 1 in 10^20.)

seanpit: However, this is not the case at higher and higher levels of functional complexity.

That's your claim. Now show it.

seanpit: I’ve already explained to you why point mutations and/or recombination/concatenations will not help solve the problem in real life – without the use of intelligent design (which forms the basis of the illustrations on your own website).

Hold it now. Your claim was that there were no single-step pathways. If that is correct, intelligence won't help.

seanpit: 1) Generate truly random mutations (point, indel, recombination, etc.) that aren’t limited to determining and clipping out intact words or select “phrases” (something that doesn’t happen in real life).

Been there, done that.

seanpit: 2) Select based on changes in beneficial function – not template-matching, which doesn’t happen in real life.

How do you intend to do that? It's your claim, after all, that you are trying to prove.

seanpit: 3) Have a reasonable maximum steady-state population size with a reasonable reproductive rate and mutation rate. In other words, old sequences must “die off” as fast as new ones are “born” so that the overall population size remains the same.

Been there, done that.

seanpit: If you actually model how the Darwinian mechanism really works, you will quickly discover that your neat little pathways of closely spaced steppingstones break apart and become widely separated very quickly as you move up the ladder of functional complexity beyond your short little sequences.

How do you intend to do that? It's your claim, after all, that you are trying to prove.
Zachriel
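[Editor's note: the "1 in 10^20" figure quoted above is easy to sanity-check. There are 26^16, roughly 4.4 × 10^22, possible 16-letter strings; if on the order of a few thousand of them are English words (an assumed, illustrative count, not a figure from either commenter), the odds of a random 16-letter string being a word land in the 10^19 to 10^20 range. A rough check, under that assumption:]

```python
# Rough order-of-magnitude check of the "1 in 10^20" figure quoted above.
# The word count below is an assumption for illustration only.

ALPHABET_SIZE = 26
WORD_LENGTH = 16
ASSUMED_WORD_COUNT = 2_000  # hypothetical number of 16-letter English words

total_strings = ALPHABET_SIZE ** WORD_LENGTH  # about 4.4e22 possible strings
odds = total_strings / ASSUMED_WORD_COUNT
print(f"1 in {odds:.1e}")                     # about 1 in 2e19
```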
March 13, 2016, 07:56 AM PDT
Zachriel,
But we’re glad you agree that word-space is generally structured so as to be navigable by an evolutionary (selectable stepwise) search.
Again, as I originally explained to you back in 2004, this is only true at very, very low levels of functional complexity within very small sequence spaces. The random walk or "swim" within such small spaces is of course relatively short indeed! However, this is not the case at higher and higher levels of functional complexity. The situation changes quite dramatically beyond these very low levels. Why then do you continually misrepresent my position here and on your own website when you know you're arguing against a strawman misrepresentation of my actual position? Why not at least be honest enough to present and candidly deal with my true position – unless that's just too hard for you? ;-)
seanpit: You can’t just keep adding single characters to your sequence where each single character addition will be functionally beneficial.
Don’t forget point mutation and recombination.
I’m not. I’ve already explained to you why point mutations and/or recombination/concatenations will not help solve the problem in real life – without the use of intelligent design (which forms the basis of the illustrations on your own website). Such necessarily precise recombinations could not be realized, without the additional input of intelligent design, by any algorithm based on a sequentially beneficial selection process that models how natural selection works in real life. Again, your manually-generated illustrations work because they are based on intelligent selection, not mindless random mutations and function-based selection. And, your "Phrasenation" algorithm works because it is based on template matching - not function-based selection. Again, you're simply not modeling natural selection. If anything, your website illustrations highlight the truth of my position - not yours.
seanpit: Try it and see. Very quickly you will come to breaks in your pathway where the distance that needs to be crossed is more than a Levenshtein distance of 1.
Try it how?
By actually modeling what the Darwinian mechanism does in real life:

1) Generate truly random mutations (point, indel, recombination, etc.) that aren't limited to determining and clipping out intact words or select "phrases" (something that doesn't happen in real life).

2) Select based on changes in beneficial function – not template-matching, which doesn't happen in real life.

3) Have a reasonable maximum steady-state population size with a reasonable reproductive rate and mutation rate. In other words, old sequences must "die off" as fast as new ones are "born" so that the overall population size remains the same.

If you actually model how the Darwinian mechanism really works, you will quickly discover that your neat little pathways of closely spaced steppingstones break apart and become widely separated very quickly as you move up the ladder of functional complexity beyond your short little sequences. The option of recombining or "cutting and pasting" together sequences that already exist in your established "gene pool" of options won't help you at higher levels, because of the statistical problems I've already explained above. In short, at higher and higher levels the odds that just the right sequences will exist, pre-formed, within the established pool of options – so that only one, two, or three mutations would be needed to cross the gap – decrease exponentially with each step up the ladder.
seanpit
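[Editor's note: for concreteness, here is a minimal sketch of the three requirements listed above: unrestricted random mutation (point, insertion, deletion), selection on some benefit function rather than template matching, and a fixed-size, steady-state population. It is an editorial illustration, not either commenter's program, and the fitness() stub is deliberately left empty: what would count as a realistic "function-based" benefit measure for text is precisely the point in dispute, and nothing in this sketch resolves it.]

```python
# Sketch of a steady-state mutation/selection loop, assuming a user-supplied
# benefit function. Editorial illustration only.

import random
import string

ALPHABET = string.ascii_lowercase
POP_SIZE = 100
GENERATIONS = 1_000
MUTATION_RATE = 0.02  # per-character chance of a mutation event

def mutate(seq: str) -> str:
    """Apply random point substitutions, insertions, and deletions."""
    out = []
    for ch in seq:
        if random.random() < MUTATION_RATE:
            kind = random.choice(("point", "insert", "delete"))
            if kind == "point":
                out.append(random.choice(ALPHABET))
            elif kind == "insert":
                out.append(ch)
                out.append(random.choice(ALPHABET))
            # "delete": drop the character
        else:
            out.append(ch)
    return "".join(out)

def fitness(seq: str) -> float:
    """Placeholder for a function-based benefit measure (not template matching)."""
    return 0.0  # intentionally left undefined in this sketch

def evolve(seed: str) -> str:
    population = [seed] * POP_SIZE
    for _ in range(GENERATIONS):
        offspring = [mutate(random.choice(population)) for _ in range(POP_SIZE)]
        # Steady state: old sequences "die off" as new ones are "born",
        # keeping the population size constant.
        population = sorted(population + offspring, key=fitness, reverse=True)[:POP_SIZE]
    return max(population, key=fitness)

print(evolve("cat"))
```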
March 13, 2016, 07:20 AM PDT
Mung: Yet more evidence for intelligent design.

Thanks Obama!
Zachriel
March 12, 2016, 09:06 AM PDT
Zachriel:
But we’re glad you agree that word-space is generally structured so as to be navigable by an evolutionary (selectable stepwise) search.
Yet more evidence for intelligent design.
Mung
March 12, 2016, 08:41 AM PDT
seanpit: There is a “random walk” even when the next steppingstone is just 1 Levenshtein step away from the starting point (i.e., a random search algorithm is not successful every single time at this distance).

There's no reasonable way to read "swim through this ocean of meaningless words" as meaning that there are steppingstones the entire distance. You emphasize that with your diagram. http://www.educatetruth.com/wp-content/uploads/2014/01/Sequence-Space.png

But we're glad you agree that word-space is generally structured so as to be navigable by an evolutionary (selectable stepwise) search.

seanpit: You can’t just keep adding single characters to your sequence where each single character addition will be functionally beneficial.

Don't forget point mutation and recombination.

seanpit: Try it and see. Very quickly you will come to breaks in your pathway where the distance that needs to be crossed is more than a Levenshtein distance of 1.

Try it how?
Zachriel
March 12, 2016, 07:38 AM PDT
One more thing you don't seem to realize when it comes to your argument on your website: http://www.zachriel.com/mutagenation/Beware.htm

When you "concatenate" your words and phrases in your "evolving" sequences, you do so with the use of intelligent design. It's not a random concatenation process as happens in real life. In real life, sequences are cut out at random and pasted at random into the middle of other previously functional sequences. The result is almost always a loss of function, not a gain in novel beneficial function. And this becomes exponentially more and more true with each step up the ladder of functional complexity…

You see the problem here? Just because you have all the right words, pre-formed, in your pool of options to produce all the works of Shakespeare does not mean that they will assemble themselves, without intelligent design, to form anything functionally meaningful/beneficial beyond very, very low levels of functional complexity. All you've done here is move the problem up a scale. Instead of concatenating individual letters to make longer and longer sequences, you've resorted to concatenating entire words and phrases. You assume that, once an individual word or phrase is formed, your problems are over – that these words and short phrases can easily concatenate themselves, randomly, in the proper order to produce longer and longer Shakespearean phrases and poems, etc., ad infinitum, without any significant gaps in sequence space to slow the process down.

That's just not how it works in real life. In real life it becomes harder and harder to get even pre-existing short meaningful/functional sequences of DNA (or English words) to concatenate properly to form novel systems of function at higher and higher levels of functional complexity – exponentially more and more difficult with each step up the ladder. By the time you're talking about systems that require a minimum of more than 1000 saars, there simply are no two or three pre-formed subsystems that can be concatenated together without requiring numerous significant modifications that are not sequentially selectable as "beneficial". Suddenly you're left with a very large non-beneficial gap problem. And if you argue that there are several dozen or so smaller systems that could, theoretically, be concatenated properly to form the larger system, well, you run into the statistical problem of getting them all to arrange themselves properly by random chance alone before the larger system is complete and novel beneficial function can be realized.
seanpit
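[Editor's note: to make the "random cut and paste" described above concrete, here is a small sketch of the operation itself: a randomly chosen fragment of one string spliced into a random position in another. The example strings are arbitrary; the sketch only shows the mechanics of the splice and says nothing about whether any given result would be "beneficial", which is the point actually in dispute.]

```python
# Editorial sketch of a random cut-and-paste (splice) of one sequence into another.

import random

def random_cut_and_paste(donor: str, recipient: str) -> str:
    """Excise a random fragment of donor and insert it at a random position in recipient."""
    i, j = sorted(random.sample(range(len(donor) + 1), 2))  # random fragment boundaries
    pos = random.randrange(len(recipient) + 1)              # random insertion point
    return recipient[:pos] + donor[i:j] + recipient[pos:]

random.seed(1)
for _ in range(3):
    print(random_cut_and_paste("the quick brown fox", "a stitch in time"))
# The fragments land mid-word at arbitrary positions, so the spliced
# results are rarely meaningful phrases.
```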
March 11, 2016, 09:40 AM PDT
Zachriel,
Your statement clearly states it requires a random walk. It does not. There are selectable stepwise evolutionary pathways.
There is a "random walk" even when the next steppingstone is just 1 Levenshtein step away from the starting point (i.e., a random search algorithm is not successful every single time at this distance). Of course, this random walk is a very small walk with pretty high success rates at these low levels. I never said otherwise regarding 7-character sequence space. As I explained to you way back in 2004, the "swimming distance" isn't very far at all when you're talking about finding beneficial targets within sequence spaces that are less than a dozen characters in length! I specifically explained to you, way back then, that there are in fact "bridges" or closely-spaced "pathways" of "steppingstones" within such low levels of sequence space. Again, I went on to explain to you, in some detail, that such pathways rapidly break down as you move into higher and higher levels of sequence space/functional complexity. You can't just keep adding single characters to your sequence where each single character addition will be functionally beneficial. Try it and see. Very quickly you will come to breaks in your pathway where the distance that needs to be crossed is more than a Levenshtein distance of 1. Pretty soon you're talking about minimum Levenshtein distances of 2 or 3. And, as you move up the ladder of functional complexity, the minimum Levenshtein distance that must be crossed grows, in a linear manner. And, with each linear increase in this gap distance, the average time required to cross this distance increases exponentially. I explained this all to you way back in 2004... if you will recall:
Sean Pitman: It is my position that all language systems, including English as well as the genetic and protein language systems of living cells, are not lined up nice and pretty at all, and that the clustering that does indeed exist at lower levels of complexity gets smaller and smaller and more and more widely spaced, in an exponential manner, as one moves up the ladder of functional complexity.
What is so confusing about this concept? And how does your "Phrasenation" algorithm solve the problem? Hmmmm? Why did you even create your Phrasenation algorithm if you had no idea what I was talking about? Please do explain your position beyond the lowest levels of functional complexity – because your Phrasenation algorithm simply isn't helpful in modeling the actual Darwinian mechanism. I've been waiting a very long time for you to come up with something to support your primary argument that the Darwinian mechanism is actually up to the task, as you claim it is… and I'm still waiting. Oh, but what about your argument:
The doggerel “O Sean Pitman” shows that at least some long sequences are not disconnected in phrase-space.
There are several problems here. First off, and most importantly, your intermediate sequences are not sequentially beneficial in meaning/function compared to what came before. That's a fundamental problem for your position. Next, your steppingstones are not separated from each other by a Levenshtein distance of just 1. And you don't get remotely close to a 1000-character sequence where each step is not only closely spaced but also sequentially beneficial in function/meaning compared to what came before – which has been my long-stated limitation for evolutionary progress via any kind of Darwinian algorithm (which yours is not).

Your "Phrasenation" algorithm proves my point here. Look at the phrases that it generates as they "evolve". They are not sequentially meaningful/beneficial as they evolve… not even close. Most of the time they are completely meaningless. They aren't selected based on functionally beneficial meaning at all. They are selected based on template matching. Now, you tell me, how is this remotely comparable to the Darwinian mechanism of random mutation and function-based selection – beyond the lowest levels of functional complexity? You still really believe that your Phrasenation program/argument explains anything along these lines? Really? How so?
seanpit
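[Editor's note: readers who want to probe the "steppingstone at a Levenshtein distance of 1" question directly can do so with a few lines of code. The sketch below, an editorial illustration rather than either commenter's method, enumerates every string one edit away from a given word and keeps those that appear in a word list; the file name words.txt is an assumption, and any plain-text dictionary with one word per line would do.]

```python
# Editorial sketch: list all single-edit (Levenshtein distance 1) word neighbours.
# Assumes a plain-text word list, one word per line (the path is hypothetical).

import string

def one_edit_neighbours(word: str) -> set:
    """All strings one substitution, insertion, or deletion away from word."""
    letters = string.ascii_lowercase
    out = set()
    for i in range(len(word)):
        out.add(word[:i] + word[i + 1:])                          # deletions
        out.update(word[:i] + c + word[i + 1:] for c in letters)  # substitutions
    for i in range(len(word) + 1):
        out.update(word[:i] + c + word[i:] for c in letters)      # insertions
    out.discard(word)
    return out

with open("words.txt") as fh:  # hypothetical word list
    dictionary = {line.strip().lower() for line in fh}

word = "word"
steps = sorted(one_edit_neighbours(word) & dictionary)
print(f"{word!r} has {len(steps)} single-edit neighbours that are words: {steps}")
```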
March 11, 2016, 09:06 AM PDT