Uncommon Descent Serving The Intelligent Design Community

Sean Pitman on evolution of mitochondria

Image: mitochondria (credit: Louisa Howard)

From Detecting Design:

Now, it is true that mitochondrial organelles are quite unique and very interesting. Unlike any other organelle, except for chloroplasts, mitochondria appear to originate only from other mitochondria. They contain some of their own DNA, which is usually, but not always, circular – like circular bacterial DNA (there are also many organisms that have linear mitochondrial chromosomes with eukaryotic-style telomeres). Mitochondria also have their own transcriptional and translational machinery to decode DNA and messenger RNA and produce proteins. Also, mitochondrial ribosomes and transfer RNA molecules are similar to those found in bacteria, as are some of the components of their membranes. In 1970, these and other similar observations led Dr. Lynn Margulis to propose an endosymbiotic origin for mitochondria in her book, Origin of Eukaryotic Cells (Margulis, 1970). However, despite having their own DNA, mitochondria do not contain anywhere near the amount of DNA needed to code for all mitochondria-specific proteins. Over 99% of the proteins needed for mitochondrial function are actually produced outside of the mitochondria themselves. The DNA needed to code for these proteins is located within the cell’s nucleus and the protein sequences are assembled in the cytoplasm of the cell before being imported into the mitochondria (Endo and Yamano, 2010). It is hypothesized that these necessary genes were once part of the mitochondrial genome, but were then transferred and incorporated into the eukaryotic nuclear DNA over time. Not surprisingly, then, none of the initial mtDNAs investigated by detailed sequencing, including animal mtDNAs, look anything like a typical bacterial genome in the way in which genes are organized and expressed (Michael Gray, 2012).

It is interesting to note at this point that Margulis herself wasn’t really very Darwinian in her thinking. She opposed competition-oriented views of evolution and stressed the importance of symbiotic or cooperative relationships between species. She also argued that standard neo-Darwinism, which insists on the slow accrual of mutations by gene-level natural selection, “is in a complete funk” (Link).

But what about all of those similarities between mitochondria and bacteria? It would seem like these similarities should overwhelmingly support the theory of common ancestry between bacteria and mitochondria.

Well, the problem with Darwinian thinking in general is that too much emphasis is placed on the shared similarities between various creatures without sufficient consideration of the uniquely required functional differences. These required differences are what the Darwinian mechanism cannot reasonably explain beyond the lowest levels of functional complexity (or minimum structural threshold requirements). The fact of the matter is that no one has ever observed, nor has anyone ever published a reasonable explanation for, how random mutations combined with natural selection can produce any qualitatively novel protein-based biological system that requires more than a few hundred specifically arranged amino acid residues – this side of trillions upon trillions of years of time. Functionally complex systems that require a minimum of multiple proteins comprised of several thousand specifically-coded amino acid residue positions, like a rotary flagellar motility system or ATP synthase (illustrated), simply don’t evolve. It just doesn’t happen, nor is it remotely likely to happen, in what anyone would call a reasonable amount of time (Link). And, when it comes to mitochondria, there are various uniquely functional features that are required for successful symbiosis – features that bacteria simply do not have. In other words, getting a viable symbiotic relationship established to begin with isn’t so simple from a purely naturalistic perspective. More.

See also: Cells were complex even before mitochondria? Researchers: Our work demonstrates that the acquisition of mitochondria occurred late in cell evolution; the host cell already had a certain degree of complexity

and Life continues to ignore what evolution experts say (symbiosis can happen)

Follow UD News at Twitter!

Comments
Your statement clearly states it requires a random walk. It does not. There are selectable stepwise evolutionary pathways.
Darwinian mechanisms are clearly a random walkVirgil Cain
March 10, 2016 at 04:38 PM PDT
seanpit: And you took this statement as me arguing that the Darwinian mechanism would stall out at 7-character sequences? Your statement clearly states it requires a random walk. It does not. There are selectable stepwise evolutionary pathways. seanpit: http://www.educatetruth.com/wp-content/uploads/2014/01/Sequence-Space.png The doggerel "O Sean Pitman" shows that at least some long sequences are not disconnected in phrase-space.Zachriel
March 10, 2016 at 01:22 PM PDT
Here’s a portion of our conversation from many years ago (2004), where you first argued for your idea that there are always nice little pathways of closely-spaced beneficial sequences throughout sequence spaces regardless of the level of functional complexity under consideration:
Sean Pitman: Since you are quoting me on your website Zach, it might be good to note that I never said that a 7-letter sequences would take "zillions" of generations much less years to evolve. In fact, I have said just the opposite many times… It seems very likely to me that the next higher levels (i.e: 8, 9, 10, etc) will take only one or two generations for your population to evolve 1,000 uniquely meaningful sequences at each level. However, by the time you get to level 25, I am thinking that your population is going to start noticeably stalling in its ability to evolve the 1,000 uniquely meaningful English sequences. By level 50 I'm not sure that your population of even 100 trillion will succeed in less than a million generations...
Zachriel: You have not shown that. Indeed, there is no way to know that from mere mathematical analysis. You have to know their distribution. For all we know, words are all lined up nice and pretty in "permutation space". It turns out that many, perhaps most, of them are!
- I say that the odds are very strongly against that assertion. It is my position that all language systems, to include English as well as genetic and protein language systems of living cells, are not lined up nice and pretty at all, and that the clustering that does indeed exist at lower levels of complexity gets smaller and smaller and more and more widely spaced, in an exponential manner, as one moves up the ladder of functional complexity. This assertion is not only mathematically valid, it has also been experimentally supported by many thousands of experiments that have never shown anything to evolve in any language system beyond the lowest levels of functional complexity. For example, there are no examples of protein functions evolving that require a minimum of more than a few hundred fairly specified amino acids working together at the same time. And, this is despite well over 10^14 individual organisms working on this problem under close observation for millions of generations.
Dated: 4/29/2004 https://groups.google.com/forum/message/raw?msg=talk.origins/TdfZ8CC9Bb0/X24ZX8is6xoJ https://groups.google.com/forum/#!msg/talk.origins/TdfZ8CC9Bb0/X24ZX8is6xoJ You see, we've been over all of this before. Now, if you have some valid reason for believing that your "pathways" really do exist within high levels of sequence space (like beyond the level of 1000 saars), by all means, present your evidence. Certainly your evolution algorithms do no such thing as they aren't based on functional selection, but on template matching without respect to beneficial function.seanpit
March 10, 2016 at 09:38 AM PDT
Zachriel, You wrote:
You said, “If I want to evolve a new 7-letter word starting with meaningful 7-letter word, I will have to swim through this ocean of meaningless words.” But you don’t have to swim through meaningless sequences to cross over to the next meaningful word. Your claim is false.
And you took this statement as me arguing that the Darwinian mechanism would stall out at 7-character sequences? Really? I’m sorry, but that’s not what I said here. Again, nowhere did I say that evolution would stall out at the level of 7-character sequences. Why else would I specifically draw the line where the Darwinian mechanism stalls out at “1000 specifically arranged amino acid residues (1000 saars)”? – well before I had any conversation with you? While 7-character sequence space is a small “ocean” of around 1.28 billion sequences, the ratio of potentially beneficial vs. non-beneficial sequences is only about 1 in 300k. That’s a very small ratio when you’re talking about an algorithm that analyzes tens of thousands of sequences per “generation”. Also, take into account that at such low levels of functional complexity functional sequences form clusters that are connected to each other by extensive bridges that link the clusters throughout sequence space (see link below). http://www.educatetruth.com/wp-content/uploads/2014/01/Sequence-Space.png Yet, you write:
Sean Pitman: All I was trying to demonstrate is how the ratio of potentially beneficial vs. non-beneficial changes with each increase in the minimum size and or specificity of a sequence – exponentially. Sure. However, the space is not random, but highly structured. There are selectable stepwise paths from short words to long words.
The problem here, however, is that with each step up the ladder of functional complexity the “structure” and “stepwise paths” and “bridges” between the island clusters of beneficial sequences within higher and higher level sequence space start to break down – quite rapidly in fact. So, by the time you reach the level of 1000 saars, there is no “stepwise path” of closely-spaced steppingstones between your starting point(s) and the next closest potentially beneficial island within sequence space. Your starting-point island is completely surrounded, on all sides, by a truly enormous ocean of non-beneficial sequences. There is simply no way to cross over to the next closest island except by swimming blindly within an ocean that is larger than the universe, toward a star that is so very far away that it isn’t even visible with the most powerful telescope. We’re talking trillions upon trillions of years, on average, to get from one “island” to any other within such an extremely sparsely populated sequence space... That, in a nutshell, is the fundamental problem for the Darwinian mechanism. There simply are no “stepwise paths” beyond very very low levels of functional complexity. They just don’t exist.

Of course, you’re not alone in believing that they must exist at all levels of functional complexity. I had a debate not too long ago with a mathematician, Jason Rosenhouse, who made the same claim that you just made here. He argued that the exponentially declining ratio of potentially beneficial vs. non-beneficial at higher and higher levels of functional complexity didn’t matter because there would always be thin little paths of steppingstones that could quickly and easily transport the evolving sequence across the vast oceans of non-beneficial sequences (see link below). http://www.detectingdesign.com/JasonRosenhouse.html#Steppingstones

The problem, of course, is that this just isn’t true beyond the lowest levels of functional complexity. Such paths simply don’t exist at or beyond the level of 1000 saars. Why not? Because, it is known that functionally-beneficial sequences have an essentially uniform distribution within sequence space. It is also known that at higher levels of functional complexity the modifications needed to get from one island to the next require more than the tweaking of just one or two residue positions. At the level of 1000 saars dozens of residue positions need to be modified to produce something qualitatively new that is also functionally beneficial to the organism. And, a non-beneficial gap distance that is a few dozen residues wide (the Levenshtein distance) is not crossable in what anyone would consider to be a reasonable amount of time. For further information on this topic see: http://www.detectingdesign.com/flagellum.html#Calculation

But, of course, I've already explained this to you before during our original conversations over 10 years ago:
Sean Pitman "Well now, that also depends now doesn't it? The answer to this question is really the answer to your second question. Technically speaking, the English language system _could_ have been set up so that all 2-letter sequences surrounding the "at" sequence would be meaningfully defined." Zachriel: It wasn't. Sean Pitman: That is correct. It wasn't set up like this even though it could have been. Instead, it was set up very much like I claim it was. It is much more randomly diffuse in its setup than you and many other evolutionists seem to be capable of recognizing. At lower levels the islands and bridges are in fact quite common. But, as even you have discovered, these islands start moving rapidly away from each other and the bridges start narrowing and snapping completely, in an exponential manner, with each step up the ladder of meaningful complexity. https://groups.google.com/forum/#!msg/talk.origins/TdfZ8CC9Bb0/Ad8Fbww1TYAJ
seanpit
March 10, 2016 at 08:51 AM PDT
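For reference, the “Levenshtein distance” mentioned above is the standard edit distance between two strings: the minimum number of single-character changes needed to turn one into the other. A minimal Python sketch of the textbook dynamic-programming computation (an illustration added here for clarity; neither commenter supplied code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Two edits separate "sat" from "sated" (insert 'e', then 'd').
print(levenshtein("sat", "sated"))  # 2
```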
Zachriel is still conflating artificial selection with natural selection. Sean Pitman – trying to argue with Zachriel is fruitless and just leads to aggravation.Virgil Cain
March 9, 2016 at 07:37 AM PDT
Sean Pitman: Where did I ever say that 7-letter words (or a good bit longer) can’t evolve in a reasonable amount of time? You said, "If I want to evolve a new 7-letter word starting with meaningful 7-letter word, I will have to swim through this ocean of meaningless words." But you don't have to swim through meaningless sequences to cross over to the next meaningful word. Your claim is false. You also said, "Getting from one meaningful 7-letter phrase to a different meaningful 7-letter phrase requires, on average, a fairly long random walk through 250,000 meaningless options." This reiterates your point that there is no selectable stepwise path, and that it requires a random walk. This is false. There are selectable stepwise pathways. Sean Pitman: All I was trying to demonstrate is how the ratio of potentially beneficial vs. non-beneficial changes with each increase in the minimum size and or specificity of a sequence – exponentially. Sure. However, the space is not random, but highly structured. There are selectable stepwise paths from short words to long words.Zachriel
March 9, 2016 at 07:17 AM PDT
Zachriel:
So you’ve abandoned your original contention that words much longer than seven letters can’t evolve per the algorithm you yourself provided above. That’s all you had to say.
Where did I ever say that 7-letter words (or a good bit longer) can't evolve in a reasonable amount of time? I've never made such a claim - ever. In fact, I've specifically said, many many many times (well before you came along with your Dawkins-like evolution algorithms), that my uncrossable line for evolutionary progress is at the level of 1,000 specifically-arranged characters. How is that remotely close to a very short 7-character sequence? All I was trying to demonstrate is how the ratio of potentially beneficial vs. non-beneficial changes with each increase in the minimum size and/or specificity of a sequence - exponentially. You do realize, however, that a ratio of 1 in 350,000 is quite evolvable? There is no significant limitation to evolutionary progress at this level - especially given populations that run into the trillions (as in bacterial populations for example). For next time, why not try and read all of what I've written on a particular topic like this (as per the links I've provided in this thread) before you jump to conclusions? - and make claims about my position that I've never made? I simply challenged you to see how far you could get using a true Darwinian model of random mutations and natural selection. I never told you that the uncrossable line would end up at 7 or 10 or 12-letter words. I told you that you would eventually see an exponential decline in evolutionary potential as you moved beyond these very very low levels of functional complexity - until evolutionary progress completely stalls out before reaching the level of 1000 specifically arranged characters. So far, I've been right. Your algorithms have done absolutely nothing to support the Darwinian notion that random mutations and function-based selection can evolve anything beyond the low levels of functional complexity this side of a practical eternity of time.seanpit
March 8, 2016 at 03:33 PM PDT
seanpit: The ratio of defined vs. non-defined two-letter sequences is about 1 in 7. The ratio of defined vs. non-defined 3-letter sequences is about 1 in 18. The ratio for defined vs. non-defined 7-letter sequence is about 1 in 350,000… etc. That's right, which was the basis of your original argument. seanpit: Beyond this, just because a single word happens to exist within the English dictionary doesn’t mean that it has a selectable advantage over any other word in a given context. So you've abandoned your original contention that words much longer than seven letters can't evolve per the algorithm you yourself provided above. That's all you had to say.Zachriel
March 8, 2016 at 03:10 PM PDT
Virgil,
I was just telling you what evolutionists like Larry Moran will say about your claims.
I've had many discussions with Larry Moran over the years. The problem with neutral evolution, as Larry knows full well, is that with each linear increase in a neutral gap, the average time to cross this gap increases exponentially - because natural selection is completely blind within such a gap and cannot, therefore, aid in the process. So, you see, the argument that neutral evolution solves the statistical problems for Darwinian evolution is nonsense. It is the problem.seanpit
March 8, 2016 at 02:51 PM PDT
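As a toy illustration of the arithmetic seanpit is appealing to above (not a population-genetics model, and the independence and alphabet-size assumptions are simplifications): if selection is treated as blind inside a gap, then hitting one specific combination of d amino-acid changes by unguided sampling has probability roughly (1/20)^d per trial, so the expected number of trials grows exponentially with the gap width d.

```python
# Toy arithmetic only, illustrating the commenter's framing: each trial draws
# all d positions uniformly from a 20-letter amino-acid alphabet, trials are
# independent, and only one exact combination counts as a "hit".
ALPHABET = 20

for d in range(1, 8):
    p_hit = (1.0 / ALPHABET) ** d        # chance one trial hits the target
    expected_trials = 1.0 / p_hit        # mean of a geometric distribution
    print(f"gap width d = {d}: expected trials ~ {expected_trials:,.0f}")
```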
Zachriel, You wrote:
We’re not talking about phrases, but words as there is no disagreement as to what constitutes a valid word in the English language. Per your statement, words are functional, and per your statement longer words can’t evolve from shorter words because as the words get longer, they are more and more widely separated in letter-space. You had stated elsewhere that the limit was words of about seven in length.
First off, I never said that the limit to evolutionary progress was "words about 7-letters in length". That's not remotely true. My stated limit for evolutionary progress has been consistently placed at systems that require over "1000 specifically arranged characters" (be those characters letters or amino acid residues within proteins). Also, as you very well know, evolution is supposed to be able to start with the very simple and evolve the very complex - equivalent to starting with single words and evolving an entire Shakespearean play. You're fully aware of this. After all, didn't you write a "Phrasenation" algorithm to evolve entire phrases, poems, and longer works from Shakespeare? http://www.zachriel.com/phrasenation/ Why did you do this if you thought we were only dealing with single words in English?

Beyond this, just because a single word happens to exist within the English dictionary doesn't mean that it has a selectable advantage over any other word in a given context. Again, in modeling real evolution, real natural selection, your selection process must be based on changes in function - not just matches to a pre-established target sequence. If you cannot do this, you're simply not modeling natural selection. You're not getting at the heart of the problem for the Darwinian mechanism.

Of course, even without modeling natural selection, the ratio of all words within the English dictionary changes with their size. The ratio of defined vs. non-defined two-letter sequences is about 1 in 7. The ratio of defined vs. non-defined 3-letter sequences is about 1 in 18. The ratio of defined vs. non-defined 7-letter sequences is about 1 in 350,000... etc. And, this changing ratio is in regards to defined vs. non-defined - without respect to beneficial function. Still, one quickly gets the idea of what would happen to this ratio given the additional requirement of going beyond what is merely defined to what is also functionally beneficial within a given setting or environment. Clearly, the ratio would significantly decrease given such an additional requirement. The exponential nature of the problem becomes quite clear to the candid mind.

Notice also that at lower levels there is a clustering effect within most language or information-based systems (to include the English language). For example, many longer words are comprised of shorter words or prefixes that end up being clustered within various regions of sequence space. Random point mutations can move a sequence around fairly rapidly within these clusters. However, such a clustering effect becomes less and less prominent with each step up the ladder of functional complexity. Gaps between clusters become wider and wider. This is not reflected in your "Phrasenation" algorithm, in particular, because you define all portions of phrases or sequences as "selectable" as long as they match your chosen target sequences. And, of course, that's a key problem with your algorithm.seanpit
March 8, 2016 at 02:45 PM PDT
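The word/string ratios quoted above (roughly 1 in 7 for 2-letter strings, 1 in 18 for 3-letter, 1 in 350,000 for 7-letter) can be checked against any machine-readable word list. A rough sketch, assuming a plain-text dictionary at a hypothetical path words.txt with one word per line; the exact numbers will vary with the dictionary used:

```python
from collections import Counter

# Hypothetical word list: a plain-text file, one lowercase word per line
# (e.g. a standard Unix dictionary). The exact ratios depend on the list used.
with open("words.txt") as fh:
    words = {w.strip().lower() for w in fh if w.strip().isalpha()}

lengths = Counter(len(w) for w in words)
for n in (2, 3, 7):
    possible = 26 ** n                 # all strings of length n over a-z
    defined = lengths.get(n, 0)        # strings that happen to be words
    if defined:
        print(f"length {n}: about 1 in {possible // defined:,} strings is a word")
```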
Sean:
Random drift takes to long when it comes to making anything beyond very low levels of functional complexity (i.e., nothing that requires a minimum of at least 1000 specifically arranged amino acid residues). It just doesn’t happen and statistically is very very unlikely to happen this side of trillions of years of time.
I was just telling you what evolutionists like Larry Moran will say about your claims.Virgil Cain
March 8, 2016 at 02:38 PM PDT
Zachriel, There are two key problems with your algorithms when it comes to modeling the Darwinian mechanism of random mutations and natural selection:
1) Your algorithms don't select based on beneficial function. 2) Your mutations aren't random when it comes to where mutations take place within a sequence (i.e., in your algorithms, mutations never happen within the middle of words within a phrase for instance - unlike real mutations that affect DNA within organisms randomly in the middle of genes or other coding regions).
If you modify your algorithms accordingly, I think you'll come up with very different results that demonstrate the exponential nature of the problem that natural selection faces in real life. Remember, it's all about the functionality of a sequence. If your algorithm does not select based on function, you haven't got a truly evolutionary algorithm.seanpit
March 8, 2016 at 02:27 PM PDT
seanpit: How can you argue that a random sequence of English words is more functionally beneficial compared to any other random sequence of English words? We're not talking about phrases, but words as there is no disagreement as to what constitutes a valid word in the English language. Per your statement, words are functional, and per your statement longer words can't evolve from shorter words because as the words get longer, they are more and more widely separated in letter-space. You had stated elsewhere that the limit was words of about seven in length.Zachriel
March 8, 2016 at 02:14 PM PDT
Zachriel, You wrote:
You defined function as a word that is defined or recognized as beneficial in a larger system, such as the English language.
Oh please. Words, or sequences of words, may have beneficial meaning, or no beneficial meaning, depending on context and how the words are arranged relative to each other. How can you argue that a random sequence of English words is more functionally beneficial compared to any other random sequence of English words? "are flabergasted figs tree dog" "stick wasp in weasel woods skunk" Which "phrase" is more functionally beneficial when spoken to an English-speaking person? You see, just because all of the individual words in a particular phrase may be found in an English dictionary doesn't mean that the sequence is any more functionally beneficial compared to what came before. Yet, that is exactly what you have to determine if you're going to model how natural selection really works. You have to determine if the new sequence is functionally beneficial compared to what came before - i.e., that it produces some kind of functional advantage. It is not enough that it happens to match a portion of some predetermined target sequence - regardless of its own independent meaning/function. That's not how natural selection works. You have to demonstrate a functional advantage each step of the way. If you cannot do this, you're simply not modeling how natural selection really works. Now, I'll give you a starting point of any word you may find in an English dictionary. However, from that point onward you have to determine if the next evolutionary step is functionally beneficial in a given environment or situation compared to what came before. That's how natural selection works in real life. However, this isn't how your algorithms work. You write:
a at sat sate sated
Where is the change in beneficial function for each step in this sequence? While it is statistically easy to evolve between small words in the English language using single point mutations, and while it is more difficult to evolve between larger words, it is exponentially harder to evolve larger and larger sequences, to include multi-word sequences, when you have to make your selections based on functional benefits (regardless of the random search algorithm you choose to use). Also, in real life, mutations within a sequence cannot distinguish between whole "words" or portions of "words". The mutational breaks are randomly determined - unlike your algorithms. Now, this isn't a difficult concept to understand. Do you really not understand that natural selection is based on a functional advantage for the newly evolved sequence?seanpit
March 8, 2016 at 02:00 PM PDT
Virgil, You wrote:
Sean, Most evolutionary biologists would agree that natural selection alone is incapable of producing complex adaptations for the reasons you mention, mainly the fact of missing selectable steps. That is why drift and neutral (construction) theory have been given new life. They are the tinkerers behind the scenes. And sometimes adaptations can emerge from that.
Random drift takes too long when it comes to making anything beyond very low levels of functional complexity (i.e., nothing that requires a minimum of at least 1000 specifically arranged amino acid residues). It just doesn't happen and statistically is very very unlikely to happen this side of trillions of years of time.seanpit
March 8, 2016 at 01:40 PM PDT
seanpit: However, why do you keep missing the part where I said, “Very quickly you will find yourself running into walls of non-beneficial function“? You defined function as a word that is defined or recognized as beneficial in a larger system, such as the English language. seanpit: You do understand that the goal here is to model natural selection? – right? Do you not understand that your algorithm doesn’t do this? It's your algorithm. We're just implementing it. It's quite obvious that you meant that we take a word, such as "a", and then randomly change letters to find longer words. a at sat sate satedZachriel
March 8, 2016 at 12:37 PM PDT
Sean, Most evolutionary biologists would agree that natural selection alone is incapable of producing complex adaptations for the reasons you mention, mainly the fact of missing selectable steps. That is why drift and neutral (construction) theory have been given new life. They are the tinkerers behind the scenes. And sometimes adaptations can emerge from that.Virgil Cain
March 8, 2016 at 12:20 PM PDT
One more thing Zachriel. You keep quoting my original challenge like I never mentioned that natural selection was based on beneficial changes in function:
Sean Pitman: say you start with a short sequence, like a two or three-letter word that is defined or recognized as beneficial by a much larger system of function, such as a living cell or an English language system. Try evolving this short word, one letter at a time, into a longer and longer word or phrase. See how far you can go. Very quickly you will find yourself running into walls of non-beneficial function.
However, why do you keep missing the part where I said, "Very quickly you will find yourself running into walls of non-beneficial function"? You see, I've always consistently pointed out to you that selection must be based on beneficial function. Yet, your algorithms aren't based on function at all, but on template matching to pre-selected targets where any match to any portion of your pre-selected target sequence is "selectable" by your algorithm. Natural selection cannot do what your algorithms do!seanpit
March 8, 2016 at 11:31 AM PDT
Zachriel, You do understand that the goal here is to model natural selection? - right? Do you not understand that your algorithm doesn't do this? Natural selection can only select based on changes in beneficial function within a given environment - that's it. Your algorithm doesn't select based on functional changes at all. Your algorithm selects based on template matching or the addition of small sequences without regard to their meaning or the overall change in function or meaning of the evolving sequence. That means that your algorithm does not actually model natural selection at all - not even a little bit. Your algorithm is doing exactly what Richard Dawkins' "Weasel" algorithm did. Don't you see that? You haven't come up with anything new or helpful here at all. Not at all.

In my initial discussions with you, I was trying to get you to understand the problems with a function-based selection mechanism when it comes to producing higher and higher level systems. The key problem, of course, is that as the minimum structural threshold requirements increase in a linear manner (i.e., either an increase in the minimum size and/or minimum degree of specificity) the ratio of potentially beneficial vs. non-beneficial sequences will decrease in an exponential manner. This is true for the English language system and any other information system you wish to name - to include systems based on DNA or proteins. For example, as the minimum size of a set of English characters increases from small words, to larger words, to phrases, to sentences, to paragraphs,... etc, the ratio of sequences that will be functionally beneficial will decrease in an exponential manner as compared to non-beneficial or meaningless sequences. I'm sure that even you can recognize the truth of this concept. Even Dawkins recognizes the truth of it. What happens, then, is that with each step up the ladder of functional complexity the next closest potentially beneficial island within sequence space gets farther and farther away - in a linear manner. And, with each linear increase in the minimum distance in sequence space, the average time it takes for a random search algorithm to find another qualitatively novel beneficial sequence grows exponentially.

Now, go back and look at your "Phrasenation" program and notice that the vast majority of your intermediate steppingstone sequences make no sense - are not meaningful or functionally beneficial within a given environment. You're simply selecting any additional single "word", of whatever kind, and adding it to your evolving "phrase" without any consideration of whether it makes meaningful sense or not - if it would be functionally advantageous in a given environment. And, eventually you end up with your "target phrase" - just like Dawkins did with his "Weasel" algorithm. That is why your algorithm "works" in such a rapid manner. However, it is also why your algorithm doesn't reflect what natural selection can do in real life.

As I explained to you many years ago (see link below) your program is based on the notion that every part of a sequence in a collection of phrases like Hamlet is meaningfully beneficial as long as no partial words are present. This notion is simply ridiculous. It means that sequences like, "and in the" and ", no, not" and "is let the" are defined as meaningfully beneficial in your program. Basically, absolutely any addition to a string will be defined as beneficial as long as it is a complete word found as part of this particular sequence in Hamlet.
It need not represent an intact thought much less an internally relevant thought. http://talk.origins.narkive.com/1Bty4KMM/zach-s-prasenation-evolution-program#post6 So, what are you trying to prove here? That evolution makes sense beyond very low levels of functional complexity? How have you done that any better than Richard Dawkins who admitted that his own "Weasel" algorithm doesn't really function like natural selection functions?seanpit
March 8, 2016 at 10:39 AM PDT
Zachriel: Like this: Or a bit more precisely, a h ka za b q az aa ... at ytn wt t uat ... at ... sat Or with a bit of recombination, this time leaving out the stillborns and cousins. a an can cancanZachriel
March 8, 2016 at 08:09 AM PDT
Zachriel conflates artificial selection with natural selection. How typical...Virgil Cain
March 7, 2016 at 05:00 PM PDT
seanpit: the problem with your evolution algorithms is that nothing is selected based on beneficial meaning or function. Here's the challenge again:
Sean Pitman: say you start with a short sequence, like a two or three-letter word that is defined or recognized as beneficial by a much larger system of function, such as a living cell or an English language system. Try evolving this short word, one letter at a time, into a longer and longer word or phrase. See how far you can go. Very quickly you will find yourself running into walls of non-beneficial function.
Start with a two or three-letter word. That's easy. We can go to the dictionary and find one of those. Now, we are to "evolve this short word, one letter at a time, into a longer and longer word or phrase." Okay. We'll just stick with words for now, as there is no disagreement as to what constitutes a valid word in the English language. What does evolve mean? Well, it means randomly change a letter in a word, or randomly recombine parts of words that are already in the population. If it makes a new word, then we can add the word to the population. If you want, we could limit the population to just the longest words, but that isn't essential. That seems to be exactly what the challenge entails. ETA: Like this: a at sat sate satedZachriel
March 7, 2016 at 02:55 PM PDT
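A minimal Python sketch of the procedure Zachriel describes above: start from a seed word, apply random single-letter substitutions or insertions, and keep only results that are themselves dictionary words. This is an illustration under stated assumptions, not Zachriel's actual code; the WORDS set here is a tiny stand-in for a real word list.

```python
import random
import string

# Hypothetical toy dictionary; in practice, load a full word list into this set.
WORDS = {"a", "an", "at", "sat", "sate", "sated", "can"}

def mutate(word: str) -> str:
    """Randomly substitute one letter or insert a letter at a random position."""
    pos = random.randrange(len(word) + 1)
    letter = random.choice(string.ascii_lowercase)
    if pos < len(word) and random.random() < 0.5:
        return word[:pos] + letter + word[pos + 1:]   # substitution
    return word[:pos] + letter + word[pos:]           # insertion

def evolve(seed: str, trials: int = 100_000) -> set[str]:
    """Keep every mutant that is itself a dictionary word (the stated rule)."""
    population = {seed}
    for _ in range(trials):
        parent = random.choice(tuple(population))
        child = mutate(parent)
        if child in WORDS:
            population.add(child)
    return population

print(sorted(evolve("a")))  # typically reaches 'at', 'sat', 'sate', 'sated', ...
```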
Again, Zachriel, the problem with your evolution algorithms is that nothing is selected based on beneficial meaning or function. That means, of course, that your algorithms aren't doing what natural selection does in real life. Longer and longer functionally meaningful words, phrases, sentences, paragraphs, etc., become more and more separated from each other, in sequence space, so that the average number of required mutations to get from one to the next increases in an exponential manner. Not just any sequence of letters or words in English is functionally meaningful - producing some kind of advantage in a given environment. And, this is the very same problem that exists in DNA or protein sequence space. Functionally beneficial systems that require a greater minimum number and/or specificity of amino acid residues are exponentially harder to find in sequence space via any kind of random search algorithm. Surely you can understand that - if you only did the math or produced an algorithm that actually made selections based on beneficial functionality. In short, your algorithms do not "select" based on any kind of meaningful functional advantage beyond what already existed within the original "gene pool" of options. They select based only on a comparison to a pre-established target sequence without any evaluation of the functional meaning of the intermediate sequences. This is exactly the same problem Dawkins had with his "Methinks it is like a weasel" algorithm. http://www.zachriel.com/phrasenation/ http://www.zachriel.com/mutagenation/ At least Dawkins was honest enough to admit that his algorithm didn't truly reflect the Darwinian mechanism of natural selection...
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success.
seanpit
March 7, 2016 at 02:15 PM PDT
seanpit: The problem with Zachriel’s evolution algorithms, as I’ve mentioned to him many times before, is the same problem Dawkins has with his evolution algorithm (“Methinks it is like a weasel”). Neither uses function-based selection where each mutation is functionally beneficial compared to what came before. Sean Pitman defined the function: "start with a short sequence, like a two or three-letter word that is defined or recognized as beneficial by a much larger system of function, such as a living cell or an English language system. Try evolving this short word, one letter at a time, into a longer and longer word or phrase."Zachriel
March 5, 2016 at 06:04 AM PDT
So, Dawkins and Zachriel need to go back to the drawing board and come up with a new evolutionary algorithm that actually reflects what we see in nature. If they do this, they will soon realize, if they are honest with themselves, that such algorithms stall out, in an exponential manner, with each step up the ladder of functional complexity. I wrote a Weasel program last night and it couldn't reliably find even the first two letters 'M' 'E'.Mung
March 4, 2016 at 06:31 PM PDT
Not to mention the fact that natural selection is all about survival and reproduction and in all evolutionary algorithms that is granted from the start. That means natural selection is satisfied from the get go and it has nothing left to do.Virgil Cain
March 4, 2016 at 05:35 PM PDT
The problem with Zachriel's evolution algorithms, as I've mentioned to him many times before, is the same problem Dawkins has with his evolution algorithm ("Methinks it is like a weasel"). Neither uses function-based selection where each mutation is functionally beneficial compared to what came before. All of these algorithms use "target sequences" that function as templates. Each additional match to this target sequence is defined as "selectable" in these evolution algorithms. That is why they work so well and so quickly. The problem, of course, is that biological evolution does not and cannot work like this. Natural selection cannot preferentially select any novel mutation over any other until such a mutation comes along that actually produces some qualitatively novel functional change that also has a positive effect on reproductive fitness relative to all of the other individuals within that population. Using this Darwinian mechanism, finding novel functionality with greater and greater minimum size and/or specificity requirements becomes exponentially more and more difficult to achieve within a given span of time. http://www.detectingdesign.com/flagellum.html#Calculation So, Dawkins and Zachriel need to go back to the drawing board and come up with a new evolutionary algorithm that actually reflects what we see in nature. If they do this, they will soon realize, if they are honest with themselves, that such algorithms stall out, in an exponential manner, with each step up the ladder of functional complexity.seanpit
March 4, 2016 at 01:58 PM PDT
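For readers who have not seen it, here is a minimal sketch of the Dawkins-style "Weasel" procedure being criticized above, in which every mutant is scored purely by how many characters already match a pre-set target phrase. This is illustrative code added for reference (it is neither Dawkins' nor Zachriel's original program, and the population size and mutation rate are arbitrary choices):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "
POP_SIZE = 100
MUTATION_RATE = 0.05

def score(candidate: str) -> int:
    """Fitness = number of characters matching the fixed target string."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Copy the parent, randomizing each character with a small probability."""
    return "".join(random.choice(CHARS) if random.random() < MUTATION_RATE else c
                   for c in parent)

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(offspring + [parent], key=score)   # keep the best match so far
print(f"Matched the target after {generation} generations.")
```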
Dawkins' weasel is great support for what Dr. Pitman wrote. The sentence "Methinks it is like a weasel" only works in one specific case, that is in the Shakespeare play that contains it. It is meaningless in every other piece of literature. It would only do any good if it arose and was properly integrated into that play. And not surprisingly Dawkins and the evo-minions seem totally unaware of that fact.Virgil Cain
March 4, 2016 at 09:44 AM PDT
Zachriel:
The origin of the life we know Just like this poem rose from simple forms, In meaning, and in kind, step-by-step.
What a nonsensical BS artist Zachriel is. Talk about closing one's eyes and blocking the sight- Zachriel is as blind as blind can be and just as mindless as natural selection.Virgil Cain
March 4, 2016 at 09:38 AM PDT
Sean Pitman: say you start with a short sequence, like a two or three-letter word that is defined or recognized as beneficial by a much larger system of function, such as a living cell or an English language system. Try evolving this short word, one letter at a time, into a longer and longer word or phrase. See how far you can go. Very quickly you will find yourself running into walls of non-beneficial function.
O Sean Pitman Beware a war of words ere you err. A man wins the crown, but lowers his helm. A kiss Is a kiss, and a war can be just, but a war of words Just irks the crowd and leads you far astray. Words, you know, can lead to a clash of swords. Why do you think that you alone have it Legit when sages aver another idea? Could it be that you could see the light But choose instead to close your eyes and block The sight? The origin of the life we know Just like this poem rose from simple forms, In meaning, and in kind, step-by-step.Zachriel
March 4, 2016 at 07:33 AM PDT
