Uncommon Descent Serving The Intelligent Design Community

FEA and Darwinian Computer Simulations


In my work as a software engineer in aerospace R&D I use what is arguably the most sophisticated, universally applicable finite-element analysis program ever devised, created by some of the most brilliant people in the field, and refined and tested for 35 years since its inception in the mid-1970s for the development of variable-yield nuclear weapons at Lawrence Livermore National Laboratory. It is called LS-DYNA (LS for Livermore Software, and DYNA for the evaluation of dynamic, nonlinear, transient systems).

A finite element is an attempt to discretize, on a macro level, what occurs at a molecular level in a physical system, so that the result is amenable to a practical computational solution. The learning curve for this sophisticated technology is extremely steep, and the most important thing one learns is that empirical verification of the simulation results is absolutely required to validate the predictions of any FEA model.

In an LS-DYNA simulation, all the laws of physics and the mathematics that describe them are precisely known. In addition, all of the material properties associated with the physical objects are precisely quantified with empirical verification (density, modulus of elasticity, and much more).

The FEA solver computes a physical result by solving millions of differential equations, using a minimal integration time step based on the time required for a disturbance traveling at the speed of sound to traverse the smallest finite element with the greatest mass density.
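That stability limit (the Courant condition for explicit time integration) can be sketched in a few lines. This is a simplified one-dimensional illustration using the wave speed c = sqrt(E/ρ), not LS-DYNA's actual element-dependent formula; the element tuple layout is purely illustrative.

```python
import math

def critical_time_step(elements):
    """Return the smallest element-traversal time for a sound wave.

    Each element is (char_length_m, youngs_modulus_Pa, density_kg_m3).
    A simplified 1D sketch of the Courant stability condition, not
    the solver's real formula.
    """
    dt = float("inf")
    for length, modulus, density in elements:
        c = math.sqrt(modulus / density)  # approximate wave speed in the material
        dt = min(dt, length / c)
    return dt

# A steel element 1 mm across: E ~ 200 GPa, rho ~ 7850 kg/m^3
# gives a wave speed near 5 km/s and a time step well under a microsecond.
dt = critical_time_step([(0.001, 200e9, 7850)])
```

The key point it illustrates is why dense materials and tiny elements force tiny time steps, and hence why nonlinear transient runs are so expensive.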

Even with all of this, and countless man-years of experience by sophisticated users (LS-DYNA has been used for many years in the auto industry to simulate car crashes), empirical verification is always required: a real car must actually be crashed to validate the FEA results.

In light of all this, consider the typical Darwinian computer simulation and the trust that could be put in one.

Darwinian computer simulations are simply a pathetic joke as they relate to biological reality. This should be obvious to anyone with experience in the field of legitimate computer simulation.

Comments
DrBot and Elizabeth Liddle bow out. Mung
I should say at this stage, that during my very interesting time here at UD, I have come to the interim conclusion that the best ID arguments are not against “Darwinian evolution” at all, but against the notion that the kind of self-replicators that are a prerequisite for Darwinian evolution are adequately explained in non-ID terms.
It all ties in. But you've ordered Signature in the Cell, right? I've been re-reading just to prep, lol. But it's not just any self-replicators, it's the kind of information and information system we actually find in living things. Mung
Well, there’s an interesting argument! GAs start further from a solution so they have more chance of reaching one?
If you recall, and I'll find the link for you if you wish, you were the one who claimed that the initial population was unfit because they were all chosen at random. I responded that since the initial point in the search space was decided at random, there was a chance that one of the genomes could land smack dab on the best solution from the outset. So starting at a random spot in the search space is not the same as starting out further from a solution. Since we don't know in advance where in the search space a solution will be found, starting our search at random spots is as good a choice as any, and actually increases the likelihood of a successful search. You can of course test this by starting your initial population all off with the same genome and see which method performs better. The point is, even the way the initial population is seeded is purposely chosen. As is the size of the initial population. Mung
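Mung's proposed test — seed one population at random, seed another with identical genomes, and compare the best initial fitness — can be sketched directly. The 20-bit OneMax fitness function (count the ones) is a made-up stand-in for any real problem, and the population size is arbitrary.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

GENOME_LEN = 20

def fitness(genome):
    # OneMax: a toy fitness function standing in for a problem-specific one
    return sum(genome)

def random_population(size):
    # Each bit chosen at random: any genome, even the optimum, could appear
    return [[random.randint(0, 1) for _ in range(GENOME_LEN)]
            for _ in range(size)]

def uniform_population(size):
    # Every individual starts from the same genome (all zeros here)
    return [[0] * GENOME_LEN for _ in range(size)]

best_random = max(fitness(g) for g in random_population(100))
best_uniform = max(fitness(g) for g in uniform_population(100))
# The randomly seeded population almost always begins with a fitter
# best individual than the uniformly seeded one.
```

Nothing here settles which seeding evolves faster over many generations; it only shows that the random start samples the solution space more widely at generation zero.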
Thanks for accepting my word UPD. I appreciate it. I will also get to your latest post in due course. You raise some important points. Mung:
So let’s talk about the initial population in a GA. Often, the specific setting of each “bit” of each chromosome/genotype is generated randomly. This is highly unlike a natural population. The result is that in a GA the search for a solution begins at random points in the space of potential solutions. This increases the likelihood of a successful search.
Well, there's an interesting argument! GAs start further from a solution so they have more chance of reaching one?
But natural populations start out very close together on the “fitness landscape.” Does it then follow that the likelihood of a successful search is decreased? Does this mean it is less likely that natural populations will find novel solutions?
heh. Well, we can "start" a simulation wherever we want. One thing that I have done is to evolve a population to survive optimally in a given "environment", then change the "environment", rather than go back to square one. And sure, they can do that. But I may be misunderstanding you (that's the trouble with metaphors, even when they are fairly specific, like "fitness landscape"). What exactly do you mean when you say: "natural populations start out very close together on the fitness landscape?"

This would certainly be true during speciation, or even during the kind of temporary divergence between populations that the Grants observed in the Galapagos finches. But those kinds of divergences (where beak size distributions tend to become bimodal if there are two sizes of seed, or even trimodal if there are three, and unimodal if there is a more gaussian distribution of seed sizes) are eminently modelable by Darwinian simulations. In fact in Jonathan Weiner's book he actually cites simulations, and reports other field examples as well.

What is true is that once a population has started down a path, potential solutions are constrained by what has gone before in that lineage. And in asexually reproducing species this is even more of a problem as there is far reduced opportunity to "mix and match". So in that sense, I'd agree - "natural" populations, once established, are constrained to a subset of the "solution space", by which I mean, solutions to the problem of persisting in the current environment. Once you are a tetrapod it's pretty difficult for a population to explore six-legged solutions! (Although two-legged, no-legged, and winged are all possible). But this is one of the strongest arguments for Darwinian evolution rather than ID - the fact that living things form nested hierarchies of "solutions". 
Even if a population of tetrapods finds itself in an environment where back-legs are pretty irrelevant, but a finned tail would come in handy, it has to take a machete through uncharted solution-space, rather than simply borrow a ready-made from a friendly neighbourhood fish. I should say at this stage, that during my very interesting time here at UD, I have come to the interim conclusion that the best ID arguments are not against "Darwinian evolution" at all, but against the notion that the kind of self-replicators that are a prerequisite for Darwinian evolution are adequately explained in non-ID terms. I don't think it's a watertight one, but I think it's the best you've got, by quite a long shot :) Elizabeth Liddle
So let's talk about the initial population in a GA. Often, the specific setting of each "bit" of each chromosome/genotype is generated randomly. This is highly unlike a natural population. The result is that in a GA the search for a solution begins at random points in the space of potential solutions. This increases the likelihood of a successful search. But natural populations start out very close together on the "fitness landscape." Does it then follow that the likelihood of a successful search is decreased? Does this mean it is less likely that natural populations will find novel solutions? Mung
However, if you can bear it, feel free to register at Talk Rational ...
But there's no Intelligent Design forum there. ;) Mung
But hey, you say you got lost or whatever.......fine I accept your word. ;) Upright BiPed
Elizabeth, I have been on the threads that you have been on these past days sending you little hints. You had to virtually step over the one on this thread, for instance. The specific thread you and I have been participating in ("At Some Point The Obvious Becomes Transparently Obvious") has a direct link to it that has dutifully appeared on every page UD has served up for the past untold number of days. It's hardly a task to find it. (Ahem) Upright BiPed
Given that you’ve been consistently active on this forum for the past days while the operational definition you requested had been posted (and that posting brought to your attention) then these sentiments of yours must surely be called into question.
I'm afraid I have only just seen your link. I haven't yet acquired the knack of keeping up with new posts on this site, and I'm not permanently logged on. However, I knew you were busy, and was happy to wait for your response. Had I known it was there I would have responded earlier. Please don't assume that no response means I have dropped out of the conversation. Unfortunately this site, unlike traditional forums, doesn't have any way of letting participants communicate with each other to let them know if a post needs a reply, nor does it bump recently responded-to threads. However, if you can bear it, feel free to register at Talk Rational (you don't need to post) http://www.talkrational.org/index.php and contact me by PM if you need to alert me to a post here. I am "Febble" there. That applies to anyone here BTW :) In between logins an awful lot of water has flowed under the bridge sometimes, and sometimes I miss links to replies. But *puts on stern face and growls* - please don't jump to conclusions about my integrity just because you haven't received a reply. It's much more likely that I haven't seen it, or even that I've lost the link to the thread. *puts smiley face back on* :D Cheers Lizzie Elizabeth Liddle
2.3.1 Representation (Definition of Individuals) The first step in defining an EA is to link the "real world" to the "EA world", that is, to set up a bridge between the original problem context and the problem-solving space where evolution takes place. Objects forming possible solutions within the original problem context are referred to as phenotypes, while their encoding, that is, the individuals within the EA are called genotypes. The first design step is commonly called representation, as it amounts to specifying a mapping from the phenotypes onto a set of genotypes that are said to represent these phenotypes. - A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing
So what should we make of an argument that asserts that it's the encoding that is designed, not the genotype? The encoding is the genotype. The first design step. The genotype (what I have also been calling the chromosome) in a GA is designed with a future goal in mind. The design of the genotype is critical to the successful operation of the GA. GAs are fully teleological, and therefore quite unlike biological evolution. Unless, of course, evolution is also teleological and living organisms are designed. Mung
UB: Okay, your link did not seem to work for me. G kairosfocus
#114 Yes, thank you Kairos. That is exactly the post I was referring to. Upright BiPed
2.3 Components of Evolutionary Algorithms p 18 EAs have a number of components, procedures or operators that must be specified in order to define a particular EA. The most important components are: - Representation (definition of individuals) - Evaluation function (or fitness function) - Population - Parent selection mechanism - Variation operators, recombination and mutation - Survivor selection mechanism (replacement) Each of these components must be specified in order to define a particular EA. Furthermore, to obtain a running algorithm the initialisation procedure and a termination condition must be also defined. - A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing
That's at least 6 components that need to be specified, and maybe more. The more that GAs appear to mimic biological evolution, the more life appears to be designed. Mung
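Eiben and Smith's component list maps directly onto code: a minimal GA skeleton in which every one of the components — representation, evaluation function, population, parent selection, variation operators, survivor selection, initialisation, and termination — appears as an explicit design decision. The OneMax fitness function and all parameter values are toy choices for illustration.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

GENOME_LEN, POP_SIZE, MUT_RATE, GENERATIONS = 16, 30, 0.05, 60

# Representation (design decision): fixed-length bit string
def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

# Evaluation function (design decision): toy OneMax stands in here
def fitness(g):
    return sum(g)

# Parent selection mechanism (design decision): tournament of two
def select(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

# Variation operators (design decisions): one-point crossover, bit-flip mutation
def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(g):
    return [1 - bit if random.random() < MUT_RATE else bit for bit in g]

# Initialisation, population, survivor selection (full generational
# replacement), and termination (fixed generation count)
pop = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(fitness(g) for g in pop)
```

Every line above that defines a component is a choice the programmer made; swapping any one of them (say, tournament selection for roulette-wheel) changes the algorithm's behaviour.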
...the first stage of building any evolutionary algorithm is to decide on a genetic representation of a candidate solution to the problem. This involves defining the genotype and the mapping from genotype to phenotype. When choosing a representation, it is important to choose the "right" representation for the problem being solved. Getting the representation right is one of the most difficult parts of designing a good evolutionary algorithm. - A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing
genotype = chromosome defining the genotype and mapping = design chromosome design "Getting the representation right is one of the most difficult parts of designing a good evolutionary algorithm." You don't say. Lizzie, you left that out of your really important to distinguish three things in a GA post. Just sayin' Looks like there's more design in a GA than you're willing to credit. Mung
UB: I would add that in aggregate we should have at least 1,000 bits between the two, but that is probably going to be met by anything that specifies a protocol. G kairosfocus
UB: Do you mean this post? With this?
In retrospect, when I stated that recorded information requires symbols in order to exist, it would have been more correct to say that recorded information requires both symbols and the discrete protocols that actualize them. Without symbols, recorded information cannot exist, and without protocols it cannot be transferred. Yet, we know in the cell that information both exists and is transferred . . . . Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law. And therefore, to be an actual falsification of ID, your simulation would be required to demonstrate that indeed symbols and their discrete protocols came into physical existence by nothing more than chance and physical law.
kairosfocus
Well, now. It does look as though my assumption is justified....now doesn't it? ;) Upright BiPed
Now I must get some much needed sleep - and I'm away for a little break tomorrow so I may not be commenting here for a while. ;) DrBot
There is nothing in nature or in evolutionary theory that says that populations start out less fit and get more fit. well, yes, there is actually. It’s called “adaptation” – evolving to survive optimally in a new or changing environment.
But there is also nothing that says that populations cannot sometimes lose fitness - after all, fitness is partly a result of the environment, and environments can sometimes change fast. But most of the time the trend is for adaptation to the environment (increase in fitness) DrBot
Even if the initial bit string of each candidate solution in the initial population is chosen completely at random, there is still the possibility that when one of those chromosomes is tested for fitness, it will be as fit a “genome” as you’re ever going to find during the run. You simply cannot legitimately compare the initial population in a GA with a population of organisms in nature.
Quite correct. The first round of fitness evaluations on a random population amount to a random search, and you can get lucky. It is the process that follows that is the evolutionary search. If you were modelling how a population changes over the generations you might be better starting with a population of similar individuals. If you were modelling what happened after the first self replicator came to exist (either by design or not) then you would start with a population of 1, but allow the population to grow.
And whatever else you design, you do not design the genotypes that do the stuff rather well.
Which is not the same as saying that the genotype itself is not designed, which is what the current debate is about.
you do not design the genotypes, which is not the same as saying that the genotype itself is not designed? Actually that looks exactly the same, which is why I emphasised the point about how it is the encoding scheme that is designed or imposed, not the genotype. DrBot
There is nothing in nature or in evolutionary theory that says that populations start out less fit and get more fit. well, yes, there is actually. It's called "adaptation" - evolving to survive optimally in a new or changing environment. Hence the "Origin of Species". Elizabeth Liddle
As in, how much design is involved, and where, and why. Because if we don’t get that right, if we try to compare GA’s to biological evolution, or if we think GA’s model evolution, we’ll be fooling ourselves.
That's quite correct, you just have to remember some of the things she said in that post no 43. For example, the real world exists, and fitness, or rather reproductive success, is intrinsic - it is the result of real creatures existing in the real world. In a simulation we have to model reproductive success and an environment in some form, so it has to be designed - models don't just appear in memory, we need to design an environment, a method of affecting reproductive success, of mutating genes. All aspects of a GA have to be designed or are imposed in some form; what matters when using them to study biological evolution is that they are a good approximation for the aspect of evolution being studied. When you play around with mutation rates in a simulated model of an aspect of biological evolution you are going to use the results of this parameter tuning to compare to empirical data. It's the same in physics: you can model gravity and matter, and design a simulation where planets form. The fact that gravity and matter were put in to the simulation by design doesn't mean they are designed in reality (although they could be - it's irrelevant to the purpose of the simulation). What matters is that your model is useful when trying to understand how planets might have formed, and produces results that can be compared to empirical data, and which can then be used to develop better models. DrBot
Elizabeth Liddle @98:
However, the common principle is the simple Darwinian one: you start off with a population of not-terribly fit individuals, you let them breed with a probability that is related to some fitness criterion (which obviously you design, but in nature could be anything, from camouflage to length of neck), and with variance. Then, a bit later, you find you have a population of individuals who can do stuff rather well that the original population did poorly or not at all.
There is nothing in nature or in evolutionary theory that says that populations start out less fit and get more fit. And in a GA each chromosome, even those in the initial population, is a potential (candidate) solution. Even if the initial bit string of each candidate solution in the initial population is chosen completely at random, there is still the possibility that when one of those chromosomes is tested for fitness, it will be as fit a "genome" as you're ever going to find during the run. You simply cannot legitimately compare the initial population in a GA with a population of organisms in nature.
And whatever else you design, you do not design the genotypes that do the stuff rather well.
Which is not the same as saying that the genotype itself is not designed, which is what the current debate is about. See my post @91. Mung
I’m not discussing evolutionary theory modelling in the main, I’m discussing how Genetic Algorithms work from an engineering perspective ...
And I'm discussing what it takes to get a GA to work in the first place. :) As in, how much design is involved, and where, and why. Because if we don't get that right, if we try to compare GA's to biological evolution, or if we think GA's model evolution, we'll be fooling ourselves. See Elizabeth's post @43. Mung
Yes, that's what I thought. Upright BiPed
You assume that the challenge was abandoned because? DrBot
Bot, have you been following the conversation? I hadn't noticed? Upright BiPed
You left out a third possible outcome beyond the glory of success or the admission of failure. That is the one where you simply disappear from the conversation after the odds of your success begin to stare you in the face. By abandoning the challenge after the requested operational definition was sorted out, you’ve not only failed to make your case, but you’ve also escaped the “downside” by not sticking around long enough to accept defeat.
Or simply that setting up an experiment to test this cannot be done in a week. You assume that the challenge was abandoned because? DrBot
Dr Liddle, do “operational definitions” have expiration dates? - - - - - - - - - - - When you wrote:
I am setting up a test of the hypothesis that, contrary to the claims of ID, Information (of a specific type, which we are currently trying to operationalise) can be generated without Intelligent Design. Obviously I will do my best to find a context that supports my hypothesis. But I may fail. That’s the downside (but also the glory) of science. On the other hand, if I succeed, then the ID argument fails.
You left out a third possible outcome beyond the glory of success or the admission of failure. That is the one where you simply disappear from the conversation after the odds of your success begin to stare you in the face. By abandoning the challenge after the requested operational definition was sorted out, you’ve not only failed to make your case, but you’ve also escaped the “downside” by not sticking around long enough to accept defeat. Why stoop to acknowledge the validity of your opponent’s argument, right? ;)
…your claim was that Information of the kind that is seen in living things could not be generated by Darwinian processes. I think it can, and I offered to demonstrate that it could. Sure it was a bit lacking in humility, I guess, but it’s not as though I was unprepared to put my efforts where my mouth is and risk hubris, I am.
Given that you’ve been consistently active on this forum for the past days while the operational definition you requested had been posted (and that posting brought to your attention) then these sentiments of yours must surely be called into question. Upright BiPed
Charles: I'm saying one of the things we can do with GAs. Yes, of course, we can model sexual selection, if that's what we are interested in (I've done that too). We can also let population size vary. We can have a very simple "phenotype" in which the phenotype is simply what the genotype does, or we can separate the functions, and even build in some stochastic slack between genotype and phenotype. However, the common principle is the simple Darwinian one: you start off with a population of not-terribly fit individuals, you let them breed with a probability that is related to some fitness criterion (which obviously you design, but in nature could be anything, from camouflage to length of neck), and with variance. Then, a bit later, you find you have a population of individuals who can do stuff rather well that the original population did poorly or not at all. And whatever else you design, you do not design the genotypes that do the stuff rather well. And often you found they've hit on a trick you would never have thought of. And sometimes they even cheat :) Elizabeth Liddle
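The mechanism Elizabeth describes — drawing individuals with probability proportional to fitness, then recombining two draws — can be sketched. Roulette-wheel selection is one standard way to implement "breeding with probability related to fitness"; the four-bit genomes and fitness values below are made up purely for illustration.

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def roulette_pick(population, fitnesses):
    """Draw one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if running >= r:
            return individual
    return population[-1]  # guard against floating-point edge cases

def mate(p1, p2):
    """Uniform recombination: each gene taken from either parent at random."""
    return [random.choice(pair) for pair in zip(p1, p2)]

# Toy population: the all-ones genome is twice as fit, so twice as
# likely to be drawn as a parent on any given pick.
pop = [[0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 1, 1]]
fits = [2, 2, 4]
child = mate(roulette_pick(pop, fits), roulette_pick(pop, fits))
```

Note that the fitness criterion itself is supplied by the programmer, exactly as the comment says; only which parents happen to be drawn, and how their genes recombine, is left to chance.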
Charles:
That has about as much relevance to this discussion as studying computer viruses for antibiotic research. There is nothing “genetic” about circuit design algorithms except in the fevered imaginations of engineer wannabes. Iterative formulaic revision is not evolution.
We were discussing Genetic Algorithms, so I guess an example of a genetic algorithm in use is irrelevant, yes ;) It's not biological evolution, it is artificial evolution. Both processes involve replication with variance and differential rates of replication. Both processes are evolution, but only one is biological. I'm not an 'engineer wannabe', I'm just an engineer and scientist. My wife tells me I'm quite a talented chef as well :)
I daresay there exists within your respective disciplines and literature an unwitting collective self-congratulatory exaggeration about the state of the art of evolutionary theory modelling.
I'm not discussing evolutionary theory modelling in the main, I'm discussing how Genetic Algorithms work from an engineering perspective although I have tried to put it in some context with respect to biology.
If you want to understand the poor state of evolutionary theory modelling, relative to engineering and the hard sciences, start with the sloppy conceptualizations used to obscure a lack of detailed factual understanding.
Lack of detailed understanding of what? How to use GA's to solve design problems?
if you say they have “evolvable genetic algorithms” instead of being reconfigured or reprogrammed you will likely lose credibility.
Well for my stuff certainly because it is irrelevant, although you can use GA's to optimise motion control algorithms. There are plenty of companies out there that do use genetic algorithms to create stuff, and they get venture capital and make a lot of money. Try these guys at Naturalmotion. They design software for computer games and animation studios that lets you use a virtual stuntman or character, equipped with virtual muscles and reflexes modelled on biology. They use, amongst other things, genetic algorithms to craft behaviours. Venture capitalists will give money for something that uses a GA if it is demonstrated that they could make money from it - i.e. that it works. DrBot
Mung:
The population size is also an aspect of the design of a GA.
Yes, but not always. You can design the population size to be variable if you choose, although you run into practical problems of computer power and memory requirements. For engineering purposes there doesn't seem much point in varying the population (but I may be wrong). It is only relevant if studying biology, and even then it isn't always relevant to the particular aspect of evolution being studied. If you are studying population dynamics then I guess variable populations in your model is fairly critical :) DrBot
DrBot:
Mung, here is an example from engineering of how to use a GA to evolve an electronic circuit
Well yeah, BY DESIGN! And we do not understand biological organisms well enough to simulate their evolution. Joseph
Damn, my last comment has disappeared under a discussion, and it's still in moderation. Gil - I asked something at 48, if/when you see this, I'd be interested to see your response. Heinrich
Mung:
That’s what I have been arguing. I fail to see why the question is even in dispute.
I've already explained but I'll try and expand. The first problem (partly due to wikipedia) is that the terms used are hijacked from biology and applied to evolutionary computing, and different people use different terms. Within the circles I work in, it is fairly normal to refer to the candidate solution as the genome, not the chromosome - but others, as you illustrate from wikipedia, use different terminology. In the end they are just words; we are both talking about the same things but with different terminology, and it is the thing, not the word, that is under discussion. I'm happy to use chromosome if you prefer?
The mutation operator and crossover operator employed by the genetic algorithm must take into account the chromosome’s design.
Quite true, but that doesn't mean that the chromosome has to have been designed - that's just bad terminology I suspect ;) - as I already said, when you use a GA on some systems the chromosome is intrinsic to the system, so you don't need to design it, but you do need to understand it in order to have mutation and crossover that actually work with the particular encoding. It is certainly true though that many scientists who use GA's for design tasks very carefully design encoding schemes, and there is a lot of published work in this area. When it comes to modeling biology, though, what we would really want is to directly simulate biology, so we are designing the simulator to replicate the operation of a biological encoding. We are not designing the encoding scheme ourselves (even though it might have been designed by something else) DrBot
The population size is also an aspect of the design of a GA. Mung
DrBot @70: (GA’s have been used successfully to evolve circuits on FPGA’s) That has about as much relevance to this discussion as studying computer viruses for antibiotic research. There is nothing "genetic" about circuit design algorithms except in the fevered imaginations of engineer wannabes. Iterative formulaic revision is not evolution.

Elizabeth Liddle @71: Another thing I do is “mate” solutions – I draw individuals at random from my population, the fitter ones having greater probability of being drawn, and then “mate” them with a second randomly drawn individual, and randomly recombine the genomes. That speeds things up a bit, and widens the search space, by loosening the linkage between useful and not-useful bits of the genome. As in life

And if a Darwinism/Evolution critic were to claim that in life, individuals mate at random, you would instead be lecturing us on phenotypic frequencies and adaptive genotypic selection pressures, anything but "random" mating behavior. If the researchers engineering new corn, wheat, and soybean hybrids followed your lead, the world would starve. Almost nothing about life or the real world is random. Even Brownian motion and weather are not random but predictable when sufficient measurements and observations are known.

The two of you are carelessly tossing double-entendre euphemisms around, as if enclosing them in quotes compensates for their fundamental misapplication. While you both consider yourselves aware of the limits to which you apply your metaphors and euphemisms, I daresay there exists within your respective disciplines and literature an unwitting collective self-congratulatory exaggeration about the state of the art of evolutionary theory modelling. Consider where you're now at versus Gil's original point:
Darwinian computer simulations are simply a pathetic joke as they relate to biological reality. This should be obvious to anyone with experience in the field of legitimate computer simulation.
If you want to understand the poor state of evolutionary theory modeling, relative to engineering and the hard sciences, start with the sloppy conceptualizations used to obscure a lack of detailed factual understanding. Elizabeth, you correctly noted (on a different discussion thread) the importance of a correct and detailed problem statement or restatement to its solution. I submit that the problem of simulating evolutionary theory without adhering to the precise meaning of terms like "in life" (i.e., in vivo), "evolution" (which is not programmed, iterative non-self-replicating revision), "random mutation" and "natural selection" will lead to innumerable irrelevant results.

Conversely, DrBot, I submit that were you to seek funds from venture capitalists for your robotic servo startup, and one of them questioned how adaptable they were to changing robotic designs, if you say they have "evolvable genetic algorithms" instead of being reconfigured or reprogrammed you will likely lose credibility. Investors in companies want to understand how a product actually works, not be snowed with exaggerated marketing language. Charles
Chromosome Design
I think Genome is more appropriate than chromosome...But if you prefer chromosome then that's fine.
Well, if it's a wiki war you want! ;)
In genetic algorithms, a chromosome (also sometimes called a genome) is a set of parameters which define a proposed solution to the problem that the genetic algorithm is trying to solve. The chromosome is often represented as a simple string, although a wide variety of other data structures are also used. - Chromosome (genetic algorithm)
I just love the next section on that page. It's called Chromosome design
The design of the chromosome and its parameters is by necessity specific to the problem to be solved. The mutation operator and crossover operator employed by the genetic algorithm must take into account the chromosome's design.
That's what I have been arguing. I fail to see why the question is even in dispute. Mung
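The Wikipedia point quoted above, that the mutation and crossover operators must take the chromosome's design into account, can be illustrated with a minimal sketch. Everything here (the binary encoding, the toy knapsack framing, all names) is a hypothetical example invented for illustration, not something from the thread:

```python
import random

# Chromosome design for a toy problem (hypothetical example): a
# fixed-length bit string where bit i means "include item i" in a
# knapsack. The operators below only make sense for this encoding.
CHROMOSOME_LENGTH = 8

def random_chromosome():
    return [random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]

def mutate(chromosome, rate=0.1):
    # Bit-flip mutation: valid only because the encoding is binary.
    return [bit ^ 1 if random.random() < rate else bit
            for bit in chromosome]

def crossover(a, b):
    # One-point crossover: assumes both parents share a fixed length.
    point = random.randrange(1, CHROMOSOME_LENGTH)
    return a[:point] + b[point:]
```

Choosing a binary string dictates bit-flip mutation; a permutation or tree encoding would demand entirely different operators, which is the coupling the quoted passage describes.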
First, you have to decide that it’s actually going to be a bit string, that you’re going to represent candidate solutions using the two bits 0 and 1. That is a design decision. Second, you have to decide what each series of bits in the chromosome means. That is a design decision.
Not for the person evolving the circuit in my example. The FPGA is designed, obviously, and the method of configuring it is part of the design, but it was not designed as a system for evolving circuits. In fact, until Thompson's work most people didn't think you could evolve them, or it hadn't occurred to them to try. When it comes to evolving them - applying a GA - the encoding scheme is not designed by the person using the GA; it is an intrinsic part of the system being evolved, so from that perspective it is imposed by the system and not a design choice. Of course you can design an indirect encoding like the one I described, in which case yes, you are designing the encoding scheme, but I still think it is incorrect to say you are designing the genome or chromosome. We may just be quibbling over semantic details here - when I say 'designing a genome' I mean manually configuring the contents of the genome, not specifying the parameters within which a genome can operate, and how it is interpreted or mapped to a phenotype.
Getting the chromosome right is a huge part, one might even say a necessary part, of getting the GA to solve the problem.
As I've pointed out above, sometimes you don't need to design it because it is intrinsic to the thing you are applying the GA to - the GA is just manipulating the configuration of a system. Now if we map this back to biology, which is what I suspect you are trying to make a point about: it makes no difference whether the method of encoding and inheriting information in a cell or organism is the product of deliberate design or not. If first life was designed, rather than a product of some complex chemistry, then the encoding scheme was designed; otherwise it was the product of natural forces. Either way, evolution still occurs - genomes get replicated with variance, resulting in differential survival rates. Evolution is a process that occurs when you have replication with variance resulting in differential rates of subsequent replication. The origin of the replicator (design or chemistry) makes no difference. God can design life to evolve, and we can study and replicate the processes. DrBot
Mung, I think Genome is more appropriate than chromosome. From wikipedia
In modern molecular biology and genetics, the genome is the entirety of an organism's hereditary information.
Although none of the terms are entirely 'right' - the FPGA is 'inherited' in one sense, but in another you could say that it is an environment and every individual gets a 'turn' in the environment. Perhaps it is best to just consider the elements of inheritance that are subject to variation (the bit string) as being like a genome and the FPGA as being like the cell body, but again these are analogies to a term derived from biology. It only makes sense to take them literally if we are actually talking about GA's for direct modeling of biology, not as design tools for engineers. But if you prefer chromosome then that's fine. DrBot
Search Space If you have a bit string of length 3, you will have 8 possible configurations of the bit string. 2^3 = 8 That's the size of the search space. The only way to increase the size is to increase the length of the bit string. You don't increase the size by swapping bits around, which is typically all that happens during crossover. Mung
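To make the arithmetic concrete, here is a small sketch (illustrative code, not from the thread) enumerating the 2^3 = 8 configurations and checking that crossover only moves between points already in the space:

```python
from itertools import product

# A bit string of length 3 has 2**3 = 8 possible configurations.
configs = list(product([0, 1], repeat=3))
assert len(configs) == 2 ** 3 == 8

def one_point_crossover(a, b, point):
    # Swapping tails between two strings of the same length yields
    # another point in the same space; the space itself is unchanged.
    return a[:point] + b[point:]

child = one_point_crossover((0, 1, 1), (1, 0, 0), point=2)
assert child in configs  # still one of the original 8 points
```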
Search Space Elizabeth Liddle:
Either way, crossover tends to widen the search space because without crossover, linkage tends to restrict the searchable space.
You're not using "search space" in the way I mean the term. And I don't think you're using it according to the accepted meaning of the term. See my post @62. Also:
If we are solving some problem, we are usually looking for some solution, which will be the best among others. The space of all feasible solutions (meaning objects among which the desired solution is) is called the search space (also state space). Each point in the search space represents one feasible solution. Each feasible solution can be "marked" by its value or fitness for the problem. We are looking for our solution, which is one point (or more) among feasible solutions - that is, one point in the search space. Looking for a solution is then equal to looking for some extreme (minimum or maximum) in the search space. The search space can be wholly known by the time of solving a problem, but usually we know only a few points from it and we generate other points as the process of finding the solution continues. - Search Space
So with mutation and crossover you are generating new points in the search space, not changing the size of the search space itself.
Search Space - All possible solutions to the problem - Genetic Algorithms Overview
Changing a chromosome does not equate to increasing the number of possible solutions. Mung
Crossover Elizabeth Liddle:
Crossover can change the length of the bit string. I’ve done it both ways.
It can. Only one version of crossover listed on Wikipedia changes the length of the strings, "Cut and Splice." http://en.wikipedia.org/wiki/Crossover_%28genetic_algorithm%29 But if it does lead to a child that has a longer or shorter bit string, that has to be taken into account and is a part of the design. Mung
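The contrast being drawn can be sketched roughly as follows (illustrative code following the Wikipedia descriptions, with function names invented for the example):

```python
import random

def one_point(a, b):
    # Standard one-point crossover: a single shared cut point, so
    # offspring lengths equal the parents' (assumes len(a) == len(b)).
    k = random.randrange(1, len(a))
    return a[:k] + b[k:], b[:k] + a[k:]

def cut_and_splice(a, b):
    # Cut and splice: each parent gets its own independent cut point,
    # so the offspring are generally longer or shorter than the parents.
    i = random.randrange(1, len(a))
    j = random.randrange(1, len(b))
    return a[:i] + b[j:], b[:j] + a[i:]
```

If cut and splice is used, every downstream component (the fitness evaluation, the genotype-to-phenotype mapping) must tolerate variable-length strings, which is the design consideration being pointed to.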
How are you defining each, in this context, or are you using them interchangeably?
I've been using them interchangeably. I'd prefer to use chromosome. See my post @62. Mung
Elizabeth, Point taken, we may be getting caught up in terminology - in the case of the FPGA the search space is defined by the permitted length of the bit string, which could be the total configuration possibilities for the piece of hardware, or it could be deliberately limited. This is different to the searchable space, which may only be a fraction of the actual space available and, as you point out, is determined partly by the particular GA used (and of course the topology of the search space). DrBot
DrBot:
So to answer your question, assuming I understood it, the genome or chromosome is not designed (apart from by the GA) but the encoding scheme can either be designed, chosen or imposed, depending on the task.
The chromosome is not designed by the GA. It is at times modified by one or more operators, such as mutation, crossover, and selection. Those operators are designed. There is no "willy-nilly" about it. The encoding scheme has to be encoded in the chromosome. That's what I mean by "the chromosome is designed." Let's take your basic bit string as an example. First, you have to decide that it's actually going to be a bit string, that you're going to represent candidate solutions using the two bits 0 and 1. That is a design decision. Second, you have to decide what each series of bits in the chromosome means. That is a design decision. Even the choice to start the chromosomes by seeding them with a random sequence of bits is a design decision. Why not start them with all 0's, or all 1's? There is a coherence between the bit string and the problem that you're trying to solve. That coherence is designed. If it were not, you would have little to no hope of coming up with a solution to your problem by modifying the candidate solutions. Getting the chromosome right is a huge part, one might even say a necessary part, of getting the GA to solve the problem. True? Now while neither you nor Elizabeth has come right out and stated that the chromosome does not need to be designed, you certainly leave room for people to think you are saying so. So if you want to continue in that vein, please come right out and state that how the chromosome is configured doesn't matter. Then we can put that claim to the test. But we all know that it does matter. :) Mung
Mung: you have used the words "genome" and "chromosome" with regard to a GA. How are you defining each, in this context, or are you using them interchangeably? Elizabeth Liddle
Crossover can change the length of the bit string. I've done it both ways. If you randomise the cut points, you can, as in life, end up with a shorter or longer string. Either way, crossover tends to widen the search space because without crossover, linkage tends to restrict the searchable space. Elizabeth Liddle
Elizabeth: "Another thing I do is “mate” solutions ... That speeds things up a bit, and widens the search space..." ME: "I think this language is imprecise. You’re not widening the size of the search space." DrBot: "I think that's correct for the example I gave – the size of the search space is determined by the length of the bit string." I think you're agreeing with me, lol. I just want to make sure. Elizabeth was talking of using crossover which she referred to as mating. Crossover doesn't increase the length of the bit string.
...the size of the search space is determined by the length of the bit string.
I agree. Crossover ("mating") does not widen the search space. I just felt that she probably meant to say it "widens" the search in some way, not that it changes the size of the search space. Mung
Mung
I think this language is imprecise. You’re not widening the size of the search space.
I think that's correct for the example I gave - the size of the search space is determined by the length of the bit string. It is possible to use variable-length genomes though, but in the case of the example I gave the total search space would still be the length of a bit string required to fully configure an FPGA, but candidate solutions could have a genotype much shorter than this (and consequently only use a limited portion of the FPGA real estate). You can, in theory, use a variable-length genome with no length limit, rendering the search space infinite, but this isn't very useful (and on a computer you have an effective limit imposed by memory). The example I gave is one of direct genotype-to-phenotype mapping. You could extend the experiment to an indirect mapping (although I don't know if it would help things): basically your genotype could encode an FPGA configuration, and you then use this configuration to 'clock out' a new bit string that encodes for a new FPGA configuration, which is then used to test the design fitness. Like I said, I have no idea if this would be useful, but some indirect encodings can be powerful, for example by coding for repeating patterns so the pattern is only described once in the genotype but gets repeated many times in the phenotype. To your earlier post,
That’s a bit ambiguous. It’s the fpga that is the phenotype in the example, correct? Not the binary string?
Correct.
So all that remains, at this point, imo, is to discuss whether the genotype in his example is designed.
I suppose you could describe it that way but it seems a little odd. The encoding scheme is chosen but the genotypes of individuals are random (to start with) within the bounds of the encoding scheme. In the case of the FPGA the encoding scheme is not really designed or chosen, it is imposed by the hardware - these chips are designed to be configured by a bit string so we are just using a GA to generate a bit string that will produce behavior we want. So to answer your question, assuming I understood it, the genome or chromosome is not designed (apart from by the GA) but the encoding scheme can either be designed, chosen or imposed, depending on the task. DrBot
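The variable-length-genome idea discussed in this comment, with a direct mapping onto a fixed-size device, could be sketched minimally like this (the 64-bit toy "FPGA", the zero-fill default, and all names are assumptions invented for illustration):

```python
import random

FPGA_BITS = 64  # assumed total configuration length of a toy "FPGA"

def random_genome(max_len=FPGA_BITS):
    # Variable-length genome: a candidate solution may configure only
    # part of the available hardware real estate.
    n = random.randint(1, max_len)
    return [random.randint(0, 1) for _ in range(n)]

def to_configuration(genome):
    # Direct mapping: the genome fills the first len(genome) bits of
    # the device; the remainder is left in a default (zero) state.
    return genome + [0] * (FPGA_BITS - len(genome))
```

The search space here is still bounded by the full configuration length, even though individual genotypes can be much shorter.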
GA’s have been used successfully to evolve circuits on FPGA’s
Not an issue in dispute. What is in dispute is how analogous a GA is to biological evolution: what aspects of a GA are designed, and which aspects are not. The more aspects of a GA that are designed, and the closer the analogy to biological evolution, the stronger the case that biological evolution is designed. Lizzie was only allowing for design at two points in a GA; I claimed there were more, and that what was being left out was a significant aspect of the function of a GA, e.g., the chromosome itself. Mung
Elizabeth:
Another thing I do is “mate” solutions ... That speeds things up a bit, and widens the search space,
I think this language is imprecise. You're not widening the size of the search space. Mung
Elizabeth Liddle @69:
Well, looks like we had better use some specific examples. Would you like to pick a GA that you have in mind?
First and foremost, let's be clear that at this time I am talking about one thing and one thing only. The "chromosome" or "genotype" in a GA. I have three claims: 1. You left the chromosome out of your list. 2. The chromosome itself is designed. 3. That's a significant oversight.
Well, looks like we had better use some specific examples. Would you like to pick a GA that you have in mind?
Take DrBot's example @70:
You have an FPGA (reconfigurable array of logic gates). The configuration is determined by a binary string (lots of 1's and 0's). This is the phenotype. It is connected to some test equipment (the environment).
That's a bit ambiguous. It's the fpga that is the phenotype in the example, correct? Not the binary string?
Let's generate a starting population of 30 – we generate 30 entirely random bit strings (genotypes).
In the terms I've been using, those bit strings (binary strings) are the chromosomes (or genotypes). And that's what I am talking about. So far, I think DrBot and I are on the same page. So all that remains, at this point, imo, is to discuss whether the genotype in his example is designed. I say the chromosome is designed. It has to work with the other aspects of the GA, such as the phenotype and fitness function. It's not just some isolated non-designed entity that just happens to accidentally function in the context of the GA. But do we really need to debate that? Don't you both already know it is the case?
...load the population member (random bitstring) into the fpga (map the genotype to the phenotype) then perform a fitness test...
I mentioned the mapping requirement in an earlier post. Mung @61
2. There needs to be a mapping (also designed) of the individuals to the designed fitness function.
Now if DrBot's GA is not enough, if you want a different example, I have two I can suggest. The first is the ev program. The Java source code is available online. The second is we could look at the GA at: http://www.cleveralgorithms.com/ The entire book and code is online. Mung
heh. I did my PhD in a motor control lab, and I agree :) Elizabeth Liddle
Apart from a bit of undergraduate teaching (non symbolic AI) I've just set up a company, currently in stealth mode, developing electric servo actuator systems for autonomous robots where the servos can be programmed to behave as compliant mechanisms (for example like antagonistic muscles) rather than the normal rigid systems used in most robots today. I'm firmly of the opinion that most robots built today are somewhat of a dead end because they treat actuators and the joints they control as sources of movement only - they impose motion - whereas biological joints are variable stiffness mechanisms that can accept kinematic inputs from the environment, as well as generating motion. A really good example would be passive dynamic and ballistic walking, which Cornell university has done some seminal work on. When I say I just set up the company it basically means I'm working from home with almost no income and a 2yo daughter to manage ;) the UK job market is a bit sparse at the moment, even in the sciences! DrBot
Cool! What do you work on now? Elizabeth Liddle
You can find a page of early work by Adrian Thompson on evolvable hardware here (Adrian was one of the people who examined my PhD thesis) DrBot
Another thing I do is "mate" solutions - I draw individuals at random from my population, the fitter ones having greater probability of being drawn, and then "mate" them with a second randomly drawn individual, and randomly recombine the genomes. That speeds things up a bit, and widens the search space, by loosening the linkage between useful and not-useful bits of the genome. As in life :) Elizabeth Liddle
Mung, here is an example from engineering of how to use a GA to evolve an electronic circuit.

The experimental setup: You have an FPGA (reconfigurable array of logic gates). The configuration is determined by a binary string (lots of 1's and 0's). This is the phenotype. It is connected to some test equipment (the environment).

The goal: Evolve an 8 bit adder.

The process: Let's generate a starting population of 30 - we generate 30 entirely random bit strings (genotypes). Now we need a method of evaluating fitness - load the population member (random bitstring) into the fpga (map the genotype to the phenotype), then perform a fitness test - generate two 8 bit values, apply them to the FPGA inputs, clock the system and measure 8 outputs, and score the result by seeing how close the output is to what adding these two 8 bit values ought to be. Repeat this a few times with different random inputs and average the results to get the final fitness score.

Now for reproduction and mutation (using the simplest, but not most effective, method): Pick two individuals at random and test the fitness of each using the method described above. Overwrite the less fit individual with the fitter one (reproduction), then pick a random number of bits in the bitstring of the individual that was copied and flip them (mutation). Now repeat this process a few thousand times, or implement an algorithm that will terminate the process if the average fitness stops increasing for too long (the population gets stuck on a local maximum) or fitness reaches a high enough score. Basically, individuals that score higher have a greater chance of reproducing.

What you ought to end up with is a population of FPGA configurations that are reasonably good (but probably not perfect) at adding 8 bit numbers. The results will probably be variable - if you do the experiment several times then some runs would produce much better candidate solutions than others - it depends a lot on the starting populations.
(GA's have been used successfully to evolve circuits on FPGA's) DrBot
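The process described in this comment is precise enough to sketch in code. Since no FPGA is available in plain software, the "load into the FPGA and score it as an 8-bit adder" step is replaced here with a toy fitness (closeness to a fixed target string); that substitution, and every name below, is an assumption for illustration only:

```python
import random

random.seed(1)

GENOME_LEN = 32   # stand-in for the FPGA configuration bit string
POP_SIZE = 30
STEPS = 3000

# Toy fitness: closeness to a fixed target string, standing in for
# averaging adder test scores measured on real hardware.
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

# Starting population: entirely random bit strings (genotypes).
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(STEPS):
    # Pick two individuals at random and compare their fitness.
    i, j = random.sample(range(POP_SIZE), 2)
    if fitness(population[i]) < fitness(population[j]):
        i, j = j, i                     # make i the fitter index
    # Reproduction: overwrite the less fit with a copy of the fitter,
    # then mutate the copy by flipping a random number of bits.
    child = population[i][:]
    for _ in range(random.randint(1, 3)):
        child[random.randrange(GENOME_LEN)] ^= 1
    population[j] = child

best = max(population, key=fitness)
```

As the comment predicts, runs typically end with a population that is reasonably good but not perfect, since mutation keeps perturbing even the best copies.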
Well, looks like we had better use some specific examples. Would you like to pick a GA that you have in mind? Elizabeth Liddle
Gil - can you respond to my question at 48? It's only just been released from moderation, so you might not have seen it earlier. Of course, this comment might not appear for another 4 days either. :-( Heinrich
What gets designed by the GA is an optimally fit phenotype.
The GA itself doesn't design anything.
But those genotypes can be randomly generated, so that the corresponding phenotypes are all over the shop in terms of fitness.
So I could randomly generate some strings and you could use those as the starting population in one of your GA's and it would work fine? We both know that's not true, so what on earth do you mean?
You have to have, effectively, a starting population of genotypes, and some kind of way of relating the “genotype” to a “phenotype” (essentially, what the genotype does).
You have to have, effectively, a starting population of designed genotypes. Am I missing something, or didn't you already admit that? And if you're going to toss in phenotypes, those too would have to be designed. But seriously, phenotypes in a GA? What's an example of a phenotype in one of your GA's?
Whereas in typical GA applications evolution works directly on a population of candidate solutions, in nature there is a separation between genotypes and phenotypes (candidate solutions). - An Introduction to Genetic Algorithms
In fact, there is very little indeed about a GA that is not designed. So sure, if you want to use a GA as analogous to living things and evolution go right ahead. You're actually making a stronger case for design. Since you appear to have missed the relevant material, let me expand:
In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The "genes" are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution. - An Introduction to Genetic Algorithms
The encoding is designed. It's a critical part of the picture in a GA. You left it out of your things to keep in mind about GAs. Surely you are aware of how important it is to get the "chromosome" right in a GA. You can't use just any old chromosome. And surely you aren't going to tell us that the chromosome doesn't have to be designed with the solution in mind. Well, evolution, or so we are told, is not like that. Mung
Well, it doesn't "magically" work, obviously. You have to have, effectively, a starting population of genotypes, and some kind of way of relating the "genotype" to a "phenotype" (essentially, what the genotype does). But those genotypes can be randomly generated, so that the corresponding phenotypes are all over the shop in terms of fitness. What gets designed by the GA is an optimally fit phenotype. Elizabeth Liddle
Elizabeth Liddle:
Mung: the initial population in a GA is usually designed, although the individuals may be extremely simple.
I thought that the initial population was generated randomly. Do you do it differently in your GA's? But it should have been clear that I am talking about the "genome" or "chromosome," the entity that is used to represent an "individual" in the "population." That representation is designed. You cannot just take any old binary string and have it magically work. You have to choose an appropriate chromosome. Mung
Mung:
In a GA, one has to encode the potential solutions into a genome. That is an extremely important part of the process, and it too is designed.
In what sense are the "potential solutions encoded into" the genome of a GA? (I mean, in some GAs they may be, but not the ones I am familiar with, unless we are using the term in different senses, which is possible, given our track record :)) Elizabeth Liddle
Mung: the initial population in a GA is usually designed, although the individuals may be extremely simple. The final population has been "designed", from that simple prototype, by the Darwinian process. You could conclude from this that the original living things must therefore have been designed, but you could also conclude that their final diversity could be accounted for by the Darwinian process. And Darwin's theory does not attempt to address how the original simplest self-replicator came into existence. He specifically excludes this. I don't think we need infer a Designer for that part of the process, but you may want to. But then it wouldn't be an argument against Darwinian evolution, but an argument against natural abiogenesis. Elizabeth Liddle
I repeat: The "organisms" in a GA are designed. Lizzie left this out. The obvious conclusion is that living organisms are also designed. Mung
GAs are "general purpose" search methods... In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The idea of searching among a collection of candidate solutions for a desired solution is so common in computer science that it has been given its own name: searching in a "search space." Each chromosome can be thought of as a point in the search space of candidate solutions. The GA most often requires a fitness function that assigns a score (fitness) to each chromosome in the current population. The fitness of a chromosome depends on how well that chromosome solves the problem at hand. ...candidate solutions to a problem are encoded as abstract chromosomes encoded as strings of symbols, with fitness functions defined on the resulting space of strings. A genetic algorithm is a method for searching such fitness landscapes for highly fit strings. - Melanie Mitchell, An Introduction to Genetic Algorithms
Candidate solutions, not solutions. Encoded (by the designer). A point in the search space (designed). How well it solves a [target] problem (designed). Searching for = teleological. Target. Solution. Designed. Not like Darwinian evolution. Mung
The first issue to be faced is how to represent the individuals (organisms) that make up an evolving population. A fairly general technique is to describe an individual as a fixed-length vector of L features that are chosen presumably because of their (potential) relevance to estimating an individual's fitness. - Kenneth A. De Jong, Evolutionary Computation
1. It's the population that evolves, not the solution. 2. There needs to be a mapping (also designed) of the individuals to the designed fitness function. Mung
Elizabeth Liddle:
Not sure which aspect you mean, unless it’s the computer I run it on.
See my post at @54 In a GA, one has to encode the potential solutions into a genome. That is an extremely important part of the process, and it too is designed. Chalk up one more for intelligent design. Mung
Elizabeth Liddle @55: Thank you for the trouble of your response. I think that part of the misunderstanding (and I believe it is a misunderstanding) arises because someone who generally accepts evolutionary theory and also publishes a simulation is assumed to be asserting that their simulation proves evolutionary theory. Arguably, it is assumed that the simulation in fact abides by the tenets of evolutionary theory, the core tenets of which are random mutation and natural selection. Any simulation that "short-cuts" these two tenets, while possibly of some practical benefit in analyzing data to classify it or automate the generation of probability distributions, cannot honestly be labeled an "evolutionary" simulation, certainly not without a multitude of "it depends" caveats. In reality, what the simulation does is test a specific hypothesis. That hypothesis may be supported, and the conclusion may in turn support evolutionary theory, but the idea that such simulations are the core of the evidence for evolutionary theory is, I think, simply wrong. Arguably, your simulations seem more like tests of varieties of Mendelian inheritance and searching for patterns in existing genomes (somewhat like cladistic analysis). You seem to be testing the probabilities of some pre-determined mutation becoming fixed in a population. You seem to be using a random number only to trigger whether the mutation manifests, but not the actual character of the mutation itself (the kind of mutation, its size, where on the genome it occurs, etc., are not random but were predetermined by you for your particular simulation purpose). What you are not simulating is a random mutation arising in some genotype, being manifested in a phenotype, increasing the reproductive and survival fitness of parents and offspring, and becoming fixed in the population.
You are not simulating random mutation and natural selection giving rise to a few beneficial but mostly deleterious de novo traits (what "evolution" is assumed to be). Arguably, your simulations seem more attuned to genetic engineering research in which selected traits (not random traits) are being probabilistically analyzed for success. While I may have mischaracterized what you've actually simulated, I won't play "twenty questions" to ferret out the essential details. Obviously it all "depends". That is a given. But did your simulations actually depend on random mutation and natural selection? Seems not. Were they actually lab and field tested? Depending on the kind of classifying you intended, yes. But depending on trying to understand some facet of evolutionary theory, again it seems not; you implied you could develop such a simulation, but not that you actually did. It also seems you don't use a single simulator which implements a consistent algorithm, but rather a variety of simulators each with customized algorithms, depending again on what you intend to learn. In and of itself that is not unreasonable. But I wonder how much mathematical and theoretical consistency all the simulators share in common. If they all shared the same computational routines, then perhaps there is a lot of consistency. But if they each use individually 'tweaked' or customized routines, then they collectively don't simulate a single consistent "evolutionary theory" but rather simulate several different varieties (or modes?) of a theory, which necessitates further caveats. There is no “core” evidence – there is simply IMO an abundance of circumstantial evidence for so many aspects of the theory that much of it we can take for (virtually) granted.
There was likewise an abundance of circumstantial evidence for so many aspects of the (now defunct) theories of a geocentric solar system, spontaneous generation, absolute time, etc., and yet as greater attention was paid to the inconsistent details in the circumstantial evidence, as the hidden mechanisms at work were understood, those quaint theories were proven false. If there were few or no inconsistencies in evolutionary theory, it might be reasonable to take it (virtually) for granted. But that isn't the case. While theories are revised to reflect new observations, the core tenets of those theories are nonetheless repeatedly tested against reality and almost never taken for granted. Contrast 'taking evolutionary theory virtually for granted' with the standard model of physics and the "law" of gravity: as successful as they are, they are not taken for granted because there remain many nagging inconsistencies. Especially in view of how "difficult life science is", as you note, all the more reason not to "take it (virtually) for granted". While there is merit in simulations intended to explore a narrow set of data, the error of generalization (which you rightly cite) is committed by those who implicitly or explicitly characterize such simulations as consistent with evolution, when in fact no such consistency is in evidence, certainly no consistency on evolution's core tenets of random mutation and natural selection having been closely modeled in the simulation. Charles
Elizabeth Liddle:
I think it is really important to distinguish three things in a GA: The solution to the problem. This is not designed by the GA designer, but evolves.
That makes no sense to me. What do you mean the solution to the problem evolves?
OK, let me give a very simple (and not terribly typical) example. I have two sets of structural brain images. Each brain image consists of a 3D matrix of "voxels". I want to know what patterns reliably distinguish Images A from Images B. So I set up a template binary logistic regression equation, in which a linear combination of some parameter times the value of some voxel gives a probability that the brain in question belongs to one group or the other. I have no idea at the beginning what the difference between the brains is, if any. And I start by generating a population of "equations", each of which randomly selects any number of voxels, from any part of the brain, multiplies each by a randomly drawn parameter, and outputs a probability, for each brain, that it belongs to one group or another. And then I look at the accuracy of each equation (well, it's automated of course). Most of course will get 50% right and 50% wrong. But some, fortuitously, will get slightly more than 50% right. So I "breed" from those (I usually use "sexual reproduction", so the "offspring" are random combinations of two "parents", but I also "mutate" the parameters, the number of voxels selected, and which voxels are selected). And very quickly I find myself with a population of equations that classify the brain images rather well. I then test my classifier on a completely different set of brain images. Often this goes wrong, of course (my "population" of equations has "evolved" to survive well in the very specific "environment" of the initial dataset, but can't cope in a different environment), so we go back to the drawing board. But if all goes well, we find we have a classifier that not only can tell, with considerable reliability, which images belong to set A and which to set B, but which tells us what the patterns are that distinguish the two. That is the sense (in this example) in which the solution evolves.
We start off with random equations that perform no better than chance, and we end up with an equation that can not only distinguish between categories of brain images, but can tell us what patterns distinguish them. In other words, I end up with information, in a very real sense, that I did not have at the beginning, and that I did not put into the algorithm. The solution is what you have at the end of a run. (Or not. Perhaps no solution is found.) Either way, whether a solution to the problem is located or not, it makes no sense to speak of a solution that evolves. Well, it does to me. Halfway through I might have a classifier that is better than chance, but still produces false negatives and/or false positives. By the end, I hope for near-perfect classification. "Evolving" seems to me exactly what the solution does.
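The classifier-breeding scheme Elizabeth describes can be sketched in a few dozen lines. This is a minimal illustration, not her actual code: the data are synthetic (a made-up "signal" planted in three voxels), and the population size, mutation rates, and thresholds are arbitrary choices for the demonstration.

```python
import random
import math

random.seed(0)

N_VOXELS, N_IMAGES = 50, 40
labels = [i % 2 for i in range(N_IMAGES)]          # 0 = "group A", 1 = "group B"
# Group B images carry a small extra signal in voxels 3, 7 and 11 --
# the (made-up) pattern the GA is supposed to discover.
images = [[random.gauss(0, 1) + (0.8 if labels[i] and v in (3, 7, 11) else 0.0)
           for v in range(N_VOXELS)] for i in range(N_IMAGES)]

def predict(eq, image):
    """Binary logistic 'equation': sigmoid of a weighted sum of chosen voxels."""
    z = sum(w * image[v] for v, w in eq)
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def accuracy(eq):
    hits = sum((predict(eq, img) > 0.5) == bool(lab)
               for img, lab in zip(images, labels))
    return hits / N_IMAGES

def random_equation():
    """Random choice of voxels, each multiplied by a random parameter."""
    return [(random.randrange(N_VOXELS), random.gauss(0, 1))
            for _ in range(random.randint(1, 6))]

def breed(p1, p2):
    """Offspring: a random mix of the two parents' terms, then mutation."""
    child = [t for t in p1 + p2 if random.random() < 0.5] or [random.choice(p1 + p2)]
    child = [(v, w + random.gauss(0, 0.3)) for v, w in child]   # mutate parameters
    if random.random() < 0.2:                                   # mutate voxel choice
        child.append((random.randrange(N_VOXELS), random.gauss(0, 1)))
    return child

pop = [random_equation() for _ in range(60)]
start = max(accuracy(eq) for eq in pop)
for _ in range(40):
    pop.sort(key=accuracy, reverse=True)
    survivors = pop[:30]                    # cull the bottom half each generation
    pop = survivors + [breed(random.choice(survivors), random.choice(survivors))
                       for _ in range(30)]
best = max(accuracy(eq) for eq in pop)
```

Because the top half of each generation survives intact, the best accuracy never decreases from run start to run end; and the interesting output, as the comment notes, is which voxels the surviving equations ended up using.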
And really, you left out the most important aspect of a GA. Also designed, I might add.
Not sure which aspect you mean, unless it's the computer I run it on. If so, sure. Elizabeth Liddle
DrBot @53:
If you stop the process part way through you still have a solution (or rather a population of candidate solutions) – it just won’t usually be as good a solution as if you leave it running for longer.
Precisely my point. You don't have a solution. You have a population of "candidate solutions" which may or may not actually constitute a solution to the problem the GA is trying to solve. You don't have a "solution" until you choose one of the candidates, and even then it may not actually be a solution.
That is why it is entirely correct to say that the solutions(s) evolve.
Only by mangling the English language beyond recognition. The potential solutions "evolve." Or are you claiming that all the potential solutions "evolve" to converge on the same solution if you just let the GA run long enough?
The effectiveness of the solution is gradually increased over time using a process of replication, modification and selection (usually called evolution)
Do GA's always solve the problem posed? If not, then you're arguing that the effectiveness of non-solutions is gradually increased. In what world? Look, we're not morons here. Give us some credit. Mung
Charles, thanks for this:
Ok, help me understand what you’re specifically talking about, and then I’ll translate my argument to your specifics.
Well, what I'm doing depends on what I'm trying to do! So the answers to your questions vary. But I'll try to give some typical answers:
In your program: 1) how do you model an individual’s genome
Again, it depends what I am trying to do. If it's really simple, I simply model it as a repertoire of possible outputs with a frequency distribution. Or it could be a linear equation, in which the parameters are randomly adjusted; or it could be a potentially non-linear equation in which the terms themselves can be added or removed. Those approaches are useful for classifiers. If (for fun, mainly) I'm trying to model some aspect of evolution (as opposed to learning, or trying to solve a problem) then the genome may consist of a randomly adjustable "code" where different code sequences result in different "phenotypic" behaviour.
2) how many mutations per generation
That's usually an adjustable parameter. Sometimes it's interesting to see what the threshold is at which "evolution" breaks down, or does not occur. Again, it depends on the purpose of the program. However, I often use recombination as part of my "breeding" algorithm (as in a sexually reproducing population). With a large enough genome, sometimes very few "external" mutations are required, because plenty of variants are produced by combining sequences from one parent genome with sequences from the other. In which case the cut points would also usually be random.
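A minimal sketch of the kind of recombination described here, assuming a simple list-shaped genome (the function name and genome shape are illustrative). Because the two random cut points can differ, the child can differ from either parent in length as well as content, which is why recombination alone produces variants:

```python
import random

def recombine(mum, dad, rng=random):
    """One random cut point in each parent; child = mum's head + dad's tail.
    Differing cut points mean the child's genome can change length."""
    cut_m = rng.randrange(len(mum) + 1)
    cut_d = rng.randrange(len(dad) + 1)
    return mum[:cut_m] + dad[cut_d:]

rng = random.Random(1)
mum, dad = list("ABCDEF"), list("uvwxyz")
child = recombine(mum, dad, rng)
```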
3) how much of the genome is mutated with each generation
Again, that depends on the purpose of the simulation.
4) how does a random number determine a specific mutation
In various ways. What I usually do is to generate random numbers between 0 and 1 and have a threshold (another adjustable parameter) below which a component of the genome is mutated; if it is to be mutated, again, it depends on what I am trying to do. It could be that I replace an element of the genome with another element, drawn from a probability distribution either determined a priori, or, in some cases, that pdf itself could be part of the simulation. Or, in the simplest models, I just add random "noise" to the behavioural repertoire.
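The threshold scheme described above might look like this in outline; `rate` and `scale` are the adjustable parameters, and both names are purely illustrative:

```python
import random

def mutate(genome, rate=0.05, scale=0.3, rng=random):
    """Per-element mutation: draw a uniform number in [0, 1) for each
    element and, when it falls below the (adjustable) threshold `rate`,
    perturb that element with Gaussian noise of width `scale`."""
    return [g + rng.gauss(0, scale) if rng.random() < rate else g
            for g in genome]

rng = random.Random(2)
genome = [0.0] * 20
mutant = mutate(genome, rate=0.25, rng=rng)
```

Replacing the Gaussian perturbation with a draw from some other probability distribution (fixed a priori, or itself evolvable) covers the other variants mentioned in the answer.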
5) what is the size of each mutation
Again, it depends. Sometimes it will be a change in an individual element of the genome. However, with recombination, the mutations can be quite large, in that the "child" allele may be a different size from the parent, and may differ from it in various ways, including numbers of repeats, and actual sequences.
6) how is a mutation’s benefit or cost determined
Depends on the purpose of the program. For learning models, I will build in "feedback" ("reward" or "penalty"). For classifiers, similarly, the better the classification (the more correct assignments) the better chance that genome has of replication. But it's possible (and I've so far only played around with this) to dispense with a formal fitness function, and have the ability to reproduce itself determine how likely it is to reproduce! That would be closer to a "real life" fitness function. However, normally, I want my critters to solve my problems, not theirs :)
7) how large is the starting population
Adjustable, and either remains static or can itself grow or shrink. Depends on the purpose of the program. For practical problem solving it's a tradeoff between time and memory.
8) how is an individual’s breeding fitness scored
Well, of course, for problem solving, I'm not specifically interested in the breeding fitness, but whether it has figured out my problem! And in any case, fitness scores are relative to something (sometimes the ancestral population, sometimes the previous generation). So I haven't actually scored breeding fitness, although I would in the case of my good-replicators-replicate-model. But that's WIP :)
9) what are the criteria for a mutation to be adopted into the population
No criteria. Whether a mutation propagates is simply a function of its success; no additional criteria are required. It's output, in other words, not input.
10) how is an individual’s normal life-span modelled
Again, it depends on the program. Often by a cull of the least good performers, but again, it's output, not input. Good performers will outlive poor ones.
How then do you lab test or field test your simulation results?
Well, again, it depends on what the model is for.

For classifiers, you "evolve" or "train" your classifier on a "training set" of data, in which the correct categorisation is fed back to the model, then you let it loose on a "test" set, in which you know the correct categorisation, and you see how well it does. If it does well, then you do that again a few times. Once you have a consistent classifier, the most interesting output is usually the method it uses, because that tells you what pattern it has found most reliably distinguishes the categories. But you may also want to use it as, say, a diagnostic, where the categories are unknown even to you.

For a behavioural model, you compare the behaviour of the model with the behaviour you are trying to model; so a good model will have similar learning curves, and make similar types of error to the people whose behaviour you are trying to understand.

For a model in which you are trying to understand some facet of evolutionary theory (by what steps an "irreducibly complex" function evolves, for instance) you would examine in detail the lineage of the IC function. But again, it depends what specific question you are trying to answer.

And I suspect this is where the bone of contention lies. Gil is obviously a smart guy, and so, obviously, was his father. But it is important, when evaluating the scientific integrity of other projects, to understand what the specific purpose of any given study is. I know of no study designed to "prove Darwinism" or "model the evolution of species X". Most simulations are designed to solve a specific problem or test a specific hypothesis. And these are as rigorous as any in science, but, as with all hypothesis testing, you have to be very careful about the generalisability of your conclusions.
I think that part of the misunderstanding (and I believe it is a misunderstanding) arises because someone who generally accepts evolutionary theory and also publishes a simulation is assumed to be asserting that their simulation proves evolutionary theory. In reality, what the simulation does is test a specific hypothesis. That hypothesis may be supported, and the conclusion may in turn support evolutionary theory, but the idea that such simulations are the core of the evidence for evolutionary theory is, I think, simply wrong. There is no "core" evidence - there is simply IMO an abundance of circumstantial evidence for so many aspects of the theory that much of it we can take for (virtually) granted. Other parts are unknown, and require further investigation. Sometimes these investigations deliver surprises (exciting surprises!) and we know that in many respects Darwin was wrong. But then all scientists are :) That's why it's fun. Elizabeth Liddle
Elizabeth Liddle @45:
I’m not clear what aspect of that you think is not Darwinian?
I'm betting he thinks NONE of it is Darwinian. Charles @46:
1) how do you model an individual’s genome
BINGO! Elizabeth failed to mention that in her "important to distinguish three things in a GA" post. That's quite an oversight, really. I have to assume it was intentional, because she really should know better. Right Elizabeth? You're telling us the important things to distinguish about GA's, and you forget a little thing like encoding the genomes? How do you decide what the genome is going to look like? I mean, is there some cookie cutter genome that all GA's use? Mung
The solution is what you have at the end of a run. (Or not. Perhaps no solution is found.) Either way, whether a solution to the problem is located or not, it makes no sense to speak of a solution that evolves.
If you stop the process part way through you still have a solution (or rather a population of candidate solutions) - it just won't usually be as good a solution as if you leave it running for longer. That is why it is entirely correct to say that the solutions(s) evolve. The effectiveness of the solution is gradually increased over time using a process of replication, modification and selection (usually called evolution) DrBot
What Tierra Is Not
Life on Earth is the product of evolution by natural selection operating in the medium of carbon chemistry. However, in theory, the process of evolution is neither limited to occurring on the Earth, nor in carbon chemistry. Just as it may occur on other planets, it may also operate in other media, such as the medium of digital computation. And just as evolution on other planets is not a model of life on Earth, nor is natural evolution in the digital medium. http://life.ou.edu/tierra/whatis.html
Seems that what Tierra can and cannot do is pretty irrelevant. Mung
Elizabeth Liddle:
I think it is really important to distinguish three things in a GA: The solution to the problem. This is not designed by the GA designer, but evolves.
That makes no sense to me. What do you mean the solution to the problem evolves? The solution is what you have at the end of a run. (Or not. Perhaps no solution is found.) Either way, whether a solution to the problem is located or not, it makes no sense to speak of a solution that evolves. And really, you left out the most important aspect of a GA. Also designed, I might add. Mung
Elizabeth Liddle @43
Well, what I’m proposing on another thread is just that: a “targetless” model in which things that “breed” better breed more of themselves, thereby concentrating the traits that promote better breeding in the evolving population.
So then is it just coincidence that you chose not to use a GA? Aren't GA's "targetless"? Mung
MathGrrl:
What do you think of Tom Ray’s Tierra? Does it meet your criteria of being “targetless”?
It's teleological. Mung
Integrity is of paramount importance in science, I agree. But I do not think you have made your case that "Darwinism" lacks such. That doesn't mean that there don't exist scientists whose integrity is wanting. Unfortunately there is indeed a lot of sloppy science around. I don't see any evidence that it is particularly rife in evolutionary biology though. (Evolutionary psychology maybe....) Elizabeth Liddle
Gil - can you release my comment that's in moderation? I'd be interested to see how you will respond. I think models have a far wider utility than you are allowing, and that the relationship between models and empirical data can be more subtle than you think. Beyond the bluster, I think there are some interesting questions raised about how we can use simulations to tell us about the real world. I actually think it's something ID will have to face as groups like the EIL develop their models. Heinrich
My father, who is 89, is one of the few remaining living scientists who developed the atomic bomb during WWII. He and his colleagues did so with computational tools no more sophisticated than a slide rule. They had to get it right, in a hurry and with no excuses for failure, because, as my father told me when I was growing up in the 1950s, if Hitler got the bomb first he would have the power to rule the world. These are the standards of scientific integrity, discipline, and accountability with which I was groomed during my formative years. Darwinism represents the antithesis of such standards. GilDodgen
Elizabeth Liddle @45:

I'm talking about programs (well part of what I'm talking about) where a starting population of individuals is bred, with random variance, where the probability of breeding depends on their score on a fitness criterion.

Ok, help me understand what you're specifically talking about, and then I'll translate my argument to your specifics. In your program:

1) how do you model an individual's genome
2) how many mutations per generation
3) how much of the genome is mutated with each generation
4) how does a random number determine a specific mutation
5) what is the size of each mutation
6) how is a mutation's benefit or cost determined
7) how large is the starting population
8) how is an individual's breeding fitness scored
9) what are the criteria for a mutation to be adopted into the population
10) how is an individual's normal life-span modelled

How then do you lab test or field test your simulation results? Charles
Hi Charles :) You wrote:
Elizabeth Liddle @39: The glaring deficiency is that *none* of the evolutionary simulation implementations actually employ random number generators when defining a “mutation”. Random mutation and natural selection are *the* core principles of Darwinian evolution and while the algorithms discuss “random mutation”, the actual programmed implementations are *not* random, they’re not even pseudo-random.
Well, I use a random-number generator in mine, always. I guess I could use some true random generator, like the ones you can get from atmospheric noise, but I honestly don't see that that is critical, and the great thing about a random number generator is that you can choose whether to reset the seed or not, which is useful if you want to re-run a previous run for some reason. Or am I not understanding you? I'm certainly not aware that other people don't use random-number generators, and all the ones I've looked at do.
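The point about resetting the seed can be illustrated in a few lines; `ga_run` here is a stand-in for a whole GA run, not a real one, and just draws the random numbers such a run would consume:

```python
import random

def ga_run(seed=None, length=5):
    """Stand-in for a GA run: a fixed seed makes the whole pseudo-random
    'run' exactly replayable; seed=None draws from system entropy."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(length)]

replay_1 = ga_run(seed=42)
replay_2 = ga_run(seed=42)   # same seed: identical to replay_1
fresh = ga_run()             # unseeded: a new trajectory each time
```

This replayability is exactly why a pseudo-random generator is often preferable to a "true" random source for simulation work.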
Further, “natural selection” is simulated by matching against a pre-determined forward-looking fitness pattern rather than a backward-looking simulation of what has previously survived, or even culling some percentage of mutations based on empirical observations of how few mutations (as a percentage) survive.
Well, that may be true of some, I guess, depending on the purpose of the simulation, but again, in mine, I don't do that. Unless I'm misunderstanding what you mean by "matching". Of course you match the output to your fitness function, but that is merely the equivalent, in nature, of an individual's probability of reproduction being determined by some interaction between the trait it bears and the environment it inhabits. Could you clarify? In the case of the ones I write (either to solve a problem or to simulate learning) I generally select the top 2/3rds or so of each generation (sometimes the top half) for "mating" and breeding. Or sometimes I pick pairs at random and have the winners of each pair "mate". There are lots of ways of doing it.
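Both selection schemes mentioned (truncation to the top 2/3rds, and random pairwise "tournaments") are easy to sketch. The function names are illustrative, and the toy fitness function here is just the identity on integers:

```python
import random

def truncation_select(pop, fitness, keep=2/3):
    """Keep the top `keep` fraction by fitness for breeding
    (the 'top 2/3rds or so of each generation' scheme)."""
    ranked = sorted(pop, key=fitness, reverse=True)
    return ranked[:max(1, int(len(pop) * keep))]

def pairwise_select(pop, fitness, rng=random):
    """Pick random pairs and let the winner of each pair 'mate'."""
    shuffled = pop[:]
    rng.shuffle(shuffled)
    return [a if fitness(a) >= fitness(b) else b
            for a, b in zip(shuffled[::2], shuffled[1::2])]

pop = list(range(12))   # toy population of 12; fitness = the value itself
top = truncation_select(pop, fitness=lambda x: x)
winners = pairwise_select(pop, fitness=lambda x: x, rng=random.Random(4))
```

Under pairwise selection the best individual always survives (it wins its pair) and the worst never does, but middling individuals get a stochastic chance, which is the main practical difference from plain truncation.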
That doesn’t make life-sciences junk science – it makes them difficult science!
Science as a discipline is about factual observation and intellectual honesty. No degree of difficulty excuses substituting imagination for fact and claiming something has been simulated. Science is not a video game. Or is it now?
No, of course it isn't. But you left out a crucial component of science, which is hypothesis testing! Simulations are just one of the many kinds of models we test.
When engineers simulate heat transfer from a liquid into a solid, they don't compute the motion of every molecule based on physics; rather, it is a statistical averaging of the effects of molecules treated as a multitude of finite groups in progressive "slices" adjacent to and across the boundaries. Empirically tested "rules of thumb" accurately and realistically "emulate" (yes, quantitatively replace) the net effect of the particular liquid and solid instead of attempting to compute the energy transfer at each molecular collision. Engineers can do this reliably because a) they factually know the composition of the materials involved, and b) the properties and behavior of these materials are accurately known because they have been measured and recorded specifically for use in exactly these kinds of computations. The handbooks and databases literally fill libraries.
Yes indeed. My husband was a physicist before he became a life-scientist :)
This is indeed difficult for evolutionary biologists because they a) don't know the properties of what they are "simulating" and b) don't know what rules to apply because they neither understand the behavior nor have any rules, but inexcusably have even ignored most of the structures in question on the presumption they're junk and don't matter.
Well, no. I simply disagree with you here. Or, at best, I don't know what kind of "simulation" you have in mind. What you say isn't mapping on to my experience (as a life scientist), anyway.
If an engineer were to build a flight simulator from a duck decoy hanging by a string with a fan in front of it, and his excuse was that birds appear lighter than air and the only question is how they push forward through a wind, that simulation would never lead to any insight of how airflow over a curved surface produces lift. Yet evolutionary biologists claim to gain insight into evolution by random mutation and natural selection from non-random algorithms and pre-determined fitness patterns, the only question in their (closed) minds being how rapidly variances spread through a population.
Far more questions than one flow through our very open minds :) But yes, we do gain insight (not that I am an evolutionary biologist, but what I do is related) from simulations, which is why we do them!
In the same breath you assert: So I don’t think anyone expects a “Darwinian simulation” to resemble anything like a real-life scenario. No scenario could. … And so we can, using simulations, demonstrate that facets of Darwin’s theory (the core, in fact) work: that if a population breeds with variance, and if that variance results in phenotypic differences in reproductive success within a given environment, the population will evolve and adapt [Charles: ostensibly by random mutation and natural selection, right?]. We can also test this in the field and in the lab, with rigour. … Where is the “rigour” in ignoring the junk DNA? Where is the rigour in simulating random mutation with non-random algorithms? Where is the rigour in simulating natural selection with forward looking pattern matching?
Who is "ignoring" "junk" DNA? Non-coding DNA (non-coding for proteins anyway) is an absolutely vital part of genetic research, especially in my field. And the Grants' work in the Galapagos was nothing if not rigorous and meticulous, as was Endler's work with guppies, and indeed Lenski's work with E.coli. I accept, from your comments, that you do not agree, but I suggest that you may have an inaccurate model of how biological research is done, and what simulations are used for.
How can you excuse using non-random, forward-looking algorithms as 'demonstrating that core facets of Darwin's theory in fact work', and while such 'simulations do not resemble real-life scenarios' they can be tested in the field with rigour? Do you not expect to find real life in the field? Are random mutation and natural selection not core facets of Darwin's theory? If random mutation and natural selection are not simulated then claims to have genetic or evolutionary algorithms cannot be sustained. If you will next argue that real-life Darwinian random mutation and natural selection are just too difficult to simulate, then you really have no basis to assert Darwinian evidentiary standards are high, do you? They are at best arbitrary, and their simulations are nothing more than "just so" programs.
I don't think I am following you. I'm not talking about "non-random, forward looking algorithms". I'm talking about programs (well part of what I'm talking about) where a starting population of individuals is bred, with random variance, where the probability of breeding depends on their score on a fitness criterion. I'm not clear what aspect of that you think is not Darwinian? Elizabeth Liddle
Elizabeth Liddle @39: The glaring deficiency is that *none* of the evolutionary simulation implementations actually employ random number generators when defining a "mutation". Random mutation and natural selection are *the* core principles of Darwinian evolution and while the algorithms discuss "random mutation", the actual programmed implementations are *not* random, they're not even pseudo-random. Further, "natural selection" is simulated by matching against a pre-determined forward-looking fitness pattern rather than a backward-looking simulation of what has previously survived, or even culling some percentage of mutations based on empirical observations of how few mutations (as a percentage) survive.

That doesn't make life-sciences junk science – it makes them difficult science!

Science as a discipline is about factual observation and intellectual honesty. No degree of difficulty excuses substituting imagination for fact and claiming something has been simulated. Science is not a video game. Or is it now?

When engineers simulate heat transfer from a liquid into a solid, they don't compute the motion of every molecule based on physics; rather, it is a statistical averaging of the effects of molecules treated as a multitude of finite groups in progressive "slices" adjacent to and across the boundaries. Empirically tested "rules of thumb" accurately and realistically "emulate" (yes, quantitatively replace) the net effect of the particular liquid and solid instead of attempting to compute the energy transfer at each molecular collision. Engineers can do this reliably because a) they factually know the composition of the materials involved, and b) the properties and behavior of these materials are accurately known because they have been measured and recorded specifically for use in exactly these kinds of computations. The handbooks and databases literally fill libraries.
This is indeed difficult for evolutionary biologists because they a) don't know the properties of what they are "simulating" and b) don't know what rules to apply because they neither understand the behavior nor have any rules, but inexcusably have even ignored most of the structures in question on the presumption they're junk and don't matter.

If an engineer were to build a flight simulator from a duck decoy hanging by a string with a fan in front of it, and his excuse was that birds appear lighter than air and the only question is how they push forward through a wind, that simulation would never lead to any insight of how airflow over a curved surface produces lift. Yet evolutionary biologists claim to gain insight into evolution by random mutation and natural selection from non-random algorithms and pre-determined fitness patterns, the only question in their (closed) minds being how rapidly variances spread through a population. In the same breath you assert:
So I don’t think anyone expects a “Darwinian simulation” to resemble anything like a real-life scenario. No scenario could. ... And so we can, using simulations, demonstrate that facets of Darwin’s theory (the core, in fact) work: that if a population breeds with variance, and if that variance results in phenotypic differences in reproductive success within a given environment, the population will evolve and adapt [Charles: ostensibly by random mutation and natural selection, right?]. We can also test this in the field and in the lab, with rigour. ...
Where is the "rigour" in ignoring the junk DNA? Where is the rigour in simulating random mutation with non-random algorithms? Where is the rigour in simulating natural selection with forward-looking pattern matching? How can you excuse using non-random, forward-looking algorithms as 'demonstrating that core facets of Darwin's theory in fact work', and while such 'simulations do not resemble real-life scenarios' they can be tested in the field with rigour? Do you not expect to find real life in the field? Are random mutation and natural selection not core facets of Darwin's theory? If random mutation and natural selection are not simulated then claims to have genetic or evolutionary algorithms cannot be sustained. If you will next argue that real-life Darwinian random mutation and natural selection are just too difficult to simulate, then you really have no basis to assert Darwinian evidentiary standards are high, do you? They are at best arbitrary, and their simulations are nothing more than "just so" programs. Charles
@ NeilBJ #41
Re: Elizabeth Liddle @#39
And so we can, using simulations, demonstrate that facets of Darwin’s theory (the core, in fact) work: that if a population breeds with variance, and if that variance results in phenotypic differences in reproductive success within a given environment, the population will evolve and adapt.
Yes, we can demonstrate that facets of Darwin’s theory work, but only because we overlay our understanding of Darwin’s theory on the program that is written. Demonstrating with a program how we think evolution works is not the same as replicating a real evolutionary sequence.
I agree.
Do you agree that Avida is a demonstration of how evolution works? Does it not have a conscious, that is, intelligent selection algorithm in it? Does evolution have a conscious selection algorithm?
Both Avida and the evolutionary process select. Avida does this by means of an algorithm that scores the functionality of each individual (IIRC) according to a table. Those that score highest have the greatest chance of breeding (I can't remember how stochastic Avida is). In evolution the "scoring" is intrinsic - the organisms that have whatever functions it takes to maximise their probability of breeding simply do, so no algorithm is needed. But the principle is the same: in the Avida simulation, managing to perform some cool high-scoring function helps you breed; in the wild, managing to do something cool like being well camouflaged also helps you breed, but instead of being artificially ("intelligently") given a score for it, you just breed well because you don't get eaten!
As I have thought about evolutionary algorithms, I have wondered how a “targetless” algorithm could be written. If the evolutionary variation occurs independently with respect to need, it would seem that it would be impossible to write an honest algorithm.
Well, what I'm proposing on another thread is just that: a "targetless" model in which things that "breed" better breed more of themselves, thereby concentrating the traits that promote better breeding in the evolving population. I won't know in advance what the trick is (just as those who use GAs to solve real-world problems don't know in advance what the trick is - they just set up the fitness algorithm so that solving it enhances survival).
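One way such a "targetless" model could be sketched (this is a guess at the idea, not the work-in-progress model itself): each individual carries nothing but a heritable replication probability, there is no fitness function and no target, and the only designer input is a population cap. Lineages that happen to replicate well come to dominate anyway:

```python
import random

rng = random.Random(3)

# Each "organism" is just a heritable replication probability.
# No fitness function, no target: whether a lineage spreads is purely
# a by-product of how well it happens to replicate.
pop = [rng.uniform(0.1, 0.9) for _ in range(100)]
initial_mean = sum(pop) / len(pop)

for _ in range(60):
    offspring = [min(0.999, max(0.001, p + rng.gauss(0, 0.02)))
                 for p in pop if rng.random() < p]   # replicate, with noise
    pop = (offspring + pop)[:100]                    # crude cull: cap the size

final_mean = sum(pop) / len(pop)
```

Because good replicators leave more (similarly good) offspring, the mean replication probability drifts upward over the generations even though nothing in the code scores individuals against a target.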
The variation that occurs is unpredictable, and whether or not the organism survives as a result of that variation or in spite of it is an after-the-fact observation. And this does not even begin to solve the problem of how an algorithm could be written to demonstrate a significant morphological change.
Well, not realistically, obviously, because life is very complicated! But in principle, I don't see why not. Do you know the clock-evolving video?
The programmer seems to have no choice but to overlay a target in the algorithm. The target the programmer implements obviously biases the program in favor of that target. It would seem that a program with a very large number of targets would come incrementally closer to approximating a real evolutionary process, but at what cost?
I think it is really important to distinguish three things in a GA:

1) The fitness function (the criterion, decided by the GA designer, as to whether the output of an individual matches the desired output).
2) The kind of mutations the individuals undergo (but not which ones) - again, this is decided by the designer.
3) The solution to the problem. This is not designed by the GA designer, but evolves.

So in a GA, I would argue, the designer has two main inputs, the fitness function and the search space, and leaves the evolving critters the job of figuring out how to solve the problem. In evolution, the fitness function is simply performed by the environment, and actually constantly changes, as the population itself is part of its own environment. The search space - well, that's where, conceivably, a Designer might be inserted (cf Behe), but it may itself be constrained by fitness (in other words, populations that tend to mutate in a certain way may survive as populations better than populations that mutate too fast, too slowly, or too radically). The individuals - well, the individuals in life are the direct analogs of the individuals in the GA.
I am reminded of Dr. David Berlinski’s observation that true Darwinian algorithms do not work and genetic algorithms that do work are not Darwinian.
Well, it's witty, but not entirely true IMO :) Cheers Lizzie Elizabeth Liddle
NeilBJ,
As I have thought about evolutionary algorithms, I have wondered how a “targetless” algorithm could be written. If the evolutionary variation occurs independently with respect to need, it would seem that it would be impossible to write an honest algorithm.
What do you think of Tom Ray's Tierra? Does it meet your criteria of being "targetless"? What do you mean by "an honest algorithm"?
The programmer seems to have no choice but to overlay a target in the algorithm.
I don't think this is done in Tierra, but I'm curious to see if you agree. MathGrrl
Re: Elizabeth Liddle @#39

And so we can, using simulations, demonstrate that facets of Darwin's theory (the core, in fact) work: that if a population breeds with variance, and if that variance results in phenotypic differences in reproductive success within a given environment, the population will evolve and adapt.

Yes, we can demonstrate that facets of Darwin's theory work, but only because we overlay our understanding of Darwin's theory on the program that is written. Demonstrating with a program how we think evolution works is not the same as replicating a real evolutionary sequence.

Do you agree that Avida is a demonstration of how evolution works? Does it not have a conscious, that is, intelligent selection algorithm in it? Does evolution have a conscious selection algorithm?

As I have thought about evolutionary algorithms, I have wondered how a "targetless" algorithm could be written. If the evolutionary variation occurs independently with respect to need, it would seem that it would be impossible to write an honest algorithm. The variation that occurs is unpredictable, and whether or not the organism survives as a result of that variation or in spite of it is an after-the-fact observation. And this does not even begin to solve the problem of how an algorithm could be written to demonstrate a significant morphological change.

The programmer seems to have no choice but to overlay a target in the algorithm. The target the programmer implements obviously biases the program in favor of that target. It would seem that a program with a very large number of targets would come incrementally closer to approximating a real evolutionary process, but at what cost?

I am reminded of Dr. David Berlinski's observation that true Darwinian algorithms do not work and genetic algorithms that do work are not Darwinian.
No life-science experiment will be as conclusive as a physics or an engineering experiment, because there are far more variables, most of which are unknown, and whose pdfs have to be estimated, or even guessed at. That is all I am trying to say in this post. NeilBJ
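NeilBJ's "target overlay" point can be made concrete with a toy program in the spirit of Dawkins' famous "weasel" demonstration. This sketch is mine, not code from Avida or Tierra, and the target string, mutation rate, and population size are invented for illustration. Note that the fitness function literally contains the target:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The "overlaid target": fitness is the number of characters
    # matching a goal string chosen in advance by the programmer.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Copy the string, randomizing each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        # Selection keeps whichever variant best matches the target.
        offspring = [mutate(parent) for _ in range(pop_size)]
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return parent, generations

best, gens = evolve()
```

Run it and the target is always reached, but only because the search criterion was written into `fitness` beforehand; delete `TARGET` and the program has nothing to select for.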
Elizabeth; despite your unwarranted fondness of it, neo-Darwinian evolution IS, without a doubt, Pseudo-science;

Is evolution pseudoscience? Excerpt:,,, Thus, of the ten characteristics of pseudoscience listed in the Skeptic's Dictionary, evolution meets nine. Few other pseudosciences — astrology, astral projection, alien abduction, crystal power, or whatever — would meet so many. http://creation.com/is-evolution-pseudoscience

===============

"Certainly, my own research with antibiotics during World War II received no guidance from insights provided by Darwinian evolution. Nor did Alexander Fleming's discovery of bacterial inhibition by penicillin. I recently asked more than 70 eminent researchers if they would have done their work differently if they had thought Darwin's theory was wrong. The responses were all the same: No." - Philip S. Skell - Professor at Pennsylvania State University. http://www.discovery.org/a/2816

Podcasts and Article of Dr. Skell: http://www.evolutionnews.org/2010/11/giving_thanks_for_dr_philip_sk040981.html

Science Owes Nothing To Darwinian Evolution - Jonathan Wells - video: http://www.metacafe.com/watch/4028096

================

And though neo-Darwinian evolution is absolutely horrid as to being a rigorous science, anyone who dares question it is 'EXPELLED':

EXPELLED - Starring Ben Stein - Part 1 of 10 - video: http://www.youtube.com/watch?v=Fj8xyMsbkO4

Slaughter of Dissidents - Book: "If folks liked Ben Stein's movie 'Expelled: No Intelligence Allowed,' they will be blown away by 'Slaughter of the Dissidents.'" - Russ Miller http://www.amazon.com/Slaughter-Dissidents-Dr-Jerry-Bergman/dp/0981873405

Academic Freedom Under Fire — Again! - October 2010 Excerpt: All Dr. Avital wanted to do was expose students to some of the weaknesses inherent in Darwin's theory. Surely there's no harm in that — or so one would think. But, of course, to the Darwinian faithful, such weaknesses apparently do not exist. 
http://www.evolutionnews.org/2010/10/academic_freedom_under_fire_-_038911.html Journal Apologizes and Pays $10,000 After Censoring Article - Granville Sewell episode - June 2011 http://www.evolutionnews.org/2011/06/journal_apologizes_and_pays_10047121.html bornagain77
GilDodgen: Thanks for the clarification. I guess my response is that you are comparing apples with origins.

The point of the kind of simulations you work with is, indeed, to simulate the real world realistically enough that, for example, it is a valid environment for pilots to train in, or makes accurate enough predictions to guide a real-life trajectory. Similarly with weather models, although weather, obviously, is notoriously unpredictable in anything more than the relatively short term because of the critical dependence of any model on starting conditions, weather being a chaotic system.

But the kinds of models you seem to be referring to with respect to Darwinian evolution are not those kinds of models at all. At least none that I know of. Life, for a start, is orders of magnitude more complex than even weather, and just as non-linear. This is why I keep recommending Denis Noble's book "The Music of Life" (an essay, really), the content of which is also delivered here: http://videolectures.net/eccs07_noble_psb/

His original interest was in the heart, and indeed he has produced extraordinarily good models that allow us greater understanding of what contingencies can result in fibrillation. But his point is that even with something as "simple" as the heart, building the model from the bottom up is hugely computationally intensive, and in the end, what we need are not the specific equations that govern a specific phenomenon, but a mathematical abstraction that allows us to generalise across broad classes of phenomena.

So I don't think anyone expects a "Darwinian simulation" to resemble anything like a real-life scenario. No scenario could. Nonetheless, simulations are a hugely important tool in biological investigation, and, indeed, helpful for establishing broad principles, especially when these involve non-linear relationships, as living things must. 
And so we can, using simulations, demonstrate that facets of Darwin's theory (the core, in fact) work: that if a population breeds with variance, and if that variance results in phenotypic differences in reproductive success within a given environment, the population will evolve and adapt. We can also test this in the field and in the lab, with rigour.

No life-science experiment will be as conclusive as a physics or an engineering experiment, because there are far more variables, most of which are unknown, and whose pdfs (probability density functions) have to be estimated, or even guessed at. That doesn't make the life sciences junk science - it makes them difficult science! And I simply dispute that the standards of evidence are low. The evidence may be far noisier (and must be, and is), but that means that our conclusions must be much more provisional, and hedged with far more caveats, not that they must not be made. Elizabeth Liddle
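For what it's worth, the claim in the first sentence above (breeding with variance, plus phenotypic differences in reproductive success) can be demonstrated in a few lines. This is a minimal sketch of my own; the single heritable trait, the environmental optimum of 5.0, and all other parameters are invented for illustration, not taken from any published model. Fitness here is not a target string but a consequence of how far an individual's trait sits from an environmental optimum:

```python
import random

def simulate(optimum=5.0, pop_size=200, generations=100, seed=1):
    random.seed(seed)
    # Each individual is one heritable trait value, initially far from the optimum.
    population = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Differential reproductive success: individuals closer to the
        # environmental optimum are more likely to leave offspring.
        population.sort(key=lambda trait: abs(trait - optimum))
        survivors = population[:pop_size // 2]
        # Breeding with variance: each survivor leaves two imperfect copies.
        population = [random.gauss(parent, 0.2)
                      for parent in survivors for _ in range(2)]
    return sum(population) / len(population)

mean_trait = simulate()   # the population mean ends up near the optimum
```

The population adapts without any explicit goal string; though, as NeilBJ argues above, the programmer still chose the environment and what "reproductive success" means, so the dispute is really over whether that choice itself counts as a "target".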
ba77, I am not a physicist, and have no expertise in quantum mechanics! I'm not disputing the existence of God anyway (one way or the other). Elizabeth Liddle
Perhaps I was not explicit enough about the theme of my essay. I thought that the conclusion would be obvious. The theme is that in the world of legitimate computer simulation technology the standards are very high for acceptance of the relevance of the simulation to the real world. In the fantasy world of Darwinism there is no standard of accountability or empirical verification. Make up a story, or design a "simulation" program that "proves" a conclusion that was reached in advance, and you can get a paper published in Nature, with no challenges allowed, no matter how logical or evidential. The absence of standards of accountability and empirical verification is the hallmark of a pseudoscience. This is why I claim that Darwinism is junk science of the highest order. In no other legitimate scientific discipline would such low standards of evidence be acceptable. GilDodgen
Well Elizabeth, you seem to have it all figured out, but to calm my reservations as to your expertise in this matter, perhaps you would care to provide a 'non-local' cause for quantum entanglement within DNA, etc., which does not involve God as its originating cause??? bornagain77
Well, I don't think any of that is evidence for (or against) God. As I said, I think it is completely irrelevant to the question. OK, going offline for a bit now, nice to talk to you :) Cheers Lizzie Elizabeth Liddle
Elizabeth, you state: 'If God exists then we aren’t going to find evidence within the universe' Actually we find evidence in life that demands a 'non-local' (i.e. not limited by time or space) cause! Here is the falsification of local realism (materialism). Here is a clip of a talk in which Alain Aspect talks about the failure of 'local realism', or the failure of materialism, to explain reality: The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 The falsification for local realism (materialism) was recently greatly strengthened: Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009 Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables. http://www.sciencedaily.com/releases/2009/07/090722142824.htm (of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.) And yet, quantum entanglement, which rigorously falsified local realism (materialism) as the true description of reality, is now found in molecular biology! 
Quantum Information/Entanglement In DNA & Protein Folding – short video http://www.metacafe.com/watch/5936605/ Quantum entanglement holds together life’s blueprint – 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours (arxiv.org/abs/1006.4053v1). “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Untangling the Quantum Entanglement Behind Photosynthesis – May 11 2010 Excerpt: “This is the first study to show that entanglement, perhaps the most distinctive property of quantum mechanical systems, is present across an entire light harvesting complex,” says Mohan Sarovar, a post-doctoral researcher under UC Berkeley chemistry professor Birgitta Whaley at the Berkeley Center for Quantum Information and Computation. “While there have been prior investigations of entanglement in toy systems that were motivated by biology, this is the first instance in which entanglement has been examined and quantified in a real biological system.” http://www.sciencedaily.com/releases/2010/05/100510151356.htm i.e. 
It is very interesting to note that quantum entanglement, which conclusively demonstrates that 'information' in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect)

Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place!

,,,To refute this falsification of neo-Darwinism, one must show local realism to be sufficient to explain the quantum non-locality we find within molecular biology! ,,, As well, appealing to 'non-reductive' materialism (multiverse or many-worlds) to try to explain quantum non-locality in molecular biology, or anything else for that matter, destroys the very possibility of doing science rationally; Michael Behe has a profound answer to the infinite multiverse (non-reductive materialism) argument in "Edge of Evolution". 
If there are infinite universes, then we couldn’t trust our senses, because it would be just as likely that our universe might only consist of a human brain that pops into existence which has the neurons configured just right to only give the appearance of past memories. It would also be just as likely that we are floating brains in a lab, with some scientist feeding us fake experiences. Those scenarios would be just as likely as the one we appear to be in now (one universe with all of our experiences being “real”). Bottom line is, if there really are an infinite number of universes out there, then we can’t trust anything we perceive to be true, which means there is no point in seeking any truth whatsoever. “The multiverse idea rests on assumptions that would be laughed out of town if they came from a religious text.” Gregg Easterbrook BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ ================= Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - John Hopkins University Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. 
And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper: "An experimental test of non-local realism" by S. Gröblacher et al., Nature 446, 871, April 2007; "To be or not to be local" by Alain Aspect, Nature 446, 866, April 2007) bornagain77
oops, make that est. Elizabeth Liddle
No, I didn't say that either - but the premise you ascribed to me is certainly not my "primary premise". As far as God is concerned, I don't think that the demonstration that an ID was, or was not, the best inference for the origins of life on earth would settle the question one way or the other. I think it's a completely irrelevant question, tbh.

If I were to be persuaded that the first genome had been intelligently designed, or that evolution was regularly tweaked by an invisible intelligent agency, I would not infer that God existed, simply that some intelligent denizen of the universe, hitherto unknown to science, must exist, and that we should probably get a grant to find out more about it. That's because I don't think God is a denizen of the universe.

If God exists then we aren't going to find evidence within the universe, because the whole evidence-finding project depends on comparing things with one cause with things with another. If God causes everything, then we aren't going to find God's handiwork on some things but not others. If the universe is the work of God then the mark of God's work is going to be on everything. In other words, I don't think ID makes for very sound theology :)

So I absolutely don't have an atheistic (or theistic) agenda when it comes to studying life and its origins and mechanisms. I don't think God is to be found by science. Or rather, any god found by science wouldn't be God. IMO. God, I would contend, is found by love. At least the one I worship is :) Ubi caritas et amor, Deus ibi es. If I have a premise, I guess that's it :) Elizabeth Liddle
Elizabeth Liddle; 'Not even in the ballpark' Glad to see you think God did create life on earth then!!! bornagain77
Not even in the ballpark ba77 :) Elizabeth Liddle
Elizabeth you state: 'Well, I’m not even sure what you think my “primary premise” is' Let's see if I can narrow it down "However life got here, God did NOT do it!!!" Is that close enough Elizabeth??? bornagain77
Well, I'm not even sure what you think my "primary premise" is :) Elizabeth Liddle
i.e. all you are doing, Elizabeth, with your 'supplementary evidence', with no actual foundation in science on which to base your postulations in the first place, is 'whistling in the dark', trying to placate your 'chosen' atheistic philosophy!!! bornagain77
Elizabeth Liddle, 'I do try to provide support (evidence and/or argument)' I hate to inform you, but unless you can produce actual observational evidence for neo-Darwinian processes producing a gain in functional complexity/information above that which is already present in life, then all your other 'supplementary evidence' is completely meaningless, for you have not proved the validity of your primary premise in the first place!!! bornagain77
Mung just asked whether anyone was aware of population genetics computer simulations, and I was, so I posted the link.
Thank you. Mung
Well, I didn't, and I do try to provide support (evidence and/or argument) for my positions. However, I welcome any challenge to do so whenever I fail to :) Cheers Lizzie Elizabeth Liddle
Elizabeth, as long as you don't claim that your referenced computer simulation supports your position, then I have no beef. But if you do claim that it does, then I rightly demand that you provide ACTUAL observational evidence to back up your claim. Preferably a violation of the fitness test! bornagain77
ba77, we seem to have got our wires crossed. I posted a reference for Mung, in a post where I didn't make any conjecture whatsoever. In fact I see no conjectures by me in this thread at all, "atheistic" or otherwise. What post of mine were you referring to? Elizabeth Liddle
So Elizabeth, 'No, I didn't say anything of the kind' I do so wish that you would clarify when you have no evidence to support your atheistic conjectures. Perhaps you could put a disclaimer on every post you write??? :) bornagain77
No, I didn't say anything of the kind, ba77. Mung just asked whether anyone was aware of population genetics computer simulations, and I was, so I posted the link. Elizabeth Liddle
So Elizabeth, you say your referenced computer simulation proves your point, whereas I say my referenced computer simulation proves my point. What to do??? What to do??? Hey Elizabeth, let's look at what the ACTUAL observational evidence says and let it decide!!! What do you say???? bornagain77
Is anyone aware of any computer simulations of population genetics models? ,,, Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load: Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances.,, Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space. http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE http://mendelsaccount.sourceforge.net http://www.scpe.org/vols/vol08/no2/SCPE_8_2_02.pdf Oxford University Admits Darwinism's Shaky Math Foundation - May 2011 Excerpt: However, mathematical population geneticists mainly deny that natural selection leads to optimization of any useful kind. This fifty-year old schism is intellectually damaging in itself, and has prevented improvements in our concept of what fitness is. - On a 2011 Job Description for a Mathematician, at Oxford, to 'fix' the persistent mathematical problems with neo-Darwinism within two years. http://www.evolutionnews.org/2011/05/oxford_university_admits_darwi046351.html =============== Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.metacafe.com/watch/4165203 Waiting Longer for Two Mutations - Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 
1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless "using their model" gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461 Experimental Evolution in Fruit Flies (35 years of trying to force fruit flies to evolve in the laboratory fails, spectacularly) - October 2010 Excerpt: "Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles.,,, "This research really upends the dominant paradigm about how species evolve," said ecology and evolutionary biology professor Anthony Long, the primary investigator. http://www.arn.org/blogs/index.php/literature/2010/10/07/experimental_evolution_in_fruit_flies etc.. etc.. bornagain77
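The forward-time simulations referenced above (e.g. Mendel's Accountant) are far more elaborate, but the basic bookkeeping they perform can be sketched in a few lines. Everything below is an illustrative toy of my own, not code or parameter values from Sanford et al.: each individual carries a count of slightly deleterious mutations, selection is weak, and new mutations arrive every generation.

```python
import random

def mutation_accumulation(pop_size=100, generations=200,
                          mutation_rate=0.5, selection=0.01, seed=2):
    random.seed(seed)
    population = [0] * pop_size   # per-individual deleterious-mutation counts
    history = []
    for _ in range(generations):
        # Weak selection: fewer mutations means a slightly better
        # chance of being chosen as a parent.
        weights = [1.0 / (1.0 + selection * m) for m in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        # Each offspring may gain a new deleterious mutation.
        population = [p + (1 if random.random() < mutation_rate else 0)
                      for p in parents]
        history.append(sum(population) / pop_size)
    return history

load = mutation_accumulation()
```

With weak selection the mean mutation count grows roughly linearly, which is the qualitative pattern the cited paper reports; larger `selection` values slow the accumulation. Whether such toy parameters say anything about real genomes is, of course, exactly the empirical-verification question Gil's post raises.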
Yes: http://jhered.oxfordjournals.org/content/92/3/301.short Elizabeth Liddle
It is widely believed that GAs emulate biological evolution, though I think the point is debatable. In addition, certain programs have been written which purport to simulate some aspect of biological evolution. Such simulations will likely employ some form of GA. Is anyone aware of any computer simulations of population genetics models? Mung
DrBot @13: "I believe GA's have been used in optimization tasks for wing design in airliners though."

Again, it is a euphemism. Wing design requires far more precision and provable, predictable outcomes than does "emulation [I assume you meant simulation?] of biological evolution". The algorithm for traversing a search space to find the optimal solution is similar, conceptually, but the actual implementations to meet aerospace requirements will be far more detailed and rigorous, especially to the point of adhering to known properties of materials and the "rules" of aerodynamics. "Biological evolution", OTOH, has little in the way of comparably known properties and rules. Only recently have a few genomes been fully mapped, but how they "work" (and hence how to "simulate" their workings) is largely unknown.

"Are you sure it is not simply that you conclude a lack of rigor because you don't like the [global warming model] result?"

I don't like the irreproducible methods, because data and adjustments thereto have been irrecoverably discarded; nor do I like the cherry-picked exclusion of available data points, nor the statistically invalid adjustments to the included data. The results are inconclusive at best, poor science at worst.

"I'm not really sure what you mean. [Would you get on an airplane that had been subjected to the same rigour as has 'emulation of biological evolution'?]"

Do you believe the accuracy, precision, thoroughness and reliability of "emulation [I again assume you really meant simulation?] of biological evolution" to be comparable to that required to build reliable aircraft? Do you believe the "[simulation] of biological evolution" reflects reality to the same extent that the LS-DYNA simulations reflect reality? Do evolutionary biologists apply the same standards and criteria of accuracy, predictability and reliability to their results as do aerospace engineers? 
Would you be just as willing to get on an airplane that met the standards of "[simulation] of biological evolution" as an airplane that met the standards of LS-DYNA simulations?

"I think you are getting your terminology a little mixed up."

I interchanged "simulation" and "emulation", yes, but since you are emphasizing that distinction, I will further point out that as per a strict definition of "emulation", your own point that "not enough is known to write a complete emulation of biological evolution" is rather moot, since (as you rightfully insist) an "emulator" by strict definition is a functional replacement of some other process or mechanism, and accordingly nothing can ever be written in software that will replace living biological processes. Simulated, maybe, if the state of the art improves vastly, but never emulated, as per your distinction.

However, the point remains that modeling of biological evolution hardly rises to the same level of modeling as done in aerospace, weapons testing, etc. That is the point: the lack of predictable laws, material properties, mathematically rigorous formulae and results that match reality, provably and sufficiently, such that planes, bridges, buildings, etc. are safe and reliable.

"I believe the biologist would argue that their simulations have rigour (and I would agree)"

Yes, the same biologists who rigorously discount most of the genome as "junk" without understanding how to simulate it or the unmapped extents, with whom you again no doubt agree. Conversely, when an aerospace engineer discounts something, such as ground effect above 100 feet, it is not arbitrary ignorance, but proven and predictable flight aerodynamics with underlying science. That is why, when you get on an aircraft designed with LS-DYNA simulations, it not only flies above 100 feet but also takes off and lands through ground effect, because ground effect wasn't ignored. That is one real-world example of the difference in modeling rigour between the two disciplines. 
Charles
Oddly, I use (and construct) evolutionary algorithms of various kinds to actually investigate biology, though not evolution per se. And I have two main uses.

One is to develop classifiers for biological datasets (e.g. brain images) that learn (on a training set where the answer is known) to identify patterns that categorise datasets correctly (into images acquired under different conditions, for example, or from different groups of people), then are tested on a dataset where the information is not available to the classifier. They are useful because although we know the problem we want to solve (which images belong to which conditions, and what are the patterns that best distinguish them), we do not know the solution, or which patterns in our very large datasets (hundreds of thousands of brain voxels) are the important ones.

The other use is for actually modelling learning itself. In other words, evolutionary algorithms are, essentially, trial-and-error learning algorithms that provide good predictive models of what goes on in intelligent brains. This is, btw, the big problem I have with ID - I'd argue, and have, that evolutionary processes are "intelligent" systems, in the sense that they form a learning system, directly analogous to the learning system that goes on in our brains - so it is no surprise that the outputs have a family resemblance. Evolutionary processes don't make forward models, however, and we do, so that's a big difference. Elizabeth Liddle
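The first use Dr. Liddle describes (evolving a classifier on labelled training data) can be sketched without any neuroimaging machinery. The synthetic dataset, the thresholded-mean scoring rule, and every parameter below are invented for illustration; a genome here is just a bit-mask saying which features the classifier may use.

```python
import random

random.seed(3)
N_FEATURES = 20
INFORMATIVE = {2, 7, 11}   # only these features actually differ between classes

def make_sample(label):
    # Informative features are shifted by the class label; the rest are noise.
    return [random.gauss(label if i in INFORMATIVE else 0.0, 1.0)
            for i in range(N_FEATURES)]

train = [(make_sample(label), label) for label in (0, 1) for _ in range(50)]

def accuracy(mask):
    # Classify by thresholding the mean of the selected features.
    if not any(mask):
        return 0.0
    correct = 0
    for sample, label in train:
        score = sum(v for v, keep in zip(sample, mask) if keep) / sum(mask)
        correct += int((1 if score > 0.5 else 0) == label)
    return correct / len(train)

def evolve_mask(pop_size=40, generations=30):
    pop = [[random.random() < 0.5 for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        parents = pop[:pop_size // 2]                 # keep the fitter half
        children = [[(not bit) if random.random() < 0.05 else bit
                     for bit in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=accuracy)

best_mask = evolve_mask()
```

As in the brain-imaging case, the honest test is then accuracy on a held-out set the evolver never saw; on the training set alone, the mask can and does overfit the noise features.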
Charles, I think you are getting your terminology a little mixed up. From wikipedia
In computing, an emulator is hardware and/or software that duplicates (or emulates) the functions of a first computer system in a different second computer system, so that the behavior of the second system closely resembles the behavior of the first system. This focus on exact reproduction of external behavior is in contrast to some other forms of computer simulation, in which an abstract model of a system is being simulated. For example, a computer simulation of a hurricane or a chemical reaction is not emulation.
The types of simulation employed in biology follow similar methodologies to other sciences and are used for the same purposes - testing aspects of the theories, predictions based on the theory, to see how accurate the theory is.
Again to Gil’s point, it is the lack of rigour in testing the results of “emulation of biological evolution” however incomplete at present, whereas in aerospace emulation, the rigour is so exacting and provable, that air travel is one of the safest forms of transportation.
I believe the biologist would argue that their simulations have rigour (and I would agree)
A rigor and candor notably lacking in the Global Warming modelling, by contrast.
Are you sure it is not simply that you conclude a lack of rigor because you don't like the result?
Would you get on an airplane that had been subjected to the same rigour as has “emulation of biological evolution”?
I'm not really sure what you mean. Aircraft are designed, we created them. We did not create life so we have to study it to know how it works. This process is ongoing. I believe GA's have been used in optimization tasks for wing design in airliners though. DrBot
DrBot @4: "It is certainly true that not enough is known to write a complete emulation of biological evolution but of course the same is true of most things that science is studying that is why they are called simulations and why they are useful (and not a joke) because the differences between the simulation and reality help you understand reality better – they help direct the science."

I would further argue that most sciences, biological evolution notably excepted, in fact employ highly accurate and testable emulations. Nuclear weapons testing, LHC design and chemical reaction modelling, for example, are fields where we know enough to write complete emulations. While biological evolution may be more complex and may lack adequate data and "laws", the solution lies in the direction of more rigour and precision, and of revising the theory everywhere it fails to give correct predictions. And if it fails to make useful, testable predictions, then let's not speak of it in the same breath as "most things that science is studying". Charles
DrBot @4: "Evolutionary algorithms are used for, amongst other things, logic circuit design. Other genetic algorithms are not attempts to model biology, they are being used for other purposes, including design."

The terminology used here is very loose, arguably misleading, in the context of Gil's post. When it is said a "virus has infected my computer" we mean neither a biological virus nor an infection. We mean a software program deliberately designed and tested to exploit operating system flaws has been loaded and executed, all using standard programming techniques.

The "evolutionary algorithms" and "genetic algorithms" used in logic circuit design are more accurately termed "methodologies", as they are not source programs or executables lifted out of the biology labs, but rather CAD modules written entirely anew for the purposes of FPGA and ASIC design, employing trial-and-error search methodologies only conceptually similar to those employed in biology labs. "Evolutionary strategies" have even been applied to the travelling salesman problem, but we don't really mean the salesman's route evolves; rather, the solutions change and narrow to the one most optimal. Again it is a euphemism, insofar as it is applied to non-genetic problem solving. Just as "computer virus" is a euphemism, in the CAD context one can find terms like "chromosome" and "gene", but these terms, along with "evolutionary algorithms" and "genetic algorithms", are simply euphemisms. There is little parallel (aside from the conceptual traversing of a search space) with biological genetic microevolution (to say nothing of macroevolution); and, more to Gil's point, circuit design includes a rigorous comparison of computed results against actual circuit operation, a rigour that is lacking in the largely theoretical and anecdotal genetic modelling.

"Not enough is known to write a complete emulation of the weather but weather simulations are not 'just a joke', they are very useful." 
Indeed they are very useful, but then again there is a tremendous amount (albeit incomplete) of demonstrated meteorological science, stochastics, and detailed historical and real-time data gathering underlying weather simulations, as well as constant rigorous checking against reality and candor in assessing failure. That rigour and candor are notably lacking, by contrast, in Global Warming modelling. "It is certainly true that not enough is known to write a complete emulation of biological evolution but of course the same is true of most things that science is studying that is why they are called simulations and why they are useful (and not a joke) because the differences between the simulation and reality help you understand reality better – they help direct the science." Again, to Gil's point: the problem is the lack of rigour in testing the results of any "emulation of biological evolution", however incomplete at present, whereas in aerospace emulation the rigour is so exacting and provable that air travel is one of the safest forms of transportation. Would you get on an airplane that had been subjected to the same rigour as has the "emulation of biological evolution"? Charles
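The travelling-salesman "evolutionary strategy" described above, where candidate routes are varied by trial and error and the population narrows toward a shortest tour, can be sketched in a few lines. The city coordinates, population size, and mutation scheme below are all illustrative assumptions, not taken from any particular CAD tool or biology lab:

```python
import random

# Hypothetical 2-D city coordinates, purely for illustration.
CITIES = [(0, 0), (1, 5), (2, 3), (5, 2), (6, 6), (8, 1)]

def tour_length(tour):
    """Total length of a closed tour visiting every city once."""
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2 +
         (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def mutate(tour):
    """Swap two cities -- the trial-and-error step of the search."""
    i, j = random.sample(range(len(tour)), 2)
    child = list(tour)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(generations=500, pop_size=30):
    """Keep the shortest tours, replace the rest with mutated copies."""
    random.seed(0)
    pop = [random.sample(range(len(CITIES)), len(CITIES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        survivors = pop[: pop_size // 2]          # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]    # variation
    return min(pop, key=tour_length)

best = evolve()
print(best, tour_length(best))
```

Nothing here "evolves" in any biological sense: it is a designed search procedure over permutations, which is precisely the point about terminology being euphemistic.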
Let's see, neo-Darwinists can't prove evolution by observational science...

"The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades... The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent... I dub it "The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is a net 'fitness gain' within a 'stressed' environment; i.e., remove the stress from the environment and the parent strain is always more 'fit'.)
http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

Testing Evolution in the Lab With Biologic Institute's Ann Gauger - podcast with link to peer-reviewed paper
Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger's paper: "Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness."
http://intelligentdesign.podomatic.com/entry/2010-05-10T15_24_13-07_00

Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video
http://www.metacafe.com/watch/3995248

New Research on Epistatic Interactions Shows "Overwhelmingly Negative" Fitness Costs and Limits to Evolution - Casey Luskin - June 8, 2011
Excerpt: In essence, these studies found that there is a fitness cost to becoming more fit. As mutations increase, bacteria faced barriers to the amount they could continue to evolve. If this kind of evidence doesn't run counter to claims that neo-Darwinian evolution can evolve fundamentally new types of organisms and produce the astonishing diversity we observe in life, what does?
http://www.evolutionnews.org/2011/06/new_research_on_epistatic_inte047151.html

And yet despite the stunning lack of observational evidence that neo-Darwinism can generate ANY functional complexity/information above what is already present in life, we get these nifty evolutionary algorithms, DESIGNED by brilliant programmers, that supposedly prove evolution true? Well, I guess that is all fine and well if you are willing to throw the scientific method completely out the window simply to justify your atheistic bias for neo-Darwinism. Just ignore the man behind the curtain...

Pay no attention to that man behind the curtain. - video
http://www.youtube.com/watch?v=YWyCCJ6B2WE

...and just ignore the fact that materialism dissolves into absurdity:

What Would The World Look Like If Atheism Were Actually True?
http://www.metacafe.com/watch/5486757/

bornagain77
I don't quite see why modeling the pH of something would help if, for example, you were testing a hypothesis about population dynamics, unless of course there was a specific reason for pH to be included, for example if you were looking at the effect of ocean acidification on fish populations. The implied argument seems to be that models of an aspect of evolution are only valid if they account for every atom. If so then I would disagree, and agree with Dr Liddle that a model only needs to include the aspects relevant to the hypothesis being tested. DrBot
Actually, I guess my point is the same as DrBot's, but I would still appreciate clarification. Elizabeth Liddle
Gil: Like Neil Rickert, I don't see the connection between the first, very interesting part of your OP and your comments regarding "Darwinian computer simulations". I'm not even sure whether you are referring to simulations designed to demonstrate the principles of Darwinian evolution, or to evolutionary algorithms designed to solve real-world problems. Either way, the fact that more sophisticated software exists doesn't seem to me to be relevant to the validity of "Darwinian computer simulations". What is relevant to their validity, of course, is the hypothesis they are designed to test, or the problem they are designed to solve. Could you clarify? Elizabeth Liddle
DrBot, As part of any realistic GA model you would have to provide numerous parameters (pH, temperature, chemical environment...) which would make the model unbearably complicated. And how would you test this in the lab? And if you were to simplify the model by making various assumptions, how would it then represent reality? NZer
NeilBJ, Fascinating question. As far as I am aware, quantum mechanical computations are so complex that even supercomputers can only model simple molecules. I suspect finite element analysis as mentioned by Gil would be insufficient to cope with biological complexity. NZer
"If there are no algorithms that model actual biological systems and if not enough is yet known to write a true emulation, then you are correct. Current evolutionary algorithms are a joke."
Evolutionary algorithms are used for, amongst other things, logic circuit design. It is certainly true that not enough is known to write a complete emulation of biological evolution, but of course the same is true of most things that science is studying; that is why they are called simulations, and why they are useful (and not a joke): the differences between the simulation and reality help you understand reality better - they help direct the science. Not enough is known to write a complete emulation of the weather, but weather simulations are not 'just a joke'; they are very useful. There are plenty of genetic algorithms that model actual biological systems, though; they just do so at different levels of abstraction depending on the aspect of biology being studied. Other genetic algorithms are not attempts to model biology; they are being used for other purposes, including design. DrBot
I am a retired logic design engineer. I designed the logic for VLSI chips that became part of a computer. Every one of my designs had to pass running in an emulation program. The emulation program modeled every feature of an actual chip: capacitance, inductance, signal delays, power consumption, and of course the logic gates. In theory the operation of the emulation program would not be distinguishable from the operation of an actual chip. In practice, the logic delays in the emulation program would be different from the actual chip, but the delays nevertheless would be known. (I hope I have remembered the details correctly. It's been 20-plus years since I worked on this stuff. I became an application programmer in the meantime.) As I understand evolutionary algorithms, they would be classified as simulations and not emulations. In other words, they are loose models of the evolutionary process and not exact representations. A couple of questions come to mind. Are there any evolutionary algorithms that model actual biological systems? I am thinking of Avida, which evolves logic circuits, so it obviously does not model an actual biological system. Do evolutionary biologists yet know enough about evolutionary processes so that they could write an emulation based on a real biological system? If there are no algorithms that model actual biological systems and if not enough is yet known to write a true emulation, then you are correct. Current evolutionary algorithms are a joke. NeilBJ
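NeilBJ's distinction between a simulation (a loose model of the logic) and an emulation (one that also tracks physical features such as signal delay) can be illustrated with a toy netlist evaluator. The gate types, net names, and delay figures below are invented for illustration; a real chip emulator would also model capacitance, inductance, and power consumption:

```python
# Hypothetical netlist: (gate type, input nets, propagation delay in ns).
NETLIST = {
    "n1":  ("AND", ["a", "b"],   2),
    "n2":  ("NOT", ["n1"],       1),
    "out": ("OR",  ["n2", "c"],  2),
}

GATES = {
    "AND": lambda ins: all(ins),
    "OR":  lambda ins: any(ins),
    "NOT": lambda ins: not ins[0],
}

def evaluate(inputs):
    """Compute the output value AND the time it settles.

    A pure logic simulation would return only the value; tracking
    settling time (each net settles one gate delay after its slowest
    input) is a crude stand-in for what an emulation adds.
    Inputs are assumed stable at t = 0.
    """
    values = {name: (val, 0) for name, val in inputs.items()}

    def settle(net):
        if net in values:
            return values[net]
        gate, ins, delay = NETLIST[net]
        resolved = [settle(i) for i in ins]
        val = GATES[gate]([v for v, _ in resolved])
        t = max(t for _, t in resolved) + delay
        values[net] = (val, t)
        return values[net]

    return settle("out")

val, t = evaluate({"a": True, "b": True, "c": False})
print(val, t)  # NOT(AND(T,T)) OR F = False, settling at 2 + 1 + 2 = 5 ns
```

Even this toy shows why the emulation/simulation distinction matters: two circuits with identical logic can differ in when their outputs are valid, and only the timing-aware model exposes that.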
But..but..METHINKSITISLIKEAWEASEL provides all the proof one could ever need!!! Matteo
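For readers unfamiliar with the reference, Dawkins' "METHINKS IT IS LIKE A WEASEL" demonstration is easy to reconstruct in outline: mutate a random phrase, keep whichever copy best matches a fixed target, and repeat. The offspring count and mutation rate below are conventional illustrative choices, not Dawkins' exact parameters:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(phrase):
    """Fitness: number of characters matching the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(offspring=100, mutation_rate=0.05, seed=1):
    """Return the generation at which the target phrase is reached."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(offspring)
        ]
        # Cumulative selection: keep the closest match (parent survives ties).
        parent = max(children + [parent], key=score)
    return generation

print(weasel())
```

The target phrase is written into the program by its designer, which is why the Weasel is routinely cited on both sides of this debate: it demonstrates cumulative selection, but only toward a goal the programmer specified in advance.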
Personally, I am undecided on the significance of evolution simulations. However, I am wondering why you think your experience with FEA is relevant. As far as I know, that's addressing a very different kind of problem. Neil Rickert
