Uncommon Descent: Serving The Intelligent Design Community

FEA, PR, E, Ro, EOS (Or, Why Darwinian Computer Simulations are Less than Worthless)


FEA = finite element analysis
PR = Poisson’s Ratio
E = Young’s modulus
Ro = mass density
EOS = equation of state

Darwinian computer simulationists have no idea what I’m talking about, but they should.

A thorough understanding of FEA, PR, E, Ro, and EOS is a prerequisite for any computer simulationist who hopes to have confidence that his simulation has any validity concerning the real world (and this concerns only transient, dynamic, nonlinear mechanical systems, nothing that even approaches, by countless orders of magnitude, the complexity, sophistication, and functional integration of biological systems).
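To make the acronyms concrete, here is a minimal sketch (an illustration, not Gil's actual tooling) of where one of these parameters enters an FEA model: a static, linear, one-dimensional elastic bar assembled from two finite elements, using only Young's modulus and geometry. Poisson's ratio, mass density, and an equation of state only come into play once the analysis becomes multi-dimensional, dynamic, and nonlinear, which is exactly where the required rigor escalates.

```python
import numpy as np

# Minimal sketch (illustrative, not Gil's actual solver): a static, linear,
# one-dimensional elastic bar discretized into two finite elements, showing
# where Young's modulus (E) enters a stiffness assembly. Values are arbitrary.
E = 200e9       # Young's modulus, Pa (roughly steel)
A = 1e-4        # cross-sectional area, m^2
L = 1.0         # total bar length, m
n_el = 2        # number of elements
le = L / n_el   # element length, m

# 2-node bar element stiffness: k = (E*A/le) * [[1, -1], [-1, 1]]
k = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Assemble the global 3x3 stiffness matrix from the two elements
K = np.zeros((3, 3))
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k

# Boundary conditions: node 0 fixed, axial force F on the free end (node 2)
F = 1000.0                          # N
u = np.linalg.solve(K[1:, 1:], np.array([0.0, F]))

print("FEA tip displacement (m): ", u[-1])
print("Analytical F*L/(E*A) (m): ", F * L / (E * A))  # closed-form check
```

Even this toy problem has a closed-form answer to validate against; real transient, nonlinear FEA work offers no such luxury, hence the insistence on empirical verification below.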

Even with all of my understanding and years of experience, I would never expect anyone to accept the results of one of my FEA computer simulations without empirical verification. However, with a consistent track record of validated simulations within a highly prescribed domain (which I have), I can at least avoid much wasted effort pursuing what the simulations suggest will not work.

It is for this reason, and many others, that I consider Darwinism to be not just pseudoscience, but perhaps the quintessential example of junk science since the advent of the scientific method and rational inquiry concerning how things really work in the real world.

Darwinists have no idea what rigorous standards are required in the rest of the legitimate engineering and science world, nor do they recognize that they have been given an illegitimate pass concerning empirical, or even rational, justification of their claims.

Comments
PPS: I suspect many objectors, at root, don't really believe in a freely thinking intelligent mind, which is self-referentially incoherent. D. S. Robertson aptly sums up:
AIT and free will are deeply interrelated for a very simple reason: Information is itself central to the problem of free will. The basic problem concerning the relation between AIT and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. In other words, the quantity of information output from any formal mathematical operation or from any computer operation is always less than or equal to the quantity of information that was put in at the beginning. Yet free will appears to create new information in precisely the manner that is forbidden to mathematics and to computers by AIT.

The nature of free will—the ability of humans to make decisions and act on those decisions—has been debated for millennia. Many have disputed even the very existence of free will. The idea that human free will cannot oppose the decrees made by the gods or the three Fates is a concept that underlies much of Greek tragedy. Yet our entire moral system and social structure (and, I would venture to guess, every moral system and social structure ever devised by humankind) are predicated on the existence of free will. There would be no reason to prosecute a criminal, discipline a child, or applaud a work of genius if free will did not exist. As Kant put it: “There is no ‘ought’ without a ‘can’” [4, p. 106].

The Newtonian revolution provided an even stronger challenge to the concept of free will than the Greek idea of fate, which at least allowed free will to the Fates or the gods. The Newtonian universe was perfectly deterministic. It was commonly described in terms of a colossal clockwork mechanism that, once it was wound up and set ticking, would operate in a perfectly predictable fashion for all time. In other words, as Laplace famously noted, if you knew the position and velocities of all the particles in a Newtonian universe, as well as all the forces that act on those particles, then you could calculate the entire future and the past of everything in the universe . . . . Of course, a deterministic universe could produce an illusion of free will. But for this discussion I am not interested in illusory free will.

Around the beginning of the 20th century, the development of quantum mechanics seemed to provide a way out of the Newtonian conundrum. The physical universe was found to be random rather than perfectly deterministic in its detailed behavior. Thus although the probability of the behavior of quantum particles could still be calculated deterministically, their actual behavior could not. Physicists as prominent as Sir Arthur Eddington argued that random quantum phenomena could provide mechanisms that would allow the operation of free will [4, p. 106]. Eddington’s ideas are not universally held today. A perfectly random, quantum universe no more allows free will than does a perfectly deterministic one. A “free will” whose decisions are determined by a random coin toss is just as illusory as one that may appear to exist in a deterministic universe.
kairosfocus
January 16, 2012 at 02:30 PM PDT
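Robertson's central AIT claim has a standard formalization in terms of Kolmogorov complexity; the sketch below is one common rendering, not necessarily the exact statement in his paper.

```latex
% A standard AIT statement of "formal derivation cannot create information"
% (one common rendering; Robertson's paper may phrase it differently):
% if a fixed program p computes output y from input x, the Kolmogorov
% complexity K of the output is bounded by
\[
  K(y) \;\le\; K(x) + |p| + O(1)
\]
% i.e. the output carries at most a constant more algorithmic information
% than the input, the constant depending on p but not on x.
```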
SC: You are right on. When someone appeals to infinite resources, by direct implication, we know the jig is up!

Just for fun: 10^80 baryons [from astronomy] * 10^25 s [thermodynamic lifespan, about 50 million times 13.7 billion years] * 10^45 PTQS/s [rounded up] ~ 10^150 states of the particles ("atoms") in the cosmos we observe across its thermodynamic lifespan. (Where, BTW, the fastest chemical reactions are ~ 10^30 PTQSs. This is of course where Dembski's number comes from, where also 2^500 ~ 3.27*10^150.)

Just 1,000 bits have 1.07*10^301 possible configs: b1-b2-b3- . . . b1000. That is, the observed universe, acting as a monkey at the keyboard, could not search 1 in 10^150th of the possibilities. So, a sample is a fairly skinny zero of the field of possibilities. If that sample is blind/random and/or blindly mechanical (i.e. nature here does not act as a purposeful algorithm), sampling theory tells us that with maximum likelihood we will pick up only the bulk, dominant feature of the config space, i.e. gibberish.

How do we know the bulk will be gibberish? Easy: multipart, specific functionality demands well-matched and organised parts, like letter-strings in this post. And a 3-D complex combination is reducible to a nodes-arcs wiring diagram, thence strings structured according to a further specification. That is, a language. The requisites of specific function rule out most of the space. We can easily see that for text, or for, say, a motor or an indicating instrument (which is a specialised motor).

But doesn't self-replication get us away from that and create a CONNECTED CONTINENT of possibilities? Nope, as the relevant facility has to store the wiring diagram and component-specifying rules in a data structure, and has to set up a processing facility to implement the algorithms and stored data. It is an additional bit of irreducibly complex info. At the macro level, complex components -- e.g. the avian lung in life -- do not have close, incrementally connected intermediates.

So, the entire theory, and its precursor to get to first life, is based on something that is all but zero possibility absolutely, and in operational terms is tantamount to zero. Unobservable.

But, of course, to the indoctrinated, the above MUST be false, and sounds like "assertions." Nope, it is easily empirically confirmed, just look all around. It is also analytically reasonable. The best explanation of a moving coil meter is a D'Arsonval. The best explanation of the cockpit panel and the 747 in which we find it is a Boeing. The best explanation of the vNSR in the living cell is a designer of cell-based life, and the best explanation of an avian lung etc. is a designer of body plans.

None of which is acceptable to the evolutionary materialist establishment. But, bit by bit, as we move ever deeper into the information era, it will be clearer and clearer that their frame of thought has crashed and burned. But that establishment, and those who look to them to shed light, will be the last to realise it. Then they will try to ride the tide of chaotic change to land on their feet and come out on top yet again. They may even reformulate to blunt the force of the collapse. That is what the Marxist apparatchiks did.

Good day, GEM of TKI

PS: You may want to read here on, where it has been laid out in summary, in steps.
kairosfocus
January 16, 2012 at 02:20 PM PDT
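The orders of magnitude quoted above are easy to check; a quick sketch (verifying only the arithmetic, not the argument built on it):

```python
from math import log10

# Checking the orders of magnitude quoted in the comment above
# (this verifies only the arithmetic, not the argument built on it).
baryons = 10**80         # estimated baryons in the observable universe
seconds = 10**25         # thermodynamic lifespan used in the comment
ptqs_per_sec = 10**45    # Planck-time quantum states per second, rounded up

print(log10(baryons * seconds * ptqs_per_sec))  # 150.0 -> ~10^150 states

print(2**500 / 10**150)    # ~3.27, i.e. 2^500 ~ 3.27 * 10^150
print(2**1000 / 10**301)   # ~1.07, i.e. 1,000 bits ~ 1.07 * 10^301 configs
```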
Double mutations are irrelevant, if we consider drift.
And, indeed, sex (if double-mutation means change in two separate loci). Sex is distributed processing - the whole population is 'working on' solutions. Individually neutral or recessive mutations must drift, but however they travel, when two complementary 'solutions' meet ... kaboom. Or rather, they each gain a selective boost by their mutual effect, even though recombination can act to break them up again - such broken-up combinations are simply 'wild-type'. The result of these selective boosts, even in the presence of disruptive recombination, is to increase frequencies such that the 'favoured' combination occurs more and more often, and breaks up less and less (the commoner the alleles, the more likely that recombination will have no effect). Each time combination occurs, both alleles get extra tickets in the 'lottery'. The end result looks fortuitous, but does not require serial fixation.
Chas D
January 16, 2012 at 12:15 PM PDT
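The scenario described here (two individually neutral alleles that pay off only in combination, spreading despite recombination) is straightforward to simulate. A toy haploid Wright-Fisher sketch, not Chas D's own model; population size, frequencies, recombination rate and fitness boost are arbitrary illustrative values:

```python
import random

# Toy haploid two-locus Wright-Fisher sketch (a construction for illustration,
# not Chas D's): alleles a and b are individually neutral, but the combination
# ab gets a fitness boost; recombination can break ab apart, yet the
# combination keeps reappearing once both alleles are common.
N, GENS, RECOMB, BOOST = 1000, 300, 0.1, 0.1
# genotype = (has_a, has_b); start each allele at ~10%, unassociated
pop = [(random.random() < 0.1, random.random() < 0.1) for _ in range(N)]

def fitness(g):
    return 1.0 + (BOOST if g == (True, True) else 0.0)

for _ in range(GENS):
    weights = [fitness(g) for g in pop]
    new_pop = []
    for _ in range(N):
        p1, p2 = random.choices(pop, weights=weights, k=2)
        # with prob RECOMB the child takes its second locus from the other parent
        child = (p1[0], p2[1]) if random.random() < RECOMB else p1
        new_pop.append(child)
    pop = new_pop

print("final freq(ab):", sum(g == (True, True) for g in pop) / N)
```

Runs are stochastic, but with these settings the ab combination typically climbs well above its initial ~1% frequency without either allele having to fix first.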
Prototypes are rarely a single “mutation” away from some previously existing device, but once you have a prototype it’s time to tinker.
Then it shouldn't be difficult to provide examples of inventions that are not incremental.
Petrushka
January 16, 2012 at 11:11 AM PDT
Yes indeed. That's why evolutionary algorithms can be so powerful of course - they explore solutions that a human designer would dismiss as dead ends. It's also why I've been saying for years that evolutionary processes are pretty intelligent. What makes them different from human brain processes (foresight/intention) isn't even all it's cracked up to be. Saves time, but if you have a vast number of iterations at your disposal, that doesn't matter, and what you lose in iterations you gain in creativity. So it's not surprising that biological systems look intelligently designed. In that sense, they are.
Elizabeth Liddle
January 16, 2012 at 10:59 AM PDT
Human invention is certainly faster in some ways than biological invention. Brains embody a form of evolution that learns more quickly than populations. But I think the concept of foresight is fuzzy to the point of being useless. Just examine what you wrote about avoiding things that don't work, and think about how many inventors have credited success to ignoring conventional ideas about what doesn't work. Then consider the percentage of inventions that actually last and give rise to new species of inventions. The percentage is pretty low. Most new things don't work. Even with foresight, from the macro viewpoint, invention is cut and try.
Petrushka
January 16, 2012 at 10:53 AM PDT
If such local maxima (high points surrounded by lower points) do exist, then the fitness landscape will be “unconnected” in those dimensions. However, the higher the dimension of the landscape, the more likely it is that some traversable connecting path will exist along some dimension...
This is precisely the point I've tried to make with gpuccio, when I say that natural selection is more powerful than directed evolution. It is not unrelated to the point Adam Smith tried to make regarding the robustness of market economies vs command economies. One never knows in advance where utility will arise, and your search is more likely to fail if your targets are narrow and your direction is one-dimensional.
Petrushka
January 16, 2012 at 10:43 AM PDT
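The quoted point about dimensionality can be illustrated with a simple counting argument: on a completely uncorrelated landscape, a point with d neighbours is a strict local maximum with probability 1/(d+1), so traps thin out as dimension grows. A quick Monte Carlo check (a sketch only; real fitness landscapes are correlated, so this captures just the counting intuition):

```python
import random

# On a completely uncorrelated random landscape, a focal point with d
# neighbours is a strict local maximum only if it beats all d of them,
# which happens with probability 1/(d+1): traps thin out as d grows.
for d in (2, 10, 100):
    trials = 20_000
    hits = 0
    for _ in range(trials):
        vals = [random.random() for _ in range(d + 1)]  # vals[0] = focal point
        if vals[0] == max(vals):
            hits += 1
    print(f"d={d}: simulated {hits / trials:.4f}, theory {1 / (d + 1):.4f}")
```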
I'd say both "foresight" and "side sight". What human designers can do is to simulate before execution (foresight), and not bother with things that obviously won't work, and also bring in solutions from other "design lineages" ("side sight"), like adding an engine to a carriage, or a computer chip to a washing machine. Evolution has to execute all the intermediate steps (it can't simulate), and can't transfer solutions from one lineage to another.
Elizabeth Liddle
January 16, 2012 at 10:35 AM PDT
The point I am trying to convey is that designers are able to employ a number of tools not available to evolutionary algorithms, among them foresight, and the ability to create intermediate parts which have no purpose on their own until combined into a larger whole.
Give me an example of foresight. I know that seems obvious, but I don't find it obvious. It gets less and less obvious as you move farther away from copying with modification. I'm not trying to be difficult. I just think that words are tossed around in this discussion without much thought being given to their implications. In particular I see the word design being used without any thought being given to what designers do.
Petrushka
January 16, 2012 at 10:27 AM PDT
That seems pretty fair, SCheesman :) Thanks! Let me go through:
I find it much more helpful to frame the issue in the following manner: Any evolutionary algorithm (EA) instantiated through a computer programme is necessarily searching a finite space of possible “solutions” defined by the parameters of that programme. This is no different than biology, as we see, being defined by a finite (though vastly larger) number of possible arrangements of DNA, and whatever higher-order structures encode the information necessary for the existence and replication of life.
Not entirely sure about "finite". In biology, and in some EAs, the evolving population becomes part of the environment, so solution-space itself is constantly changing (because so is the problem space). So in biology, something that gives a phenotype the edge in one generation may be totally inadequate several generations down the line. Also, some EAs are designed to respond to changes in input - learning algorithms, where the EA needs to respond to changed contingencies.
The ability of the EA to move through that space to arrive at some “final” or “target” solution is dependent on a number of things which have or are expected to have direct analogs in real life; these are, chiefly:
1) The “connectedness” of the “viable” solutions.
2) A gradient in fitness exists between the solutions, such that one solution can be chosen among a number of alternatives.
3) Means exist to escape local maxima in fitness space in order to find yet more advantageous solutions.
4) The velocity at which the solution evolves is dependent upon the rate at which, from a given location, adjacent solutions can be tested and compared against the fitness of the current solution.
Let me get this straight. If we represent fitness on a vertical scale (with higher fitness higher up) and phenotypic change on, say, two horizontal scales, for high points to be reachable it is important that they are not separated from other high points by lower intervening points (although narrow "ravines" and gentle downward slopes can sometimes be traversed). If such local maxima (high points surrounded by lower points) do exist, then the fitness landscape will be "unconnected" in those dimensions. However, the higher the dimension of the landscape, the more likely it is that some traversable connecting path will exist along some dimension (i.e. one without wide ravines or long downward slopes). So I think 1-3 are restatements of the same thing, really. 4 is more or less equivalent to the rate of drift. Yes?
In any “successful” EA, I think you can show that the properties 1-4 exist; you can draw a road-map of solutions from the start to the end (connectedness), and the solutions produced are inevitably those possible for that programme. In fact you get from A to B as inevitably as water, flowing downhill, overcomes local obstacles. The rate and route can be adjusted by varying the fitness definition and gradients.
Well, a "successful" EA, by definition, gets "inevitably" to a solution! If it didn't, it wouldn't be "successful"! So I'm not quite sure of your point here. However, I do agree with your water analogy, and I actually prefer to plot fitness landscapes upside down, with fitness "wells" or "sinks" rather than "peaks". And yes, if the population is reasonably fluid (fair amount of drift), then it should work its way down any downhill slope to the "fitness wells" pretty well inevitably. However, it's possible to make an EA that doesn't always end up in the same well, if there is more than one.
My opinion is that ID arguments saying EAs are an inadequate and indeed misleading model of real evolution should be concerned with showing that in real biology points 1-4 are not nearly as favourably present, briefly:
Yes, I think so, except that I'd say they are much more obviously present in biology than in GAs! Because biology is far higher-dimensioned than any human-built GA.
1) Solution space is not well-connected. This is basically the argument of irreducible complexity at one level, the argument of the sparseness of viable folded proteins among the combinations of DNA on another – that there is no way to get from “A” to “B” in tiny steps, and only intelligence can make the jump.
Yes, that's the argument.
2) ID generally argues that the gradient is not nearly so powerful in generating novelty as it is generally attributed. It is not even always able to operate as efficiently as might be expected, even if a path can be shown to exist. Hence the knock-out experiments which introduce a simple deleterious mutation and see if an organism can recover its original functionality.
I think there is an analogy glitch here. I don't think "the gradient" is what "generates novelty". Perhaps there's a factor missing from your list. Imagine the version of the landscape image where fitness lies in wells, and the population flows down to meet it. If there are barriers in the way (ridges; long rising hillsides), the population will be trapped behind them. However, if there are merely flat plains in the way, what may slow things down (what I thought you were getting at with your rate of testing) is what you might regard as "viscosity". If the population doesn't change much, it will stick in a lump on the plain. However, if there is lots of novelty, in the form of genetic variation being created, then it will tend to "spread out" over the plain, by drift, and eventually find a sink. In other words, it's not the gradient that generates variety - the gradient is what is also called "natural selection", and it actually reduces variety, it doesn't increase it. What generates variety is the "RM" part, in old parlance - the degree to which neutral variants drift through the population, bringing parts of it, as it were, to the lip of downward slopes.
3) ID argues that biology, as we observe it, is “trapped” in the vicinity of existing fitness maxima. Everywhere we see optimized systems. Even in cases (such as bacteriological resistance) where there is some movement away from the “norm” it is achieved due to degradation and loss of function, and only in the face of a severe stress that upsets the current optimization. Remove the stress, and the “solution” tends to move back toward the original maximum. Nothing really new or useful has been created.
Well, "degradation and loss of function" are concepts somewhat alien to your nice model. Where do they fit? What they would seem to mean is that once you've embarked on the journey down towards a fitness well, you can't easily climb back up. But that's OK, isn't it? But then you say it can. I think you may be confusing phenotype with population here. The population can "explore" a temporary fitness sink. If it gets too bedded in (no longer interbreeds with the rest of the population, becomes, in fact, a separate species), and the fitness sink vanishes (fills in?) then you may get extinction. If it doesn't, then, as you say, it can go back up. The individuals can't, but we are talking about population movement here, not individuals. And this exactly matches observation. Highly specialised populations (Giant pandas?) are highly vulnerable to habitat change, whereas generalisers, like rats, are pretty invulnerable.
4) The rate-determining step in evolution in biology is the rate of single and (much less likely) double mutations. Here, again, Behe’s “Edge of Evolution” sums up this idea, using resistance to anti-malarial drugs as one of a number of examples.
Single mutations yes. Double mutations are irrelevant, if we consider drift.
The evolutionist’s response, quite reasonably, is to try to demonstrate that these objections are overstated or inconsequential.
ta-daa!!!!
I hope that is a fair-minded explanation of the problem — Elizabeth?
Very :) Cheers, Lizzie
Elizabeth Liddle
January 16, 2012 at 10:22 AM PDT
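The disagreement in this exchange can be made concrete with a minimal EA (a sketch, not either commenter's code). With a deliberately deceptive fitness function, the population settles on the local optimum (all zeros) and essentially never crosses the intervening fitness valley to the global optimum (all ones), which is SCheesman's points 1 and 3 in miniature; replace the fitness function with a smooth gradient and the same loop climbs straight to the top, which is Dr Liddle's water flowing downhill.

```python
import random

# Minimal EA sketch (illustrative, not either commenter's): bitstring genomes,
# truncation selection, per-bit mutation. The fitness function is deceptive:
# all-zeros is a local optimum, all-ones the global one, with a valley between.
GENOME, POP, GENS, MU = 20, 100, 200, 0.01

def fitness(g):
    ones = sum(g)
    return 2 * GENOME if ones == GENOME else GENOME - ones

def mutate(g):
    return [b ^ (random.random() < MU) for b in g]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                       # truncation selection
    pop = [mutate(random.choice(parents)) for _ in range(POP)]

# Almost always 20 (stuck on the all-zeros local peak), almost never 40.
print("best fitness:", max(map(fitness, pop)))
```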
One more point. You said: "Perhaps you will take up my challenge to produce the theory of protein design that doesn't require any form of incremental evolution." That's not my point. Design often requires or includes "evolutionary increments". It's just not limited to them. Prototypes are rarely a single "mutation" away from some previously existing device, but once you have a prototype it's time to tinker.
SCheesman
January 16, 2012 at 10:01 AM PDT
Petrushka: "Please explain what you mean by the claim that designers can 'skip steps.'" The point I am trying to convey is that designers are able to employ a number of tools not available to evolutionary algorithms, among them foresight, and the ability to create intermediate parts which have no purpose on their own until combined into a larger whole. In fact, the entire construction may be completely useless or non-functional until the last part is correctly installed.

My larger point, however, was not to argue any of the 4 points I raised above, but merely to try to shift the discussion toward some agreement about what properties might exist (or not exist) that would allow you to judge the accuracy with which an EA models real-life biology. If you read most of the discussion above (and below), much of it has to do with whether this or that statement is a valid objection or not. If you were completely unbiased in this discussion, what properties would you come up with to evaluate EAs as a model of reality? I gave my four suggestions, and tried to relate them to the larger debate. Do you see others? Do you at least agree that IF I were right about 1-4 then EAs as we see them would NOT be a valid model?
SCheesman
January 16, 2012 at 09:54 AM PDT
Here's an article describing how chains of behavior can evolve. It's not language, but as an experiment it's pretty elegant. http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000292
Petrushka
January 16, 2012 at 08:53 AM PDT
Perhaps you should reread my post, and specifically refute my points, which were well constructed. Summarized:
1. Gil says (in effect): here are the buzzwords (some terms) regarding some things I do in my work.
2. Gil refers to unidentified "engineering standards".
3. Gil says that Darwinists seem to know nothing of the above, and thus are not capable of running simulations.
4. I make several points as illustrations as to why Gil's little corner of experience gives him no cause to say that people not formally schooled in engineering or hard sciences cannot run simulations, especially since he is not formally trained as such himself.
5. I show that I myself have run simulations which have nothing in common with Gil's "protected categories" of buzzwords. I also maintained that it is possible that a Darwinian can run simulations that may have merit that Gil would not be able to understand.
6. I made no value judgement regarding simulations of biological evolution. Just for the record, I consider them bogus for the reasons which have been well covered in the I.D. blogosphere.
Now please refute my points if you would.
groovamos
January 16, 2012 at 08:38 AM PDT
Petrushka: The mere fact you can string together a paragraph in a single iteration is evidence that human designers can get around a disconnected function space. Or did you create the response above by beginning with a single letter or word and applying a process of single mutations and duplications?
The problem of language production is a bit above my pay grade. Both Chomsky and B.F. Skinner wrote extensively about it and both failed to provide convincing analyses. It is one of the central problems in artificial intelligence, and an unsolved one. But I would argue that language is not conceptually more difficult to explain than the origin of any complex chain of behavior, including the famous chains of chemical behavior illustrated in the cell videos. Please explain what you mean by the claim that designers can "skip steps." On an earlier thread there was a discussion of the invention of the light bulb, and many people seemed unaware that hundreds of years of incremental experimentation preceded the commercial bulb. Perhaps you will take up my challenge to produce the theory of protein design that doesn't require any form of incremental evolution.
Petrushka
January 16, 2012 at 08:34 AM PDT
Given a formally defined alphabet, syntax, semantics, and control as a means to drive the search for solutions towards improved utility, an algorithm is a formalism that performs optimisation of some kind or another. Nature does not solve, want, intend or choose anything. All of these are, observably, tasks that are formulated and solved by agents via what is called choice contingency, as opposed to chance contingency or law-like necessity. For these tasks to be solved one needs not physical laws but rules, arbitrarily defined and instantiated into (although totally independent of) physical reality. Failure to recognise that is a stopper to any discussion about origins, and in particular discussions about free will. Given a deck of cards, all evolution can do is shuffle or remove some of them. It can never produce new cards, simply because it has neither purpose, nor intent, nor foresight, and operates within a priori set bounds.
Eugene S
January 16, 2012 at 07:05 AM PDT
groovamos, Yes, mathematicians do write math books, and yes, engineering requires math. OK, you have done countless simulations, but that ain't what Gil is referring to; he is talking about simulations that allegedly simulate biological evolution. Have you written any of those types of simulations?
Joe
January 16, 2012 at 06:37 AM PDT
In any case, my point was not really to argue the validity of any of the points I raised with Petrushka and champignon (I expect them to disagree), but to show that these are valid points of argument. The form that Petrushka used in his response, I think, indicates that I have been successful, as he both raised a counter-argument and posed a question. I think it is impossible to have a really good discussion unless you can come to some agreement about where the areas of contention actually lie.
SCheesman
January 16, 2012 at 06:07 AM PDT
Petrushka: The mere fact you can string together a paragraph in a single iteration is evidence that human designers can get around a disconnected function space. Or did you create the response above by beginning with a single letter or word and applying a process of single mutations and duplications? Foresight is just one huge advantage. And yes, a designer can skip steps and make choices, champignon. What is your definition of intelligence?
SCheesman
January 16, 2012 at 03:52 AM PDT
That's because the designer is God really, really intelligent.
champignon
January 15, 2012 at 09:46 PM PDT
It would appear that Thornton and others have taken up the challenge of demonstrating the connectedness of functional space. What I fail to understand is how a designer would get around a disconnected functional space. All the actual inventions by humans that I am aware of are either incremental or involve horizontal transfer, or both.
Petrushka
January 15, 2012 at 09:11 PM PDT
"Darwinists have no idea what rigorous standards are required in the rest of the legitimate engineering and science world,” I can always tell within a few sentences when I’m reading something that will turn out to be written by Gil. There seems to always be a proud rendering of Gil the engineer, one without any formal training in engineering, and here he refers to some mystery collection of engineering standards. Many books for engineers (and scientists) have been written by mathematicians, for example: Kreysig's Advanced Engineering Mathematics also Probability And Statistics For Scientists And Engineers by Walpole and Myers. Funny how these non-engineers can teach engineers without reference to this mystery collection of standards. I’ve done countless simulations using SPICE based software tools, Verilog, and Mathcad, without incorporating any of the above elements listed by Gil. I’ve also written C++ simulations for some of the most difficult problems and these typically perform an iterative process that cannot be be conceptualized and setup for tools such as Matlab and Mathcad. Actually this is why Bell Labs invented C++, for the sole purpose of tackling immensely complex simulations needed for their immensely complex networks. In reality FEA simulators are mostly written in C++ because as the object oriented language of choice, it is naturally suited to the complexity of those classes of problems. What I think Gil does not understand is that C++ is suitable for tackling unimaginably complex problems that have nothing in common with FEA or anything with which Gil has familiarity. So it is entirely likely that a hyper-motivated Darwinist could master C++ and construct a simulation that escapes Gil’s above categories and likely will escape his understanding. While I’m at it, is it possible for Gil to relate to us how far you progressed in higher mathematics? Did you study vector calculus or differential equations?groovamos
groovamos
January 15, 2012 at 08:26 PM PDT
Gil, before you hijack your own thread, could you address the objections that commenters have raised to your claims about simulations?
champignon
January 15, 2012 at 07:05 PM PDT
Dear French Mushroom, My imagination is not fevered. If you want to talk about fevered imagination, let's put the abiogenesis/Darwinian-evolution thesis to the test. Please don't give me the hackneyed "Darwinism has nothing to say about materialistic abiogenesis" line. Darwinists clearly assume materialistic abiogenesis as the basis of their philosophy.

Fevered imagination is required to assume that dirt spontaneously self-generated and produced the first self-replicating cell, with all its highly functionally integrated information-processing systems, protein-synthesis machinery, and error-detection-and-repair algorithms. But the fever of the Darwinian imagination rises even further with the assumption that randomly-infused errors (whether filtered or unfiltered by natural selection) can magically transform that first (hopelessly improbable) self-replicating cell into Mozart.
GilDodgen
January 15, 2012 at 06:04 PM PDT
What is inconsequential from the teleological and ID viewpoints is the whole question of evolution, which seems to make totally-obsessive idiot non-savants out of otherwise thoroughly decent, journeyman scientists. If protons, neutrons and electrons developed from photons on issuing from the Singularity, or in any case, the point of Creation, ex nihilo (now scientifically confirmed, as reported, here), surely the subatomic particles that quantum mechanics is able to empirically study today, being of the self-same provenance as their 'advance party' trail-blazers, existed prior to the development of space-time? And belong to an altogether different dimension - or, rather, 'immension'. As the fundamental particles of physical matter, therefore, they indicate that as far as teleology is concerned, their configuration at the grosser level into vegetable or animal organisms is of sovereign irrelevance. Indeed, the same can be said in the matter of abiogenesis.
Axel
January 15, 2012 at 04:01 PM PDT
Exactly. I think we both agree that abstract numerical models, whether singly analytical or iteratively produced, are in no manner any grant of truth, adequate substitute, or replacement for empirical results. The stochastic mutation camp needs to pipe down in back until they can show unassisted and significant mutations that are unquestionably a case of macro-evolution on the lab table. And the agency-driven mutation camp needs to put a sock in it until biologists can demonstrate the same by human engineering practices.
Maus
January 15, 2012 at 03:17 PM PDT
I find it much more helpful to frame the issue in the following manner: Any evolutionary algorithm (EA) instantiated through a computer programme is necessarily searching a finite space of possible "solutions" defined by the parameters of that programme. This is no different than biology, as we see, being defined by a finite (though vastly larger) number of possible arrangements of DNA, and whatever higher-order structures encode the information necessary for the existence and replication of life.

The ability of the EA to move through that space to arrive at some "final" or "target" solution is dependent on a number of things which have or are expected to have direct analogs in real life; these are, chiefly:
1) The "connectedness" of the "viable" solutions.
2) A gradient in fitness exists between the solutions, such that one solution can be chosen among a number of alternatives.
3) Means exist to escape local maxima in fitness space in order to find yet more advantageous solutions.
4) The velocity at which the solution evolves is dependent upon the rate at which, from a given location, adjacent solutions can be tested and compared against the fitness of the current solution.

In any "successful" EA, I think you can show that the properties 1-4 exist; you can draw a road-map of solutions from the start to the end (connectedness), and the solutions produced are inevitably those possible for that programme. In fact you get from A to B as inevitably as water, flowing downhill, overcomes local obstacles. The rate and route can be adjusted by varying the fitness definition and gradients.

My opinion is that ID arguments saying EAs are an inadequate and indeed misleading model of real evolution should be concerned with showing that in real biology points 1-4 are not nearly as favourably present, briefly:

1) Solution space is not well-connected. This is basically the argument of irreducible complexity at one level, the argument of the sparseness of viable folded proteins among the combinations of DNA on another - that there is no way to get from "A" to "B" in tiny steps, and only intelligence can make the jump.

2) ID generally argues that the gradient is not nearly so powerful in generating novelty as it is generally attributed. It is not even always able to operate as efficiently as might be expected, even if a path can be shown to exist. Hence the knock-out experiments which introduce a simple deleterious mutation and see if an organism can recover its original functionality.

3) ID argues that biology, as we observe it, is "trapped" in the vicinity of existing fitness maxima. Everywhere we see optimized systems. Even in cases (such as bacteriological resistance) where there is some movement away from the "norm", it is achieved due to degradation and loss of function, and only in the face of a severe stress that upsets the current optimization. Remove the stress, and the "solution" tends to move back toward the original maximum. Nothing really new or useful has been created.

4) The rate-determining step in evolution in biology is the rate of single and (much less likely) double mutations. Here, again, Behe's "Edge of Evolution" sums up this idea, using resistance to anti-malarial drugs as one of a number of examples.

The evolutionist's response, quite reasonably, is to try to demonstrate that these objections are overstated or inconsequential. I hope that is a fair-minded explanation of the problem -- Elizabeth?
SCheesman
January 15, 2012 at 12:09 PM PDT
The concepts of simulation are a perennial source of confusion for Gil, as the following two threads from 2006 reveal. Plus ça change...

A realistic computational simulation of random mutation filtered by natural selection in biology

Gil has never grasped the nature of a simulation model
champignon
January 15, 2012 at 09:00 AM PDT
I think Gil may be mistaking engineering simulations of how a machine will perform in real life for the Darwinian models of evolutionary processes, which are not fundamentally "simulations" at all. They are actual examples of evolutionary algorithms in action. What we can do, however, is to use such models to see whether we can reproduce real-life behaviour. If we can, and if the match is precise, we have support for the model as a model of the real-life process.

For example, I work with learning models. They are not "simulations" - my models really learn. They can solve problems at the end that they couldn't solve at the beginning. What is particularly interesting is the mistakes they make while learning, and the way they adapt (or do not) to changed contingencies. If the pattern of errors and flexibility (or not) in the behaviour of the model is a good match to the pattern of errors and flexibility in the system I am modelling, I have support for my model as a model of that system.

In other words, we test the model as a model for the real-life system under investigation by comparing outputs. This is quite unlike the engineering context. In the engineering simulations you refer to, you are making a model of known mechanisms (hence your need for Young's modulus, etc.) in order to predict real-life behaviour. In contrast, we are testing a hypothesis about unknown mechanisms in order to find out whether our model is a good one for the mechanisms underlying our real-life behaviour. The two cases are very different.
Elizabeth Liddle
January 15, 2012 at 08:24 AM PDT
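For concreteness, a "model that really learns" in the sense described can be as simple as a delta-rule learner. A minimal sketch (not Dr Liddle's actual models): it acquires a reward prediction, and its trial-by-trial error trace (large early, shrinking with learning, then spiking when the contingency reverses) is the kind of output one would compare against real behaviour.

```python
# Minimal delta-rule (Rescorla-Wagner) learner, a sketch for illustration
# rather than Dr Liddle's actual models. It learns a reward prediction, and
# its trial-by-trial prediction errors are the output one would compare with
# real behaviour: large early, shrinking with learning, spiking at reversal.
ALPHA = 0.2                      # learning rate
v = 0.0                          # learned associative strength
rewards = [1] * 30 + [0] * 30    # contingency reverses halfway through
errors = []

for r in rewards:
    errors.append(r - v)         # prediction error on this trial
    v += ALPHA * (r - v)         # delta-rule update

print("early errors:   ", [round(e, 2) for e in errors[:4]])
print("around reversal:", [round(e, 2) for e in errors[28:34]])
```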
I know what all those terms mean as well, and so do most of the scientists I know who write evolutionary simulations. Your claims about evolutionary scientists are fallacious, and based on your own prejudices and lack of knowledge.
GCUGreyArea
January 15, 2012 at 08:05 AM PDT
