Uncommon Descent Serving The Intelligent Design Community

Tautologies and Theatrics (part 2): Dave Thomas’s Panda Food


(This also serves as a partial response to a formal request for a response fielded by the UDers' mortal enemies, the Pandas, specifically Dave Thomas in Take the Design Challenge!)

This is part 2 of a discussion of evolutionary algorithms. In (part 1): adventures in Avida, I exposed the fallacious, misleading, and over-inflated claims of a Darwinist research program called Avida. Avida's promoters claim they refuted Behe's notion of irreducible complexity (IC) with their Avida computer simulation. I discussed why that wasn't the case. In addition, I pointed out that Avida had some quirks that allowed high doses of radiation to spontaneously generate and resurrect life. Avida promoters like Lenski, Pennock, and Adami were too modest to report these fabulous qualities [note: sarcasm] of their make-believe Avidian creatures in the make-believe world of Avida. One could suppose they refrained from reporting these embarrassing facts about their work because doing so would have drawn the ridicule the project duly deserves from the scientific community.

In contrast to the spectacular computation theatrics of Avida, Dave Thomas of Pandas Thumb put together a far less entertaining but no less disingenuous “proof” of the effectiveness of Darwinian evolution. Every now and then, the Panda faithful need some food to help them sustain their delusions about naturalistic evolution. This food I call Panda food, and chef Dave Thomas cooked up a pretty nice recipe to feed delusions to the faithful Pandas at our rival weblog. Perhaps if Dave Thomas refines his Panda food recipes, he should consider opening a restaurant chain, and maybe he should call it Panda’s.

To frame what is at stake, I first introduce the idea of known explicit targets and unknown but desired targets. An explicit known target is a target which we can clearly see, describe, and precisely locate. An example of such a target is the bull's-eye an archer aims for.

We can build machines to help us hit such explicit targets. A good example of such an intelligently designed explicit-target hunter is the Infra-red Maverick Missile.

IR Maverick

When on a mission to destroy something like a tank, the aircrew tasked to fly the mission locates the explicit target (i.e., a tank) and then describes the target to the missile through the process of designation (a process analogous to a point-and-click on the aircrew's video screen). Upon launch, the missile employs a feedback-and-control strategy, very much akin to classical control theory, to home in on the target.

But those are examples of hitting explicit targets. What about unknown but desired targets? Let me call such targets "targets of opportunity". A target of opportunity is the kind of target we know only inexplicitly, but still seek after. A good example of such a target would be a deer in the forest during deer-hunting season. Hunters have a general strategy for tracking and hunting deer, but they don't know in advance exactly what their target will be (be it Bambi or Bambi's mother, for example). We don't know what kind of game we may or may not bag, just that we have a general idea of what we're striving after.

Does the military have human/machine systems with “target of opportunity” capability? Ahem. Even if I did know of such things, I’d have to deny the existence of such missiles like SLAM-ER Target-of-Opportunity Missile.

SLAM

In engineering and other human endeavors, many solutions can be thought of as the product of hunting down targets of opportunity. Sometimes, when we are confronted with a problem, we have a strategy that we know in advance will yield a solution, even before we explicitly know what the solution is.

A VERY simple case in point. Take the integers from 1 to 1000. The following question is posed to us, “what is the sum of these integers, 1 to 1000?” Do we have to know in advance what the answer is? Maybe, maybe not. I’ll cheat and give you the answer. It’s 500,500.

The important point is that even if you did not know the answer (the target of opportunity) in advance, you have well-proven strategies to find and hit the target. One such strategy would be to sit down with a calculator or spreadsheet and add the numbers from 1 to 1000. Another would be to write a computer program which adds them together. Yet another would be to write a genetic algorithm to find the answer. I'll provide several such examples at the end of this essay for you computer geeks out there! But the most important thing in hitting such a target of opportunity is that by intelligently designing the right strategy, one can hit a target of opportunity without the target being explicitly described. Get the picture?

Adding numbers is a very primitive example of hunting down a target of opportunity. A far more sophisticated example is finding the optimal design of a computer chip given certain constraints. The space of possibilities is extremely large, but engineers can program genetic algorithms (much as they build sophisticated calculators) to hunt down solutions on their behalf.

Back to the Pandas' challenge to me. To build their case, anti-IDers often need to equivocate and obfuscate the issues. Clarity is their enemy; confusion is their friend. Such was the recent offering by Dave Thomas of the Pandas in a long, tedious essay, Target? TARGET? We don't need no stinkin' Target!.

He shows how a genetic algorithm can hunt down a target of opportunity. But as I hope I've shown, such a thing is unremarkable! However, he hints that his program demonstrates that mindless forces can find such targets without intelligent design.

Dave employs equivocation and Orwellian doublespeak to argue his case. He takes a designed selection strategy and tries to pass it off as an example of mindless, undesigned forces which can magically converge on a target of opportunity. How does he promote his theatrical gimmick? Read what he says, and then read the challenge he poses to IDers:

Genetic Algorithms are simplified simulations of evolution that often produce surprising and useful answers in their own right. Creationists and Intelligent Design proponents often criticize such algorithms for not generating true novelty, and claim that these mathematical recipes always sneak the “answer” into the program via the algorithm’s fitness testing functions.

There's a little problem with this claim, however. While some Genetic Algorithms, such as Richard Dawkins's "Weasel" simulation, or the "Hello World" genetic algorithm discussed a few days ago on the Thumb, indeed include a precise description of the intended "Target" during "fitness testing" of the numerical organisms being bred by the programmer, such precise specifications are normally only used for tutorial demonstrations rather than generation of true novelty.

I have placed the complete listing of the Genetic Algorithm that generated the numerous MacGyvers and the Steiner solution, at the NMSR site.

If you contend that this algorithm works only by sneaking in the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed.

Thomas sneaks the answer in by intelligently designing a strategy which will find the target of opportunity. This sort of gimmickry is not much beyond the following illustration:

One kid goes up to another with a paint ball gun and shoots him, and says,

“Don’t get mad, I wasn’t aiming at you, I was aiming at the shirt you were wearing.”

bulls eye shirt

By giving the computer the correct strategy (like a method of adding numbers), one guarantees the answer (or target) will be hit, or at least nearly so. There are numerous strategies which will succeed, but they still must be intelligently designed. For the less technically minded readers, I hope what I've written so far gives a narrative explanation of what's really going on.

To get an idea of how easy it would be to give the wrong search strategy, consider a long sequence of driving directions. If even one occurrence of the word "left" is substituted for "right", or vice versa, the directions will fail. Without intelligence programming the selection strategy, the target would have been missed in Dave's program. However, Dave Thomas used intelligence to ensure a miss wouldn't happen, or at least would be less likely. He thus snuck the answer in after all, contrary to his denials.

In the post script, for the benefit of the technically minded readers, I’ll address the more technical details to help put all of Dave’s nonsense to rest.

Salvador Cordova

PS

TECHNICAL DETAILS

Dave’s Challenge:

If you contend that this algorithm works only by sneaking in the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed.

I’ll identify it plain and simple, and call his bluff. The major front loading is in how selection is made. With the wrong selection description, the wrong target of opportunity, if any, will be hit. Simple!

Dave counts on a bit of obfuscation to make his work unreadable. He chooses an antiquated computer language known as FORTRAN in which to make his demands. "Let's invite UD software engineers to read my hieroglyphics and invite them to show where I sneaked the answer in!" Sheesh.

That said, I will identify an important part of his barely readable code which, if removed, will cause the genetic algorithm to miss the target. The fact that this section is essentially irreducibly complex is testament that intelligent design was needed to enable the genetic algorithm to do its thing.

If any section is even slightly re-written in a mindless way, the program at best likely misses the target, and at worst fails even to compile into something functional. I'm sorry the following link will look like hieroglyphics to some, but of necessity I need to show it to call Dave's bluff. Here is one of the many places where Dave sneaks the answer in:

Dave Thomas’s Code Bluff

Does Dave Thomas doubt that I've identified where he snuck the answer in? How about we allow 5 random changes to the code segment I pointed to? Does he think such mindless modification can be introduced and the algorithm will still function? Do we think the GA will successfully hit the target (assuming the GA can even run) in the midst of 5 measly random changes? Will Dave run away from the fact that the above selection strategy needs intelligent design? Or will he represent that the above code segment came to be of its own accord, and that the selection strategy described by that code is the product of blind, mindless processes? Will he continue to insist that what he did is not sneaking the answer in?

The selection strategy in his program is anything but natural. Just because the terms Darwinian and selection are used in the argument does not mean intelligent agency is not permeating the entire project. Such labelings are doublespeak. If I went through and re-labeled everything "intelligently designed selection" vs. "natural selection", you'd get the real gist of what's happening!

All right, as I promised, I’ll now present several ways to add the numbers 1 to 1000 and get the answer 500500. With the exception of the first program, in each case the target answer will not be an explicitly stated target, but rather a target of opportunity which is hit via an intelligently designed hunting strategy.

The sample programs are written in the C language.

This program will give the explicit answer to the question, "What is the sum of the numbers from 1 to 1000?":

explicit.c

This program will give the answer to the question, "What is the sum of the numbers from 1 to 1000?" through a brute-force computation which involves adding all the numbers from 1 to 1000:

brute.c

This program will give the answer to the question, "What is the sum of the numbers from 1 to 1000?" through Gauss's closed-form formula, n(n+1)/2 (provable by mathematical induction):

gauss.c

This program will give the answer to the question, "What is the sum of the numbers from 1 to 1000?" through recursive addition of all the numbers from 1 to 1000:

recurs.c

This program will give the answer to the question, "What is the sum of the numbers from 1 to 1000?" through a genetic algorithm. The algorithm pairs up the numbers from 1 to 1000 (1 with 1000, 2 with 999, and so on). Rather than compute each pair's midpoint via a simple calculation, it takes a random number as a starting point, mutates it, and uses a fitness function to select between the mutant and the original number to give the current best midpoint estimate. The process is repeated with increasing refinement. Two times the sum of the midpoints then becomes the sum we are seeking. Snapshots of the algorithm's progress are given along the way. The following computational theatrics are akin to what Dave Thomas performed:

ga.c

PPS
I and my co-workers (while I was in school in the 90’s) worked on target recognition systems and simulations of missile guidance systems. Dave can feed the biologists at Pandas Thumb his Panda food, but half the UDers here have relevant engineering backgrounds to see through the charade. He could not have picked a worse thing to do than challenge the UDers to disprove the flimsy claims of his intelligently designed program.

Comments
[...] Thomas is in a bit of a tizzy over my humble offering: Tautologies and Theatrics (part 2): Dave Thomas’ Panda Food. He responds at Pandas Thumb with: Calling ID’s Bluff, Calling ID’s Bluff. I thought [...] Dave Thomas says, “Cordova’s algorithm is remarkable” | Uncommon Descent
Caligula, Thank you for your comments. This thread has now scrolled off the main page. As I post related threads, feel free to raise your points again. I think the readers will appreciate discussion of the issues you raise. Salvador scordova
Salvador: Just one additional comment, if you please. Although exact fitness calculations for wild populations over evolutionary time are complex beyond imagination, in Dave's simplified simulation (with only one selection factor) the fitness difference between any two organisms is quite trivial to calculate. I'm sure that Dave would be happy to comply if challenged. Could you, in turn, demonstrate that CSI, the fundamental quantity of Intelligent Design arguments, is computable in such a simplified case as this? Can you calculate the CSI contained by, say, the formal solution and a MacGyver of your choice? Can you, then, show us exactly how and where the same amount of CSI is hidden in Dave's code? This involves *more* than merely pointing at specific lines of code. It means actually calculating, with clear explanations, the amount of CSI hidden in the "sneaky" code lines. It would also be important to explain exactly how these bits were transferred into the solution. caligula
Salvador: I think biologists readily admit that there is no consensus on mechanisms of evolution, or at least on their relative importance. This is largely due to problems with measuring selective pressures. It is of limited use to be able to measure the *total* fitness of an organism, if we can't *calculate* this fitness from theory, i.e. if we can't break the measured total fitness into more interesting subcomponents (pressures due to sexual selection etc.) Certainly, this *is* a major problem for biology. However, I'm not sure that it is justified to single out biology as *the* problematic science. As you more or less admitted, we can hardly show experimentally that the laws of classical physics follow from Quantum Mechanics. We can, with great effort, demonstrate that QM makes successful predictions on the subatomic level. Similarly, biologists *have*, with great effort, made successful predictions on a wide range of issues from insect behavior to unicellular evolution. But it is next to impossible to explain the behavior of a physical system of macroscopic scale in terms of individual "quantums", just as it is next to impossible to explain the evolution of a population over evolutionary time in terms of individual selective pressures. (That is, if an *explanation* means something "fully detailed".) Also, I don't think Dave Thomas ever claimed that his program was a simulation of biological evolution. He only claimed that it is a demonstration of a blind algorithm producing CSI by exploiting cumulative selection. And I certainly think Thomas was successful in what he set out to do. He uses a very simple fitness test ("consume less energy to be more fit") which produces a diverse "family tree", the surviving leaves of which tend to be MacGyvers. Does this fitness function "hide" intelligent design, and is it highly "unnatural", whatever that is? Hardly. 
Although no natural population may have had to solve this specific Steiner problem (I wouldn't bet on it, though!), all natural populations probably have addressed the "minimize energy consumption" problem in one way or the other! caligula
Caligula, Thank you for taking the time to read through the articles I linked to and responding. But I must caution, what scientific enterprise can hope to survive if it cannot measure its most fundamental quantities? In other scientific disciplines, we can measure mass, charge, energy, power, position, time, dimensions (well, at least at classical versus atomic scales). But the inability to measure fitness, a quantity so fundamental to a theory, seems extremely disconcerting. If one can't measure fundamental quantities upon which a theory fundamentally relies, one then has to question the coherence of the theory in the first place as well as the ability to confirm it experimentally. Granted, we do have Quantum Mechanics where the unmeasurability of observables is par for the course, but at least the theory is stated in terms of what can't be measured! I find no comparable analogue for Natural Selection, and it is this fact that caused a few evolutionary biologists to jump ship. Because we have such a difficult time even measuring what's in the real world, isn't it a bit premature to say our computer fitness correctly models something which we aren't even sure exists? I'm fine with operational science (like electrodynamics, celestial mechanics, chemistry, engineering, etc.), but forensic "science" (like evolutionary theory) ought perhaps be put in another category because we don't have the same level of verifiability. I'm not immediately saying here that ID is the answer (even though that is my personal view, and I certainly promote it), but ID issues aside, shouldn't a greater degree of skepticism than has been practiced be welcome given the state of affairs as outlined by Lewontin and others? Salvador scordova
Salvador: I read Lewontin's paper. I think he's saying that calculating the total fitness difference between organisms is a very *complex* affair. On the other hand, he is *not* claiming that (a) selection is random, (b) fitness differences did not exist, (c) fitness differences were not, in evolutionary timescale, responsible for complexity (i.e. CSI) in organisms. Yes, this paper can be seen as criticism on various hypotheses from the ultra-adaptationist camp. In Lewontin's opinion, many theorists simplify things way too much, (a) by isolating single selection factors from the whole picture and (b) by not taking into account that things like changes in the population size can affect selection factors. He's not necessarily saying that ultra-adaptationist theories were *wrong*; he's claiming that their mathematical treatment is currently over-simplified. But this paper hardly helps *your* case here, because Lewontin is not giving any support to your position that natural selection is *chaotic*. Complex just isn't a synonym for chaotic. BTW. In the same paper Lewontin claims ("Getting there from here") that complex evolutionary pathways have been experimentally verified to exist, with functional intermediates. How about that? (Yes, his main point is that such pathways are a subset in a "maze" with many dead ends, but I don't think that is much of a problem for the theory of evolution. It is simply a partial answer to his own question: why only a tiny subset of all conceivable phenotypes has been realized in the history of life.) caligula
Caligula asked: And listen, exactly what do you think natural selection is?
I think it is doublespeak, and does not reflect reality at all. Selection, as Allen Orr pointed out, does not trade in the language of design. As Lewontin showed, it's rife with mathematical self-contradiction. I fixed up some of the links to Lewontin above. I highly recommend his Santa Fe Winter 2003 essay. Quite eye-opening regarding the utter futility of describing evolution in terms of fitness. Biology would be more appropriately described by function (an engineering perspective) versus fitness (a self-contradictory Darwinian paradigm). scordova
scordova: "Of course the programs fitness function was rigged, and it’s no less rigged than the Steiner-solving GA’s. That’s the whole point. Did Dave Thomas not know in advance his fitness function would have a chance of being marginally successful (in a MacGyver sense at least), or did he have some monkey code his fitness functions or describe the fitness function to him?" I'm speechless. ...would have a CHANCE of being MARGINALLY successful? Is *that* what makes it cheating? If only the program was guaranteed to *fail*, it would immediately become a valid demonstration of random mutations with ID-free selection at work? Listen, of course evolutionary algorithms and e.g. reinforcement learning are used exactly because we think they *might* succeed. But as you said yourself, all we know -- or hope mostly, although hopes turned true are the ones that get published -- is that these techniques have a *chance* of being successful. And indeed, they are favored in cases where we don't need to get a 100% accurate answer to a problem. Instead, we oftentimes want to get sufficiently accurate answers to a whole bunch of problems -- such as evaluating each of the possible states of the environment (e.g. valid game positions of a strategy game) w/o getting too many downright stupid evaluations. And listen, exactly what do you think natural selection is? Is it a monkey or a randomizer? Sure, selection pressures can and do change both in quantity and quality, but they are definitely not wildly *random* (because they are part of Cosmos instead of Chaos, for starters). And yet, that's exactly what you seem to be requiring from GAs in order for them to be "natural" in the quote above. If so, it is little wonder you think it can't produce the illusion of design. You are saying that a selection pressure favoring blindness should grow ears! caligula
(here is a response I posted to at Pandas Thumb)
[Dave Thomas wrote:] My challenge to Salvador and the UD Software Engineering Team is simple and straightforward: if the Target’s “shirt” (a stated desire for the shortest connected straight-line networks) is indeed as “CLOSE” to the “Target” itself (the actual Steiner Solution for the given array of fixed points) as you say it is, then you and your Team should be easily able to deduce the proper answer, and send it along. I’ll be waiting! See you next Monday, August 21st. - Dave
Thomas mis-describes the sense of my argument. The specification of a problem-solving STRATEGY and successfully implementing that STRATEGY will yield solutions equivalent to some or all of the solutions in the solution space (or maybe good enough). Thomas mis-describes my position again. Aiming for the shirt versus the person is like aiming to find the right strategy. That's what I meant. If he misunderstood for whatever reason, be it me or him, I hope this helps clarify the issue. With respect to my ga.c, of course I knew the program was rigged. I knew the search strategy would work. Searching for a strategy is like finding a sufficient aimpoint. I provided 4 inexplicit strategies: brute.c gauss.c recurs.c ga.c Each is a different strategy for hitting the same target. 4 different sets of driving directions leading to Rome from 4 different starting points, so to speak. Of course the program's fitness function was rigged, and it's no less rigged than the Steiner-solving GA's. That's the whole point. Did Dave Thomas not know in advance his fitness function would have a chance of being marginally successful (in a MacGyver sense at least), or did he have some monkey code his fitness functions or describe the fitness function to him? By the way, I'm honored to see Dave is effectively calling me a liar. cheerio guys, Sal PS computing Fermat points is a bit tedious; if I have the inclination I might provide them and finish off my speculation for a solution to his problem scordova
Here are the problems I've seen with the arguments:
"Avida promoters claim they refuted Behe’s notion of irreducible complexity (IC) with their Avida computer simulation."
Actually, no. What GAs show is that it is possible to create systems via evolutionary mechanisms where the removal of any component makes the entire system fail. Really, there are two classes of IC then: "Class A" systems which can be constructed incrementally, but (at some later stage) fail if one piece is removed, and "Class B" systems which cannot be constructed incrementally and fail entirely if one piece is removed. Simply pointing out that a biological system fails if one piece is removed doesn't tell you whether you're dealing with a class A system (which evolution can create) or a class B system (which evolution cannot create). Careful research is needed to differentiate between the two, and IDists want to jump to the conclusion that any IC system is actually a class B system. GAs illuminated the fact that not all IC systems are class B systems (as IDists would like to argue). By and large, Sal's general argument seems to be that GAs, whether or not there is an explicit target, employ a very specific and limited strategy to find a specific solution. What Sal misses is the fact that GAs are much more capable than he gives them credit for. First of all, Sal's "GA" (if you can call it that) is specifically set up to hone in on a solution to a problem that has one and only one solution. Actual GAs, on the other hand, are well known to be able to find completely different solutions (on different runs) to a single problem - they aren't secretly preprogrammed with the solution, hard-coded to hone in on that solution, or merely deriving the solution through a series of deterministic mathematical steps. GAs are moving through complex space and finding varieties of solutions that satisfy their goal.
"Is the selection process in Thomas’s code natural or intelligently designed?"
The selection process is intelligently designed. However, that isn't particularly relevant when you understand how GAs work. The selection process (or fitness function) is used to determine which organisms (real or digital) go on to reproduce. The descendent organisms are similar to the parents that they descend from. Thus, a selection process gets evolution moving in a particular direction. Since we want a particular outcome (e.g. shortest route between six points), we want each subsequent generation to be closer to that goal and so we allow the best ones to have children. It's not the goal that gets the organisms to evolve in a particular direction, but it's actually the survival differential that makes them evolve. The GA goal is simply used to determine who reproduces and who doesn't - in other words, it determines how the survival differential is applied across the population. In the absence of a goal, the organisms can evolve in a particular direction simply by having a survival differential - provided that the differential is non-random. In the real world, organisms are hunters, prey (by predators and micro-organisms), and competing for mates. This produces a somewhat stable (and non-random) survival differential that allows real-world organisms to evolve. Hence, it's really not necessary to have a goal for evolution to work. (Another way to look at it is to say that nature does have a goal, and that goal is to produce organisms that reproduce - of which survival is an important part.)
"Thank you for responding. Can you, for the benefit of the reader explain what would happen to this algorithm in the absence of 1. intelligent design of the selection process 2. intelligent design of the “creatures” such that they are amenable to intelligently designed selection"
GAs generally aren't used to create ecologies of organisms that need to eat, hunt, evade predators, or compete for mates. In the absence of a goal or any of those needs, GAs won't create anything at all because there's nothing to create a survival differential (no goal, no competition, no starvation) - everybody survives, everybody reproduces (on average) at similar levels, and that means nobody evolves. You don't need a goal to have GAs evolve, but you do need a survival differential (or more accurately a reproductive differential) in order to have evolution.
"Rather, this circuitous route serves the anti-design case by sneaking away the fine-tuning into the things you just listed: CPU, OS, GA engine, etc."
The CPU and OS aren't "sneaking" anything into the simulation. They are analogous to having a universe that works by laws. Being allowed to alter them is a little bit like saying that you should be allowed to alter fundamental forces of the universe and still have biological evolution work while you (for example) alter the binding properties of carbon, make hydrogen an unstable element, turn nitrogen into a noble gas, or increase or decrease the electromagnetic or gravitational forces by several magnitudes. While it would theoretically be possible to sneak things in via the GA engine, the engine is right there for everyone to see and scrutinize. It still works when nobody is pulling any sneaky business. The purpose of GAs isn't to answer the question of whether the universe is designed or not, but it illuminates the question of whether GAs can, in the presence of stable fundamental laws, create CSI. Under those conditions, the answer is "yes". Further, you can verify in the code that we are maintaining a relatively hands-off approach to the system and allowing a few simple rules (random mutation, selection, reproduction) to create our complexity. If there is a breach of this (for example, if we alter the organisms' genome by inserting pre-designed information), you are entitled to complain, but there isn't one. BC
"ID advocates….are saying that the solution is already implicitly defined in the statement of the problem" "By the way steveh, do you think that Dave Thomas has misrepresented my position as I have pointed out above?" I don't know why you are asking me specifically, but I would answer "no". As far as I can tell, you have never used those words exactly; The way to the goal is not specified by the original statement - the standard claim is the solution is implicitly defined in the design of the fitness function. However, the fitness function usually mirrors the original statement in some way, so the claims are arguably equivalent - in cases where the fitness function isn't modelled on the original statement you would accuse us of sneaking in new information which leads to the solution by a back door. In other words, the original statement specifies a goal, the fitness function provides a metric of how close a potential solution is to that goal. The FF follows from the statement, not from advanced knowledge of what the solution is. For example, here the problem is to "find the (total length of the) shortest network which connects a given set of points" and the corresponding fitness function returns a score which is high if the total length is small and the points are connected. You could, maybe, restate the original problem as: "Find the network which has the minimum S, where S is the total length of the network + (L, a large amount if any point is not connected to it, or zero otherwise). Any design is in the mapping of a fake genome to a potential solution, but that design doesn't indicate how to get to a good solution. Yes, I used the word "design". One can design something to mimic an undesigned process, so don't get too excited about that. steveh
caligula wrote: I have not read Dembski’s books. Although The Design Inference has even been translated into Finnish, I haven’t seen any of Dembski’s books at my local library
I appreciate your response. I was not trying to be provocative, but it's been my experience that most of the criticisms against CSI are misrepresentations of what CSI actually is. Some of the most common come from Perakh, Elsberry, and Dembski's former teacher Shallit. Part 3 of 3 of this series will deal with Elsberry and Shallit's misrepresentations of CSI. For CSI to be defined one needs: 1. a space of possible outcomes 2. a specification of a target within those possible outcomes 3. an actual event that coincides with the target It is notable that Shallit's GA for the travelling salesman problem didn't frame his refutation in terms of the CSI formalities, but equivocated on the definition of "bit"! For example, an MP3, a JPEG, or better yet a ZIP file has a certain number of bits associated with it. It can decompress into a larger file. Is that a violation of conservation of CSI? When talking about CSI, the conception of "bit" in the compressed state is not appropriate to the conception of "bit" in the decompressed state, even though from a computer storage standpoint the conception of bits is the same. From the standpoint of CSI, however, the conception of bit in each case is NOT the same. Thank you for visiting and responding to my questions. Given that you don't have access to Dembski's books, I will try to respond to your comments in light of those facts. I hope to have more comments later. In the meantime, you may want to familiarize yourself with this paper: Specification: The Pattern That Signifies Intelligence Salvador scordova
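The compressed-versus-decompressed point above is easy to see concretely: a compressed file occupies far fewer storage bits than the file it expands into, yet nothing is lost in the round trip, so the two bit counts measure different things. A quick demonstration (using Python's standard zlib; the sample data is arbitrary):

```python
import zlib

# A ZIP/JPEG-style compressed file stores fewer bits than the file it
# decompresses into, yet decompression loses nothing: the two storage
# bit counts are measuring different representations of the same data.
raw = b"ABCD" * 25_000                    # 800,000 storage bits
packed = zlib.compress(raw, level=9)      # far fewer storage bits

print(len(raw) * 8, len(packed) * 8)
assert zlib.decompress(packed) == raw     # lossless round trip
```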
(I notified Dave Thomas with the following at Pandas Thumb)
Dave Thomas wrote: ID advocates, since y’all are saying that the solution is already implicitly defined in the statement of the problem
By the way Dave, that does not represent my position or that of any IDer I know. I can understand perhaps how you may have come to that conclusion. You might even be tempted to fault my infelicitous expression of ideas for your horrid misunderstandings and misrepresentations and mischaracterizations of what IDers believe. I presume the last thing you would blame this mischaracterization on would be the Pandas' propensity to uncharitably characterize what IDers say....Fine! The point is, what you said in your opening post does not accurately represent what I or other IDers believe. I hope you'll post an addendum somewhere on this site conveying to the readers that what you said does not represent my position. If you have to sugar-coat it by arguing that you were misled because you couldn't decode what I was claiming, fine. But I request you withdraw your mischaracterization of what I believe. Salvador scordova
Salvador: A correction. You apparently are not denying Thomas has produced CSI. You are simply claiming that this CSI was brought in by ID and is somehow hidden in the fitness function. My point remains, though. How much intelligence does it take to know that conservation of energy is "beneficial"? That kind of simple rule -- as simple as the ones governing the formation of atoms, snowflakes, etc. -- will produce CSI similar to what Thomas' GA produced in zillions of *different* applications. Just *how* generic does the fitness rule have to be in order not to sneak in ID? caligula
Good (late) morning Sal, I believe I am familiar with most of Dembski's central claims (complex specified information; CSI only produced by intelligence; sometimes: information *in general* only produced by intelligence; explanatory filter; No Free Lunch, essentially claiming that selection does not work). I have not read Dembski's books. Although The Design Inference has even been translated into Finnish, I haven't seen any of Dembski's books at my local library. (I have borrowed and read most of the YEC books available, as well as one originally German ID/baraminology book called (in Finnish!) "Evolution - Critical Analysis" (Scherer, Junkers; Finn. trans. Matti Leisola)). I have read some of Dembski's free PDF publications. Would I like to read the books? Sure. However, I have a principle not to *buy* any creationist or ID publications; a principle I hope is acceptable as it does not mean unwillingness to become familiar with their arguments. How do I define CSI? I think Dawkins gave the definition in 1986 (The Blind Watchmaker), except he called CSI just "complexity", and made it clear that "complexity" involves information specified in advance. Practically anything in the physical world can be interpreted as "information". (However, we have to bear in mind that there is no *universal* interpretation which automatically maps any physical object or phenomenon to bits!) Complexity is something too improbable to originate by chance alone, provided that the complexity is specified in advance. As I understand it, you, Salvador, are now trying to claim about the CSI produced by Thomas: - information produced by a necessity such as a natural law is not CSI - Thomas' information was produced by a "hidden" necessity (the fitness function *statistically* limits freedom in such a fashion as to gradually evolve MacGyvers) Well, it sure does. But so does natural selection. So does *all* cumulative selection. 
It exponentially (albeit often only statistically) limits freedom in any search space, coming up with objects that look designed, and it does so amazingly fast compared to a mere chance hypothesis. If this appearance of design produced by cumulative selection does not count as CSI, then I wonder what does. You may insist that similar appearance of design in *nature* is, alone, CSI. Fine. But then please don't extend your claims to computer science. Anyway, you probably had more questions than the definition of CSI. Feel free to "interrogate". :-) caligula
Tom, you are a good man. Thanks to you and Salvador for a fantastic discussion. I, for one, appreciate it. Barrett1
Here is my correction A = (86.6025, 150) B = (313.3975, 150) C = Fermat point joining vertices 5, 6, 3 The solution is therefore asymmetric, contrary to my earlier speculation. There are other converse solutions. At least that is what I think. I had not looked too deeply into Steiner trees before last night. That is my best speculation so far. Assuming Dave Thomas used naming conventions in his FORTRAN program that accurately reflect what was there, I used that to identify the appropriate code snippet. Salvador scordova
Salvador, "I found a 6-vertex Steiner solution with only 3 points." The MAX number of Steiner points is 4 in this case. Tom English
I may need to make a slight correction in light of this: Euclidean Steiner Tree I found a 6-vertex Steiner solution with only 3 points. Salvador scordova
Tom, I will continue to review your points. Both Bill and others liked what you wrote at ARN and even here. The reason is, your technical criticisms are worlds above the misrepresentations I'm used to seeing. I mean, you actually give legitimate things to consider versus people like...well...I'll save the names for another time. :-) Let me think on what you said about the displacement theorem. regards, Salvador scordova
Scott @14: " Everyone, please see: https://uncommondesc.wpengine.com/index.php/archives/802 " Everyone, also please see the follow-up, https://uncommondesc.wpengine.com/index.php/archives/907, "C’est la Avida," which includes a discussion of Eric Anderson's peculiar use of the term "cumulative complexity" -- the source of substantial confusion in the original thread's comments. —————————— If someone uses the Newton-Raphson method to solve a system of nonlinear equations, and the solution requires a precision of better than 1 part in 10^150 to be fully specified, does this demonstrate that intelligence isn't required to produce CSI? (Assume that the system models some physical problem.) j
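As an aside on j's Newton-Raphson example: precision beyond 1 part in 10^150 is cheap to reach, because the iteration roughly doubles the number of correct digits per step. A minimal single-equation sketch (solving x^2 - 2 = 0 with Python's arbitrary-precision decimals; the equation is chosen purely for illustration):

```python
from decimal import Decimal, getcontext

# Newton-Raphson for f(x) = x^2 - 2: x_{n+1} = x_n - f(x_n)/f'(x_n).
# Quadratic convergence roughly doubles the correct digits per step,
# so about a dozen iterations already exceed 150 digits of precision.
getcontext().prec = 160
x = Decimal(1)
for _ in range(12):
    x = x - (x * x - 2) / (2 * x)

print(str(x)[:20])   # leading digits of sqrt(2)
```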
Sorry Scordova, there’s a lot of “above” and I’m dangerously past my bedtime. Let me know specifically what’s being misrepresented and I’ll get back to you tomorrow night (Europe time).
Well, thank you for visiting. The issue in question was Dave Thomas's assertion:
ID advocates....are saying that the solution is already implicitly defined in the statement of the problem
scordova
Sorry Scordova, there's a lot of "above" and I'm dangerously past my bedtime. Let me know specifically what's being misrepresented and I'll get back to you tomorrow night (Europe time). steveh
By the way steveh, do you think that Dave Thomas has misrepresented my position as I have pointed out above? scordova
steveh, I don't have a FORTRAN compiler; besides, that's only a code snippet, so it wouldn't work anyway. :-) scordova
Hmmm, why not just use the code here to get an instant solution: http://smartaxes.com/docs/ud/tautologies/bluff.txt ? steveh
ofro wrote: What I don’t understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program.
I'm afraid that isn't quite correct, because if you go to ga.c and do a text search for 500500, you won't find it. The solution was never explicitly stored anywhere. I appreciate your participation, however. Salvador scordova
steveh wrote: If I understand your solution correctly, one of 5c or 2c is unnecessary because 5 & 2 are already connected to the network by the other half of the solution. I imagine that would change the result
[update: see below for my revised guess] Salvador scordova
If I understand your solution correctly, one of 5c or 2c is unnecessary because 5 & 2 are already connected to the network by the other half of the solution. I imagine that would change the result. steveh
Something I should point out which Dave Thomas said:
but am especially interested in solutions by ID advocates, since y’all are saying that the solution is already implicitly defined in the statement of the problem
That is not my position nor that of any IDer I know. That would be a very bad misrepresentation on his part of the intent of my description of his work. He may have over-extrapolated my story about the kid with a paintball gun, but my little parable does not imply as a general principle that the solution is already implicitly defined in the statement of the problem. The solution is implicit only if: 1. a solution strategy exists to solve the problem 2. the solution strategy is implemented I can understand perhaps Thomas erring once to say, "the solution is already implicitly defined in the statement of the problem"; however, if he maintains that, I will have to protest that he is making a flagrant misrepresentation of my position and that of other IDers. Salvador scordova
From Steiner tree
For the Euclidean Steiner problem, points added to the graph (Steiner points) must have a degree of three, and the three edges incident to such a point must form three 120 degree angles. It follows that the maximum number of Steiner points that a Steiner tree can have is N-2, where N is the initial number of given points.
There is a triangle defined by 1,4,A where A is the nearest steiner point to 1 and 4. The triangle has dimensions (if I did not botch my trig): length 1,4 = 300 length 1,A = 173.2051 approx length 4,A = 173.2051 approx I did not bother running http://www.diku.dk/geosteiner/ to double check however...so do not hold me to my guess. [update: see below for my updated guess] Salvador scordova
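The trig quoted above checks out against the 120-degree condition from the Wikipedia excerpt: if points 1 and 4 are 300 apart and the nearest Steiner point A makes a 120-degree angle between its two incident edges, the isosceles triangle gives |1A| = |4A| = 150/sin(60°) ≈ 173.2051. A quick verification (the concrete coordinates here are illustrative, not taken from Thomas's program):

```python
import math

# If fixed points 1 and 4 are 300 apart and Steiner point A subtends a
# 120-degree angle between its edges to them, the isosceles triangle
# gives |1A| = |4A| = (300 / 2) / sin(60 degrees).
half_base = 300 / 2
edge = half_base / math.sin(math.radians(60))
print(round(edge, 4))   # matches the 173.2051 figure quoted above

# Verify the 120-degree angle at A for concrete (illustrative) coordinates:
p1, p4 = (0.0, 0.0), (300.0, 0.0)
a = (150.0, -half_base / math.tan(math.radians(60)))   # (150, -86.6025)
v1 = (p1[0] - a[0], p1[1] - a[1])
v2 = (p4[0] - a[0], p4[1] - a[1])
cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
print(round(math.degrees(math.acos(cos_angle))))   # 120
```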
scordova: “I never represented it to something natural, that exactly the point. I invite you then to comment on the naturalness of Dave Thomas’s simulation.” What I don’t understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program. I don’t know of anything in nature where an apparent evolution-like process already contains the target in some form. In my mind, the basic premise of Thomas’ program comes significantly closer to approaching a natural evolutionary process, although I will obviously not claim that it is the same in all aspects. The Darwinian process is one in which the phenotype of an individual of a population is occasionally modulated and tested for how it fares in a certain environment (the program does it one individual at a time while not caring about the many non-mutated individuals), and when a successful individual is identified, it is further modified. That is what I believe I see in Thomas' program but not in your example. So if I had to choose between the two, I would definitely give Thomas' program the vote of being more "realistic." ofro
Gosh, is FORTRAN still running out there? When is guidance unguided? Evidently when programmed by evolutionists. Without an intelligent agent knowing the end goal in order to provide a fitness selection criteria, the correct outcome would not have advanced at all towards a correct conclusion. GA may show some simple form of mimicry, as in weighted outcomes based upon input, but it does not lead to the macro-evolutionary scales being so vociferously vaunted without caution. If anything it shows how Design can utilize such internal mechanisms. There is no such thing as "fitness selection" in an unguided process. I'm not fully convinced yet of frontloading, but it's certainly more plausible than RM&NS. It appears evolutionists are trying to have it both ways. A guided process within an at-large unguided process. Variations of birds are allowable in an algorithm, but it would be confusing to then argue the bird will change into another form by random mutations. Everything we observe shows conservation. Duplication and repair. Feedback loops. The algorithm that exists allows for changes within a boundary - and this is the real fitness criterion that meets the survivability standard. Anything outside that fitness goes to extinction. But it will forever be a bird, no matter how some evolutionists on here may hate the statement of that truth - as being simple minded: A bird will be a bird. The butterfly is a wonderful example of morphological change. But even then we are looking at the "preprogrammed outcome" of the larval stage into adulthood. The end-form, genetic capacity is built in, is it not? From the beginning... This should teach us a lesson by observation - even morphological changes are pre-programmed. Observing nature and genetics, does each butterfly suddenly gain new genes? To try and mimic life with an unguided process can only get you unguided failures. Thus the fruitfly experiments. 
The pathway of lifeforms is towards conservation and repair of the genome, not change, and certainly not open to randomness without possible harm. This is more bluff, just like the PNAS paper quoted by Andrea. FORTRAN or not, without fitness tuning you will not reach anywhere close to your target, because it is the tuning that guides. Without guidance, there is no target to shoot for. The larvae would turn into a tree, a bat, a fish. No, what we see are tadpoles turning into frogs, caterpillars into butterflies. We see patterns genetically programmed for multiple-stage metamorphosis. We may one day in the future be able to preprogram some simple life forms with initial stages as cute as some caterpillars. It may be that with the input of an oak leaf, or an ivy leaf, or a particular flower, the colors change in the butterfly. Maybe temperature will gauge the outcome. Whatever our scientists create will be designed, reactive to external input, and choice-driven for best outcomes as measured for that particular and "measured" ecosphere. But it will still be designed. Whether 10 inputs or 10,000. Michaels7
Was that post supposed to contradict, somehow, what I posted here?
No. I felt, however, that if I didn't post it in its entirety I would not have done justice to you. That post was a big turning point because that was the first time someone of your stature said something Bill and I didn't absolutely cringe at! I wanted the readers to appreciate your contributions to the field of evolutionary algorithms. Salvador scordova
Salvador, Wow, did I really post that at ARN? Am I ever glad I got off caffeine! Seriously, I do regret the way I treated Bill Dembski and other IDists back then. Was that post supposed to contradict, somehow, what I posted here? Tom English
"CPU OS GA engine etc." You need those things when you write any sort of computer program (not just a GA). If you wanted to compute the total weight of a load of rocks, all of those elements would have to be present; all of that fine tuning would be necessary. One mistake in your FORTRAN program or the operating system or your computer hardware and the answer would be wrong or non-existent. Therefore you have proved beyond all doubt that a physical pile of rocks can't have a combined weight - it's just too complicated. steveh
Salvador, I don't think you are justified in invoking the Displacement Theorem. Would you please establish that the assumptions of the analytic framework of Bill Dembski's "Searching Large Spaces" are met in the present circumstance? If you regard what Dave Thomas has implemented as assisted search (Bill's term), then you must regard the fitness function as the assistant. Bill stipulates that the assistant knows the target (the set of solutions) at the outset of the search. Show me where, in Bill's paper, having a fitness measure on candidate solutions equates to knowing the target. Tom English
caligula claims: And in these areas, they are *demonstrably* wrong about their CSI claims, because they can be falsified in the world of mathematics and computation.
Caligula, If I may ask, how familiar are you with Bill Dembski's works? Do you have his books handy? This is a crucial question because, if you assert such things on this weblog, I am somewhat obligated to ask you to defend your claims. And I may invite you to do so mathematically. I apologize for the brusque treatment you've encountered here, and you've earned some respect in the eyes of the readers for the way you've handled yourself today. However, now that you've made that assertion, I will have to ask a few questions. Do you have a definition of CSI that you can make this claim for, and do you have Bill Dembski's books? Salvador scordova
mike: I wholeheartedly agree. Computers in the foreseeable future aren't going to simulate any past or present ecosystem of the Earth. Their computing power just can't match the required detail of development (genotype=>phenotype mapping) and all the various challenges to an organism's survival and reproduction. I have already admitted this here, and I have faithfully kept on saying it on the Finnish usenet group where I sometimes contribute. (This is not to say I find evolutionary simulations totally fruitless, however. On the contrary.) But the issue at hand with Dembski, Salvador and others does not only concern biological evolution, as I wrote earlier. It also concerns many of the most exciting fields in computer science. And in these areas, they are *demonstrably* wrong about their CSI claims, because they can be falsified in the world of mathematics and computation. You don't need millions of years to demonstrate it. As anyone literate can see, Salvador's last resort is more or less that everyone except ID promoters is *forbidden* to use math to support their claims. That would be sneaking ID into blind calculations, apparently. Really, that is all he is saying. I thank Salvador and others for their time. Regrettably, as the previous time I wrote to this blog, I've grown tired of reading how the moderator is in my opinion just trying to confuse both the discussion and perhaps some of the readers with totally irrelevant points. Of course, he already announced that I'm the one trying to create confusion, so you have my word against his. caligula
Tom, I hope I don't embarrass you by introducing you to the readers a bit through something you wrote at ARN regarding Dembski's displacement theorem: Tom responds to Bill. I hope you don't mind my quoting you, as I'd like the readers to have an appreciation for your background:
Bill, I am sincerely impressed by your mathematics. I have always been impressed by your talent for propaganda, and that is also plenty evident in the paper. As you are well aware, it was I who argued in 1996, five years before you published No Free Lunch, that NFL follows from conservation of information in deterministic search. I appreciate your apparent recognition of the importance of that insight, though I would appreciate it even more if you cited my work. I have suspected that you have avoided calling attention to a secondary result in my paper, which says that a random walk of the search space almost always finds an excellent point within a modest number of steps. Now it seems that I was right, because you have used almost the same math to argue that it takes an exorbitant number of steps to reach a search target. Again, you have cited neither my paper nor the source I cited, a paper by Joe Breeden. My concern is that you bias your presentation by omitting reference to similar work that leads to a conclusion essentially the opposite of your own. The difference between your "search is slow" and my "search is fast" result is elementary and important. You give the size of the search target in absolute terms, and I give it as a fraction of the size of the search space. Both approaches are valid under certain circumstances. For readers who will not see my paper, I'll mention that if you want to be 99.99% sure of obtaining fitness better than that of 99.999% of all points in the search space, a random walk of 921 thousand points suffices. To obtain fitness in the top 1% with 99% certainty, the random walk need visit only 458 points. Because the target size is specified as a fraction of the search space size, these numbers hold (with some proviso for ordering points with identical fitness) for all large search spaces. 
Note that the approach of defining search targets in terms of fitness quantiles implicitly acknowledges that the quality of a search result is a matter of degree, not all-or-nothing. In other words, if your objective is to obtain a point with fitness better than 99% of points in the search space, but you end up with one that is merely better than 98.999%, the experience is generally not traumatic. You, of course, address problems with all-or-nothing solutions. This, in and of itself, would be fine, but you play quite a trick in reintroducing graded fitness as a means for the beneficent Bob to supply information to the benighted Alice. To my knowledge, never before has anyone given satisfactoriness primacy over fitness. When there is a fitness function, a satisfactory solution is one that is sufficiently fit. You switch things around without comment. It is a very clever tactic, but not one that I respect terribly much. And it is also worth noting that when you first anthropomorphize Bob you appear to be taking a conventional approach to giving a concrete explanation. Few reviewers will realize that the teleology salesman has just stuck his foot in the door. In engineering, the objective is usually to find satisfactory, not optimal, solutions using acceptable amounts of time and space. Biologists who say evolution is an optimization process back off from that stance when you give them the option of calling it a "satisficing" process. In practice, the quality of a solution is rarely all-or-nothing, and the number of satisfactory solutions is generally increasing in the size of the problem instance. It is very interesting that you stipulate repeatedly that Alice must find one, and only one, protein. Why, precisely, is that, Bill? Why doesn't Alice search for any protein with certain functional properties? Why is Bob in love with a particular sequence of amino acids? Why doesn't Alice base the search on knowledge of existing proteins and their functional properties? 
You seem to be trumping up a case for teleology. A much more subtle and shrewd trick, which allows you to boost the case for the necessity of an external teleological assistant, is your assumption of a uniform distribution on the space of solutions. In prior work on the mathematics of search and optimization, the distribution of fitness functions has been assumed to be the average of all distributions, i.e., uniform. This sufficient condition for NFL induces a uniform distribution on the space of solutions, but the necessary and sufficient condition of a block-uniform distribution does not. The set of NFL distributions your framework does not accommodate is uncountable. Even for your all-or-nothing (binary, solution-or-not) fitness functions, the distribution of solutions may be far from uniform when the distribution of fitness functions gives NFL. Predicating a uniform distribution on the search space is particularly odd in the context of your protein example. For amino acid sequences, the universal distribution would be the natural choice. That is, there's a strong argument for exploring algorithmically compressible (simple) amino acid sequences prior to algorithmically random (complex) sequences. In practice, programs for evolutionary computation focus upon solutions with low algorithmic information, simply because their pseudorandom number sequences contain little information. In the important case of state-space search (say, by algorithm A* or iterative-deepening depth-first search), cheaper sequences of state-transforming operations are considered before more expensive sequences, and this implicitly defines a nonuniform distribution on the space of possible solutions. In short, I think your assumption of a uniform distribution on the search space is rarely useful. I should mention that I published work last year treating deterministic search algorithms as operators on probability distributions of fitness functions. 
I characterized NFL distributions as fixed points for all search algorithms, and showed that deterministic search preserves the nearest NFL distribution as well as the distance to that distribution. I also showed that randomization moves the distribution of search results closer to the fixed point, indicating rather clearly, I think, that randomization is a hedge against mismatch of the search algorithm and an unknown distribution of fitness functions, not a strategy for speeding search. I fixated on the Kullback-Leibler distance, and failed to observe that my main results generalize immediately to a large class of distance measures, including the metrics based on Lp norms. I believe this is related to your work. On a positive note, I think much of what you have done in the paper could be quite useful. It is not merely the IDists who speak vacuously of intelligence, but many of my friends in machine intelligence. You deserve credit for nailing down the term. Your formalization of information gain, a topic that has occupied me at times, is also quite good, I think. But you indicate that your framework accommodates most of prior theory and practice, and this is simply not so. And it is grossly manipulative for you to turn the search problem upside-down, without acknowledging you have done so, to beg the question of the existence of teleological processes in nature. Sure, once you have smuggled in the notion that a biological process has searched for and found a specific sequence of amino acids, you can argue that assistance must have come from outside the observable universe. So what? From a false premise, conclude anything. Best wishes, Tom English
scordova
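The random-walk figures in the quoted post (about 921 thousand points for 99.99% confidence of landing in the top 0.001%, and about 458 points for 99% confidence of landing in the top 1%) follow from a simple identity: the chance that n independent uniform samples all miss the top-q fraction of the space is (1-q)^n, so solving 1-(1-q)^n = p gives n = ln(1-p)/ln(1-q). A quick check:

```python
import math

def walk_length(confidence, quantile):
    """Real-valued n solving 1 - (1 - q)^n = p: the number of
    independent uniform samples needed to hit the top-q fitness
    quantile of the search space with probability p."""
    return math.log(1 - confidence) / math.log(1 - quantile)

# The two figures from the quoted Tom English post:
print(round(walk_length(0.9999, 1e-5)))   # about 921 thousand points
print(round(walk_length(0.99, 0.01)))     # about 458 points
```

Note the answer depends only on the quantile, not on the size of the search space, which is exactly why these numbers hold for all large search spaces.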
Salvador, "A genetic algorithm is like an instruction manual that tells the computer how to go about solving a problem. Genetic algorithms are good for solving only a limited set of problems." This is misleading. The genetic algorithm is a sequence of instructions for simulating evolution. One part of the simulation is evaluation of the fitness of all members of a population. For each individual in the population, a fitness function is applied to the individual. The fitness function is assumed to be defined, but it is not a part of the genetic algorithm itself. It may be thought of as modeling the environment in which the population evolves. The evolutionary simulation essentially does not "know" anything about the fitness function. The upshot is that to solve different problems with a genetic algorithm, you change the fitness function, not the genetic algorithm. A single genetic algorithm can be used to solve many different problems. "Furthermore, if the genetic algorithm is mis-programmed it won't work." This is simply not true. Implementations of genetic algorithms are in fact hard to debug, precisely because programming errors often do not stop them from obtaining solutions to problems. For instance, if an implementation mutates alleles at twice the rate it is supposed to, it is very hard to tell by watching the behavior of the implementation that something is wrong. "Thus, it is misleading to hint that genetic algorithms negate the need for intelligent agency somewhere in the pipeline." Use of an intelligently designed simulation does not imply that what is simulated is intelligently designed. Tom English
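The separation Tom English describes, where the engine stays fixed and only the fitness function changes, can be sketched in a few lines. This is a generic textbook-style GA, not Thomas's program; all names and parameter values are illustrative.

```python
import random

def genetic_algorithm(fitness, genome_len=20, pop_size=50,
                      generations=200, mutation_rate=0.05, seed=0):
    """A minimal GA engine over bit-string genomes. The `fitness`
    function is a parameter: handing the same engine a different
    fitness function solves a different problem, while the engine
    itself (selection, mutation, reproduction) is untouched."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = [[bit ^ (rng.random() < mutation_rate) for bit in p]
                    for p in survivors]            # reproduction with mutation
        pop = survivors + children
    return max(pop, key=fitness)

# Two different problems, one unchanged engine:
all_ones = genetic_algorithm(fitness=sum)          # maximize the number of 1s
zebra = genetic_algorithm(                         # maximize 0/1 alternations
    fitness=lambda g: sum(a != b for a, b in zip(g, g[1:])))
```

The engine never inspects what the fitness function rewards; it only ranks genomes by the scores it is handed, which is Tom's point about the fitness function modeling the environment.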
Caligula asked: You can mutate the hardware, the CPU, the OS or the GA engine all you like.
The issue is not mutating these things but showing how finely tuned they need to be in order to compensate for the randomness of the objects they select. The alternative is to fine-tune the objects one is selecting, but that would look too much like special creation. Rather, this circuitous route serves the anti-design case by sneaking the fine-tuning away into the things you just listed: CPU, OS, GA engine, etc. Then one can pretend those intelligently designed things aren't seriously affecting the results, when indeed they are. Thomas is piggybacking on the specified complexity of these artifacts and is not including them in his accounting equation. The displacement theorem helps put into perspective the amount of CSI anti-IDers are actually sneaking into the system. You asked how much fine tuning, and I gave you an answer in terms of the improbability of the object in question. I even showed 5 ways to reach the same answer for adding the numbers 1 to 1000. The genetic algorithm was the most circuitous and theatrical. The most conceptually simple was the brute-force method. But each strategy needs intelligent authorship. To give you an idea of what the displacement theorem means, consider writing a GA to solve someone's 100-bit password. On average one GA is no better than the next, in fact, no better than random chance. To presume that nature selects complexity is misguided. Orr pointed that out. John Davison will happily point out that the more complex creatures are the ones going extinct. Selection in the wild is barely a sustainer, and more the destroyer, of complexity. We see this supported empirically and theoretically. Orr unwittingly said it well: "Whether or not this kind of evolution is common, it betrays the fundamental error in thinking of selection as trading in the currency of Design." regards, Salvador scordova
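The 100-bit password example turns on what the fitness function is allowed to see. This sketch (illustrative names, Python) contrasts the two cases being argued: an all-or-nothing match gives selection no gradient, so any GA degenerates to blind search over 2^100 possibilities, whereas a graded count-the-matching-bits fitness quietly imports per-bit knowledge of the target and makes even a trivial hill climber succeed.

```python
import random

rng = random.Random(42)
password = [rng.randint(0, 1) for _ in range(100)]

# All-or-nothing fitness: every wrong guess scores 0, so selection has
# nothing to climb. A budget of 5000 random guesses succeeds with
# probability about 5000 / 2^100, i.e. effectively never.
def flat_fitness(guess):
    return 1 if guess == password else 0

# Graded fitness: counts matching bits, which builds per-bit knowledge
# of the target into the scoring. Now a trivial hill climber succeeds.
def graded_fitness(guess):
    return sum(g == p for g, p in zip(guess, password))

guess = [rng.randint(0, 1) for _ in range(100)]
for _ in range(5000):
    i = rng.randrange(100)
    trial = guess.copy()
    trial[i] ^= 1                        # flip one randomly chosen bit
    if graded_fitness(trial) > graded_fitness(guess):
        guess = trial

print(graded_fitness(guess))             # 100: target found
```

Which of those two fitness functions is the fair model of the situation is, of course, exactly what the two sides of this thread dispute.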
mike1962: “The point is, the fact that there was less than “vague routes” is true, but misleading. The selection system was highly tuned and very specific about what the final result would be.” Caligula: "Perhaps, but by the same token so is natural selection. As Dawkins has said: there are *vastly* more ways of not being alive than being alive. Which applies to any creation of natural selection (organisms or their substructures): no matter how many possible evolutionary pathways are favored by selection, there are vastly more pathways rejected by selection. *That* is what selection, and especially *cumulative* selection, is all about." I agree. The question left for me then is: is "natural selection", that is, selection by the (designed or nondesigned) environment, something capable of resulting in life as we know it in all its glory? Nobody knows, because nobody knows the initial conditions. Avida doesn't tell us anything we don't already know about process control, and it certainly cannot answer the big questions about life. Avida is a waste of time. mike1962
Yes, and imagine if large comets hit the Earth every ten years, and natural laws mutated every morning at breakfast time! We would hardly have complex life, if any life at all. But could you explain, Salvador, how all this has *anything* to do with the issue at hand? You can mutate the hardware, the CPU, the OS or the GA engine all you like. But please notice that by doing that, instead of mutating a population -- something which is perfectly relevant in this discussion -- you are mutating the very natural laws, all of the environment of the population, and the fitness challenge that we are *supposed* to apply to the population. And you are mutating them all at a fast pace in evolutionary time scale. In short, you are trying to step outside cosmos and enter chaos, because your theory is incompatible with the cosmos. I'm interested to see how many followers you have in this move. The same applies to your calculations in #28. I doubt they make "sense" to anybody but yourself. They certainly didn't have anything to do with what mike and I were discussing. It's as simple as this: how many possible solutions are there, and how big a portion of them are MacGyvers with decently short length? Also, how many possible evolutionary routes are there to the MacGyvers, as opposed to the number of all possible evolutionary routes? My claim is that MacGyvers are a vast group, but even then, all the other solutions vastly outnumber MacGyvers. This means that (a) the selection process allows *plenty* of freedom while (b) it still produces specified results. I'm interested to read mike's take, though. caligula
Really? Tell me, do you think if I interchanged line 8 with line 12 in Thomas’s code snippet that the system will still successfully guide to target? The point of GAs is to illustrate the power of imperfect self-replicators to find novel solutions to problems. If you remove the capacity for the imperfect self-replicators to exist, then the GA will not function. What is your point? franky172
ofro commented: I fail to see how your summation example comes remotely close to simulating a selection process in nature.
I never represented it as something natural; that's exactly the point. I invite you then to comment on the naturalness of Dave Thomas's simulation. scordova
Zapatero wrote: To claim that Thomas “sneaks” the Steiner shape into his program via the fitness function is about as absurd as claiming that Fermat’s Last Theorem “contains” Andrew Wiles’ 150-page proof.
Really? Tell me, do you think if I interchanged line 8 with line 12 in Thomas's code snippet that the system will still successfully guide to target? scordova
Caligula asked: BTW. Just for the benefit of all: since you seem to know that Thomas’ GA is “highly tuned and very specific about what the final result would be”, could you show us some calculations.
It would be on the order of random chance scanning the space of possible outcomes, i.e. if the solution space is improbable, then on average the likelihood of a selection force existing by random chance to reach it is even more remote. For example, if random chance will hit the Steiner solution 1 out of 10^1000 times, then on average the existence of a selection force to guide it to target is more remote than that. That was the conclusion of the Displacement theorem. This is readily apparent with the challenge I offered. Let some mindless change, to perhaps as few as 5 of the 1137 characters, be made in the code snippet I identified and let's see how frequently it will even compile, much less guide itself to target. There are small "comment section" islands which would be immune to change, but beyond that, such untuning would destroy Dave's program. scordova
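The fragility claim is easy to sanity-check in miniature. The sketch below uses a tiny stand-in program (not Thomas's actual 1137-character snippet), randomly changes 5 of its characters, and counts how many mutants still parse; most do not:

```python
import random
import string

random.seed(2)

# Mutate 5 characters of a tiny stand-in program at random and count
# how many of the mutants are still syntactically valid Python.
src = "def f(x):\n    return x * x + 1\n"
TRIALS = 200
ok = 0
for _ in range(TRIALS):
    chars = list(src)
    for i in random.sample(range(len(chars)), 5):
        chars[i] = random.choice(string.printable)
    try:
        compile("".join(chars), "<mutant>", "exec")
        ok += 1
    except (SyntaxError, ValueError):
        pass
print(ok, "of", TRIALS, "mutants still parse")
```

The exact count varies with the seed; the point is only that the overwhelming majority of random edits break even a toy program before any question of its function arises.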
Nick Matzke asked: Forgive me for being dense,
You are not dense. You're one of the brightest guys out there.
but where, exactly, did you “identify the precise code snippet where this frontloading is being performed”?
Go to Dave Thomas’s Code Bluff. You'll see it corresponds to a section in Thomas's code. This section of code sets up the criteria for determining how fit a solution is. In other words, this section of code induces the selection pressure to select out solutions. You will not see any explicit reference to the target in question. As I pointed out, the specification is essentially a strategy to hunt down the target. It is not as overt as Dawkins' weasel. Salvador scordova
mike1962: "The point is, the fact that there was less than “vague routes” is true, but misleading. The selection system was highly tuned and very specific about what the final result would be." Perhaps, but by the same token so is natural selection. As Dawkins has said: there are *vastly* more ways of not being alive than being alive. Which applies to any creation of natural selection (organisms or their substructures): no matter how many possible evolutionary pathways are favored by selection, there are vastly more pathways rejected by selection. *That* is what selection, and especially *cumulative* selection, is all about. BTW. Just for the benefit of all: since you seem to know that Thomas' GA is "highly tuned and very specific about what the final result would be", could you show us some calculations? To me it seems that there is a vast number of possible routes to any final result and a notable variety of final results, both in quantity and quality. (Some results giving about the same "length" have little in common in detail of structure, except that a human observer might call them "MacGyvers".) caligula
Nickm, Scordova indicated the following link for an example of the front-loading http://smartaxes.com/docs/ud/tautologies/bluff.txt As you say, it seems to be just measuring total length using Pythagoras. As I see it the length might show up, and be selected for/against indirectly, if this were a biological system. Eg if a creature had to eat in order to build and maintain long connections, or if it got slow, or easily damaged, if the total was large, then a long total length could be selected for by either starvation, being caught by a predator, or an accident without as much as turning on a calculator or even knowing the length. Also as you point out, a soap solution can get similar results without calculating anything. I suspect that if you were to try to mimic the soap-solution solution using a computer program it would be similarly complex. Also, Sal, we use powerful computers to predict the weather, yet somehow nature has always managed to work out if it was going to rain even before the invention of the computer. Even a dumb rock could 'calculate' its trajectory down a mountainside more accurately than a team of computer bods with the latest equipment. steveh
sagebrush gardener: Indeed, NN is not GA. There is no population, and the changes made into the network during backpropagation are not random. Why do I consider NNs relevant in this discussion? Because your original comment, as well as e.g. Dembski's claim that only ID can produce CSI, bring NNs into the discussion. ID, or at least Dembski and his supporters, are making a claim not only concerning biological evolution but concerning *all* blind algorithms, including AIs produced by various self-learning algorithms other than GAs. As for the human interference in backpropagation. You will see that backpropagation is a generic method for approximating *any* non-linear function. The backpropagation rule is carried out exactly the same way regardless of the function to be learned (i.e. the problem to be solved). The only difference, then, is the function to be learned. Sure, a human typically fixes the number of inputs and outputs to match those of the function to be learned. (Usually the number of all nodes in the network remain fixed during the learning process.) But this is simply practical rather than "major front-loading". As for the "desired output", as I said a human doesn't even have to *know* the desired output in cases like "reinforcement learning". Please see the link I gave earlier. As for "technical jargon". Discussion boards are a challenge, aren't they? Too many words and you're spamming, too few and you hide behind jargon. If allowed, I'm more than happy to discuss this issue thoroughly. Preferably by explaining some of the technical terms as needed and then making use of them for brevity. Fortunately, at least sagebrush seems to be able to learn about unknown terms on his own. caligula
Nickm: "As I understand it the genetic algorithm was simply selecting for shortest length. This is a very simple, low-specificity selection target, and yet the hits favored by this simple selection target end up being rather complex and hard to find by direct algorithms." Right. Any given set of waves on the ocean would be in the same boat. Nobody doubts that a variety of "complexity" can be built up by applying simplish selections to stochastic inputs. But there are quantifiable limits to the nature of the output given selection criteria and the allowable temporal orders that the selections are applied, etc. It is not an anything-goes proposition, by any means. So then, I think what we need to know is, if I have a selection criterion that will generate cogs, another selection criterion that generates springs, and another selector that generates pins, is it possible for the outputs to coalesce into a watch? I suppose it eventually could, if all the other selectors that may exist allow for it. So then, what does Avida show us that we didn't already know? Nothing that I can tell. The key questions about evolution on this planet (and universe) boil down to what the initial conditions were, and why they were the way they were. It's a holistic proposition. If the universe is actually deterministic down deep, then nothing is an accident, and all life was bound to exist just the way it has. Otherwise, not, but then we're left with something in nature that is genuinely non-deterministic, which is beyond reason. At any rate, Avida, and programs like it, if they are useful at all, will end up showing us that life on earth is necessarily the product of some very non-trivial selection criteria. Take that as an ID-friendly prediction. mike1962
To claim that Thomas "sneaks" the Steiner shape into his program via the fitness function is about as absurd as claiming that Fermat's Last Theorem "contains" Andrew Wiles' 150-page proof. zapatero
Forgive me for being dense, but where, exactly, did you "identify the precise code snippet where this frontloading is being performed"? As I understand it the genetic algorithm was simply selecting for shortest length. This is a very simple, low-specificity selection target, and yet the hits favored by this simple selection target end up being rather complex and hard to find by direct algorithms. And: please identify the front-loaded target in the soap film version: http://www.pandasthumb.org/archives/2006/07/target_target_w_1.html Nickm
scordova,
What is at play here is an abundance of technical jargon to confuse the issues.
I sometimes suspect that, but being not very bright myself I tend to give the challenger the benefit of the doubt and begin by assuming the he knows something I don't and that he is not merely blowing smoke. In the process of considering his challenge and doing my best to determine a.) whether or not it is accurate and b.) whether or not it is applicable, I often surprise and delight myself by learning something new. sagebrush gardener
Caligula: "Has any of the targets let alone even a *vague* route to any of the “targets” (the formal solution or the MacGyvers) been intelligently designed? Not at all." But what does all this prove then? As I see it, it demonstrates that stochastic inputs can yield certain "shaped" or selected output depending on a fitness algorithm designed to select increasingly desirable traits. This is certainly not news to anyone in the process control or AI world (such as myself.) I don't see how it benefits the Darwin camp. Nobody doubts the ability of an environment to select events that occur within it. Nobody doubts that unforeseen paths may be "trodden" on the way to ever increasing "fitness." For example, a fairly simple example of this is an HVAC air temperature control system where an algorithm (PID in this case) takes temperature inputs and attempts to control the heating and air units to achieve a stable temperature close to the target. (Not as easy as you think for a large space. Simple thermostats do a very lousy job of it.) The PID may be manually tweaked and tuned during this process (since humans know what they want to achieve), but the "route" (actual temperature fluctuations) taken in this process is infinite and unknowable at the onset with any high degree of precision. Each fluctuation relative to the air unit states provides useful information to the strategy of the PID (and the human who may need to tweak it if things get out of hand, or were poorly estimated at the onset.) The point is, the fact that there was less than "vague routes" is true, but misleading. The selection system was highly tuned and very specific about what the final result would be. In the end, it's not logically different than Dawkins' "methinks it is a weasel" program. In such systems, it's the selector that is all important, not the stochastic input. But is this how life came to be, and how it progresses in the formation of novel features?
Does Avida demonstrate anything other than a frontloaded system? No, despite the numerous paths that the input may take. mike1962
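The HVAC example above can be sketched as a toy simulation. The gains and the crude first-order room model below are all assumed for the demonstration, not taken from any real controller:

```python
# A minimal PID controller driving a crude first-order room model toward
# a 21 C setpoint. The gains (kp, ki, kd) and plant constants are
# illustrative; a real HVAC system would need tuning, as noted above.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

temp, outside = 15.0, 10.0
pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=21.0)
for _ in range(5000):
    # heater power is clamped to [0, 10], like a real air unit
    power = max(0.0, min(pid.update(temp, dt=0.1), 10.0))
    # room loses heat to the outside and gains heat from the heater
    temp += (0.05 * (outside - temp) + 0.1 * power) * 0.1
print(round(temp, 2))
```

The temperature trajectory itself is nowhere specified in the code; only the setpoint and the selection rule (the PID law) are, which is the point being made about selectors versus routes.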
1. How would the computer generate a selection process (laws of nature, environment, etc.) all by itself, left alone on the table? 2. As far as I can see, this question brings us all the way to abiogenesis. Yes, Thomas is definitely assuming a readily available entity roughly comparable to a cell or even a multi-cellular organism. This is beyond the scope of the question at hand, however. caligula
sagebrush gardener, What is at play here is an abundance of technical jargon to confuse the issues. The goal of a persuasive response, if the facts are on one's side, is to clarify and enlighten, not to use jargon to try to beat down the questioner. The art of programming is not very far removed from the art of writing an instruction manual, it's just more technical and rigorous. A genetic algorithm is like an instruction manual that tells the computer how to go about solving a problem. Genetic algorithms are good for solving only a limited set of problems. Furthermore, if the genetic algorithm is mis-programmed it won't work. Thus, it is misleading to hint that genetic algorithms negate the need for intelligent agency somewhere in the pipeline. Salvador scordova
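As a concrete (and deliberately trivial) illustration of such an "instruction manual", here is a generic GA sketch, not Thomas's code, whose fitness function merely counts 1-bits; selection plus mutation then drive the population toward the all-ones string:

```python
import random

random.seed(1)

# A minimal generic GA: evolve 20-bit strings where fitness simply
# counts 1-bits. The fitness function never names a target string,
# yet truncation selection reliably finds the all-ones optimum.
N, POP, GENS, MUT = 20, 30, 200, 0.05

def fitness(ind):
    return sum(ind)

def mutate(ind):
    # flip each bit independently with probability MUT
    return [b ^ 1 if random.random() < MUT else b for b in ind]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # keep the fitter half (elitism)
    pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

Note that every piece, the representation, the fitness test, the selection rule, and the mutation rate, had to be authored; mis-program any one of them and the loop solves nothing, which is the point of the comment above.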
caligula,
I would surely want to see you “tweak” by hand, say, the weights of a neural network “until it does” (give the “correct” results)! It’s a better idea to just let the NN learn the “target” using a blind and extremely generic method called “backpropagation”. How is a NN not self-learning?
Sorry to be thick, but are you saying that Avida is NN, or am I missing your point? I couldn't find a reference that indicated that Avida uses NN techniques. Also, you seem to be implying that "tweaking by hand" is not applicable to NN. Surely you don't mean that the output of NN is independent of the actions of the programmer, do you? My background is primarily in business programming and I claim no expertise in NN, but I did find this in an introduction to neural networks:
The Back Propagation NN works in two modes, a supervised training mode and a production mode. The training can be summarized as follows: Start by initializing the input weights for all neurons to some random numbers between 0 and 1, then: Apply input to the network. Calculate the output. Compare the resulting output with the desired output for the given input. This is called the error. Modify the weights and threshold for all neurons using the error. Repeat the process until error reaches an acceptable value, which means that the NN was trained successfully... [Emphasis added]
This seems to support my contention that a program's output is tweaked by the programmer to achieve a desired result -- even in Back Propagation NN. sagebrush gardener
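The quoted training loop can be shown end to end with a single sigmoid neuron learning logical OR (a textbook illustration, not Avida's internals): initialize random weights, apply input, compare with the desired output, modify weights using the error, repeat.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
desired = [0, 1, 1, 1]                   # logical OR

w = [random.random(), random.random()]   # "some random numbers between 0 and 1"
b = random.random()                      # the threshold (bias)

for _ in range(5000):
    for (x1, x2), d in zip(inputs, desired):
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)   # calculate the output
        err = d - out                              # compare with desired: "the error"
        grad = err * out * (1 - out)               # scale by sigmoid derivative
        w[0] += 0.5 * grad * x1                    # modify weights using the error
        w[1] += 0.5 * grad * x2
        b += 0.5 * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for x1, x2 in inputs]
print(preds)
```

Here the programmer does supply the desired outputs, the supervised mode described in the quote; the reinforcement-learning cases caligula mentions dispense with that, but the weight-update mechanics are the same kind of loop.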
Caligula responded: *Of course* it is intelligently designed.
Thank you for responding. Can you, for the benefit of the reader explain what would happen to this algorithm in the absence of 1. intelligent design of the selection process 2. intelligent design of the "creatures" such that they are amenable to intelligently designed selection Would you count on the system still hitting the target in question? In the absence of such intelligent design of the system, would you expect the system to generate CSI? scordova
Everyone, please see: https://uncommondesc.wpengine.com/index.php/archives/802 Scott
scordova: "....I request you answer this question. Is the selection process in Thomas’s code natural or intelligently designed?" I don't think that you are asking the right question. Of course, a human wrote this program, so it is (by definition, I hope) intelligently designed. The question is: does it properly model or mimic the intended process of some random variation followed by some sort of an evaluation step? ofro
Since no one pays any attention to me anyway, I will continue to let others speak for me whenever possible. "The struggle for existence and natural selection are not progressive agencies, but being, on the contrary, conservative, maintain the standard." Leo Berg, Nomogenesis, page 406, 1922 From Reginald C. Punnett's book - Mimicry in Butterflies: "Natural selection is a real factor in connection with mimicry, but its function is to conserve and render preponderant an ALREADY EXISTING likeness, not to build up that likeness through the accumulation of small variations as is so generally assumed." page 152, 1915, My emphasis as it is in perfect accord with the Prescribed Evolutionary Hypothesis. Neither Berg nor Punnett exist in the Darwinian literature. I don't believe Gould even mentions these two books in his magnum opus - The Structure of Evolutionary Theory. It constitutes a scandal unprecedented in the history of science. "A past evolution is undeniable, a present evolution undemonstrable." John A. Davison John A. Davison
Salvador: *Of course* it is intelligently designed. The fitness function is indeed intelligently designed. However, the fitness test is applied in such a way as to compare competing organisms for a quality considered a measure of "fitness", in this case the conservation of energy. Has any of the targets let alone even a *vague* route to any of the "targets" (the formal solution or the MacGyvers) been intelligently designed? Not at all. Thomas has no idea whatsoever by which routes any of the lineages in the algorithm proceeds. All he's assuming, or hoping, is that the winning lineage will hit upon a fairly good solution (hopefully at least a "MacGyver"). In this problem, they seem to do so. The problem is complicated enough for the end results to qualify as CSI. Whether the ever-changing fitness landscapes of biological evolution also produce such impressive results, is not proven by Thomas' work, of course. Evolution simulations are leagues too simple to be compared to real life. But Thomas' work does disprove Dembski's claim that only intelligence can ever produce CSI. My apologies if my rambling starts to annoy. From now on I'll try and restrain myself by at most responding to your comments addressed to me. caligula
caligula asserts: The point under discussion is, can gradual random mutations and selection ever produce CSI. This *can* be readily demonstrated by the program Thomas has described.
Before you spam this thread with any more misdirection, I request you answer this question. Is the selection process in Thomas's code natural or intelligently designed? Salvador scordova
Caligula asked: Are you saying that nature does *not* select for the functions seen in phenotypes?
No. Besides, "select" is double speak. Read Allen Orr's slamming of Daniel Dennett: Dennett's Strange Idea:
Dennett is fond of speaking of selection as leading organisms through “Design Space”: Selection “lifts” organisms along “ramps” of good Design. Although this imagery is often useful, it invites two subtle misconceptions about adaptation. The first is that natural selection cares about Design. In reality, selection “sees” only brute birth, death, and reproduction, and knows nothing of Design. Selection — sheer, cold demographics — is just as happy to lay waste to the kind of Design we associate with engineering as to build it. Consider the eyes of cave organisms who live in total darkness. If eyes are expensive to make, selection can wreck their exquisite engineering just as surely as it built it. An optic nerve with little or no eye is most assuredly not the sort of design one expects on an engineer’s blueprint, but we find it in Gammarus minus. Whether or not this kind of evolution is common, it betrays the fundamental error in thinking of selection as trading in the currency of Design.
Caligula: Are you saying the fitness tests made by an organism’s environment are *random*? No biologist would agree.
The organism's environment is random with respect to fitness, and fitness is also based on whims of what a human observer would deem "fit". Fitness is a non-rigorous conception. You may want to read Lewontin's article in Santa Fe 2003 to see that "fitness" is a highly suspect quality based on the observer's preferences. Evolutionary biology pretends to be objective; Lewontin's essay crushes that pretense. For example, one could easily conclude a lineage is more fit since it survived. Well, it may have survived simply because it was lucky. Thus, by definition it has a survival advantage because it has the quality of being lucky! See David Raup's book: Extinction: Bad Genes or Bad Luck scordova
ofro: Additionally, the code running GAs also simulates the very *laws of nature*. As the latter are hardly mutating around us in the real world, it would be very silly to mutate them in the computer, as per scordova's requirements. As I said, mutating the program as a whole is really like mutating the OS or the CPU or the motherboard. Of course, there have long been GAs where the organisms *themselves* are virtual code snippets. However, this kind of "genome" is hardly a general requirement for organisms used in GAs. The point under discussion is, can gradual random mutations and selection ever produce CSI. This *can* be readily demonstrated by the program Thomas has described. Provided that his fitness function does not "peek" into the future (i.e. peek at a pre-set "target"). Well, it doesn't. It checks whether the connectivity (basic functionality) is maintained, and whether the energy waste (road length) is *relatively* shorter than that on the competitors on every step. It's just as generic as any evolutionary arms race could be: conserve energy without losing a needed functionality. It does not, in any way, contain a definite path to a pre-set goal. In fact, the algorithm used by Thomas tends to find a local optimum which it can't escape by gradual changes, and the end results of separate test runs vary greatly. Almost all of the winners (the "MacGyvers") do *look* as if the winning lineage had "smelled" the basic principle of the problem, by not leaving any redundant road segments or by not using extra dots outside the polygon formed by the cities. This leaves the *impression* of design, but was clearly not front-loaded by human intelligence. I think Salvador is making an unsuccessful argument here. He'd be better off by trying to deny that the "MacGyvers" qualify as CSI, perhaps because they are too numerous to his taste to qualify as "specified". Maybe only the formal solution is CSI?
Of course, the problem is that Thomas' algorithm *does* occasionally come up with the global optimum. Also, the "MacGyvers" are *vastly* outnumbered by solutions that look like, well, just random noise. caligula
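A sketch of the kind of fitness test being described, not Thomas's actual code, looks like this: it scores only "stay connected, waste less road" and never names the Steiner shape.

```python
import math

# Score a candidate road network over fixed "cities" plus optional extra
# points: reward connectivity, penalize total road length. The target
# geometry is nowhere mentioned; only the generic criterion is.

def total_length(points, edges):
    return sum(math.dist(points[a], points[b]) for a, b in edges)

def is_connected(n, edges):
    # simple union-find over n nodes
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n)}) == 1

def fitness(points, edges):
    # lower is better; disconnected networks get a large penalty
    penalty = 0.0 if is_connected(len(points), edges) else 1e6
    return total_length(points, edges) + penalty

# four "cities" at unit-square corners plus one interior extra point
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
star = [(0, 4), (1, 4), (2, 4), (3, 4)]            # hub through the center
ring = [(0, 1), (1, 3), (3, 2), (2, 0), (0, 4)]    # perimeter ring
print(fitness(pts, star) < fitness(pts, ring))      # True
```

Whether such a generic score amounts to "sneaked-in" specification of the winning shape is precisely what the two sides of this thread dispute.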
(note to Bob OH, I had fixed the misspellings you suggested last night, thanks, so that's why your spelling suggestion comments didn't appear, but please feel free to post to this thread. Thanks for the help. Sal) scordova
1. Could you provide the intelligently designed answer to Thomas’ challenge, please? Failing to do that would, rightly, cast doubt on the essay above.
Go to http://www.diku.dk/geosteiner/ and grab the software and use it to answer Thomas's challenge. A good engineer would try to intelligently co-opt someone else's work first. :-) Salvador scordova
I don’t understand your argument, and certainly not the validity of your program with respect to the question asked. I have only very limited programming experience, but if I understand your ga.c program, you included the Gaussian solution in the program and then used some random process to zero in on that pre-determined solution. This would be equivalent to taking an explicit geometric solution to the given Steiner problem (I’ll let the experts tell me if there is one at all) and working your way towards that. More basically, I fail to see how your summation example comes remotely close to simulating a selection process in nature. If I understand it correctly, the whole process of checking for fitness is one of optimizing an outcome. I can’t think of a process in nature where a target like a sum needs to be obtained (somebody may mention checking the several quality control points during mitosis, but that is not an optimizing process, either). Thus I don’t see how approaching an explicit solution could be a genuine solution to the question posed. Also, your proposal to mutate the code makes no sense to me. The GA code is the software version of the behavior of the hardware called “enzyme” or “cell” or “organism”, plus a way of randomly changing this behavior and testing the outcome. Let’s take the example of an enzyme catalyzing a reaction X going to Y. The code is supposed to simulate how I can modify the enzyme such that the reaction output is optimized, i.e. either a (relative or absolute) maximum or minimum is reached. If I change the code like you proposed, I will either change the enzyme’s function so that it catalyzes a different reaction X going to Z or even P going to Q or, more likely, totally destroy the enzyme. I don't think that this was the purpose of the exercise. ofro
sagebrush gardener: I would surely want to see you "tweak" by hand, say, the weights of a neural network "until it does" (give the "correct" results)! It's a better idea to just let the NN learn the "target" using a blind and extremely generic method called "backpropagation". How is an NN not self-learning? NN/backpropagation is a generic method for automatically finding factors for the terms of a generic sum expression capable of approximating any non-linear function. (See e.g. Taylor polynomials; an NN is just a computational representation of such polynomials, and backpropagation is a blind technique for learning the factor for each term of the polynomial.) If you think that a human is always front-loading the learning process by providing the backpropagation algorithm with the correct outputs of the function to be learned, well that is simply not true. There are various techniques (bundled under the term "reinforcement learning") for finding good approximations for mathematical functions unknown to humans -- or simply too complex for the human mind to even comprehend. A neural network is perfectly capable of learning a function *without* ever being told any "correct" output produced by that function for a given input. See, for example, Gerry Tesauro's paper on "temporal difference" learning and how it was applied to the game of backgammon. This paper changed the whole backgammon hobby in the 90s. It changed not only how computers play the game but also how human grand masters play the game. As a programmer, you should have little trouble understanding how Tesauro's techniques work, and *why* they work without human interference. You probably need to know more than the basics of backgammon in order to fully appreciate this paper; but it's a great game well worth learning, I promise!
http://www.research.ibm.com/massive/tdl.html (Tesauro's paper on TD-gammon: backgammon AI based on temporal difference learning) http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html (introduction to reinforcement learning; TD in-depth; case studies) caligula
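The "reinforcement learning" point, that the learner is never handed the correct output, only a reward signal, can be illustrated with a toy two-armed bandit (a generic sketch, not Tesauro's TD-gammon):

```python
import random

random.seed(3)

# The learner is never told which arm is "correct"; it only sees a noisy
# 0/1 reward after each pull. Its value estimates nonetheless converge
# toward the hidden payout probabilities.
true_payout = {"A": 0.3, "B": 0.7}          # hidden from the learner
value = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for _ in range(5000):
    if random.random() < 0.1:               # explore occasionally
        arm = random.choice(["A", "B"])
    else:                                   # otherwise exploit best estimate
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]   # running average

print(value["B"] > value["A"])  # True
```

This is far simpler than temporal-difference learning over backgammon positions, but it shows the same structure: no desired output is ever supplied, only consequences.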
1. Could you provide the intelligently designed answer to Thomas' challenge, please? Failing to do that would, rightly, cast doubt on the essay above. 2. "The major front loading is in how selection is made. With the wrong selection description, the wrong target of opportunity, if any, will be hit. Simple!" Are you saying that nature does *not* select for the functions seen in phenotypes? Are you saying the fitness tests made by an organism's environment are *random*? No biologist would agree. If the fact that fitness functions are not random but fairly specific is what you call "front-loading", then your term seems rather empty. Nature quite blindly "front-loads" just as much as GAs do. Think especially of evolutionary "arms races". 3. It is unreasonable to require GAs to mutate the program wrapped around and running the GA. It is exactly the same as requiring GAs to mutate the very OS. That is because the OS and the GA engine together effectively constitute a virtual OS for the GA being run. They, among other things, provide the environment (i.e. the fitness function) for the organisms of the GA. Granted, requiring the environment to mutate randomly (and FAST on evolutionary timescale) would surely prevent us from gaining the target we are looking for. But it would hardly be a simulation of Earth or even cosmos; it would rather simulate chaos. What's the point in proving that chaos doesn't produce CSI? No one doubts *that*. caligula
Non-programmers may not understand that computers are just tools for automating the assumptions of the programmer. If the program doesn't give the correct results the first time, the programmer tweaks it until it does. Not surprisingly, programs designed with evolutionary assumptions demonstrate the success of evolution. sagebrush gardener
A good example of such a target of opportunity would be dear in forest during a deer hunting season.
Wives and girlfriends beware... Bob P.S. "implicitly" not "inexplicitly" Bob OH
