19 Replies to “The Consummate WEASEL”

  1. AussieID says:

    Thanks Atom,

    The phrase I submitted was, “DICKY DAWKINS DOES IT AGAIN”

    A Partitioned Search ‘discovered’ the answer in 103 searches;

    A Deterministic Search arrived at it in a swift 22;
    A Proximity Reward Search at a reasonable 4100;

    The Unassisted Random Search and the Proximity Neutral Search I left running whilst my class continued delving into their work. Well, whatdoyaknow? It’s still plugging away trying to find something to grab hold of!

    At this moment I think we should look to Weird Al Yankovic, and consider his words …


    Faces filled with joy and cheer
    What a magical time of year
    Howdy ho, it’s Weasel Stomping Day

    Put your viking helmet on
    Spread that mayonnaise on the lawn
    Don’t you know it’s Weasel Stomping Day
    (Weasel Stomping Day)

    All the little girls and boys
    Love that wonderful crunching noise
    You’ll know what this day’s about
    When you stomp a weasel’s guts right out

    So come along and have a laugh
    Snap their weaselly spines in half
    Grab your boots and stomp your cares away
    Hip hip hooray, it’s Weasel Stomping Day

    People up and down the street
    Crushing weasels beneath their feet
    Why we do it, who can say?
    But it’s such a festive holiday

    So let the stomping fun begin
    Bash their weaselly skulls right in
    It’s tradition – that makes it OK
    Hey everyone, it’s weasel stomping…
    We’ll have some fun on weasel stomping…
    Put down your gun, it’s Weasel Stomping Day

    Hip hip hooray
    It’s Weasel Stomping Day
    Weasel Stomping Day

  2. DiEb says:

    1. I appreciate the option to create your own fitness function.

    2. It would be helpful if the algorithms were labeled as in the paper by R. Marks and W. Dembski: this is the case for Partitioned Search, but not for Proximity Reward Search, which R. Marks and W. Dembski introduce as Optimization by Mutation.

    3. There should be a version for Optimization by Mutation With Elitism.

    4. It would make it easier to find literature on the algorithms if Proximity Reward Search was classified as a (1,n) EA (and Optimization by Mutation With Elitism as a (1+1) EA).

  3. DiEb says:

    Ouch: evolution strategies, not evolutionary algorithms. Make that (1,n) ES and (1+1) ES, respectively.
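    For anyone hunting for these strategies in the literature, here is a rough Python sketch of the two variants applied to a weasel-style search. The target, alphabet, and parameters are illustrative choices, not WeaselWare’s internals:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def fitness(s):
    # "proximity reward": number of characters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, mu):
    # each character is independently replaced with probability mu
    return "".join(random.choice(ALPHABET) if random.random() < mu else c
                   for c in s)

def weasel(n, mu, elitism=False):
    """elitism=False: (1,n) ES; elitism=True with n=1: (1+1) ES.
    Returns (queries, generations) needed to reach the target."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    queries = generations = 0
    while parent != TARGET:
        children = [mutate(parent, mu) for _ in range(n)]
        queries += n                  # one fitness evaluation per child
        generations += 1
        best = max(children, key=fitness)
        if elitism and fitness(best) <= fitness(parent):
            continue                  # keep the parent unless a child is strictly better
        parent = best                 # comma strategy: the best child always replaces the parent
    return queries, generations
```

    With elitism=True and n = 1 this is the (1+1) ES; with elitism=False it is the (1,n) ES, in which the best child replaces the parent even when it is worse.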

  4. Atom says:


    Thank you for the feedback. In coding the GUIs and writing up the documentation, my goal was to explain the ideas to as wide an audience as I could, in the simplest terms I could think of. (Obviously, I’m not always 100% successful in that.) That is why I refer to fitness functions as “Reward Functions”, to fitness evaluation and selection as “ranking of strings”, and so on. I thought Proximity Reward would explain in non-technical language exactly what the fitness function is doing: rewarding a string based solely on its proximity to a target. Hence my idiosyncratic labeling.

    The papers are aimed at a more technical audience, while I hope that lots of non-technical people, even high school kids, can make use of the GUIs. Could you perhaps write up an overview/tutorial on using the GUI, with follow-up references to the primary literature? If you write something good I can always link to it as a more in-depth exploration of the algorithms presented in the GUI. Just a thought, don’t feel pressured to do so.


  5. IRQ Conflict says:

    “Weasel Ware 2.0” Ha! Love it!

  6. computerist says:

    Very nice Atom, the GUI is excellent as well.

  7. idnet.com.au says:

    Great work Atom! I enjoyed watching the phrase “DAWKINS IS A SPIN MERCHANT” forming before my eyes.

  8. feebish says:

    Great fun. I used
    “Weasels Ripped my Flesh”
    Sorry it wouldn’t let me put an exclamation point on that sucker.
    What does “queries” mean, as distinct from “generations”? I found I wanted to know how many generations had passed in the proximity reward search.
    And what is the significance of the “medium number of queries” as opposed to the “query count”? Is it running more than once per “start search”? Sorry for the ignorant questions.

  9. DiEb says:

    1. Queries is not the number of generations but the number of children produced, i.e., the number of times the fitness function has to be evaluated.

    2. The “medium number of queries” is the median, as described on the weasel math page.

    If we ran a large number of simulations, half the number of queries needed to achieve success would be above the median and half below.

    It would be more satisfactory to have the expected value of queries, but the median can be calculated more easily.
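    The distinction can be seen with a quick Monte Carlo sketch in Python (a generic elitist search standing in for the GUI, with a short made-up target so many runs finish quickly): repeat the search, record each run’s query count, and take the middle value.

```python
import random
from statistics import mean, median

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "WEASEL"          # deliberately short so 200 runs finish quickly

def run_search(mu=0.1):
    """Simple elitist search; returns the number of queries (children) used."""
    def fit(s):
        return sum(a == b for a, b in zip(s, TARGET))
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    queries = 0
    while parent != TARGET:
        child = "".join(random.choice(ALPHABET) if random.random() < mu else c
                        for c in parent)
        queries += 1       # one fitness evaluation per child produced
        if fit(child) >= fit(parent):
            parent = child
    return queries

random.seed(0)
counts = sorted(run_search() for _ in range(200))
# Half the runs need fewer queries than the median and half need more;
# the mean is dragged upward by the occasional very long run.
print("median:", median(counts), "mean:", round(mean(counts), 1))
```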

    BTW, Atom, do you manage the weasel math page, too? Or do you at least have some influence there? If so, you could correct this phrase:

    First, let’s look at partitioned search used by Dr. Dawkins.

    It should be clear by now that partitioned search doesn’t resemble Dawkins’s weasel.
    And you could give the formulas for the expectations of the various weasels – it isn’t that hard 🙂

  10. Atom says:


    DiEb answered your questions correctly: queries is the total number of children. To get generations from that number, simply divide the total number of queries for a search by the population size you’re using. That will give you the generations so far.
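    In code, that conversion is a one-liner; the numbers below are hypothetical, just to show the arithmetic:

```python
def generations_so_far(total_queries, population_size):
    """Each child costs exactly one query, so generations = queries / population."""
    return total_queries // population_size

# hypothetical run: 10,000 queries with a population of 50 children per generation
print(generations_so_far(10_000, 50))  # → 200
```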

    I do not maintain the math page, nor the math contained therein. (I’m a coder not a math-er. :)) However, I may be able to pass along useful information to the person in charge of that page. Forgive my ignorance, but is there somewhere that has the weasel expectation formulas in closed form so I can suggest they update the page? If someone has calculated these already, it shouldn’t be hard to get them to include these new formulas on the page, I’d just have to give them a link and make the suggestion. (People like making easy changes, I’ve noticed.)

    Thanks for all your feedback.


  11. feebish says:

    DiEb and Atom: Thanks for the explanations. And my apologies for writing “medium number of queries.”
    Atom: I’d really like to see you add “Generations” to the list of collected data, even though it can be calculated. Better to just show it. Generations seems more relevant to me than queries (although I am so far from being an expert, it isn’t even funny).
    For example, if I choose 999 offspring (the maximum you allow, perhaps equivalent to an insect’s reproductive strategy) and a mutation rate of 2%, it took the program 25 generations to get the correct phrase. With offspring set to 4 (perhaps equivalent to an actual weasel) and a mutation rate of 2% … well, it hasn’t come up with the correct phrase yet. I’m still waiting as I type this. It’s up to 28,000 queries, or 7,000 generations, and still has 5 errors to correct.
    Interestingly, using these settings allows one to see frequent reversions. In fact, the target phrase was very nearly achieved fairly early on, and has actually been getting worse. This shows that your program does not latch individual letters, although the total fitness ratchets generally upwards (with occasional steps backwards). This is a good way to see the phenomenon of demi-ratcheting in real time.
    Anyway, good work, interesting to play with, and please add the number of generations needed to the data output.
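    Reversions of this kind are easy to reproduce outside the GUI. The sketch below is generic Python, not WeaselWare’s code, with parameters mirroring the 4-offspring, 2% run: the best child of each generation replaces the parent unconditionally, so the best fitness can drop.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def fitness(s):
    # number of characters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, mu):
    # each character is independently replaced with probability mu
    return "".join(random.choice(ALPHABET) if random.random() < mu else c
                   for c in s)

random.seed(2)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
mu, offspring = 0.02, 4
reversions = 0
for _ in range(5000):
    best = max((mutate(parent, mu) for _ in range(offspring)), key=fitness)
    if fitness(best) < fitness(parent):
        reversions += 1        # no letter latching: a generation can lose ground
    parent = best              # comma strategy: the best child always wins
print("reversions in 5000 generations:", reversions,
      "| current fitness:", fitness(parent), "/", len(TARGET))
```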

  12. DiEb says:


    No need for an apology! BTW, your combination (pop. size = 4, mut. rate = .02) takes ~890,000 generations on average.

    The number of queries is relevant to the running time of the program, and therefore one tries to keep this number low.

  13. Atom says:

    Hey DiEb,

    Do you have any posts on your blog (or anywhere else) where you go into the details of the Markov analysis you used to generate your graphs of expected generations and expected queries?


  14. DiEb says:

    Not pretty, but it works.

  15. Atom says:

    Thanks, DiEb. The code doesn’t look bad to me (but I don’t know R).


  16. DiEb says:

    Here you can find some thoughts on the partitioned search algorithm, including the calculation of the expected number of queries…

  17. FreshVoice says:

    Right on Atom!
    Keep killin’ it over there…. nice work!

  18. DiEb says:

    Perhaps you could pass this along, too:

    Dr. Dembski,
    speaking of errors: in your paper “Conservation of Information in Search – Measuring the Cost of Success”, could you correct the sign errors in equation (27) and on page 1057, left column, for Q?
    BTW, here you could take

    Q ≈ HarmonicNumber[(1-β)L]/μ

    as a more pleasing approximation (it fits better with

    Q ≈ N HarmonicNumber[(1-β)L]

    in the next section…)
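    For readers who want to plug numbers into the two expressions above, harmonic numbers are easy to evaluate; the parameter values below (length L, fraction β already in place, mutation rate μ, population N) are arbitrary placeholders, used only to show the arithmetic:

```python
from math import fsum

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return fsum(1.0 / k for k in range(1, n + 1))

# placeholder values: L = 28 characters, beta = 0, mu = 0.05, N = 100
L, beta, mu, N = 28, 0.0, 0.05, 100
k = int((1 - beta) * L)
print("HarmonicNumber[(1-beta)L] / mu :", round(harmonic(k) / mu, 1))
print("N * HarmonicNumber[(1-beta)L]  :", round(N * harmonic(k), 1))
```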

  19. DiEb says:

    Well, I wrote to W. Dembski and R. Marks. We’ll see what happens 🙂
