
Aurelio Smith’s Analysis of Active Information


Recently, Aurelio Smith had a guest publication here at Uncommon Descent entitled Signal to Noise: A Critical Analysis of Active Information. Most of the post is taken up by a recounting of the history of active information. He also quotes the criticisms of Felsenstein and English, to which we have responded at Evolution News and Views: These Critics of Intelligent Design Agree with Us More Than They Seem to Realize. Smith then spends a few paragraphs developing his own objections to active information.

Smith argues that viewing evolution as a search is incorrect, because organisms/individuals aren’t searching, they are being acted upon by the environment:

Individual organisms or populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely affording themselves of the opportunity that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”.

When we say search, we simply mean a process that can be modeled as a probability distribution. Smith’s concern is irrelevant to that question. However, even if we were trying to model evolution as an optimization or solution-search problem, Smith’s objection doesn’t make any sense. The objects of a search are always passive in the search. Objecting that the organisms aren’t searching is akin to objecting that Easter eggs don’t find themselves. That’s not how any kind of search works. All search is the environment acting on the objects of the search.
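As a toy sketch of this definition (my own example, not from the DEM papers): a search is just a probability distribution over the space being searched, and its performance is the probability mass it puts on the target. The points being searched stay passive throughout.

```python
# Toy example (not from the DEM papers): a "search" is a probability
# distribution over the space being searched; its performance is the
# probability it assigns to the target.

space = list(range(10))                      # the space being searched
target = 7                                   # the "needle"

# Null (blind) search: the uniform distribution over the space.
p_blind = 1 / len(space)                     # 0.1

# Alternative search: a hypothetical process biased toward the target.
weights = [1, 1, 1, 1, 1, 1, 1, 3, 1, 1]     # illustrative bias toward 7
p_alt = weights[target] / sum(weights)       # 3/12 = 0.25

# The points themselves do nothing; the distribution is the whole search.
```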

Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?

This is a rather strange comment. Smith quoted our discussion of Avida previously. But here he implies that we’ve only ever discussed Dawkins’ Weasel program. We’ve discussed Avida, Ev, Steiner Trees, and Metabiology. True, we haven’t looked at Wright’s paper, but it’s completely unreasonable to suggest that we’ve only discussed Dawkins’ “poor model.”

Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt.

It is true that a static fitness landscape is an insufficient model for biology. That is why our work on conservation of information does not assume a static fitness landscape. Our model is deliberately general enough to handle any kind of feedback mechanism.

While I’m grateful to Smith for taking the time to write up his discussion, I find it very confused. The objections he raises don’t make any sense.

243 Replies to “Aurelio Smith’s Analysis of Active Information”

  1. 1
    bFast says:

    Weird statement: “fitness landscape”. I have never heard an IDer speak of “fitness landscape” except with the preamble of “dynamic”. “Dynamic fitness landscape”, i.e., a landscape that is “chaotic, fluid … a kaleidoscope of constant change.”

  2. 2
    Carpathian says:

    I don’t think evolution can be modeled as a probability distribution.

    Imagine a dart board in a dark room into which players throw darts. They throw the darts but without a target in sight to aim for.

    If the dart board does not move, you should be able to model the probability distribution of the darts, but if it does move, how do you factor in the movement of the unseen board?

    Most importantly, there is still a winner who has had no idea of the target’s position.
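A sketch (toy numbers of my own, not from the thread): even an unseen, moving board still yields a well-defined probability distribution once you average over its movement, which is why the dart-board scenario remains modelable.

```python
import random

random.seed(2)

# Toy model (my own numbers): the board occupies one of 10 unseen
# positions each throw; the thrower aims blindly.
POSITIONS = 10
THROWS = 100_000

hits = 0
for _ in range(THROWS):
    board = random.randrange(POSITIONS)   # unseen movement of the board
    dart = random.randrange(POSITIONS)    # blind throw
    hits += (dart == board)

hit_rate = hits / THROWS
# Marginalizing over the board's movement, the hit probability is
# 1/POSITIONS, so the process is still a probability distribution.
```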

  3. 3
    Joe says:

    With Intelligent Design evolution, evolutionary processes are searches, actual active searches. With unguided evolution evolutionary processes are passive and if they happen upon a benefit, then so be it. All is well until they stumble upon whatever can eliminate them.

    Nature tends to the most simple. It peels away the unnecessary and leaves what it cannot peel away, or has not peeled away yet. IOW nature searches for the simplest solution. It doesn’t stumble upon, nor can it build via accumulation, the information required for basic biological reproduction: The cell division processes required for bacterial life– living organisms are irreducibly complex all the way down.

  4. 4
    Daniel King says:

    Nature tends to the most simple. It peels away the unnecessary and leaves what it cannot peel away, or has not peeled away yet. IOW nature searches for the simplest solution.

    I didn’t know that. Is there reason to believe that?

    In any case, what does that have to do with Intelligent Design? Is Nature the designer?

  5. 5
    Joe says:

    The reason is it always takes the line of least resistance. It can produce stones, even piles of stones but not Stonehenges.

    What it has to do with ID is that nature couldn’t be the designer.

  6. 6
    Daniel King says:

    The reason is it always takes the line of least resistance.

    Always? How can anyone possibly know that? You’d have to examine every possible situation to claim that.

    It can produce stones, even piles of stones but not Stonehenges.

    So? What does a human construction have to do with Nature’s inherent capabilities?

    What it has to do with ID is that nature couldn’t be the designer.

    WHY NOT?

  7. 7
    Joe says:

    Yes, always, so far. Just as structures like Stonehenge will always require an intelligent designer.

    Why not? For one there isn’t any evidence that it can be the designer. All observations and experience argue against it.

    Human construction shows what requires intelligent agencies to produce. It also shows nature’s limitations.

  8. 8
    Joe says:

    For example see- Chase W. Nelson and John C. Sanford, The effects of low-impact mutations in digital organisms, Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9

  9. 9
    ppolish says:

    Joe, Daniel, if you please…

    Bees, Beavers, Humans.
    Honeycomb, Dam, Stonehenge.

    Natural Design all? Humans transcend Nature?

  10. 10
    Daniel King says:

    Yes, always, so far. Just as structures like Stonehenge will always require an intelligent designer.

    True. Structures that human beings construct are constructed by human beings.

    What does that have to do with “Nature always taking the line of least resistance” or Nature not being the designer of living organisms?

  11. 11
    bornagain77 says:

    In regards to

    “nature couldn’t be the designer.”

    Daniel King asks

    “WHY NOT?”

    well, for a few examples, because,,,

    Human brain has more switches than all computers on Earth – November 2010
    Excerpt: They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: …One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.
    http://news.cnet.com/8301-2708.....2-247.html

    “Complexity Brake” Defies Evolution – August 8, 2012
    Excerpt: Consider a neuronal synapse — the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse — about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years…, even though it is assumed that the underlying technology speeds up by an order of magnitude each year.
    http://www.evolutionnews.org/2.....62961.html

    Map Of Major Metabolic Pathways In A Cell – Picture
    http://2.bp.blogspot.com/-AKkR.....way-1b.png
    A map of the entire human metabolic pathway – interactive map (high resolution)
    http://www.cc.gatech.edu/~turk.....thways.png

    “To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometres in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the portholes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.
    We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation. We would wonder even more as we watched the strangely purposeful activities of these weird molecular machines, particularly when we realized that, despite all our accumulated knowledge of physics and chemistry, the task of designing one such molecular machine – that is one single functional protein molecule – would be completely beyond our capacity at present and will probably not be achieved until at least the beginning of the next century. Yet the life of the cell depends on the integrated activities of thousands, certainly tens, and probably hundreds of thousands of different protein molecules.
    We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late twentieth-century technology.
    What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory which would have one capacity not equalled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours. To witness such an act at a magnification of one thousand million times would be an awe-inspiring spectacle.”
    Michael Denton PhD., Evolution: A Theory In Crisis, pg.328
    http://www.uncommondescent.com.....aturalism/

    Systems biology: Untangling the protein web – July 2009
    Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. “Combine all this and you can start to think that maybe some of the information flow can be captured,” he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. “The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent,” he says. “The simple pathway models are a gross oversimplification of what is actually happening.”
    http://www.nature.com/nature/j.....0415a.html

    Mr. King, you are certainly free to believe that unguided material processes can create all that unfathomable complexity, (since you, contrary to your materialistic belief system, actually do have free will to choose what you believe is true), but I certainly don’t find your blind faith in unguided material processes persuasive! Especially since no one has ever witnessed unguided material processes produce non-trivial functional information/complexity:

    It’s (Much) Easier to Falsify Intelligent Design than Darwinian Evolution – Michael Behe, PhD
    https://www.youtube.com/watch?v=_T1v_VLueGk

    The Law of Physicodynamic Incompleteness – David L. Abel
    Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”
    If only one exception to this null hypothesis were published, the hypothesis would be falsified. Falsification would require an experiment devoid of behind-the-scenes steering. Any artificial selection hidden in the experimental design would disqualify the experimental falsification. After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided.
    The time has come to extend this null hypothesis into a formal scientific prediction:
    “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
    https://www.academia.edu/9957206/The_Law_of_Physicodynamic_Incompleteness_Scirus_Topic_Page_

  12. 12
    Daniel King says:

    Human construction shows what requires intelligent agencies to produce. It also shows nature’s limitations.

    Human construction tells us only what humans can produce. How does that show Nature’s limitations?

  13. 13
    Mark Frank says:

    As we have Dr. Ewert’s attention I would love to hear his response to the problems I raised in a comment on AS’s post. I have repeated it here with a bit more detail.

    Converting probabilities to their logs sometimes blinds us to the fact that they are probabilities. So active information is defined as:

    active information = endogenous information – exogenous information

    which is another way of expressing the ratio of two probabilities:

    p = prob(success|blind search)

    and

    q = prob(success|alternative search)

    But somehow this ratio p/q gets equated to the probability of the alternative search happening.

    To do this requires:

    1) Treating possible searches as a random variable

    2) Selecting a way of enumerating possible searches (e.g. a “search” is defined as an ordered subset of all the variables to be inspected, so the set of searches is all possible ordered subsets)

    3) Using Bernoulli’s principle of indifference to decide all searches are equally probable within this space of all possible searches

    All of this seems to be assumed in your work rather than made explicit, and when made explicit it raises some rather fundamental questions. What is the probability distribution of searches? There are many ways of enumerating searches – how do you justify your choice? On what basis do you assume each one is equally probable?
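Points (1)–(3) can be made concrete with a tiny space of my own choosing (not from the DEM papers): enumerate "searches" as ordered subsets of the points to be inspected, then note what the indifference assumption would assign to each.

```python
from itertools import permutations

# Tiny illustration (my own example) of points (1)-(3) above:
# enumerate "searches" over a 4-point space as ordered subsets.
space = [0, 1, 2, 3]

# All searches that inspect exactly 2 of the 4 points, in order:
searches = list(permutations(space, 2))
n_searches = len(searches)            # 4 * 3 = 12 such searches

# Bernoulli's principle of indifference would then assign each of the
# 12 searches probability 1/12 -- exactly the assumption in question,
# since a different enumeration gives a different distribution.
uniform_prob = 1 / n_searches
```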

  14. 14
    Bob O'H says:

    When we say search we simply mean a process that can be modeled as a probability distribution.

    Aurelio Smith has already pointed out the problem with this, but to put some specifics on it, under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4’s would be a search. As would diffusion, if you want to look at something dynamic in time.

  15. 15
    logically_speaking says:

    Bob O’H,

    “Under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4’s would be a search”.

    I’m not sure that this is a good analogy.

    When rolling a die, you ARE doing a search in a sense, you are searching for any number between one and six (depending on the die!). What else would you be rolling a die for?

    Not to mention the actual intelligently designed die that has to be deliberately rolled to achieve a result.

  16. 16
    Joe says:

    Daniel:

    Human construction tells us only what humans can produce. How does that show Nature’s limitations?

    Nature cannot produce Stonehenges, Daniel. Forensics, archaeology and SETI all rely on our knowledge of cause and effect relationships. Demonstrate that nature can produce something and we cannot say some intelligent agency was required to do it.

  17. 17
    Joe says:

    I would love to see someone demonstrate how this “game of life” models unguided evolution. I know it won’t happen but it would be nice to see an evo put its money where its mouth is.

  18. 18
    Joe says:

    And something else that is very strange: evos, if they had something, wouldn’t bother with Winston’s paper or his response. They would just present the evidence that demonstrates the power of unguided evolution. They would show us how it is operationalized. They would show us its entailments and its power. They would model it.

    However they don’t even try. It’s as if they know they have nothing but to attack ID. Yet attacking ID will never provide support for their claims.

  19. 19
    Joe says:

    Mark Frank:

    Converting probabilities to their logs sometimes blinds us to the fact that they are probabilities. So active information is defined as:

    active information = endogenous information – exogenous information

    Can you please show us where that is in the paper?

  20. 20
    Joe says:

    There needs to be a “James Randi test” for evolutionism…

  21. 21
    Mark Frank says:

    #26 Joe

    Me:

    Converting probabilities to their logs sometimes blinds us to the fact that they are probabilities. So active information is defined as:
    active information = endogenous information – exogenous information

    Joe:

    Can you please show us where that is in the paper?

    To repeat AS’s quote and link with my emphasis

    From A General Theory of Information Cost Incurred by Successful Search

    In comparing null and alternative searches, it is convenient to convert probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information IΩ as –log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space Ω to locate the target T. We then define the exogenous information IS as –log(q), which measures the difficulty of the alternative search S in locating the target T. And finally, we define the active information I+ as the difference between the endogenous and exogenous information: I+ = IΩ – IS = log(q/p).
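The quoted definitions can be checked numerically with illustrative probabilities (my numbers, not the paper's):

```python
from math import log2

# Illustrative probabilities (mine, not from the paper):
p = 1 / 1024    # prob(success | blind/null search)
q = 1 / 8       # prob(success | alternative search)

endogenous = -log2(p)              # 10 bits: difficulty of the null search
exogenous = -log2(q)               # 3 bits: difficulty of the alternative
active = endogenous - exogenous    # 7 bits of active information

# Equivalently, active information is the log-ratio of the probabilities:
assert abs(active - log2(q / p)) < 1e-9
```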

  22. 22
    Joe says:

    Thanks Mark – I was looking in the “Active Information” paper, i.e. the wrong paper.

  23. 23
    Bob O'H says:

    logically_speaking – that wasn’t an analogy, it was a direct consequence of the definition!

    TBH, I think you are stretching the definition of a search – a search is for something, which is a subset of everything being searched. So, rolling a die to “search” for a number from 1 to 6 is bizarre, as the ‘search’ will always be successful first time around. So, in what sense is it a search, rather than an RNG?

  24. 24
    kairosfocus says:

    Bob O’H: Context. In a game where dice are used, the value on a toss will feed an outcome, and such an outcome may shape onward steps etc. E.g. starting at a random location, I can use dice tosses to guide steps in a random walk: on Red, 1–3 means that many steps backwards and 4–6 that many steps forward; on Green, similar but left/right. This would explore a space and constitutes a search, esp if there is a reward function based on where one lands. Thus, a prob distribution can be integral to or tantamount to a search. So, the partly blind chance driven search of a config space makes sense. And indeed in the introduction to the paper such a context is explored via a drone over a field of cups covering items. (A picture with hex packed pills is used to illustrate.)

    The basic point is that we have a reference search, take a flat random sample or a random walk (maybe with drift) etc. As we are under needle in haystack blind search circumstances, the target zones are maximally unlikely, and the other options are samples with a bias. But the blindness extends to the search for a golden or at least good search that plunks us down next to a target zone. That comes from a higher order space. If W possibilities are there, direct, the searches as samples come from a set of 2^W possibilities, making S4S (and higher order yet searches) plausibly progressively harder.

    So if a search drastically outperforms flat random, it is reasonable to see that it was not blindly chosen and/or does not act blindly. From this gap to be bridged we may infer info conveying an advantage, active info. And the degree of effect relative to a flat random blind search is reasonable as a metric. And, the information can be put in probabilistic terms. Cf here: http://www.uncommondescent.com.....formation/ KF

  25. 25
    Bob O'H says:

    kf – in a game of dice a search metaphor makes sense, but according to Ewert just rolling a die (for whatever reason) is a search.

  26. 26
    kairosfocus says:

    Bob O’H (& attn MF):

    Ewert spoke in a context, with three initial background sections for context in a 40+ pp paper. That context from outset is blind, needle in haystack search and the probability distributions relate to taking searches, which are samples of config spaces. And in particular, blind samples.

    Notice how the main body opens:

    All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted

    So, whatever infelicities of expression you may see or may think you see, that controlling context should be borne in mind.

    The probability distributions are in effect ways to address degrees of bias in samples, including samples based on an incremental search as is defined with reference to the search matrix which builds in a next step process.

    The issue is, how do evolutionary type searches outperform the yardstick, flat random sample blind needle in haystack search. The answer is, by input active information, such as obtains with say a warmer/colder signal pattern.

    In that broad context, different search strategies are effectively the same as differing probability distributions affecting sample choices.

    This also points to the case of search for a golden search, which puts you down on a target zone. Higher order searches for good searches are going to challenge you so that they will not — if blind — be likely to hand you a golden search.

    And, given the strong statistical constraints imposed by the needle in haystack situation, when a very-likely-to-succeed strategy finds a way to add in info that guides search, making the otherwise infeasible feasible, it is reasonable to take the performance gap as a measure of injected, bridging, active information.

    KF
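The warmer/colder performance gap described above can be sketched with a toy setup of my own (not from the papers): distance feedback turns an expected ~1000-query blind search into roughly a 10-query one, and that gap is what the active-information measure quantifies.

```python
import random

random.seed(0)

SPACE = 1000     # toy search space (my numbers)
TARGET = 742     # the "needle"

def blind_queries():
    """Blind search: uniform random queries until the target is hit."""
    n = 0
    while True:
        n += 1
        if random.randrange(SPACE) == TARGET:
            return n

def warmer_colder_queries():
    """Guided search: a warmer/colder (higher/lower) signal halves the
    remaining interval each query -- the source of active information."""
    lo, hi, n = 0, SPACE - 1, 0
    while lo <= hi:
        n += 1
        mid = (lo + hi) // 2
        if mid == TARGET:
            return n
        if mid < TARGET:
            lo = mid + 1
        else:
            hi = mid - 1

blind_avg = sum(blind_queries() for _ in range(200)) / 200
guided = warmer_colder_queries()
# blind_avg is on the order of SPACE; guided is about log2(SPACE) ~ 10
```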

  27. 27
    Bob O'H says:

    kf – if Ewert meant his definition within a context then hopefully he’ll clarify it here in the comments. As it is, his statement seems pretty unambiguous, with no suggestion that he means it within a certain context. I hope he’ll do this – it would be good to know precisely what he means by a search.

  28. 28
    Winston Ewert says:

    All of this seems to be assumed in your work rather than made explicit and when made explicit raises some rather fundamental questions. What is the probability distribution of searches? There are many ways of enumerating searches – how do you justify your choice? On what basis do you assume each one is equally probable?

    We don’t actually assume a uniform distribution. The contribution of “A General Theory of Information Cost Incurred by Successful Search” is to show that conservation of information still applies under a non-uniform initial distribution.

    The conclusion of conservation of information is that in order to produce complex life, the initial distribution of the universe must have been configured in such a way as to increase the probability of producing complex life.

    Is there a process that springs to mind that cannot be modeled as a probability distribution? This is taking the path of defining something so broadly that “search” means “anything”.

    So?

    I suggested a look at Sewall Wright’s paper as his approach is a classic attempt to describe gene combinations as a fitness landscape. He does not talk of environments as “landscapes”.

    I’m sure there is some merit in looking at Wright’s paper. But is he really doing anything that hasn’t been repeated in computer models?

    Agreed. I’d go further. If you model evolution in a truly static fitness landscape, there will be no evolution.

    Many computer models of evolution do indeed model a static fitness landscape and do in fact experience evolution of a sort. So either your prediction is utterly incorrect, or I’ve not understood it.
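    To make that concrete, here is a minimal model of my own (not any specific published simulation): mutation plus truncation selection on a completely static fitness function still produces rising fitness over generations.

```python
import random

random.seed(1)

# Minimal model (mine): a static fitness landscape -- fitness is simply
# the number of 1-bits -- with mutation and truncation selection.
L, POP, GENS = 20, 50, 200

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # flip each bit independently with the given probability
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
start_best = max(fitness(g) for g in pop)

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    keep = pop[:POP // 2]                      # truncation selection
    pop = keep + [mutate(g) for g in keep]     # refill with mutants

end_best = max(fitness(g) for g in pop)
# The landscape never changed, yet the population moved uphill.
```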

    Are you referring to “active information”? How does the idea of “the difference between the endogenous and exogenous information” help to address the dynamic, shifting interplay between a population of organisms and its niche?

    What I’m saying is that conservation of information merely requires that your search be a probability distribution. Your dynamic shifting process is still modelable as a probability distribution.

    Aurelio Smith has already pointed out the problem with this, but to put some specifics on it, under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4’s would be a search. As would diffusion, if you want to look at something dynamic in time.

    Indeed, all of those are searches.

  29. 29
    Mark Frank says:

    Winston #35
     

    We don’t actually assume a uniform distribution. The contribution of “A General Theory of Information Cost Incurred by Successful Search” is to show that conservation of information still applies under a non-uniform initial distribution.

    Which pdf are we talking about? You identify a search with a pdf, and your paper appears to show that the LCI holds even when that pdf is not uniform. But that is not my point. You conclude that finding a more efficient search has an “information cost” which seems to be identified with a probability of finding that more efficient search, i.e. the chances of success in the search for the search. This implies you must have some kind of pdf in mind for the space of all possible searches. That is the pdf I am questioning. Otherwise the pdf might simply assign zero probability to all searches that are less efficient than the improved search – which would certainly scupper the LCI.
     
    Nowhere can I find an explicit explanation of the pdf of possible searches, although I think you are assuming that each of those matrices which identify a search is equally probable.

  30. 30
    kairosfocus says:

    Bob O’H: All I am doing is pointing out the actual controlling context which the authors have a right to assume will be taken into account in reading. Text out of context = pretext is a classic problem of interpretation. KF

  31. 31
    kairosfocus says:

    Bob O’H:

    Let me follow up by clipping the opening words, verbatim:

    >> 1. The Search Matrix

    All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

    In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases. >>

    I trust the point about reading in context is clear enough.

    KF

  32. 32
    Joe says:

    Why can’t evolution occur in a static environment? Is Aurelio really suggesting that mutations will not occur in a static environment? Isn’t Lenski’s experiment a static environment?

  33. 33
    Joe says:

    Aurelio:

    If the environment is truly static, there will be no selective pressure.

    That is incorrect. There may not be any selection pressure once the population’s fitness is optimized, but there will be until then.

    So what does “truly static” mean?

    Variation appears due to well-understood processes of imperfect replication etc.

    Except it isn’t “well understood”. Basic biological reproduction is irreducibly complex and as such requires an Intelligent Designer.

    Isn’t Lenski’s experiment a static environment?

    Nope. It’s boom and bust.

    I think whether or not it is static is debatable.

  34. 34
    Mung says:

    Bob O’H.

    There’s no problem with the rolling of a die as a search.

    When you roll a die it’s an experiment.

  35. 35
    Mung says:

    Aurelio Smith:

    So, we’re right in reading DEM (yourself in association with Dembski and Marks) as “search” is synonymous with “probability distribution”?

    Could you be any more dense?

  36. 36
    Carpathian says:

    Joe:

    Basic biological reproduction is irreducibly complex and as such requires an Intelligent Designer.

    There is nothing to indicate that more sophisticated methods of biological reproduction did not arise through evolutionary pressure applied on simpler methods of reproduction used in the past.

    On the other hand, I have yet to see anyone write something describing how Intelligent Design methods could be applied to biology.

  37. 37
    Mung says:

    So, I’m right in reading DEM (yourself in association with Dembski and Marks) as “die” is synonymous with “search”?

  38. 38
    Carpathian says:

    Joe:

    Why can’t evolution occur in a static environment? Is Aurelio really suggesting that mutations will not occur in a static environment? Isn’t Lenski’s experiment a static environment?

    I think Joe is right. I don’t see what would stop a better configuration than the current one from developing in any given environment, even if that environment is static.

  39. 39
    bornagain77 says:

    Carpathian claims

    “There is nothing to indicate that more sophisticated methods of biological reproduction did not arise through evolutionary pressure applied on simpler methods of reproduction used in the past.”

    and yet the facts say something very different:

    http://www.uncommondescent.com.....ent-561891

    Carp goes on to state:

    “On the other hand, I have yet to see anyone write something describing how Intelligent Design methods could be applied to biology.”

    Here you go:

    “It has become clear in the past ten years that the concept of design is not merely an add-on meta-description of biological systems, of no scientific consequence, but is in fact a driver of science. A whole cohort of young scientists is being trained to “think like engineers” when looking at biological systems, using terms explicitly related to engineering design concepts: design, purpose, optimal tradeoffs for multiple goals, information, control, decision making, etc. This approach is widely seen as a successful, predictive, quantitative theory of biology.”
    David Snoke*, Systems Biology as a Research Program for Intelligent Design

    podcast: “David Snoke: Systems Biology and Intelligent Design, pt. 1”
    http://intelligentdesign.podom.....9_09-07_00
    podcast: David Snoke: Systems Biology and Intelligent Design, pt. 2
    http://intelligentdesign.podom.....0_01-07_00

    How the Burgeoning Field of Systems Biology Supports Intelligent Design – July 2014
    Excerpt: Snoke lists various features in biology that have been found to function like goal-directed, top-down engineered systems:
    *”Negative feedback for stable operation.”
    *”Frequency filtering” for extracting a signal from a noisy system.
    *Control and signaling to induce a response.
    *”Information storage” where information is stored for later use. In fact, Snoke observes:
    “This paradigm [of systems biology] is advancing the view that biology is essentially an information science with information operating on multiple hierarchical levels and in complex networks [13]. ”
    *”Timing and synchronization,” where organisms maintain clocks to ensure that different processes and events happen in the right order.
    *”Addressing,” where signaling molecules are tagged with an address to help them arrive at their intended target.
    *”Hierarchies of function,” where organisms maintain clocks to ensure that cellular processes and events happen at the right times and in the right order.
    *”Redundancy,” as organisms contain backup systems or “fail-safes” if primary essential systems fail.
    *”Adaptation,” where organisms are pre-engineered to be able to undergo small-scale adaptations to their environments. As Snoke explains, “These systems use randomization controlled by supersystems, just as the immune system uses randomization in a very controlled way,” and “Only part of the system is allowed to vary randomly, while the rest is highly conserved.”,,,
    Snoke observes that systems biology assumes that biological features are optimized, meaning, in part, that “just about everything in the cell does indeed have a role, i.e., that there is very little ‘junk.'” He explains, “Some systems biologists go further than just assuming that every little thing has a purpose. Some argue that each item is fulfilling its purpose as well as is physically possible,” and quotes additional authorities who assume that biological systems are optimized.,,,
    http://www.evolutionnews.org/2.....87871.html

    Systems Biology as a Research Program for Intelligent Design – David Snoke – 2014
    http://bio-complexity.org/ojs/.....O-C.2014.3

    On the other hand, presupposing everything is just a cobbled together series of accidents, as Darwinism does, has hindered research into biology with, (dogmatically held), erroneous concepts such as vestigial organs and junk DNA

  40. 40
    Carpathian says:

    bornagain77:

    Your quotes support the claim that biology has analogies to human design, but you have not shown a design methodology.

    The first question to ask if I’m going to design a biological species is, “What is the future going to look like?”

    If I don’t know the environment, what is my specific goal?

    Secondly, how many initial copies will I make? It has to be more than two and maybe less than a million, but how do I know that?

  41. 41
    Joe says:

    Carpathian:

    There is nothing to indicate that more sophisticated methods of biological reproduction did not arise through evolutionary pressure applied on simpler methods of reproduction used in the past.

    Just the evidence: The cell division processes required for bacterial life

    On the other hand, I have yet to see anyone write something describing how Intelligent Design methods could be applied to biology.

    There has been plenty written about it.

  42. 42
    Joe says:

    Aurelio:

    Evolution explains the how, not the why.

    Except that evolution has yet to explain the how. The only thing so far is we are told to be comforted by the fact that evolution did happen.

  43. 43
    bornagain77 says:

    Carp, so you want to jump directly from just trying to get a firm handle on studying, and understanding, the unfathomed complexity being found in biology to creating the unfathomed complexity of biology?

    Good luck with all that:

    Francis Collins on Making Life
    Excerpt: ‘We are so woefully ignorant about how biology really works. We still don’t understand how a particular DNA sequence—when we just stare at it—codes for a protein that has a particular function. We can’t even figure out how that protein would fold—into what kind of three-dimensional shape. And I would defy anybody who is going to tell me that they could, from first principles, predict not only the shape of the protein but also what it does.’ –
    Francis Collins – Former Director of the Human Genome Project
    http://www.pbs.org/wgbh/nova/t.....enome.html

    The Challenge to Darwinism from a Single Remarkably Complex Enzyme – Ann Gauger – May 1, 2012
    Excerpt: How does a neo-Darwinian process evolve an enzyme like this? Even if enzymes that carried out the various partial reactions could have evolved separately, the coordination and combining of those domains into one huge enzyme is a feat of engineering beyond anything we can do.
    http://www.evolutionnews.org/2.....59191.html

    Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator – Fazale Rana
    Excerpt of Review: ‘Another interesting section of Creating Life in the Lab is one on artificial enzymes. Biological enzymes catalyze chemical reactions, often increasing the spontaneous reaction rate by a billion times or more. Scientists have set out to produce artificial enzymes that catalyze chemical reactions not used in biological organisms. Comparing the structure of biological enzymes, scientists used super-computers to calculate the sequences of amino acids in their enzymes that might catalyze the reaction they were interested in. After testing dozens of candidates, the best ones were chosen and subjected to “in vitro evolution,” which increased the reaction rate up to 200-fold. Despite all this “intelligent design,” the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, “is it reasonable to think that undirected evolutionary processes routinely accomplished this task?”
    http://www.amazon.com/gp/product/0801072093

    Dr. Fuz Rana, at the 41:30 minute mark of the following video, speaks on the tremendous effort that went into building the preceding protein:

    Science – Fuz Rana – Unbelievable? Conference 2013 – video
    http://www.youtube.com/watch?v.....38;index=8

    Computer-designed proteins programmed to disarm variety of flu viruses – June 1, 2012
    Excerpt: The research efforts, akin to docking a space station but on a molecular level, are made possible by computers that can describe the landscapes of forces involved on the submicroscopic scale.,, These maps were used to reprogram the design to achieve a more precise interaction between the inhibitor protein and the virus molecule. It also enabled the scientists, they said, “to leapfrog over bottlenecks” to improve the activity of the binder.
    http://phys.org/news/2012-06-c.....ruses.html

    Engineering principles, not Darwinian principles, lead to breakthroughs in designing new, relatively simple, proteins!:

    Computer-designed proteins recognize and bind small molecules – September 5, 2013
    Excerpt: In conducting the study, the researchers learned general principles for engineering small molecule-binding proteins with strong attraction energies. Their findings open up the possibility that binding proteins could be created for many medical, industrial and environmental uses.,,,
    The researchers adapted a computational tool called Rosetta developed in the Baker lab to craft new proteins that would bind the steroid digoxigenin, which is related to the heart-disease medication digoxin.,,,
    After generating many designs for digoxigenin-binders on a computer, the researchers chose 17 to synthesize in a lab. Experimental tests led the researchers to hone in on the protein they called DIG10. Further observations revealed that the binding activities of this protein were indeed mediated by its computer-designed interface, just as the researchers had intended.
    To upgrade their overall design methods, the researchers then used next-generation deep gene sequencing to probe the effect of each amino acid molecular building block on binding fitness. Using this method, they were able to discover how various engineered genetic variations affect the designed protein’s binding capabilities. The binding fitness map gave the researchers ideas for enhancing the binding affinity of the designed protein to the picomolar level, tighter than the nano-level.,,,
    http://phys.org/news/2013-09-c.....cules.html

  44. 44
    Mung says:

    Aurelio, that you would think that DEM mean “search” to be synonymous with “probability distribution” says all about you that Winston needs to know. And from your own mouth.

  45. 45
    Mung says:

    Winston Ewert:

    When we say search we simply mean a process that can be modeled as a probability distribution.

    Aurelio probably thinks evolution is synonymous with Conway’s Game of Life.

  46. 46
    Mung says:

    Yes, Aurelio, I do bother to read the comments. That’s how I came up with the Game of Life reference. You see, I thought perhaps you just didn’t understand what the term synonymous means. So I performed an experiment.

    Here’s a suggestion. If you really want your ip unblocked act less like a troll.

  47. 47
    Carpathian says:

    bornagain77:

    Despite all this “intelligent design,” the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts.

    That is the problem ID has to get around. ID failed here to equal biology, which is exactly my point.

    ID is extremely difficult in that you don’t know what information you’re trying to put together for a given target.

    Evolution may be improbable in the sense that you cannot search for a target but ID’s problem is to define the required target before designing.

    Determining your “spec” for a design is more difficult than actual physical design.

  48. 48
    Carpathian says:

    How would one determine a better design for a predator in an environment that is 100 years off into the future?

    That is the missing information.

    Without an ability to foresee future environments you cannot design a solution.

  49. 49
    Joe says:

    Intelligent Design and evolution are NOT mutually exclusive.

  50. 50
    Joe says:

    Plurality of voices is one thing. But if our opponents want to be heard all they have to do is work on supporting unguided evolution- find a way to model it would be a great start.

  51. 51
    bornagain77 says:

    Carp, I believe God, who is omniscient and who created/creates time itself, is the Designer of the universe and of all life in it. Thus ‘far off targets’ in the future are child’s play for Him in His infinite knowledge since even time itself belongs to Him.

    Moreover, I never claimed that man was omniscient in his capacity as a designer.

    Apparently atheists are not so humble in their assessment of their own finite abilities since they, from my repeated debates with them, obviously think they know how to design things much better than God did.

    The role of theology in current evolutionary reasoning – Paul A. Nelson – Biology and Philosophy, 1996, Volume 11, Number 4, Pages 493-517
    Excerpt: Evolutionists have long contended that the organic world falls short of what one might expect from an omnipotent and benevolent creator. Yet many of the same scientists who argue theologically for evolution are committed to the philosophical doctrine of methodological naturalism, which maintains that theology has no place in science. Furthermore, the arguments themselves are problematical, employing concepts that cannot perform the work required of them, or resting on unsupported conjectures about suboptimality. Evolutionary theorists should reconsider both the arguments and the influence of Darwinian theological metaphysics on their understanding of evolution.
    http://www.springerlink.com/co.....34/?MUD=MP

    “atheists have their theology, which is basically: “God, if he existed, wouldn’t do it this way (because) if I were God, I wouldn’t (do it that way).”
    http://www.evolutionnews.org/2.....85691.html

    On the Vastness of the Universe
    Excerpt: Darwin’s objection to design inferences were theological. And in addition, Darwin overlooked many theological considerations in order to focus on the one. His one consideration was his assumption about what a god would or wouldn’t do. The considerations he overlooked are too numerous to mention here, but here’s a few:,,,
    http://www.uncommondescent.com.....ent-362918

    “One of the great ironies of the atheist’s mind is that no-one is more cock-sure of exactly what God is like, exactly what God would think, exactly what God would do, than the committed atheist. Of course he doesn’t believe in God, but if God did exist, he knows precisely what God would be like and how God would behave. Or so he thinks”,,,”
    Eric – UD Blogger

  52. 52
    DiEb says:

    Dear Winston,

    I understand that it is necessary for the conclusions of DEM that searches can be modeled as probability distributions.

    The first attempt to model searches generally that way was in the paper “The Search for a Search” – and it failed.

    I have trouble understanding how the probability distributions introduced by the algorithm in “A General Theory of Information Cost Incurred by Successful Search” model the underlying searches in any meaningful way.

    Take, e.g., the natural numbers 1..100 as search space, and “distance to a target” as fitness function. Knowing this, I can construct an initiator, a terminator, an inspector, a navigator, a nominator, and a discriminator. Using those for the target {1}, I may get a probability distribution of P(S=1)=1, P(S!=1)=0 – my search will find this target every time.

    What conclusions can I draw from this model? Nothing meaningful – for example: what happens when the target is {2}? The probability of finding this target could be 1, could be 0, could be anything in between. This “model” doesn’t distinguish between a complete search and a search which will always return “1”…

    And frankly, if my target is {1} – what is the big difference between a search represented by P(S=1) = 9/10, P(S=50) = 1/10, otherwise 0 – and P(S=1) = 9/10, P(S=51) = 1/10?
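DiEb's example can be made concrete with a minimal Python sketch (the `hill_climb` and `constant_search` functions are hypothetical illustrations, not DEM's algorithm): with target {1}, both induce the same distribution P(S=1)=1, yet they behave completely differently once the target moves to {2}.

```python
import random

SPACE = list(range(1, 101))  # search space: the natural numbers 1..100

def fitness(x, target):
    # "distance to a target" fitness, as in the comment's example
    return -abs(x - target)

def hill_climb(target, steps=2000):
    """A search that genuinely uses the fitness function."""
    current = random.choice(SPACE)
    for _ in range(steps):
        candidate = min(100, max(1, current + random.choice([-1, 1])))
        if fitness(candidate, target) > fitness(current, target):
            current = candidate
    return current

def constant_search(target):
    """A 'search' that ignores the target and always returns 1."""
    return 1

# With target {1}, both searches induce the distribution P(S=1) = 1 ...
assert all(hill_climb(1) == 1 for _ in range(20))
assert constant_search(1) == 1

# ... but with target {2} they come apart: the distribution recorded
# for target {1} tells us nothing about which of the two we had.
assert all(hill_climb(2) == 2 for _ in range(20))
assert constant_search(2) == 1
```

Both "searches" collapse to the same probability distribution on the original target, which is exactly the complaint: the representation forgets how the outcome was produced.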

  53. 53
    Mung says:

    A question for Aurelio (and other critics).

    There are two urns, each containing a number of colored balls. You cannot see inside the urns.

    You pay one dollar to play and can select one ball from either urn. If you select a red ball you win one dollar.

    Which urn will you choose to select from?

  54. 54
    Daniel King says:

    A question for Aurelio (and other critics).

    Hopefully, there’s going to be a point to this.

    There are two urns, each containing a number of colored balls. You cannot see inside the urns.

    You pay one dollar to play and can select one ball from either urn. If you select a red ball you win one dollar.

    You’re crazy if you think anyone would waste time betting on such a stupid game. Who said that there was a red ball anywhere? Who said there was a difference between the two urns?

    Mung is incoherent. Consistently.

  55. 55
    Mung says:

    Daniel King:

    You’re crazy if you think anyone would waste time betting on such a stupid game.

    Crazy like a fox. Looks like I caught a fish though.

    Who said that there was a red ball anywhere?
    Who said there was a difference between the two urns?

    No one.

    Given the amount of information, one urn is as good as the other.

  56. 56
    Mung says:

    ok, the colored balls are either red or blue. Does that help?
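Mung's point that, absent distinguishing information, one urn is as good as the other can be sketched with a symmetric prior. The uniform prior over red-ball fractions below is purely an illustrative assumption, not part of the puzzle:

```python
from fractions import Fraction

# With nothing to distinguish the urns, whatever prior you hold over the
# red-ball fraction of urn A you must also hold over urn B; identical
# priors give identical expected payoffs, so the choice is a toss-up.
prior = {Fraction(k, 10): Fraction(1, 11) for k in range(11)}  # assumed uniform prior

def expected_win_prob(prior):
    # expected probability of drawing a red ball, averaged over the prior
    return sum(red_frac * p for red_frac, p in prior.items())

ev_urn_a = expected_win_prob(prior)
ev_urn_b = expected_win_prob(prior)  # same prior, by symmetry
assert ev_urn_a == ev_urn_b == Fraction(1, 2)
```

Any prior applied identically to both urns yields the same conclusion; the uniform one just makes the arithmetic explicit.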

  57. 57
    Winston Ewert says:

    Nowhere can I find an explicit explanation of the pdf of possible searches although I think you are assuming each of those matrices which identify a search are equally probable.

    Section 5 of the paper discusses how these distributions are obtained.

    So, we’re right in reading DEM (yourself in association with Dembski and Marks) as “search” is synonymous with “probability distribution”?

    What you’ve said is incorrect, but what you meant is probably correct. A search is any process that can be modeled as a probability distribution. A die can be modeled as the distribution {1/6,1/6,1/6,1/6,1/6,1/6}, but the die is not the same thing as the distribution. For one, I can physically stack dice, but I can’t physically stack the distributions. Similarly, a search can be modeled as a distribution, but they aren’t the same thing.
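The die/distribution distinction in the comment above can be sketched in Python: the dict below is the model, not the die itself (the `sample` helper is an illustrative assumption, not anything from DEM):

```python
import random

# A die modeled as the distribution {1/6, ..., 1/6}: the model captures
# the outcome probabilities, not the physical object (you can stack
# dice, but you cannot stack distributions).
die_model = {face: 1/6 for face in range(1, 7)}

def sample(dist):
    """Draw one outcome from a distribution given as {outcome: probability}."""
    outcomes, weights = zip(*dist.items())
    return random.choices(outcomes, weights=weights, k=1)[0]

rolls = [sample(die_model) for _ in range(10_000)]
assert set(rolls) <= set(range(1, 7))
# Empirical frequencies approach 1/6 (law of large numbers).
assert abs(rolls.count(3) / len(rolls) - 1/6) < 0.03
```

The same `sample` helper would serve any process represented as a distribution, which is the sense in which a "search" is modeled by, without being identical to, its distribution.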

    I’m curious. What do you mean by “configuration of the universe”?

    I mean the combination of the physical laws of the universe together with any initial conditions.

    You’re crazy if you think anyone would waste time betting on such a stupid game.

    I don’t know, a lot of people play the lottery.

  58. 58
    Winston Ewert says:

    DiEb,

    Saying that you don’t like or find useful the way we’ve modeled things isn’t a criticism of our work. It is pointless complaining.

  59. 59
    Mung says:

    WE:

    A die can be modeled as the distribution {1/6,1/6,1/6,1/6,1/6,1/6}, but the die is not the same thing as the distribution.

    Aw shucks. I was wrong:

    So, I’m right in reading DEM (yourself in association with Dembski and Marks) as “die” is synonymous with “search”?

    Of course, unlike Aurelio, I know what I said was ludicrous.

    There are other ways to model a die. Say it doesn’t have dots on its six faces but colors. What prevents us from assigning a value to each color?

  60. 60
    DiEb says:

    Winston Ewert:

    Saying that you don’t like or find useful the way we’ve modeled things isn’t a criticism of our work. It is pointless complaining.

    Sorry, let me try to make the point of my complaint more clearly:

    In your paper “A General Theory of Information Cost Incurred by Successful Search”, you don’t use the terms model or modeling, not even once. You only claim that searches can be represented as probability distributions (and I thought that even this language was a little bit strong). Now you go even further and say

    When we say search we simply mean a process that can be modeled as a probability distribution.

    There are countless ways to define mathematical modeling. But the unifying concept is that a model allows for (non-obvious) predictions. To elaborate:

    1) The simplest model in population dynamics is representing the size of the population by a real number, and assuming that the growth is proportional to time (for short amounts of time) and size. This model allows one to predict the size of a given population in the future – after measuring growth rate and population size. If the predictions fail, the model will be refined – or cast away.

    2) I can represent the planets of the solar system by the length of their English names: Mercury, Jupiter and Neptune by 7, Saturn and Uranus by 6, Venus and Earth by 5, and Mars by 4. The only predictions I can draw from this model are along the lines that Mars has the shortest English name. I hope we can agree that this isn’t a model of the solar system.
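DiEb's first example (the population-growth model) can be written out in a few lines; the starting size, growth rate, and horizon below are made-up numbers for illustration only:

```python
import math

# The simplest population model, dN/dt = r * N, predicts
# N(t) = N0 * exp(r * t): a checkable forecast, given measured
# inputs N0 (current size) and r (growth rate).
def predicted_size(n0, r, t):
    return n0 * math.exp(r * t)

n0, r = 1000.0, 0.02     # hypothetical measured size and growth rate
forecast = predicted_size(n0, r, t=10)

assert predicted_size(n0, r, 0) == n0          # at t = 0 the model returns the measurement
assert forecast > n0                            # positive r predicts growth
# If later censuses diverge from `forecast`, the model is refined or cast away.
```

This is what separates a model from a mere labeling scheme like the name-length example: it commits to numbers that future observations can contradict.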

    So, what predictions does your model allow for? Especially, take my example in the comment above (nr. 68): what can you predict about a search which finds the target {1} with certainty – if {1} was indeed the target?

    I may not like your model. But what use is it even to you?

  61. 61
    Mung says:

    DiEb,

    Evolution doesn’t predict anything.

  62. 62
    Mark Frank says:

    #73 WE

    Section 5 of the paper discusses how these distributions are obtained.

    I apologise. I had not properly understood this section. What would really help here would be a worked example.  However, I will try to explain my concern.

    I think that what you are saying amounts to:

    If you define a subset of all the possible probability distributions on omega (e.g. those which make the probability of finding the target > q) then this places constraints on the probability density function of the probability distributions in M(omega).

    To make it concrete, consider the case where omega is just two items a1, a2. There are infinitely many pdfs possible on a1 and a2 – ranging from p(a1) = 1 to p(a1) = 0. These pdfs are the members of M(omega). At this point you have no other information about omega, so you have no idea about the higher-level pdf of the members of M(omega). It might be that only pdfs where p(a1) > 0.8 are possible. It might even be that the only possible pdf on omega is p(a1) = 1. It would depend on the process for generating pdfs.

    You could then define a function on all those pdfs, e.g. g(pdf) = 1 if p(a1) > q, 0 if p(a1) <= q. This would enable you to conceptualise P(g(pdf)=1). But clearly you cannot deduce that probability without making some assumptions about the prior probability of the members of M(omega). And I am struggling to see where those assumptions are articulated (although I suspect you are assuming that the pdf of pdfs is uniform between 0 and 1).
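Mark Frank's two-item example admits a quick numerical check. Assuming (as the comment suspects DEM do) a uniform hyperprior on p = p(a1), the probability that g(pdf) = 1 is simply P(p > q) = 1 - q; the Monte Carlo sketch below is illustrative, with q chosen arbitrarily:

```python
import random

# omega = {a1, a2}; each pdf on omega is determined by p = p(a1) in [0, 1].
# Under an ASSUMED uniform hyperprior on p, P(g(pdf) = 1) = P(p > q) = 1 - q.
def prob_g_equals_1(q, trials=200_000):
    """Monte Carlo estimate of P(p > q) with p drawn uniformly from [0, 1]."""
    hits = sum(1 for _ in range(trials) if random.random() > q)
    return hits / trials

q = 0.7
estimate = prob_g_equals_1(q)
assert abs(estimate - (1 - q)) < 0.01  # analytic value under the uniform hyperprior
```

A different hyperprior would give a different answer, which is precisely the point: P(g(pdf)=1) is undefined until the prior over M(omega) is stated.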

  63. 63
    kairosfocus says:

    DiEb:

    I think, in respect of “modelling” the clip at 39 above to Bob O’H is relevant:

    Let me follow up by clipping the opening words, verbatim:

    >> 1. The Search Matrix

    All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

    In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases. >>

    I trust the point about reading in context is clear enough.

    I suggest to you that the references to needle-in-haystack searches, searches and representation all directly imply a modelling approach, as is common in many applications of mathematics to situations of interest.

    Wiki:

    Scientific modelling is a scientific activity, the aim of which is to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate by referencing it to existing and usually commonly accepted knowledge. It requires selecting and identifying relevant aspects of a situation in the real world and then using different types of models for different aims, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, and graphical models to visualize the subject. Modelling is an essential and inseparable part of scientific activity, and many scientific disciplines have their own ideas about specific types of modelling.[1][2]

    There is also an increasing attention to scientific modelling[3] in fields such as philosophy of science, systems theory, and knowledge visualization. There is growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.

    I would suggest that Marks, Dembski, Ewert et al have been working at a mathematical modelling exercise and have been gradually making it of wider and wider applicability. They began by using flat random sampling as a reference yardstick search of a config space, making a reasonable case in S4S that greatly improved searches will be so case-specific that a blind search of the space of possible searches, combined with the resulting search, has a likelihood of combined success no greater than that of a straight flat random sample in a needle-in-haystack context.

    For me, that plausibility is strengthened by reflecting on the fact that, as a search is a sampled subset of a set of cardinality W, the S4S space has cardinality 2^W: that of the power set. This becomes much harder as W runs to 10^150 – 300 at the lower relevant minimum.
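The power-set count appealed to above can be checked directly for small W. Whether a search should be identified with a subset of W is disputed elsewhere in the thread; this sketch only verifies the 2^W arithmetic:

```python
from itertools import chain, combinations

# If a search is identified with the subset of W it samples, the space
# of searches is the power set of W, which has exactly 2^|W| members.
def power_set(xs):
    xs = list(xs)
    return list(chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1)))

for n in range(8):
    assert len(power_set(range(n))) == 2 ** n  # |P(W)| = 2^|W|
```

The exponential blow-up is why the "search for a search" space dwarfs the original space as W grows.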

    So, I find your “fail” dismissal inappropriate, unwarranted and selectively hyperskeptical.

    In the 2013 paper, M, D, E explicitly set out to generalise, freeing the first-level search from being a flat random one. This they have in fact done.

    They have indicated onward work that will move to shape- and location-shifting targets. Of course, the dominant issue is the needle-in-haystack challenge and linked S4S, so it is reasonable that targets shape-shifting and moving like barrier islands will not materially affect the outcome.

    KF

    PS: And oh yes, what relevant aspect of the sol system is modelled by representing planetary name length by letter counts? Is that not a blatant strawman caricature on your part? (In context of recent activities by your side’s lunatic fringe, what message does resort to such lurid caricature send to such? Especially, when it is joined to blanket dismissiveness of a serious case? Please, think again on how you are arguing, given the LF.)

  64. 64
    kairosfocus says:

    MF,

    kindly cf the just above to DiEb.

    A blind search of a space dominated by non-function, with small targets and with inadequate time and atomic resources to sample enough of the very large space W to make detection of an island Ti likely, is a challenge.

    The set of searches on W is a set of the subsets of W. The cardinality of the onward space for S4S is 2^W, with W starting out at 10^150. Blind search for a golden search is so hard that combining it with an improved search will generally be harder, much harder on average, than direct blind search.

    All this, in a context where the target is FSCO/I-rich configurations. We already know that there is a known, adequate cause: intelligently directed configuration, which injects active, intelligently sourced configuring information as a bridge that puts you down on or next to a target Ti. This then allows troubleshooting exploration to achieve adequate function through a much more restricted and feasible search.

    The cluster of considerations brings us back full circle to the point that FSCO/I in a configuration and part interaction, wiring diagram based entity, is a strong sign that its best current causal explanation is design.

    KF

  65. 65
    fifthmonarchyman says:

    DiEb says,

    In your paper “A General Theory of Information Cost Incurred by Successful Search”, you don’t use the terms model or modeling, not even once.

    I say,

    I find the contrast between model and search to be interesting. It illustrates the fallacy of equating software like Avida to other actual scientific models.

    from here:

    http://en.wikipedia.org/wiki/Scientific_modelling

    quote:

    A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality,

    end quote:

    With this description we can construct the following syllogism.

    Axiom) models reflect reality.

    Premise one) Evolution is not searching for any specific target other than survival.

    Premise two) Evolutionary Algorithms are searching for specific targets.

    Conclusion) Evolutionary Algorithms are not “models” of Evolution.

    peace

  66. 66
    Mark Frank says:

    #80 KF

    Edited (pressed enter too early)

    The set of searches on W is a set of the subsets of W.

    This is wrong. Using DEM’s definition of a search – the set of searches on W is all possible pdfs over W. This is quite different from the set of subsets. Among other things it is infinite in size, while the set of subsets of a finite set is itself finite.
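
    The contrast can be made concrete for a tiny W (an editor's illustration in Python; the labels are arbitrary): the subsets of a 3-element set number exactly 2^3 = 8, while the pdfs over the same set form a continuum, since infinitely many distinct pdfs can share a single support:

```python
from itertools import chain, combinations

W = ("A", "B", "C")

def powerset(s):
    """All subsets of s; there are exactly 2^|s| of them."""
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

subsets = powerset(W)
print(len(subsets))  # 8 = 2^3

# Two distinct pdfs over W with the same support {A, B}:
pdf1 = {"A": 0.5, "B": 0.5, "C": 0.0}
pdf2 = {"A": 0.6, "B": 0.4, "C": 0.0}
# One such pdf exists for every real p in (0, 1), so the set of
# searches-as-pdfs is infinite even though the set of subsets is finite.
assert pdf1 != pdf2
```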

  67.
    kairosfocus says:

    AS,

    Isn’t it time you paid attention to the history of the ideas instead of trying to tag and dismiss?

    Let me again cite to you the roots in Orgel, Wicken and co, for the descriptive focus on the FUNCTIONALLY specified subset of CSI. Which on pp 148/9 of NFL, Dembski highlights as the aspect relevant to biological systems:

    ______________

    http://iose-gen.blogspot.com/2.....l#fsci_sig

    >> The observation-based principle that complex, functionally specific information/ organisation is arguably a reliable marker of intelligence and the related point that we can therefore use this concept to scientifically study intelligent causes will play a crucial role in that survey. For, routinely, we observe that such functionally specific complex information and related organisation come– directly [[drawing a complex circuit diagram by hand] or indirectly [[a computer generated speech (or, perhaps: talking in one’s sleep)] — from intelligence.

    In a classic 1979 comment, well known origin of life theorist J S Wicken wrote:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

    The idea-roots of the term “functionally specific complex information” [FSCI] are plain: “Organization, then, is functional[[ly specific] complexity and carries information.”

    Similarly, as early as 1973, Leslie Orgel, reflecting on Origin of Life, noted:

    . . . In brief, living organisms [–> a functional context] are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

    [HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. 
Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken’s remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]

    Thus, the concept of complex specified information — especially in the form functionally specific complex organisation and associated information [FSCO/I] — is NOT a creation of design thinkers like William Dembski. Instead, it comes from the natural progress and conceptual challenges faced by origin of life researchers, by the end of the 1970’s.

    Indeed, by 1982, the famous, Nobel-equivalent prize winning Astrophysicist (and life-long agnostic) Sir Fred Hoyle, went on quite plain public record in an Omni Lecture:

    Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ –> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

    So, we first see that by the turn of the 1980’s, scientists concerned with origin of life and related cosmology recognised that the information-rich organisation of life forms was distinct from simple order and required accurate description and appropriate explanation. To meet those challenges, they identified something special about living forms, CSI and/or FSCO/I. As they did so, they noted that the associated “wiring diagram” based functionality is information-rich, and traces to what Hoyle already was willing to call “intelligent design,” and Wicken termed “design or selection.” By this last, of course, Wicken plainly hoped to include natural selection.

    But the key challenge soon surfaces: what happens if the space to be searched and selected from is so large that islands of functional organisation are hopelessly isolated relative to blind search resources?

    For, under such “infinite monkey” circumstances , searches based on random walks from arbitrary initial configurations will be maximally unlikely to find such isolated islands of function . . . >>
    _____________

    Let us see if you will now respond to substance instead of trying to isolate and target personalities. Which, I point out to you, is enabling behaviour in the context of the existence of the ever-present lunatic fringe, as well as the tactical recommendation of one certain SDA in his notorious rules for radicals.

    KF

  68.
    Zachriel says:

    fifthmonarchyman: Premise two) Evolutionary Algorithms are searching for specific targets.

    Not all evolutionary algorithms search for specific targets.

  69.
    kairosfocus says:

    PS: Dembski, NFL:

    >> p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

    Wouters, p. 148: “globally in terms of the viability of whole organisms,”

    Behe, p. 148: “minimal function of biochemical systems,”

    Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by ran-| dom chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

    On p. 149, he roughly cites Orgel’s famous remark from 1973, which exactly cited reads:

    In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .

    And, p. 149, he highlights Paul Davis in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

    p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ” >>

  70.
    kairosfocus says:

    PPS: Meyer, in his reply to Falk’s critique of Signature in the cell:

    ___________________

    http://www.signatureinthecell......l-falk.php

    >> . . . [[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents[[–> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . >>
    __________________

    Meyer here speaks directly to functionally specific complex information in digitally coded form, but with direct application to wider FSCO/I.

    I trust this isolate, tag and dismiss rhetorical gambit will now be retired. At least, by those interested in addressing substance rather than techniques of caricature and dismissal.

  71.
    kairosfocus says:

    MF,

    you first need to read the already given context of that little clip, the opening paragraphs of the MDE 2013 paper:

    All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

    In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases . . .

    Secondly, the substantial matter is that a search samples from the set W with some distribution of probabilities regarding the likelihood of any particular member xi being picked up, i.e. a probability distribution function across the members xi constituting W.

    Any given search then kicks out a collection of members, which by definition is a subset of W. Therefore the space of all possible samples — and thus subsets — of W is the space from which any given search MUST come, ranging from {} to W itself (a 100% census). Of course the particular samples picked will be chosen based on the specifics of sampling, which imposes a further distribution, pointing onward to a yet higher order search.

    Therefore it is entirely appropriate to point out that searches will come from a much higher scaled set, 2^W in cardinality.

    Which immediately renders highly plausible the finding of M, D & E that such a search for a good search imposes a cumulative search burden that is at least as hard as a null search based on some natural sampling of the original W.

    Which as I argued just now:

    http://www.uncommondescent.com.....and-fscoi/

    . . . leads directly back to the blind needle in haystack search challenge imposed by the requisites of FSCO/I.

    Active info is a bridge coming from designers that makes searches feasible.

    KF
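
    As a side note, the baseline intuition invoked in this exchange, that a search drawn blindly from the space of searches does no better on average than the natural baseline, can be checked numerically in a small simulation. This is an editor's sketch, not DEM's formalism; drawing "uniformly random" pdfs via normalized exponentials (a flat Dirichlet) is an assumption of the sketch:

```python
import random

def random_pdf(n, rng):
    """A uniformly random pdf over n points, i.e. Dirichlet(1,...,1),
    sampled via normalized exponential draws."""
    xs = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(xs)
    return [x / s for x in xs]

def mean_target_mass(n, target_size, trials, seed=0):
    """Average probability mass a randomly chosen search assigns
    to a fixed target (WLOG the first target_size points)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = random_pdf(n, rng)
        total += sum(p[:target_size])
    return total / trials

# With |W| = 100 and |T| = 5, a randomly drawn search succeeds with
# average probability close to 5/100, the blind-search baseline.
est = mean_target_mass(100, 5, 20_000)
print(est)  # close to 0.05
```

    The estimate lands near |T|/|W| = 0.05, matching the intuition that averaging over all searches recovers blind-search performance, so any search that beats the baseline must have been preferentially selected.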

  72.
    Mark Frank says:

    #89 KF

    You are confusing the result of the search with the search. Two completely different pdfs may end up with the same results. DEM defines a search as a pdf. Therefore a search is not the same as the subset of W which results from the search.

  73.
    kairosfocus says:

    MF, I have sufficiently shown

    (i) that searches, by whatever means applied, will impose some degree of selection bias, from zero up to absolute, which gives the distribution that Marks, Dembski and Ewert speak of (per citation),

    (ii) That each search will pick a subset of the set W, so that a blind population of searches will come from the set of subsets. By necessity of what a sample is, what a subset is, and what a cluster of actual samples will be as a result.

    This addresses your intended correction and shows that it is itself in need of correction. That (ii) is so is actually independent of the fact that (i) will be so. Both are true and both carry some relationship, but (i) does not overturn (ii). And in the sense that the authors wrote, searches do organically connect to distributions on the set W.

    KF

  74.
    fifthmonarchyman says:

    Zac says,

    Not all evolutionary algorithms search for specific targets.

    I say,

    Not all ID critics are incapable of having a genuine discussion of the issues.

    Zac again demonstrates that he is not one of the honest critics by not providing examples and evidence for his claim.

    😉

    peace
    PS
    That is why he is ignored so often

  75.
    Winston Ewert says:

    I may not like your model. But what use is it even to you?

    I elaborated on the conclusions we draw from our model over at Evolution News and Views. I don’t feel the need to repeat myself here.

    And I am struggling to see where those assumptions are articulated (although I suspect you are assuming that the pdf of pdfs is uniform between 0 and 1).

    See the top of page 58. We begin with any arbitrary distribution mu on omega. This is projected, as discussed in section 5, to a distribution mu-bar on M(omega). (You could as easily go in the other direction, and start with a distribution mu-bar on M(omega) and produce a distribution mu on omega). That is, you can pick any arbitrary distribution that you deem the natural distribution. We are not assuming anything about the distribution, and certainly not that it is uniform.

    Note that this means that you could decide that the natural distribution of the universe places a high probability on complex life. The result of conservation of information has no quarrel with you if you take that stance.

    The fact that any so-called search can be reduced to a probability distribution does not mean that any stochastic process that can be reduced to a probability distribution is a “search.”

    That’s not remotely what I’ve said. I’m not saying that because both searches and stochastic processes can be reduced to a probability distribution they are the same. That would indeed be incorrect.

    What I’ve said is that for our purposes we define a search to be a process that can be reduced to a probability distribution. So all processes, no matter how insane, that can be reduced to a probability distribution are searches for the purposes of COI. That is simply a matter of how we’ve chosen to define our terms.
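
    That definitional move can be illustrated directly (an editor's sketch in Python; the particular stochastic procedure is invented for illustration): any process with a well-defined output distribution over a finite space reduces, for COI's purposes, to that distribution.

```python
import random
from collections import Counter

def quirky_process(rng):
    """An arbitrary stochastic procedure over W = {0,...,9}: flip a coin,
    then take either a best-of-three max or a single uniform draw."""
    if rng.random() < 0.5:
        return max(rng.randrange(10) for _ in range(3))
    return rng.randrange(10)

def induced_pdf(process, trials=100_000, seed=1):
    """Reduce the process to its induced distribution over outcomes
    by tabulating output frequencies."""
    rng = random.Random(seed)
    counts = Counter(process(rng) for _ in range(trials))
    return {x: counts[x] / trials for x in range(10)}

pdf = induced_pdf(quirky_process)
assert abs(sum(pdf.values()) - 1.0) < 1e-9
# Whatever the internal mechanics, COI treats the process purely through
# this distribution, e.g. its mass on a target such as T = {9}:
print(pdf[9])
```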

    Is FSCO/I something you’ve heard of?

    If you have, do you (and as spokesman for DEM) endorse it?

    I’ve seen posts about it. I’m not inclined to take it seriously until I see it published some place more serious than a blog.

  76.
    Mark Frank says:

    KF #91

    All you have shown is that you don’t understand DEM’s paper – much less the problems with it. But it is not worth pursuing this any more.

  77.
    Joe says:

    Mark Frank, All you and yours have done is prove that unguided evolution doesn’t have squat. Your entire position is nothing but bald declarations and attacks on anyone who questions them.

    It is very telling when all you have to do to stop ID is to step up and produce support for the claims of your position and you choose to act differently.

    Typical but still pathetic.

  78.
    kairosfocus says:

    WE:

    this means that you could decide that the natural distribution of the universe places a high probability on complex life. The result of conservation of information has no quarrel with you if you take that stance.

    Pretty serious fine-tuning and front loading!

    KF

  79.
    Carpathian says:

    bornagain77:

    Carp, I believe God, who is omniscient and who created/creates time itself, is the Designer of the universe and of all life in it. Thus ‘far off targets’ in the future are child’s play for Him in His infinite knowledge since even time itself belongs to Him.

    Yes. Information is the key here.

    In order for ID to work, there is a requirement that the designer be able to see the future and commit almost no errors.

    This is a problem for the ID movement though in that ID cannot be debated on a scientific basis if this is true.

    If the claim is that no one but God can possibly engage in ID, we need faith to believe in ID. We have then moved into a religious debate not a scientific one.

  80.
    kairosfocus says:

    MF (attn WE): I have pointed to the antecedents for the descriptive summary in Orgel, Wicken, Dembski and Meyer from 85 on above: http://www.uncommondescent.com.....ent-562213 These will handily meet the more serious than a blog criterion. After all, all that I have done, and others too, is to create an acronym for a stock descriptive phrase for functionally specific forms of complex specified information. And in the case of Orgel and Wicken, that was the original context. Dembski used a generalisation to speak of specified complexity in general. KF

    PS: Onlookers, you will be able to judge the [want of] seriousness of onward objectors if they refuse to discuss FSCO/I on grounds that it is not taken up by “senior” ID persons.

  81.
    kairosfocus says:

    MF, pardon directness but you are simply being personally dismissive in a context where you were specifically corrected by direct citation on the contextual meaning of the reference to probability distributions. KF

  82.
    Mark Frank says:

    WE
     

    See the top of page 58. We begin with any arbitrary distribution mu on omega. This is projected, as discussed in section 5, to a distribution mu-bar on M(omega). (You could as easily go in the other direction, and start with a distribution mu-bar on M(omega) and produce a distribution mu on omega). That is, you can pick any arbitrary distribution that you deem the natural distribution. We are not assuming anything about the distribution, and certainly not that it is uniform.
    Note that this means that you could decide that the natural distribution of the universe places a high probability on complex life. The result of conservation of information has no quarrel with you if you take that stance.

    Thanks for going to this effort. It is both enlightening and frustrating. I guess I am confused by what you mean by “project”, or at least by its implications. As far as I know a projection is just a type of mapping from one set to another. So you can map a pdf on omega to a subset of M(omega). But so what? How do you jump from this to concluding anything about the ontological status or probabilities of the two distributions? It would help to have a concrete example. Suppose omega comprises just two members A and B. Then as I understand it M(omega) is the set of all possible pdfs on omega and is defined by all the possible values of P(A), i.e. all the real numbers between 0 and 1. Can you give an example of mu and mu-bar?

  83.
    kairosfocus says:

    Carpathian:

    In order for ID to work, there is a requirement that the designer be able to see the future and commit almost no errors.

    Not at all.

    First, all designers anticipate future possibilities, we look to goals.

    Second, a world of technology all around us shows that initial designs can be incrementally developed to adequate performance and reliability to be good enough for purpose.

    Perfection is not required. Just a sophisticated technical base.

    The PC you are reading this on is good enough as a case in point.

    KF

  84.
    kairosfocus says:

    MF, observe the context of blind, needle in haystack search, and the onward context that any given search will typically be utterly uncorrelated to where targets Ti may be found, the individual searches being of course samples of W and members of the set of subsets of W. It is patent that it will be hard for a particular direct search to conveniently deposit us on a target Ti, or close enough that it is easy thereafter to find it on an incremental narrow scope of search. Thus, we see that the search for such a golden search will put us into a higher order search for search that will come from the power set. Which will hold cardinality 2^W for a set of large cardinality W, starting at 10^150 – 10^301, the reasonable threshold for the same FSCO/I you would dismiss, which turns out to be very directly relevant to blind needle in haystack search. KF

  85.
    bornagain77 says:

    Carp, in case you do not know, neo-Darwinism is itself based on (bad) Theological premises not mathematical premises.
    Clean up your own back yard first and then we can talk.

    What separates the science of ID from the pseudo-science of neo-Darwinism is that ID can be rigorously falsified by experiment, and neo-Darwinism cannot.
    In fact, ID invites rigorous experimentation to try to falsify its primary claim that unguided material processes cannot produce non-trivial functional information/complexity, and that only Intelligence can (Abel; Behe).

    Moreover, science cannot be conducted unless teleology is presupposed on some ultimate level.
    Insisting, as materialists/atheists do, that there is no ultimate reason why anything happens defeats the purpose of doing science in the first place of trying to find the reason why anything happens.

    i.e. “It just happened for no particular reason whatsoever” is a science defeater!

  86.
    Carpathian says:

    bornagain77:

    i.e. “It just happened for no particular reason whatsoever” is a science defeater!

    I agree!

    What separates the science of ID from the pseudo-science of neo-Darwinism is that ID can be rigorously falsified by experiment, and neo-Darwinism cannot.

    An experiment is what I intend to do. I will model both ID and evolution and see which is a more powerful method for generating successful body plans.

    The problem with ID is that I can see the limitations in anyone actually being able to do it, other of course than someone who can accurately see the future.

    As far as dismissing evolution, I believe that evolving even the simplest self-replicating code would qualify as proof that evolution could be a viable mechanism for biology also.

  87.
    Carpathian says:

    kairosfocus:
    I have given this a lot of thought and ID is tougher than it looks, not from the perspective of designing organism X but rather what role X should play in its environment.

    X’s effect on other creatures and plant life in an environment could lead to extinction of other species, both prey and predator, as well as a change in the food chain.

    Until you know the effects of the new organism X well into the future, you cannot release it into the environment.

  88.
    Mark Frank says:

    KF, the cardinality of the set of all possible searches (as defined by DEM) is infinite (see #100 above for a small example). But, setting that aside, you are assuming a uniform probability across the set of all possible searches. This is clearly not the case for evolution (and many other real world cases). Searches that involve non-viable steps are quickly terminated. There is a strong relationship between possible searches and the “target”, which is a viable organism which has viable offspring.

  89.
    Winston Ewert says:

    I have pointed to the antecedents for the descriptive summary in Orgel, Wicken, Dembski and Meyer from 85 on above: http://www.uncommondescent.com…..ent-562213 These will handily meet the more serious than a blog criterion.

    You really think a comment on a blog that quotes other people and calls them idea-roots for FSCO/I qualifies as a serious presentation of the idea of FSCO/I?

  90.
    Joe says:

    Carpathian:

    In order for ID to work, there is a requirement that the designer be able to see the future and commit almost no errors.

    That doesn’t follow from anything.

    Until you know the effects of the new organism X well into the future, you cannot release it into the environment.

    Wow.

    Keep them straw man arguments coming, though. They are entertaining.

    I will model both ID and evolution and see which is a more powerful method for generating successful body plans.

    What does that even mean besides proving you have no clue what is being debated?

    Evolution via intelligent design is by far more powerful than unguided evolution. Try developing antennae without the specifications of what is required programmed in.

  91.
    Zachriel says:

    fifthmonarchyman: providing examples and evidence for his claim.

    Be happy to. Thanks for asking. See Krupp & Taylor, Social evolution in the shadow of asymmetrical relatedness, Proceedings of the Royal Society B: Biological Sciences 2015. For that matter, so is Word Mutagenation.

  92.
    Carpathian says:

    Joe:

    Until you know the effects of the new organism X well into the future, you cannot release it into the environment.

    Wow.

    Keep them straw man arguments coming, though. They are entertaining.

    I have been thinking about how to implement ID. You have issues with my concerns.

    If you’re better at this than I, show me how to use ID to introduce an organism into an environment.

    Give me details.

    I claim it’s harder than you think.

  93.
    Winston Ewert says:

    Can you give an example of mu and mu-bar?

    Let’s take the set {A,B}

    Let’s say that mu = 2/3 A, 1/3 B.

    The simplest way to understand mu_bar is to think of it as a uniform distribution over the set {1,2,3}. Then we compose it with the mapping {1,2} -> A, {3} -> B.

    But so what? How do you jump from this to concluding anything about the ontological status or probabilities of the two distributions?

    We’re not claiming anything about ontological statuses. What we are claiming is that whatever process produced a successful search must itself have a pdf biased towards indirectly producing the target.
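    Ewert’s composition of a base distribution through a mapping is a pushforward: each image point receives the total probability mass of its preimages. A minimal Python sketch of the {1,2,3} -> {A,B} example (the helper name `pushforward` is ours, not from the thread):

```python
from collections import defaultdict
from fractions import Fraction

def pushforward(dist, mapping):
    """Compose a distribution with a mapping: each image point
    receives the total probability mass of its preimages."""
    out = defaultdict(Fraction)
    for point, prob in dist.items():
        out[mapping[point]] += prob
    return dict(out)

# Uniform distribution over {1, 2, 3}, composed with {1,2} -> A, {3} -> B.
uniform = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
mu = pushforward(uniform, {1: "A", 2: "A", 3: "B"})
print(mu)  # {'A': Fraction(2, 3), 'B': Fraction(1, 3)}
```

    Running it recovers mu = 2/3 A, 1/3 B from the uniform distribution on {1,2,3}.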

  94. 94
    Joe says:

    Carpathian:

    If you’re better at this than I, show me how to use ID to introduce an organism into an environment.

    You start by knowing what it is you are going to design. If you need a planet like earth then you have to know what it is that makes our planet the way it is. And you make it so.

    And if you wanted intelligent observers you would have to know what they require. If you wanted to introduce a new organism you would have to know what it requires. That’s it.

    It all depends on what the purpose is and what is physically possible. I would design my organisms with the ability to adapt to changes- either genetically or behaviourally- Intelligent Design Evolution.

    But then again, we don’t have any idea how to design living organisms, so yes it is even much harder than YOU think.

  95. 95
    Winston Ewert says:

    Determining your “spec” for a design is more difficult than actual physical design.

    I’d like a self-driving helicopter.

    Now that I’ve done the hard part of coming up with the spec, can you get back to me with the easy part?

  96. 96
    Carpathian says:

    Joe:

    If you wanted to introduce a new organism you would have to know what it requires. That’s it.

    That’s not enough. I would need to know its effect on other organisms.

    If I wanted to introduce a new predator into a grassland environment, what prey would it successfully end up hunting? If it hunts the prey of a current predator, that older predator population may shrink in size.

    If the new predator is smaller and successfully hunts adolescents, the prey population may take a much larger hit than would be indicated by the numbers taken since much fewer prey would reach breeding age.

    These are serious questions that can’t be ignored if you’re going to be doing biological design.

    Look at the Asian carp that have been transported to American rivers. They seem to have no natural predators and are thriving at the expense of American fish that have been here for thousands of years.

    You can’t just release a new organism without carefully looking at the possible effects.

  97. 97
    Carpathian says:

    Winston Ewert:

    It was easy for Ford to build the Edsel.

    It wasn’t easy to get it accepted in the marketplace.

  98. 98
    Joe says:

    Carpathian-

    Look at the Asian carp that have been transported to American rivers.

    An already existing design. No one on earth designed the carp.

    If you are just going to quote-mine my posts then why even bother?

    It all depends on what the purpose is and what is physically possible. I would design my organisms with the ability to adapt to changes- either genetically or behaviourally- Intelligent Design Evolution.

    But then again, we don’t have any idea how to design living organisms, so yes it is even much harder than YOU think.

    Go ahead- design a fish, I challenge you, knowing full well that you cannot do so.

  99. 99
    Carpathian says:

    Joe:

    Whether an organism is designed or evolved, introducing it into the wrong environment could cause damage to other organisms already there.

    That was the point I was making. For ID to work, it is not enough to design a single organism. The designer must know the future environment and the interaction of all the other organisms or he risks threatening the future of those other organisms.

    The devil is in the details.

    As far as designing a fish, if I managed to be able to, should I design an Asian Carp and throw it into the Mississippi?

    We have evidence that it would not be good.

    Try and think about it for a while, assuming that organism design was not an issue, and you will find yourself stumped by the interaction of the ecosystem.

  100. 100
    Mark Frank says:

    WE #111
     
    Thanks again for continuing to respond. It is interesting.
     

    Let’s take the set {A,B}
    Let say that mu = 2/3 A, 1/3 B.
    The simplest way to understand mu_bar, it to think of it as a uniform distribution over the set {1,2,3}. Then we compose it with the mapping {1,2} -> A, {3} -> B.

    Still struggling a bit here. I thought mu-bar was a pdf over M(omega). But M(omega) is continuous, containing all values of R A, 1-R B where R is a real number between 0 and 1. Are you saying that mu-bar is the pdf over M(omega) where P(R = 2/3) = 1? Perhaps more importantly – what role is mu-bar playing? It appears to be something like the pdf for M(omega) that gives the maximum likelihood for a pdf over omega that is 2/3 A, 1/3 B. But I suspect I am missing the point here.

    We’re not claiming anything about ontological statuses. What we are claiming is that whatever process produced a successful search, must itself have a pdf biased towards indirectly producing the target

    “Must” or “probably is”?  The resulting search might be some evidence for the pdf of the process but surely that relationship is a bit like the relationship between a Bayesian prior distribution and the observed outcome.  The process creating the outcome (which is itself a pdf) has a prior pdf which is modified by the observed outcome. But you somehow seem to be claiming you can deduce something about the process based purely on the outcome and ignoring the prior.
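    Mark Frank’s Bayesian point can be sketched: the probability that the producing process was biased, given a successful outcome, depends on the prior over processes, not just on the outcome’s likelihood. A toy Python example (all of the numbers are made up for illustration):

```python
# Two candidate processes that could have produced the search:
# one biased towards the target, one not. Priors are hypothetical.
prior = {"biased": 0.01, "unbiased": 0.99}
# Hypothetical probability that each process yields a successful search:
likelihood = {"biased": 0.9, "unbiased": 0.1}

# Bayes' rule: P(process | success) is proportional to
# P(success | process) * P(process).
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior["biased"])  # ~0.083: success alone need not imply bias
```

    With a sufficiently small prior on the biased process, even a strongly diagnostic success leaves the posterior modest, which is the sense in which the outcome alone does not settle the question.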

  101. 101
    Joe says:

    Carpathian, Why do you keep ignoring parts of what I post?

    For ID to work, it is not enough to design a single organism.

    No kidding.

    The designer must know the future environment and the interaction of all the other organisms or he risks threatening the future of those other organisms.

    That is why you have to know what it is you are trying to achieve!

    As far as designing a fish, if I managed to be able to, should I design an Asian Carp and throw it into the Mississippi?

    One Asian carp wouldn’t be an issue.

  102. 102
    Joe says:

    But anyway, Carpathian, let’s keep this thread open for Winston’s concerns. He and Mark appear to be getting into something and we shouldn’t interrupt it.

  103. 103
    Carpathian says:

    Joe:

    I am not ignoring what you post. I am responding.

    My point was that ID is more difficult than the design of organism X. Introducing X without understanding the ramifications of X’s side effects could cause the loss of your already successful previous designs.

    Your response to me was to say it’s easy, just see what’s needed, but you simply hand-waved away the reality every manufacturer faces before introducing a product into the market. In some cases a new product eats into the profits of a previously established one by the same manufacturer.

    I’m not trivializing ID. It has bigger problems than simply designing X. The spec must be well defined, taking into account the effect of X on the environment.

    That means the relationship of the whole ecosystem must be taken into account in the final design of X.

  104. 104
    Carpathian says:

    Joe:

    But anyway, Carpathian, let’s keep this thread open for Winston’s concerns. He and Mark appear to be getting into something and we shouldn’t interrupt it.

    Ok.

  105. 105

    Hi, Dr Ewert:

    You write:

    See the top of page 58. We begin with any arbitrary distribution mu on omega. This is projected, as discussed in section 5, to a distribution mu-bar on M(omega). (You could as easily go in the other direction, and start with a distribution mu-bar on M(omega) and produce a distribution mu on omega). That is, you can pick any arbitrary distribution that you deem the natural distribution. We are not assuming anything about the distribution, and certainly not that it is uniform.

    Note that this means that you could decide that the natural distribution of the universe places a high probability on complex life. The result of conservation of information has no quarrel with you if you take that stance.

    And also:

    We’re not claiming anything about ontological statuses. What we are claiming is that whatever process produced a successful search, must itself have a pdf biased towards indirectly producing the target

    This seems clear. But are you, therefore, saying anything more than “Probability distributions in which certain outcomes are likely must be produced by processes that make those outcomes likely?”

    Is there, in other words, any reason to pick one distribution as the “natural” distribution against which other processes are “unnatural”? Or have I misunderstood you?

  106. 106
    fifthmonarchyman says:

    ZAC says,

    Be happy to.

    I say.

    Just as I have come to expect from you more smoke-blowing

    Word Mutagenation clearly has a target of English phrases, and the details of the EA in the paper you mention are behind a paywall, so its target cannot be ascertained.

    It’s self-evidently clear, however, that any EA will need criteria to determine which virtual organism is selected. We call those criteria the target.

    Come on Zac, please at least try and feign that you care about what is actually being discussed.

    peace

    Please tell me what criteria

  107. 107
    Zachriel says:

    fifthmonarchyman: Word Mutagenation clearly has a target of English phrases

    English words are the fitness landscape. The only target is successful reproduction.

    fifthmonarchyman: It’s self evidently clear however that any EA will need criteria to determine which virtual organism is selected.

    Then biological evolution has a target: successful reproduction in the natural environment.

  108. 108
    fifthmonarchyman says:

    ZAc says,

    Then biological evolution has a target: successful reproduction in the natural environment.

    I say,

    I have no problem with this characterization. It’s your side that is claiming that evolution is not a search.

    If evolution has one target it cannot be predictably counted on to reach a second, unrelated one.

    So if we observe an improbable outcome in a population that is not merely “successful reproduction in the natural environment”, we cannot credit its origin to evolution.

    peace

  109. 109
    Zachriel says:

    fifthmonarchyman: I have no problem with this characterization. It’s your side that is claiming that evolution is not a search.

    Search usually implies a specific goal. Some evolutionary algorithms are used to find solutions to specific problems. They have an endpoint.

    On the other hand, life navigates a changing landscape, and many evolutionary algorithms model this process.

  110. 110
    Winston Ewert says:

    I thought mu-bar was a pdf over M(omega).

    It is. My attempt to explain the distribution in an accessible manner failed rather badly there.

    Let me try again:

    Let’s start with the uniform case. Suppose we have the 26 letters of the English alphabet, and a uniform mu. Then mu-bar is the uniform distribution over possible pdfs, i.e. values from 0 to 1 for each of the 26 letters.

    Now instead, consider that we have a choice between two categories: vowels and consonants. The natural distribution is 5/26 for vowels and 21/26 for consonants. That gives us mu. Note that this mu is essentially the uniform mu composed through a mapping.

    Now, to construct the mu-bar for this case, start with the uniform over all pdfs from the previous case, and then compose it with the same mapping. So a search from before:

    a=.25 z=.5 t=.25

    becomes

    vowels=.25, consonants=.75

    b=.25 a =.25 r=.25 t=.25

    also becomes

    vowels=.25, consonants=.75

    which means the probabilities will combine, making the probability of this search twice as much. (Of course, it will end up being more than twice that, because a lot of searches will map to the same search.)
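    The combining Ewert describes can be sketched by pushing letter-level searches through the vowel/consonant mapping: distinct searches that coarsen to the same distribution pool their probability under mu-bar. A small Python illustration (the two search pdfs are our own values, chosen to coarsen to vowels = .25, consonants = .75):

```python
from collections import defaultdict

VOWELS = set("aeiou")

def coarsen(search):
    """Map a pdf over letters to a pdf over {vowels, consonants}."""
    out = defaultdict(float)
    for letter, p in search.items():
        out["vowels" if letter in VOWELS else "consonants"] += p
    return dict(out)

# Two distinct letter-level searches...
s1 = {"a": 0.25, "z": 0.5, "t": 0.25}
s2 = {"b": 0.25, "a": 0.25, "r": 0.25, "t": 0.25}

# ...coarsen to the same vowel/consonant search, so their
# probabilities combine under the coarser mu-bar.
print(coarsen(s1) == coarsen(s2))  # True
```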

  111. 111
    Winston Ewert says:

    “Must” or “probably is”? The resulting search might be some evidence for the pdf of the process but surely that relationship is a bit like the relationship between a Bayesian prior distribution and the observed outcome. The process creating the outcome (which is itself a pdf) has a prior pdf which is modified by the observed outcome. But you somehow seem to be claiming you can deduce something about the process based purely on the outcome and ignoring the prior.

    Indeed, I meant “probably.” You could just be excessively lucky instead.

  112. 112
    Winston Ewert says:

    This seems clear. But are you, therefore, saying anything more than “Probability distributions in which certain outcomes are likely must be produced by processes that make those outcomes likely?”

    That is indeed all that conservation of information claims.

    Is there, in other words, any reason to pick one distribution as the “natural” distribution against which other processes are “unnatural”? Or have I misunderstood you?

    Without introducing philosophical assumptions, no.

  113. 113
    fifthmonarchyman says:

    zac says,

    On the other hand, life navigates a changing landscape, and many evolutionary algorithms model this process.

    I say,

    geez

    I give up, It is impossible to talk to you

    peace

  114. 114

    Thanks, Winston, for that response.

    So what, in that case, does the Conservation of Information Law tell us, apart from the fact that if one outcome is more probable than another, something must be making it more probable?

    Why do we have to call that something “Active Information”?

    It’s not as though the probability ratio tells us the probability of that process occurring.

    For instance, let’s take the probability of finding a car wedged in the top of a tree. In a tornado-free environment, this is vanishingly unlikely. In a tornado-prone environment, the probability is quite a lot higher. But the ratio of those probabilities (or the difference between the logs), which you call “Active Information”, is not the probability of a tornado-prone environment, right? So what is it the probability of, and why should it matter?

  115. 115
    Winston Ewert says:

    Liddle,

    I wrote about what the laws tell us in my post at Evolution News and Views. Rather than repeating myself, I’d request that you go read that.

  116. 116

    Hi, Winston.

    I have already read (twice) your article at ENV. I would not be asking you these questions here if I thought the answers to them were in that article.

    In that article you wrote:

    The conservation of information does not imply a designer. It is not a fine-tuning argument. It is not our intent to argue that all active information derives from an intelligent source. To do any of those things, we’d have to introduce metaphysical assumptions that our critics would be unlikely to accept. Conservation of information shows only that whatever success evolutionary processes might have, it is due either to the original configuration of the universe or to design.

    (my bold)

    That last statement is totally uninformative. It seems to me to be tantamount to saying that the LCI tells us that A is either B or not-B: evolution works either because of design, or not because of design (“due to the configuration of the universe”).

    That last thing (“the configuration of the universe”) can only be worth remarking on if you think that such a configuration is very improbable. But you do not say so, nor do you give us any idea as to how you would even work out such a probability. Indeed, you say that your argument is not a “fine-tuning argument”, so you are not even arguing that there is anything improbable about a universe configured in such a way as to facilitate evolutionary processes.

    Moreover, nowhere in your ENV article that I can find do you tell us what the ratio of p/q is the probability of, which was my second question.

    Sure I am a “critic” – but I’d be perfectly happy to hear the “metaphysical assumptions” that you think I would be “unlikely to accept”. But unless I hear them, I have no clue as to whether I’d accept them or not.

    Go on – I might surprise you!

  117. 117
    Joe says:

    Elizabeth- Just model neo-Darwinian evolution and be done with it. Then people can have a look and show you where you failed or how your model failed to demonstrate anything (your CSI example on TSZ was such a failure).

    Just remember natural selection is an eliminative process and it is blind and mindless.

  118. 118
    fifthmonarchyman says:

    EL says.

    Sure I am a “critic” – but I’d be perfectly happy to hear the “metaphysical assumptions” that you think I would be “unlikely to accept”

    I say,

    Why do we even have to get into metaphysics? Why not just deal with the implications of the paper?

    It seems to me that any metaphysical position at all would be compatible with this result. The only thing that is at issue is the strength of Evolution as a search.

    Understanding the limitations of evolutionary searches doesn’t have to be about theism versus atheism, does it?

    peace

  119. 119

    We don’t have to, fifthmonarchyman – but Winston referred me to his ENV piece in which he implies that to understand what the Law of Conservation of Information has to say about a Designer, that’s what we’d have to do.

    If we don’t, the LCI appears to amount to no more than: evolutionary success is either due to design or natural processes. So not an ID argument at all.

    As for “the limitations of evolutionary searches” – no, they don’t have to be about theism versus atheism at all. I’d say they have nothing to do with either. What is fascinating about evolutionary searches – or rather, the evolutionary processes that underlie the adaptation of populations to their environment – is that they have very clear limitations, which is precisely what enables us to test hypotheses about them. Certain things can be done easily by human designers that can’t be done by evolutionary processes, and vice versa. And the pattern of biological characteristics, interestingly enough, is just the pattern you’d predict from evolutionary processes, and not from human designers, namely: nested hierarchies; no wholesale transfer of solutions from one lineage to another; retrofits rather than radical redesigns.

    On the other hand we know – because we can utilise them in the evolutionary algorithms we use to solve intractable problems – that they can also find solutions that escape human designers. They do this because unlike us, they have no inhibitions about exploring apparently unpromising lines of development. And, contrary to what many often assert here, they often travel quite far down apparently disadvantageous tracks, where a human designer would turn back, discouraged. And yet often, the breakthrough turns out to be at the end of that track.

    I speak metaphorically but the metaphor applies pretty well – many times I’ve had a solution to a problem have an ancestry that involved a large number of disadvantageous steps.

  120. 120
    Winston Ewert says:

    Elizabeth,

    That last statement is totally uninformative. It seems to me to be tantamount to saying that the LCI tells us that A is either B or not-B: evolution works either because of design, or not because of design (“due to the configuration of the universe”).

    No. The possibilities are that active information was:
    1) Injected into the universe via design.
    2) Present at the original configuration of the universe.
    3) Gained over time through stochastic processes.

    The math rules out possibility number 3. That is why the remaining options are design or the initial configuration of the universe. I’m not making any claims about the probability of the configuration of the universe. I’m merely pointing out that everything has to be traced back to the initial configuration; you can’t appeal to an increase in active information after that point.

    Furthermore, my post discusses the point of the COI in the paragraphs immediately after the one that you quoted. That is the section I was intending to refer you to.

    Moreover, nowhere in your ENV article that I can find do you tell us what the ratio of p/q is the probability of, which was my second question.

    It isn’t a probability. It’s a measurement of the bias of a search towards a target.

    – Winston

  121. 121
    Joe says:

    Elizabeth:

    On the other hand we know – because we can utilise them in the evolutionary algorithms we use to solve intractable problems – that they can also find solutions that escape human designers.

    That is incorrect. Whatever computers do they do because of human designers.

    They do this because unlike us, they have no inhibitions about exploring apparently unpromising lines of development.

    Umm, they do that because of TIME- as in they can run more trials in less time. They have VIRTUAL resources. They don’t have to actually build every iteration.

    Come on Elizabeth, even you should be able to do better than that.

    So to compare to humans you have to have millions of engineers working somehow in sync yet taking differing paths to the solution.

    Computers do what they are designed to do. Everything they do traces back to a human designer. They are just tools.

    Natural selection is different in that it is a process of elimination. Whatever is good enough survives.

    From “What Evolution Is”, Ernst Mayr (one of the architects of the modern synthesis) page 117:

    What Darwin called natural selection is actually a process of elimination.

    Page 118:

    Do selection and elimination differ in their evolutionary consequences? This question never seems to have been raised in the evolutionary literature. A process of selection would have a concrete objective, the determination of the “best” or “fittest” phenotype. Only a relatively few individuals in a given generation would qualify and survive the selection procedure. That small sample would be able to preserve only a small amount of the whole variance of the parent population. Such survival selection would be highly restrained.

    By contrast, mere elimination of the less fit might permit the survival of a rather large number of individuals because they have no obvious deficiencies in fitness. Such a large sample would provide, for instance, the needed material for the exercise of sexual selection. This also explains why survival is so uneven from season to season. The percentage of the less fit would depend on the severity of each year’s environmental conditions.

    The evolutionary processes computers use are akin to selection.

  122. 122
    Joe says:

    Elizabeth:

    And the pattern of biological characteristics, interestingly enough, is just the pattern you’d predict from evolutionary processes, and not from human designers, namely, nested hierarchies; no wholescale transfer of solutions from one lineage to another; retrofits rather than radical redesigns.

    Umm evolution is too messy to produce a nested hierarchy. Darwin went over that in 1859. Mayr went over that, Denton went over that and recently, in “Arrival of the Fittest”, Andreas Wagner went over that.

    Nested hierarchies require distinct groups. Transitional forms would blur all lines of distinction.

    And BTW, the US Army is a nested hierarchy and it has nothing to do with evolution or descent with modification. Linnaean taxonomy, the observed nested hierarchy in biology, also has nothing to do with evolution nor descent with modification.

    This is what happens when TSZ doesn’t allow dissenting views. Its regulars wallow in their own ignorance.

  123. 123
    Joe says:

    Winston,

    As evolutionists have pointed out, the target with respect to neo-Darwinian evolution is to survive and reproduce. And guess what? They start out given populations of living and reproducing organisms so they are already there. Target reached. The rest is all contingent serendipity.

    What’s not to like with a concept like that?

  124. 124
    Zachriel says:

    fifthmonarchyman: I give up

    It’s not that difficult. When someone says evolution is not a search, it’s because there’s no specific goal. Think of it as simply trying to keep one’s balance on a constantly shifting landscape.

  125. 125
    Mapou says:

    Liddle:

    And the pattern of biological characteristics, interestingly enough, is just the pattern you’d predict from evolutionary processes, and not from human designers, namely, nested hierarchies; no wholescale transfer of solutions from one lineage to another; retrofits rather than radical redesigns.

    This is obviously not true. A nested hierarchy is what we expect from human intelligent design over time with some multiple inheritance sprinkled in. In fact, almost all modern software programming languages enforce a strictly nested class hierarchy. C++ allows multiple inheritance but it is used sparingly in the business.

  126. 126
    Joe says:

    fifthmonarchyman

    When someone says evolution is not a search and that’s because there’s no specific goal, they are really telling you that it is all contingent serendipity and that it should never be mistaken for a scientific concept. 😎

  127. 127
    Joe says:

    Mapou 143- Nice job. Even though human design can violate a nested hierarchy doesn’t mean they all have to. OTOH gradual evolution will always produce transitional forms that will blur the nice, neat lines of distinction nested hierarchies require.

  128. 128
    fifthmonarchyman says:

    zac says,

    When someone says evolution is not a search, it’s because there’s no specific goal.

    I say,

    Just as I said

    you say,

    Think of it as simply trying to keep one’s balance on a constantly shifting landscape.

    Again just as I said.

    We once again seem to be in agreement on a minor point, but instead of noting that and simply moving on to more important stuff you insist on rephrasing. You do it ad nauseam here, and just as often you will slip in a red herring if you can to try and change the subject entirely.

    We end up with comment after comment in which nothing substantial is ever addressed, clogging threads that could be interesting with blah blah blah. I blame myself for continuing to try with you when there are others who are honest critics.

    peace

  129. 129
    kairosfocus says:

    MF, Genomes are 4-state per base systems, which imposes a finite and discrete set of possibilities. When we have a space of possibilities W, the set of samples on said space will come from the set of subsets, of cardinality 2^W. And that seems to me the operative context. Going on to the evolutionary computing case, inherently you are dealing with a bitwise granularity, which is discrete and finite. Yes, you may work with the continuum [not least as calculus is generally handy to work with], but you are going to come back to a fine grained, discrete and finite case. Which we should not forget. KF

    PS: I should add that the exploration of possible molecular states can also be cellularised, based on the inherently discrete nature of molecules and the effective speed limit of chemical level interactions of relevant type ~10^-14 s.

  130. 130
    Mung says:

    Elizabeth Liddle:

    If we don’t, the LCI appears to amount to no more than: evolutionary success is either due to design or natural processes. So not an ID argument at all.

    Given that intelligent design is a natural process, that’s a false dichotomy.

  131. 131
    kairosfocus says:

    WE, I think I need to note that my point has always been that all I have provided by using the abbreviation FSCO/I is an acronym for a descriptive summary of the functionally specific subset of complex specified information. That concept is well established. Which, is what I cited. It is also a readily observed phenomenon, starting with the strings of glyphs used to communicate coded information we are all using in this thread and the similar strings in DNA and proteins. Wiring diagram organised entities can readily be reduced to similar descriptive strings, as is commonly done with appropriate software. KF

  132. 132
    kairosfocus says:

    Carpathian, yes, design is tough to do. Especially when designed items have to function in a complex and partly uncontrolled and dynamic environment. That is why for instance central economic planning failed. But incremental development that has built-in robustness and adaptability, backed up by empirical testing and development with a healthy dose of stabilising negative feedbacks tends to work out fairly well. Robustness, redundancy and adaptability tend to be more effective than overly brittle optimisation on objective functions . . . if you can get away with that. Beyond, I would not infer from design of life to a designer or designers of effective omniscience. That has been on the table for thirty years of the modern design school of thought, here, Thaxton et al. KF

  133. 133
    Winston Ewert says:

    WE, I think I need to note that my point has always been that all I have provided by using the abbreviation FSCO/I is an acronym for a descriptive summary of the functionally specific subset of complex specified information.

    My apologies. That’s what I get for commenting on something I know nothing about. I was under the impression that you were trying to do something more novel than applying an acronym to the ideas of other people.

  134. 134
    Joe says:

    Winston Ewert wrote:

    I was under the impression that you were trying to do something more novel than applying an acronym to the ideas of other people.

    I was under the impression that he was trying to further develop the ideas of other people such that a wider audience can understand and appreciate them. And the acronym just further specified the subset of CSI- Dembski’s CSI.

  135. 135
    Winston Ewert says:

    I was under the impression that he was trying to further develop the ideas of other people such that a wider audience can understand and appreciate them. And the acronym just further specified the subset of CSI- Dembski’s CSI.

    Perhaps, I really haven’t followed FSCO/I enough to know. My only thought is that if it is a worthwhile development, I’d really like to see it published in a paper or conference.

  136. 136
    Mark Frank says:

    WE #139

    Thanks for continuing to be involved. I know how time consuming and irritating it can be responding to multiple interrogators.

    It isn’t a probability. It’s a measurement of the bias of a search towards a target.

    This raises two questions:

    1) Biased as compared to what? What does unbiased look like?

    If you cannot define unbiased then it seems your assertion amounts to:

    Either the initial configuration of the universe was such that what happened subsequently was possible or a designer made it possible. True but not very interesting.

    2) You call the –log base 2 of (p/q) active information. But you say p/q is not a probability. Yet in other contexts you define information as –log base 2 of a probability (e.g. endogenous information and exogenous information). It seems like active information is a different kind of thing from other kinds of information.
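    The relationships among the three quantities being discussed can be sketched numerically: per the definitions in this thread, endogenous information is –log2(p), exogenous information is –log2(q), and active information is their difference, –log2(p/q). A short Python fragment (the probabilities are hypothetical):

```python
from math import log2

def endogenous_information(p):
    # Difficulty of the problem under the baseline distribution.
    return -log2(p)

def exogenous_information(q):
    # Difficulty under the alternative (actual) search.
    return -log2(q)

def active_information(p, q):
    # -log2(p/q) = endogenous - exogenous: a bias measure in bits,
    # not itself a probability.
    return endogenous_information(p) - exogenous_information(q)

# Hypothetical values: baseline success probability p = 1/1024,
# actual search success probability q = 1/2.
print(active_information(1 / 1024, 1 / 2))  # 9.0 (bits)
```

    The result is measured in bits and can exceed 1, which is one way to see that it is not a probability.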

  137. 137
    Mark Frank says:

    WE #128

    Thanks also for your efforts to explain mu and mu-bar. I am still struggling but let me try rephrasing what I think it might mean in my own words. I think you might be saying:

    For any pdf mu that gives a probability P of “hitting a target” it is possible to find a higher level pdf mu-bar that creates pdfs that in total have the same probability of “hitting the target”.

    Is that it?

  138. 138
    Mapou says:

    Joe:

    Mapou 143- Nice job. Just because human design can violate a nested hierarchy doesn’t mean it always does. OTOH gradual evolution will always produce transitional forms that will blur the nice, neat lines of distinction nested hierarchies require.

    I see this as another huge problem for Darwinian evolution. Where is the blur? I’m sure there is yet another just-so, pseudoscientific story to explain it. Elsewhere, you mentioned Darwin’s extinction hypothesis but it’s obviously a non-explanation. Are there others?

  139. 139
    kairosfocus says:

    WE, came back by overnight. Appreciated. We have differing foci and emphases. For me, over years, the functional subset of CSI has proved fruitful (and especially digitally coded strings such as in DNA); where I note that Dembski and Meyer have in fact pointed to that subset and its significance in what Wallace once called the world of life. Historically, that is the context in which CSI was recognised as a significant characteristic of life forms, as Orgel and Wicken noted. I will normally briefly explain or expand the acronym when I use it. KF

  140. 140
    Mark Frank says:

    #147 KF

    MF, Genomes are 4-state per base systems, which imposes a finite and discrete set of possibilities. When we have a space of possibilities W, the set of samples on said space will come from the set of subsets, of cardinality 2^W. And that seems to me the operative context.

    We were discussing M(omega), the set of possible searches of omega. The number of possible searches is not the same as the number of possible subsets of the search space. Although the search space may be discrete, the set of pdfs on that search space is infinite (in fact uncountably infinite). That applies even if there is just one item in the search space with two possible values. Ask Winston if you doubt me. DEM have defined a search in such a way that it is equivalent to a pdf on the search space. Therefore there is an uncountably infinite number of searches (as defined by DEM).

    I don’t know what you mean by “operative context”.

  141. 141
    kairosfocus says:

    MF,

    as an exercise in pure math, one may indeed assign an uncountably infinite set of objective functions to a space.

    But, I suggest, this loses sight of what we are addressing.

    Performance has to be exhibited in time and space.

    In the hoped for evolutionary process, it takes generations for distinct sub populations to emerge and sort out superior/inferior performance. And 20 minutes or 20 years makes little material difference to the resulting process lags and memory-of-the-past cumulative effects that lead to granularity as a reasonable approach. For you and I to be here, generations of successful reproduction had to have happened, across time, leading to lagged effects.

    In computing, every step and cycle are granular in value and time.

    Atoms and molecules have an effective speed limit for chemical-level interactions relevant to forming both the monomers and the chained macromolecules that appear in biological systems, ~10^13 to 10^14 per second.

    And so, we come right back to the relevant finite and discrete nature of what we are dealing with. In short, A/D conversion is natural to the case and will impose granularity.

    It remains so that WLOG, a system config can be described per wiring diagram on a structured set of Y/N q’s, yielding a bit string, inherently discrete. For a bit string of length n, W = 2^n gives the number of possibilities.

    Then, samples taken from the set will be subsets, and the number of possible subsets is indeed 2^W.

    For n = 500 – 1,000, we have that 10^57 sol system atoms, or 10^80 for the observed cosmos, at 10^13 – 10^14 actions/s, will explore 10^87 – 88 or 10^110 – 111 possibilities in 10^17 s, which is an order-of-magnitude value for the timeline since the typical dating of the singularity. The result is the needle in haystack search challenge relative to 3.27*10^150 or 1.07*10^301 possibilities. Where also the power sets take in every possible individual sample of the sets; which will be finite.
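    These order-of-magnitude figures are easy to check numerically. A back-of-envelope sketch (the rates and durations are simply the round values cited above, not measurements):

    ```python
    atoms_solar, atoms_cosmos = 1e57, 1e80   # solar system vs observed cosmos
    ops_per_sec = 1e14                       # upper chemical-interaction rate cited
    seconds = 1e17                           # rough time since the singularity

    print(f"{atoms_solar * ops_per_sec * seconds:.0e} samples")    # 1e+88
    print(f"{atoms_cosmos * ops_per_sec * seconds:.0e} samples")   # 1e+111
    print(f"{2**500:.2e} configs for a 500-bit string")    # 3.27e+150
    print(f"{2**1000:.2e} configs for a 1000-bit string")  # 1.07e+301
    ```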

    So, on a reasonable assessment, there is indeed reason to consider the situation from this angle, and it sends the message that needle in haystack search challenge will dominate relevant cases. For we cannot explore possibilities, develop configs, exhibit and filter performance in infinitesimal increments of time or space.

    So, while taking the granular view does not confine us to a flat random sampling as the way to explore possibilities, a golden search does point to a higher order search for a search, and it is reasonable to see this as confronting a power set abstract space. One may impose a further golden search — why? — but the regress of exponentiation is already evident. And, we already are at the practical point of implying that the laws and initial circumstances of the cosmos would have had requisites of life written into them in astonishing ways.

    In short, you have suggested, inadvertently, a fine tuning, cosmological programming argument at the root of the physics of the cosmos.

    And if design is at the table from that level, then there is no good, non-ideological reason to exclude it thereafter, at OOL or OOBP up to origin of our own body plan.

    Worse, step back a moment and allow a non-countably transfinite set of possibilities for objective functions. The search for search challenge just exploded in scope. Of course, in practice, we will see clusters that boil down to re-imposing granularity for practical purposes. But not enough to help your case.

    KF

  142. 142
    MatSpirit says:

    Winston in 138:

    No. The possibilities are that active information was:
    1) Injected into the universe via design.
    2) Present at the original configuration of the universe.
    3) Gained over time through stochastic processes.

    Number three is the problem. Evolution combines a stochastic process (mutation) that generates information with a “fact checking” process (natural selection) that rejects the information that hurts the organism. The information that isn’t rejected is either useful to the organism or at least neutral. This makes evolution a “ratchet” that continually adds useful or neutral information to a genome while rejecting the bad information generated by mutations.

    Have you ever noticed that Dembski’s Explanatory Filter can’t even handle this two step process? It asks if the process being tested is random OR lawful, but you can’t even enter a process that uses both into it.

    What else do you think Dembski is overlooking?

  143. 143

    Hi, Winston. Thanks for your response. You wrote:

    No. The possibilities are that active information was:
    1) Injected into the universe via design.
    2) Present at the original configuration of the universe.
    3) Gained over time through stochastic processes.

    The math rules out possibility number 3.

    You define “Active Information” simply as the ratio of the probability of X occurring, given process A, and the probability of X occurring, given process B. So, under that definition, all “Active Information” is, is the degree to which X does not have a flat probability distribution.

    In other words “Active Information” is simply a measure of how lumpy the probability distributions are in the universe.

    So why does “the math” (and in what way does the math) “rule out possibility number 3”? Stochastic processes can indeed make what is originally flat, lumpy.

    For instance, let’s take a deep tray of pebbles of assorted sizes, each size well mixed, and with a frequency distribution such that large ones are no better represented spatially than small ones. Your target is a large pebble (99th percentile). Pick a pebble from the top. As they are perfectly mixed, your chance of picking a large pebble is no better than your chance of picking any other pebble.

    Now shake the tray. What happens next is a stochastic process. That process results in the big pebbles arranging themselves on the top, and the small ones further down, the tiniest ones being on the bottom. Now pick a pebble from the top. It is highly likely to be a large pebble.

    So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray).

    If not, why not?
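    The tray experiment can be run as a toy simulation. This is a hypothetical sketch: the biased-swap “shake” rule, the 10% noise, and the tray size are all assumptions for illustration, not anyone’s published model:

    ```python
    import math
    import random

    def shake(column, steps, noise=0.1):
        """Toy 'Brazil nut' dynamics: larger pebbles tend to bubble upward."""
        col = column[:]                          # index 0 = top of the tray
        for _ in range(steps):
            i = random.randrange(len(col) - 1)
            upper, lower = col[i], col[i + 1]
            # usually swap when the lower pebble is the bigger one;
            # 'noise' occasionally swaps the wrong way
            if (lower > upper) != (random.random() < noise):
                col[i], col[i + 1] = lower, upper
        return col

    def p_top_is_large(do_shake, trials=500, n=50):
        """Estimate P(the top pebble is in the largest 10% of sizes)."""
        hits = 0
        for _ in range(trials):
            sizes = [random.random() for _ in range(n)]
            cutoff = sorted(sizes)[int(0.9 * n)]     # 90th-percentile size
            col = shake(sizes, steps=n * n) if do_shake else sizes
            hits += col[0] >= cutoff
        return hits / trials

    p = p_top_is_large(do_shake=False)   # well-mixed tray: about 0.1
    q = p_top_is_large(do_shake=True)    # shaken tray: far higher
    print(f"unshaken {p:.2f}, shaken {q:.2f}, "
          f"log2(q/p) = {math.log2(q / p):.1f} bits")
    ```

    On this toy model, the shaking alone raises the probability of the target outcome, which is exactly the boost that the active-information measure quantifies.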

    That is why the remaining options are design or initial configuration of the universe. I’m not making any claims about the probability of the configuration of the universe. I’m merely pointing out that everything has to be traced back to the configuration; you can’t appeal to an increase in active information after that point.

    In which case all you are saying is mainstream physics: that entropy is always increasing over the whole system – that you need to import energy to reduce local entropy (as I did when I shook the tray of pebbles). But we know this is possible – we can do it with pebbles, and plants do it with photosynthesis. Tornadoes do it. Adding energy to a system frequently reduces local entropy.

    So what has the LCI got to add that isn’t just a restatement of Boltzmann?

    Furthermore, my post discusses the point of the COI in the paragraphs immediately after the one that you quoted. That is the section I was intending to refer you to.

    The passage after the one I quoted reads:

    We argue that Darwinian evolution is incomplete. For advocates of Darwinian and design theories alike, the aim is to explain the complexity of biological life. Darwinian evolution does not explain the complexity of biological life because its success or failure depends on the fitness landscapes it operates on. To make it complete, the theory would have to include the nature of the fitness landscapes that make the evolutionary process work. Darwinian evolution is only part of a theory of the explanation of biological complexity.

    Which still doesn’t tell me “the point of the COI”! Of course “Darwinian evolution is incomplete”. All science is incomplete – and always will be. Sure, Darwinian models don’t attempt to account for the existence of the physical and chemical laws that make Darwinian processes possible. IDists like to claim that ID is not “Designer-of-the-gaps” – but that seems to be entirely where your paragraph above is going. Or, if that isn’t where it is going, what is it you are trying to say? As you note:

    What remains to ask is whether or not any of these explanations of the fitness landscape actually work. To that, conservation of information provides no answer.

    Precisely. It doesn’t.

    The thing is, Winston, it seems to me that the further you, Dembski and Marks have travelled down the road Dembski embarked on with “Specified Complexity” and “No Free Lunch” (and I actually commend you in particular for this) the more, it seems to me, it turns out that the “Design Inference” is no more than the conclusion that the universe must have started with properties that facilitated non-uniform distributions of events. In other words, that it started out, if not lumpy, with the capacity to become so. Not only that, but it has a property of “1/fness” which is certainly interesting – it contains variability (Information, if you will, or Shannon Entropy) at multiple scales, from sub-atomic to inter-galactic.

    But we cannot infer a Designer from such a property, at least not from the probability of a universe with such a property, because we do not know the pdf of possible universes. It may be that lumpiness is a necessary property of existence. Ontologically, what could be said even to exist in a totally flat universe?

    Moreover, nowhere in your ENV article that I can find do you tell us what the ratio of p/q is the probability of, which was my second question.

    It isn’t a probability. It’s a measurement of the bias of a search towards a target.

    – Winston

    OK – in any case it’s more like an odds ratio, not a probability (my bad). But you could also express it as a measure of the increase in probability of an event, given a process that is not present at baseline, right? So you could write it as:

    p(X|process B)/p(X|process A).

    where X is a “target”, A is the baseline process (e.g. one with a flat pdf), and B is the process of interest, e.g. one in which some outcomes are more likely than others. Yes?

    In which case you could simply convert it to an actual OR:

    [p(X|process B)/(1 – p(X|process B))] / [p(X|process A)/(1 – p(X|process A))]

    Then you’d simply have a measure of how much more likely X is given process B than it is given process A. And if you regarded process A as one in which all outcomes were equally probable (as Dembski often does), then Active Information simply becomes a normalised expression of how much more probable X is under the process in question than it would be under an equiprobable random draw.

    Where does this get us, other than to the conclusion that the universe is non-uniform?
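    The two measures being compared here can be put side by side in a few lines. A minimal sketch; the function names and the example probabilities are illustrative assumptions, not DEM’s code:

    ```python
    import math

    def active_information(p_baseline, p_alternative):
        """log2(q/p): the bias of a search toward the target, in bits."""
        return math.log2(p_alternative / p_baseline)

    def odds_ratio(p_baseline, p_alternative):
        """The odds-ratio variant discussed above."""
        odds = lambda p: p / (1 - p)
        return odds(p_alternative) / odds(p_baseline)

    # Illustrative numbers only: a flat draw over 1024 outcomes vs a
    # biased process that hits the target half the time.
    p, q = 1 / 1024, 0.5
    print(active_information(p, q))   # 9.0 (bits)
    print(odds_ratio(p, q))           # ≈ 1023
    ```

    Both are comparisons of the same two probabilities; the log form reports the boost in bits, the OR form as a ratio of odds.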

  144. 144
    kairosfocus says:

    MatSpirit, The key problem is not incremental change within deeply isolated islands of function imposed by the requisites of interactive function arising from coupling many parts, but to initially find the islands of function, the viable body plans. In short variation of finch beaks among existing populations is one thing, arrival of flying birds as a body plan is quite another. And — once a priori evolutionary materialism is not imposed on the issue — there is simply no adequate body of observationally grounded evidence for an incrementally advantageous step by step treelike blind watchmaker path from microbes to Mozart, mango trees and molluscs etc across a continent of viable forms feasible for traversal in a few thousand MY. Not to mention, the challenge to bridge from chemicals in a pond or the like to a cell based first life form. And it is in that context that needle in haystack search challenge to find shores of function becomes pivotal. Hence WE’s 3-point cluster. KF

  145. 145
    fifthmonarchyman says:

    MF says,

    Either the initial configuration of the universe was such that what happened subsequently was possible or a designer made it possible. True but not very interesting.

    I say,

    I find it to be interesting.

    What happened subsequently was the awe inspiring spectacle that is life. We are used to attributing this majestic panorama to evolution and now we know the process is not up to the task. That is cool info to have.

    EL says,

    So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray).

    If not, why not?

    I say,

    No you have not inserted active information.

    The fact that the bigger pebbles will rise to the surface is a consequence of the laws of physics that are already present in the overall system from the beginning.

    You don’t add any information by letting the system play-out according to those already existing laws.

    The knowledge of what will happen when you shake already exists in your mind or you would not choose to shake in the first place.

    We have active information from the preexisting laws and/or from your preexisting knowledge. No information whatsoever is added with the shaking.

    The resulting increased probability of picking a big pebble could be accurately predicted before you even touched the tray.

    peace

  146. 146
    kairosfocus says:

    PS: Once we see the explanatory filter on a per-aspect basis, it does address the hoped-for effect of joint incremental chance and necessity. In particular, observe that the issue is a joint complexity-specificity condition that implies increments of 500+ bits of information. That is, bridging to islands of function. Much smaller increments within islands of function would be well within the reach of chance to explain high contingency aspects. Where mechanical necessity does not explain high contingency aspects of an object or process; it instead explains lawlike regularity, where closely similar initial conditions lead to closely similar outcomes, as in F = m*a.

  147. 147
    Zachriel says:

    fifthmonarchyman: We once again seem to be in agreement on a minor point but instead of noting that and simply moving on to more important stuff you insist on rephrasing.

    Here’s is your statement again:

    fifthmonarchyman: Premise one) Evolution is not searching for any specific target other than survival.
    Premise two) Evolutionary Algorithms are searching for specific targets.
    Conclusion) Evolutionary Algorithms are not “models” of Evolution.

    Premise two is faulty. Some evolutionary algorithms search for specific targets; some do not. Hence, your syllogism is faulty.

    fifthmonarchyman: We end up with comment after comment

    Sure. That’s what happens when you lose track of the thread, and we then have to repeat your original contention.

  148. 148
    Zachriel says:

    Mapou: almost all modern software programming languages enforce a strictly nested class hierarchy.

    Sure, but that doesn’t mean that when we look at human artifacts generally that they form a nested hierarchy.

  149. 149
    Joe says:

    As predicted Elizabeth just ignores her outrageous errors about nested hierarchies and computers. Willful ignorance it is then, eh, Lizzie?

  150. 150
    Joe says:

    Zachriel- Only intention can produce a nested hierarchy. Nested hierarchies are all artificial.

  151. 151

    Well, I could make the same assertion about you, Joe, and, I submit, with more justification.

    The fact that you think I am in error doesn’t make it so.

    The possibility remains that you are.

  152. 152
    Joe says:

    What assertion, Lizzie? I made my case against you and I will and can defend it. Let’s see what you have and then we can tell who is right. However it is a given you won’t even address what I posted that proves my points.

  153. 153

    No, you didn’t, Joe. You just asserted I was wrong. Well, I’m asserting you are. See how that works?

  154. 154
    Joe says:

    No, Lizzie, I made my case in two posts above- posts 139 and 140- you lose

  155. 155
    Upright BiPed says:

    Winston,

    Is it your contention that any configuration of matter is information?

  156. 156

    In #140 Joe wrote:

    Umm evolution is too messy to produce a nested hierarchy. Darwin went over that in 1859. Mayr went over that, Denton went over that and recently, in “Arrival of the Fittest”, Andreas Wagner went over that.

    No, it isn’t, and Darwin didn’t say so.

    Nested hierarchies require distinct groups. Transitional forms would blur all lines of distinction.

    You have misunderstood the meaning of the term “nested hierarchies” then. Try “phylogenies” – it means the same thing, and they do not require discrete groups.

    And BTW, the US Army is a nested hierarchy and it has nothing to do with evolution or descent with modification. Linnaean taxonomy, the observed nested hierarchy in biology, also has nothing to do with evolution or descent with modification.

    An observed nested hierarchy (or phylogeny) is just that – an observation. Linnaeus observed that the properties of living things produced such a hierarchy. Darwin posited, firstly, that such a hierarchy could arise from common descent, but that that in itself wouldn’t account for adaptive change over the generations. His theory of Descent with Modification and Natural Selection accounted for adaptive change.

    This is what happens when TSZ doesn’t allow dissenting views. Its regulars wallow in their own ignorance.

    It most certainly does allow dissenting views. What it does not allow is the posting of porn/malware (or links) nor does it allow the posting of personal info. Those are the only things that will get a member banned.

    Apart from that, you can post any view you like at TSZ. We have only banned two people.

  157. 157

    fifthmonarchyman wrote:

    No you have not inserted active information.

    The fact that the bigger pebbles will rise to the surface is a consequence of the laws of physics that are already present in the overall system from the beginning.

    You don’t add any information by letting the system play-out according to those already existing laws.

    The knowledge of what will happen when you shake already exists in your mind or you would not choose to shake in the first place.

    OK, say it was an earthquake then.

    We have active information from the preexisting laws and/or from your preexisting knowledge. No information whatsoever is added with the shaking.

    The resulting increased probability of picking a big pebble could be accurately predicted before you even touched the tray.

    peace

    OK, fine. If you don’t count tray shaking as Active Information addition, then I am happy to stipulate that the Universe already contained the information required to allow tray shaking.

    In that case Winston’s three options are, as I said, two, and we are no forrarder.

    Design and/or an initial low-entropy (i.e. lumpy, non-uniform) universe.

    Why should we infer Design?

  158. 158
    fifthmonarchyman says:

    Zac,

    very last comment on this

    It might have been nice to explore exactly what targets EA seek.

    But that ship sailed and was lost at sea in the midst of boring comments about whether or not Evolution itself is a search and whether English phrases are targets or fitness landscapes.

    Blah blah blah ZZZZ

    peace

  159. 159
    Joe says:

    Lizzie I have quoted Darwin, so you lose. Phylogenetic trees are not nested hierarchies. You are confused. And Darwin did not say that common descent would produce a nested hierarchy. You are bluffing or lying.

  160. 160
    Joe says:

    Extinction has only defined the groups: it has by no means made them; for if every form which has ever lived on this earth were suddenly to reappear, though it would be quite impossible to give definitions by which each group could be distinguished, still a natural classification, or at least a natural arrangement, would be possible.- Charles Darwin chapter 14

    and

    There is another stringent condition which must be satisfied if a hierarchic pattern is to result as the end product of an evolutionary process: no ancestral forms can be permitted to survive. This can be seen by examining the tree diagram on page 135. If any of the ancestors X, Y, or Z, or if any of the hypothetical transitional connecting species stationed on the main branches of the tree, had survived and had therefore to be included in the classification scheme, the distinctness of the divisions would be blurred by intermediate or partially inclusive classes and what remained of the hierarchic pattern would be highly disordered.- Denton, “Evolution: A Theory in Crisis” page 136 (X, Y and Z are hypothetical parental node populations)

    and

    The goals of scientists like Linnaeus and Cuvier- to organize the chaos of life’s diversity- are much easier to achieve if each species has a Platonic essence that distinguishes it from all others, in the same way that the absence of legs and eyelids is essential to snakes and distinguishes it from other reptiles. In this Platonic worldview, the task of naturalists is to find the essence of each species. Actually, that understates the case: In an essentialist world, the essence really *is* the species. Contrast this with an ever-changing evolving world, where species incessantly spew forth new species that can blend with each other. The snake *Eupodophis* from the late Cretaceous period, which had rudimentary legs, and the glass lizard, which is alive today and lacks legs, are just two of many witnesses to the blurry boundaries of species. Evolution’s messy world is anathema to the clear, pristine order essentialism craves. It is thus no accident that Plato and his essentialism became the “great antihero of evolutionism,” as the twentieth century zoologist Ernst Mayr called it.- Andreas Wagner, “Arrival of the Fittest”, pages 9-10

    Elizabeth doesn’t know what a nested hierarchy is nor what it entails.

  161. 161
    Zachriel says:

    fifthmonarchyman: It might have been nice to explore exactly what targets EA seek.

    Evolutionary algorithms don’t always have specific targets, but can have fitness landscapes that the replicators navigate. We provided a couple of examples, one from the scientific literature that was indirectly cited by News in another thread, which is why we provided it. See Krupp & Taylor, Social evolution in the shadow of asymmetrical relatedness, Proceedings of the Royal Society B: Biological Sciences 2015.

    Word Mutagenation doesn’t have a target. Rather, the replicators explore the landscape without regard to finding any particular position on the landscape.

    fifthmonarchyman: But that ship sailed

    While relative fitness changes based on what other replicators are doing, you can even change the dictionary itself. As long as those changes occur gradually, then the replicators would track along with those changes, like a ship on the waves.

    This is all standard fare for evolutionary algorithms.

  162. 162

    Joe

    Lizzie I have quoted Darwin, so you lose. Phylogenetic trees are not nested hierarchies. You are confused. And Darwin did not say that common descent would produce a nested hierarchy. You are bluffing or lying.

    Yes, they are, Joe. So if you mean something other than a tree structure by “nested hierarchy” then I don’t.

  163. 163
    Winston Ewert says:

    1) Biased as compared to what? What does unbiased look like?

    It is biased compared to whatever you take to be your natural distribution.

    2) You call the –log base 2 of (p/q) active information. But you say p/q is not a probability. Yet in other contexts you define information as –log base 2 of a probability (e.g. endogenous information and exogenous information. It seems like active information is a different kind of thing from other kinds of information.

    Indeed, it is somewhat different from other types of information.

    For any pdf mu that gives a probability P of “hitting a target” it is possible to find a higher level pdf mu-bar that creates pdfs that in total have the same probability of “hitting the target”.

    The paper uses a particular mu-bar, derived from mu, which ends up with the same total probability. There would in fact be many different mu-bars that would end up with the same probability of hitting the target as mu. I don’t believe it really matters which one you end up using.
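    Mark Frank’s rephrasing in #137 can be illustrated with a toy example. This is a sketch under assumed numbers: the particular mu-bar here is just a two-search mixture chosen so its average is mu, not the construction used in the paper:

    ```python
    import random

    TARGET = 0                        # index of the target in a 4-point space
    mu = [0.4, 0.3, 0.2, 0.1]         # a biased search: P(hit target) = 0.4

    # A toy higher-level "mu-bar": a 50/50 mixture of two searches whose
    # average distribution is exactly mu.
    mu1 = [0.6, 0.1, 0.2, 0.1]
    mu2 = [0.2, 0.5, 0.2, 0.1]

    def p_hit(searches, weights, trials=100_000):
        """Draw a search from mu-bar, then run it once; estimate P(hit)."""
        hits = 0
        for _ in range(trials):
            search = random.choices(searches, weights=weights)[0]
            outcome = random.choices(range(4), weights=search)[0]
            hits += outcome == TARGET
        return hits / trials

    print(p_hit([mu1, mu2], [0.5, 0.5]))   # ≈ 0.4, same as mu itself
    ```

    Any mixture of searches whose mean distribution is mu yields the same total probability of hitting the target, which is why the particular choice of mu-bar doesn’t matter for that probability.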

    In other words “Active Information” is simply a measure of how lumpy the probability distributions are in the universe.

    It is the measure of how biased the distribution is towards a particular target. If the universe is lumpy in arbitrary ways that don’t tend toward the target of interest, the universe won’t have active information.

    So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray).

    Where did the tray shaking process come from? If it was in your “universe” from the start, then you always had a high amount of active information towards large pebbles. If you introduced it later, you are interfering with the universe, injecting active information after its creation.

    In which case all you are saying is mainstream physics: that entropy is always increasing over the whole system – that you need to import energy to reduce local entropy (as I did when I shook the tray of pebbles). But we know this is possible – we can do it with pebbles, and plants do it with photosynthesis. Tornadoes do it. Adding energy to a system frequently reduces local entropy.

    So what has the LCI got to add that isn’t just a restatement of Boltzmann?

    I say that active information is non-increasing. Since entropy is also non-increasing you decide that this means that active information and entropy are the same thing. They are not the same thing.

    Sure, Darwinian models don’t attempt to account for the existence of the physical and chemical laws that make Darwinian processes possible.

    That’s not what I intended to say, as I elaborated in the following paragraphs. I’m not asking Darwinian theory to give an account of the laws of nature. That is outside the scope of the theory. I’m asking Darwinian theory to make explicit its assumptions about the nature of fitness landscapes and physical laws. I’m asking Darwinists not to assume that the fitness landscapes and laws of physics don’t matter; the theory has to assume something about the nature of the fitness landscapes in order to work.

  164. 164
    Joe says:

    Elizabeth- Phylogenetic trees are not nested hierarchies. Period and I can provide a reference if you really need one.

    And just because a nested hierarchy can be depicted as a tree does NOT mean all tree patterns are a nested hierarchy.

    A Summary of the Principles of Hierarchy Theory That would be a start.

    You have absolutely no idea what a nested hierarchy is even though I told you.

  165. 165
    Zachriel says:

    Winston Ewert: I’m asking Darwinist not to assume that the fitness landscapes and laws of physics don’t matter, but that theory has to assume something about the nature of the fitness landscapes in order to work.

    Actually, it’s a crosscheck. Evolution tends to work best when there is an ordered relationship between the genotype, phenotype, and environment. We have many observations which show this ordered relationship. Conversely, evolution tests the landscape, and historical evidence shows that the landscape exhibits properties amenable to evolution.

  166. 166
    fifthmonarchyman says:

    EL says,

    Design and/or an initial low-entropy (i.e. lumpy, non-uniform) universe.

    Why should we infer Design?

    I say,

    It is not a question of why we should infer design. We are hardwired to infer design. We have no choice in the matter.

    The only question is whether we have any valid reason to abandon that preexisting inference.

    On the other hand we have no natural inclination to expect an uncaused low-entropy universe. It is a forced conclusion. Why make it?

    peace

  167. 167
    Zachriel says:

    fifthmonarchyman: The only question is whether we have any valid reason to abandon that preexisting inference.

    In science, all presumptions have to be taken skeptically—especially intuitive notions of design, which have historically been misleading.

  168. 168
    fifthmonarchyman says:

    zac says,

    In science, all presumptions have to be taken skeptically

    I say,

    Skepticism is good. Hyperskepticism not so much.

    Skepticism says “I’m willing to explore other explanations if they arise”

    Hyperskepticism says “I will disregard my hardwired impressions until I am given irrefutable proof of their validity”

    You say,

    —especially intuitive notions of design, which have historically been misleading.

    I say,

    This is simply incorrect. In my everyday life I’m much more likely to incorrectly attribute the artifacts of design to “natural processes”.

    I assume that you are referring to our discovery of proximate causes, but that sort of thing does not in any way prove that our initial impressions were misleading. The process goes something like this.

    1) I notice that the large pebbles are on the top of the tray and infer design.

    2) I discover the tray has been shaken and that this shaking can cause large pebbles to move to the top.

    number 2 does not invalidate number 1

    peace

  169. 169

    Hi, Winston

    I say that active information is non-increasing. Since entropy is also non-increasing you decide that this means that active information and entropy are the same thing. They are not the same thing.

    That was not my reasoning! It would be very strange reasoning, as entropy is always increasing! And in any case, it would be fallacious, even if the premises were true, which they aren’t.

    I’m interested in your answer as to why they are different, but let me explain why I think they are related, and why I don’t think your conclusion is any different from the conclusion that the universe started with low entropy, which is the reason life is possible, but which leads to the conclusion that ultimately it will cease.

    Low entropy can be described, informally, as “lumpiness” or, slightly more formally, as “non-uniformity”. If entropy is always increasing, then the ultimate fate of the universe would be “heat death” – a completely undifferentiated universe (hence the ultimate end of life).

    And thermodynamic entropy, as you know, has a very similar definition to Shannon entropy, give or take a constant – it’s the negative of the sum of pi*log pi, where pi is the probability of the ith possible microstate of the system. Shannon entropy is the same, except that pi is the normalised frequency (or probability, if you like) of the available patterns.

    Shannon entropy is thus maximised for a uniform probability distribution, which means that a channel in which the symbols have a uniform probability distribution has a greater capacity than any channel with the same number of symbols but a non-uniform distribution, i.e. it can carry more information. So, as a rather dangerous shorthand, we can equate high Shannon entropy with high information content, although really all it means is that it has high information capacity.
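    The relationship between uniformity and Shannon entropy described above can be checked numerically. A minimal sketch in Python (the function name and example distributions are mine, for illustration only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits: the negative sum of p_i * log2(p_i)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four symbols with a uniform distribution: entropy is maximal, log2(4) = 2 bits.
uniform = [0.25, 0.25, 0.25, 0.25]

# Same four symbols with a non-uniform ("lumpy") distribution: entropy is lower,
# so the channel has less capacity, even though the symbol count is the same.
lumpy = [0.7, 0.1, 0.1, 0.1]

print(shannon_entropy(uniform))  # 2.0
print(shannon_entropy(lumpy))    # ~1.357
```

    Any departure from uniformity lowers the entropy, which is the sense in which high Shannon entropy marks capacity rather than content.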

    So what we can say is that in a toy universe with high thermodynamic entropy, i.e. a fairly uniform distribution of possible microstates, no one microstate is very probable, and so the chances of any given microstate occurring at a given time are low. On the other hand, in a toy universe in which the entropy is low, the chances of certain microstates occurring might be very high (and others far lower). So high thermodynamic entropy means that you would need a lot of information (in the usual English meaning) to know when to look at the system in order to find a target microstate. In contrast, low thermodynamic entropy means that, as long as your target microstate was one of the high-probability ones, you would need very little information as to when to look – hang around for a few minutes and one will turn up.

    This means that in a universe in a low entropy state (which ours was, and still is, compared to what it will eventually be), the probability distribution of microstates is not flat. So we have lots of microstates that are really quite common, even though, when the universe is in a high-entropy state, they’d be very rare. For instance, in a universe in a high entropy state, you are vanishingly unlikely to find a room that is warmer at one end than the other. When entropy is low, on the other hand, it happens quite often! Similarly, complex configurations, such as vortices, are common in a low entropy universe, even though they are extremely unlikely in a high entropy universe. Thus, compared to what is likely to occur in a high entropy universe, many extraordinary things are really quite likely in a low entropy universe – tornadoes, for instance.

    You can see where I’m going with this, I hope. If target X has probability p in a high entropy universe, but probability q in a low entropy universe, then the Active Information represented by the low entropy state becomes equivalent to the entropy differential. Therefore, the Active Information required to make vortices, and chemistry, and, indeed, Life, possible, was indeed present at the start of the universe – embodied in its low entropy state, i.e. the state that gave it its extreme non-uniformity; its tendency to clump; its tendency to form a wide variety of elements of different weights; its tendency to give rise to energy humps and wells; in other words, the properties we call Physics and Chemistry, and what I have also called its “1/f-ness” – variability at a vast range of scales from the sub-atomic to the inter-galactic.

    And as entropy increases, the differential between what is probable in a flat universe (maximum entropy; maximum flatness of pdf) and what is probable in a lumpy universe diminishes. So indeed Active Information will decrease over time. “Information”, in your formulation, will still be conserved, as the total given by the sum of pi*log pi remains even when the distribution is completely flat.
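    The “entropy differential” can be expressed with Dembski, Ewert and Marks’ measure: active information is I+ = log2(q/p), where p is the target’s probability under the baseline (flat, maximum-entropy) distribution and q its probability under the actual (lumpy, low-entropy) one. A sketch with illustrative numbers of my own choosing, not taken from any of their papers:

```python
import math

def active_information(p_baseline, q_actual):
    """Active information I+ = log2(q/p): the bias toward the target
    relative to the baseline distribution, in bits."""
    return math.log2(q_actual / p_baseline)

# Target microstate under a flat (high-entropy) distribution over 1024 states:
p = 1 / 1024
# Same target under a lumpy (low-entropy) distribution that favours it:
q = 1 / 4

print(active_information(p, q))  # 8.0 bits of bias toward the target

# As entropy increases and the distribution flattens (q -> p),
# the active information shrinks to zero:
print(active_information(p, p))  # 0.0
```

    As the distribution flattens toward maximum entropy, q approaches p and the active information falls to zero, matching the claim that Active Information decreases over time.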

    We could thank the Designer for granting us a universe that started in a low entropy state, but I’m not sure we can infer her existence from her apparent gift 🙂

    ETA: subscripts don’t seem to work 🙁 Hope you can figure out my subscripted i’s.

  170. 170

    Joe:

    Elizabeth- Phylogenetic trees are not nested hierarchies. Period

    In that case, where I wrote “nested hierarchy” interpret my meaning as “phylogenetic tree”. In other words a distribution of properties that forms a tree-diagram. What Darwin drew, in other words, and what the Linnaean taxonomy forms.

  171. 171
    Zachriel says:

    fifthmonarchyman: Skepticism says “I’m willing to explore other explanations if they arise”

    Sure.

    fifthmonarchyman: This is simply incorrect. In my everyday life I’m much more likely to incorrectly attribute the artifacts of design to “natural processes”.

    People have attributed mountains, storms, rivers, the Sun, jewels, the planetary motions, to design.

  172. 172
    Joe says:

    A tree diagram can be made from a common design. The history of cars can form a tree diagram.

  173. 173

    A tree diagram can be made from a common design.

    Yes it can. But whereas common design can produce both tree and non-tree like lineages, Darwinian evolution (at least if we confine ourselves to longitudinal inheritance vectors, as Darwin did, and which are by far the most dominant vectors in macro-cellular organisms), can’t produce non-tree-like lineages. So that is a limitation. So if life evolved, we’d expect to see that limitation manifest in the distribution of properties of organisms, and we do. Whereas, if a Designer periodically intervened, we might see frequent violations of the tree, for instance, the transfer of the excellent bird-lung pattern into mammals, who could well benefit from them, or a re-routing of the laryngeal nerve, at least for giraffes.

    The history of cars can form a tree diagram.

    If you were to plot a phylogeny for cars (using an objective technique), you’d get a reasonable tree, but a lot of jumps between lineages. So often, one company gets a neat design idea, and then all the other companies tool up to get on the band-wagon. Also, patents tend to keep things tree-like until they expire, then it’s HGT all over the shop.

    So the noticeable dearth of solution-swapping between lineages, i.e. the fact that the tree-structure is much deeper than would be expected by chance, or by the product of designers capable of imaginative leaps, idea-borrowing, and re-tooling, is strongly suggestive of evolutionary processes at work rather than the work of an active intervening Designer.

    Also, the complete absence of tools, factories, or even footprints.

    However, the existence of a universe in which all this could happen, or, indeed, the existence of existence at all, may be an argument for a creator deity. It’s not one I find compelling though.

  174. 174
    Mark Frank says:

    WE

    It is biased compared to whatever you take to be your natural distribution.

    So if I take my natural distribution to be different from yours, then something may be biased for you but not for me? Yet active information is a measure of bias. Whose bias?
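    Mark Frank’s point can be illustrated directly: because active information I+ = log2(q/p) is computed relative to a chosen baseline p, two observers who pick different “natural” distributions assign different, even opposite-signed, values to the same outcome. A sketch with made-up numbers:

```python
import math

q = 0.10  # probability of the target under the process being analysed

# Observer A takes a uniform distribution over 100 outcomes as "natural":
p_a = 1 / 100
# Observer B takes a different natural distribution, under which the
# target was already quite likely:
p_b = 0.40

print(math.log2(q / p_a))  # ~ +3.32 bits: biased toward the target
print(math.log2(q / p_b))  # -2.0 bits: biased away from it
```

    The same outcome, under the same process, counts as positive active information for one observer and negative for the other.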

  175. 175
    Joe says:

    Elizabeth- Darwinian evolution doesn’t have a mechanism capable of getting beyond populations of prokaryotes and that is given starting populations of prokaryotes. And guess what? Prokaryotes produce non-tree like patterns.

    Also, given the nature of gradual evolution, we wouldn’t expect a tree – a bush, maybe – but not a tree.

  176. 176
    fifthmonarchyman says:

    Zac says,

    People have attributed mountains, storms, rivers, the Sun, jewels, the planetary motions, to design.

    I say,

    Yes and knowing the proximate causes of those things does not invalidate that original attribution any more than knowing that the pebble tray shook invalidates our impression that the big pebbles are on top due to design.

    Your problem is you somehow have mistaken the proximate causes of things with their ultimate cause.

    Winston Ewert’s paper can help you to get past this sort of muddled thinking if you will simply allow yourself to think about the implications.

    peace

  177. 177

    @ Joe # 193

    It’s not the gradualness of evolution that would make it bush-like, but non-longitudinal inheritance mechanisms. And indeed, in bacteria, we do see lots of horizontal inheritance mechanisms, and indeed we see much more bushiness.

    In sexually reproducing species, we have other ways of recombining our genetic material, and so even though most inheritance is down lineages, there is still lots of scope for variation.

    And the issue of prokaryotes to eukaryotes is an interesting one – the best hypothesis, and one supported by quite a lot of evidence, is probably Margulis’s. But there are others (“membrane infolding”, for instance). Not that those are non-Darwinian – it’s just that they presuppose a specific mechanism for a fairly major heritable change.

  178. 178
    mike1962 says:

    Elizabeth Liddle: Whereas, if a Designer periodically intervened, we might see frequent violations of the tree, for instance, the transfer of the excellent bird-lung pattern into mammals, who could well benefit from them, or a re-routing of the laryngeal nerve, at least for giraffes.

    On the contrary, if we assume the Designer wanted to befuddle people like you, and give you rocks on which to stumble, we would expect that the Designer would occasionally arrange that animals would have odd things like laryngeal nerves that seemingly could use re-routing. (Although the Giraffe does just fine with the current configuration.) And guess what? That’s exactly what we find! How scientific such reasoning is!

  179. 179

    mike 1962

    On the contrary, if we assume the Designer wanted to befuddle people like you, and give you rocks on which to stumble, we would expect that the Designer would occasionally arrange that animals would have odd things like laryngeal nerves that seemingly could use re-routing. (Although the Giraffe does just fine with the current configuration.) And guess what? That’s exactly what we find! How scientific such reasoning is!

    Precisely so, Mike. Which is why we cannot conclude from what we observe that there was no Designer. The Designer hypothesis is consistent with absolutely any observation we could possibly make (if we stipulate that the Designer is omnipotent anyway).

    Which is why of course, people make no such conclusions (or, if they do, why such a conclusion is not scientific).

    All that scientists conclude is that there are non-Design mechanisms that could do the job.

    The problem I have with the ID movement is not their conclusion but their method of reasoning. I do not think you can infer a Designer from our observations, any more than we can infer not-a-Designer.

  180. 180
    Mapou says:

    Liddle:

    A tree diagram can be made from a common design.

    Yes it can. But whereas common design can produce both tree and non-tree like lineages, Darwinian evolution (at least if we confine ourselves to longitudinal inheritance vectors, as Darwin did, and which are by far the most dominant vectors in macro-cellular organisms), can’t produce non-tree-like lineages. So that is a limitation. So if life evolved, we’d expect to see that limitation manifest in the distribution of properties of organisms, and we do. Whereas, if a Designer periodically intervened, we might see frequent violations of the tree,

    But this is exactly what we see in nature. We see flying mammals, swimming birds and mammals, walking fish, we see dolphins with similar echolocation systems as bats, we see different ocean species sharing common swimming mechanisms, etc. Lateral genes are such a problem for Darwinism that Darwinists have been piling up all sorts of non-explanations to wipe the egg off their faces. This is precisely why, lately, we hear so much silly pseudoscientific talk about convergent evolution.

    The truth is that most of the LGTs occur early in the tree, which is precisely what we would expect from design.

    On another tangent, what will you people do when long sequences and even entire genes are found to be identical in distant branches of the tree? Will you continue to plead convergence or will you come up with some other non-scientific, just-so story?

  181. 181
    Joe says:

    Elizabeth, The very nature of transitional forms would make it bush-like. Every population can be looked at like an asterisk as that is what pattern it has the potential to create.

    Endosymbiosis is nothing more than “those eukaryotic organelles sure do look like they coulda been bacteria at one time”- that’s speculation, not science.

    Those still need to be tested.

    The Designer hypothesis is consistent with absolutely any observation we could possibly make (if we stipulate that the Designer is omnipotent anyway).

    That is false. There is a reason not all rocks are artifacts and not all deaths are murders.

    The problem I have with the ID movement is not their conclusion but their method of reasoning.

    It’s the same reasoning used by archaeologists, forensic science and SETI and is based on our knowledge of cause and effect relationships. And all one has to do to refute it is demonstrate that mother nature is sufficient.

    ID posits testable entailments.

  182. 182
    fifthmonarchyman says:

    EL says,

    I do not think you can infer a Designer from our observations.

    I say,

    But you do infer design from our observations. You are hardwired to do so. That is not at issue; it is a fact.

    What you have is a preexisting design inference that you have chosen to discount for some reason. The only question is do you have warrant to do so.

    You don’t come to the design question from a neutral position. You can’t.

    You start on the design side of the fence and therefore need compelling evidence to move to the nondesign side.

    Do you have any?

    peace

  183. 183

    mapou

    But this is exactly what we see in nature. We see flying mammals, we see dolphins with similar echolocation systems as bats, we see different ocean species sharing common swimming mechanisms, etc.

    Those examples help the Darwinian story, not yours, I’m afraid, mapou. Flying mammals have wing structures in which the anatomical homologs are clearly mammalian, not bird-like. And dolphins and bats do indeed share genes that lend themselves to echo-locating functions – not surprisingly, as they are quite closely related, so evolving similar functions from similar genetic material is not especially remarkable. What is far more remarkable is that when organisms from different lineages (e.g. birds and mammals) adapt to a similar environment (marine), the same features are present, but with homologs relating to their own lineages, not each other’s. If this were not the case, computer-derived phylogenies wouldn’t consistently give a tree, with penguins at the end of one branch and seals at the end of another.

    Lateral genes are such a problem for Darwinism that Darwinists have been piling up all sorts of non-explanations to wipe the egg off their faces.

    Not at all. Why should there be a problem? The fact that there are additional inheritance vectors does not falsify the mechanisms that were originally postulated. And they certainly do not falsify Darwin’s principle of natural selection from variants – it’s just that we now know that there are non-longitudinal means of producing those variants.

    This is precisely why, lately, we hear so much silly pseudoscientific talk about convergent evolution.

    Convergent evolution normally refers to organisms that reach similar macroscopic morphologies by means of very different anatomical adaptations, e.g. birds and bats; dolphins and fish. They don’t present a problem, because one look at the skeleton will tell you that they are from different lineages. But clearly, an environment that favours streamlining and flippers will tend to favour variants that are more streamlined and have more flipper-like limbs. You are finding problems where there are none.

    If you want to find a problem with scientific accounts of biology, I suggest you focus on OOL, because we still don’t have a good account of that, and may never, although there are a lot of very suggestive leads.

    The truth is that most of the LGTs occur early in the tree, which is precisely what we would expect from design.

    Do you mean HGT? Because that’s where they are most abundant – at the root of the tree. And I don’t see why it’s a prediction of Design. And we actually know a lot about how HGT happens.

    Or perhaps you do mean LGT? In which case – sure, hybridisation occurs most often near branching points. But that’s absolutely obvious under Darwinian mechanisms. It’s not at all obvious under design – the reverse, I’d say, is true: it’s when products have gone quite a long way down the lineage that you start to get hybrids (iPhones, for instance, from computers + phones).

    On another tangent, what will you people do when long sequences and even entire genes are found to be identical in distant branches of the tree? Will you continue to plead convergence or will you come up with some other non-scientific, just-so story?

    They already are, as you’d expect under common descent. Or do you mean “and absent from intervening branches”?

    I don’t know, mapou – let me know when it’s been discovered, and I guess the scientists who discover it will tell us how they propose to investigate possible mechanisms.

  184. 184
    SimonLeberge says:

    Bob O’H and DiEb:

    The stochastic process defined by Dembski, Ewert, and Marks terminates with the selection of an element of the space Omega. Nature has not stopped to say, “Here it is — birds!” To suggest that Ewert thinks he has a model of biological evolution would be to insult his intelligence. That leaves us to ask why he and his editor have tossed around the term “conservation of information” at ENV. The theorem of DEM does not apply to the non-terminating process that has generated birds. I would allow that it applies to the process that ended with extinction of the dodos. But I can’t bring myself to regard the empty population as an example of biological complexity.

  185. 185
    Joe says:

    Elizabeth:

    Flying mammals have wing structures in which the anatomical homologs are clearly mammalian, not bird-like.

    Yes, they have a common DESIGN.

    Convergent evolution is just another “just-so” explanation. Dr Spetner lays the claim bare in “The Evolution Revolution”.

  186. 186

    fifthmonarchyman says:

    EL says,

    I do not think you can infer a Designer from our observations.

    I say,

    But you do infer design from our observations. You are hardwired to do so. That is not at issue it is a fact.

    Let me rephrase as I was unclear: I do not think you can infer a Designer of biological organisms from our observations of biological organisms. I do not think the evidence supports such an inference. The evidence is perfectly consistent with it (because an omnipotent Designer could design things any way she wanted, including designing them so that they looked as though they had evolved) but to make a positive inference, you’d have to be able to test it specifically. And you can’t do that easily without being more specific about constraints on the putative Designer.

    What you have is a preexisting design inference that you have chosen to discount for some reason. The only question is do you have warrant to do so.

    You don’t come to to the design question from a neutral position. You can’t.

    You start on the design side of the equation and therefore need compelling evidence to move to the nondesign side.

    Do you have any?

    peace

    Now I am misunderstanding you. I don’t know what you mean. I am not on “the nondesign side”. I don’t know whether there was/is a designer or not. I don’t see any evidence for one, but then an omnipotent designer could choose not to leave evidence.

    So we certainly can’t rule an omnipotent designer out. But nor can we conclude that there must be one.

  187. 187

    Yes, they have a common DESIGN.

    Convergent evolution is just another “just-so” explanation. Dr Spetner lays the claim bare in “The Evolution Revolution”.

    No, they don’t have a common Design. Bat wings and bird wings are quite different designs. It’s, if anything, as though one designer was asked to make a flying animal out of a small dinosaur, and another was asked to make a flying animal out of a mouse.

    Which is exactly what you’d expect of a pair of animals so obviously related to dinosaurs and mice, respectively, in so many other respects.

  188. 188
    fifthmonarchyman says:

    EL says,

    but to make a positive inference, you’d have to be able to test it specifically. And you can’t do that easily without being more specific about constraints on the putative Designer.

    I say,

    No, you start with a positive inference from your observations; you then must suppress this notion. Check it out:

    http://www.wsj.com/news/articl.....4046805070

    You say,

    I am not on “the nondesign side”. I don’t know whether there was/is a designer or not. I don’t see any evidence for one, but then an omnipotent designer could choose not to leave evidence.

    I say

    What I mean is you begin the game believing that what you see is the result of design and for some reason you abandoned that position for what you now think is a more neutral one.

    You did not start life on the fence; you are not a blank slate.

    The question is did you have warrant for your change in perspective.

    Do you have convincing evidence that life is not designed? Is such evidence even possible?

    I think you have already acknowledged it’s not. So why the change?

    peace

  189. 189
    Joe says:

    Elizabeth:

    No, they don’t have a common Design. Bat wings and bird wings are quite different designs

    All mammals have a common design

  190. 190
    Joe says:

    Elizabeth:

    I do not think you can infer a Designer of biological organisms from our observations of biological organisms.

    That is why we also use other observations. If we could test unguided evolution you would have something. Yet it can’t even be modeled and offers no testable entailments.

  191. 191

    I say

    What I mean is you begin the game believing that what you see is the result of design and for some reason you abandoned that position for what you now think is a more neutral one.

    You did not start life on the fence; you are not a blank slate.

    The question is did you have warrant for your change in perspective.

    Do you have convincing evidence that life is not designed? Is such evidence even possible?

    I think you have already acknowledged it’s not. So why the change?

    I don’t really know what you are asking me. No, I’ve just said, I don’t have convincing evidence that life was not designed. If the putative designer can make life look not-designed, then there’s no way we can rule it out, just as we can’t rule out the possibility that the earth was created last Thursday with the appearance of great age.

    I just don’t see any good arguments to infer Design from biology.

    To take an analogy: I might be perfectly convinced that my son has gone out to see a movie, but I cannot infer that from the fact that his coat isn’t on the hook. It could be on the floor of his bedroom, or he could indeed be out, but at the pub.

    It’s the inferential chain I am disputing, not the conclusion.

    And it seems to me that Ewert, Dembski and Marks are themselves conceding that the universe might be perfectly capable of producing living things “naturally” given enough “Active Information” at Big Bang. Which would not be an argument from biology, but an argument from physics and chemistry.

    Not a very good one, I have to say, but closer to a good one than inferring it from biology.

    I’d say the biggest argument for a creator deity is the fact that anything exists at all: “why is there anything rather than nothing?”

    But I don’t think it’s terribly watertight, even then. “Nothing” turns out to be a complicated matter when space itself is one of the Things that can be Nothing.

  192. 192

    Joe:

    Yet it can’t even be modeled

    Yes it can and is. That you think it can’t be doesn’t make you correct.

    and offers no testable entailments.

    Yes it does, and has been tested, many times, in field, in lab, both experimentally and observationally, and of course in silico.

  193. 193
    fifthmonarchyman says:

    EL says,

    I just don’t see any good arguments to infer Design from biology.

    I say,

    You don’t need arguments. You are hardwired to infer design. You need arguments to justify your abandonment of this inference.

    You say,

    It’s the inferential chain I am disputing, not the conclusion.

    I say,

    There is no inferential chain; you infer design from your observations in one step.

    quote:

    “Biology is the study of complex things that appear to have been designed for a purpose.”
    end quote:

    Richard Dawkins

    I’m not sure why you are having such a hard time grasping this. You need evidence to support changing from the position that life is designed to one that you now feel is more neutral.

    you say,

    And it seems to me that Ewert, Dembski and Marks are themselves conceding that the universe might be perfectly capable of producing living things “naturally” given enough “Active Information” at Big Bang.

    I say,

    The key word is “might”. We don’t abandon our hardwired impressions just because it’s possible they are mistaken. We need good reasons to do so.

    It’s possible I might be a brain in a vat but I have seen no compelling evidence to abandon my hardwired impression that my body exists so I don’t.

    The same approach should be sufficient when dealing with the hardwired design inferences we all make.

    peace

  194. 194
    Zachriel says:

    fifthmonarchyman: Yes and knowing the proximate causes of those things does not invalidate that original attribution any more than knowing that the pebble tray shook invalidates our impression that the big pebbles are on top due to design.

    If you want to make a non-scientific claim, then we have no objection. If you claim there is scientific evidence of design in weather or biology, then we disagree.

  195. 195

    fifthmonarchyman:

    You are hardwired to infer design

    I don’t know what this means, or why it would be relevant.

    ETA: I also don’t agree with Dawkins’ definition of biology. It’s not that I don’t “grasp” it – I think it is incorrect. Biology is the study of living things. I don’t agree that living things have the appearance of being designed. I think they have the appearance of having been born to similar parents.

  196. 196
    Joe says:

    Elizabeth, I call your bluff. Please present these alleged models for UNGUIDED evolution. And after that please tells us about these alleged testable entailments for UNGUIDED evolution.

  197. 197
    Joe says:

    Zachriel:

    If you claim there is scientific evidence of design in weather or biology, then we disagree.

    Then present a viable alternative for biology.

  198. 198
    Joe says:

    Elizabeth:

    I just don’t see any good arguments to infer Design from biology.

    You haven’t demonstrated that you have understood them. You don’t even appear to understand exactly what is being debated. And if you read comment 139 it appears that you don’t understand computers.

  199. 199

    But birds are different, right?

  200. 200
    Joe says:

    Birds and bats share a common design also. It is on a different level than the common design shared by mammals. All animals share a common design on some level- at least one level. And that common design is elucidated by Linnaean taxonomy.

  201. 201
    Mapou says:

    Arguing with a Darwinist about intelligent design is like arguing with a Jehovah’s witness about blood transfusion.

  202. 202
    fifthmonarchyman says:

    EL says,

    I don’t know what this means, or why it would be relevant.

    I say,

    It means that science has demonstrated that we are hardwired to infer design when we observe certain things in nature. Ask a small child why the zebra is striped and she will assume that it was designed to be that way.

    It’s relevant because your position demands we deny this inborn assumption and instead come at the design question from a neutral position. Yet you don’t demand the same for other hardwired inferences.

    For example, you don’t demand positive evidence before you grant that the material universe or your body exists. You tentatively accept these things until a better explanation for your impressions is given.

    You say,

    Biology is the study of living things. I don’t agree that living things have the appearance of being designed.

    I say,

    I’m not interested in your present opinion; I’m interested in how you can justify changing your mind.

    At one time you did believe that life appeared designed. Everyone does. That is what it means to say that this inference is hardwired. The universality of this impression has been confirmed scientifically.

    What compelling evidence do you have for abandoning your natural belief?

    peace

  203. 203
    fifthmonarchyman says:

    Zac said,

    If you want to make a non-scientific claim, then we have no objection. If you claim there is scientific evidence of design in weather or biology, then we disagree.

    I say,

    The claim is that these things cannot be produced algorithmically without the addition of active information. It does not matter whether you agree or not, only whether you can disprove the claim.

    Several hundred comments are all the evidence I need that you cannot.

    peace

  204. 204
    Mung says:

    Mark Frank:

    So if I take my natural distribution to be different from yours, then something may be biased for you but not for me? Yet active information is a measure of bias. Whose bias?

    I’m sure the bias is all yours Mark. 😀

  205. 205
    SimonLeberge says:

    Mark Frank and DiEb and Bob O’H:

    The ID movement has a heavy investment in the terms “search,” “target,” “search for a search,” and “conservation of information,” going back at least to No Free Lunch (2002), and continuing through Being as Communion (2014). Ewert acknowledges now that a “search” doesn’t really search for the “target,” but sticks with the terms anyway. We can see that the change has yet to permeate his thinking, as he continues to refer to categorical success and failure in evolution:

    Darwinian evolution does not explain the complexity of biological life because its success or failure depends on the fitness landscapes it operates on.

    This isn’t just careless language. It makes sense only if something really does seek to “hit the target.”

    Ewert acknowledges that “active information” is a measure of bias, not information. But he continues to indicate otherwise by referring to “conservation of information.” He avoids speaking of the “search for a search,” though it is that to which the “conservation of information” theorem applies.

    I’d like to hear what you have to say about improving terminology. The “target” is just an event. DiEb sometimes refers to a “search” as a guess of an element of Omega. I’m fine with that, but hardly anyone else is. I know it seems silly, but “uninformed decision process” might get a better reception, in part because it indicates that there’s a sequence of steps, and in part because it doesn’t come across as flippant. DEM’s process S does make sequential decisions on which elements of Omega to “inspect” (take data on), and Delta(S) is a final selection of one of the inspected elements.
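    For concreteness, here is a toy sketch (my own illustration, not code from DEM or anyone in this thread) of such an uninformed decision process: S blindly inspects elements of Omega, and Delta(S) picks one of the inspected elements.

```python
# Hypothetical sketch of an "uninformed decision process":
# the process S inspects a sequence of elements of omega with no
# information about any target, and Delta(S) makes a final selection
# from among the inspected elements.
import random

def uninformed_decision_process(omega, steps, seed=0):
    """Blindly inspect `steps` elements of omega, then select one of them."""
    rng = random.Random(seed)
    inspected = [rng.choice(omega) for _ in range(steps)]  # sequential inspections by S
    return rng.choice(inspected)                           # Delta(S): final selection

omega = list(range(100))
result = uninformed_decision_process(omega, steps=10)
print(result in omega)  # True: the selection is always some inspected element
```

    Note the process never consults the target, so any success it has against a particular target is a matter of the distribution it induces over Omega.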

    The “search for a search” is just a mixture of uninformed decision processes, which is an uninformed decision process. The whys and wherefores are few and simple.

    There is no need for the “conservation of information” theorem.

  206. 206
    Mung says:

    SimonLeberge:

    Ewert acknowledges now that a “search” doesn’t really search for the “target,” but sticks with the terms anyway.

    The opening sentence from the abstract of A General Theory of Information Cost Incurred by Successful Search:

    This paper provides a general framework for understanding targeted search.

    Further:

    We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

  207. 207
    Mung says:

    SimonLeberge:

    The ID movement has a heavy investment in the terms “search,” “target,” “search for a search,” and “conservation of information,” going back at least to No Free Lunch (2002), and continuing through Being as Communion (2014).

    And this will probably continue to be the case as long as targeted searches continue to be presented as proofs of evolutionary theory.

  208. 208
    Mung says:

    Elizabeth Liddle:

    Entropy can be described, informally, as “lumpiness” or, slightly more formally as “non-uniformity”.

    There are indeed all sorts of silly ways to talk about entropy, most of which are wrong. If you ask someone what Entropy is they won’t be able to tell you.

    Elizabeth Liddle:

    I’m interested in your answer as to why they are different, but let me explain why I think they are related…

    They are related because Shannon’s measure of information can be applied to any probability distribution.

    However, there are many cases in which the entropy is undefined. That’s why they are different.

    Simple and concise.

  209. 209
    SimonLeberge says:

    Mark Frank:

    If the space Omega is countably infinite, then there definitely is no “natural” baseline distribution. DEM rule this out, but they shouldn’t. The most “natural” choice of a space of genotypes of organisms is countably infinite. Even if they argue for an upper bound on the size of a genotype, that doesn’t get them a particular distribution.

    That’s my best biologically-relevant example.
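    A one-line argument (my addition, not in the original comment) makes the point precise: a uniform distribution on a countably infinite space cannot sum to one.

```latex
% No uniform baseline on a countably infinite \Omega:
% if P(\omega) = c for every \omega \in \Omega, then
\sum_{\omega\in\Omega} P(\omega) \;=\; \sum_{n=1}^{\infty} c \;=\;
\begin{cases}
  0      & \text{if } c = 0,\\
  \infty & \text{if } c > 0,
\end{cases}
% so the total is never 1, and no uniform ``natural'' distribution exists.
```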

  210. 210
    kairosfocus says:

    S = k ln W
    (It’s actually on Uncle Ludwig’s grave . . . )

  211. 211
    SimonLeberge says:

    Mung (224-225):

    Ewert has taken a big step away from DEM in his article at Evolution News and Views. You probably shouldn’t scour it for quotes. You might suffer the awful realization that I gave you correct explanations of the math before any of the ID theorists published them.

  212. 212

    Elizabeth Liddle says:

    fifthmonarchyman wrote:

    At one time you did believe that life appeared designed. Everyone does. That’s what it means to say that this inference is hardwired. The universality of this impression has been confirmed scientifically.

    I dispute your premise. I don’t think we are “hard-wired” to think that everything is designed. I think we are born with the capacity to infer intention, and that in the early years some children may over-generalise – which is typical of a lot of features of early child development – a child will, typically, learn the word for “dog” and then call all four-footed animals “dogs”. My son, interestingly, once asked me “how do tornados see to suck?” His default was to assume they were intentional agents. He was very relieved when I explained that they were inanimate.

    But these intuitions are not universal.

    But even if your premise were correct, there is no reason why erroneous assumptions, or defaults, that we are “hard-wired” to entertain as children should not be replaced by evidence-based conclusions as we become mature enough to call our instinctive assumptions into question.

  213. 213

    Elizabeth Liddle says:

    Mung:

    There are indeed all sorts of silly ways to talk about entropy, most of which are wrong. If you ask someone what Entropy is they won’t be able to tell you.

    Most people can tell me very precisely. But not all will give the same definition. That doesn’t matter, as long as they make it clear what they are talking about. I was talking about the flatness of the probability distribution of microstates (thermodynamics) or symbols (Shannon entropy), which is maximally flat when −∑ p_i log p_i is maximal.
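    The formula in question can be checked directly. The following sketch (my illustration, not Liddle’s code) computes H = −∑ p_i log₂ p_i and confirms that the flat (uniform) distribution maximizes it:

```python
# Shannon entropy H = -sum(p_i * log2(p_i)), in bits.
# A maximally flat distribution gives maximal H; a "lumpy" one gives less.
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally flat over 4 outcomes
skewed  = [0.7, 0.1, 0.1, 0.1]       # lumpy, non-uniform

print(shannon_entropy(uniform))  # 2.0 bits = log2(4), the maximum for 4 outcomes
print(shannon_entropy(skewed))   # about 1.357 bits, strictly less than 2.0
```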

  214. 214
    Zachriel says:

    fifthmonarchyman: The claim is that these things cannot be produced algorithmically without the addition of active information. It does not matter whether you agree or not, only whether you can disprove the claim.

    The genome can incorporate information about its relationship to the environment through evolution.

    Are you claiming there is scientific evidence of design in weather?

  215. 215
    Mung says:

    SimonLeberge:

    Mung (224-225):

    Ewert has taken a big step away from DEM in his article at Evolution News and Views. You probably shouldn’t scour it for quotes.

    Too late! I already mocked his use of entropy in that article.

    That said, if entropy is lumpiness and birds are lumpy then birds are entropy and entropy is for the birds.

  216. 216
    Mung says:

    Elizabeth Liddle:

    Most people can tell me very precisely [what Entropy is]. But not all will give the same definition [of what Entropy is].

    That’s one way to define precision, I suppose.

    Does entropy have mass and velocity?

    I was talking about the flatness of the probability distribution of microstates (thermodynamics) or symbols (Shannon entropy), which is maximally flat when −∑ p_i log p_i is maximal.

    1.) There is no such thing as Shannon entropy.

    2.) Thermodynamics

    Let me quote from Wikipedia:

    Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints, that are common to all materials, not the peculiar properties of particular materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. Its laws are explained by statistical mechanics, in terms of the microscopic constituents.

  217. 217
    Mung says:

    Zachriel:

    The genome can incorporate information about its relationship to the environment through evolution.

    Where does “aboutness” come from?

  218. 218
    fifthmonarchyman says:

    EL said,

    I dispute your premise.

    I say,

    It’s not a premise; it’s a summary of the latest scientific findings on the subject.

    EL said,

    I think we are born with the capacity to infer intention, and that in the early years some children may over-generalise

    I say,

    It’s not just children; adults universally make the same inference. We all do. It’s how we are wired.

    check it out

    http://www.science20.com/write.....oke-139982

    and

    http://www.icea.ox.ac.uk/fileadmin/CAM/HADD.pdf

    and

    http://www.iep.utm.edu/theomind/

    you say,

    there is no reason why erroneous assumptions, or defaults, that we are “hard-wired” to entertain as children should not be replaced by evidence-based conclusions as we become mature enough to call our instinctive assumptions into question.

    I say,

    I’m not saying that we should not question our instinctive assumptions as more evidence becomes available.

    I’m saying that in order to be consistent we must have the same evidential standard for abandoning the design inference that we do for other innate assumptions.

    In other words, in order to be justified in ignoring the universal assumption of design, you need compelling evidence.

    Do you have such evidence?

    Peace

  219. 219
    fifthmonarchyman says:

    Zac says,

    Are you claiming there is scientific evidence of design in weather?

    I say,

    geez

    no I’m claiming it is impossible to talk to you.

    peace

  220. 220
    Winston Ewert says:

    Y’all have done a crazy amount of posting in my absence. There is no way I can keep up with this thread. But I’ll try to answer a few questions.

    So if I take my natural distribution to be different from yours, then something may be biased for you but not for me? Yet active information is a measure of bias. Whose bias?

    Indeed, if we choose different natural distributions, the active information of the same search could be very different for you and me. You might conclude the universe had near-zero active information, and I might conclude it had a lot of active information. However, either way we come back to the same conclusion: the original configuration of the universe either had a natural distribution which made my target probable or had a strong bias toward making my target probable.

    OK, fine. If you don’t count tray shaking as Active Information addition, then I am happy to stipulate that the Universe already contained the information required to allow tray shaking.

    In that case Winston’s three options are, as I said, two, and we are no forrarder.

    Design and/or an initial low-entropy (i.e. lumpy, non-uniform) universe.

    Why should we infer Design?

    As I stated in my ENV article, COI does not give us a solid reason to infer design. A Darwinist can (and should) accept the COI as true without rejecting Darwinism. It poses a problem only for a Darwinist who thinks that all that matters is selection, replication, and mutation, and the laws of physics don’t matter and could equally as well be anything.

    That was not my reasoning! It would be very strange reasoning, as entropy is always increasing! And in any case, it would be fallacious, even if the premises were true, which they aren’t.

    Indeed, it would be rather bizarre reasoning, you’ll have to forgive my typo.

    What’s the difference between entropy and active information?

    Active information is a consequence of probability. It doesn’t assume anything about the laws of physics. This is useful for being able to make limited claims about universes that we know nothing about. As long as they operate according to a stochastic process, we can claim that they follow the conservation of information. We cannot make the same claim about entropy.

    For example, consider a universe which has only one law: gravity. It can start with a very uniform distribution of particles. It thus starts with high entropy. Over time, the particles are attracted to each other into a giant ball, thus losing entropy and transitioning to low entropy. If we take our target to be that ball, we have a very large amount of active information. But the central point is that COI still applies, even though entropy goes in reverse in this imaginary universe.

    Another issue is that active information requires that the universe be biased towards some particular target. Low entropy merely requires that it be clumpy. In that way, active information is more specific. However, if I stick only with entropy, I can only look at the question in terms of the probabilities of states with entropy similar to that of birds.

    As another example, consider the question of why the water on earth is predominantly located in the oceans and isn’t uniformly distributed throughout the earth’s atmosphere. There is a high amount of active information in the target of having full oceans. There must be something in the laws of the universe that makes this happen. The answer is pretty obvious: gravity.

    If I look at the same question from the perspective of entropy, what do we get? Certainly, having all the water in the ocean can be described as a low entropy state. Entropy tells us that this has to be paid for by increasing the entropy elsewhere.

    So to summarize:

    1) Entropy is a physical law; conservation of information is a consequence of the laws of probability.
    2) Active information is concerned with particular targets; entropy is concerned with non-uniformity in general.
    3) Active information is concerned with the underlying laws that made an outcome probable; entropy is concerned with balancing out local decreases in entropy with increases elsewhere.
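    Ewert’s point about baseline-dependence can be illustrated numerically. The sketch below is my own; it uses the standard Dembski–Ewert–Marks definition of active information, I₊ = log₂(q/p), where p is the target’s probability under the chosen baseline distribution and q is its probability under the process being analyzed. The same process yields very different active information under two baselines:

```python
# Active information I+ = log2(q / p):
#   p = probability of hitting the target under a chosen baseline distribution
#   q = probability of hitting the target under the actual process.
# Illustrates that a different choice of baseline gives a different I+.
import math

def active_information(q, p):
    """Active information in bits: how far the process outperforms the baseline."""
    return math.log2(q / p)

q = 0.5              # the process hits the target half the time
p_uniform = 1 / 1024 # baseline A: uniform over 1024 outcomes
p_skewed  = 0.4      # baseline B: a "natural" distribution already favoring the target

print(active_information(q, p_uniform))  # 9.0 bits of apparent bias
print(active_information(q, p_skewed))   # about 0.32 bits: almost no added bias
```

    Under baseline A the process looks strongly biased toward the target; under baseline B it barely outperforms the baseline at all, which is exactly the “whose bias?” issue raised above.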

  221. 221
    Mung says:

    WE:

    Y’all have done a crazy amount of posting in my absence. There is no way I can keep up with this thread. But I’ll try to answer a few questions.

    Indeed. Feel free to address yourself to those comments you feel are actually relevant. Most are not. I call this the Entropy theory of Blog comments. [They mostly clump around irrelevance.]

    Many of us appreciate your comments here. At least some of us do our best not to misrepresent them.

  222. 222

    Elizabeth Liddle says:

    Winston:

    It poses a problem only for a Darwinist who thinks that all that matters is selection, replication, and mutation, and the laws of physics don’t matter and could equally as well be anything.

    I’ve never actually met a “Darwinist” who thought any such thing. It would be a bizarre position.

    But Winston, with respect, this seems a little disingenuous. For years, William Dembski appeared to be arguing that we could infer a Designer from the complexity (Specified Complexity) of biological organisms because Darwinian processes couldn’t produce them with adequate probability (in fact, he often said that Specified Complexity was closely related to Irreducible Complexity, which is presumably why that terrible rendering of a bacterial flagellum still heads this site’s page). Darwin has been in the sights of the ID movement for years, and Behe was front and centre at the Dover trial. Here is Dembski in “Specification: the Pattern that Signifies Intelligence”:

    Next, define p = P(T|H) as the probability for the chance formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.

    And while he did not say how to compute P(T|H) in a way that “takes into account Darwinian and other material mechanisms”, the fact that he chose this example suggests that he considered it small enough to pass the Specification test.

    Now you are saying that the Dembski project, at least, presents no problem for Darwinian evolution (and I agree) unless that Darwinian theory posits that “the laws of physics don’t matter”. Darwin’s theory was always predicated, not just on the laws of physics, but specifically, on the existence of ancestral forms of life that reproduced with heritable variance in reproductive success.

    If “the laws of physics could be anything”, then under most alternative scenarios, there could be no such ancestral forms of life. There could be no reproduction; there could be no mapping of similar genotypes on to similar phenotypes; there could be no heritable variance in reproductive success in the current environment. Your statement simply tells us that our universe is life-friendly.

    Which is simply not under dispute.

    Thank you for the rest of your post. Yes, I basically agree. And in fact, I suggest, many of Dembski’s critics have been making this point for years: that what you have written there is all his case amounts to, namely a case against a straw man. Which was why I used to make lame jokes about the “eleP(T|H)ant in the Room”.

    If the argument here is, essentially, “oh, Darwinian evolution works just fine, but it requires as a prerequisite a universe in which the laws of physics and chemistry are something like the ones we observe in ours”, then any Design Inference boils back down to some kind of fine-tuning argument. Or a variant on Aquinas: “why is there low-entropy structure rather than high-entropy mush?”

    Which is a good question, and one with potential metaphysical implications, but one more aligned with the position of, say, BioLogos than that of the Discovery Institute. Or indeed, of most modern theology.

  223. 223
    Box says:

    Lizzie:

    Your statement simply tells us that our universe is life-friendly.

    Which is simply not under dispute.

    If you have no problem with the idea that all the information of life is already present in the laws of physics, then fine.
    However, you should realize that a little probability arithmetic will show that such incredibly fine-tuned laws inevitably point to a designer.

    Lizzie: Now you are saying that the Dembski project, at least, presents no problem for Darwinian evolution (and I agree)(…)

    You do understand that it presents the problem: “where does the information come from”, right? IOW it does present a problem for “not-front-loaded unguided evolution”.

  224. 224
    Joe says:

    Elizabeth:

    And while he did not say how to compute P(T|H) in a way that “takes into account Darwinian and other material mechanisms”, the fact that he chose this example suggests that he considered it small enough to pass the Specification test.

    Elizabeth, no one knows how to calculate P(T|H) for most biological structures because evolutionary biologists cannot provide any numbers. They can’t provide any numbers because they have no idea; they can’t even model their claims.

    That is the real “eleP(T|H)ant in the Room”, Lizzie. Your position can’t even say if there is a feasibility let alone provide any actual evidence or a probability.

  225. 225
    fifthmonarchyman says:

    What I see here is a process like this.

    1) IDer posits that certain features of the universe are best explained as being the product of design.

    2) Critic claims that these features can instead be explained as the result of Darwinian processes.

    3) IDer presents proof that Darwinian processes are not up to the task.

    4) Critic declares victory in forcing IDer to abandon his original position.

    When faced with such logic the best approach is an eye roll. 😉

    peace

  226. 226
    Mark Frank says:

    #244 5MM

    The discussion above went more like this:

    1) IDer posits that certain features of the universe are best explained as being the product of design.

    2) Critic claims that these features can be explained as the result of Darwinian processes.

    3) IDer argues that if you assume the world is not conducive to Darwinian processes, then Darwinian processes cannot explain the world.

    4) Critic points out he never assumed that.

    5) IDist proclaims “therefore Design”

  227. 227
    fifthmonarchyman says:

    MF says,

    3) IDer argues that if you assume the world is not conducive to Darwinian processes, then Darwinian processes cannot explain the world.

    I say,

    That is quite a sentence you have there. Care to parse it for me?

    If it means what I think it means, then you have completely misunderstood the implications of the paper.

    peace

  228. 228
    Zachriel says:

    Mung: Where does “aboutness” come from?

    Evolution is essentially a learning process.

    fifthmonarchyman: geez

    Z: If you claim there is scientific evidence of design in weather or biology, then we disagree.

    f: The claim is that these things cannot be produced algorithmically without the addition of active information.

    Winston Ewert: As another example, consider the question of why the water on earth is predominantly located in the oceans and isn’t uniformly distributed throughout the earth’s atmosphere. There is a high amount of active information in the target of having full oceans. There must be something in the laws of the universe that makes this happen. The answer is pretty obvious: gravity.

    So a simple force adds active information?

  229. 229
    MatSpirit says:

    @ Elizabeth

    Your rock sorting example only gives quick results because it’s a two step process, like Darwinian evolution. Shaking the box provides the random input, separating the rocks by varying amounts. The little rocks then fall into the spaces between the large rocks thanks to law-like gravity. The sorting is done because little rocks can fall between large rocks, but not vice-versa. Without those lawful components, you could shake the box till the cows came home and there would be no sorting, just a series of rocks in random positions.

    That makes it even more amazing that Dembski overlooks the importance of the law-like component of evolution to the extent that his Explanatory Filter can’t even test it, yet he claims the EF is reliable.

    I’ve been hoping that Winston could have a talk with his mentor about this and other important problems with Dembski’s theories that nobody at the Evolutionary Informatics Lab seems to be aware of.

  230. 230
    Joe says:

    Mark Frank:

    2) Critic claims that these features can be explained as the result of Darwinian processes.

    The critics don’t have any evidence nor any models. They won’t even say what unguided evolution entails. The critics lose.

  231. 231
    Winston Ewert says:

    And while he did not say how to compute P(T|H) in a way that “takes into account Darwinian and other material mechanisms”, the fact that he chose this example suggests that he considered it small enough to pass the Specification test.

    In that case, Dembski is appealing to Behe. He believes that Behe has sufficiently demonstrated that the bacterial flagellum has a very low P(T|H). He’s not offering a new way to compute P(T|H). He’s borrowing Behe’s.

    The whole argument looks like this:

    1) Because of irreducible complexity, the bacterial flagellum is really improbable.
    2) If the bacterial flagellum is really improbable, it didn’t evolve.

    Dembski’s work on specified complexity only tries to prove point 2. He assumes that point 1 was already established by Behe. I think that many people would have regarded point 2 as too obvious to need a proof and much confusion has resulted.

    If “the laws of physics could be anything”, then under most alternative scenarios, there could be no such ancestral forms of life. There could be no reproduction; there could be no mapping of similar genotypes on to similar phenotypes; there could be no heritable variance in reproductive success in the current environment. Your statement simply tells us that our universe is life-friendly.

    But is the universe life-friendly? Does the configuration of the universe actually make the emergence and evolution of life probable? Are the properties that you mention sufficient? Nobody has demonstrated that those properties are sufficient to make the evolution of complex life probable. On the other hand, nobody has demonstrated that they are insufficient.

    Active information shouldn’t be taken as a convoluted fine tuning argument. It should be taken as raising questions. What kind of universe is required for complex life to emerge, and do we live in that kind of universe?

  232. 232
    Joe says:

    Winston, The bacterial flagellum doesn’t have a P(T|H)- not one multi-protein complex has one. That is the whole problem and Dr Johnson goes over that in “Nature’s Probability and Probability’s Nature”.

  233. 233
    Carpathian says:

    kairosfocus:

    Carpathian, yes, design is tough to do. Especially when designed items have to function in a complex and partly uncontrolled and dynamic environment.

    Exactly.
    I’m trying to put together a software simulation platform that will allow us to test ID against evolution, and my problem is how quickly I can recover from an ID mistake.

    If I release 10,000 copies of organism X into an environment, I need to be able to “recall” them much as a car manufacturer would a vehicle that needs an update.

    By the time I do this, my ecosystem might be in unrecoverable trouble, especially if X reproduces quickly.

    In this case, I might now have 100,000 copies to update in order to contain the damage.

    It seems that only very slow changes might be manageable but that would rule out fast massive change in body plans.

  234. 234
    Joe says:

    Carpathian, you are confused, as there isn’t any “ID against evolution”.

  235. 235

    Elizabeth Liddle says:

    Winston (thanks for engaging, by the way! Good to talk to you!):

    In that case, Dembski is appealing to Behe. He believes that Behe has sufficiently demonstrated that the bacterial flagellum has a very low P(T|H). He’s not offering a new way to compute P(T|H). He’s borrowing Behe’s.

    The whole argument looks like this:

    1) Because of irreducible complexity, the bacterial flagellum is really improbable.
    2) If the bacterial flagellum is really improbable, it didn’t evolve.

    Dembski’s work on specified complexity only tries to prove point 2. He assumes that point 1 was already established by Behe. I think that many people would have regarded point 2 as too obvious to need a proof and much confusion has resulted.

    And that was always the problem. Behe did NOT establish any value for P(T|H). He simply made the argument that because it doesn’t work if any part is removed, it can’t evolve by incremental advantageous steps, and therefore it can only have happened by coincidence. Firstly, he did not compute the probability of it happening in the absence of incremental advantageous steps; secondly, he ignored the possibility of those steps being advantageous by performing some other function; thirdly, he ignored the fact that incremental steps can take away as well as add (as in an arch); fourthly, he redefined (then re-redefined) IC as a continuous measure of the number of unselected steps needed to reach an IC structure, ignoring the fact that this is not computable (and Dembski ignored the fact that Pallen and Matzke had actually shown a pathway with selectable steps); and finally, and most seriously, Behe ignored drift, which allows even quite disadvantageous, and certainly neutral, variants to become quite prevalent, thus hugely increasing the probability that one of those variants will undergo a mutation that makes the disadvantageous variant useful.

    So much for Behe! But we can ignore most of that, because the more serious problem is Dembski’s, which is, essentially, that of the mouse who suggested “belling the cat”. If you are going to propose a method of detecting design based on the probability of that pattern under the null of no-design, you need to show how you compute that probability objectively. If all it requires is for someone to look at the thing and say “that’s very improbable under the null of no design”, then the method is no better than “if it looks designed, it must be”.

    If you are going to base a methodology for design detection on probability estimates, then you need an objective methodology for computing those probability estimates that doesn’t rest on your own skepticism regarding the probability of the non-design alternatives. And nobody has ever shown how to compute P(T|H), which is why it remains the eleP(T|H)ant in the room. Unless, of course, ID retreats to the design of the physics that facilitates the molecular biology!

    If “the laws of physics could be anything”, then under most alternative scenarios, there could be no such ancestral forms of life. There could be no reproduction; there could be no mapping of similar genotypes on to similar phenotypes; there could be no heritable variance in reproductive success in the current environment. Your statement simply tells us that our universe is life-friendly.

    But is the universe life-friendly? Does the configuration of the universe actually make the emergence and evolution of life probable? Are the properties that you mention sufficient? Nobody has demonstrated that those properties are sufficient to make the evolution of complex life probable. On the other hand, nobody has demonstrated that they are insufficient.

    Precisely. So we can infer neither Design nor No-Design from our observations of complex life.

    Which is fine. I don’t infer No-Design. I just don’t infer Design (which is entirely orthogonal to the question as to whether I believe in God or not). But biology qua science doesn’t claim to have demonstrated No-Design. It wouldn’t be a scientific claim.

    On the other hand ID, up till now, HAS claimed to infer Design: “Specification: the Pattern that Signifies Intelligence”.

    So there’s an asymmetry there, which is unfortunate, I think.

    Active information shouldn’t be taken as a convoluted fine tuning argument. It should be taken as raising questions. What kind of universe is required for complex life to emerge, and do we live in that kind of universe?

    And it’s a very interesting set of questions. But they are a long way from inferring an Intelligent Designer from a bacterial flagellum!

  236. 236
    Joe says:

    Elizabeth, it is up to evolutionists to provide P(T|H). That is because they don’t have any models or evidence to support their claims, and that means probabilities are all that is left to “test” the concept of unguided evolution. Yet, as Dr Johnson has pointed out, you and yours can’t even demonstrate a feasibility, let alone a probability. By being included in a probability discussion you and yours are getting more than you deserve.

    We infer Intelligent design due to our knowledge of cause and effect relationships, ie science. We apply scientific methodology to test our inferences and so far they have come out OK.

    But biology qua science doesn’t claim to have demonstrated No-Design. It wouldn’t be a scientific claim.

    And yet that is what is being taught in biology classrooms. That is what Darwin espoused- that was his whole point of natural selection. Mayr said teleology is not allowed in biology. Go talk to Jerry Coyne, he will tell you what he teaches in his classes.

    Hopefully you will be joining us in protest against such a thing.

  237. 237
    SimonLeberge says:

    Winston Ewert and Elizabeth Liddle:

    Dembski actually says nothing about Behe in “Specification: The Pattern that Signifies Intelligence” (2005). The bacterial flagellum serves primarily as an example in that paper. Dembski does not make a strong claim to have rejected strictly naturalistic cause in favor of some degree of supernatural (“non-natural”) design. He relegates to endnote 33 his own “Irreducible Complexity Revisited” (2004), in which he ostensibly defends Behe’s concept, but in fact revises (perhaps strengthens) it.

    Winston tries to tell a story of how the pieces of ID fit together neatly, and Elizabeth enables by flitting freely between 2005 and 2015. To my knowledge, Dembski has not framed design detection as rejection of strictly naturalistic (or material) cause since 2008. The fact is that he has abandoned even the “information accounting” conclusion of his last paper with Winston (DEM), and has undertaken development of an equivocal “maybe it’s due to God, maybe it’s intrinsic to nature” teleology in Being as Communion (2014).

    Winston, you surreptitiously package Being as Communion as a response to what Felsenstein and English wrote about DEM. No one who’s read and comprehended the book would fail to see what you’re doing at ENV and in this thread.

    Elizabeth, you’re dripping with brilliance, and I hate to see you waste your energies on arguments with people who don’t have a clue that the leading ID proponents (including Winston) have left them behind.

    And why, Winston, do they not have a clue? Could it be that you and your colleagues never say outright, “Well, we decided to modify our approach. Here’s why. Here’s how the new is connected to the old.” (Why have you never made explicit the mathematical relation of active information and specified complexity?) Lacking an explanation, I suppose that Dembski’s foremost concern is to develop a sort of teleology that can survive a test in federal court. After all, it was just five years ago that he and Marks boldly proclaimed, in “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,”

    Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.

    The measure was, of course, active information. According to one of the sections of the paper, “Intelligence Creates Information” — meaning that only intelligence creates information. The upshot is that in 2010, what you are referring to now as the bias of the initial universe, and are declining to explain, was what Dembski and Marks said could come only from intelligence.

    In short, Winston is introducing, unannounced, a revised perspective on life, the universe, and all that. Elizabeth should take some time out for reading, or she’ll contribute to the sound and the fury.

  238.
    Mung says:

    Winston Ewert:

    It poses a problem only for a Darwinist who thinks that all that matters is selection, replication, and mutation, and the laws of physics don’t matter and could equally as well be anything.

    Elizabeth Liddle:

    I’ve never actually met a “Darwinist” who thought any such thing. It would be a bizarre position.

    This is presumably the same Elizabeth Liddle who wrote the following:

    The fact that we design the physics, chemistry, the environment, and the initial population is irrelevant

  239.
    Winston Ewert says:

    If you are going to propose a method of detecting design based on the probability of that pattern under the null of no-design, you need to show how you compute that probability objectively.

    Here’s the thing. Dembski’s specified complexity was intended to be used together with other arguments. I.e., you combine irreducible complexity and specified complexity, or Axe’s work on proteins and irreducible complexity, etc. It is not intended as a complete method of design detection by itself. Yet you continually insist that it is supposed to be, and argue against it on that basis.

    It is like complaining that a gasoline tank doesn’t work very well as a car. Well, duh! Of course it doesn’t.

    We ought to be arguing about whether or not biological features are improbable. But every time that I try to point in that direction you accuse me of changing the subject.

    The upshot is that in 2010, what you are referring to now as the bias of the initial universe, and are declining to explain, was what Dembski and Marks said could come only from intelligence.

    Allow me to quote from that paper:

    Likewise, the LCI Regress, as noted in the last bullet point, suggests that intelligence is ultimately the source of the information that accounts for successful

    The language used is that of suggestion, not of proof. Indeed, the fact that all active information had to be present at the origin of the universe is suggestive of an ID view of things. Nevertheless, it is only suggestive. Your claims about my allegedly sneaky change in what I’m saying are simply incorrect. I’ve merely clarified an area around which an unfortunate level of confusion has arisen.

    Winston, the bacterial flagellum doesn’t have a P(T|H); not one multi-protein complex has one. That is the whole problem, and Dr Johnson goes over that in “Nature’s Probability and Probability’s Nature”.

    It is true. The Darwinists have taken the line that we can’t prove that the flagellum has a low probability. They have done almost nothing to demonstrate that it has a high probability. One could argue that the burden of proof is on them to demonstrate they have a working theory. I don’t like to go there because I’m very suspicious of people who try to shift burdens of proof.

    With that, I’m going to have to bow out of this thread. I’ve given it as much of my time and effort as I can. Thanks to everyone for participating.

  240.
    Mark Frank says:

    Thank you, Winston, for your good humoured and intelligent responses in this thread. The two remarkable things I have learned about active information are:

    1) Unlike endogenous and exogenous information, it is not a measure of probability. I found this surprising because I always thought that the ID community defined information in terms of the improbability of the result.

    2) It is a measure of bias, but what counts as bias is a matter of opinion depending on your view of what is a natural distribution. So although in any given context calculating the active information is objective (assuming the exogenous and endogenous information are objective), its significance appears to be subjective.
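    [For readers following the technical side of this exchange: the Dembski–Marks definitions can be sketched concretely. Endogenous information is the difficulty of the search under the baseline distribution, exogenous information is the difficulty under the actual search, and active information is their difference, i.e., a log-ratio of the two success probabilities. The numbers below are purely illustrative, not drawn from any biological case, and the point of the sketch is exactly Frank’s point 2: the value of the active information depends on which baseline p you treat as “natural”.]

    ```python
    import math

    def endogenous_information(p_baseline):
        """Difficulty of the search under the baseline (e.g. uniform)
        distribution, in bits: -log2(p)."""
        return -math.log2(p_baseline)

    def exogenous_information(q_search):
        """Difficulty under the actual search's probability of success,
        in bits: -log2(q)."""
        return -math.log2(q_search)

    def active_information(p_baseline, q_search):
        """Bits by which the search outperforms the baseline:
        log2(q/p) = endogenous - exogenous."""
        return math.log2(q_search / p_baseline)

    # Illustrative numbers only (hypothetical, not a real biological case):
    p = 1 / 2**20   # baseline chance of hitting the target
    q = 1 / 2**5    # chance of success for the biased search
    print(endogenous_information(p))   # 20.0 bits
    print(exogenous_information(q))    # 5.0 bits
    print(active_information(p, q))    # 15.0 bits = 20.0 - 5.0
    ```

    Halving the assumed baseline p adds one bit of active information while the search itself is unchanged, which is the sense in which the measure is objective only relative to a chosen baseline.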

  241.
    DNA_Jock says:

    Winston,

    All of Aurelio Smith’s posts have disappeared from this thread and others; could you see if they could be restored?

    Only some of them have been cached elsewhere.

  242.
    Mung says:

    What, exactly, would be the point of restoring Aurelio Smith’s posts? He never engaged Winston on anything of substance.

  243.
    DNA_Jock says:

    What, exactly, would be the point of restoring Aurelio Smith’s posts? He never engaged Winston on anything of substance.

    I disagree, but the curious-minded will have to visit reality-based sites to review the dialog and make up their own minds, if they dare.
    Of course, I do see the point, from the ID proponent’s perspective, of not restoring Aurelio’s posts to this thread, titled as it is in his honor.
