
Signal to Noise: A Critical Analysis of Active Information


The following is a guest post by Aurelio Smith. I have invited him to present a critique of Active Information in a more prominent place at UD so we can have a good discussion of Active Information’s strengths and weaknesses. The rest of this post is his.


My thanks to johnnyb for offering to host a post from me on the subject of ‘active information’. I’ve been following the fortunes of the ID community for some time now, and I was a little disappointed that the recent publications of the ‘triumvirate’ of William Dembski, Robert Marks and their newly graduated colleague Dr Winston Ewert have received less attention here than their efforts deserve. The thrust of their assault on Darwinian evolution has developed from earlier concepts such as “complex specified information” and “conservation of information”, and they now introduce “algorithmic specified complexity” and “active information”.

Some history.

William Dembski gives an account of the birth of his ideas here:

…in the summer of 1992, I had spent several weeks with Stephen Meyer and Paul Nelson in Cambridge, England, to explore how to revive design as a scientific concept, using it to elucidate biological origins as well as to refute the dominant materialistic understanding of evolution (i.e., neo-Darwinism). Such a project, if it were to be successful, clearly could not merely give a facelift to existing design arguments for the existence of God. Indeed, any designer that would be the conclusion of such statistical reasoning would have to be far more generic than any God of ethical monotheism. At the same time, the actual logic for dealing with small probabilities seemed less to directly implicate a designing intelligence than to sweep the field clear of chance alternatives. The underlying logic therefore was not a direct argument for design but an indirect circumstantial argument that implicated design by eliminating what it was not.*

[*my emphasis]

Dembski published The Design Inference in 1998, where the ‘explanatory filter’ was proposed as a tool to separate ‘design’ from ‘law’ and ‘chance’. The weakness in this method is that ‘design’ is assumed as the default after eliminating all other possible causes. Wesley Elsberry’s review points out the failure to include unknown causation as a possibility. Dembski acknowledges the problem in a comment in a thread at Uncommon Descent, “Some Thanks for Professor Olofsson”:

I wish I had time to respond adequately to this thread, but I’ve got a book to deliver to my publisher January 1 — so I don’t. Briefly: (1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI [Complex Specified Information] is clearer as a criterion for design detection.* (2) The challenge for determining whether a biological structure exhibits CSI is to find one that’s simple enough on which the probability calculation can be convincingly performed but complex enough so that it does indeed exhibit CSI. The example in NFL ch. 5 doesn’t fit the bill. The example from Doug Axe in ch. 7 of THE DESIGN OF LIFE (www.thedesignoflife.net) is much stronger. (3) As for the applicability of CSI to biology, see the chapter on “assertibility” in my book THE DESIGN REVOLUTION. (4) For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence” at http://www.designinference.com. (5) There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).

[*my emphasis]

Active information.

Dr Dembski has posted some background to his association with Professor Robert Marks and the Evolutionary Informatics Lab, which has resulted in the publication of several papers with active information as an important theme. A notable collaborator is Winston Ewert, PhD, whose master’s thesis was entitled Studies of Active Information in Search; in chapter four he criticizes Lenski et al. (2003), saying:

[quoting Lenski et al., 2003]“Some readers might suggest that we stacked the deck by studying the evolution of a complex feature that could be built on simpler functions that were also useful.”

This, indeed, is what the writers of Avida software do when using stair step active information.

What is active information?

In A General Theory of Information Cost Incurred by Successful Search, Dembski, Ewert and Marks (henceforth DEM) give their definition of “active information” as follows:

In comparing null and alternative searches, it is convenient to convert probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information I_Ω as –log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space Ω to locate the target T. We then define the exogenous information I_S as –log(q), which measures the difficulty of the alternative search S in locating the target T. And finally, we define the active information I_+ as the difference between the endogenous and exogenous information: I_+ = I_Ω – I_S = log(q/p). Active information therefore measures the information that must be added (hence the plus sign in I_+) on top of a null search to raise an alternative search’s probability of success by a factor of q/p.
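
To make this bookkeeping concrete, here is a minimal sketch (my own illustration, not code from the DEM paper) that computes the endogenous, exogenous and active information, in bits, from a pair of success probabilities p and q:

```python
import math

def active_information(p, q):
    """Return (I_Omega, I_S, I_plus) in bits for a null search with
    success probability p and an alternative search with probability q."""
    i_omega = -math.log2(p)   # endogenous information: difficulty of the null search
    i_s = -math.log2(q)       # exogenous information: difficulty of the alternative search
    i_plus = i_omega - i_s    # active information: log2(q/p)
    return i_omega, i_s, i_plus

# Example: a target occupying 1 in 2^20 of the space, found by an
# alternative search that succeeds half the time.
print(active_information(p=2**-20, q=0.5))   # (20.0, 1.0, 19.0)
```

On this accounting, the alternative search in the example is credited with 19 bits of active information relative to the null search.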

They conclude with an analogy from the financial world, saying:

Conservation of information shows that active information, like money, obeys strict accounting principles. Just as banks need money to power their financial instruments, so searches need active information to power their success in locating targets. Moreover, just as banks must balance their books, so searches, in successfully locating targets, must balance their books — they cannot output more information than was inputted.

In an article at The Panda’s Thumb website, Professor Joe Felsenstein, in collaboration with Tom English, presents some criticism of the quoted DEM paper. Felsenstein helpfully posts an “abstract” in the comments, saying:

Dembski, Ewert and Marks have presented a general theory of “search” that has a theorem that, averaged over all possible searches, one does not do better than uninformed guessing (choosing a genotype at random, say). The implication is that one needs a Designer who chooses a search in order to have an evolutionary process that succeeds in finding genotypes of improved fitness. But there are two things wrong with that argument:

1. Their space of “searches” includes all sorts of crazy searches that do not prefer to go to genotypes of higher fitness – most of them may prefer genotypes of lower fitness or just ignore fitness when searching. Once you require that there be genotypes that have different fitnesses, so that fitness affects their reproduction, you have narrowed down their “searches” to ones that have a much higher probability of finding genotypes that have higher fitness.

2. In addition, the laws of physics will mandate that small changes in genotype will usually not cause huge changes in fitness. This is true because the weakness of action at a distance means that many genes will not interact strongly with each other. So the fitness surface is smoother than a random assignment of fitnesses to genotypes. That makes it much more possible to find genotypes that have higher fitness.

Taking these two considerations into account – that an evolutionary search has genotypes whose fitnesses affect their reproduction, and that the laws of physics militate against strong interactions being typical – we see that Dembski, Ewert, and Marks’s argument does not show that Design is needed to have an evolutionary system that can improve fitness.
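
Felsenstein’s first point, that the DEM average is taken over “all sorts of crazy searches”, can be illustrated with a toy simulation. The sketch below is my own construction, not Felsenstein’s or DEM’s: it treats a “search” simply as a probability distribution over the space, draws such distributions uniformly from the simplex (crazy searches included), and checks that the average success probability matches uninformed guessing.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                   # size of the search space
target = np.arange(5)     # a 5-element target, so blind guessing succeeds with p = 0.05
trials = 50_000

# Each "search" is an arbitrary probability distribution over the space,
# drawn uniformly from the simplex (a flat Dirichlet draw).
qs = np.array([rng.dirichlet(np.ones(N))[target].sum() for _ in range(trials)])

print("blind-search probability p: ", len(target) / N)   # 0.05
print("mean q over random searches:", qs.mean())          # ~0.05
```

Felsenstein’s objection is that biologically relevant searches are not drawn from this indifferent ensemble in the first place.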

I note that there is an acknowledgement in the DEM paper as follows:

The authors thank Peter Olofsson and Dietmar Eben for helpful feedback on previous work of the Evolutionary Informatics Lab, feedback that has found its way into this paper.

This is the same Professor Olofsson referred to in the “Some Thanks for Professor Olofsson” thread mentioned above. Dietmar Eben has blogged extensively on DEM’s ideas.

I’m not qualified to criticize the mathematics, but I see no need to doubt that it is sound. However, what I do query is whether the model is relevant to biology. The search for a solution to a problem is not a model of biological evolution and the concept of “active information” makes no sense in a biological context. Individual organisms or populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely availing themselves of the opportunities that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”. There is the source of the active information.

I was recently made aware that this classic paper by Sewall Wright, The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evolution, is available online. Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?

Perhaps, in conclusion, I should emphasize two things. Firstly, I am utterly opposed to censorship and suppression. I strongly support the free exchange of ideas and information. I strongly support any genuine efforts to develop “Intelligent Design” into a formal scientific endeavor. Jon Bartlett sees advantages in the field of computer science and I say good luck to him. Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt. Had Sewall Wright been developing his ideas in the computer age, his laboriously hand-crafted diagrams would, I’m sure, have evolved (deliberate pun) into exquisite computer models.

References

History: William Dembski, The Design Inference, 1998 (introduces the explanatory filter; Elsberry criticizes the book for using a definition of “design” as what is left over after chance and regularity have been eliminated)

Wikipedia: universal probability bound, complex specified information, conservation of information, meaningful information.

Theft over Toil, by John S. Wilkins and Wesley R. Elsberry, 2001

Computational Capacity of the Universe, by Seth Lloyd, 2001

Information Theory, Evolutionary Computation, and Dembski’s “Complex Specified Information”, by Wesley Elsberry and Jeffrey Shallit, 2003

Specification: The Pattern That Signifies Intelligence, by William A. Dembski, August 15, 2005

Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch, by Tom English, 1996

Conservation of Information Made Simple, by William Dembski, 2012

…evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.

Take Brown University’s Kenneth Miller, a cell biologist whose knowledge of the relevant mathematics I don’t know. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.

Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.

So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don’t need the enzyme that’s been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don’t need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.

So where’s the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection — the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die*. [(* My emphasis) Not correct – confirmed by Richard Lenski – AF] But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.

To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we’ll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life’s molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost — an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.

Meaningful Information

Meaningful Information, by Paul Vitányi, 2004

The question arises whether it is possible to separate meaningful information from accidental information, and if so, how.

Evolutionary Informatics Publications

Conservation of Information in Relative Search Performance, by Dembski, Ewert and Marks, 2013

Algorithmic Specified Complexity in the Game of Life, by Ewert, Dembski and Marks, 2015

Digital Irreducible Complexity: A Survey of Irreducible Complexity in Computer Simulations, by Ewert, 2014

On the Improbability of Algorithmic Specified Complexity, by Dembski, Ewert and Marks, 2013


A General Theory of Information Cost Incurred by Successful Search, by Dembski, Ewert and Marks, 2013

Actually, in my talk, I work off of three papers, the last of which Felsenstein fails to cite and which is the most general, avoiding the assumption of uniform probability to which Felsenstein objects.

EN&V

Dietmar Eben’s blog

Dieb’s review of “cost of successful search”

Conservation of Information in Search: Measuring the Cost of Success, by Dembski and Marks, 2009

The Search for a Search: Measuring the Information Cost of Higher Level Search, by Dembski and Marks, 2009

Has Natural Selection Been Refuted? The Arguments of William Dembski, by Joe Felsenstein, 2007

In conclusion
Dembski argues that there are theorems that prevent natural selection from explaining the adaptations that we see. His arguments do not work. There can be no theorem saying that adaptive information is conserved and cannot be increased by natural selection. Gene frequency changes caused by natural selection can be shown to generate specified information. The No Free Lunch theorem is mathematically correct, but it is inapplicable to real biology. Specified information, including complex specified information, can be generated by natural selection without needing to be “smuggled in”. When we see adaptation, we are not looking at positive evidence of billions and trillions of interventions by a designer. Dembski has not refuted natural selection as an explanation for adaptation.

On Dembski’s Law of Conservation of Information, by Erik Tellgren, 2002

Comments
In other words, something that is only resorted to on eliminating TWO first resorts, in a context of a justifiably finite and restricted list of possibilities is the OPPOSITE of a default.
Really? Let's look at one of the other definitions KF quoted, and which he completely ignored:
4. by default in the absence of opposition or a better alternative: he became prime minister by default.
As KF has agreed, Dembski's filter resorts to 'design' only in the absence of the alternatives of chance and necessity. It is thus the default by one of the definitions he posted. It would be unfair to ascribe the above farce to deliberate deception, unless KF is dumb enough to forget to delete the other definitions. The retention of that context also means it isn't quote-mining. I'll leave the reader to determine the default option. In the meantime, I look forward to seeing KF explaining that the designer isn't God since God doesn't work in the fashion industry, and that genes aren't information since they don't line up equidistantly in straight lines. RoyRoy
April 24, 2015 at 04:19 AM PDT
SL, I passed back for a moment. In effect functionally specific complex organisation and/or associated information (FSCO/I) denotes complex functionally organised configs of parts per a wiring diagram pattern that produces a result based on interaction. Text strings bearing coded info and fishing reels etc are typical examples. There are many cases in cell based life from the cell on up. Following Orgel and Kolmogorov et al (as well as AutoCAD etc), we may see that such can be reduced informationally to a structured string of Y/N q's, that formally describe the config that works. Think, circuit diagram, exploded view, coded string etc. The key is to recognise that such requisites of organised function tightly constrain the configs that maintain function to narrow zones T in much bigger spaces of clumped or scattered configs for the parts, W. Thus, needles in haystacks or islands of function in seas of non function. The 500 - 1,000 bit threshold sets a scope that is not amenable to blind search as already outlined. Under these conditions, blind searches on avg will reliably not outperform a flat random blind search as a yardstick. And, search for a golden search that magically plunks us down next to or within an island of function, is exponentially harder than the straight needle in haystack blind search. The odds of finding zones T in W on blind search define p, and the injected active info bridges the gap. Info, of course, is measured as outlined further above, on negative log probability, hence we can go back and forth between the two in our analysis, noting the reason why Marks & Dembski used simple random search as warranted yardstick. Invariably, on analysis, such active information is intelligently inserted, often unrecognised. I find, the best simple approach is to think in terms of config spaces as haystacks and islands of function as needles in them. Then we can see the scope of search to scope of space issue and resulting utter implausibility of blindly finding a needle. If a needle is actually found, that is strong reason to believe the search was in fact intelligently guided. Then, go back and re-read the Marks-Dembski papers with that background in mind, which may be a bit hard to spot directly from the papers. KFkairosfocus
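
As a toy numerical illustration of the description-length framing in the comment above (my own example, using the commenter's 7-bits-per-ASCII-character convention and the 500/1,000-bit thresholds he cites):

```python
# Toy illustration only: treat a "wiring diagram" description as a string of
# yes/no answers, here 7 bits per ASCII character, and compare the total
# against the 500- and 1,000-bit thresholds cited in the comment above.
description = "METHINKS IT IS LIKE A WEASEL"   # 28 characters, used purely as a stand-in
bits = 7 * len(description)

print(f"{len(description)} characters -> {bits} bits")   # 28 characters -> 196 bits
print("exceeds 500-bit threshold:  ", bits > 500)         # False
print("exceeds 1,000-bit threshold:", bits > 1000)        # False
```

On this accounting a short phrase falls well below the threshold; the commenter's claim concerns descriptions of 500 to 1,000 bits and beyond.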
April 24, 2015 at 03:21 AM PDT
Kairosfocus, I'm having trouble understanding your remarks, because I don't know how FSCO/I is related to active information. Would you explain that to me, please? As Mark Frank has indicated, the active information is log(q/p) = log q - log p, where p is the probability that a "totally random" guess generates the target event, and q is the probability that the "search" (selection process) generates the target event. (However, Mark contends that "totally random" is not well defined.) The more formally you can express the relation of this measure to FSCO/I, the better for me.SimonLeberge
April 24, 2015 at 02:50 AM PDT
MF, One last point for now: with all due respect, nope. The possibility of many different clumped and/or scattered arrangements is an obvious reality. To find FSCO/I rich configs in the space of possibilities starting in Darwin's warm pond or a volcano vent or a comet core etc, beyond mere formation of monomers (itself a challenge) is thus relevant and at the root of the TOL. With design ruled out for argument that leaves blind chance and/or mechanical necessity as hoped for causal explanations, in the face of a beyond astronomical scope blind search. You may hope to find a golden search of so far unmet promissory note character that upends the sort of consideration that leads to the conclusion that straight search with a typical random search as yardstick points to overwhelming improbability, but then you have the challenge that was outlined above. Namely, searches are subsets so blind searches for golden searches are blindly searching not in config space of W possibilities, starting for relevance at W = 10^150 - 301, but in spaces of scale 2^W, exponentially more difficult. This has been pointed out in your presence several times, but you have consistently failed to address it. Such, being consistent with your repeatedly announced policy to ignore remarks I make and/or to find excuses to project incoherence and/or incomprehensibility -- I add this for the new onlooker who will not know the years of exchanges that lie behind what is on the table today. KFkairosfocus
April 24, 2015 at 02:29 AM PDT
Onlookers, please mark the focus on substance here. KFkairosfocus
April 24, 2015 at 02:16 AM PDT
I think there are deeper conceptual problems with active information and indeed the whole LCI. We somehow get from defining the difference between p = prob(success|blind search) and q = prob(success|alternative search) to prob(alternative search) = p/q. But if you look carefully at the literature this inversion of probabilities is never really justified. It requires:
1) Treating possible searches as a random variable
2) Selecting an arbitrary way of enumerating possible searches (a "search" is defined as an ordered subset of all the variables to be inspected)
3) Arbitrarily using Bernoulli's principle of indifference to decide all searches are equally probable within this space of all possible searches
Each step is highly dubious.Mark Frank
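
For what it may be worth, here is one reconstruction (mine, not a quotation from the DEM papers) of where the p/q figure can come from once Mark Frank's steps 1) to 3) are granted. If searches S are drawn at random in such a way that the expected success probability equals that of blind search, E[q(S)] = p, then Markov's inequality bounds the chance of drawing a search at least as good as q: Pr(q(S) >= q) <= E[q(S)]/q = p/q. The bound therefore inherits whatever doubt attaches to the indifference assumption that delivers E[q(S)] = p.
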
April 24, 2015 at 02:15 AM PDT
AS, I really need to be getting on with the day, but in the next step you cite Felsenstein making yet another strawman argument. The issue is not hill climbing within a nicely behaved island of function with a smooth fitness function (itself a major challenge per Axe et al) but to FIND such islands in vast config spaces under the challenge first of OOL then OOBP, with 100 - 1,000 kbits and 10 - 100+ mn bits of just genome info on the table. The FSCO/I needle in haystack challenge to find functional wiring diagram configs in possibility spaces kicks in at 500 - 1,000 bits. Where for every bit beyond the threshold the space DOUBLES. Again, this deeply undermines the credibility of your argument as it is a tilting at a strawman again. In a context where, for years, the real argument has been put and explained and the strawman corrected. I finally note, OOL is the root of the tree of life and no roots, no trunk or branches (where OOBP is about main branches that lead onwards to the full range of life forms). And surely, the challenge to find such configs in a vast field of physically possible alternatives MUST be relevant to both OOL and OOBP, thus to all of biological study of origins; the irrelevant mathematics objection fails. KFkairosfocus
April 24, 2015 at 02:13 AM PDT
Jim Smith, I think that harping on the number of hits for "evolutionary search" is not such a great idea. It, and also "evolutionary optimization," originated with technologists. I suspect that the back-application to biology reflected formalism-envy among biologists, which I know was a problem for some. In any case, the terminology does not indicate that evolutionary biologists are unaware teleologists, in need of philosophers to make them mindful. The "search" in terms of which active information is defined does not search. Quite simply, it does not have an input to indicate what to search for. It depends in no way on the target. That's the essence of why Dembski, Ewert, and Marks can represent a "search" as a probability distribution. What constitutes a search is a potentially informed entity selecting and initiating a "search" (uninformed selection process) in order to cause the target event to occur. What Dembski and Marks have called the "search-forming process" is the entity that possibly searches. The process it forms, as modeled by Dembski, Ewert, and Marks, does not, in and of itself, search. It's been a while since I read Investigations, but I don't recall that Kauffman, as an emergentist, suggested the existence of something that exploited information of a target event to form a process to generate it. What I'm sure of is that he has objected strenuously, in recent years, to the notion that we can assign probabilities to evolutionary trajectories. That amounts to complete rejection of the representation of an evolutionary process as a probability distribution.SimonLeberge
April 24, 2015 at 02:12 AM PDT
AS, in OP:
Dembski published The Design Inference in 1998, where the ‘explanatory filter’ was proposed as a tool to separate ‘design’ from ‘law’ and ‘chance’. The weakness in this method is that ‘design’ is assumed as the default after eliminating all other possible causes.
I am sorry, to misrepresent design as a "default" in the OP is a loaded strawman caricature of the design inference process. Let us go to a dictionary, here CED:
default (dɪˈfɔːlt) n 1. (Law) a failure to act, esp a failure to meet a financial obligation or to appear in a court of law at a time specified 2. (Banking & Finance) a failure to act, esp a failure to meet a financial obligation or to appear in a court of law at a time specified 3. absence or lack 4. by default in the absence of opposition or a better alternative: he became prime minister by default. 5. in default of through or in the lack or absence of 6. (Law) judgment by default law a judgment in the plaintiff's favour when the defendant fails to plead or to appear 7. lack, want, or need 8. (Computer Science) computing a. the preset selection of an option offered by a system, which will always be followed except when explicitly altered b. (as modifier): default setting. vb 9. (Banking & Finance) (intr; often foll by on or in) to fail to make payment when due 10. (intr) to fail to fulfil or perform an obligation, engagement, etc: to default in a sporting contest. 11. (Law) law to lose (a case) by failure to appear in court 12. (tr) to declare that (someone) is in default [C13: from Old French defaute, from defaillir to fail, from Vulgar Latin dēfallīre (unattested) to be lacking]
In other words, something that is only resorted to on eliminating TWO first resorts, in a context of a justifiably finite and restricted list of possibilities is the OPPOSITE of a default. Where, from Plato in the Laws Bk X 2350 years ago on, it has been well known that phenomena routinely come as produced by one or more of chance, mechanical necessity or intelligently directed configuration. That is a strongly established fact. Mechanical necessity produces low contingency regularity under sufficiently close initial circumstances. That is its signature and the launchpad for seeking explanation per mechanical laws such as F = m*a or F = G m1m2/r^2 etc. This is foundational to the rise of modern science. Next, and particularly in light of the study of matter at molecular scale, in the past two centuries, it was recognised that statistical behaviour reflective of chance was also to be reckoned with, so chance was admitted and is readily recognised from stochastic patterns. In comms contexts we know the familiar flickering grass on the CRO screen, and things like white noisem flicker/pink noise, shot noise, Johnson noise in resistors etc are well studied phenomena. More broadly, chance can be seen as resulting from clash of uncorrelated causal chains leading to unpredictable outcomes beyond some distribution or other (flip 1,000 coins, and see the binomial distribution emerge . . or, ponder a 1,000 atom paramagnetic substance in a weak B field that would give parallel/antiparallel alignments). Or else, many quantum phenomena seem to be directly stochastic. The third factor is the kind of intelligently directed configuration you used to create the OP. The three are distinct, no one has reasonably been able to say collapse design into chance and necessity per an observationally justified adequate causal process, i.e. something that meets the vera causa test of Newton acknowledged by Lyell and Darwin. The design inference explanatory filter (especially in per aspect form) then exerts two successive defaults. First, mechanical necessity. This breaks where high contingency on initial conditions is observed for a given aspect of an object, phenomenon, process etc. For example, a die is a heavy object and routinely falls at 9.8 N/kg on being dropped under typical circumstances. On hitting the table etc, a butterfly effect process ensues due to eight corners and twelve edges. This shows a low contingency aspect and a high contingency one. The latter leads to tumbling and settling to a value in a flat random distribution, for a fair die. So the second default is blind chance leading to some reasonable distribution of possible outcomes. But when outcomes are highly contingent and fall well outside the reasonable expectations of chance and/or necessity due to FSCO/I, then we have an empirically and analytically well warranted adequate cause. Intelligently directed configuration, design. So, the design inference process is reasonable and not at all like the strawman "default" you presented in the OP. That you make such an error at almost the outset of your considerations drastically undermines your further case. I suggest you correct it. KF PS: You then proceed to the clip from a remark that Dembski subsequently explained and emphasised its compatibility with the explanatory filter approach, as is discussed in the weak argument correctives under the resources tab, top of this and every UD page. I cite no 30:
30] William Dembski “dispensed with” the Explanatory Filter (EF) and thus Intelligent Design cannot work This quote by Dembski is probably what you are referring to: I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection. In a nutshell: Bill made a quick off-the-cuff remark using an unfortunately ambiguous phrase that was immediately latched-on to and grossly distorted by Darwinists, who claimed that the “EF does not work” and that “it is a zombie still being pushed by ID proponents despite Bill disavowing it years ago.” But in fact, as the context makes clear – i.e. we are dealing with a real case of “quote-mining” [cf. here vs. here] — the CSI concept is in part based on the properly understood logic of the EF. Just, having gone though the logic, it is easier and “clearer” to then use “straight CSI” as an empirically well-supported, reliable sign of design. In greater detail: The above is the point of Dembski’s clarifying remarks that: “. . . what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable.”[For illustrative instance, contextually responsive ASCII text in English of at least 143 characters is a “reasonably good example” of CSI. How many cases of such text can you cite that were wholly produced by chance and/or necessity without design (which includes the design of Genetic Algorithms and their search targets and/or oracles that broadcast “warmer/cooler”)?] Dembski responded to such latching-on as follows, first acknowledging that he had spoken “off-hand” and then clarifying his position in light of the unfortunate ambiguity of the phrasal verb dispensed with: In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection. [….] I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation. Underlying issue: Now, too, the “rational reconstruction” basis for the EF as it is presented (especially in flowcharts circa 1998) implies that there are facets in the EF that are contextual, intuitive and/or implicit. For instance, even so simple a case as a tumbling die that then settles has necessity (gravity), chance (rolling and tumbling) and design (tossing a die to play a game, and/or the die may be loaded) as possible inputs. 
So, in applying the EF, we must first isolate relevant aspects of the situation, object or system under study, and apply the EF to each key aspect in turn. Then, we can draw up an overall picture that will show the roles played by chance, necessity and agency. To do that, we may summarize the “in-practice EF” a bit more precisely as: 1] Observe an object, system, event or situation, identifying key aspects. 2] For each such aspect, identify if there is high/low contingency. (If low, seek to identify and characterize the relevant law(s) at work.) 3] For high contingency, identify if there is complexity + specification. (If there is no recognizable independent specification and/or the aspect is insufficiently complex relative to the universal probability bound, chance cannot be ruled out as the dominant factor; and it is the default explanation for high contingency. [Also, one may then try to characterize the relevant probability distribution.]) 4] Where CSI is present, design is inferred as the best current explanation for the relevant aspect; as there is abundant empirical support for that inference. (One may then try to infer the possible purposes, identify candidate designers, and may even reverse-engineer the design (e.g. using TRIZ), etc. [This is one reason why inferring design does not “stop” either scientific investigation or creative invention. Indeed, given their motto “thinking God's thoughts after him,” the founders of modern science were trying to reverse-engineer what they understood to be God's creation.]) 5] On completing the exercise for the set of key aspects, compose an overall explanatory narrative for the object, event, system or situation that incorporates aspects dominated by law-like necessity, chance and design. (Such may include recommendations for onward investigations and/or applications.)
Resort to such weak talking points progressively undermines the credibility of the objection argument being made.kairosfocus
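
A minimal sketch of the "in-practice EF" steps quoted in the comment above, as I read them; this is my own illustration, not code from Dembski or the correctives, and the 500-bit figure simply echoes the threshold cited elsewhere in the thread:

```python
UPB_BITS = 500   # universal-probability-bound style threshold used in the thread

def explanatory_filter(aspect):
    """aspect: dict with keys 'high_contingency', 'specified', 'complexity_bits'."""
    # Step 2: low contingency -> explain by law-like necessity.
    if not aspect["high_contingency"]:
        return "necessity (law-like regularity)"
    # Steps 3-4: high contingency with specification and complexity beyond
    # the bound -> design inferred; otherwise chance is the default.
    if aspect["specified"] and aspect["complexity_bits"] >= UPB_BITS:
        return "design (inferred as best current explanation)"
    return "chance (default for high contingency)"

# The tumbling die from the comment: highly contingent, but not independently
# specified and far too simple (about log2(6) ~ 2.6 bits per roll).
print(explanatory_filter({"high_contingency": True,
                          "specified": False,
                          "complexity_bits": 2.6}))   # -> chance
```
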
April 24, 2015 at 01:57 AM PDT
The weakness in this method is that ‘design’ is assumed as the default after eliminating all other possible causes. Wesley Elsberry’s review points out the failure to include unknown causation as a possibility.
What types of phenomena are "Unknown causes" capable of producing? "Unknown causes" is a god of the gaps solution. It can explain anything. We know what intelligence can do: produce information, produce cybernetic systems, produce irreducibly complex systems, use mathematical formulas and numerical values (ie fine tuning) etc. When we find these in nature and can't explain them by chance and/or necessity, intelligence is the best explanation. "Or it might be due to something unknown" is implicit in every scientific statement not just in ID. See Karl Popper: you can't prove a scientific theory you can only falsify it. But for some reason ID proponents (and everyone else who proposes something controversial - the "unknown cause" comes up in parapsychology too) are supposed to prove the theory.Jim Smith
April 24, 2015 at 01:36 AM PDT
F/N: Pardon a clip from my longstanding online note accessible via my handle, Sect A. Pardon also what I judge to be necessary length in the clip: ______________ >> . . . let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that "got lucky"? * If an apparent message is received, it means that something is working as an intelligible -- i.e. functional -- signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.] * Information theory, as Fig A.1 [an amplified version of the Shannon system diagram] illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.] * Now, it is of course entirely possible, that the apparent message is "nothing but" a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet -- including not just web pages but also even emails you have received -- is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise:
"LUCKY NOISE" SCENARIO: Imagine a world in which somehow all the "real" messages sent "actually" vanish into cyberspace and "lucky noise" rooted in the random behaviour of molecules etc, somehow substitutes just the messages that were intended -- of course, including whenever engineers or technicians use test equipment to debug telecommunication and computer systems! Can you find a law of logic or physics that: [a] strictly forbids such a state of affairs from possibly existing; and, [b] allows you to strictly distinguish that from the "observed world" in which we think we live? That is, we are back to a Russell "five- minute- old- universe"-type paradox. Namely, we cannot empirically distinguish the world we think we live in from one that was instantly created five minutes ago with all the artifacts, food in our tummies, memories etc. that we experience. We solve such paradoxes by worldview level inference to best explanation, i.e. by insisting that unless there is overwhelming, direct evidence that leads us to that conclusion, we do not live in Plato's Cave of deceptive shadows that we only imagine is reality, or that we are "really" just brains in vats stimulated by some mad scientist, or we live in a The Matrix world, or the like. (In turn, we can therefore see just how deeply embedded key faith-commitments are in our very rationality, thus all worldviews and reason-based enterprises, including science. Or, rephrasing for clarity: "faith" and "reason" are not opposites; rather, they are inextricably intertwined in the faith-points that lie at the core of all worldviews. Thus, resorting to selective hyperskepticism and objectionism to dismiss another's faith-point [as noted above!], is at best self-referentially inconsistent; sometimes, even hypocritical and/or -- worse yet -- willfully deceitful. Instead, we should carefully work through the comparative difficulties across live options at worldview level, especially in discussing matters of fact. And it is in that context of humble self consistency and critically aware, charitable open-mindedness that we can now reasonably proceed with this discussion.)
* In short, none of us actually lives or can consistently live as though s/he seriously believes that: absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and seeing stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?]
Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately absolutely prove that such is the case? ANS: Because we believe the odds of such "lucky noise" happening by chance are so small, that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be -- a message originating in an intelligent [though perhaps not wise!] source -- than to revert to "chance" as the default assumption. Technically, we compare how close the received signal is to legitimate messages, and then decide that it is likely to be the "closest" such message. (All of this can be quantified, but this intuitive level discussion is enough for our purposes.)
In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources . . . . For in fact, the issue in the communication situation once an apparent message is in hand is: inference to (a) intelligent -- as opposed to supernatural -- agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer's closely related discussion of the demarcation problem here.) More broadly the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: "necessity"); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.] Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a "real" explanation. This often confusing issue is best initially approached/understood through a concrete example . . .
A CASE STUDY ON CAUSAL FORCES/FACTORS -- A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert. [More details . . .] Then also, in certain highly important communication situations, the next issue after detecting agency as best causal explanation, is whether the detected signal comes from (4) a trusted source, or (5) a malicious interloper, or is a matter of (6) unintentional cross-talk. (Consequently, intelligence agencies have a significant and very practical interest in the underlying scientific questions of inference to agency then identification of the agent -- a potential (and arguably, probably actual) major application of the theory of the inference to design.) . . . . To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. 
So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in termns of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) . . . >> ______________ I trust this background context will allow us to have a fairly balanced context in which to assess the matter of inferring signal thus intelligence in purposeful action rather than noise that got lucky, because of the adaptation of signals to the communication conventions and system structures and protocols being used. Where of course the mere existence of the metrics signal to noise ratio, noise figure/factor and noise temperature already implies that in many relevant cases one may readily distinguish the characteristics of the two and measure the relevant power spectra etc to extract such values. Further to this, in systems and contexts of relevance to design theory, we have functionally specific complex organisation and associated information, FSCO/I, with a complexity metric based on length of the chain of structured y/n q's to describe the relevant wiring diagram pattern (which may be a s-t-r-i-n-g) will exceed 500 - 1,000 bits. That is, there is a threshold that sets a needle in haystack search challenge such that blind chance and necessity on the scale of ~10^17 s and 10^57 atoms as searchers/observers [sol system] or 10^80 atoms as same [observed cosmos] will be hopelessly overwhelmed by the scope of the config space of possibilities for all practical purposes. For 1,000 bits, the search scope at 10^13 - 14 searches per atom per s . . . a fast chem rxn rate . . . would be 10^110 - 111, relative to a space of 1.07*10^301 possibilities. A back of envelope exercise will show that the ratio that takes the scope of search as one straw size would confront a haystack that dwarfs the observed universe. In such a context, we may confidently and reliably infer that if something portrayed as such a blind needle in haystack search has come up trumps, picking up deeply isolated needles (islands of function) then it is because it was not genuinely blind. Indeed, one can put forward a concept of the injected information required to bridge between what blind search would do and what is observed. Especially, as the search for a golden search that beats those odds somehow will come from a much higher order set than the original space of possibilities. For, a search of a space is a subset of it, so the set of possible searches is the set of subsets. Where there are W configs, the cardinality of the power set is 2^W, i.e. exponentially larger. 
This is the context for the Marks-Dembski contention that search for search gets so much harder successively that one reverts to the original search and the conclusion is that on average one has no good reason to expect that a blindly chosen blind search will outperform a straight flat random search; given that one is crossing vast seas of non-function blindly in hopes of hitting on shorelines of function that will then allow non-random hill climbing on self-reinforcing increments in fitness. The challenge is to get to islands of function, not to hill climb within such an island, for OOL and for OO body plans (OOBP). The gap between the expected no hope result and the achievement of FSCO/I can then be reasonably taken as a metric of injected intelligent, active information that has reduced the scope of search to manageable proportions. And from this we see the significance of FSCO/I and of active information. KFkairosfocus
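
For readers who want to check the arithmetic quoted in the comment above, here is a quick back-of-envelope script. The inputs (10^80 atoms, 10^17 seconds, 10^14 inspections per atom per second, a 1,000-bit configuration space) are the commenter's figures, not independently established values, and whether this ratio is the right quantity to compare is exactly what the surrounding exchange disputes.

```python
atoms = 10**80        # atoms in the observed cosmos (figure used in the comment)
seconds = 10**17      # time window assumed in the comment
rate = 10**14         # inspections per atom per second (upper figure in the comment)

inspections = atoms * seconds * rate   # total inspections possible on these assumptions
configs = 2**1000                      # number of distinct 1,000-bit configurations

print("total inspections: ~10^%d" % (len(str(inspections)) - 1))                  # ~10^111
print("configurations:    ~10^%d" % (len(str(configs)) - 1))                      # ~10^301
print("fraction sampled:  ~10^%d" % (len(str(inspections)) - len(str(configs))))  # ~10^-190
```
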
April 24, 2015 at 01:21 AM PDT
The search for a solution to a problem is not a model of biological evolution and the concept of “active information” makes no sense in a biological context. Individual organisms or populations are not searching for optimal solutions to the task of survival.
http://www.evolutionnews.org/2012/08/conservation_of063671.html
Conservation of Information Made Simple William A. Dembski Go to Google and search on the term "evolutionary search," and you'll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there's no human or human-like intelligence behind evolutionary search as far as he's concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity.
The work of Douglas Axe et al. provides empirical confirmation of the roughness of the fitness landscape. http://www.evolutionnews.org/2015/01/biologic_instit_1092941.html Jim Smith
April 24, 2015 at 12:51 AM PDT
Apologies, as this is not a comment directly about information. But Aurelio Smith writes:
Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt.
Darwin originally saw species' random variation as being tuned progressively to a more or less uniform environment. Even now, the environment is seen as the non-random factor acting on random variations that enables it to be the substitute for a designer. But a "kaleidoscope of constant change" is at least as random as mutations are supposed to be. If it is true, what factor in the theory of evolution can possibly give it the prolonged trajectories we see - which alone enable it to be represented as a tree? To extend his analogy - mutations are a kaleidoscope of random change, and appear in an environment which is a kaleidoscope of random change. There is nothing in that scenario leading one to expect a picture to appear, still less to persist.Jon Garvey
April 24, 2015 at 12:49 AM PDT
I'm particularly interested in how specified complexity and active information "live together" in ID theory. But I don't want to derail the discussion with my preoccupation. What I hope for now is that you'll accept my promise to behave well, and allow my comments to appear without delay.SimonLeberge
April 23, 2015 at 11:17 PM PDT
Many thanks to Aurelio Smith for taking the time to write this! I have my own comments on it, but wanted to open it up first to see what UD readers thought about Aurelio's criticism, and if they had their own criticisms of Active Information. Looking forward to the discussion!johnnyb
April 23, 2015 at 10:21 PM PDT
