
Signal to Noise: A Critical Analysis of Active Information


The following is a guest post by Aurelio Smith. I have invited him to present a critique of Active Information in a more prominent place at UD so we can have a good discussion of Active Information’s strengths and weaknesses. The rest of this post is his.


My thanks to johnnyb for offering to host a post from me on the subject of ‘active information’. I’ve been following the fortunes of the ID community for some time now and I was a little disappointed that the recent publications of the ‘triumvirate’ of William Dembski, Robert Marks and their newly promoted postgrad Doctor Ewert have received less attention here than their efforts deserve. The thrust of their assault on Darwinian evolution has developed from earlier concepts such as “complex specified information” and “conservation of information”, and they now introduce “Algorithmic Specified Complexity” and “Active Information”.

Some history.

William Dembski gives an account of the birth of his ideas here:

…in the summer of 1992, I had spent several weeks with Stephen Meyer and Paul Nelson in Cambridge, England, to explore how to revive design as a scientific concept, using it to elucidate biological origins as well as to refute the dominant materialistic understanding of evolution (i.e., neo-Darwinism). Such a project, if it were to be successful, clearly could not merely give a facelift to existing design arguments for the existence of God. Indeed, any designer that would be the conclusion of such statistical reasoning would have to be far more generic than any God of ethical monotheism. At the same time, the actual logic for dealing with small probabilities seemed less to directly implicate a designing intelligence than to sweep the field clear of chance alternatives. The underlying logic therefore was not a direct argument for design but an indirect circumstantial argument that implicated design by eliminating what it was not.*

[*my emphasis]

Dembski published The Design Inference in 1998, where the ‘explanatory filter’ was proposed as a tool to separate ‘design’ from ‘law’ and ‘chance’. The weakness in this method is that ‘design’ is assumed as the default after eliminating all other possible causes. Wesley Elsberry’s review points out the failure to include unknown causation as a possibility. Dembski acknowledges the problem in a comment in an Uncommon Descent thread, “Some Thanks for Professor Olofsson”:

I wish I had time to respond adequately to this thread, but I’ve got a book to deliver to my publisher January 1 — so I don’t. Briefly: (1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI [Complex Specified Information] is clearer as a criterion for design detection.* (2) The challenge for determining whether a biological structure exhibits CSI is to find one that’s simple enough on which the probability calculation can be convincingly performed but complex enough so that it does indeed exhibit CSI. The example in NFL ch. 5 doesn’t fit the bill. The example from Doug Axe in ch. 7 of THE DESIGN OF LIFE (www.thedesignoflife.net) is much stronger. (3) As for the applicability of CSI to biology, see the chapter on “assertibility” in my book THE DESIGN REVOLUTION. (4) For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence” at http://www.designinference.com. (5) There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).

[*my emphasis]

Active information.

Dr Dembski has posted some background to his association with Professor Robert Marks and the Evolutionary Informatics Lab, an association which has resulted in the publication of several papers with active information as an important theme. A notable collaborator is Winston Ewert, PhD, whose master’s thesis was entitled Studies of Active Information in Search; in chapter four, he criticizes Lenski et al. (2003), saying:

[quoting Lenski et al., 2003]“Some readers might suggest that we stacked the deck by studying the evolution of a complex feature that could be built on simpler functions that were also useful.”

This, indeed, is what the writers of the Avida software do when using stair-step active information.

What is active information?

In A General Theory of Information Cost Incurred by Successful Search, Dembski, Ewert and Marks (henceforth DEM) give their definition of “active information” as follows:

In comparing null and alternative searches, it is convenient to convert probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information I_Ω as −log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space Ω to locate the target T. We then define the exogenous information I_S as −log(q), which measures the difficulty of the alternative search S in locating the target T. And finally, we define the active information I_+ as the difference between the endogenous and exogenous information: I_+ = I_Ω − I_S = log(q/p). Active information therefore measures the information that must be added (hence the plus sign in I_+) on top of a null search to raise an alternative search’s probability of success by a factor of q/p.

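To put rough numbers on that definition, here is a minimal sketch in Python (the probabilities are invented for illustration, not taken from the DEM paper):

```python
import math

def bits(p):
    """Convert a probability into an information measure: -log2(p)."""
    return -math.log2(p)

# Hypothetical toy values: a blind (null) search hits the target T
# with probability p; an assisted (alternative) search with probability q.
p = 1 / 1_000_000   # null search success probability
q = 1 / 100         # alternative search success probability

I_endogenous = bits(p)                  # I_Omega = -log2(p)
I_exogenous = bits(q)                   # I_S     = -log2(q)
I_active = I_endogenous - I_exogenous   # I_+     = log2(q/p)

print(f"I_Omega = {I_endogenous:.2f} bits")  # about 19.93
print(f"I_S     = {I_exogenous:.2f} bits")   # about 6.64
print(f"I_+     = {I_active:.2f} bits")      # about 13.29
```

On these made-up numbers the alternative search is 10,000 times more likely to succeed than the null search, and the active information is log2(10,000), roughly 13.3 bits.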
They conclude with an analogy from the financial world, saying:

Conservation of information shows that active information, like money, obeys strict accounting principles. Just as banks need money to power their financial instruments, so searches need active information to power their success in locating targets. Moreover, just as banks must balance their books, so searches, in successfully locating targets, must balance their books — they cannot output more information than was inputted.

In an article at The Panda’s Thumb website, Professor Joe Felsenstein, in collaboration with Tom English, presents some criticism of the quoted DEM paper. Felsenstein helpfully posts an “abstract” in the comments, saying:

Dembski, Ewert and Marks have presented a general theory of “search” that has a theorem that, averaged over all possible searches, one does not do better than uninformed guessing (choosing a genotype at random, say). The implication is that one needs a Designer who chooses a search in order to have an evolutionary process that succeeds in finding genotypes of improved fitness. But there are two things wrong with that argument:

1. Their space of “searches” includes all sorts of crazy searches that do not prefer to go to genotypes of higher fitness – most of them may prefer genotypes of lower fitness or just ignore fitness when searching. Once you require that there be genotypes that have different fitnesses, so that fitness affects their reproduction, you have narrowed down their “searches” to ones that have a much higher probability of finding genotypes that have higher fitness.

2. In addition, the laws of physics will mandate that small changes in genotype will usually not cause huge changes in fitness. This is true because the weakness of action at a distance means that many genes will not interact strongly with each other. So the fitness surface is smoother than a random assignment of fitnesses to genotypes. That makes it much more possible to find genotypes that have higher fitness.

Taking these two considerations into account – that an evolutionary search has genotypes whose fitnesses affect their reproduction, and that the laws of physics militate against strong interactions being typical – we see that Dembski, Ewert, and Marks’s argument does not show that Design is needed to have an evolutionary system that can improve fitness.

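Felsenstein’s two points are easy to demonstrate in miniature. The sketch below (Python; a deliberately crude toy with an explicit target, so it illustrates only the mathematics, not real biology) compares uniform guessing with a mutation-plus-selection walk over the same space of 2^20 genotypes:

```python
import random

N = 20                 # genotype length in bits
TARGET = [1] * N       # the toy "fittest" genotype

def fitness(g):
    # Smooth toy landscape: fitness counts matching bits, so a one-bit
    # change in genotype changes fitness by at most one (Felsenstein's
    # second point: small genotype changes, small fitness changes).
    return sum(1 for a, b in zip(g, TARGET) if a == b)

def blind_search(trials):
    # Uninformed guessing: fitness never influences where we look next.
    best = 0
    for _ in range(trials):
        g = [random.randint(0, 1) for _ in range(N)]
        best = max(best, fitness(g))
    return best

def selection_search(trials):
    # Mutation plus selection: flip one bit at a time and keep the
    # change only if fitness does not drop (Felsenstein's first point:
    # fitness affects which variants persist).
    g = [random.randint(0, 1) for _ in range(N)]
    for _ in range(trials):
        i = random.randrange(N)
        candidate = g[:]
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(g):
            g = candidate
    return fitness(g)

random.seed(0)
print("blind search, 1000 trials:    ", blind_search(1000))      # typically 16 or 17 of 20
print("selection search, 1000 trials:", selection_search(1000))  # reliably 20 of 20
```

The explicit target makes this a caricature, of course; the point is only that once fitness biases which variants persist, and the landscape is smooth, the “search” vastly outperforms uniform guessing.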
I note that there is an acknowledgement in the DEM paper as follows:

The authors thank Peter Olofsson and Dietmar Eben for helpful feedback on previous work of the Evolutionary Informatics Lab, feedback that has found its way into this paper.

This is the same Professor Olofsson referred to in the “Some Thanks for Professor Olofsson” thread mentioned above. Dietmar Eben has blogged extensively on DEM’s ideas.

I’m not qualified to criticize the mathematics, and I see no need to doubt that it is sound. What I do query, however, is whether the model is relevant to biology. The search for a solution to a problem is not a model of biological evolution, and the concept of “active information” makes no sense in a biological context. Individual organisms and populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely availing themselves of the opportunities that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”. There is the source of the active information.

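That anthropomorphism can be made concrete. In the sketch below (Python again; the target value, step size and iteration count are arbitrary), the walker never sees the target at all; it hears only the environment’s “warmer”/“colder” verdict on each blind move, and that feedback alone suffices:

```python
import random

def warmer_colder(target=0.42, steps=200, max_step=0.05):
    # The walker starts at a random position and proposes blind moves.
    # Only the environment knows the target; it answers each proposal
    # with "warmer" (accepted) or "colder" (rejected).
    position = random.random()
    for _ in range(steps):
        proposal = position + random.uniform(-max_step, max_step)
        proposal = min(max(proposal, 0.0), 1.0)
        if abs(proposal - target) < abs(position - target):
            position = proposal   # "warmer": keep the move
        # "colder": stay put
    return position

random.seed(1)
print(round(warmer_colder(), 3))  # ends close to 0.42
```

Measured in DEM’s terms the run looks well supplied with active information, yet nothing was “added” by a designer: the log(q/p) bits are simply a measure of the feedback the environment supplies.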
I was recently made aware that the classic paper by Sewall Wright, The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evolution, is available online. Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?

Perhaps, in conclusion, I should emphasize two things. Firstly, I am utterly opposed to censorship and suppression. I strongly support the free exchange of ideas and information, and I strongly support any genuine efforts to develop “Intelligent Design” into a formal scientific endeavor. Jon Bartlett sees advantages in the field of computer science, and I say good luck to him. Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt. Had Sewall Wright been developing his ideas in the computer age, his laboriously hand-crafted diagrams would, I’m sure, have evolved (deliberate pun) into exquisite computer models.

References

History: William Dembski, The Design Inference (1998): introduces the explanatory filter. (Elsberry criticizes the book for using a definition of “design” as what is left over after chance and regularity have been eliminated.)

Wikipedia: universal probability bound, complex specified information, conservation of information, meaningful information.


John S. Wilkins and Wesley R. Elsberry, “The Advantages of Theft over Toil” (2001)

Seth Lloyd, “Computational Capacity of the Universe” (2001)

Wesley Elsberry and Jeffrey Shallit, “Information Theory, Evolutionary Computation, and Dembski’s ‘Complex Specified Information’” (2003)

William A. Dembski, “Specification: The Pattern That Signifies Intelligence” (August 15, 2005)

Tom English, “Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch” (1996)

William Dembski, “Conservation of Information Made Simple” (2012). From that article:

…evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.

Take Brown University’s Kenneth Miller, a cell biologist whose knowledge of the relevant mathematics I don’t know. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.

Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.

So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don’t need the enzyme that’s been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don’t need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.

So where’s the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection — the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die*. [(* My emphasis) Not correct – confirmed by Richard Lenski – AF] But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.

To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we’ll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life’s molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost — an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.

Meaningful Information

Paul Vitányi, “Meaningful Information” (2004)

The question arises whether it is possible to separate meaningful information from accidental information, and if so, how.

Evolutionary Informatics Publications

Dembski, Ewert and Marks, “Conservation of Information in Relative Search Performance” (2013)

Ewert, Dembski and Marks, “Algorithmic Specified Complexity in the Game of Life” (2015)

Ewert, “Digital Irreducible Complexity: A Survey of Irreducible Complexity in Computer Simulations” (2014)

Dembski, Ewert and Marks, “On the Improbability of Algorithmic Specified Complexity” (2013)


Dembski, Ewert and Marks, “A General Theory of Information Cost Incurred by Successful Search” (2013)

Dembski, responding to Felsenstein at Evolution News & Views: “Actually, in my talk, I work off of three papers, the last of which Felsenstein fails to cite and which is the most general, avoiding the assumption of uniform probability to which Felsenstein objects.”


Dietmar Eben’s blog

DiEb’s review, “cost of successful search”

Dembski and Marks, “Conservation of Information in Search: Measuring the Cost of Success” (2009)

Dembski and Marks, “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2009)

Joe Felsenstein, “Has Natural Selection Been Refuted? The Arguments of William Dembski” (2007)

From its conclusion:
Dembski argues that there are theorems that prevent natural selection from explaining the adaptations that we see. His arguments do not work. There can be no theorem saying that adaptive information is conserved and cannot be increased by natural selection. Gene frequency changes caused by natural selection can be shown to generate specified information. The No Free Lunch theorem is mathematically correct, but it is inapplicable to real biology. Specified information, including complex specified information, can be generated by natural selection without needing to be “smuggled in”. When we see adaptation, we are not looking at positive evidence of billions and trillions of interventions by a designer. Dembski has not refuted natural selection as an explanation for adaptation.

Erik Tellgren, “On Dembski’s Law of Conservation of Information” (2002)

Comments
Zachriel:
The environment is not random, but primarily cyclical.

Which is why natural selection brings about a wobbling stability, which excludes universal common descent.
Joe
April 24, 2015 at 08:22 AM PDT
The word "random" wrt biology and evolution means happenstance - as in accidental, errors and mistakes - not planned.
Joe
April 24, 2015 at 08:21 AM PDT
Jon Garvey: But a “kaleidoscope of constant change” is at least as random as mutations are supposed to be.

The environment is not random, with changes primarily cyclical. - Edited for clarity
Zachriel
April 24, 2015 at 08:20 AM PDT
Elizabeth Liddle:
Well, no, Joe. Evolutionary algorithms do precisely that – they not only simulate natural selection (via the fitness criteria) but random mutation as well
Nonsense. Natural selection is not a search and does not have a goal. Evolutionary algorithms are both a search and goal-oriented. It is obvious that you have no idea what natural selection is nor what it entails.
Joe
April 24, 2015 at 08:15 AM PDT
Jon Garvey wrote:
Darwin originally saw species’ random variation as being tuned progressively to a more or less uniform environment. Even now, the environment is seen as the non-random factor acting on random variations that enable it to be the substitute for a designer. But a “kaleidoscope of constant change” is at least as random as mutations are supposed to be. If it is true, what factor in the theory of evolution can possibly give it the prolonged trajectories we see – which alone enable it to be represented as a tree? To extend his analogy – mutations are a kaleidoscope of random change, and appear in an environment which is a kaleidoscope of random change. There is nothing in that scenario leading one to expect a picture to appear, still less to persist.
I think the problem here is that wretched word "random". It has too many meanings to be useful! Sometimes it means "unintended"; sometimes it means "equiprobable"; sometimes it means "unpredictable"; sometimes it means "orthogonal"; sometimes it means "stochastic". And, of course, Darwin didn't use the word at all! So I think it's really important to be clear which we mean when we discuss evolutionary processes.

Our current understanding of evolutionary processes is that both variation and "selection" are stochastic events - in other words, they are predictable statistically, in bulk (just as one can predict, statistically, that around 50% of coin tosses will be heads) but not predictable, necessarily (or at least not without a great deal of not-normally-available information) individually. So while we can predict that certain genes will show certain kinds of mutation more often than others, and that every new organism will have a certain number of mutations, we can't predict who will get what, even though we can predict that most offspring will be quite similar to their parents and their siblings. Similarly, while we can't predict very well which of them will fall under a bus before reproducing, nor, on the other hand, which will go on to produce record numbers of offspring, we can say, more or less, that a certain proportion of people will die without issue under a bus in any given year, and we can even say that those who don't are slightly less likely to be blind, deaf or suicidal than those who do.

So the idea that there are "random" variations being acted on by "non-random" environmental events is not really a defensible one. What is important, however, is that in general, the first is "non-random" with respect to the second. It is probable (but not certain) that any given mutation is no more likely to appear de novo when it is useful than when it is not - and in evolutionary algorithms this is normally the case. In fact, it's why they are so useful - no "second guessing" of what is likely to turn out to be useful is employed, and so the system is free to explore "lines of enquiry" that a more far-sighted "designer" would reject as unfruitful.

And just as variant offspring are similar to their parents, so novel environments tend to be similar to the preceding one. If it is warmish here, this year, it is likely to be still warmish, a few miles away, next year. So neither mutations nor environmental changes are "random" in the "equiprobable" sense, which is why "white noise" fitness landscapes are irrelevant to the discussion! Practically speaking, that means that Critter A, fairly well suited to Environment B, is likely to have offspring similarly suited to both Environment B and Environment B+a few miles. That is the "active information" that makes the "search", by the population of which Critter A is a member, for an optimal solution to the problem of surviving and breeding in Environment B and its near-neighbours, a search of a smooth fitness landscape. And in a smooth fitness landscape, evolutionary processes work really well (as Ewert, Dembski and Marks agree).

The question is therefore: do we need to invoke a designer to account for the "active information" inherent in a) the similarity of offspring to their parents or b) the similarity of one environment with the nearest neighbour in space or time? Regarding a): if it were not the case, we wouldn't call the critters self-replicators anyway! So the question would be moot.

Regarding b): is there something "unnatural" about a universe in which places near in space and time tend to be similar to each other? If that is Ewert, Dembski and Marks' argument, then I think it needs to be made more explicit!
Elizabeth Liddle
April 24, 2015 at 08:08 AM PDT
Jim Smith: For the purposes of the no free lunch theorem a search is an algorithm that traverses a landscape and has some criterion to identify "success".

The NFL theorems do not apply to "search" as defined by DEM. I recently heard Dembski say that his theorems are "in the spirit of the no free lunch theorems," or some such. To put it simply, in the NFL framework, the fitness function is part of the problem. In the framework of DEM, the fitness function, if any, is part of the solution.
SimonLeberge
April 24, 2015 at 08:07 AM PDT
Roy, that's a strawman based on a distortion of both the circumstances and what alternatives means. There are TWO, not one, successive alternatives considered, and they would be broken based on reasonable criteria backed up by empirical warrant. Unless you can show a reasonable fourth option or reduce three to two per empirical observation backed argument, we face a three way split and good reason to infer to design as best cause of FSCO/I, as it is highly contingent [so not deterministic mechanism] and it is both complex beyond a threshold and functionally specific per interactive organisation [so on an island of function maximally implausible to be found by blind search]. Where also, it is the case that on trillions of cases design is reliably the adequate cause of said FSCO/I. You simply cannot show a single case of blind chance and/or necessity credibly causing FSCO/I, or you would be trumpeting it instead of playing at twisty definition games. KF
kairosfocus
April 24, 2015 at 08:03 AM PDT
Kairosfocus, There is absolutely no rush. This is a thread on active information. You brought up FSCO/I repeatedly. My response is: "OK, I wasn't going to push the question of how active information and specified complexity 'live together' in ID theory. But if you insist that a kind of specified complexity is relevant, then I insist that you make rigorous comparison to active information possible."

If you want FSCO/I to be on the table with active information, then you've got to define it formally. No amount of background is a substitute for something like AI = log q − log p, where q is the probability that a model of evolution generates an event of interest, and p is the probability that a baseline model generates that event. I can relate active information to specified complexity because Dembski, and now Ewert, Dembski, and Marks, have ventured formal definitions. If you don't have a formal definition, then I'd ask that you give folks a chance to discuss a measure that is formally defined, is in peer-reviewed papers, and is rarely mentioned at UD.

In Being as Communion, Bill Dembski relegates specified complexity to a footnote, and devotes the final three chapters to active information. DEM came through with a conservation-of-information theorem for active information, not specified complexity.
SimonLeberge
April 24, 2015 at 07:57 AM PDT
kairosfocus: Z, evolutionary algorithms both presume being on an island of function

Evolutionary algorithms explore the behavior of replicators, so start with whatever it takes to replicate successfully.

kairosfocus: clearly a self-replicator cannot "evolve" from a non-self-replicator

That's not clear, but is irrelevant in any case.

kairosfocus: So it would probably be useful to focus on that specific argument.

That is the topic of the thread.

kairosfocus: And the richer the variety of that environment, in terms of potential threats and resources, then the more information it contains, quantifiable in bits, if you like.

Yes. Replicators incorporate information about their relationship to the environment.
Zachriel
April 24, 2015 at 07:49 AM PDT
Well, no, Joe. Evolutionary algorithms do precisely that - they not only simulate natural selection (via the fitness criteria) but random mutation as well. And they can also be used in their own right as design tools - to solve problems that human designers can't solve otherwise.

KF: clearly a self-replicator cannot "evolve" from a non-self-replicator, so to that extent, by definition, the simplest self-replicator is on an "island" that cannot be reached via evolution. However, what we do not know is just how simple that self-replicator had to be in order to kick-start the process. But that, in any case, is not the topic of Ewert, Dembski and Marks' papers, which appear to me to be making the argument that EVEN IF we assume that a self-replicating population exists, the solutions "found" by such a population are making use of "Active Information" that has to be provided from somewhere.

So it would probably be useful to focus on that specific argument. And I think the simple answer is that that "Active Information" is simply that in the literal landscape itself - the environment that the population inhabits. The "information" that causes one critter to produce more offspring than another critter's offspring consists of the threats and resources of that environment. And the richer the variety of that environment, in terms of potential threats and resources, then the more information it contains, quantifiable in bits, if you like.
Elizabeth Liddle
April 24, 2015 at 07:35 AM PDT
Z, evolutionary algorithms both presume being on an island of function and are fine tuned to work by their designers. Yes, designers, they are an example of intelligently directed configuration yielding functionally specific complex organisation. No credible, empirically warranted observationally demonstrated case of such an algor or similar case of FSCO/I arising by blind chance and necessity has been shown. KF

PS: The proper start point is a pond or volcano vent or comet core or gas giant moon etc with reasonable chemicals. No reasonable blind chance and mechanical necessity driven path has been observationally, empirically warranted from such to a gated, encapsulated, metabolic automaton with a code using self replication facility. Likewise, embryologically feasible body plans reasonably require 10 - 100+ mn, not 100 - 1,000 k bases of genomic material. No observationally warranted blind watchmaker mechanism has been shown capable of these feats or significant steps to them. Speculation rooted in materialist a priorism has been imposed instead. As for instance Lewontin so candidly acknowledged.
kairosfocus
April 24, 2015 at 07:16 AM PDT
Evolutionary algorithms do not simulate natural selection, Zachriel. You are confused.
Joe
April 24, 2015 at 07:01 AM PDT
Jim Smith: If you had a supercomputer and a super-algorithm you could simulate mutation and natural selection on a computer and determine if it can achieve survival better than a random algorithm.

You hardly need a supercomputer. An evolutionary algorithm works poorly on chaotic landscapes, but well on many non-chaotic landscapes.
Zachriel
April 24, 2015 at 06:55 AM PDT
For the purposes of the no free lunch theorem a search is an algorithm that traverses a landscape and has some criterion to identify "success". Mutation and natural selection is an algorithm that traverses the fitness landscape, even as the landscape is changing, and "survival" is the measure of success. If you had a supercomputer and a super-algorithm you could simulate mutation and natural selection on a computer and determine if it can achieve survival better than a random algorithm. The no free lunch theorem applies to mutation and natural selection as it does to any algorithm.
Jim Smith
April 24, 2015 at 06:49 AM PDT
Natural selection has not been refuted but it has been proven to be impotent wrt universal common descent.
Joe
April 24, 2015 at 06:30 AM PDT
Simon, yes, I understand that (I think). My point is that the only kind of "evolutionary" process that would be no better than random search would not be like any "evolutionary process" that actually exists. We'd have to postulate offspring were similar to their parents in no respect that affected their capacity to breed. So that would mean that either they were really really unlike their parents (in which case we wouldn't call them "self-replicators") or that their similarities were completely orthogonal to their capacity to breed. Which may be of mathematical interest, but seems irrelevant, unless someone is postulating that a designer is needed to ensure that two breeders with similar properties should often include properties that affect their ability to breed.
Elizabeth Liddle
April 24, 2015 at 06:25 AM PDT
Basic replicators will always be basic replicators. Also replication is the very thing that requires an explanation.
Joe
April 24, 2015 at 06:10 AM PDT
kairosfocus: Note the onward begged question of finding islands of function, especially at OOL

Basic replicators are presumed in the discussion of so-called active information.
Zachriel
April 24, 2015 at 06:06 AM PDT
F/N: Note the onward begged question of finding islands of function, especially at OOL but also at OOBPs. KF
kairosfocus
April 24, 2015 at 06:02 AM PDT
Mark Frank:
All evolution does is evolve organisms to a state where they are sufficiently fit to survive.
Obviously they are already there, Mark. Evolutionism starts with such organisms. Also a fitness function isn't a target. It is something that checks to see how close the current product is to the solution.
Joe
April 24, 2015 at 05:58 AM PDT
F/N 2: WmAD in NFL -- yes, on public easily accessible record for over a decade -- on CSI in the context of biofunction (thus enfolding FSCO/I): http://iose-gen.blogspot.com/2010/06/introduction-and-summary.html#wd_defn

>> p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below]], or what equivalently we have been calling in this Chapter Complex Specified Information or CSI . . . . Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems]] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites: Wouters, p. 148: "globally in terms of the viability of whole organisms," Behe, p. 148: "minimal function of biochemical systems," Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction." On p. 149, he roughly cites Orgel's famous remark from 1973, which exactly cited reads: In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . And, p. 149, he highlights Paul Davies in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."]] . . .”

p. 144: [[Specified complexity can be more formally defined:]] “. . . since a universal probability bound of 1 [[chance]] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster]] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities]] subsumes E [[effectively the observed event from that field]], T is detachable from E, and T measures at least 500 bits of information . . . ” >>

Notice the config space context and the implication of deep needle in haystack blind search challenge to find islands of function. KF

PS: My last discussion on FSCO/I here at UD was here: https://uncommondescent.com/intelligent-design/id-foundations/functionally-specific-complex-organisation-and-associated-information-fscoi-is-real-and-relevant/
kairosfocus
April 24, 2015 at 05:57 AM PDT
#20 SimonL
The fitness function, if any, is part of the “search” itself.
It is also true that the fitness function is the "target" (to the extent that target makes sense in this context). All evolution does is evolve organisms to a state where they are sufficiently fit to survive. I think this is a key misunderstanding because artificial simulations are sometimes accused of sneaking in the information about the target via the fitness function. But actually by creating a fitness function they are creating a target.
Mark Frank
April 24, 2015 at 05:52 AM PDT
SimonLeberge - Correct the evolutionists by telling them genetic and evolutionary algorithms have goals and natural selection does not. Those algorithms are actively searching for a solution whereas natural selection isn't even a search.
Joe
April 24, 2015 at 05:46 AM PDT
Earth to Elizabeth Liddle - Genetic and evolutionary algorithms model intelligent design evolution and have nothing to do with natural selection. Also if your position had something - a model, testable entailments, actual evidence - then we could check it out. However it doesn't, so all we can do is discuss probabilities even though you can't even show a feasibility. That yours is even included in a probability discussion is giving it more than it deserves. And the funny part is you aren't even aware of any of that.
Joe
April 24, 2015 at 05:44 AM PDT
F/N: I repeat, the pivotal issue to be explained is not hoped-for hill climbing within islands of function but to get to islands of function by blind search on the gamut of the sol system or observed cosmos. When you observe consistent brushing this aside to talk about things within such islands, questions are being insistently begged. KF
kairosfocus
April 24, 2015 at 05:43 AM PDT
Elizabeth Liddle, The fitness function, if any, is part of the "search" itself. Almost everybody stumbles on that point. (Yes, folks, evilutionists do correct one another in public.)
SimonLeberge
April 24, 2015 at 05:37 AM PDT
SL, I am on a key Skype call to B'dos, but will clip my IOSE. FSCO/I is descriptive, and quantifiable: http://iose-gen.blogspot.com/2010/06/introduction-and-summary.html#fsci_sig

>> D: The significance of complex, functionally specific information/organisation

The observation-based principle that complex, functionally specific information/organisation is arguably a reliable marker of intelligence and the related point that we can therefore use this concept to scientifically study intelligent causes will play a crucial role in that survey. For, routinely, we observe that such functionally specific complex information and related organisation come -- directly [[drawing a complex circuit diagram by hand]] or indirectly [[a computer generated speech (or, perhaps: talking in one's sleep)]] -- from intelligence. In a classic 1979 comment, well known origin of life theorist J S Wicken wrote:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
The idea-roots of the term "functionally specific complex information" [FSCI] are plain: "Organization, then, is functional[[ly specific] complexity and carries information." Similarly, as early as 1973, Leslie Orgel, reflecting on Origin of Life, noted:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 - 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
Thus, the concept of complex specified information -- especially in the form functionally specific complex organisation and associated information [FSCO/I] -- is NOT a creation of design thinkers like William Dembski. Instead, it comes from the natural progress and conceptual challenges faced by origin of life researchers, by the end of the 1970's. Indeed, by 1982, the famous, Nobel-equivalent prize winning Astrophysicist (and life-long agnostic) Sir Fred Hoyle, went on quite plain public record in an Omni Lecture:
Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ --> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]
So, we first see that by the turn of the 1980's, scientists concerned with origin of life and related cosmology recognised that the information-rich organisation of life forms was distinct from simple order and required accurate description and appropriate explanation. To meet those challenges, they identified something special about living forms, CSI and/or FSCO/I. As they did so, they noted that the associated "wiring diagram" based functionality is information-rich, and traces to what Hoyle already was willing to call "intelligent design," and Wicken termed "design or selection." By this last, of course, Wicken plainly hoped to include natural selection. But the key challenge soon surfaces: what happens if the space to be searched and selected from is so large that islands of functional organisation are hopelessly isolated relative to blind search resources? For, under such "infinite monkey" circumstances, searches based on random walks from arbitrary initial configurations will be maximally unlikely to find such isolated islands of function. As the crowd-source Wikipedia summarises (in testimony against its ideological interest compelled by the known facts):
The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783. Even if the observable universe were filled with monkeys typing from now until the heat death of the universe, their total probability to produce a single instance of Hamlet would still be less than one in 10^183,800. As Kittel and Kroemer put it, “The probability of Hamlet is therefore zero in any operational sense of an event…”, and the statement that the monkeys must eventually succeed “gives a misleading conclusion about very, very large numbers.” This is from their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys.[3]
So, once we are dealing with something that is functionally specific and sufficiently complex, trial-and-error, blind selection on a random walk is increasingly implausible as an explanation, compared to the routinely observed source of such complex, functional organisation: design. Indeed, beyond a certain point, the odds of trial and error on a random walk succeeding fall to a "practical" zero. >>

I trust this will help to give background. KF
kairosfocus
April 24, 2015 at 05:28 AM PDT
OK, we seem to have communication! A second point I think worth making regarding the issue of probability as information is that the probabilities in question are frequentist probabilities. In other words, they are only calculable if you know (or can estimate, from data) the frequency distribution of the patterns, or whatever, in question.

However, once we grant (as I think we should) that evolutionary search is better than blind search on any fitness landscape in which the process can be properly said to be "evolutionary" (i.e. as in my previous post, one in which offspring have similar properties to their parents), then the question becomes not: "how probable is it that an evolutionary process will find good complex solutions to surviving in a given environment?" (answer: high) but "how probable is it that an evolutionary process will arise?" And that is simply not possible to calculate - we simply do not have the data from which to estimate the frequency distribution in question. It may be that our universe, or a universe in which evolutionary processes can arise, is rare in the population of all possible universes, or it may be that such universes are very common. But we do not know which!

To put that in the language of DEM, we do not know whether the amount of "Active Information" in our universe is a uniquely adequate amount to "fund" the phenomenon of evolutionary process, or not. For instance, "1/f" seems to be a recurring feature of the patterns in our universe, and I suggest that this "1/f" property is precisely the property that a universe that lends itself to life will tend to have. Does it take a Mind to construct a 1/f universe? Or is 1/f simply a highly probable consequence of existence itself? The reason that 1/f-ness is important, of course, is that any fitness landscape with 1/f properties will tend to be smooth at multiple scales - which is what a fitness landscape needs to be if evolutionary processes are to get very far.
Elizabeth Liddle
April 24, 2015 at 04:40 AM PDT
Kairosfocus, You've given me an idea of FSCO/I, but I still don't know the definition. Presumably the complexity is −log q, where q is the probability that a strictly naturalistic process generates the target event, just as for Dembski. You differ in description of the event, do you not? Once we agree on the event and the description language, the descriptive complexity of the event is a constant D. I don't care about the details of calculation. For Dembski, the specified complexity is SC = −log q − D. Things might go better if I change my request: Would you please relate FSCO/I formally to SC? More narrowly, can FSCO/I be expressed as a difference of (probabilistic) complexity and descriptive complexity?
SimonLeberge
April 24, 2015 at 04:29 AM PDT
A point worth making, I think, is that it can be misleading to separate the concept of the "fitness landscape" from the search algorithm you are considering. An "evolutionary algorithm", in practical terms, is not simply a discrete search strategy that can be applied to any "fitness landscape" where this is separately defined. The concept of an evolutionary algorithm inherently implies the existence of a population of self-replicators that replicate with variance. A population in which the offspring of any one individual, or pair of individuals, bore no relation to its parents could not be considered a population of self-replicators. In other words, it is part of the basic description of an "evolutionary algorithm" that offspring have similar properties to their parents - in other words, that they are situated "near" their parents on the fitness landscape.

So although, mathematically, it may make sense to consider the performance of various "search" algorithms over "all possible fitness landscapes", in terms of a model of reality, it makes no sense, because any "search" algorithm worthy of the name of "evolutionary search" comes with its own moderately smooth fitness landscape built in. And will therefore always do better than "blind search" over a comparable fitness landscape.

I have another couple of points to make, but I'll see if this one posts first!
Elizabeth Liddle
April 24, 2015 at 04:28 AM PDT