
Aurelio Smith’s Analysis of Active Information


Recently, Aurelio Smith had a guest publication here at Uncommon Descent entitled Signal to Noise: A Critical Analysis of Active Information. Most of the post is taken up by a recounting of the history of active information. He also quotes the criticisms of Felsenstein and English, which we have responded to at Evolution News and Views: These Critics of Intelligent Design Agree with Us More Than They Seem to Realize. Smith does, however, spend a few paragraphs developing his own objections to active information.

Smith argues that viewing evolution as a search is incorrect because organisms and individuals aren’t searching; they are being acted upon by the environment:

Individual organisms or populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely affording themselves of the opportunity that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”.

When we say “search,” we simply mean a process that can be modeled as a probability distribution. Smith’s concern is irrelevant to that question. However, even if we are trying to model evolution as an optimization or solution-search problem, Smith’s objection doesn’t make any sense. The objects of a search are always passive in the search. Objecting that the organisms aren’t searching is akin to objecting that Easter eggs don’t find themselves. That’s not how any kind of search works. All search is the environment acting on the objects in the search.
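To make the point concrete, here is a minimal sketch of a search modeled this way, using the log-ratio definition of active information discussed later in the thread; the search-space size and success probabilities are illustrative assumptions, not figures from any of our papers:

```python
# Sketch: a "search" modeled as a probability of success over a finite
# space, and the active information of a biased search over blind search.
# All numbers here are illustrative assumptions.
import math

OMEGA = 2 ** 20              # size of the search space (assumed)
p = 1 / OMEGA                # blind (uniform) search: one target element
q = 1 / 2 ** 5               # assumed success probability of the biased search

endogenous = -math.log2(p)   # I_Omega: difficulty of the problem itself
exogenous = -math.log2(q)    # I_S: difficulty remaining given the search
active = endogenous - exogenous   # I_plus = -log2(p/q) = log2(q/p)

print(f"I_Omega = {endogenous:.0f} bits, I_S = {exogenous:.0f} bits, "
      f"I_plus = {active:.0f} bits")
```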

Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?

This is a rather strange comment. Smith quoted our discussion of Avida previously. But here he implies that we’ve only ever discussed Dawkins’s Weasel program. We’ve discussed Avida, Ev, Steiner Trees, and Metabiology. True, we haven’t looked at Wright’s paper, but it’s completely unreasonable to suggest that we’ve only discussed Dawkins’s “poor model.”

Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt.

It is true that a static fitness landscape is an insufficient model for biology. That is why our work on conservation of information does not assume a static fitness landscape. Our model is deliberately general enough to handle any kind of feedback mechanism.
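To illustrate, a search that takes feedback from a drifting landscape still induces an overall success probability, which is all the model requires. A toy sketch (the bit-string representation, update rule, and drift rate are assumptions chosen for illustration, not a model from our papers):

```python
# Sketch: a feedback-driven search on a drifting target still induces a
# probability of success, estimable by Monte Carlo, so it still fits the
# "search = probability distribution" framing above.
import random

def hill_climb_with_drift(n_bits=16, steps=100, trials=2000):
    """Estimate success probability of a feedback search on a drifting target."""
    hits = 0
    for _ in range(trials):
        target = random.getrandbits(n_bits)
        x = random.getrandbits(n_bits)
        for _ in range(steps):
            child = x ^ (1 << random.randrange(n_bits))   # flip one random bit
            # feedback: keep the child if it is no farther from the target
            if bin(child ^ target).count("1") <= bin(x ^ target).count("1"):
                x = child
            if random.random() < 0.1:                     # the landscape drifts
                target ^= 1 << random.randrange(n_bits)
            if x == target:
                hits += 1
                break
    return hits / trials

# The induced distribution over outcomes is summarised by one number:
print(f"estimated success probability q = {hill_climb_with_drift():.3f}")
```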

While I’m grateful to Smith for taking the time to write up his discussion, I find it very confused. The objections he raises don’t make any sense.

Comments
No, you didn't, Joe. You just asserted I was wrong. Well, I'm asserting you are. See how that works?

Posted by Elizabeth Liddle, May 3, 2015 at 7:13 AM (PDT)
What assertion, Lizzie? I made my case against you and I can and will defend it. Let's see what you have and then we can tell who is right. However, it is a given you won't even address what I posted that proves my points.

Posted by Joe, May 3, 2015 at 7:11 AM (PDT)
Well, I could make the same assertion about you, Joe, and, I submit, with more justification. The fact that you think I am in error doesn't make it so. The possibility remains that you are.

Posted by Elizabeth Liddle, May 3, 2015 at 6:50 AM (PDT)
Zachriel- Only intention can produce a nested hierarchy. Nested hierarchies are all artificial.

Posted by Joe, May 3, 2015 at 6:48 AM (PDT)
As predicted, Elizabeth just ignores her outrageous errors about nested hierarchies and computers. Willful ignorance it is then, eh, Lizzie?

Posted by Joe, May 3, 2015 at 6:47 AM (PDT)
Mapou: almost all modern software programming languages enforce a strictly nested class hierarchy.

Sure, but that doesn't mean that, when we look at human artifacts generally, they form a nested hierarchy.

Posted by Zachriel, May 3, 2015 at 6:33 AM (PDT)
fifthmonarchyman: We once again seem to be in agreement on a minor point but instead of noting that and simply moving on to more important stuff you insist on rephrasing.

Here is your statement again:

fifthmonarchyman: Premise one) Evolution is not searching for any specific target other than survival. Premise two) Evolutionary Algorithms are searching for specific targets. Conclusion) Evolutionary Algorithms are not “models” of Evolution.

Premise two is faulty. Some evolutionary algorithms search for specific targets; some do not. Hence, your syllogism is faulty.

fifthmonarchyman: We end up with comment after comment

Sure. That's what happens when you lose track of the thread, and we then have to repeat your original contention.

Posted by Zachriel, May 3, 2015 at 6:31 AM (PDT)
PS: Once we see the explanatory filter on a per-aspect basis, it does address the hoped-for effect of joint incremental chance and necessity. In particular, observe that the issue is a joint complexity-specificity condition that implies increments of 500+ bits of information, that is, bridging to islands of function. Much smaller increments within islands of function would be well within the reach of chance to explain high-contingency aspects. Where mechanical necessity does not explain high-contingency aspects of an object or process, but instead lawlike necessity where closely similar initial conditions lead to closely similar outcomes, such as F = m*a.

Posted by kairosfocus, May 3, 2015 at 6:25 AM (PDT)
MF says: Either the initial configuration of the universe was such that what happened subsequently was possible or a designer made it possible. True but not very interesting.

I say: I find it to be interesting. What happened subsequently was the awe-inspiring spectacle that is life. We are used to attributing this majestic panorama to evolution, and now we know the process is not up to the task. That is cool info to have.

EL says: So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray). If not, why not?

I say: No, you have not inserted active information. The fact that the bigger pebbles will rise to the surface is a consequence of the laws of physics that are already present in the overall system from the beginning. You don't add any information by letting the system play out according to those already existing laws. The knowledge of what will happen when you shake already exists in your mind, or you would not choose to shake in the first place. We have active information from the preexisting laws and/or from your preexisting knowledge. No information whatsoever is added with the shaking. The resulting increased probability of picking a big pebble could be accurately predicted before you even touched the tray.

peace

Posted by fifthmonarchyman, May 3, 2015 at 6:13 AM (PDT)
MatSpirit, the key problem is not incremental change within deeply isolated islands of function imposed by the requisites of interactive function arising from coupling many parts, but to initially find the islands of function, the viable body plans. In short, variation of finch beaks among existing populations is one thing; arrival of flying birds as a body plan is quite another. And -- once a priori evolutionary materialism is not imposed on the issue -- there is simply no adequate body of observationally grounded evidence for an incrementally advantageous, step-by-step, treelike blind-watchmaker path from microbes to Mozart, mango trees and molluscs etc. across a continent of viable forms feasible to traverse in a few thousand MY. Not to mention the challenge to bridge from chemicals in a pond or the like to a cell-based first life form. And it is in that context that the needle-in-haystack search challenge to find shores of function becomes pivotal. Hence WE's 3-point cluster. KF

Posted by kairosfocus, May 3, 2015 at 5:55 AM (PDT)
Hi, Winston. Thanks for your response. You wrote:
No. The possibilities are that active information was: 1) Injected into the universe via design. 2) Present at the original configuration of the universe. 3) Gained over time through stochastic processes. The math rules out possibility number 3.
You define "Active Information" simply as the ratio of the probability of X occurring, given process A, to the probability of X occurring, given process B. So under that definition, all the "Active Information" is, is the degree to which X is not a flat probability distribution. In other words, "Active Information" is simply a measure of how lumpy the probability distributions are in the universe. So why does "the math" (and in what way does the math) "rule out possibility number 3"? Stochastic processes can indeed make what is originally flat, lumpy.

For instance, let's take a deep tray of pebbles of assorted sizes, each size well mixed, and with a frequency distribution such that large ones are no better represented spatially than small ones. Your target is a large pebble (99th percentile). Pick a pebble from the top. As they are perfectly mixed, your chances of picking a large pebble are no better than your chances of any other pebble. Now shake the tray. What happens next is a stochastic process. That process results in the big pebbles arranging themselves on the top, and the small ones further down, the tiniest ones being on the bottom. Now pick a pebble from the top. It is highly likely to be a large pebble. So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray). If not, why not?
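A toy Monte Carlo version of this tray example (the pebble counts, swap rule, and shake length are assumptions chosen for illustration) makes the claimed probability boost, and the implied "Active Information", explicit:

```python
# Toy model of the pebble tray: "shaking" lets larger pebbles bubble up,
# and we measure how much the chance of drawing a large pebble from the
# top improves. All parameters are illustrative assumptions.
import math
import random

def shake(column, passes=2000):
    """Stochastic shaking: a larger pebble may swap above a smaller one."""
    for _ in range(passes):
        i = random.randrange(len(column) - 1)
        if column[i] > column[i + 1] and random.random() < 0.8:
            column[i], column[i + 1] = column[i + 1], column[i]
    return column

trials = 2000
hits_before = hits_after = 0
for _ in range(trials):
    col = [random.random() for _ in range(100)]   # last index = top of tray
    hits_before += col[-1] >= 0.99                # "large" = 99th percentile
    hits_after += shake(col)[-1] >= 0.99
p = max(hits_before, 1) / trials                  # guard against log(0)
q = hits_after / trials
print(f"p = {p:.3f}, q = {q:.3f}, gain = {math.log2(q / p):.1f} bits")
```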
That is why the remaining options are design or initial configuration of the universe. I’m not making any claims about the probability of the configuration of the universe. I’m merely pointing out that everything has to be traced back to the configuration; you can’t appeal to an increase in active information after that point.
In which case all you are saying is mainstream physics: that entropy is always increasing over the whole system - that you need to import energy to reduce local entropy (as I did when I shook the tray of pebbles). But we know this is possible - we can do it with pebbles, and plants do it with photosynthesis. Tornadoes do it. Adding energy to a system frequently reduces local entropy. So what has the LCI got to add that isn't just a restatement of Boltzmann?
Furthermore, my post discusses the point of the COI in the paragraphs immediately after the one that you quoted. That is the section I was intending to refer you to.
The passage after the one I quoted reads:
We argue that Darwinian evolution is incomplete. For advocates of Darwinian and design theories alike, the aim is to explain the complexity of biological life. Darwinian evolution does not explain the complexity of biological life because its success or failure depends on the fitness landscapes it operates on. To make it complete, the theory would have to include the nature of the fitness landscapes that make the evolutionary process work. Darwinian evolution is only part of a theory of the explanation of biological complexity.
Which still doesn't tell me "the point of the COI"! Of course "Darwinian evolution is incomplete". All science is incomplete - and always will be. Sure, Darwinian models don't attempt to account for the existence of the physical and chemical laws that make Darwinian processes possible. IDists like to claim that ID is not "Designer-of-the-gaps" - but that seems to be entirely where your paragraph above is going. Or, if that isn't where it is going, what is it you are trying to say? As you note:
What remains to ask is whether or not any of these explanations of the fitness landscape actually work. To that, conservation of information provides no answer.
Precisely. It doesn't. The thing is, Winston, it seems to me that the further you, Dembski and Marks have travelled down the road Dembski embarked on with "Specified Complexity" and "No Free Lunch" (and I actually commend you in particular for this), the more it turns out that the "Design Inference" is no more than the conclusion that the universe must have started with properties that facilitated non-uniform distributions of events. In other words, that it started out, if not lumpy, with the capacity to become so. Not only that, but it has a property of "1/fness" which is certainly interesting - it contains variability (Information, if you will, or Shannon Entropy) at multiple scales, from sub-atomic to inter-galactic. But we cannot infer a Designer from such a property, at least not from the probability of a universe with such a property, because we do not know the pdf of possible universes. It may be that lumpiness is a necessary property of existence. Ontologically, what could be said even to exist in a totally flat universe?
Moreover, nowhere in your ENV article that I can find do you tell us what the ratio of p/q is the probability of, which was my second question.

It isn't a probability. It's a measurement of the bias of a search towards a target. - Winston
OK - in any case it's more like an odds ratio, not a probability (my bad). But you could also express it as a measure of the increase in probability of an event, given a process that is not present at baseline, right? So you could write it as: p(X|process B)/p(X|process A), where X is a "target", A is the baseline process (e.g. one with a flat pdf), and B is the process of interest, e.g. one in which some outcomes are more likely than others. Yes? In which case you could simply convert it to an actual OR: [p(X|process B)/(1-p(X|process B))] / [p(X|process A)/(1-p(X|process A))]. Then you'd simply have a measure of how much more likely X is, given process B, than it is given process A. And if you regarded process A as one in which all outcomes were equally probable (as Dembski often does), then Active Information simply becomes a normalised expression of how much more probable X is under the process in question than it would be under equiprobable random draw. Where does this get us, other than to the conclusion that the universe is non-uniform?

Posted by Elizabeth Liddle, May 3, 2015 at 5:32 AM (PDT)
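For readers following this exchange, a short sketch of the two measures being contrasted, with assumed success probabilities:

```python
# Active information vs. odds ratio for the same pair of searches.
# The probabilities below are illustrative assumptions.
import math

def active_information(p, q):
    """Bits gained by the search of interest over the baseline: log2(q/p)."""
    return math.log2(q / p)

def odds_ratio(p, q):
    """Odds of hitting the target under B relative to A."""
    return (q / (1 - q)) / (p / (1 - p))

p, q = 0.01, 0.30   # assumed success probabilities under processes A and B
print(f"active information = {active_information(p, q):.2f} bits")  # ~4.91
print(f"odds ratio         = {odds_ratio(p, q):.1f}")               # ~42.4
```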
Winston in 138: No. The possibilities are that active information was: 1) Injected into the universe via design. 2) Present at the original configuration of the universe. 3) Gained over time through stochastic processes.

Number three is the problem. Evolution combines a stochastic process (mutation) that generates information with a "fact checking" process (natural selection) that rejects the information that hurts the organism. The information that isn't rejected is either useful to the organism or at least neutral. This makes evolution a "ratchet" that continually adds useful or neutral information to a genome while rejecting the bad information generated by mutations.

Have you ever noticed that Dembski's Explanatory Filter can't even handle this two-step process? It asks if the process being tested is random OR lawful, but you can't even enter a process that uses both into it. What else do you think Dembski is overlooking?

Posted by MatSpirit, May 3, 2015 at 4:47 AM (PDT)
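A minimal sketch of the mutation-plus-selection ratchet described above; it amounts to a one-mutant, Weasel-style toy, with the target string standing in for the environment (all parameters are assumptions for illustration):

```python
# Ratchet toy: random variation plus a reject-if-worse filter accumulates
# matches to an assumed "environment" string.
import random

ENV = "METHINKS IT IS LIKE A WEASEL"   # stands in for the environment (assumed)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Count positions that match the environment string."""
    return sum(a == b for a, b in zip(s, ENV))

genome = [random.choice(ALPHABET) for _ in ENV]
for step in range(100000):
    mutant = genome[:]
    mutant[random.randrange(len(mutant))] = random.choice(ALPHABET)  # variation
    if fitness(mutant) >= fitness(genome):   # selection: reject harmful change
        genome = mutant                      # keep neutral or useful change
    if fitness(genome) == len(ENV):
        print(f"matched after {step + 1} mutations: {''.join(genome)}")
        break
```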
MF, as an exercise in pure math, one may indeed assign an uncountably infinite set of objective functions to a space. But, I suggest, this loses sight of what we are addressing. Performance has to be exhibited in time and space. In the hoped-for evolutionary process, it takes generations for distinct sub-populations to emerge and sort out superior/inferior performance. And 20 minutes or 20 years makes little material difference to the resulting process lags and memory-of-the-past cumulative effects that lead to granularity as a reasonable approach. For you and I to be here, generations of successful reproduction had to have happened, across time, leading to lagged effects. In computing, every step and cycle are granular in value and time. Atoms and molecules have an effective speed limit to chemical-level interactions relevant to forming both monomers and chained macromolecules that appear in biological systems, ~10^13 or 10^14 per second. And so, we come right back to the relevant finite and discrete nature of what we are dealing with. In short, A/D conversion is natural to the case and will impose granularity.

It remains so that, WLOG, a system config can be described per wiring diagram on a structured set of Y/N questions, yielding a bit string, inherently discrete. For a bit string of length n, W = 2^n gives the number of possibilities. Then, samples taken from the set will be subsets, and the number of possible subsets is indeed 2^W. For n = 500 - 1,000, we have that 10^57 sol system atoms, or 10^80 for the observed cosmos, at 10^13 - 10^14 actions/s, will explore 10^87 - 10^88 or 10^110 - 10^111 possibilities in 10^17 s, which is an order-of-magnitude value for the timeline since the typical dating of the singularity. The result is the needle-in-haystack search challenge relative to 3.27×10^150 or 1.07×10^301 possibilities. Where also the power sets take in every possible individual sample of the sets, which will be finite.

So, on a reasonable assessment, there is indeed reason to consider the situation from this angle, and it sends the message that the needle-in-haystack search challenge will dominate relevant cases. For we cannot explore possibilities, develop configs, exhibit and filter performance in infinitesimal increments of time or space. So, while taking the granular view does not confine us to a flat random sampling as the way to explore possibilities, a golden search does point to a higher-order search for a search, and it is reasonable to see this as confronting a power-set abstract space. One may impose a further golden search -- why? -- but the regress of exponentiation is already evident. And we already are at the practical point of implying that the laws and initial circumstances of the cosmos would have had requisites of life written into them in astonishing ways. In short, you have suggested, inadvertently, a fine-tuning, cosmological programming argument at the root of the physics of the cosmos. And if design is at the table from that level, then there is no good, non-ideological reason to exclude it thereafter, at OOL or OOBP up to the origin of our own body plan.

Worse, step back a moment and allow a non-countably transfinite set of possibilities for objective functions. The search-for-search challenge just exploded in scope. Of course, in practice, we will see clusters that boil down to re-imposing granularity for practical purposes. But not enough to help your case. KF

Posted by kairosfocus, May 3, 2015 at 4:12 AM (PDT)
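A quick order-of-magnitude check of the figures in the comment above, computed directly:

```python
# Check the needle-in-haystack arithmetic: states explored vs. the size
# of a 500- or 1000-bit configuration space (figures from the comment).
atoms_sol, atoms_cosmos = 10**57, 10**80   # atoms: sol system / observed cosmos
rate, seconds = 10**14, 10**17             # actions per second, elapsed seconds

print(f"sol system explores ~{atoms_sol * rate * seconds:.0e} states")     # 1e+88
print(f"cosmos explores     ~{atoms_cosmos * rate * seconds:.0e} states")  # 1e+111
print(f"2^500  = {2**500:.2e}")    # ~3.27e+150
print(f"2^1000 = {2**1000:.2e}")   # ~1.07e+301
```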
#147 KF
MF, Genomes are 4-state per base systems, which imposes a finite and discrete set of possibilities. When we have a space of possibilities W, the set of samples on said space will come from the set of subsets, of cardinality 2^W. And that seems to me the operative context.
We were discussing M(omega), the set of possible searches of omega. The number of possible searches is not the same as the number of possible subsets of the search space. Although the search space may be discrete, the set of pdfs on that search space is infinite (in fact uncountably infinite). That applies even if there is just one item in the search space with two possible values. Ask Winston if you doubt me. DEM have defined a search in such a way that it is equivalent to a pdf on the search space. Therefore there are an uncountably infinite number of searches (as defined by DEM). I don't know what you mean by "operative context".

Posted by Mark Frank, May 3, 2015 at 2:57 AM (PDT)
WE, came back by overnight. Appreciated. We have differing foci and emphases. For me, over years, the functional subset of CSI has proved fruitful (and especially digitally coded strings such as in DNA); where I note that Dembski and Meyer have in fact pointed to that subset and its significance in what Wallace once called the world of life. Historically, that is the context in which CSI was recognised as a significant characteristic of life forms, as Orgel and Wicken noted. I will normally briefly explain or expand the acronym when I use it. KF

Posted by kairosfocus, May 3, 2015 at 2:52 AM (PDT)
Joe:
Mapou 143- Nice job. Even though human design can violate a nested hierarchy doesn’t mean they all have to. OTOH gradual evolution will always produce transitional forms that will blur the nice, neat lines of distinction nested hierarchies require.
I see this as another huge problem for Darwinian evolution. Where is the blur? I'm sure there is yet another just-so, pseudoscientific story to explain it. Elsewhere, you mentioned Darwin's extinction hypothesis, but it's obviously a non-explanation. Are there others?

Posted by Mapou, May 2, 2015 at 11:17 PM (PDT)
WE #128 Thanks also for your efforts to explain mu and mu-bar. I am still struggling, but let me try rephrasing what I think it might mean in my own words. I think you might be saying: for any pdf mu that gives a probability P of "hitting a target", it is possible to find a higher-level pdf mu-bar that creates pdfs that in total have the same probability of "hitting the target". Is that it?

Posted by Mark Frank, May 2, 2015 at 11:09 PM (PDT)
WE #139 Thanks for continuing to be involved. I know how time consuming and irritating it can be responding to multiple interrogators.
It isn’t a probability. Its a measurement of the bias of a search towards a target.
This raises two questions: 1) Biased as compared to what? What does unbiased look like? If you cannot define unbiased, then it seems your assertion amounts to: either the initial configuration of the universe was such that what happened subsequently was possible, or a designer made it possible. True but not very interesting. 2) You call -log base 2 of (p/q) active information. But you say p/q is not a probability. Yet in other contexts you define information as -log base 2 of a probability (e.g. endogenous information and exogenous information). It seems like active information is a different kind of thing from other kinds of information.

Posted by Mark Frank, May 2, 2015 at 10:58 PM (PDT)
I was under the impression that he was trying to further develop the ideas of other people such that a wider audience can understand and appreciate them. And the acronym just further specified the subset of CSI- Dembski’s CSI.
Perhaps; I really haven't followed FSCO/I enough to know. My only thought is that if it is a worthwhile development, I'd really like to see it published in a paper or at a conference.

Posted by Winston Ewert, May 2, 2015 at 8:04 PM (PDT)
Winston Ewert wrote:
I was under the impression that you were trying to do something more novel than applying an acronym to the ideas of other people.
I was under the impression that he was trying to further develop the ideas of other people such that a wider audience can understand and appreciate them. And the acronym just further specified the subset of CSI - Dembski's CSI.

Posted by Joe, May 2, 2015 at 7:33 PM (PDT)
WE, I think I need to note that my point has always been that all I have provided by using the abbreviation FSCO/I is an acronym for a descriptive summary of the functionally specific subset of complex specified information.
My apologies. That's what I get for commenting on something I know nothing about. I was under the impression that you were trying to do something more novel than applying an acronym to the ideas of other people.

Posted by Winston Ewert, May 2, 2015 at 7:27 PM (PDT)
Carpathian, yes, design is tough to do. Especially when designed items have to function in a complex and partly uncontrolled and dynamic environment. That is why, for instance, central economic planning failed. But incremental development that has built-in robustness and adaptability, backed up by empirical testing and development with a healthy dose of stabilising negative feedbacks, tends to work out fairly well. Robustness, redundancy and adaptability tend to be more effective than overly brittle optimisation on objective functions . . . if you can get away with that. Beyond, I would not infer from design of life to a designer or designers of effective omniscience. That has been on the table for thirty years of the modern design school of thought, here, Thaxton et al. KF

Posted by kairosfocus, May 2, 2015 at 7:00 PM (PDT)
WE, I think I need to note that my point has always been that all I have provided by using the abbreviation FSCO/I is an acronym for a descriptive summary of the functionally specific subset of complex specified information. That concept is well established, which is what I cited. It is also a readily observed phenomenon, starting with the strings of glyphs used to communicate coded information we are all using in this thread and the similar strings in DNA and proteins. Wiring-diagram organised entities can readily be reduced to similar descriptive strings, as is commonly done with appropriate software. KF

Posted by kairosfocus, May 2, 2015 at 6:47 PM (PDT)
Elizabeth Liddle:
If we don’t, the LCI appears to amount to no more than: evolutionary success is either due to design or natural processes. So not an ID argument at all.
Given that intelligent design is a natural process, that's a false dichotomy.

Posted by Mung, May 2, 2015 at 6:38 PM (PDT)
MF, Genomes are 4-state-per-base systems, which imposes a finite and discrete set of possibilities. When we have a space of possibilities W, the set of samples on said space will come from the set of subsets, of cardinality 2^W. And that seems to me the operative context. Going on to the evolutionary computing case, inherently you are dealing with a bitwise granularity, which is discrete and finite. Yes, you may work with the continuum [not least as calculus is generally handy to work with], but you are going to come back to a fine-grained, discrete and finite case. Which we should not forget. KF

PS: I should add that the exploration of possible molecular states can also be cellularised, based on the inherently discrete nature of molecules and the effective speed limit of chemical-level interactions of relevant type, ~10^-14 s.

Posted by kairosfocus, May 2, 2015 at 6:30 PM (PDT)
zac says: When someone says evolution is not a search, it's because there's no specific goal.

I say: Just as I said.

you say: Think of it as simply trying to keep one's balance on a constantly shifting landscape.

Again, just as I said. We once again seem to be in agreement on a minor point, but instead of noting that and simply moving on to more important stuff you insist on rephrasing. You do it ad nauseam here, and just as often you will slip in a red herring if you can, to try and change the subject entirely. We end up with comment after comment, and nothing substantial is ever addressed, clogging threads that could be interesting with blah blah blah. I blame myself for continuing to try with you when there are others that are honest critics.

peace

Posted by fifthmonarchyman, May 2, 2015 at 6:15 PM (PDT)
Mapou 143- Nice job. Just because human design can violate a nested hierarchy doesn't mean all designs have to. OTOH, gradual evolution will always produce transitional forms that will blur the nice, neat lines of distinction nested hierarchies require.

Posted by Joe, May 2, 2015 at 6:09 PM (PDT)
fifthmonarchyman- When someone says evolution is not a search and that's because there's no specific goal, they are really telling you that it is all contingent serendipity and that it should never be mistaken for a scientific concept. :cool:

Posted by Joe, May 2, 2015 at 6:07 PM (PDT)
Liddle:
And the pattern of biological characteristics, interestingly enough, is just the pattern you’d predict from evolutionary processes, and not from human designers, namely, nested hierarchies; no wholescale transfer of solutions from one lineage to another; retrofits rather than radical redesigns.
This is obviously not true. A nested hierarchy is what we expect from human intelligent design over time, with some multiple inheritance sprinkled in. In fact, almost all modern software programming languages enforce a strictly nested class hierarchy. C++ allows multiple inheritance, but it is used sparingly in the business.

Posted by Mapou, May 2, 2015 at 6:04 PM (PDT)
fifthmonarchyman: I give up

It's not that difficult. When someone says evolution is not a search, it's because there's no specific goal. Think of it as simply trying to keep one's balance on a constantly shifting landscape.

Posted by Zachriel, May 2, 2015 at 6:00 PM (PDT)
