
NEWS FLASH: Dembski’s CSI caught in the act


Dembski’s CSI concept has come under serious question, dispute and suspicion in recent weeks here at UD.

After diligent patrolling, the cops announce a bust: acting on tips from unnamed sources, they have caught the miscreants in the act!

From a comment in the MG smart thread, courtesy Dembski’s NFL (2007 edn):

___________________

>>NFL as just linked, pp. 144 & 148:

144: “. . . since a universal probability bound of 1 in 10^150 corresponds to a universal complexity bound of 500 bits of information, (T, E) constitutes CSI because T [i.e. “conceptual information,” effectively the target hot zone in the field of possibilities] subsumes E [i.e. “physical information,” effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function . . . In virtue of their function [a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways . . . “

Here we see all the suspects together caught in the very act.

Let us line up our suspects:

1: CSI,

2: events from target zones in wider config spaces,

3: joint complexity-specification criteria,

4: 500-bit thresholds of complexity,

5: functionality as a possible objective specification,

6: biofunction as specification,

7: origin of CSI as the key problem of both origin of life [Eigen’s focus] and of evolution (origin of body plans, species, etc.),

8: equivalence of CSI and complex specification.

Rap, rap, rap!

“How do you all plead?”

“Guilty as charged, with explanation your honour. We were all busy trying to address the scientific origin of biological information, on the characteristic of complex functional specificity. We were not trying to impose a right wing theocratic tyranny nor to smuggle creationism in the back door of the schoolroom your honour.”

“Guilty!”

“Throw the book at them!”

CRASH! >>

___________________

So, now we have heard from the horse’s mouth.

What are we to make of it, in light of Orgel’s conceptual definition from 1973 and the recent challenges to CSI raised by MG and others?

That is:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

And, what about the more complex definition in the 2005 Specification paper by Dembski?

Namely:

define ϕS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as  [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases ϕS(t) and also by the maximum number of binary search-events in our observed universe 10^120]

χ = – log2[10^120 ·ϕS(T)·P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, writing D2 for ϕS(T), Ip = – log2(p) and K2 = log2(D2), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of information for samples from a target zone T, on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits; a threshold corresponding to the 10^120 bit operations available in the observed universe.
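
To make the reduction from eqn n1 to eqn n4 concrete, here is a minimal sketch in Python. The particular values plugged in for ϕS(T) and P(T|H) are assumptions chosen purely for illustration, not figures from Dembski:

```python
import math

LLOYD_BOUND_BITS = math.log2(1e120)  # ~398.6 bits, since 10^120 ~ 2^398

def chi_direct(phi_s_t, p_t_given_h):
    """Eqn n1: Chi = -log2[10^120 * phi_S(T) * P(T|H)], evaluated in log space."""
    return -(LLOYD_BOUND_BITS + math.log2(phi_s_t) + math.log2(p_t_given_h))

def chi_reduced(phi_s_t, p_t_given_h):
    """Eqn n4: Chi = Ip - (398 + K2), with Ip = -log2(p) and K2 = log2(phi_S(T))."""
    i_p = -math.log2(p_t_given_h)
    k2 = math.log2(phi_s_t)
    return i_p - (LLOYD_BOUND_BITS + k2)

# Illustrative inputs only (assumed for the sketch):
phi, p = 1e20, 1e-180
print(chi_direct(phi, p))   # ~132.9 bits
print(chi_reduced(phi, p))  # same value, confirming the algebra
```

Both forms return the same number; that numerical agreement is all the re-presentation above claims.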

5 –> Where also, K2 = log2[ϕS(T)] is a further increment to the threshold, which in VJT’s treatment peaks at about 100 further bits (since he replaces ϕS(T) with the constant 10^30 ~ 2^100). In short, VJT’s CSI-lite is an extension and simplification of the Chi-metric. He explains in the just linked (and building on the further linked):

The CSI-lite calculation I’m proposing here doesn’t require any semiotic descriptions, and it’s based on purely physical and quantifiable parameters which are found in natural systems. That should please ID critics. These physical parameters should have known probability distributions. A probability distribution is associated with each and every quantifiable physical parameter that can be used to describe each and every kind of natural system – be it a mica crystal, a piece of granite containing that crystal, a bucket of water, a bacterial flagellum, a flower, or a solar system . . . .

Two conditions need to be met before some feature of a system can be unambiguously ascribed to an intelligent agent: first, the physical parameter being measured has to have a value corresponding to a probability of 10^(-150) or less, and second, the system itself should also be capable of being described very briefly (low Kolmogorov complexity), in a way that either explicitly mentions or implicitly entails the surprisingly improbable value (or range of values) of the physical parameter being measured . . . .

my definition of CSI-lite removes Phi_s(T) from the actual formula and replaces it with a constant figure of 10^30. The requirement for low descriptive complexity still remains, but as an extra condition that must be satisfied before a system can be described as a specification. So Professor Dembski’s formula now becomes:

CSI-lite = -log2[10^120 · 10^30 · P(T|H)] = -log2[10^150 · P(T|H)] . . . eqn n1a

. . . .the overall effect of including Phi_s(T) in Professor Dembski’s formulas for a pattern T’s specificity, sigma, and its complex specified information, Chi, is to reduce both of them by a certain number of bits. For the bacterial flagellum, Phi_s(T) is 10^20, which is approximately 2^66, so sigma and Chi are both reduced by 66 bits. My formula makes that 100 bits (as 10^30 is approximately 2^100), so my CSI-lite computation represents a very conservative figure indeed.

Readers should note that although I have removed Dembski’s specification factor Phi_s(T) from my formula for CSI-lite, I have retained it as an additional requirement: in order for a system to be described as a specification, it is not enough for CSI-lite to exceed 1; the system itself must also be capable of being described briefly (low Kolmogorov complexity) in some common language, in a way that either explicitly mentions pattern T, or entails the occurrence of pattern T. (The “common language” requirement is intended to exclude the use of artificial predicates like grue.) . . . .

[As MF has pointed out] the probability p of pattern T occurring at a particular time and place as a result of some unintelligent (so-called “chance”) process should not be multiplied by the total number of trials n during the entire history of the universe. Instead one should use the formula (1–(1-p)^n), where in this case p is P(T|H) and n=10^120. Of course, my CSI-lite formula uses Dembski’s original conservative figure of 10^150, so my corrected formula for CSI-lite now reads as follows:

CSI-lite = -log2[1 - (1 - P(T|H))^(10^150)] . . . eqn n1b

If P(T|H) is very low, then this formula will be very closely approximated [HT: Giem] by the formula:

CSI-lite = -log2[10^150 · P(T|H)] . . . eqn n1c
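
As a quick numerical check on how closely eqn n1c tracks eqn n1b when P(T|H) is very low, here is a small Python sketch; the value used for P(T|H) is an arbitrary assumption for the illustration:

```python
import math

N_TRIALS = 1e150  # Dembski's conservative bound, as used in eqn n1b above

def csi_lite_exact(p):
    """Eqn n1b: -log2(1 - (1 - p)^N), evaluated stably in log space."""
    # (1-p)^N = exp(N * ln(1-p)); log1p/expm1 keep precision when p is tiny
    prob_at_least_once = -math.expm1(N_TRIALS * math.log1p(-p))
    return -math.log2(prob_at_least_once)

def csi_lite_approx(p):
    """Eqn n1c: -log2(N * p), the small-p approximation."""
    return -(math.log2(N_TRIALS) + math.log2(p))

p = 1e-200  # assumed value, purely for illustration
print(csi_lite_exact(p))   # ~166.1 bits
print(csi_lite_approx(p))  # ~166.1 bits: the two agree when P(T|H) is very low
```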

6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, the zone is so deeply isolated that a chance-dominated process is maximally unlikely to find it; but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed cause of information beyond such a threshold is the action of the now proverbial intelligent semiotic agents.

8 –> Even at 398 bits that makes sense as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.

9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley. Let’s augment here [Apr 17], on a comment in the MG progress thread:

Shannon measured info-carrying capacity, towards one of his goals: metrics of the carrying capacity of comms channels — as in who was he working for, again?

CSI extended this to meaningfulness/function of info.

And in so doing, observed that this — due to the required specificity — naturally constricts the zone of the space of possibilities actually used, to island[s] of function.

That specificity-complexity criterion links:

I: an explosion of the scope of the config space to accommodate the complexity (as every added bit DOUBLES the set of possible configurations),  to

II: a restriction of the zone, T, of the space used to accommodate the specificity (often to function/be meaningfully structured).

In turn that suggests that we have zones of function that are ever harder for chance based random walks [CBRW’s] to pick up. But intelligence does so much more easily.

Thence, we see that if you have a metric for the information involved that surpasses a threshold beyond which a CBRW is no longer a plausible explanation, then we can confidently infer to design as best explanation.

Voila, we need an info-beyond-the-threshold metric. And, once we have a reasonable estimate of the direct or implied specific and/or functionally specific (especially code-based) information in an entity of interest, we have an estimate of, or credible substitute for, the value of – log2(P(T|H)). In particular, if the value of information comes from direct inspection of storage capacity and of the patterns of code-symbol use leading to an estimate of relative frequency, we may evaluate the average [functionally or otherwise] specific information per symbol used. This is a version of Shannon’s weighted average information per symbol H-metric, H = – Σ pi * log(pi), which is also known as informational entropy [there is an arguable link to thermodynamic entropy, cf here] or uncertainty.
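
For concreteness, a minimal Python sketch of that weighted-average measure; the toy frequency tables are assumptions made up for the illustration, not observed data:

```python
import math

def shannon_h(freqs):
    """Average information per symbol, H = -sum(p_i * log2(p_i)), in bits/symbol."""
    return -sum(p * math.log2(p) for p in freqs.values() if p > 0)

# Toy tables (assumed): a biased, English-like usage vs a flat-random 4-symbol alphabet
biased = {"e": 0.12, "t": 0.09, "a": 0.08, "other": 0.71}
uniform_4 = {s: 0.25 for s in "ACGT"}

print(shannon_h(biased))     # ~1.3 bits/symbol: biased usage lowers the average
print(shannon_h(uniform_4))  # exactly 2 bits/symbol: the flat-random maximum for 4 states
```

Multiplying such an average info per symbol by the number of symbols actually used gives an Ip that can feed the beyond-the-threshold forms that follow.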

As in (using Chi_500 for VJT’s CSI_lite [UPDATE, July 3: and S for a dummy variable that is 1/0 accordingly as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W]):

Chi_500 = Ip*S – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a

[UPDATE, July 3: So, if we have a string of 1,000 fair coins, and toss at random, we will by overwhelming probability expect to get a near 50-50 distribution typical of the bulk of the 2^1,000 possibilities W. On the Chi_500 metric, I would be high, 1,000 bits, but S would be 0, so the value for Chi_500 would be – 500, i.e. well within the possibilities of chance. However, if we came to the same string later and saw that the coins somehow now had the bit pattern of the ASCII codes for the first 143 or so characters of this post, we would have excellent reason to infer that an intelligent designer, using choice contingency, had intelligently reconfigured the coins. That is because, using the same I = 1,000 capacity value, S is now 1, and so Chi_500 = 500 bits beyond the solar system threshold. If the 10^57 or so atoms of our solar system, for its lifespan, were to be converted into coins and tables etc., and tossed at an impossibly fast rate, it would be impossible to sample enough of the possibilities space W to have confidence that something from so unrepresentative a zone T could reasonably be explained on chance. So, as long as an intelligent agent capable of choice is possible, choice — i.e. design — would be the rational, best explanation on the sign observed: functionally specific, complex information.]
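
A minimal Python sketch of that Chi_500 test for the coin-string example; note that the specificity flag S is a judgement supplied by the analyst on inspection, not something computed here:

```python
def chi_500(i_bits, s):
    """Eqn n5: Chi_500 = Ip*S - 500, bits beyond the solar-system threshold."""
    return i_bits * s - 500

# 1,000 fair coins = 1,000 bits of storage capacity
print(chi_500(1000, s=0))  # -500: a typical near-50/50 toss is unspecific, chance suffices
print(chi_500(1000, s=1))  # +500: coins spelling out ~143 ASCII characters are specific
```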

10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework. Excerpting:

Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
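
A minimal Python sketch of that per-site calculation; the toy two-column alignment is an assumption for illustration only, not data from Durston’s Table 1:

```python
import math
from collections import Counter

MAX_FITS_PER_SITE = math.log2(20)  # 4.32 bits/site for 20 equiprobable amino acids

def site_fits(column):
    """Functional bits at one aligned site: log2(20) - H(Xf) for that column."""
    counts = Counter(column)
    total = len(column)
    h = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return MAX_FITS_PER_SITE - h

def protein_fits(columns):
    """Whole-protein FSC: the sum of the per-site values over all aligned sites."""
    return sum(site_fits(col) for col in columns)

# Toy alignment: two sites across five sequences (illustrative only)
conserved_site = "LLLLL"  # fully conserved -> ~4.32 fits
variable_site = "LIVAM"   # five different residues -> ~2.0 fits
print(protein_fits([conserved_site, variable_site]))  # ~6.3 fits in total
```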

11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log-probability metric for information. That is, they use Shannon’s H, the average information per symbol, and address shifts in it from a ground to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent, and Corona S2 would also pass the X metric’s far more stringent threshold right off as a single protein. (Think about the cumulative fits metric for the proteins for a cell . . . )

In short, one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage-unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].
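
The “beyond the threshold” step in results n7, and the bits-per-symbol reduction just mentioned, are simple arithmetic; a short Python sketch using only the figures quoted above from Durston’s Table 1:

```python
THRESHOLD = 500  # bits; the solar-system threshold used in eqn n5

# (length in AA, Fits) as cited above from Durston et al., Table 1
proteins = {"RecA": (242, 832), "SecY": (342, 688), "Corona S2": (445, 1285)}

for name, (aa_len, fits) in proteins.items():
    print(f"{name}: {fits - THRESHOLD} bits beyond the threshold, "
          f"{fits / aa_len:.2f} functional bits per AA site")
# RecA: 332 beyond (3.44 bits/AA); SecY: 188 beyond (2.01); Corona S2: 785 beyond (2.89)
```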

12 –> I guess I should not leave off the simple, brute force X-metric that has been knocking around UD for years.

13 –> The idea is that we can judge information in or reducible to bits, as to whether it is or is not contingent and complex beyond 1,000 bits. If so, C = 1 (and if not C = 0). Similarly, functional specificity can be judged by seeing the effect of disturbing the information by random noise [where codes will be an “obvious” case, as will be key-lock fitting components in a Wicken wiring diagram functionally organised entity based on nodes, arcs and interfaces in a network], to see if we are on an “island of function.” If so, S = 1 (and if not, S = 0).

14 –> We then look at the number of bits used, B — more or less the number of basic yes/no questions needed to specify the configuration [or, to store the data], perhaps adjusted for coding symbol relative frequencies — and form a simple product, X:

X = C * S * B, in functionally specific bits . . . eqn n8.

15 –> This is of course a direct application of the per-aspect explanatory filter (cf. discussion of the rationale for the filter here in the context of Dembski’s “dispensed with” remark), and the value in bits for a large file is the familiar number we commonly see, such as a Word doc of 384 k bits. So, more or less, the X-metric is actually quite commonly used with the files we toss around all the time. That also means that, on billions of test examples, FSCI in functional bits beyond 1,000 as a threshold of complexity is an empirically reliable sign of intelligent design.
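
A minimal Python sketch of eqn n8; here the yes/no judgements behind C and S are supplied by the investigator as described in points 13 and 14, and the file size is an illustrative value:

```python
def x_metric(bits_used, contingent_and_complex, functionally_specific):
    """Eqn n8: X = C * S * B, in functionally specific bits."""
    c = 1 if contingent_and_complex else 0  # complex/contingent beyond the 1,000-bit threshold?
    s = 1 if functionally_specific else 0   # on an island of function (random noise breaks it)?
    return c * s * bits_used

# A 384 k-bit Word document judged both complex and functionally specific (illustrative)
print(x_metric(384_000, True, True))   # 384000 functionally specific bits
# The same storage capacity filled with random noise: complex but not specific
print(x_metric(384_000, True, False))  # 0
```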

______________

All of this adds up to a conclusion.

Namely, that there is excellent reason to see that:

i: CSI and FSCI are conceptually well defined (and are certainly not “meaningless”),

ii: they trace to the work of leading OOL researchers in the 1970’s,

iii: they have credible metrics developed on these concepts by inter alia Dembski and Durston, Chiu, Abel and Trevors, metrics that are based on very familiar mathematics for information and related fields, and

iv: they are in fact — though this is hotly denied and fought tooth and nail — quite reliable indicators of intelligent cause where we can do a direct cross-check.

In short, the set of challenges raised by MG over the past several weeks has collapsed. END

Comments
F/N: Onlookers, it seems I need to show why I said what I said at 194, again, by way of correction. So, let me clip the substance of that comment: ____________ >> [KF:] Pardon, again; are you aware of the size of actual genomes? When you [Dr Bot] say:
[Dr Bot, 192:] With the Golem encoding the complexity of the organism is directly related to the genotype. If you want 100 legs and a segmented body (like a millipede) you need to encode each segment and each leg explicitly. The size of the genome and the resulting search space becomes impossibly large and evolution can hit a barrier but when you have biological like indirect encodings and development you can build complex structures like that with very simple genomes
[KF, answering:] Real world genomes start at 100+ k to 1 mn bases, and for multicellular body plans we are looking at — dozens of times over — 10mn+ new base pairs. Genomes then run up to billions of bases in a well organised reference library. Just 100 k base pairs is a config space of 4^100,000 ~ 9.98 * 10^60,205 possibilities. The P[lanck]-time Q[uantum]-states of the observed cosmos across its lifespan, would amount to no more than 10^150, a mere drop in that bucket. “Simple genome” is a grossly simplistic strawman. And, the hinted-at suggestion that by using in effect a lookup table as a genome you have got rid of the need to code the information and the regulatory organisation, is another misdirection. You have simply displaced the need to code the algorithms that do the technical work. Notice, genomes are known to have protein etc coding segments, AND regulatory elements that control expression, all in the context of a living cell that has the machinery to make it work. The origin of the cell [metabolising and von Neumann self replicator], and its elaboration through embryogenesis into varied functional body plans have to be explained on chance plus necessity and confirmed by observation if the evo mat view is to have any reasonable foundation in empirically based warrant. >> _____________ I trust the point, and its context are now sufficiently clear. Notice, especially, the highlighted concession in 192:
The size of the genome and the resulting search space becomes impossibly large and evolution can hit a barrier
Let us ask: is 100 - 1,000+ kbits worth of genetic info to get to 1st life "impossibly large," and is 10 mn + dozens of times over to get to novel body plans "impossibly large"? I think the question answers itself, once we realise that just 100 k bits worth of stored info codes for up to 9.98 * 10^60,205 possible configs [the 10^80 atoms of our observed cosmos across its thermodynamic lifespan would only undergo 10^150 P-time states, where ~ 10^30 states are needed for the fastest chemical reactions], and shows the material point. That is what you are not being told in the HS or College classroom, what you are not reading in your textbooks, it is what museum displays will not tell you, it is what Nat Geog or Sci Am or Discover Mag will not tell you in print or on web or on TV, and it is what the NCSE and now BCSE spin doctors are doing their level best to make sure you never hear in public. GEM of TKI ++++++++++ Pardon: auto-termination.kairosfocus
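
A quick check of that arithmetic in Python; the inputs are just the figures already given above:

```python
import math

# 100,000 base pairs at 4 states each: log10 of the number of configurations
log10_configs = 100_000 * math.log10(4)
print(log10_configs)        # ~60205.999, i.e. ~9.98 * 10^60,205 configurations
print(log10_configs - 150)  # orders of magnitude by which this exceeds 10^150 P-time Q-states
```
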
May 5, 2011, 02:39 AM PDT
Dr Bot: I am sorry, this is a major thread on a key issue regarding the CSI metric. I do not think a late tangential debate on another subject will help much, especially when my pointing out a problem that seems to be recurring is turning into an occasion for turnabout rhetorical tactics. Sufficient has been said to underscore that the Golem case shows -- inadvertently -- how hard it is to try to get to complex systems by random walk searches with trial and error on success. That is enough to underscore that it shows the significance of the islands of function problem. Adaptation of a body plan is a different kettle of fish from getting to the functional plan in the first place. There is no informational free lunch. Good day GEM of TKIkairosfocus
May 4, 2011, 11:41 AM PDT
I said:
The point is specific – Other explanations exist for the issues raised by their particular experiment so it cannot be considered “an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold”.
I addressed your specific claim regarding Golem and the issues they encountered. You said:
Can you show that the sort of mechanisms that may successfully modify an already functioning body plan, can generate significantly different ones, including the different control mechanisms? [Recall my note on 2 vs 3 vs 6 or 8 legged walking gaits.]
Can you? My point is specific to Golem and their implementation. It is valid regardless of how or if biological evolution works. You are not actually addressing my comment, just deflecting to a different issue. My point is specific to Golem and their implementation. If you want to get into detail then there is plenty of research into this - perhaps you should take a look! As I already outlined, an indirect encoding or developmental mapping scheme can generate repeating structures like limbs, including control systems. Indeed some promising work is in adaptive controllers that configure themselves to provide effective control. These systems (often based on neural nets, designed with GA's) do not need specific architectures tuned to a specific morphology, they can be fairly generic but adapt during development - all inspired by biology! In the context of evolutionary robotics (which was basically what Pollack et al. were doing) an indirect encoding scheme, development and adaptive controllers can overcome the complexity barrier they encountered, and they can mean that small genotype changes translate into major prototypic differences (or if you prefer - radically different body plans) and major increases in complexity. But of course we are talking about designed experiments, evolutionary algorithms as design tools working from a designed starting point - crude approximations of living systems. They are not OOL experiments, they are IIC experiments (Increase In Complexity) so they already start on an island of function (or perhaps a continent - it depends on the encoding and development scheme!) It is easy to look at Pollack et.al's statement that they encountered a complexity barrier and claim that it proves something fundamental about biology. A proper scientific approach is to try and understand why they encountered this barrier, and if it even applies to biology. Improving the Golem encoding scheme might jump the complexity barrier, but there may be other barriers, and even with indirect mappings Golem may still be too far removed from biology to make direct comparisons.
The point is specific – Other explanations exist for the issues raised by their particular experiment so it cannot be considered “an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold”.
You keep deploying your default and repetitive argument which seems to amount to "if you can't explain the origin of life then you can't explain anything" I wish you would limit yourself to dealing with specific arguments on their merits. Going back a few posts:
You still have not cogently responded to the evidence that the issue is to get TO isolated islands of function in configuration space, rather than relatively minor adaptations within islands of function. And it is the former challenge that the Golem project underscores.
The issue I addressed was about the encoding scheme used in Golem. If you change the encoding scheme you turn a small island of function into a large continent. I have not addressed the issue of OOL because the issue of OOL is not the issue I was addressing, nor was it a goal of the Golem project. Golem starts with minimal function and looks at what descent with modification can do, it was not an OOL research project, it was not concerned with getting to an island of function so the problems they encountered do not underscore the problem of getting to an island of function.
... immaterial distractors lend themselves to strawman caricatures and ad hominems, thence atmosphere-poisoning. As you have already been through with me to a point where you had to half-apologise on trying to quit smoking if I recall correctly.
Ah, an attack on my person instead of just my arguments - there's a word for that ... If you recall, I over-reacted to a comment of yours which I took as a personal attack, and then apologised after some reflection. Perhaps you feel unable to forgive me? Don't worry, I forgive you ;) Your comments and demands are immaterial to my point about Golem, they are a distraction. The issue I addressed was about the encoding scheme used in Golem. And yes – some genomes are quite complex. The issue I addressed was about the encoding scheme used in Golem.DrBot
May 4, 2011, 07:10 AM PDT
PS: And, real genomes are quite, quite complex.
kairosfocus
May 4, 2011, 04:26 AM PDT
Dr Bot: The following excerpt aptly captures why I pointed out the significance of the material problem of getting to islands of function -- macro evo, not micro evo:
Why should I?
Because, that was the material point; and because immaterial distractors lend themselves to strawman caricatures and ad hominems, thence atmosphere-poisoning. As you have already been through with me to a point where you had to half-apologise on trying to quit smoking if I recall correctly. The Golem project illustrated -- as a case in point, not as the proof of all proofs -- the empirically observable challenge of getting to islands of function for chance and necessity. That is what I highlighted. In short, your objection that in effect I am not addressing the [limited]power of micro evo to adapt an already functioning body plan, is irrelevant, and of course feeds into the strawman and ad hominem problem you have already gone through with me. Please, let us not go down that fruitless path. Can you show that the sort of mechanisms that may successfully modify an already functioning body plan, can generate significantly different ones, including the different control mechanisms? [Recall my note on 2 vs 3 vs 6 or 8 legged walking gaits.] What about the first body plan? What about embryological feasibility? GEM of TKIkairosfocus
May 4, 2011, 04:25 AM PDT
You still have not cogently responded to the evidence that the issue is to get TO isolated islands of function in configuration space, rather than relatively minor adaptations within islands of function.
Why should I? I was commenting on your claim:
In short, this is an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold, and irreducible complexity.
By pointing out that their encoding scheme and the genotype to phenotype mapping they used may be one reason why their particular experiment got stuck at a local maxima with regards the complexity of the agents that were produced. The point is specific - Other explanations exist for the issues raised by their particular experiment so it cannot be considered "an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold". You could have simply accepted this valid observation but instead you, as always it seems, shifted the goal posts and demanded that I proove something else, that was not part of the point I was making. Perhaps I should re-state my general position - which I have made many times on this website but which you always seem to ignore: As a theist I believe that we are the product of design, as a scientist I am agnostic about the method of creation and skeptical about claims surrounding abiogenesis - from both sides. Naturalistic OOL is compatible with my theistic beliefs but I have no ideology that demands it, or anything else. When you ask me to account for OOL without design you are asking me to explain how something that I don't believe happened, happened. This gets very tiresome!DrBot
May 4, 2011, 01:22 AM PDT
PS: It should be clear from the metric for H, that once we have contingency, a string WILL have a non-zero value for Shannon information. In the case where one symbol has probability 1 and the rest have probability zero, H reduces to - log (1) = 0. There are no biologically relevant strings of AA's or nucleic acid bases that have zero values for the H metric. Similarly, it should be clear that the H metric standing by itself is not a good measure of what is involved in functionality or meaning. Hence the significance of Dembski's zones of interest T in a wider config space, and metrics that allow us to infer from degree of isolation, criteria for accepting some strings as most likely designed. This is without loss of generality, as more complex cases can be reduced to strings.kairosfocus
May 4, 2011, 01:07 AM PDT
Mung: Perhaps then, we can work together to work out how to express what needs to be said in a way that will communicate clearly enough to those who do not have a specific background in communication systems. Okay, let's try a beginning: 1: Information, conceptually, is:
1. Facts, data, or instructions in any medium or form. 2. The meaning that a human assigns to data by means of the known conventions used in their representation. [Dictionary of Military and Associated Terms. US Department of Defense 2005.] Information in its most restricted technical sense is an ordered sequence of symbols that record or transmit a message. It can be recorded as signs, or conveyed as signals by waves. Information is any kind of event that affects the state of a dynamic system. As a concept, however, information has numerous meanings.[1] Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy . . . [Wiki art., Information]
2: In addition, we could see that the specific organisation of a functional, dynamic entity [e.g. a car engine, a computer, a match with head and stick] is implicitly information-rich, and may be reduced to strings of symbols according to rules for describing the associated network of components. [Think of how a blueprint is represented in a CAD program.] this is obviously relevant to the coded symbol strings in DNA, the resulting AA sequences in protein chains, and the wider complex functional organisation of the living cell and the organism with a complex body plan. 3: When Hartley and others investigated information towards quantifying it in the 1920's - 40's, they found that the easiest way to do so would be to exploit the contingency and pattern of appearance of symbols in messages: there is a statistical distribution in messages of sufficient length in aggregate, e.g. about 1 in 8 letters in typical English text will be an E. 4: Accordingly, one may investigate the message as a statistical phenomenon, and isolate the observed frequency distribution of symbols. 5: From this, we may see that we are dealing with probabilities, as the likelihood of a letter e is 1 in 8 is similar to the odds of a die being tossed and coming up 3 is 1 in 6. So, the probability of the first is about 0.12, and the latter is about 0.17. 6: Letters like X or Z are far less likely, and so it is intuitively clear that they give much more information: info rises as probability falls. So, an inverse probability measure is close to what we want for a metric of information. 7: We also want an additive measure, so that we can add up information in successive symbols. 8: The best reasonably simple metric for this is a logarithmic one, and a log of 1/p, will be a negative log probability that will add up (as the already referenced always linked online note discusses). 9: That is what Hartley advised, and it is what Shannon took up. So the basic information metric in use in the field is based on a negative log of the frequency of occurrence of a given symbol in messages. 10: Citing the fairly short discussion in Taub and Schilling again:
Let us consider a communication system in which the allowable messages [think: e.g. ASCII text alphanumerical symbols] are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My nb: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 1/pk (13.2-1) [Princs of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2.]
11: Unpacking, the quantity of information in a message element k, in bits, is logarithm [to base 2 here] of the inverse of probability, which is equal to: Ik = (def) log2 1/pk Ik = log2 1 - log2 pk Ik = 0 - log2 pk Ik = - log2 pk, in bits 12: We note that any positive number can be seen as a particular number [here 2, sometimes 10, sometimes e] raised to a power, called the log: 10^3 = 1,000, so log10 (1,000) = 3 13: Likewise the information represented by an E in typical English, is Ie = - log2 (0.12) = 3.06 bits. 14: There are usually 20 amino acids in a protein chain, and since they are more or less not chemically constrained in chaining, a simple value for info per AA would be, on 5% odds per AA: Iaa = - log 2 (0.05) = 4.32 bits per AA 15: While that basic chemical fact allows that to be a baseline measure, in fact in functioning protein families, the AA's are not equally likely [here, the issue is not just the chemistry of chaining, but what is needed to get proper folding and function in protein space], and that is what Durston et al turned into their more complex measures. 16: For more complex cases, it is useful to make an average information per symbol measure across the set of symbols used, using a weighted average: H = - [SUM on i] pi * log pi 17: This measure is what is often called Shannon information, and it is related to the average info per symbol that is in messages sent down a channel [think, TCP/IP strings sent down a phone line to your DSL modem]. 18: Now, going further, such symbol strings are to be found in communication systems. Such may be explicitly organised around the sort of block diagram you have seen, e.g. a radio network, or how your PC is hooked up to the Internet using the TCP/IP protocol which is tied to the "layercake" approach. (That is why we talk of Bridges, Routers, and Gateways, they have to do with levels of the coding and protocol layercake.] 19: But that does not have to be so. To move from source to sink, info conceptually needs to be encoded and/or modulated, transmitted across a channel, and received, then demodulated and/or decoded, before it is in a form useful in the sink or destination. These are conceptual stages, not so much physical blocks. 20: In biological systems, that sort of process is going on all the time, and there are many implicit communication systems in such organisms. 21: Notice, so far we have simply measured probability of symbols in strings, and have not bothered about the functionality of meaningful messages. By this standard, it can be mathematically shown that a flat random distribution of symbols [similar to tossing a fair die] would give the highest possible value of H for a string. But such would be meaningless. An oddity of the metric. 22: In real world, functional messages, symbols are not flat random equiprobable, and for us to be able to communicate, we must be able to select and issue symbols, i.e. a string that must only be AAAAAA . . . has no contingency and though orderly is equally uninformative. We have no surprise to see an A as A is the forced value. - log2 (1) = 0. 23: So we see the significance of the sort of modelling Dembski et al have done: they recognise that symbol strings come from an in principle field of possible strings, the config space. 24: But only configs from a zone of interest will be functional. So, if we can describe what that zone of interest T is like, and we observe a given event E from it, we know we are in a special zone. 
25: if such islands of function in large config spaces that are dominated by a sea of non-function, are sufficiently rare, it becomes unreasonable to think you could get there by chance. 26: The odds of two dice coming up 6-6 are 1 in 36, which is within reason. The odds of 400 dice all reading 6 by chance, are 1 in 6^400, far less likely, in fact beyond the reasonable reach of chance on the gamut of our observed cosmos. If you see this, the best explanation is that someone organised the dice to read 6. 27: Similarly, odds of DNA strings or AA strings being functional by chance can be worked out, and information values assigned. [Or, we can simply look at the way the strings are set up [4 states per symbol 20 states per symbol] and see that there is a given storage capacity, and/or modify for the observed patterns of symbols frequencies.] 28: We can then deduce metrics for the info stored in such strings. 29: We can then look at the degree of isolation by applying the sort of threshold metric that Dembski et al use, for strings from islands of function, and if we see we are beyond certain thresholds of complexity, it is reasonable to infer that we are looking at something that is best explained on intelligence. Just like with 400 dice all reading 6. __________ Does this help? GEM of TKIkairosfocus
May 4, 2011, 01:00 AM PDT
Dr Bot: You still have not cogently responded to the evidence that the issue is to get TO isolated islands of function in configuration space, rather than relatively minor adaptations within islands of function. And it is the former challenge that the Golem project underscores. It is becoming increasingly evident that Darwinism supporters do not appreciate what is involved in putting together a complex, multipart, integrated functional entity, where the individual parts have to be right, have to fit with their neighbours, and have to be parts in a much broader integrated whole; whether within the cell or in the larger multicellular organism based on embryogenesis of a zygote that then transforms itself into a body plan. In turn, all of this is based on the technology of life, whereby we have cells that integrate metabolic machines and a von Neumann, stored-code-based self-replicating facility. All of this is expressed in a Wicken wiring diagram: a complex, functional, information-rich organised entity. GEM of TKI
kairosfocus
May 3, 2011, 11:50 PM PDT
PS: Were my remarks on info helpful?
I'm going to be honest here and say not really, though I think that's my fault and not yours. :) I graduated high school in three years and never got much beyond basic algebra and geometry. I planned to be a doctor, not an engineer. Funny thing is, I think it was my math competency that got me into the Navy as an engineer rather than as a corpsman. Isn't life funny? I just don't have a lot of the tools in my mental toolbox yet to understand a lot of this. So I'm trying to start simply and build up. So for a start I wanted to know in what way Shannon Information is applicable to biology. Does Shannon Information require a communications system? If you have a nucleotide sequence, in what way is it legitimate to claim that sequence has no Shannon Information? (Or that it does contain Shannon Information?) How would you tell if there was an increase in the Shannon Information contained in the sequence?
Mung
May 3, 2011, 11:32 AM PDT
Kairos thanks for pointing how genome is not a simple entity. Quite the opposite is true in the light of recent findings: 1.Cell needs the whole DNA (98% is "junk") otherwise it wouldn't spend tremendous resources to copy-replicate it. 2. Scientists from Harvard found the DNA fills certain volume inside nucleus in shape of Peano curve. That provides for well organized structure instead of chaotic tangle. 3. DNA Skittle visualization tool ( free download) clearly shows repetitive patterns interchanging with randomly distributed nucleotides in non-coding DNA. Also, interference and modulation type patterns are visible. 4. One dimensional string could be periodically marked for bending and assembly of two dimensional matrix (like QR code). Next it is possible to layer (stack) multiple two dimensional data matrices to fill volume. 5. Combining (2),(3) and (4) it is possible to envision form of data storage as a purpose for non-coding DNA. It is possible we are dealing with three dimensional chemical data storage system. 6. Similar to holographic recording I would expect huge capacity and inherent information redundancy. Smaller, broken off section of holographic recording will show the whole picture but with lower resolution. I would also expect powerful dynamic encryption as the basic information should be kept away from irresponsible users( humans). Mung thanks for detailed analysis of evEugen
May 3, 2011, 08:24 AM PDT
KF, I responded to your claim about the complexity barrier described by Pollack et. al.:
In short, this is an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold, and irreducible complexity.
I pointed out the differences between the encoding used by Golem, and that found in biology, and how this might account for the barrier they encountered. I did not claim that other barriers do not exist, or that genomes are simple, or that indirect mappings can solve all the problems of complexity. I merely offered an alternative explanation for the barrier they describe.DrBot
May 3, 2011, 06:54 AM PDT
Dr Bot: Pardon, again; are you aware of the size of actual genomes? When you say:
With the Golem encoding the complexity of the organism is directly related to the genotype. If you want 100 legs and a segmented body (like a millipede) you need to encode each segment and each leg explicitly. The size of the genome and the resulting search space becomes impossibly large and evolution can hit a barrier but when you have biological like indirect encodings and development you can build complex structures like that with very simple genomes
Real world genomes start at 100+ k to 1 mn bases, and for multicellular body plans we are looking at -- dozens of times over -- 10mn+ new base pairs. Genomes then run up to billions of bases in a well organised reference library. Just 100 k base pairs is a config space of 4^100,000 ~ 9.98 * 10^60,205 possibilities. The P-time Q-states of the observed cosmos across its lifespan, would amount to no more than 10^150, a mere drop in that bucket. "Simple genome" is a grossly simplistic strawman. And, the hinted-at suggestion that by using in effect a lookup table as a genome you have got rid of the need to code the information and the regulatory organisation, is another misdirection. You have simply displaced the need to code the algorithms that do the technical work. Notice, genomes are known to have protein etc coding segments, AND regulatory elements that control expression, all in the context of a living cell that has the machinery to make it work. The origin of the cell [metabolising and von Neumann self replicator], and its elaboration through embryogenesis into varied functional body plans have to be explained on chance plus necessity and confirmed by observation if the evo mat view is to have any reasonable foundation in empirically based warrant. GEM of TKIkairosfocus
May 3, 2011, 05:42 AM PDT
Dr Bot: Pardon: Do you see the key non-sequitur in your argument? Let me highlight:
In their system a multi legged robot requires a full description for each limb, but with a developmental system and an indirect mapping you can have one generic leg description in the genes that is repeated n times during development [and where do all these conveniently come from? THAT is the key, unanswered question . . . ] – in other words you can jump from a four legged to an eight legged morphology by changing the value of n from 4 to 8 (one mutation) . . .
Someone or something, somewhere has to put down the info to get the complex functional organisation, in detail, on the Wicken wiring diagram. Duplicating and modifying an existing functional structure is one thing, creating it de novo out of commonly available components not conveniently organised, is not. And BTW, the associated controls to move successfully on 2 vs 4 legs vs 6 vs 8 are very different. It is not just a matter of produce n legs. A 2 legged gait is very different from a 4 or a 6 or 8. (And, since 3 legs gives a stable tripod [though that in turn actually requires 6 controlled points for real stability, ask a camera tripod designer for why], it is the 6 or 8 that are simpler to use physically: stand on 3+, move 3+, repeat.) So, your counter-argument boils down to the same error made by the author of ev: ASSUMING an already functional body plan, we can modify it on a suitably nice fitness landscape, through a hill-climbing algorithm with a nice trend metric. But, you have no right to that missing step, It does not follow. The Golem project was obviously trying to get to that initial functioning body plan, and that is precisely why the experiment ran into the challenge of islands of isolated functional organisation in exponentially growing config spaces. Recall, every additional YES/NO decision node in the Wicken wiring diagram DOUBLES the number of possible configs. There is abundant evidence that once one has a nice smooth fitness pattern on an existing island of function, one may move about in it. But the real problem is to get to the shores of such an island of function. In short, we are seeing a massively begged question here. GEM of TKIkairosfocus
May 3, 2011, 05:07 AM PDT
I find the Golem project’s conclusion as at Sept 3, 2001 [no updates since then] highly interesting:
The evolutionary process appears to be hitting a complexity barrier that is not traversable using direct mutation-selection processes, due to the exponential nature of the problem. We are now developing new theories about additional mechanisms that are necessary for the synthetic evolution of complex life forms.
The other factor may be that they use direct mappings with no development - the genotype explicitly specifies the morphology of the agent. In their system a multi legged robot requires a full description for each limb, but with a developmental system and an indirect mapping you can have one generic leg discription in the genes that is repeated n times during development - in other words you can jump from a four legged to an eight legged morphology by changing the value of n from 4 to 8 (one mutation) - you can even encode the number of joints in a limb in a similar fashion so a single bit mutation can give you an extra joint in each limb. Part of the complexity barrier they encountered may be due to tese differences between their system and biology - which uses indirect mappings and development. With the Golem encoding the complexity of the organism is directly related to the genotype. If you want 100 legs and a segmented body (like a millipede) you need to encode each segment and each leg explicitly. The size of the genome and the resulting search space becomes impossibly large and evolution can hit a barrier but when you have biological like indirect encodings and development you can build complex structures like that with very simple genomes, and make drastic changes to the phenotype through minimal changes to the genotype.DrBot
May 3, 2011, 04:16 AM PDT
Mung: I find the Golem project's conclusion as at Sept 3, 2001 [no updates since then] highly interesting:
The evolutionary process appears to be hitting a complexity barrier that is not traversable using direct mutation-selection processes, due to the exponential nature of the problem. We are now developing new theories about additional mechanisms that are necessary for the synthetic evolution of complex life forms.
In short, this is an empirical confirmation of the barriers posed by functionally specific complex information beyond a threshold, and irreducible complexity. The only known mechanism to routinely surmount such an exponential isolation barrier is intelligence. This supports the point that there are isolated islands of specific function in large config spaces, and that only specific organised Wicken wiring diagram arrangements of particular components will work. Ev of course works by being WITHIN such an island of function. Meyer's remark as cited by ENV is apt:
[Robert] Marks shows that despite claims to the contrary by their sometimes overly enthusiastic creators, algorithms such as Ev do not produce large amounts of functionally specified information "from scratch." Marks shows that, instead, such algorithms succeed in generating the information they seek by providing information about the desired outcome (the target) from the outset, or by adding information incrementally during the computer program's search for the target. ... In his critique of Ev as well as other evolutionary algorithms, Marks shows that each of these putatively successful simulations of undirected mutation and selection depends on several sources of active information. The Ev program, for example, uses active information by applying a filter to favor sequences with the general profile of a nucleotide binding site. And it uses active information in each iteration of its evaluation algorithm or fitness function. (Stephen C. Meyer, Signature in the Cell, pp. 284-285 (HarperOne, 2009).)
A fitness function -- if it is a continuous function -- is an expression with an implicit infinity of values, much as the Mandelbrot set I have used above shows: there is infinite detail lurking in a seemingly simple function and algorithm to test for degree of proximity to the set proper. Once you write the function and feed in coordinates to the algorithm -- intelligently designed I must add -- you can probe to infinite depth. If one then ads a warmer-colder hill climbing routine, he can create the illusion of information emerging from nothing. But all that is happening is that one is climbing the trends on a particular type of smoothly trendy landscape. The information to do that was fed into the fitness function and the associated algorithms to map and to do a warmer-colder climb. You will recall my part B thought exercise, to plug in the set proper as a black-hole of non-function, turning the fitness landscape into an atoll with a fractal border. Now, there are infinitely many fine grained points where one following a hill-climb will without warning drop off into non-function. The predictable result: once one has a functional pattern, one will incrementally improve, up to a point, then there will be a locked in peak enforced by what happens if one goes one step too far in an unpredictable direction, i.e one is locked into a highly specialised niche. Stasis and brittleness opening up room for sudden disappearance. Which sound very familiar. Ev is not creating information out of nowhere and nothing, and it is dependent on the particular pattern of a fitness function within an island, aided by targetting via hill climbing on a distance to target metric. At most ev models some aspects of micro-evo, which is not in dispute. The real issue is not adaptation of an already functional body plan, but origin of such body plans in the face of the complexity challenge the Golem project underscored. GEM of TKI PS: Were my remarks on info helpful?kairosfocus
May 3, 2011, 03:16 AM PDT
EV Ware Dissection of a Digital Organism
Answering Critics Who Promote Tom Schneider's ev Simulation
Lee Spetner responds to Tom Schneider
Mung
May 2, 2011, 09:04 PM PDT
Darwinism holds that new genes can evolve blindly out of old genes by gene duplication, mutation and recombination under the pressure of natural selection. The strong version of panspermia holds that they cannot arise this way or any other way in a closed system. If a computer model could mimic the creation of new genes by the Darwinian method, it would establish that the process works in principle and strengthen the case for Darwinism in biology. Here we briefly discuss some evidence and arguments for the Darwinian mechanism and some for panspermia. Then we consider three well-known computer programs that undergo evolution, and one other proposal. None of them appears to create new genes. The question remains unanswered.
Can Computers Mimic Evolution?
Mung
May 2, 2011, 09:00 PM PDT
The golem@Home project has concluded. After accumulating several Million CPU hours on this project and reviewing many evolved creatures we have concluded that merely more CPU is not sufficient to evolve complexity: The evolutionary process appears to be hitting a complexity barrier that is not traversable using direct mutation-selection processes, due to the exponential nature of the problem. We are now developing new theories about additional mechanisms that are necessary for the synthetic evolution of complex life forms.
The Golem Project
Mung -- May 2, 2011 at 8:56 PM PDT
PS: In another sense, information reduces uncertainty about a situation: it tells you what is [or is likely] the case instead of what merely may be the case.
kairosfocus -- May 1, 2011 at 1:39 PM PDT
Mung: The issue is what information is, not what Shannon info is. If you look here [scroll down a tad to Fig A.1], you will see my favoured version of the comm system architecture (which is amenable to the layer-cake type model now so often used, e.g. for the Internet). Information comes from a source and is transferred to a sink, through an encoder and/or modulator, a transmitter, a channel, and a receiver with demodulator and/or decoder. Noise affects the process, and there is usually a feedback path of the same general architecture.

Now, the key metric is the one suggested by Hartley, as I showed above: take a negative log of the probability of signal elements drawn from a set of possible elements, with observed relative frequency as the measure of probability. That gives additivity to the measure [most easily seen in contexts where channel corruption is not an issue, so what is sent is what is received]. So, for symbol m_i with probability of occurrence p_i: I_i = -log p_i. (Schneider confuses himself here, by failing to recognise that this is probably the dominant usage, and by using a synonym to "correct" Dembski's usage.)

It turns out that usually some symbols are more likely than others; e.g. about 1 in 8 letters in typical English passages is an e. E therefore communicates less information than other, rarer letters. Since Shannon was especially interested in the average flow rate of info in a channel, he developed the weighted-average info-per-symbol metric, H:

H = - [SUM on i] p_i * log p_i

This has several names, one of which is Shannon info. Another is uncertainty; we see why if we understand that for a flat random distribution of states for symbols in a string, the uncertainty of the value of each member is a maximum. A third is informational entropy, which turns out -- there were decades of debate on this but it seems to be settling out now -- to be conceptually linked to thermodynamic entropy, and not just in the form of the math. It is "simply" the average info per symbol for a given set of symbols. It peaks when the symbols are equiprobable, and oddly -- but that is how the metric was defined -- that means the average information per symbol of a flat random set of glyphs [fair die or coin] is maximum. Never mind that it is meaningless. And if you move away from a flat random distribution, your average info per symbol (notice my use of this clearest term) will fall.

Dembski et al say that the ev setup is a sparse-1 setup on the targets, so that will be the case if you move from a flat random initial value to a more biased final one. Uncertainty, AKA average info per symbol, will fall as you move to such a target, simply because the symbols are no longer equiprobable. On the other hand, a message where a given symbol [say, s] MUST always be present and the other symbols are not possible gives ZERO info, as there is no "surprise" or "uncertainty" on its reception. No meaningful info, and the neg-log metric is also zero. Order [in this sense] is as free of info -- in the functional sense -- as is the meaninglessness of an absolutely random string.

Our concern is the communication of meaningful symbol strings that function in some relevant context. They are constrained to be in certain configs, from the set of all in-principle possible configs of strings of the same length. But these configs are aperiodic, whilst not being random -- just like the strings of ASCII characters in this paragraph.
That is, they are functionally specific and exhibit neither randomness nor forced, meaninglessly repetitive order, but functional, complex, specific, aperiodic, meaningful organisation. Thus we easily arrive at the constraint that these messages will come from defined and confined zones in the space of possible configs: islands of function, in light of rules of function and the purpose that constrains how the rules are used.

Now, near as I can follow, Schneider's search strings start with arbitrary values and then move in successive constrained random-walk steps that are rewarded on warmer/colder, towards the targets -- where the targets are allowed to move around a bit too (but obviously not too much, or warmer/colder trends would be meaningless). He is defining degree of function on degree of matching, and he is moving one set of strings towards the other by exploiting a nicely set up Hamming-distance-based trend and the warmer/colder hill-climbing principle, with a target that may move a bit but stays in the same zone. I think he is speaking somewhat loosely of a gain of info as, in effect, moving to match. If he is actually starting with a flat random initial string, the average info per symbol would already be at a maximum: -SUM p_i log p_i is at a peak for that. What he is apparently talking about is moving to a matched condition, not a gain in the Shannon metric as such. (Are the target strings also set at random?)

Shannon info is not the right metric for this; it is only part way there. Functional specificity of complex strings that have to be in target zones to work right is what has to be captured, and the metrics of CSI do that, one way or another. This side-issue is so far from getting a DNA string to code for AA chains that will fold and function in the context of the cell that it is ridiculous. Here, the constraints are those of real-world chemistry and physics, and the context of the nanomachines and molecules in the living cell. Finding the correct cluster of such to get a functioning cell is a real technological tour de force. And metrics that identify how hard it is to get to islands of function in large config spaces on unintelligent mechanisms tracing to chance and necessity are a help in seeing that.

But if you are locked into the idea that, regardless of odds, we have nice smooth fitness functions that tell us warmer/colder from arbitrary initial configs in prebiotic soups [or the equivalent], we will be blind to this issue of islands of function. Similarly if we fail to understand how much meaning has to be programmed in to get a novel body plan to unfold from a zygote through embryogenesis. The real issue is that the evidence of our experience with technology and the fossil record are both telling us that we are dealing with islands of function, but the darwinian narrative demands that there MUST have been a smoothly varying trend to life, and from first life to us and what we see around us. So far, so much the worse for inconvenient empirical evidence and analysis. That is why models on computers that work within islands of function are allowed to pass themselves off as more than a model of what is not in dispute, micro-evo. But the narrative is falling apart, bit by bit. GEM of TKI
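A small numeric sketch (Python) of the two measures used in the comment above: the surprisal I_i = -log2 p_i of a single symbol, and the weighted average H = -SUM p_i log2 p_i. The probabilities are illustrative only, not figures from ev.

# Surprisal of one symbol and average info per symbol ("Shannon info",
# uncertainty, informational entropy). Illustrative probabilities.
from math import log2

def surprisal(p):
    return -log2(p)

def avg_info_per_symbol(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(surprisal(1/8))      # 'e' at roughly 1 letter in 8: 3 bits
print(surprisal(1/1000))   # a much rarer symbol: about 9.97 bits

# A flat (equiprobable) distribution over 4 symbols peaks H at 2 bits/symbol,
print(avg_info_per_symbol([0.25, 0.25, 0.25, 0.25]))
# a biased distribution gives a lower average,
print(avg_info_per_symbol([0.7, 0.1, 0.1, 0.1]))
# and a symbol that is always present carries no surprise at all.
print(avg_info_per_symbol([1.0]))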
kairosfocus -- May 1, 2011 at 1:14 PM PDT
Thanks kairosfocus. I also think there's something hinky about Schneider's use of Shannon Information. He uses Shannon Information as his measure, almost bragging about it being the only valid measure; he apparently thinks his way is the only correct way. I noticed you have some coverage of Shannon Information on your web site, but I still don't have a real clear grasp.

So what are some of the fundamentals of Shannon Information? Does it require a sender and a receiver? What else does it require? Is it even possible to have a "gain" in Shannon Information?

Schneider assumes that the binding site starts with no information content because he starts with a randomly generated sequence of bases at the binding site. After a binding site has "evolved" to the point that it can be recognized, he then measures the information content (at the binding site, as the reduction in uncertainty) and subtracts his "before" and "after" to calculate his information "gain." But again, that's not how Shannon Information works, imo. With Shannon Information you can't get a gain in information. (And do you get a gain in information by a reduction in uncertainty?) Am I just way off base? I'll try to provide some relevant links and quotes in a follow-up posting.
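A sketch (Python) of the before-minus-after bookkeeping described in the comment above. The sequences and the procedure here are my illustration of that description, not Schneider's actual ev code: a roughly uniform column over A, C, G, T carries about 2 bits of uncertainty; once it has become biased, the uncertainty drops, and the reported "gain" is that reduction.

# Before/after uncertainty at one binding-site column (illustrative data).
from collections import Counter
from math import log2

def column_entropy(bases):
    counts = Counter(bases)
    total = len(bases)
    return -sum((n / total) * log2(n / total) for n in counts.values())

before = list("ACGTACGTACGTACGT")   # roughly uniform starting column
after  = list("AAAAAAAAAAAACAGA")   # biased column after selection has acted

h_before = column_entropy(before)   # 2.0 bits
h_after  = column_entropy(after)    # well below 2 bits
print(h_before, h_after, h_before - h_after)   # the "gain" is the reduction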
Mung -- May 1, 2011 at 11:26 AM PDT
Mung: Thank you for the work you have done to document how ev actually works. It has been important for us all to see that ev contains language directly implying that there is a targeted search involved, using a Hamming-type distance metric in a warmer/colder oracle hill-climbing routine. GEM of TKI
kairosfocus -- April 30, 2011 at 3:19 PM PDT
btw, has anyone else noticed that MathGrrl has, in citing work such as Schneider's, conceded the argument?
1. Information can be mathematically defined.
2. The concept can be and has been applied to biological systems.
Mung -- April 30, 2011 at 10:35 AM PDT
Again, in Schneider's own words:
Repressors, polymerases, ribosomes and other macromolecules bind to specific nucleic acid sequences. They can find a binding site only if the sequence has a recognizable pattern. We define a measure of the information (Rsequence) in the sequence patterns at binding sites.
The Information Content of Binding Sites on Nucleotide Sequences
Recognizer: a macromolecule which locates specific sites on nucleic acids. [includes repressors, activators, polymerases and ribosomes]
We present here a method for evaluating the information content of sites recognized by one kind of macromolecule.
No targets?
These measurements show that there is a subtle connection between the pattern at binding sites and the size of the genome and number of sites.
...the number of sites is approximately fixed by the physiological functions that have to be controlled by the recognizer.
Then we need to specify a set of locations that a recognizer protein has to bind to. That fixes the number of sites, again as in nature. We need to code the recognizer into the genome so that it can co-evolve with the binding sites. Then we need to apply random mutations and selection for finding the sites and against finding non-sites.
[from the paper's INTRODUCTION]

So earlier in this thread I accused MathGrrl of not having actually read the papers she cites. I think the case has sufficiently been made that that is in fact a real possibility. I suppose it's also possible that she reads but doesn't understand.

MathGrrl, having dispensed with the question of targets in ev, can we now move on to the question of CSI in ev?
Mung -- April 30, 2011 at 10:01 AM PDT
Mung: Quite a bottom line on ev:
ev [is] a glorified version of Dawkins’ Weasel program . . . . instead of having a single final target string which each individual in the population is measured against (compared to), each individual in the population in ev has multiple targets (which are called binding sites), and the target strings at each “binding site” for each individual are independent of each other and independent of the target strings in the other individuals. So each individual in ev has more targets, but each one is shorter in length than the target in Weasel. It also has a shorter “alphabet” (ACGT). In addition, while the locations of the target sites on each individual in the population are fixed, the actual target “letters” may be changed by a mutation . . . . we not only have multiple target “strings” per individual, each of which is capable of changing due to mutation, we also have the “receptor.” This is the “string” that we’re trying to get to match one of the target strings. It also is different for each individual in the population . . . . The final twist is that the “receptor” also has a chance to be changed due to mutation. But it’s still a string being compared to another string until we find a match, even with all the other fancy goings-on. And oh, yeah, there’s that function that lets us tell which strings are closer to the targets (fewer mistakes) and which ones are further away (more mistakes). Wipe out half the population each generation by replacing those with more mistakes by copies of those with fewer mistakes. Set up the right initial conditions and you’re bound to succeed.
MG et al have some further explaining to do. GEM of TKI
kairosfocus -- April 30, 2011 at 12:41 AM PDT
Elsewhere I described ev as a glorified version of Dawkins' Weasel program. Weasel has an initial population of strings which are mutated until one is found to match a single final target phrase. ev has an initial population of what are in effect strings. (Is the genotype the same as the phenotype, as in Weasel?)

But instead of having a single final target string which each individual in the population is measured against (compared to), each individual in the population in ev has multiple targets (which are called binding sites), and the target strings at each "binding site" for each individual are independent of each other and independent of the target strings in the other individuals. So each individual in ev has more targets, but each one is shorter in length than the target in Weasel. It also has a shorter "alphabet" (ACGT). In addition, while the locations of the target sites on each individual in the population are fixed, the actual target "letters" may be changed by a mutation. [Note that when a "good" individual is copied to replace a "bad" individual, the locations of the binding sites in the "offspring" are not changed. So you now have an additional member in the population with the exact same binding site locations.]

So now we not only have multiple target "strings" per individual, each of which is capable of changing due to mutation; we also have the "receptor." This is the "string" that we're trying to get to match one of the target strings. It also is different for each individual in the population. [At least until we go through the first round of selection, at which time we again get an additional member in the population with the same "receptor" as another member.] The final twist is that the "receptor" also has a chance to be changed due to mutation.

But it's still a string being compared to another string until we find a match, even with all the other fancy goings-on. And oh, yeah, there's that function that lets us tell which strings are closer to the targets (fewer mistakes) and which ones are further away (more mistakes). Wipe out half the population each generation by replacing those with more mistakes by copies of those with fewer mistakes. Set up the right initial conditions and you're bound to succeed.
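A compressed Python sketch of the scheme described above: a string compared to a target, "mistakes" counted, the worse half of the population replaced each generation by copies of the better half, and everything subject to mutation. The alphabet, target, population size and mutation rate are illustrative assumptions; this is not Schneider's ev source or Dawkins's Weasel code, only the string-matching core the comment describes.

# Weasel/ev-style string matching driven by a mistakes count (illustrative).
import random

ALPHABET = "ACGT"
TARGET = "ACGTACGTACGTACGT"          # illustrative fixed target
POP_SIZE, MUT_RATE = 64, 0.01

def mistakes(s, target=TARGET):
    # Hamming distance: number of positions where the strings differ
    return sum(a != b for a, b in zip(s, target))

def mutate(s):
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else ch
                   for ch in s)

random.seed(0)
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]

generation = 0
while min(mistakes(s) for s in pop) > 0:   # halting condition: zero mistakes
    pop.sort(key=mistakes)                 # fewest mistakes first
    pop = pop[:POP_SIZE // 2] * 2          # worse half replaced by copies of better half
    pop = [mutate(s) for s in pop]
    generation += 1

print("matched target in", generation, "generations")

Note that the target, the mistakes metric, and the halting condition were all written in by the programmer before the run began.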
Mung -- April 29, 2011 at 6:01 PM PDT
Mung: A measure of the number of mistakes is a Hamming distance metric. Even without SHOWING Wiki the ducking stool for waterboarding, it coughs up:
In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the number of errors [aka mistakes] that transformed one string into the other.
GEM of TKI
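The quoted definition, as a few lines of Python; it is the same count the sketch further up the thread calls mistakes(). The example strings are illustrative.

# Hamming distance: positions at which two equal-length strings differ.
def hamming(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming("ACGTACGT", "ACGAACGT"))   # 1 substitution apart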
kairosfocus -- April 29, 2011 at 3:13 PM PDT
Mung: Let's get one of those Wiki admissions against interest:
In mathematics and computer science, an algorithm is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function.[3] Algorithms are used for calculation, data processing, and automated reasoning. Starting from an initial state and initial input (perhaps null),[4] the instructions describe a computation that, when executed, will proceed through a finite [5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. [BTW, to stop an infinite loop we can force a termination under certain conditions, again a goal setting exercise.] The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7] A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939.
And, regarding GAs, the same confesses -- we never even had to show the thumbscrews [as in, MG, you still have some serious 'splaining to do on your outrageous citation of eppur si muove above . . . ] -- as follows:
In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness [and how is that set up and measured and controlled to have a nice trendy pattern, based on the coding? By monkeys at keyboards?]), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
Digging in yet deeper -- and nope, we did not demonstrate the rack to see this:
In mathematics, computer science and economics, optimization, or mathematical programming, refers to choosing the best element from some set of available alternatives. In the simplest case, this means solving problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This formulation, using a scalar, real-valued objective function, is probably the simplest example; the generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, it means finding "best available" values of some objective function given a defined domain, including a variety of different types of objective functions and different types of domains.
The above is fairly riddled with constrained, goal-seeking behaviour, set up by an intelligent designer. GEM of TKI

PS: The picture of a nice, trendy objective function here, by Wiki, is illustrative -- especially of what "hill-climbing" is about.
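A minimal Python sketch of the goal-seeking pattern the quoted passages describe: a programmer-written, smoothly "trendy" objective function, a warmer/colder acceptance rule, and a programmer-chosen termination condition. The function and all parameters are illustrative assumptions, not taken from any GA discussed here.

# Warmer/colder hill climbing on a designer-supplied objective (illustrative).
import random

def objective(x, y):
    # one smooth hill with its peak at (3, -2), written by the programmer
    return -((x - 3) ** 2 + (y + 2) ** 2)

random.seed(1)
x, y = random.uniform(-10, 10), random.uniform(-10, 10)   # arbitrary start

for step in range(10_000):                                 # termination condition
    cand = (x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
    if objective(*cand) > objective(x, y):                  # "warmer": accept
        x, y = cand

print(round(x, 2), round(y, 2))   # ends near the designed-in peak (3, -2)

The climber only ever follows "warmer"; where the peak sits, and when to stop, were decided when the objective function and the loop were written.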
kairosfocus -- April 29, 2011 at 3:01 PM PDT
Let's look at some of the things ev can display:
Display control: the first 7 characters on the line control the kind of data printed to the list file:
a = display average number of mistakes and the standard deviation for the population.
c = display changes in the number of mistakes. The current Rsequence is given if r (below) is turned on. This allows graphs of Rsequence vs mistakes to be made.
g = display genomic uncertainty, Hg. If this deviates much from 2.0, then the model is probably bad.
i = display individuals' mistakes
o = display orbits: information of individual sites is shown
r = display information (Rsequence, bits)
s = current status (range of mistakes) is printed to the output file.
m = current status (range of mistakes) is printed to the list file.
Why this obsession with "mistakes"?
haltoncondition: char. This parameter (introduced [2006 June 24]) causes ev to halt when a given condition has occured. If the first character on the line is:
- : none - no halting condition
r : Rs>=Rf - the best creature has Rs at least equal to Rf
m : mistakes 0 - the best creature makes no mistakes
b : both r and m
Don't tell me this program has no targets.
Mung -- April 29, 2011 at 2:53 PM PDT