Uncommon Descent Serving The Intelligent Design Community

# FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the Chi metric in the CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)]  . . . eqn n1

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
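The algebra from eqn n1 to eqn n4 can be checked numerically. Below is a minimal Python sketch; the values of P(T|H) and ϕ_S(T) are hypothetical, chosen only to exercise the identity, and 398 is the rounded value of log2(10^120) = 120·log2(10) ≈ 398.6:

```python
import math

# Hypothetical illustrative values, NOT drawn from any real case:
p = 2.0 ** -600        # P(T|H): chance-hypothesis probability, so Ip = 600 bits
K2 = 100.0             # log2 of phi_S(T): specification resources, ~100 bits

# eqn n1 evaluated in log space (10^120 * phi * p would overflow floats):
chi_direct = -(120 * math.log2(10) + K2 + math.log2(p))

# eqn n4, the reduced form: Chi = Ip - (398.6... + K2)
Ip = -math.log2(p)
chi_reduced = Ip - (120 * math.log2(10) + K2)

print(chi_direct, chi_reduced)   # identical: ~101.37 bits in both forms
```

The two forms agree term by term because –log2(a·b·c) = –log2 a – log2 b – log2 c; the reduction changes presentation, not content.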

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage-unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
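Each of the results in n7 is a one-line computation; a sketch using the Fits values quoted above:

```python
# Durston Fits values for three protein families, as quoted above
fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

def chi_500(ip_bits):
    """Reduced Dembski-style metric: functional bits beyond a 500-bit threshold.
    Positive: beyond the threshold; negative: still within it."""
    return ip_bits - 500

for name, ip in fits.items():
    print(f"{name}: Chi = {chi_500(ip)} bits beyond")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching results n7
```

The same function returns negative values for cases inside the threshold, which matters for point e below.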

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits,  and we compare it to a threshold of a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI)  is sufficiently isolated that the accessible search resources make it maximally unlikely that its best explanation is unintelligent cause by blind chance plus mechanical necessity. Instead, the best, and empirically massively supported causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window
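The orders-of-magnitude comparison in points c and d is simple arithmetic; a quick check using the post's figures of 10^102 Planck-time states and the 500- and 1,000-bit spaces:

```python
import math

log10_states = 102                 # ~10^102 solar-system Planck-time states (figure quoted above)
log10_500 = 500 * math.log10(2)    # 500-bit config space: ~10^150.5
log10_1000 = 1000 * math.log10(2)  # 1,000-bit config space: ~10^301

print(round(log10_500, 1))              # 150.5
print(int(log10_500) - log10_states)    # 48: matches point c (10^150 vs 10^102)
print(round(log10_1000, 1))             # 301.0
```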

e –> What the reduced Chi metric is telling us is that if say we had 140 functional bits [20 ASCII characters] , we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. For, while the reduced chi metric is giving us a value, it tells us we are falling short and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us instances of this could happen by chance and trial and error testing.   Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.
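The character counts in point g (and in eqns n5/n6) follow from 7 bits per ASCII character; a quick check:

```python
# 7 bits per ASCII character (2^7 = 128 symbols)
def ascii_chars(bits):
    """Smallest whole number of 7-bit ASCII characters covering `bits`."""
    return -(-bits // 7)   # ceiling division

print(ascii_chars(500))    # 72 characters
print(ascii_chars(1000))   # 143 characters (125 bytes of 8-bit storage)
```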

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of the form

G = [(s – z1)(s- z2) . . . ]/[(s – p1)(s-p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold criterion; and, as poles in the LH half-plane approach close to that threshold, the tendency shows up in the frequency response as detectable peakiness.
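The stability threshold in points k and l can be phrased as a one-line predicate on the pole set; a toy sketch (the pole values are illustrative only):

```python
# Toy s-plane stability check: a pole in the right half-plane (Re > 0)
# puts the system past the instability threshold.

def is_stable(poles):
    """True iff every pole lies strictly in the left half of the s-plane."""
    return all(p.real < 0 for p in poles)

print(is_stable([-1 + 2j, -1 - 2j, -5 + 0j]))     # stable: all poles in LHP
print(is_stable([-1 + 0j, 0.3 + 1j, 0.3 - 1j]))   # unstable: RHP pole pair
```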

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

E_k = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”
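The last relation in the list is Einstein's photoelectric threshold law, E_k = h*ν – φ: below the cutoff frequency ν0 = φ/h no photoelectrons are emitted at all. A sketch, with an assumed, purely illustrative work function:

```python
# Photoelectric threshold: E_k = h*nu - phi; no emission below nu0 = phi/h.
h = 6.626e-34        # Planck constant, J*s
phi = 3.6e-19        # ASSUMED work function (~2.2 eV), illustrative only

def photoelectron_energy(nu):
    """Kinetic energy (J) of ejected electrons, or None below threshold."""
    e_k = h * nu - phi
    return e_k if e_k > 0 else None

print(photoelectron_energy(4.0e14))   # None: below the threshold frequency
print(photoelectron_energy(7.0e14))   # ~1.0e-19 J: above threshold
```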

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and, shudder, judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that: e.g. look at the problem of reading a meniscus! (Try it, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
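Axe's ballpark numbers can be approximated by straight combinatorics. The sketch below assumes a 27-symbol alphabet (26 letters plus space) over the 42-character target and counts only substitution "typos"; those assumptions are mine, not Axe's, but the result lands near his one-in-10^60, ~10^50 and ~10^47 figures:

```python
from math import comb, log10

L, A = 42, 27   # target length; assumed alphabet: 26 letters + space

def log10_P(max_typos):
    """log10 of the fraction of all length-L strings lying within
    Hamming distance max_typos of the target string."""
    hits = sum(comb(L, k) * (A - 1) ** k for k in range(max_typos + 1))
    return log10(hits) - L * log10(A)

print(-log10_P(0))   # ~60.1: perfection -> about one in 10^60
print(-log10_P(4))   # ~49.4: four typos allowed -> roughly one in 10^50
print(-log10_P(5))   # ~47.1: five typos -> about one in 10^47
```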

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, providing they are warranted on empirical and analytical grounds. As has been provided for the Chi metric, in reduced form.  END

Mung: The sense of information that is gained is -- you guessed it -- FUNCTIONALLY SPECIFIC information; as in Abel et al, on functional sequence complexity. For at first the sites will not bind, but when the mistake count goes to zero due to convergence, the functional info reaches its targeted peak. And of course this is closely comparable to the action of a servo system that tracks and hits a moving target. If a MiG desperately trying to evade a Sidewinder by all sorts of acrobatics is a target, so are the sites that you want to match in ev. (And the launch platform may in turn be a target for said MiGs.)

Of course the Shannon value peaks for a purely random sequence, but RSC is not at all the same as either OSC or FSC. Cf my background note here and onward links. (Notice my use of what I have subsequently called the X-metric, for want of a better name.) Durston et al cashed out the qualitative analysis of 2005 with the H-based fits metric in 2007. Above in the OP I use it to show that certain protein families have functional info values beyond the threshold where it makes good sense to infer to design. In short, there is excellent quantitative evidence of design in cell based life, on observation and derivation of numerical values for CSI. Fact, leading to metric, measured value, comparison to threshold, and well warranted conclusion.

Hey, let's footnote to MG et al: If your problem (MG et al) is a worldview level one with the conclusion, please don't try to pretend that CSI is not an observable and significant fact. Please don't try to pretend that Ik = - log pk is not a well established metric of information, or that it makes no sense to identify that something may be functionally specific -- cf what we may call Schneider's blunder above -- or that the resulting measures and conclusions are meaningless. And, most of all, please don't try to pretend that we are would-be theocratic tyrants who have threatened you with the equivalent of thumbscrews.
That was where you went totally over the top, MG. GEM of TKI kairosfocus
Mung, 285: My first comment in MG's guest post thread was to analyse the FSCI in her post. She ignored the point. GEM of TKI kairosfocus
PS: The estimation of Ik is a standard technique in telecommunications work. The results are as familiar as the size of computer files, in bits. kairosfocus
EZ: Re 284, cf the original post where you will see three worked out examples, building on the Durston et al FITS metric for 35 protein families, and if you will look at the UD WACs you will see a toy example at a level more suited for school children. If we are dealing with directly information-storing entities like DNA or ASCII text, Ik can be directly estimated to an order of magnitude (where also the presence of a coded digital store is enough to guarantee functional specificity so the code for a typical 300 AA protein (and there are hundreds in a living cell) would yield 1800 bits or 1300 beyond the solar system threshold -- this example has been given several times over in the course of the past 3 months but has been ignored by MG), and the result directly follows. GEM of TKI kairosfocus
MathGrrl,
ev, on the other hand, is not looking for a specific solution.
...nothing in ev knows what the solution should be so there is no target at which to aim.
Let's look at some quotes from the Schneider paper:
Here this method is used to observe information gain in the binding sites for an artificial 'protein' in a computer simulation of evolution. The simulation begins with zero information and, as in naturally occurring genetic systems, the information measured in the fully evolved binding sites is close to that needed to locate the sites in the genome.

Locating sites in the genome sounds like a goal or target to me.
...one can use the size of the genome and the number of sites to compute how much information is needed to find the sites.
Finding sites in the genome sounds like a goal or target to me.
The purpose of this paper is to demonstrate that R_sequence can indeed evolve to match R_frequency (12). To simulate the biology, suppose we have a population of organisms each with a given length of DNA. This fixes the genome size, as in the biological situation. Then we need to specify a set of locations that a recognizer protein has to bind to. That fixes the number of sites, again as in nature. We need to code the recognizer into the genome so that it can co-evolve with the binding sites. Then we need to apply random mutations and selection for finding the sites and against finding non-sites. Given these conditions, the simulation will match the biology at every point.
Specifying a set of locations that a recognizer protein has to bind to. In advance. Finding the sites. MORE TARGETS SIR! PERMISSION TO FIRE!
Remarkably, the cyclic mutation and selection process leads to an organism that makes no mistakes in only 704 generations (Fig 2a).
Remarkable indeed. Good thing we weren't actually looking for such an organism. We might have destroyed it by mistake. Get real MathGrrl. Let me know when or if you want to talk about ev and CSI.
Mung
MathGrrl,
ev, on the other hand, is not looking for a specific solution. As I’ve emphasized a number of times during this discussion, in ev the recognizer co-evolves with the binding sites.
I'm pretty sure I brought it up first. Thanks for finally catching up. What do you mean by "a specific solution"? How does ev know when to stop running? What does the recognizer "recognize"? Does ev, at any point, compare the recognizer to the binding sites? Why can't we call the binding sites targets? Why can't we call the recognizer a target? Why can't we call "an organism that makes no mistakes" a target?
There is no measurement of Hamming distance because the solution is unknown to ev.
A solution is not required to be known in advance in order to perform a Hamming distance measurement. Perhaps this is where you are going wrong. Do you think that in order for something to qualify as a target it must be known in advance? Do you think that in order for something to qualify as a target it must be fixed and unchanging? Mung
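For what it's worth, the point is easy to demonstrate: a Hamming distance is defined between any two equal-length strings at the moment of comparison; neither string needs to be fixed, or known before the run starts. A minimal sketch:

```python
def hamming(a, b):
    """Mistake count: positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# The reference string can itself change between measurements;
# the distance is still perfectly well defined at each step.
print(hamming("GATTACA", "GATTAGA"))  # 1
print(hamming("GATTACA", "CATTACC"))  # 2
```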
MathGrrl, Speaking of Weasel you wrote:
In fact, the fitness function measures the Hamming distance to that target.
Does ev have a fitness function? I say yes. Does ev have a selection mechanism? I say yes. Does ev identify, for each "generation," which 50% of the population is "fit enough to survive" and which 50% is "not fit enough to survive"? Again, I say yes. How is "fitness" determined in ev? I say ev has a fitness function. What do you say? Mung
MathGrrl, You claim you'd like to read my comments on Schneider's dismissal of the Montanez paper. My comments are at the link I posted. If in our discussion of ev you think anything Schneider had to say in response to Montanez et al. is pertinent, please, by all means bring it up. I'm certainly not relying on the Montanez paper for any of my arguments, and the only reason I even posted my link is because you brought it up. I thought I'd done a good job of explaining the differences between Weasel and ev. It's almost as if you didn't even read what I wrote. So I'm just not going to address Weasel again, or how it relates (or doesn't) to ev. I'd prefer to concentrate specifically on ev. Thanks. I repeat: no one here is modeling ev as a targeted search. It is a targeted search. Period. But I really did try to come to some basic understandings, and I don't see where you ever responded to those attempts on my part. Just what sort of search is it that you have in mind that searches for nothing at all? In fact, ev is searching for something. I see no problem with calling what it is searching for a target. Do you, and if so, why? Let's first get the semantics out of the way; then perhaps we can make progress. There is a reason that GA's were developed, after all. Mung
http://en.wikipedia.org/wiki/Oracle_%28software_testing%29 http://en.wikipedia.org/wiki/Random_oracle http://en.wikipedia.org/wiki/Oracle_machine A Search Strategy Using a Hamming-Distance Oracle Efficient per query information extraction from a Hamming oracle Simply, an oracle accepts a query and returns a response. Mung
ellazimm, some good questions. That is precisely the sort of dialogue I attempted to generate with MathGrrl, but her reaction was to just to repeat the same old same old.
1. ‘Target’ being used in a general sense in this discussion but being rigorously defined in the programming/biological sense. Is a target an unchanging goal OR anything that gives a reward.
Could be. I've been asking MathGrrl to clarify what she's looking for as far as what qualifies as a target in her thinking. I've made it clear from my first posts on ev that it, unlike Weasel, did not have a single fixed target sequence that it was trying to match. But that does not change the underlying operation or the fact that ev is a search algorithm designed to perform better than a blind search. There is nothing about targets in general that requires that they be an unchanging goal. I think you'd agree, but that does seems to be what MathGrrl is arguing.
... but being rigorously defined in the programming/biological sense
I don't know what that means.
2. Is a ‘target’ something that is loaded before the simulation starts or can it arise later?
Well, in ev, the location of the binding sites can change between different runs of the program, but once the run begins the locations are fixed. The width is also fixed. How that can be taken to mean that there are no targets is beyond me.
AND can it arise spontaneously with no design implication?
If there's some underlying issue regarding whether ev is designed to do a specific thing I haven't heard it. I think we all know that it is designed.
And there is the whole issue of how accurately the simulation models the real world.
Well, that's not really at issue. Schneider claims it matches at every point, but I think we all know better because no one has even tried to defend that statement, lol.
I have skimmed this thread and will probably go back and reread some of the pertinent replies. And MathGrrl’s guest thread. Don’t hold your breath though!!
I would say don't waste your time. If you want to know about ev you and I can, I think, have a very reasonable discussion. Mung
On ev and targets: Schneider measures the information content, both before and after and subtracts the before from the after in order to get the information increase. How does he know when and where to measure? [That seems sort of backward to me, since the Shannon Information should be highest when the string is completely random, and so what he is measuring is the information decrease, but hey, what do I know.] Mung
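A note on what Schneider actually measures: his R_sequence is not the Shannon entropy of the whole genome string but the sum, over binding-site positions, of (2 − H_i), where H_i is the per-position entropy across the aligned sites. That quantity is near zero for random alignments and rises as the sites converge, which resolves the "a random string has maximal Shannon information" puzzle. A simplified sketch of that kind of per-position measure (not Schneider's actual code):

```python
from math import log2

def r_sequence(sites):
    """Schneider-style information content (bits) of aligned DNA sites:
    sum over positions of 2 - H_i, H_i being the per-position entropy."""
    total = 0.0
    for column in zip(*sites):
        n = len(column)
        counts = {base: column.count(base) for base in set(column)}
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += 2.0 - h
    return total

converged = ["ACGT", "ACGT", "ACGT", "ACGT"]   # all sites identical
scrambled = ["ACGT", "CGTA", "GTAC", "TACG"]   # every column uniform
print(r_sequence(converged))  # 8.0: 2 bits/position * 4 positions
print(r_sequence(scrambled))  # 0.0: uniform columns carry no R_sequence
```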
KF, if you chose to do so, perhaps start out with a textual analysis of one of her posts :). Why we should think it the product of an intelligent cause. I think you can easily do this using Dembki's metric or your own. Mung
KF: Thank you, I shall pursue those references in the near future I hope. I personally would like to see any worked out examples of computing Dembski's metric. I never gave my students a formula without showing them how to use it. ellazimm
EZ: You will see a definition of what a Hamming Oracle is and does in the footnotes enfolded in the cite from the recent Dembski et al paper. GEM of TKI kairosfocus
EZ: Please, take a moment to look through 34 - 5 above, on the subject of adequacy of warrant vs the rhetorical games that have been played with demands on "rigour" in "mathematical definitions," relative to what is discussed in the OP. Do you see why I have concluded that I am dealing with selective hyperskepticism as a rhetorical gambit, especially when adequate warrant has been provided over and over and MG has failed to be responsive on ANY significant point, for something like three months now? Worse, she out and out conflated a probability calculation with a log reduction, and on being asked to explain herself, has failed to do so to date. I will only note in passing the outrage of alluding to how Galileo was made to recant by threat of the thumbscrews. That was utterly out of order and has never been explained or apologised for. Three months of red herrings led out to convenient strawmen, soaked in ad hominems and ignited, clouding, poisoning and polarising the atmosphere, is enough, as I have just had to point out. GEM of TKI kairosfocus
MathGrrl: I spent some time today reading the original paper and running the ev program and you've addressed a couple of issues I thought might be points of contention. Coming from a mathematical background I am always keenly aware of how terms are defined and used within a specific context. A tree, for example, in the field of graph theory has a specific and well defined meaning. I remember getting into an argument at a party with someone who thought they had found some wonderful set theoretical result and didn't understand what I meant when I told them their measure was not well defined. No wonder I didn't get invited to many parties. ellazimm
ellazimm and Mung, I'm going to use ellazimm's comment as a springboard to address what I believe is the root of the disagreement between Mung and myself with respect to whether or not ev is a targeted search in the same sense as Dawkins' Weasel program.
I'm assuming a target in this context is a goal in the program against which new 'individuals' are measured and ones that more closely match the target are allowed to 'breed' and the rest are destroyed?
I think we're in agreement, but I'd like to be very explicit about the difference between a goal and a target in the context of a GA. I addressed this here: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 as pointed out two or three times in this thread, but it can't hurt to provide more detail. Modeling ev as a "targeted search" and comparing it to Weasel confuses the problem domain and the solution domain. In Weasel, which Dawkins himself makes clear is nothing more than a simple pedagogical toy program, the two are conflated. The Weasel problem is to produce a particular string via cumulative, stochastic selection. The Weasel solution is that particular string. That is clearly a target. In fact, the fitness function measures the Hamming distance to that target. ev, on the other hand, is not looking for a specific solution. As I've emphasized a number of times during this discussion, in ev the recognizer co-evolves with the binding sites. Neither is specified in advance and the sections of the genome that represent them will be different in different runs. This makes it painfully clear that there is no target for the solution. There is no measurement of Hamming distance because the solution is unknown to ev. ev does, however, have constraints that make certain genomes more fit than others. Those constraints are the number and location of the binding sites. The only feedback provided by the environment modeled in ev is the number of incorrect binding sites coded for by a particular genome. As Schneider describes in the ev paper, this reflects the number and location of binding sites in real world biological genomes. If I understand Mung's argument, his confusion arises from considering those sites to be a target. They are not. The number and location of the binding sites are part of the problem domain and as such do not specify anything in the solution domain. 
This separation between the constraints of the problem and the specification of the solution is one of a number of important differences that distinguish ev and similar GAs from simple programs like Weasel. That difference alone demonstrates that ev cannot be modeled as a targeted search; nothing in ev knows what the solution should be, so there is no target at which to aim. In the same vein, the various Steiner problem solutions also cannot be modeled as targeted searches. The only measure of fitness in those solutions is the length of the graph, with shorter, connected graphs having higher fitness. There is no target and hence no measurement of the Hamming distance to the solution. I hope this clears up any confusion about the differences between Weasel and the other GAs we've been discussing. If I have missed any questions that are still relevant, Mung, please raise them again. MathGrrl
Mung,
The Schneider response to the Montanez paper has been shown to be without merit.
What can be asserted without evidence can be dismissed without evidence. -- Christopher Hitchens The link you provide does not demonstrate what you claim. If you feel that Schneider's response is somehow in error, I would be very interested in reading your defense of such a claim. MathGrrl
kairosfocus, Throughout the discussion of CSI on various threads here I have made it a practice to focus only on the points you raise that directly address the questions I've been asking. Your many other issues, while potentially interesting in their own right, would serve only to distract from the core topic of defining and calculating CSI. I'm going to break briefly from that focus to respond to a separate issue, primarily because you have provided a good conversational hook to extend an invitation from someone who is not allowed to participate here.
Perhaps it has not dawned on you that the situation has now fundamentally changed, once your side has tolerated outing behaviour and increasingly disrespectful rhetoric leading to the creation of an attack blog that resorts to vulgarity as well as slander-laced outing behaviour as its main tactics. Madam, you are now associated with and unavoidably tainted by a cesspit of misbehaviour, and have a lot to answer for.
GEM of TKI
If you are as concerned as you appear to be about your real name being associated with what you write here, you may want to reconsider using your actual initials as a signature. MathGrrl
KF: Could you give me a link where Hamming oracle is defined and discussed? I'm having trouble finding an online resource. I think I AM going to have to try and look at the ev program. Sigh. Oh well, it happens!! Thanks! ellazimm
KF: No, none of my examples generate digitally coded information, but I was thinking maybe they created complex and specified information. Before a tree is born the information in its trunk does not exist, but after it dies the record is there. And to describe that information would take a measurable amount of bits of information . . . I only popped in for a few minutes between tasks. I'll spend more time thinking about all this. And looking up Hamming distance and oracle when I can. Earlier today I was looking over Dr Dembski's metric. And, according to Wikipedia, he's only ever demonstrated its application once! Is that true? Surely it's been calculated for other cases. If you've got any cases please let me know, aside from MathGrrl's scenarios obviously. I find worked out examples to be a great help in understanding the procedure of applying the theory. Fascinating stuff!!! Back later! ellazimm
KF: THANKS! Just off to do grocery shopping, etc and then over to a friend's for 'tea'. :-) Will come back and read and, hopefully, comprehend later!! ellazimm
EZ: Volcanoes create a new state of affairs (and, from experience, make chaos and a mess while doing it) but they do not in themselves create information. Particularly, digitally coded, functionally specific complex information that uses symbols to create a meaningful vocabulary, elements of which are combined according to rules to create messages. Similarly, a snowflake's form reflects the state of affairs where and when it formed, and the rings in a tree reflect the passage of time, the weather, etc, but these are simply dynamical results of initial conditions leading to outcomes; they are not information. It is we who look on who observe and study the dynamics, outcomes etc, creating information as we do so. Also, targets are things you are trying to hit. Collins English Dict:
target [ˈtɑːɡɪt] n 1. (Individual Sports & Recreations / Archery) a. an object or area at which an archer or marksman aims, usually a round flat surface marked with concentric rings b. (as modifier) target practice 2. a. any point or area aimed at; the object of an attack or a takeover bid b. (as modifier) target area target company 3. a fixed goal or objective the target for the appeal is £10 000 4. a person or thing at which an action or remark is directed or the object of a person's feelings a target for the teacher's sarcasm
Once you are trying to hit it, it is a target. In the relevant case, the receptor sites and the binding sites are in the so-called genomes of ev. Values are assigned and algorithmic steps are taken in train to seek and hit the receptors. That the latter are also moving only means that you have a moving target to hit. A Hamming distance metric (number of mistakes) is used to detect better performing binding sites; the worse performing are flushed, and the better ones move in closer. That's a Hamming oracle, with a warmer/colder homing approach. And Mung long since showed both fine tuning to get desired performance and language right there in Schneider's statements and in the program itself that underscored this. I pointed out how a chart Schneider put up shows the use of negative feedback to move to a target point (notice what happens when certain modules are turned off in the program, i.e. tracking ability is lost); in this case it is moving in a pseudo-space, so the process is very similar to a servosystem, which is why I raised the comparison of guided missiles. MG cleverly refused to respond in the thread where that happened, and is turning up in a following thread to claim that such did not happen. Please go up to 137 above to link to the original discussion and see for yourself. GEM of TKI kairosfocus
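The selection scheme described above (rank a population by Hamming distance to a target, flush the worse half, double the better half with mutation) can be sketched in a few lines of Python. This is a minimal illustration of the warmer/colder homing idea, not ev's actual code; the target string, population size, and mutation rate are all made up for the example.

```python
import random

def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def evolve(target, pop_size=16, mutation_rate=0.1, max_gens=10_000, seed=0):
    """Toy Hamming-oracle loop: illustrative only, not Schneider's ev.

    Each generation, the half of the population closest (by Hamming
    distance) to `target` survives; the worse half is flushed and
    replaced by mutated copies of the survivors. Returns the number of
    generations until some member matches the target exactly (or
    max_gens if it never does).
    """
    rng = random.Random(seed)
    n = len(target)
    pop = ["".join(rng.choice("01") for _ in range(n)) for _ in range(pop_size)]
    for gen in range(max_gens):
        if min(hamming(s, target) for s in pop) == 0:
            return gen
        # The "oracle": rank by distance to the target (warmer = smaller).
        pop.sort(key=lambda s: hamming(s, target))
        survivors = pop[: pop_size // 2]          # keep the closer half
        children = [
            "".join(rng.choice("01") if rng.random() < mutation_rate else c
                    for c in s)
            for s in survivors                     # double the half back up
        ]
        pop = survivors + children
    return max_gens

print(evolve("1011001110001101"))
```

Note that the loop converges only because the oracle feeds back distance-to-target information every generation; strip that ranking out (as with turning off modules in the program) and the population just drifts.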
There was a notion that crept into my head . . . might as well ask it here . . . please be gentle if it's completely stupid . . . Regarding the ability of non-directed processes for creating complex, specified information . . . you're going to tear me to shreds I'm sure . . . but . . . Do volcanoes create information? If an erupting volcano creates a mountain where there was none before, is that new information? It certainly changes any mathematical model of the landscape. We've discovered that the Earth's magnetic field has completely moved/reversed over the millennia. We found that out by looking at the alignment of the magnetic particles in some igneous rocks. That was new information to us, but was it created or recorded? When sedimentary rock is formed with defined chronologically arranged strata, is that new information? Or just a recording of information? Are the layers of ice in Antarctica created information? What about oil deposits? Is recorded information new if there is no other way of finding it? When erosion creates pillars and arches which did not exist before, is that new information? I'm thinking that an arch is more complex and specified than a huge block of sandstone. And even normal wind erosion gives indication of long-term wind patterns . . . is that information? How about tree rings? You can tell how old a tree is, spot wet and dry years, etc from tree rings. Over eons and eons the hills of Scotland have been transformed into peat deposits. Did the plants create new information that was not there before, if you start from when the land was barren? Non-intelligent life forms have altered the Earth's atmosphere, and that change would be detectable from a great distance and would indicate the presence of life. Is that creation of new information? Okay, have at it!! I apologise ahead of time for not being around soon. It's 9:30 in the morning where I live and there's stuff to do!! But I'm very interested in how you see the above examples.
Maybe it should be put into a new thread so that it gets a bit of independent attention? ellazimm
KF & Mung: I had read KF's earlier post about the incoming missiles, and Mung, I think your reiterating of that analogy is fairly accurate. I THINK. I wonder if some of the confusion/disagreement comes from: 1. 'Target' being used in a general sense in this discussion but being rigorously defined in the programming/biological sense. Is a target an unchanging goal, OR anything that gives a reward? 2. Is a 'target' something that is loaded before the simulation starts, or can it arise later? AND can it arise spontaneously with no design implication? And there is the whole issue of how accurately the simulation models the real world. I have skimmed this thread and will probably go back and reread some of the pertinent replies. And MathGrrl's guest thread. Don't hold your breath though!! ellazimm
Mung: Bogies that are not only inbound but jinking, weaving and dancing unpredictably. Floating like a butterfly so they may yet sting like a bee. What's a missile-eer to do? GEM of TKI kairosfocus
ellazimm:
Why not point out one of the targets and see what she says?
Multiple inbound bogeys! I'm a gunner on a ship. I have multiple inbound targets. How do I convince someone who denies that an inbound aircraft is a threat that the inbound aircraft is a valid target? MathGrrl doesn't understand searches or targets. Her assertion that ev has no targets has no basis in reality. You, on the other hand, might be amenable to being convinced. I accept that you're not familiar with my prior postings on this subject. 1. What is a search? 2. What is a target? 3. Can you conduct a search without a target? These are really simple and basic questions which MathGrrl refuses to address. An evolutionary algorithm (EA) is a search strategy. ev is an EA. (Finally admitted to by MG.) Is a search that does not "search for" anything even coherent? Mung
EZ: Actually, the situation is one of multiple moving targets, approached by a population of self-replicating seekers. Or, as I suggested in 258 above:
we could picture ev as a barrage of self-replicating missiles chasing a moving formation, where in each generation half lose lock and are self-destructed, being replaced by doubling the half population that remains, which are in closer to lock condition.
GEM of TKI kairosfocus