
FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the log reduction of the Chi metric presented in the recent CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 ·ϕS(T)·P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, letting D2 = ϕS(T), p = P(T|H), Ip = – log2(p) and K2 = log2(D2), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
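(For readers who want to check the reduction numerically, here is a small Python sketch; it is not part of the original derivation, and the phi_S(T) and P(T|H) values in it are invented purely for illustration. It shows that the 2005 expression in eqn n1 and the reduced form in eqns n3/n4 return the same number of bits:)

import math

# Dembski's 2005 expression, eqn n1: chi = - log2[ 10^120 * phi_S(T) * P(T|H) ]
def chi_2005(phi_s, p):
    return -math.log2(10**120 * phi_s * p)

# Reduced form, eqns n3/n4: chi = Ip - (log2(10^120) + K2),
# with Ip = - log2(p) and K2 = log2(phi_S(T))
def chi_reduced(phi_s, p):
    Ip = -math.log2(p)
    K2 = math.log2(phi_s)
    return Ip - (math.log2(10**120) + K2)

phi_s, p = 1e5, 1e-160            # hypothetical illustrative values only
print(chi_2005(phi_s, p))         # ~116.3 bits
print(chi_reduced(phi_s, p))      # same value, up to floating-point rounding

(Rounding log2(10^120) = 398.6 down to 398, as in eqn n4, shifts the result by less than a bit.)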

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage-unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].
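(Again as an open-notebook aside, and assuming only the fits values quoted above from Durston’s Table 1, a few lines of Python reproduce the n7 results and also show how a case inside the threshold goes negative, as discussed at points e and f below:)

durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}   # fits, as quoted from Table 1 above

def chi_500(ip_bits):
    # bits beyond (positive) or within (negative) the 500-bit solar-system threshold
    return ip_bits - 500

for name, fits in durston_fits.items():
    print(name, chi_500(fits))    # RecA 332, SecY 188, Corona S2 785 -- matching results n7

print(chi_500(140))               # -360: a 140-bit case falls inside the threshold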

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits, and we compare it to a threshold set by a sufficiently large configuration space, we may infer that the instance of FSCI (or, more broadly, CSI) is sufficiently isolated that the accessible search resources make it maximally unlikely that its best explanation is unintelligent cause by blind chance plus mechanical necessity. Instead, the best, and empirically massively supported, causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is, on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random-walk based search could find something like that. The reduced Chi metric still gives us a value; it simply tells us that we are falling short, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us instances of this could happen by chance and trial and error testing.   Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
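(The scaling behind these results is easy to check. On the assumption of a 27-symbol alphabet -- 26 letters plus a space, chosen only for illustration -- the expected number of random trials to hit an L-character target is 27^L, so the feasible match length grows only logarithmically with the trials available:)

import math

ALPHABET = 27   # assumed: 26 letters plus space, for illustration

for length in (19, 24, 72, 143):   # lengths from the quotes above and from point g below
    trials = ALPHABET ** length
    print(length, "characters: about 10^%.0f expected trials" % math.log10(trials))
# 19 -> ~10^27, 24 -> ~10^34, 72 -> ~10^103, 143 -> ~10^205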

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of the form

G(s) = [(s – z1)(s – z2) . . . ]/[(s – p1)(s – p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold criterion; and as poles approach the threshold from the LH half-plane, their approach shows up as a detectable peakiness in the frequency response.
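(A minimal sketch of that stability check, with made-up coefficients: the poles of G(s) are the roots of the denominator polynomial, and any pole in the right half of the s-plane marks an unstable system:)

import numpy as np

def is_stable(denominator_coeffs):
    # True if every pole of G(s) lies strictly in the left half-plane
    poles = np.roots(denominator_coeffs)
    return bool(np.all(poles.real < 0))

print(is_stable([1, 2, 5]))    # s^2 + 2s + 5: poles at -1 +/- 2j -> True (stable)
print(is_stable([1, -2, 5]))   # s^2 - 2s + 5: poles at +1 +/- 2j -> False (unstable)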

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

E = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”
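(The photoelectric relation E = h*ν – φ is itself easy to exercise as a threshold: below the frequency φ/h nothing is ejected at all, however intense the light. The work function used in this illustrative sketch is roughly that of sodium:)

H = 6.626e-34    # Planck constant, J*s
EV = 1.602e-19   # joules per electron-volt

def ejected_energy_ev(freq_hz, work_function_ev=2.3):
    # max kinetic energy of a photoelectron in eV; zero below the threshold frequency
    e_k = H * freq_hz / EV - work_function_ev
    return max(e_k, 0.0)

print(ejected_energy_ev(4.0e14))   # 0.0 -- below threshold, no emission
print(ejected_energy_ev(7.0e14))   # ~0.6 eV beyond the threshold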

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
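(Axe’s figures can be approximated directly. Assuming a 27-symbol alphabet -- 26 letters plus space, an assumption on my part, not something stated in the clip -- the 42-character target phrase sits in a space of 27^42 ~ 10^60 strings, and counting the strings within k substitutions gives numbers close to those he quotes:)

from math import comb, log10

ALPHABET = 27   # assumed: 26 letters plus space
LENGTH = 42     # "no biological information by natural means" is 42 characters

def p_within(k_mutations):
    # probability that a uniformly random string lies within k substitutions of the target
    space = ALPHABET ** LENGTH
    hits = sum(comb(LENGTH, i) * (ALPHABET - 1) ** i for i in range(k_mutations + 1))
    return hits / space

for k in (0, 4, 5):
    print(k, "mutations allowed: P is about 1 in 10^%.1f" % -log10(p_within(k)))
# 0 -> ~10^60, 4 -> ~10^49, 5 -> ~10^47, close to the figures quoted above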

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, providing they are warranted on empirical and analytical grounds. As has been provided for the Chi metric, in reduced form.  END

Comments
CD: I hear you. I saw an outing attempt on me and that was enough to tell me I no longer wished to have anything to do with MF's blog. In addition, observe the subtle incivility here where in a thread I have posted, MF manages to studiously ignore me. I find that quite rude. Also, a bit silly as MF -- as just pointed out -- is making simple errors that if he would pay attention, he could correct. GEM of TKIkairosfocus
May 19, 2011 at 11:17 AM PDT
Excellent point about the destructive influence of natural selection, which is so often misunderstood as a creative influence, kairosfocus. I believe that genetics is so often misunderstood too. It's not all in the genes. There's a more important and bigger epigenetic picture that we've yet to fully understand: a design plan that blows chance explanations out of the water.Chris Doyle
May 19, 2011 at 11:14 AM PDT
MF: You are trained as a philosopher, and you worked in the computer industry. Surely, you can do the simple research to find out that nucleotide bases take values A/G/C/T (or for RNA U) in any given position, and the sugar-phosphate chain is essentially independent of which is where in any one string -- the complementarity constraint is to key-lock fit across the two helices in the DNA. If the string sequence were strongly physically constrained by necessity, it could not store information, as information depends on the ability to have different possible states along the string based on content, not on constraints of necessity. If the sequence were constrained by physical necessity, we would be looking at a crystal, not an informational macromolecule that can and does vary to specify the particular protein. Such a string could also chain at random, but then that brings us straight to the point. At-random chains are such that the functional states are deeply isolated in the config space of possible AA strings; per the code. Also, FYI, 2^2 = 4. Thus, we have two bits storage capacity per base. Going further, the three-letter codons used for AA sequencing therefore have 3 * 2 = 6 bits maximum potential storage. They are used for what is essentially a 20-state AA system, giving the 4.32 bits per AA you may see, due to the redundancy in the system. GEM of TKI
kairosfocus
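(The capacity figures in the comment above check out directly; a throwaway, purely illustrative Python snippet for anyone who wants to verify them:)

import math

print(math.log2(4))       # 2.0 bits of storage capacity per base (4 possible bases)
print(3 * math.log2(4))   # 6.0 bits of raw capacity per three-letter codon
print(math.log2(20))      # ~4.32 bits per amino acid on 20 possibilities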
May 19, 2011 at 11:04 AM PDT
EZ: Joseph is correct. As Darwin said, in his peroration to Origin:
It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life and from use and disuse [yep, he had Lamarckian elements in his thought . . . ]: a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved. [Origin, 6th edn, Ch 15]
Natural selection boils down to this: whatever variants are able to find niches in the ecosystem that allow them to reproduce successfully will take root and pass on their genes to future generations. The selection, contrary to popular opinion, is not a source of information, but patently a culler -- a remover -- of information. The variants that do not find niches do not survive to pass on genes. That which subtracts does not add. We have to look at that which supposedly adds, before we can see how subtraction may lead to survivors. By repeating the mantra "natural selection" one does not escape the need for engines of variation, and for specifically non-foresighted engines of variation, for darwinian type evolution. Within an island of function, that can account for hill climbing, but this does not at all account for the increments of information required to get to first life and onward to novel body plans. And, besides, the selection is itself a significantly chance based process: we are talking odds here, not determination. If the beasts with the wonderful new variant get eaten in the nest, or caught in a fire, or an epidemic, or are in a time of horrible drought or catastrophe, their superior genes -- as assumed -- will make very little difference. GEM of TKI
kairosfocus
May 19, 2011 at 10:57 AM PDT
MarkF:
How do you know that for any position any of them is possible, much less equiprobable?
Science.
Although we don’t know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations.
No one knows how genes originated.
Genes are not formed by throwing a lot of nucleotides into a bucket and selecting them at random.
No one knows how genes originated. And there isn't any evidence for blind, undirected processes producing one.
For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.
Do you read what I post? What claim did you make and need to support? This one:
Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
Joseph
May 19, 2011 at 10:33 AM PDT
Ellazimm:
Natural selection works on the same variance engine as artificial selection but the choices are made by the environment which favours some variations over others.
The environment doesn’t choose. There are a number of reasons why some survive and some do not. For natural selection the differential reproduction has to be due to heritable variation.Joseph
May 19, 2011 at 10:24 AM PDT
Hello again Mark, Ah, I guess I didn't stick around long enough to see you pulling up "The Whole Truth" repeatedly. What I did see was Toronto defend him as merely a "frustrated commentator" and that just gave me the strong impression that that was a forum I no longer wanted to participate in. I don't know about Mathgrrl (disrespecting your opponents doesn't always manifest itself as explicitly uncivil remarks: ignoring points that have been raised (for 6 months in my case!) and repeating the same refuted arguments over and over again is very disrespectful and a waste of all our time, for example) but she has been given at least one blog entry here and so a stronger platform than many of the rest of us! And you admitted yourself Mark, you've been rude and offensive in the past here... maybe that's why you and Joseph are winding each other up (there seems to be a bit of a disturbing witch-hunt going on for Joseph from the evolutionist side too). If we can somehow wipe the slate clean, and then we see evolutionists pulling each other up here (and elsewhere) you will see me and other UD contributors returning the favour I'm sure. Until then, please don't ask me to wade into the middle of a blood-feud!
Chris Doyle
May 19, 2011 at 10:23 AM PDT
Again there isn’t any law nor formula that determines the ordering of nucleotides down one strand of DNA. That means that at any one locus any of the 4 nucleotides is possible.
How do you know that for any position any of them is possible, much less equiprobable?  Although we don’t know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations. Genes are not formed by throwing a lot of nucleotides into a bucket and selecting them at random. For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.  Similar considerations apply for insertions, inversions etc.
IOW MarkF do YOU have any evidence to support your claim?
I don’t know what claim you are talking about – can you clarify? markf
May 19, 2011 at 10:20 AM PDT
Joseph: Natural selection works on the same variance engine as artificial selection but the choices are made by the environment which favours some variations over others. A different environment would favour other variations. Then there's sexual selection, gene drift, geographic distribution and others. It's not my field but I'm sure you can find a decent discussion of all the selection processes without spending much time or effort. Their truth or falsehood is not dependent on my poor ability to elucidate them here. Fortunately. But yeah, basically I agree with you: whatever is good enough survives. But good enough covers a lot of ground. And it doesn't make it random. Genetic mutations look pretty random; they seem to occur at predictable rates but you can't say ahead of time when one will occur. Selection processes favour certain variations over others non-randomly. Otherwise evolution would not occur.
ellazimm
May 19, 2011 at 09:44 AM PDT
ellazimm- Please provide the evidence for these alleged non-random selection processes. (I will give you artificial selection)- but natural selection is blind, mindless and purposeless. Whatever is good enough survives. And that can be any number of traits and allele combinations.Joseph
May 19, 2011 at 09:07 AM PDT
MarkF:
Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
Again there isn't any law nor formula that determines the ordering of nucleotides down one strand of DNA. That means that at any one locus any of the 4 nucleotides is possible. That means my alleged "assumption" isn't an assumption at all. IOW MarkF do YOU have any evidence to support your claim?
Joseph
May 19, 2011 at 09:01 AM PDT
#101 Chris
In the meantime, don’t you think enough personal remarks have been made here (and over on your blog)? At the same time as I was being assured that evolutionists are the good guys, never rude or offensive, some guy calling himself “The Whole Truth” completely contradicted everything that Toronto and co were saying.
And I repeatedly pointed out to “The Whole Truth” that I thought he was being uncivil and eventually he dropped out. I agree too many personal remarks have been made on this forum - but they still continue. I don't believe Mathgrrl (or I) has made any of them and I thought a comment from a pro-ID supporter who clearly cares about civility might curb them.markf
May 19, 2011 at 08:59 AM PDT
KF: Sure, there is random variation in the way mutations, duplications, splices, etc occur in the genome. But, once the process gets started, the mutations arise from an existing base and then the very non-random selection processes have their way with them. There's no search. There's no arrival of the fittest, just fitter. Or, even better, more suited, better able to exploit the resources in the proximal environment. Able to out compete the competition. But you've all heard/read/debated these points before so I shan't belabour the points. I know I am NOT discussing how the first replicator arose. As has been pointed out many times here and elsewhere there are lots of notions and hypotheses being promulgated. Sometimes one aspect of a possible procession is deemed more or less likely but . . . no one knows yet. We may never know. But, not knowing doesn't mean it was designed. If you don't know what the first replicator was I don't think you can convincingly argue that it's so highly improbable as to force the design conclusion. And if you don't know what the first replicator was then how can you say it couldn't have arisen from inorganic processes?ellazimm
May 19, 2011 at 08:54 AM PDT
#99 Joseph
You can’t even read what I post. I did NOT say there are 4 bits per nucleotide. And the order of NUCLEOTIDES is not determined by any law or formula.
I apologise – my typo. Here is the corrected request: I have supplied the formula for Shannon information. Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
markf
May 19, 2011 at 08:51 AM PDT
EZ: A brief point. On causative factors, we need to explain highly contingent phenomena. Necessity does not explain contingency, so the alternatives are chance and/or design. In the usual evolutionary representation we have: Chance variation + natural selection --> descent with modification [at pop level], aka evolution. The variation does not come from differential reproductive success of sub-populations, but from the chance variation. All that natural selection -- which is usually headlined as if it did the main job -- does is describe differences in reproductive success among already existing variants in populations; the term simply records that this happens. That which explains the survival of the [reproductive-success] fittest does not explain ARRIVAL of the fittest. For that, we need engines of variation, and by definition, the evolutionary materialistic frame is ruling out design as one of those engines. So, however we may categorise them, the engines boil down to chance: the variation is utterly uncorrelated with any foresighted process or goal. So, we are back to chance vs design. More when I have time. GEM of TKI
kairosfocus
May 19, 2011 at 08:32 AM PDT
Hi Mark, I'll be in touch privately regarding SITC. In the meantime, don't you think enough personal remarks have been made here (and over on your blog)? At the same time as I was being assured that evolutionists are the good guys, never rude or offensive, some guy calling himself "The Whole Truth" completely contradicted everything that Toronto and co were saying. I've said everything I need to say about the way people conduct themselves in this debate: on your blog and to Astroboy on a separate thread over here this morning. The subject matter of this discussion is so fascinating and indeed, important, let's not waste it with unimportant, boring and damaging distractions. By the way, how many bits do you think there are in a nucleotide? If you offer us your insight, that might move your discussion with Joseph and co into healthier territory.Chris Doyle
May 19, 2011 at 08:16 AM PDT
ellazimm:
I agree with MathGrrl (even if she does have weird spelling conventions), no evolutionary process is a random search.
It isn't a search at all. It is all "stuff just happens and what works well enough gets kept".Joseph
May 19, 2011 at 08:15 AM PDT
MarkF:
I have supplied the formula for Shannon information. Please explain how you conclude that there are four bits per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
You can't even read what I post. I did NOT say there are 4 bits per nucleotide. And the order of NUCLEOTIDES is not determined by any law or formula. Nucleotides Mark- ONE side of the DNA is what we are concerned with. BTW there isn't any evidence for blind, undirected chemical processes creating a gene from scratch.
Joseph
May 19, 2011 at 08:06 AM PDT
MG: Pardon some direct words, re:
I have read through all of your responses since my comment numbered 60 in this thread . . . . you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition.
Now, I have repeatedly pointed you to and linked 23 - 4 and 34 - 5 [also linked to in my for-the-record at MF's blog] above, where this drumbeat strawman tactic -- an empty rhetorical misrepresentation -- is corrected yet once again. I am sorry, your response as cited simply tells me that you are not acting seriously. When, several times, I linked you to the places where I -- again -- dealt with the issues, correcting not only your direct claims but the underlying logical errors and conceptual errors, and you come to me with a chirpy little "I have read through all of your responses since my comment numbered 60 in this thread . . .," all you are telling me is that you waited till the points you need to respond to were buried under further posts and exchanges. Put that with the pattern of willfully stating what you know or should know is false and/or misleading, and I am not impressed. I am in a break for a moment with a client, so I give you the opportunity to respond seriously on merits above, and while you are at it, to explain yourself on the concerns addressed here. Then, we would have a basis for a fresh, serious start. GEM of TKI
kairosfocus
May 19, 2011 at 08:04 AM PDT
KF: I agree with MathGrrl (even if she does have weird spelling conventions), no evolutionary process is a random search. But I was also making the point that IF it was a random search the process would not necessarily continue past a viable solution and that, if the search is random, a workable solution might arise at any time. But they're not random searches so your upper bound is only the most extreme case. Also probabilistic arguments are tricky. I'm sure some of you are aware of the counter-intuitive result when asking the question: how many people do you need for the probability of two (or more) of them having the same birthday (date and month, NOT date, month and year) to be one half?
ellazimm
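(For the curious, the birthday question mentioned in the comment above can be settled with a short, illustrative calculation assuming 365 equally likely birthdays:)

def prob_shared_birthday(n, days=365):
    # probability that at least two of n people share a birthday
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

n = 1
while prob_shared_birthday(n) < 0.5:
    n += 1
print(n, prob_shared_birthday(n))   # 23 people, probability ~0.507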
May 19, 2011 at 08:02 AM PDT
Chris, we were discussing the relative civility of ID proponents and opponents on this forum. Would you care to comment on Joseph as opposed to Mathgrrl?
markf
May 19, 2011 at 07:58 AM PDT
#91 Chris I am sorry I didn't get back to you. I have been rather busy but managed to get home early today. I appreciate your invitation to read Meyer's book (and even send a copy to me). I am unwilling to buy it (I have spent too much money on ID books that repeat the same errors in different ways already) and I doubt it will be in the library. Perhaps you could contact me by e-mail: mark dot t dot frank at gmail dot com?markf
May 19, 2011 at 07:50 AM PDT
#88 Joseph:
What I said is that because there are 4 possible nucleotides that means, per Shannon, there are two bits of information per nucleotide. I have explained this several times. Apparently I was correct and you are a waste of time.
I have supplied the formula for Shannon information. Please explain how you conclude that there are four bits per nucleotide using that formula - without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
markf
May 19, 2011 at 07:42 AM PDT
And MathGrrl, It has been demonstrated that ev is a targeted search. Sorry but you lose...Joseph
May 19, 2011 at 07:12 AM PDT
kairosfocus, First I count the bits- via nucleotides- and then I check on the variation tolerance to get the specification via Durston, et al's metric. I provided Durston's paper in MathGrrl's guest post. The point in counting first is that this gives me the upper limit of information (that may be specified). It is like resistors in parallel- I look at the values and know the total R will be less than the lowest value resistor in the parallel network. That means if I do the calculation and come up with a number greater than the lowest R I did something wrong. The same goes for SI/ CSI. Once I know the information carrying capacity I know the final number (based on the specification) cannot be greater than that.
Joseph
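(The parallel-resistor analogy in the comment above is easy to make concrete; the branch values below are arbitrary illustrations. The combined resistance always comes out below the smallest branch, just as the specified information cannot exceed the raw carrying capacity:)

def parallel_resistance(resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

branches = [100.0, 220.0, 470.0]      # ohms, made-up values
total = parallel_resistance(branches)
print(total, total < min(branches))   # ~60 ohms, True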
May 19, 2011 at 07:09 AM PDT
Hello again Mark, I didn't hear back from you regarding "Signature in the Cell". That's a shame, because if you read Chapter 8 'Chance Elimination and Pattern Recognition' it would offer you some answers to the questions you pose about Information Theory.Chris Doyle
May 19, 2011 at 07:07 AM PDT
Hi MathGrrl, As you've begun a theme of unanswered posts, I wonder if you'd be so kind as to respond to a post I addressed to you 6 months ago. You can find it here: https://uncommondescent.com/evolution/can-you-say-weasel/#comment-366931 Many Thanks, Chris PS. Check out post 23 on this thread for the answers you're looking for from kairosfocus. There's a difference between you not liking the answer and not being answered at all.Chris Doyle
May 19, 2011 at 07:00 AM PDT
Yes, it is aligned with Dembski’s description and I have explained the mathematical rigor. MathGrrl
Simply asserting this does not make it so.
Strange that I provided Stephen C Meyer to support my claim. And anyone who read and understood NFL knows what I say is not an assertion. I have nothing else to say to you- you are a waste of time and bandwidth.
Joseph
May 19, 2011 at 06:58 AM PDT
MarkF;
You can of course define CSI as 2 bits per nucleotide.
You don't have any idea what you are talking about. What I said is that because there are 4 possible nucleotides that means, per Shannon, there are two bits of information per nucleotide. I have explained this several times. Apparently I was correct and you are a waste of time.
Joseph
May 19, 2011 at 06:53 AM PDT
Joseph #81
me- you cannot just look at a gene or amino acid and work out the amount of Shannon information it contains. Joseph - Yes, you can. me - You need to understand the context in which that gene or amino acid was created to calculate the Shannon information. Joseph - Good luck showing that to be true.
OK. I will give it a try (I have done this many times before - but not recently.) The formula for the Shannon information in any message is: -log2 P(i) where P(i) is the probability of the observed outcome. The two issues are: 1) How do you define the outcome? 2) How do you calculate the probability of the outcome? The definition of the outcome depends on the specification e.g. if you throw a die, do you define the outcome as a six or as an even number? If it is a gene are you talking about that exact sequence of nucleotides, any sequence with a similar function, any sequence that would not affect the organism's fitness, or what? The subjective nature of the specification for a gene or an amino acid is what Heinrich was concerned with. The probability is even harder. You appear to have simply assumed that all nucleotides are equally likely. But genes are not created by throwing nucleotides together at random. They are created by processes such as duplication, transposition, inversion and, of course, point mutation. To assign a probability to a specific gene would imply knowing in some detail the process by which it arose. You can of course define CSI as 2 bits per nucleotide. You can define it as anything you like. But if you do so that is not Shannon information and you have to wonder what significance the number has.
markf
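(A small illustration of the point at issue in this exchange: the surprisal -log2 P depends entirely on the probability model assumed for the outcome. Equiprobable, independent bases give 2 bits each; a skewed model gives a different per-base figure. The skewed frequencies below are invented, not measured:)

import math

def surprisal_bits(p):
    return -math.log2(p)

print(surprisal_bits(0.25))   # 2.0 bits if each of A/C/G/T has probability 1/4

skewed = {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}   # hypothetical biased composition
average_bits = sum(p * surprisal_bits(p) for p in skewed.values())
print(average_bits)           # ~1.72 bits per base under that model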
May 19, 2011 at 06:11 AM PDT