
FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the log reduction of the chi metric presented in the recent CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, writing D2 for the specification factor ϕ_S(T), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4, where Ip = – log2(p) and K2 = log2(D2)

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
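For readers who want to check the threshold arithmetic, here is a minimal Python sketch (my own illustration, not part of the original derivation; the names chi_500 and chi_1000 are simply labels for eqns n5 and n6):

import math

# Check the threshold conversions used in eqns n1 - n6a above:
print(math.log2(10**120))                  # ~398.6, so 10^120 ~ 2^398
for bits in (500, 1000, 1024):
    exp10 = bits * math.log10(2)
    mantissa = 10 ** (exp10 - int(exp10))
    print(f"2^{bits} ~ {mantissa:.2f}e{int(exp10)}")
# 2^500  ~ 3.27e150
# 2^1000 ~ 1.07e301
# 2^1024 ~ 1.80e308

def chi_500(ip_bits):
    """Eqn n5: functionally specific bits beyond the 500-bit (solar system)
    threshold; a negative result means we are within the threshold."""
    return ip_bits - 500

def chi_1000(ip_bits):
    """Eqn n6: bits beyond the 1,000-bit (observed cosmos) threshold."""
    return ip_bits - 1000

print(chi_500(140))    # -360, i.e. within the threshold (cf. point e below)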

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw storage capacity in bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].
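For concreteness, here is a small Python sketch (again my own, not part of the original calculation) that recomputes the “bits beyond the threshold” figures above from Durston’s published Fit values:

THRESHOLD = 500   # bits, the solar-system search-resource threshold

durston_fits = {                  # family: (residues, Fits from Durston's Table 1)
    "RecA":      (242, 832),
    "SecY":      (342, 688),
    "Corona S2": (445, 1285),
}

for name, (residues, fits) in durston_fits.items():
    chi = fits - THRESHOLD                   # bits beyond the threshold
    raw_capacity = residues * 4.32           # log2(20) ~ 4.32 bits per residue
    print(f"{name}: {chi} bits beyond the threshold "
          f"(raw storage capacity ~{raw_capacity:.0f} bits)")
# RecA: 332, SecY: 188, Corona S2: 785 bits beyond -- matching results n7 above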

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits, and we compare it to a threshold set by a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI) is sufficiently isolated that the accessible search resources make it maximally unlikely that it is best explained by unintelligent causes, i.e. blind chance plus mechanical necessity. Instead, the best, and empirically massively supported, causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window
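As a quick check on the arithmetic in points c and d, here is a short sketch of my own (the 10^102 figure is simply taken from point c as stated):

import math

# Search resources: at most ~10^102 Planck-time quantum states for our solar
# system since the big bang (the bound quoted in point c), expressed in bits:
print(math.log2(10**102))      # ~338.9 bits of "search capacity"

# A 500-bit configuration space holds ~10^150 possibilities:
print(500 * math.log10(2))     # ~150.5

# Gap between the space and the available resources, in orders of magnitude:
print(150 - 102)               # 48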

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. The reduced Chi metric still returns a value; it simply tells us that we are falling short of the threshold, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us that instances of this could happen by chance and trial-and-error testing. Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of the form

G(s) = [(s – z1)(s – z2) . . . ]/[(s – p1)(s – p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold criterion; and when poles approach the threshold from the LH half-plane, the tendency shows up in the frequency response as detectable peakiness.
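For those who want the control-systems analogy made concrete, here is a toy sketch of my own (using numpy; the example polynomials are made up) that flags a system as unstable when any pole of G(s) lies in the right half-plane:

import numpy as np

def poles_and_stability(denominator_coeffs):
    """denominator_coeffs: coefficients of the denominator polynomial of G(s),
    highest power first, e.g. s^2 + 2s + 5 -> [1, 2, 5]."""
    poles = np.roots(denominator_coeffs)
    stable = bool(np.all(poles.real < 0))   # threshold test: all poles in LH plane?
    return poles, stable

print(poles_and_stability([1, 2, 5]))      # poles -1 +/- 2j     -> stable
print(poles_and_stability([1, -1, 4]))     # poles 0.5 +/- 1.94j -> unstable (RHP)
print(poles_and_stability([1, 0.2, 25]))   # poles -0.1 +/- 5j: stable but lightly
                                           # damped, i.e. the "peakiness" of point l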

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

Ek = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
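One plausible way to reproduce the order of magnitude of the P values Axe quotes is sketched below (my own reconstruction; it assumes substitution-only “typos” over a 27-symbol alphabet of 26 letters plus the space, and Axe’s exact counting convention may differ):

from math import comb, log10

L, A = 42, 27                    # phrase length, alphabet size (26 letters + space)
total = A ** L                   # whole sequence space, ~1.3e60

def acceptable(max_mutations):
    """Strings within max_mutations single-character substitutions of the target."""
    return sum(comb(L, k) * (A - 1) ** k for k in range(max_mutations + 1))

for m in (0, 4, 5):
    P = acceptable(m) / total
    print(f"up to {m} mutations tolerated: P ~ {P:.1e}")
# ~7.6e-61, ~3.9e-50, ~7.8e-48 -- the same ballpark as the one-in-10^60,
# 10^50 and 10^47 figures quoted above.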

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, the subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, provided they are warranted on empirical and analytical grounds. As has been done for the Chi metric, in reduced form.  END

Comments
...is some manipulation of logarithms with numbers of unknown provenance.
One does not "manipulate" logarithms. Why do you call yourself MathGrrl?Mung
May 11, 2011, 07:38 AM PDT
Please provide a mathematically rigorous definition of CSI in response to this comment. Please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread
Deja vu all over again. Broken record. Boring. MathGrrl, you're not interested in moving the debate along. You have nothing to offer beyond repeating ad nauseam the same two demands. If you were truly interested, you would do, or at least attempt to do, what vjtorley requested you to do. You've asserted that ev can generate CSI without offering a shred of evidence, and Schneider seems to know enough about CSI to make the same claim on his web site. So are you now retracting your claim about ev?Mung
May 11, 2011, 07:34 AM PDT
[MG:] Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski, 20 --> This is in the teeth of analysis and citations already presented [cf points 10 - 11 in the Original CSI Newsflash Post onlookers], which show that Durston et al provided a metric of Information as actually used by building on the H-metric, average information per symbol based on a weighted average: H = - [SUM on i] pi log pi 21 --> Once we have this value in hand, it can easily be substituted into the Ip slot in the log reduced Dembski expression, and as was again excerpted in the original post, it yields values of information beyond the threshold for some of the values in the Table 1 of protein families. 22 --> If you are too closed-minded or lazy to read the information given and respond on its merits, instead of indulging in selectively hyperskeptical and strawman tactic dismissals, that is not my fault. since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency. 22 --> MG, you here acknowledge that Dembski has provided a definition. Of course, in your view it is not "rigorous," so you need to provide an explanation of why it is not. 23 --> Going further, the inference to design on CSI or more usually FSCI, is not a matter of toeing Dembski's line, it is a matter of inference to best explanation of an observed phenomenon remarked on by say Orgel and Wicken, that Dembski and others have provided relevant mathematical models for. 24 --> Can you kindly give us the best explanation for the text of your post: lucky noise or MG, a blog commenter? (And onlookers,this is exactly what I immediately r4esponded to MG's guest post at UD on, which she has never cogently responded to.) From my perusal of both authors, I don’t believe such a reconciliation is possible. 25 --> Scroll up to this thread's original post and see just how easily the two can be integrated, once you apply the log reduction and get to information in specified bits beyond a threshold. then follow the link already given to see the citation from Durston et al that supports that insertion. 26 --> Just for completeness, let me clip from the 2007 FITS metric paper, as cited in eh CSI Newsflash OP:
Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribsomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
The closest that I have seen you come to actually providing a calculation for (the yet to be rigorously defined) CSI is some manipulation of logarithms with numbers of unknown provenance. 27 --> Another lie, the sources of the numbers clipped for illustrative purposes were given. And, where I made an error [it was 22 BYTES], I have corrected it with a strike through. In at least one case, YOU were the source. I therefore propose that we clear the air and try to make some progress by two means. 28 --> Sorry, it is you who need to clear the air by providing some serious explanations [for weeks now], as already pointed out this morning. THIS IS A TURNABOUT AND FALSE ACCUSATION. First, please provide a mathematically rigorous definition of CSI in response to this comment. 29 --> This talking point has already been adequately rebutted. Adequate conceptions, descriptions and mathematical metrics have long been provided, just hey will never be acceptable to closed minded strawman tactic objectors. Second, please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread: 30 --> This is an OUTRAGE. The answer to this case has been given right from the outset, on first encountering it; and it has been often repeated since on seeing he point over and over. Just, as is plainly the rhetorical agenda, it has been brushed aside. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.” 31 --> The duplicate itself provides no additional information, just a copy [similar to how a mould fossil is just a copy and a copy of software that you download is just a copy, not an original creation]. In short, the base error is conceptual not mathematical. 32 --> But also, copies of digitally coded information [and genetic information is just that], where the scope of FSCI in the copy is beyond the FSCI threshold, are not credibly produced by chance. So, this points to a complex, functionally organised system and process of duplication. A further index of information tracing to design. 33 --> So, the word "simple" is a strawman tactic in itself. Note that discussions of islands of functionality, the computational power of the universe, presumed failures of modern evolutionary theory, Durston’s calculations, etc. are not relevant to answering these questions. 34 --> In short, having dismissed the cogent issues, I insist on my original opinion. This is blatant closed mindedness. The issue is whether or not CSI is a useful metric. Please demonstrate why you think it is. 35 --> Long since done (over and over), just ignored in the rush to push closed minded ideological talking points.>> ______________ In short, as of now, unless some very serious rhetorical gambits are taken back, MG, this will go nowhere. You have long had some serious explaining to do, now including on how you have treated not only the arguments of others, but how you have strawmannised and by implication willfully misrepresented them -- at the very least, by deliberately refusing to engage what they have actually had to say on the merits, then superciliously dismissing and deriding them. GEM of TKIkairosfocus
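A minimal sketch (my own illustration, with made-up alignment data) of the per-site Fits calculation described in the Durston excerpt above, i.e. Fits(site) = log2(20) – H(Xf), summed over aligned sites:

from collections import Counter
from math import log2

def fits(aligned_columns):
    """aligned_columns: one string per aligned site, each character being the
    amino acid observed at that site in one member of the functional family."""
    total = 0.0
    for column in aligned_columns:
        counts = Counter(column)
        n = len(column)
        H = -sum((c / n) * log2(c / n) for c in counts.values())
        total += log2(20) - H          # functional information at this site
    return total

# Toy alignment: 3 sites observed across 8 family members (made-up data)
print(fits(["AAAAAAAA",    # fully conserved site   -> ~4.32 Fits
            "AAAAGGGG",    # two residues, 50/50    -> ~3.32 Fits
            "ACDEFGHI"]))  # highly variable site   -> ~1.32 Fits
# ~8.97 Fits in total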
May 11, 2011, 06:32 AM PDT
MG: By now it is quite apparent that you are simply repeating the same cluster of misleading, false and in some cases accusatory strawman tactic talking points, despite having been corrected repeatedly for the past two months or so. You know or should know better. So, please, do better; or we will be warranted to conclude that we are dealing with closed minded, fundamentally dishonest rhetorical talking points. As, is all too common on the part of objectors to design theory and thought. (Just cf the UD Weak Argument Correctives, top right this and every UD page to see what I mean.) Anyway, one last time, I will respond, point by point, to the objections you have clipped and put up above. Now, too, I have no intention to make any further comments at MF's blog (remember the underlying attitude problem by MF . . . ) save brief notes for the record, and if there is anything of substance, I am sure that this can be reproduced here by those interested to find out what a cogent answer looks like. Clipping and interleaving responses on points: _______________ >> You repeatedly claim, in the thread previous to this one, that CSI has been rigorously defined mathematically, 1 --> This is a strawman caricature. Not a promising start. 2 --> CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact, one that Orgel and Wicken have aptly summarised and which Dembksi and others have subsequently provided mathematical analyses and models and metrics for. 3 --> Whatever objections one may have to the various models, the empirical reality still needs to be addressed. 4 --> And, given that the metric models build on a STANDARD metric for information, they are fundamentally valid. That standard metric, tracing to Hartley and Shannon [cf Taub and Schilling as repeatedly cited here at UD and at MF's blog, and my discussion on Connor's derivation, here in my always linked briefing note], is: Ik = log(1/pk) = - log pk 5 --> As the Original Post again excerpts, Dembski's metric boils down to a measure of functionally specific [self-]information beyond the threshold of sufficient complexity where the empirically and analytically warranted best explanation is intelligence. It therefore imposes a reasonable threshold of complexity, e.g. in reduced form: Chi_500 = Ip - 500, bits beyond the solar system resources threshold 6 --> Equations of values "beyond a threshold" have won at least one Nobel Prize in Physics, that of Einstein, as the Original Post notes. 7 --> In short, the Dembski approach is a reasonable one and provides a metric in general accord with standard usage and metrics of information. It does focus the information on especially functional specificity, but that is a matter of his interest being on that, instead of Shannon's on carrying capacity of telephone lines. And the OP has an addition from Axe on the subject of how such specificity can be observationally demonstrated. Durston et al provided a way to measure functionally specific information for protein families, so it is biologically relevant. 8 --> I am pretty sure that most of us are interested in information that is meaningful and functional, and therefore specific by means of that function and meaning according to the rules of particular codes. 9 --> So, any reasonable person would accept that Dembksi and others have provided useful metrics and models that can be used in SCIENTIFIC -- empirically tested -- investigations. but nowhere do you provide that rigorous mathematical definition. 
10 --> Why do you insist on strawman tactic talking points in the teeth of repeated, cogent correctives? 11 --> I am sure you are aware that Calculus was developed form the 1600's on, and was in routine use for 200 or so years before the "rigorous" foundations for it were worked out from the 1800s on. It turns out that had there been an insistence on such foundations beforehand, the difficulty of getting to that stage would have blocked the road at the outset. The pioneers were correct to use intuitive concepts and practical tests of effectiveness and reliability. In short, they worked on Calculus as a scientific toolkit that was effective and they were right to do so. 12 --> I repeat, this talking point is a strawman tactic. I have insisted that the concept comes first, and is a commonplace of an engineering civilisation: complex, specified information, especially functionally specific complex information, is a characteristic feature and an empirically observable reality, for many, many systems, starting with computers, cars, cell phones, and posts in this blog. Libraries are repositories of CSI and more particularly FSCI. 13 --> Dembski and others have provided useful models and metrics that can be used in empirical investigations, building on a line of work that is 60 years old, and that is all they need to do. You could eliminate the need for your assertions by simply reproducing the definition in response to the challenges you are receiving. 14 --> You have received definitions [what do you think that Ik = - log pk is?], discussions and explanations, repeatedly, only to insist on repeating the same strawman tactic talking points. 15 --> The message you are now communicating is that you are making ideologically motivated closed minded and strawman tactic talking point objections, and will continue to do so regardless of correction or patient explanation. You have also generated a large amount of text without directly addressing the issue at hand, 16 --> This is now an outright slander-filled lie. 17 --> For the record: I -- and many others -- have provided analyses, citations, derivations/ reductions, and successful applications. At every point we have met the same talking point, with ZERO indication that you are interacting with the material provided. namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it. 18 --> This is a lie, based on the trick of selective hyperskepticism: "rigour" means that anything you want to object to will be deemed not rigorous and you will simply demand "rigour" when in fact YOU have blatantly blundered by confusing a log reduction for a probability calculation, and when asked to explain yourself, have dodged aside for a time only to come back to repeat the same tired talking points. After two months, we can find nowhere in the exchanges at UD any demonstration of your capacity to engage the substantial empirical and mathematical matters at stake. 19 --> As someone with a mind of his own, I also reserve the right to adjust or develop Dembksi's work, along lines that suit my interest. I am not a slave or robot of Dembski, locked into whatever he has said in some paper wherever. the issue is what is empirically credible and well warranted, not what Dembski may or may not have said whenever or wherever on whoever's interpretation. [ . . . ]kairosfocus
May 11, 2011, 06:31 AM PDT
Alex:
However, how will one know where in the mighty big stream of characters the words of the great bard start? Of course, you need the works of Shakespeare in your hand before the experiment starts, so that you can make comparisons. In other words, the monkeys fail to produce new information; they only reproduce an existing work through a very inefficient method.
That is, you are seeing the fatal flaws of the process. Unintelligent processes are simply not configured to create functional information, and, if they happen to throw out relevant configs of entities, they are utterly unlikely to have correlated systems to put the symbol strings to good use. The rhetorical metaphor was more effective in the days when life was thought to be a sort of simple jelly, called protoplasm. Now that we know we are dealing with molecular nanotechnology and digital information processing, that is a very different kettle of fish indeed. GEM of TKIkairosfocus
May 11, 2011, 04:53 AM PDT
Posted on Mark Frank's blog: CSI- AGAIN- CSI is Shannon information of a certain complexity with meaning/ function. Using Shannon we see that there are 2 bits of information per nucleotide and 6 bits per amino acid (4 possible nucleotides = 2^2 = 2 bits; 64 possible codons for amino acids and STOP = 2^6 = 6 bits per amino acid). That said, part of the “specification” is figuring out the variation tolerance, which is what Durston did. What that means is if we have a functional protein of 100 amino acids- a protein that cannot suffer any variation- then it has 606 bits of specified information, which means it has CSI. Now if that protein will function the same no matter what the amino acid sequence is then it doesn’t have any specification at all. And then there is everything in between, and that is what needs to be determined. That said, there isn’t any justification for the claim that gene duplication is a blind watchmaker process. added: These people are so intellectually dishonest it isn't worth the effortJoseph
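A quick sketch (mine) of the bit counts in the comment above; reading the 606-bit figure as 101 codons (100 amino acids plus a stop codon) is an assumption, not something stated in the comment:

from math import log2

bits_per_nucleotide = log2(4)     # 2 bits
bits_per_codon      = log2(64)    # 6 bits (61 sense codons + 3 STOPs = 64)

# 100-residue protein tolerating no variation:
print(101 * bits_per_codon)       # 606.0 if the stop codon is counted
print(100 * bits_per_codon)       # 600.0 if it is not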
May 11, 2011, 04:31 AM PDT
MG: I think the astute onlooker will see that the excerpted objection answered in the original post above, suffices to show that my characterisation of the continued objections at MF's blog is materially accurate. I will respond on points later on, DV, but for now, please note that -- as long since pointed out here and at MF's blog -- for weeks now, you have some fairly serious explaining to do, on issues summarised here. In particular, you need to explain your persistent resort to repeated talking points in the teeth of cogent replies, e.g. on the alleged meaninglessness of CSI (and by extension, FSCI, as it is a subset). In that context, you need to explain your attempt to dismiss a log reduction to info beyond a threshold as a "probability" calculation. While you are at it, kindly explain Schneider's attempt to "correct" Dembski in identifying that the Hartley-suggested quantification is a quantitative definition of information: Ik = - log pk In addition, you need to explain the implications of such claimed "meaninglessness" in the light of the usage by Orgel and Wicken. You need to explain your four "examples," and in particular to respond to the information on the nature of ev unearthed by Mung and posted in the already linked thread. And, last but not least, you need to explain your resort to the suggestion of persecution of science, by citing Galileo's remark when he had been forced to publicly recant of his theories. Especially, in light of the evidence that we are seeing an imposition of a priori materialism on science, especially origins science, that needs to be corrected. GEM of TKIkairosfocus
May 11, 2011, 04:27 AM PDT
MathGrrl, As I am not all that well versed in math, it seems to me, as a outside observer, that you are making two claims. One, you are claiming that ID does not have a rigid mathematical foundation, and Two, by default of your first claim, you are claiming that neo-Darwinism does have a rigid mathematical definition.??? Now it seems to me, as a outside observer, that your claims are not bore out by the empirical evidence in the least. You have claimed Schneider's evolution algorithm as 'proof' that the universal probability bound for functional information generation has been violated. Now I appreciate such confidence in a woman, but perhaps you can see my skepticism in that I don't see how in the world that a program that is 'designed' to converge on a solution within well set, predefined, boundaries has anything to do with the grand neo-Darwinian claim that purely random, natural, processes can generate the unmatched levels of information we find in life. i.e. If neo-Darwinism truly is capable of generating the unmatched levels of information we find in life, which is far, far, more advanced than anything we have ever devised in our most advanced computer programs, should not you be able to set RM/NS to the task of generating computer programs in the first place, that exceed what we have done, indeed what Schneider has done, instead of programming a computer to 'converge on a solution'? i.e. Why not open up Schneider's O/S to mutations and see how far his algorithm will go towards improving what he himself has designed??? notes: MathGrrl, seeing your great concern for mathematical rigidity, perhaps you can pull the plank out you own eye first before you worry about the splinter in a other eye??? Perhaps you would care to apply for this job at Oxford which is seeking to supply a mathematical foundation for Darwinism??? Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. … http://biologicinstitute.org/2011/05/05/oxford-university-seeks-mathemagician/ ---------------- further notes: Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.metacafe.com/watch/4165203 Waiting Longer for Two Mutations, Part 5 - Michael Behe Excerpt: the appearance of a particular (beneficial) double mutation in humans would have an expected time of appearance of 216 million years, http://behe.uncommondescent.com/2009/03/waiting-longer-for-two-mutations-part-5/ Experimental Evolution in Fruit Flies (35 years of trying to force fruit flies to evolve in the laboratory fails, spectacularly) - October 2010 Excerpt: "Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles.,,, "This research really upends the dominant paradigm about how species evolve," said ecology and evolutionary biology professor Anthony Long, the primary investigator. 
http://www.arn.org/blogs/index.php/literature/2010/10/07/experimental_evolution_in_fruit_flies Michael Behe on Falsifying Intelligent Design - video http://www.youtube.com/watch?v=N8jXXJN4o_A MathGrrl after you get through cleaning your own house, perhaps you would care to address this, Quantum Information/Entanglement In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Yet it is also very interesting to note, in Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space. i.e. Theism has always postulated a 'eternal soul' for man that lives past the death of the body. Traveling At The Speed Of Light - Optical Effects - mathematical model video http://www.metacafe.com/watch/5733303/ The NDE and the Tunnel - Kevin Williams' research conclusions Excerpt: I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven.(Barbara Springer)bornagain77
May 11, 2011, 04:20 AM PDT
MathGrrl:
You have also generated a large amount of text without directly addressing the issue at hand, namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it.
I provided that for you- complete with examples. MathGrrl:
Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski, since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency. From my perusal of both authors, I don’t believe such a reconciliation is possible.
What you "believe" is irrelevant. As Dembski has written, specified complexity and (C)SI refer to biological function, which is what Durston was referring to- biological function. Part of the specification is how much variation is allowed- that is what Durston was doing.Joseph
May 11, 2011, 04:19 AM PDT
kf, I have a problem with the monkeys typing as an example of random sources generating information. The usual argument goes like this: a large enough number of monkeys, given long enough time, will type all the works of Shakespeare. However, how will one know where in the mighty big stream of characters the words of the great bard start? Of course, you need the works of Shakespeare in your hand before the experiment starts, so that you can make comparisons. In other words, the monkeys fail to produce new information; they only reproduce an existing work through a very inefficient method. Now, if this is true for Shakespeare, am I not right in saying that our monkeys do not produce new information at all? Consequently, if typing monkeys are somewhat analogous to DNA copying with random mutations, as is claimed, is it not so that random mutations just plainly do not produce new information, but only destroy or distort existing information?Alex73
May 11, 2011, 02:31 AM PDT
kairosfocus, By the way, as noted by Toronto (http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2138) on Mark Frank's blog, a number of the participants there are not allowed to post comments here at UD. In the spirit of open discussion, I hope you will respond there.MathGrrl
May 11, 2011, 02:27 AM PDT
kairosfocus,
Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the chi metric in the recent CSI Newsflash thread.
That doesn't reflect my understanding of the issues being raised there. The topic being discussed on Mark's blog go more to the fundamental concept of CSI and its application. Here is my latest comment on that thread (http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2102), which I hope provides some clarification: [ begin copied comment ] kairosfocus, You repeatedly claim, in the thread previous to this one, that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition. You could eliminate the need for your assertions by simply reproducing the definition in response to the challenges you are receiving. You have also generated a large amount of text without directly addressing the issue at hand, namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it. Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski, since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency. From my perusal of both authors, I don’t believe such a reconciliation is possible. The closest that I have seen you come to actually providing a calculation for (the yet to be rigorously defined) CSI is some manipulation of logarithms with numbers of unknown provenance. I therefore propose that we clear the air and try to make some progress by two means. First, please provide a mathematically rigorous definition of CSI in response to this comment. Second, please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread:
A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is "Produces at least X amount of protein Y."
Note that discussions of islands of functionality, the computational power of the universe, presumed failures of modern evolutionary theory, Durston’s calculations, etc. are not relevant to answering these questions. The issue is whether or not CSI is a useful metric. Please demonstrate why you think it is. [ end copied comment ] I do hope you'll return to that thread to address these points.MathGrrl
May 11, 2011, 02:26 AM PDT
F/N: Koonin is appealing to the cosmic inflation form of the multiverse, in order precisely to overcome the search resources challenge that is discussed above. As he observes:
Recent developments in cosmology radically change the conception of the universe as well as the very notions of "probable" and "possible". The model of eternal inflation implies that all macroscopic histories permitted by laws of physics are repeated an infinite number of times in the infinite multiverse. In contrast to the traditional cosmological models of a single, finite universe, this worldview provides for the origin of an infinite number of complex systems by chance, even as the probability of complexity emerging in any given region of the multiverse is extremely low. This change in perspective has profound implications for the history of any phenomenon, and life on earth cannot be an exception.
Of course this raises the point that there is but one actually observed cosmos, and so this is a resort to speculative metaphysics; so also it should sit to the table of comparative difficulties -- on factual adequacy, coherence and explanatory simplicity vs simplistic-ness and/or ad hoc-ery patchworks -- with live options, without censorship. Including, that the cosmos is designed. Also, he glides over the point that the "cosmos bakery" to produce the relevant cluster of possible worlds in a distribution that is happily clustered on a zone in which life-permitting sub-cosmi are possible, is fine-tuned. Moreover, such radical expansion of contingency demands necessary being capable of such fine tuning as the causal root. That -- and recall we have now been in metaphysics not physics for the past several minutes -- strongly points to a necessary being with purpose and power to create a multiverse style cosmos. Multiverses with sub-cosmi fine-tuned for C-chemistry, cell based intelligent life point to a cosmos designer. Which immediately drastically undermines the reason to infer to such worlds -- inflation of material resources as imagined, so that the sort of probabilistic or search space, needle in haystack hurdles as the original post points out, are surmounted. So, the multiverse "solution" to the search resources challenge is self-undermining. But, it has this significance, those who advocate it are at least willing to face the infinite monkeys challenge. In Koonin's words:
Origin of life is a chicken and egg problem: for biological evolution that is governed, primarily, by natural selection, to take off, efficient systems for replication and translation are required, but even barebones cores of these systems appear to be products of extensive selection. The currently favored (partial) solution is an RNA world without proteins in which replication is catalyzed by ribozymes and which serves as the cradle for the translation system. However, the RNA world faces its own hard problems as ribozyme-catalyzed RNA replication remains a hypothesis and the selective pressures behind the origin of translation remain mysterious. Eternal inflation offers a viable alternative that is untenable in a finite universe, i.e., that a coupled system of translation and replication emerged by chance, and became the breakthrough stage from which biological evolution, centered around Darwinian selection, took off. A corollary of this hypothesis is that an RNA world, as a diverse population of replicating RNA molecules, might have never existed. In this model, the stage for Darwinian selection is set by anthropic selection of complex systems that rarely but inevitably emerge by chance in the infinite universe (multiverse).
This of course begs the question of the vastly more immense needle in haystack challenge of getting novel body plans, dozens of times over, in the compass of a single solar system. But at least, it admits the significance of the search space problem for spontaneous origin of a metabolising, vNSR self-replicating automaton. Against that backdrop, the simplistic bare bones model for first life highlights the scope of the challenge, for recall, just 125 bytes worth of info capacity for the requisite systems overwhelms the search capacity of the only actually observed cosmos. Clipping:
The origin(s) of replication and translation (hereinafter OORT) is qualitatively different from other problems in evolutionary biology and might be viewed as the hardest problem in all of biology. As soon as sufficiently fast and accurate genome replication emerges, biological evolution takes off [i.e. K fails to understand the body plan origination challenge -- looks like we need an infinity of life originating worlds to get to one with what we see, on top of the infinity of worlds to get to just one life originating one, we are looking at reductio ad absurdum] . . . . The crucial question, then, is how was the minimal complexity attained that is required to achieve the threshold replication fidelity. In even the simplest modern systems, such as RNA viruses with the replication fidelity of only ~10-3, replication is catalyzed by a complex protein replicase; even disregarding accessory subunits present in most replicases, the main catalytic subunit is a protein that consists of at least 300 amino acids [20]. The replicase, of course, is produced by translation of the respective mRNA which is mediated by a tremendously complex molecular machinery. Hence the first paradox of OORT: to attain the minimal complexity required for a biological system to start on the path of biological evolution, a system of a far greater complexity, i.e., a highly evolved one, appears to be required. How such a system could evolve, is a puzzle that defeats conventional evolutionary thinking . . . . The MWO model dramatically expands the interval on the axis of organizational complexity where the threshold can belong by making emergence of complexity attainable by chance (Fig. 1). In this framework, the possibility that the breakthrough stage for the onset of biological evolution was a high-complexity state, i.e., that the core of the coupled system of translation-replication emerged by chance, cannot be dismissed, however unlikely (i.e., extremely rare in the multiverse). The MWO model not only permits but guarantees that, somewhere in the infinite multiverse – moreover, in every single infinite universe, – such a system would emerge. The pertinent question is whether or not this is the most likely breakthrough stage the appearance of which on earth would be explained by chance and anthropic selection. I suggest that such a possibility should be taken seriously . . .
An infinity of unobserved infinities! The ultimate speculative complex-ification of the explanation. Without empirical basis on observational tests. (Apart from, the implicit, on evo mat assumptions, this is the sort of thing we need to get to what we see. In short, an implicit acknowledgement of the search space challenge implied by the Chi metric and the observed complex functional organisation of life -- the only biological life we do observe.) Reductio. Even, with the sort of simplifications of suggested biological life suggested by BA's clip above. I sing a song To weave a spell . . . Of needles And, haystacks . . . With infinities Of monkeys Pounding On keyboards . . . GEM of TKIkairosfocus
May 11, 2011, 01:18 AM PDT
thankskairosfocus
May 10, 2011, 05:10 PM PDT
Kairos: I don't know about the specific site, but here is the paper they are talking about: The cosmological model of eternal inflation and the transition from chance to biological evolution in the history of life - Koonin http://www.biology-direct.com/content/2/1/15 Of note: I have not heard materialists talk much about the many-worlds hypothesis, save for Koonin in this paper and I believe one more paper. Yet with 'quantum information' now found on a massive scale in molecular biology, whether they realize it or not, they must appeal to the 'science destroying' many-worlds scenario, since quantum information is not reducible to a material basis (A. Aspect)bornagain77
May 10, 2011, 04:57 PM PDT
BA: An interesting simple model. It turns out, though, that the source is now banned from where I am, so could you give the onward source? (Don't you ever think the Internet is censorship-free.) GEM of TKI PS: I have decided to fill in some equations and their contexts, so that we can get a better understanding of what they are about and how they become meaningful and useful above and beyond niceties of abstract Mathematics. On this point, it bears noting that calculus was developed and in routine use for nearly 200 years before its rigorous underpinnings were identified and worked out. Some of the objectors in recent weeks know or should know that.kairosfocus
May 10, 2011, 04:44 PM PDT
,,, This may be of interest; Even the low end 'hypothetical' probability estimate given by evolutionist, for life spontaneously arising, is fantastically impossible: General and Special Evidence for Intelligent Design in Biology: - The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of: - two rRNAs with a total size of at least 1000 nucleotides - ~10 primitive adaptors of ~30 nucleotides each, in total, ~300 nucleotides - at least one RNA encoding a replicase, ~500 nucleotides (low bound) is required. In the above notation, n = 1800, resulting in E < 10^-1018. That is, the chance of life occurring by natural processes is 1 in 10 followed by 1018 zeros. (Koonin's intent was to show that short of postulating a multiverse of an infinite number of universes (Many Worlds), the chance of life occurring on earth is vanishingly small.) http://www.conservapedia.com/General_and_Special_Evidence_for_Intelligent_Design_in_Biologybornagain77
May 10, 2011, 02:51 PM PDT
F/N: Me ca'an believe it! I forgot to put in Einstein's Nobel Prize- winning threshold metric equation in points m and n. Duly corrected!kairosfocus
May 10, 2011, 01:26 PM PDT
It is funny that a atheistic materialist would choose to use E = MC^2 as his example to 'unwisely' try to challenge you on this point of information kairos. For E = MC^2, by itself, actually points to a higher 'eternal' dimension that is above this 3-Dimensional material dimension, which should be a fairly unnerving thing for materialists!?! Please note in the following video how the 3-Dimensional material world 'folds and collapses' into a tunnel shape around the direction of travel as an observer moves towards the 'higher dimension' of the speed of light, Traveling At The Speed Of Light - Optical Effects - video http://www.metacafe.com/watch/5733303/ As well, Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the preceding video with the 'light at the end of the tunnel' reported in very many Near Death Experiences: The NDE and the Tunnel - Kevin Williams' research conclusions Excerpt: I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven.(Barbara Springer) As well, traveling at the speed of light gets us to the eternal, 'past and future folding into now', framework of time. This higher dimension, 'eternal', inference for the time framework of light is warranted because light is not 'frozen within time' yet it is shown that time, as we understand it, does not pass for light. "I've just developed a new theory of eternity." Albert Einstein - The Einstein Factor - Reader's Digest "The laws of relativity have changed timeless existence from a theological claim to a physical reality. Light, you see, is outside of time, a fact of nature proven in thousands of experiments at hundreds of universities. I don’t pretend to know how tomorrow can exist simultaneously with today and yesterday. But at the speed of light they actually and rigorously do. Time does not pass." Richard Swenson - More Than Meets The Eye, Chpt. 12 Light and Quantum Entanglement Reflect Some Characteristics Of God - video http://www.metacafe.com/watch/4102182 It is very interesting to note that this strange higher dimensional, eternal, framework for time, found in special relativity, also finds corroboration in Near Death Experience testimonies: 'In the 'spirit world,,, instantly, there was no sense of time. See, everything on earth is related to time. You got up this morning, you are going to go to bed tonight. Something is new, it will get old. Something is born, it's going to die. Everything on the physical plane is relative to time, but everything in the spiritual plane is relative to eternity. Instantly I was in total consciousness and awareness of eternity, and you and I as we live in this earth cannot even comprehend it, because everything that we have here is filled within the veil of the temporal life. In the spirit life that is more real than anything else and it is awesome. 
Eternity as a concept is awesome. There is no such thing as time. I knew that whatever happened was going to go on and on.' Mickey Robinson - Near Death Experience testimony 'When you die, you enter eternity. It feels like you were always there, and you will always be there. You realize that existence on Earth is only just a brief instant.' Dr. Ken Ring - has extensively studied Near Death Experiences further note of interest is that atoms have been found to be reducible to quantum information; Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap - 2009 Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts Double-slit experiment Excerpt: In 1999 objects large enough to see under a microscope, buckyball (interlocking carbon atom) molecules (diameter about 0.7 nm, nearly half a million times that of a proton), were found to exhibit wave-like interference. http://en.wikipedia.org/wiki/Double-slit_experiment Dr. Quantum - Double Slit Experiment & Entanglement - video http://www.metacafe.com/watch/4096579bornagain77
May 10, 2011 at 01:24 PM PDT
Mung: Point taken. I would like to see observationally anchored evidence that it is possible to spontaneously assemble a metabolising, self-replicating automaton with a built-in von Neumann self-replicator facility, by blind chance and necessity. (Cf here.) And, that the required codes, algorithms and the like for the vNSR credibly can come about spontaneously.

Further to this, I note that the 1,000-bit needle-in-the-too-big-haystack threshold comes at just 125 bytes' worth of information. I think those with experience of assembly language coding of controllers will back me in my strong doubts that any significant controller can be set up in that space, much less a vNSR.

GEM of TKI

kairosfocus
May 10, 2011 at 12:49 PM PDT
...most of the inferred primordial polypeptide folding units are proposed to be only 40-60 amino acids in length.
Well, obviously it's not because polypeptides with longer amino acid sequences are not possible. We know they are. So why infer shorter lengths? Does it have anything to do with probabilities, as in the longer the sequence, the more unlikely it is, or the larger the search space? IOW, it's an important exercise even for origin-of-polypeptide theories.

Mung
May 10, 2011 at 12:43 PM PDT
Dr Rec: The cases you have in view are of MICRO evo, i.e. adaptations within an island of function. That is not in dispute by anyone, including the Young Earth Creationists.

[F/N: Pardon my partial misreading. DR in no. 3 is in part addressing the origin of proteins, using hypothetical short polypeptides as folding units, his linked abstract in part saying: "'gene duplication and fusion' is the evolutionary mechanism generally hypothesized to be responsible for their emergence from simple peptide motifs." The problem here is that functional proteins for life -- required in clusters -- are not going to be 60 AAs long, so the overall protein sits in a fold domain that is deeply isolated, and there must be a large number of functional proteins from the outset for life to start as a metabolising automaton with an integral von Neumann Self-Replicator. Proteins must fold stably, must fit the key-lock role, and must function chemically or structurally or in whatever way -- not just one at a time, but in interactive clusters in a viable organism that starts from a fertilised ovum and unfolds embryologically into a body plan. That brings right back on the table the issue of the origin of large quantities of functionally specific, complex info, the core challenge for OOL and for macro evo. In the latter case, the idea of genes for proteins assembling themselves by chance in 60-AA blocks, then these blocks coming together -- across a cluster of required proteins and regulatory circuits, by happy coincidence -- to form an embryologically feasible organism, dozens of times over, is so utterly beyond the search capacity of the observed cosmos as to be a reductio ad absurdum on its face. And yet, that seems to be what is being put forward.]

The Chi metric's target is MACRO-evo, the arrival at islands of function for novel body plans. The DNA complement of the first cellular life was credibly about 300 - 1,000 k bases or so, as smaller-genome organisms are incomplete and parasitical. We are looking at over 100 k bits worth of info there. The config space is well beyond the thresholds. And, to get to embryologically feasible novel body plans we are looking at 10 - 100+ M bases of DNA, a major challenge for the evo mat view.

As to the notion that there is a smoothly branching tree of life from unicellular organisms to life forms as we see them, there is no credible evidence for that, and every evidence against it. In short, there is a reason why the observed sudden appearances, stasis, and disappearances or continuity to the modern world that dominate the fossil record are there.

GEM of TKI

kairosfocus
May 10, 2011 at 11:25 AM PDT
Mung: Chi is simply a Greek letter that Dembski chose. The units for CSI are in bits beyond the threshold. I had a Physics prof once who, when he ran out of Latin and Greek, would resort to one of the Indian scripts. And Cantor, a Jew, used Aleph in his famous result on transfinite numbers.

Entropy is a measure of micro-scale disorder that is reflected up at the macro-scale through two micro-related variables, as Clausius used: dS >/= d'Q/T, which gives units of J/K in the SI system -- Joules per Kelvin. [Degrees K were dropped decades ago.] (Heat is an increment in random motion due to radiation, conduction or convection, and temperature is a measure of average random energy per microscopic degree of freedom; often translation, rotation and vibration.) The ignorance in question is that about the specific distribution of masses, momenta, energies etc. at the micro-level, given that there are a great many specific microstates consistent with a lab-level macrostate of given temperature, pressure, volume, mass, magnetic moment, etc.

Shannon information is a weighted average information per symbol:

H = - [SUM on i] p_i log p_i, in bits if the log is base 2.

It is connected to thermal entropy, as Jaynes pointed out and as others have now substantiated, so that the hot controversy (over subjectivity in the hallowed halls of physics) is dying off.

GEM of TKI

kairosfocus
May 10, 2011 at 11:18 AM PDT
kairosfocus, I think this is an important and interesting exercise. However, from an evolutionary standpoint, most of the inferred primordial polypeptide folding units are proposed to be only 40-60 amino acids in length. Using your calculation these (and many modern proteins in the table) seem to fall well under the threshold. See for example here:

Experimental support for the evolution of symmetric protein architecture from a simple peptide motif
www.pnas.org/content/early/2010/12/15/1015032108.short

DrREC
May 10, 2011 at 10:54 AM PDT
Hi kf,

Thanks for the new posting. Why do you call it a "chi expression" or "chi metric"? Is it just because of the Greek letter on the left, or is there some other significance?

Also, I see you've even answered a question I had about entropy, which is whether it is a measure, and what it is a measure of. So from that I then ask, is Shannon Information also a measure, and if so, what is it a measure of? Is it also a measure of the ignorance of a "receiver" about something?

I think I had an "ah hah!" moment last night, but I need to follow up on it and make sure it wasn't an "ah oops" moment.

________________

ED: Mung, cf the discussion in a previous thread here, on what H -- avg info per symbol -- is about, and how information received reduces uncertainty (concerning the source's state), which implies reduction of ignorance in the case of a potentially knowing subject.

Mung
May 10, 2011 at 10:51 AM PDT
How could I have forgotten Axe's exercise on islands of function!

kairosfocus
May 10, 2011 at 10:40 AM PDT
