
Is the CSI concept well-founded mathematically, and can it be applied to the real world, giving real and useful numbers?


Those who have been following the recently heated-up exchanges on the theory of intelligent design, and on the key design inference from tested, empirically reliable signs through the ID explanatory filter, will know that a key move in recent months was the meteoric rise of the mysterious internet persona MathGrrl (who is evidently NOT the Calculus Prof who has long used the same handle).

MG, as the handle is abbreviated, is well known for “her” confident-manner assertion — now commonly stated as if it were established fact in the Darwin Zealot fever swamps that are backing the current cyberbullying tactics that have tried to hold my family hostage — that:

without a rigorous mathematical definition and examples of how to calculate [CSI], the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

As the strike-through emphasises, every one of these claims has long been exploded.

You doubt me?

Well, let us cut down the clip from the CSI Newsflash thread of April 18, 2011, which was further discussed in a footnote thread of May 10th (H'mm, the anniversary of the German attack in France in 1940), and which was clipped again yesterday at fair length.

( BREAK IN TRANSMISSION: BTW, antidotes to the intoxicating Darwin Zealot fever swamp “MG dunit” talking points were collected here — Graham, why did you ask the question but never stop by to discuss the answer? And the “rigour” question was answered step by step at length here. In a nutshell, as the real MathGrrl will doubtless be able to tell you, the Calculus itself, historically, was founded on sound intuitive mathematical insights about limits and infinitesimals, which warranted astonishing, empirically validated successes for 200 years. And when Math was finally advanced enough to provide an axiomatic basis — at the cost of the sanity of a mathematician or two [doff caps for a minute in memory of Cantor] — it became plain that such a basis was so difficult that it could not have been developed in C17. Had there been an undue insistence on absolute rigour as opposed to reasonable warrant, the great breakthroughs of physics and other fields that crucially depended on the power of Calculus would not have happened. For real world work, what we need is reasonable warrant and empirical validation of models and metrics, so that we know them to be sufficiently reliable to be used. The design inference is backed up by the infinite monkeys analysis tracing to statistical thermodynamics, and is strongly empirically validated on billions of test cases, the whole Internet and the collection of libraries across the world being just a sample of the point that the only credibly known source for functionally specific complex information and associated organisation [FSCO/I] is design. )

After all, a bit of  careful citation always helps:

_________________

>>1 –> 10^120 ~ 2^398

2 –> I = – log(p) . . . eqn n2
3 –> So, we can re-present the Chi-metric:
[where, from Dembski, Specification 2005,  χ = – log2[10^120 ·ϕS(T)·P(T|H)]  . . . eqn n1]
Chi = – log2(2^398 * D2 * p)  . . .  eqn n3
Chi = Ip – (398 + K2) . . .  eqn n4
4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . .
6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 to 500 or so bits, the zone is so deeply isolated that a chance-dominated process is maximally unlikely to find it, but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed causes of information beyond such a threshold are the now proverbial intelligent semiotic agents.
8 –> Even at 398 bits that makes sense as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.
9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley . . . .
As in (using Chi_500 for VJT’s CSI_lite [UPDATE, July 3: and S for a dummy variable that is 1/0 accordingly as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W]):
Chi_500 = Ip*S – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5
Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6
Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a
[UPDATE, July 3: So, if we have a string of 1,000 fair coins, and toss at random, we will by overwhelming probability expect to get a near 50-50 distribution typical of the bulk of the 2^1,000 possibilities W. On the Chi_500 metric, I would be high, 1,000 bits, but S would be 0, so the value for Chi_500 would be – 500, i.e. well within the possibilities of chance. However, if we came to the same string later and saw that the coins somehow now had the bit pattern of the ASCII codes for the first 143 or so characters of this post, we would have excellent reason to infer that an intelligent designer, using choice contingency, had intelligently reconfigured the coins. That is because, using the same I = 1,000 capacity value, S is now 1, and so Chi_500 = 500 bits beyond the solar system threshold. If the 10^57 or so atoms of our solar system, for its lifespan, were to be converted into coins and tables etc., and tossed at an impossibly fast rate, it would be impossible to sample enough of the space of possibilities W to have confidence that something from so unrepresentative a zone T could reasonably be explained on chance. So, as long as an intelligent agent capable of choice is possible, choice — i.e. design — would be the rational, best explanation on the sign observed, functionally specific, complex information.]
10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework . . . .
We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space . . . .
11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log probability metric for information. That is, they use Shannon’s H, the average information per symbol, and address shifts in it from a ground to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:
Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7
The two metrics are clearly consistent . . .  (Think about the cumulative fits metric for the proteins for a cell . . . )
In short one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no.  of  AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]>>

_________________

So, there we have it folks:

I: Dembski’s CSI metric is closely related to standard and widely used work in Information theory, starting with I = – log p

II: It is reducible, on taking the appropriate logs, to a measure of information beyond a threshold value

III: The threshold is reasonably set by referring to the accessible search resources of a relevant system, i.e. our solar system or the observed cosmos as a whole.

IV: Where, once an observed configuration — event E, per NFL — that bears or implies information is from a separately and “simply” describable narrow zone T that is strongly unrepresentative — that’s key — of the space of possible configurations, W, then

V: since the search applied is of a very small fraction of W, it is unreasonable to expect that chance can reasonably account for E in T, instead of the far more typical possibilities in W that carry, in aggregate, overwhelming statistical weight.

(For instance the 10^57 or so atoms of our solar system will go through about 10^102 Planck-time Quantum states in the time since its founding on the usual timeline. 10^150 possibilities [500 bits worth of possibilities] is 48 orders of magnitude beyond that reach, where it takes 10^30 P-time states to execute the fastest chemical reactions.  1,000 bits worth of possibilities is 150 orders of magnitude beyond the 10^150 P-time Q-states of the about 10^80 atoms of our observed cosmos. When you are looking for needles in haystacks, you don’t expect to find them on relatively tiny and superficial searches.)

VI: Where also, in empirical investigations we observe that an aspect of an object, system, process or phenomenon that is controlled by mechanical necessity will show itself in low contingency. A dropped, heavy object falls reliably at g. We can make up a set of differential equations and model how events will play out on a given starting condition, i.e. we identify an empirically reliable natural law.

VII: By contrast, highly contingent outcomes — those that vary significantly on similar initial conditions — reliably trace to chance factors and/or choice, e.g. we may drop a fair die and it will tumble to a value essentially by chance. (This is in part an ostensive definition, by key example and family resemblance.) Or, I may choose to compose a text string, writing it this way or the next. Or as the 1,000 coins in a string example above shows, coins may be strung by chance or by choice.

VIII: Choice and chance can be reliably empirically distinguished, as we routinely do in day to day life, decision-making, the court room, and fields of science like forensics.  FSCO/I is one of the key signs for that and the Dembski-style CSI metric helps us quantify that, as was shown.

IX:  Shown, based on a reasonable reduction from standard approaches, and shown by application to real world cases, including biologically relevant ones.
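To make the reduced metric concrete, here is a minimal sketch in Python (the function name and structure are mine, added purely for illustration, not taken from the post) of eqn n5, Chi_500 = Ip*S – 500, applied to the Durston Fit values quoted in results n7 and to the 1,000-coin illustration from the July 3 update. Note that the specificity flag S is an input the investigator supplies on independent, empirical grounds; the formula itself does not compute it.

def chi_500(i_bits, s_flag, threshold=500):
    # Reduced Dembski-style metric, eqn n5: Chi_500 = Ip*S - 500
    # i_bits: information or capacity in bits (Ip)
    # s_flag: 1 if the configuration is independently judged specific
    #         (drawn from a narrow, unrepresentative zone T), else 0
    return i_bits * s_flag - threshold

# Durston et al. (2007) Fit values as quoted above (results n7):
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(name, chi_500(fits, 1), "bits beyond the 500-bit threshold")
# prints RecA 332, SecY 188, Corona S2 785, matching the figures in the post

# The 1,000-coin illustration from the July 3 update:
print(chi_500(1000, 0))  # random toss, S = 0: -500, well within chance
print(chi_500(1000, 1))  # ASCII text pattern, S = 1: 500, beyond the threshold

Passing threshold=1000 gives the observable-cosmos version of eqn n6 in the same way.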

We can safely bet, though, that you would not have known that this was done months ago — over and over again — in response to MG’s challenge, if you were going by the intoxicant fulminations billowing up from the fever swamps of the Darwin zealots.

Let that be a guide to evaluating their credibility — and, since this was repeatedly drawn to their attention and just as repeatedly brushed aside in the haste to go on beating the even more intoxicating talking point drums,  sadly, this also raises serious questions on the motives and attitudes of the chief ones responsible for those drumbeat talking points and for the fever swamps that give off the poisonous, burning strawman rhetorical fumes that make the talking points seem stronger than they are.  (If that is offensive to you, try to understand: this is coming from a man whose argument as summarised above has repeatedly been replied to by drumbeat dismissals without serious consideration, led on to the most outrageous abuses by the more extreme Darwin zealots (who were too often tolerated by host sites advocating alleged “uncensored commenting,” until it was too late), culminating now in a patent threat to his family by obviously unhinged bigots.)

And, now also you know the most likely why of TWT’s attempt to hold my family hostage by making the mafioso style threat: we know you, we know where you are and we know those you care about. END

Comments
gpuccio:
Elizabeth: Well, here is step one. a) An empirical definition of conscious intelligent being. That’s easy. Just any being we know to have conscious representation, including the basic cognitive functions, like abstract thought, logical thinking, and so on. No need of special definitions or theories here. The important thing is that we agree that humans are conscious intelligent beings (in general), due to the fundamental inference that they are subjectively conscious as each of us is. The requisite of agreement on conscious representations is fundamental here. A computer would not apply, because it is not conscious (or at least there certainly is not agreement that it is conscious). Non human beings (let’s say aliens) on which a general agreement were reached that they were conscious and intelligent, would qualify. The definition is purely empirical, but it requires a shared inference that the being is conscious. For the moment, we will aply it to humans, very simply.
Well, it's not really empirical, gpuccio. To be an empirical criterion for conscious capacity, it would have to enable us to determine whether, entirely through empirical testing, the property of being conscious can be inferred - i.e. by objective observation. So, for example, if a candidate being was able to articulate abstract concepts, perform logical functions, respond flexibly to its surroundings, recognise objects, avoid moving obstacles etc, one might say it was conscious. But you explicitly rule this out - you say we should rule out a computer as being conscious - simply because everyone agrees it is not conscious! But isn't that begging the question? On what basis could they decide it was not conscious if not on empirical grounds? And if not on empirical ground, then it isn't an empirical criterion. Can you see the problem?Elizabeth Liddle
July 18, 2011 at 08:34 AM PDT
Elizabeth: Well, while I await your objection to steps one and two, I can well move on to step three, which is probably the most important. I may need to split it into parts, let's see. c) A definition of a specific property, called dFSCI (digital Functionally Specified Complex Information), and of how to evaluate it in an object. That includes a definition of functional specification, and a definition of complexity. It also includes a discussion of the problem of compressibility and of necessity based explanations. First I would like to briefly explain why I use the concept of dFSCI instead of the more vast concept of CSI. The concept is essentially the same (information that is specified and complex at the same time). But I use only one type of specification: functional specification. The reason is that this way I can define it empirically, and I don't need complex mathematical or philosophical discussions. I will define functional specification in a moment. The second reason is that I limit the discussion to digital information. That makes the quantification much simpler. IOWs, dFSCI is a subset of CSI, where the information is in digital form, and the specification is functional. Whatever CSI may be in general, dFSCI is a subset that can be defined with greater precision. Moreover, there is no problem to limit our treatment of CSI to dFSCI, because the final purpose of our discussion is to infer something about biological information, and biological information, at least the kind we will discuss (protein and DNA sequences), is both digital and functionally specified. So, let's see the components of the definition one by one. It is important to remember that here we are just defining a formal property of objects. We are not creating any theory of what dFSCI really is, of its meaning, or of anything else. We just want to define a property which can be present in an object or not, and then be able to say, for existing objects, if that property can be observed or not. It is a purely practical procedure. I will go on in the next post.
gpuccio
July 18, 2011 at 08:27 AM PDT
Elizabeth: And, just to enter a little more into argument, the second step: b) An explicit definition of design, of the process of design, and of designed object. We define design as the purposeful imprint of some consciously represented form from a conscious intelligent being to a material object. The conscious intelligent being is called "designer", the process by which he imprints a special form to the object is called "process of design", and the object itself is called "designed object". The only important point here is that design is defined as any situation where a conscious intelligent being represents a form and purposefully outputs it to an external object. The conscious representation and the intent must be present, otherwise we don't call it design. Moreover, the designer can contribute even only in part to the final form of the object, but that part must be guided by his conscious and purposeful representations. IOWs, the designer must know what he wants to obtain, and must have the intent to obtain the result. The result can correspond more or less perfectly to the intentions of the designer, but the object is anyway more or less shaped by the design process.gpuccio
July 18, 2011 at 08:05 AM PDT
Elizabeth: Well, here is step one. a) An empirical definition of conscious intelligent being. That's easy. Just any being we know to have conscious representation, including the basic cognitive functions, like abstract thought, logical thinking, and so on. No need of special definitions or theories here. The important thing is that we agree that humans are conscious intelligent beings (in general), due to the fundamental inference that they are subjectively conscious as each of us is. The requisite of agreement on conscious representations is fundamental here. A computer would not apply, because it is not conscious (or at least there certainly is not agreement that it is conscious). Non human beings (let's say aliens) on which a general agreement were reached that they were conscious and intelligent, would qualify. The definition is purely empirical, but it requires a shared inference that the being is conscious. For the moment, we will aply it to humans, very simply.gpuccio
July 18, 2011 at 07:54 AM PDT
Well, I mean that the patterns deemed to have CSI (e.g. the patterns in living things) can be created by Chance and Necessity. And I follow, in principle, what CSI is. I find it's mathematical definition a little odd, as it wraps what is normally called the "alpha" value into the definition. That part doesn't concern me though. What I'd like to know is how to compute the other two terms for a given pattern. Then I could try and produce it, using only Chance and Necessity.Elizabeth Liddle
July 18, 2011 at 07:29 AM PDT
Complex Specified Information is Shannon Information, of a specified complexity, with meaning/function. But anyway if that is your position then when you said:
Well, I think your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
what did you mean if you don't even know what CSI is?Joseph
July 18, 2011 at 07:16 AM PDT
Joseph: I'd love to. But we'd still need an operational definition of CSI - I believe vjtorley had a go, perhaps someone could link to it.Elizabeth Liddle
July 18, 2011 at 06:04 AM PDT
Chris - thanks! No, no eggs involved! And it wouldn't matter if there were! That will be hugely helpful.Elizabeth Liddle
July 18, 2011 at 06:02 AM PDT
Yes indeed. That would be an excellent plan gpuccio. Step by step sounds good :) Fire away.Elizabeth Liddle
July 18, 2011 at 06:01 AM PDT
Elizabeth: Let's go this way. I will try to express here, if you follow me, a simple, empirical and, I believe, complete concept of CSI and its application to design inference in biological information. The terminology is not exactly identical to Dembski's, but it will be defined explicitly step by step. I would start by affirming that the concept of CSI is simple and intuitive, although its rigorous definition is more difficult, and must be done according to a specific context. In general, we can say that CSI is the presence in an object of information that is both specified and complex. But we will go back to that in due time. Here I will just outline the general form of my reasoning, and the steps of the argument. The argument is completely empirical. a) An empirical definition of conscious intelligent being. b) An explicit definition of design, of the process of design, and of designed object. c) A definition of a specific property, called dFSCI (digital Functionally Specified Complex Information), and of how to evaluate it in an object. That includes a definition of functional specification, and a definition of complexity. It also includes a discussion of the problem of compressibility and of necessity based explanations. d) An empirical appreciation of the correlation between dFSCI and design in all known cases. e) An appreciation of the existence of vast quantities of objects exhibiting dFSCI in the biological world, and nowhere else (excluding designed objects). f) A final inference, by analogy, of a design process as the cause of dFSCI in living beings. These are essentially the steps. We can go step by step, if you are interested.
gpuccio
July 18, 2011 at 05:55 AM PDT
Elizabeth:
Well, I think your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
That is based on our experience with cause and effect relationships. So have at it- perhaps YOU can be the FIRST person on this planet to demonstrate that CSI can arise via necessity and chance.Joseph
July 18, 2011 at 05:21 AM PDT
Hi Lizzie, Forgive me if I'm teaching a granny to suck eggs, but if you enter the word 'feed' at the end of the web address (right after the final '/') you can then subscribe to all of the comments that are made on that specific thread. I use Internet Explorer here at work, so any web feeds I subscribe to appear under the 'Favorites' button on the toolbar. But you should be able to use any feed reader really (Google Reader is quite good). This will allow you to easily keep track of any new comments without having to manually check every single thread you bookmark. If you're not doing something like this already, you will find it particularly useful given that you are deservedly the centre of attention here at the moment!Chris Doyle
July 18, 2011 at 04:53 AM PDT
I should also say, I'm pretty busy this week. But I've bookmarked this thread, and will keep checking in.Elizabeth Liddle
July 18, 2011 at 12:43 AM PDT
I'm grateful to you, Mung, for keeping track. Boy is this site awkward (has it occurred to anyone at UD to move to forum instead of blog format? Or even Scoop?) OK. You’ve stated at least three reasons why you reject intelligent design.
One of them you’ve stated as follows:
the hypothesis as put forward by Dembski, for example, I think, is incorrectly operationalised. Specifically, I think the null hypothesis is wrongly formulated, and that this invalidates the design inference.
gpuccio: Could you please elucidate? I am very interested.
The above statement can be found in your post HERE.
Well, I am all prepared to elucidate my statement with reference to Dembski's paper "Specification: the pattern that signifies intelligence", which Dembski himself seems to consider as a summary, clarification and extension of his previous work. However, gpuccio shares my view that this is a poor piece of work. So, either we can take that as agreed, or choose another paper by Dembski that someone thinks is a better formulation of his hypothesis. He has written many.
An additional objection against ID you’ve stated is that chance + necessity can generate information, therefore no intelligent cause is required.
Perhaps you can tell us what definition of information you have in mind. I expect that you and Chris will be discussing this.
By a number of definitions. I had in mind Dembski's CSI, but my claim probably holds for others as well.
How are you coming on Signature in the Cell?
About half way through.
Actually, I think you’ll find that most of us here at UD are truly interested in valid objections to ID.
Excellent :) But it needs to be treated as specific hypotheses to be properly critiqued. Some have more weight than others IMO. So far, I'd say Meyer has the best point.
But like Upright BiPed has said, it been some two months now that we’ve been waiting for you to support your second objection to ID.
I've presented what I hope is now a viable operationalisation of my claim that operationalises UPD's conceptual definition of information. When he, or someone else, agrees that it is sufficient to enable the results of my project to be evaluated, I am ready to begin. But obviously I won't begin until then. It is here:
You stated an additional objection:
we have no trace of a mechanism by which an external designer might have designed living things. We do, in contrast, have many traces of an intrinsic design mechanism (essentially, Darwin’s).
If split that into two. 1. No “design mechanism” + “no external designer.” 2. Darwinism offers a design mechanism. So I’ll ask if you have any more objections to ID.
I do not have a global objection to ID in principle. I have specific objections to specific ID arguments and inferences. I have not read any ID argument that I have yet found persuasive, though I have read some that point to gaps in the history of life that have not yet been convincingly filled - OOL being the obvious example.
And then perhaps ask you to number or label the objections you do have, or if you wish you can restated them, and then we can better keep track of them.
Alternatively, we could take specific ID arguments (this would be a much better way of sifting them IMO). For example: CSI The EF Irreducible Complexity Meyer's.
How’s that sound?
In principle, good. I'd still prefer to start what seems to be Dembski's most recent (and, according to him, his most refined) exposition of his idea. If we can agree on where it falls down, that might lead us to the point at which he took a wrong turn. If we find it does not, then we have all learned something.Elizabeth Liddle
July 18, 2011 at 12:42 AM PDT
Yes, Lizzie, some of us have nothing better to do than keep track of stuff like this. :) Actually, I think you'll find that most of us here at UD are truly interested in valid objections to ID. But like Upright BiPed has said, it been some two months now that we've been waiting for you to support your second objection to ID. You stated an additional objection:
we have no trace of a mechanism by which an external designer might have designed living things. We do, in contrast, have many traces of an intrinsic design mechanism (essentially, Darwin’s).
If split that into two. 1. No "design mechanism" + "no external designer." 2. Darwinism offers a design mechanism. So I'll ask if you have any more objections to ID. And then perhaps ask you to number or label the objections you do have, or if you wish you can restated them, and then we can better keep track of them. How's that sound? Cheers.Mung
July 17, 2011 at 06:07 PM PDT
Elizabeth Liddle:
OK, Mung and gpuccio: where do you want to start?
You've stated at least three reasons why you reject intelligent design. One of them you've stated as follows:
the hypothesis as put forward by Dembski, for example, I think, is incorrectly operationalised. Specifically, I think the null hypothesis is wrongly formulated, and that this invalidates the design inference. gpuccio: Could you please elucidate? I am very interested.
The above statement can be found in your post HERE. An additional objection against ID you've stated is that chance + necessity can generate information, therefore no intelligent cause is required. Perhaps you can tell us what definition of information you have in mind. I expect that you and Chris will be discussing this. How are you coming on Signature in the Cell?Mung
July 17, 2011 at 06:00 PM PDT
Try the OP above. With a dash of here and here. Also cf review article here.kairosfocus
July 17, 2011 at 03:53 PM PDT
OK, Mung and gpuccio: where do you want to start? I thought Dembski's paper here: http://www.designinference.com/documents/2005.06.Specification.pdf was a pretty good place. But if someone would like to point me to an alternative (preferably just one, to start with!), that's cool.Elizabeth Liddle
July 17, 2011 at 12:43 PM PDT
Thot Expt: To illustrate necessity vs chance vs choice, in a sense relevant to the concept of CSI, and more particularly FSCI. 1: Imagine a 128-sided die (similar to the 100-sided Zocchihedra that have been made, but somehow made to be fully fair) with the character set for the 7-bit ASCII codes on it. 2: Set up a tray as a string with 73 such in it, equivalent to 500 bits of info storage, or about 10^150 possibility states. 3: Convert the 10^57 atoms of our solar system into such trays, and toss them for c. 5 bn years, a typical estimate for the age of the solar system, scanning each time for a coherent message of 73 characters in English such as the 1st 73 characters of this post or a similar message. 4: The number of runs will be well under 10^102, and so the trays could not sample as much as 1 in 10^48 of the configs of the 10^150 for the system. 5: So, if something is significantly rare in the space W of possibilities, i.e. it is a cluster of outcomes E comprising a narrow and unrepresentative zone T, the set of samples is maximally unlikely to hit on any E in T. 6: And yet the first 73 characters of this post were composed by intelligent choice in a few minutes, indeed I think less than one. 7: We thus see how chance contingency is deeply challenged, on the scope of the resources of our solar system, to create an instance of such FSCI, while choice directed by purposeful intelligence routinely does such. So, we see why FSCI is seen as a reliable sign of choice, not chance. 8: Likewise, if we were to take the dice and drop them, reliably they would fall. Indeed we can characterise the relevant differential equations that under given initial circumstances will reliably predict the observed natural regularity of falling. 9: Thus we see mechanical necessity as characterised by natural regularities of low contingency. 10: This leads to the explanatory filter used in design theory, whereby natural regularities of an aspect of a phenomenon lead to the explanation: law. 11: High contingency indicates the relevant aspect is driven by chance and/or choice, with the sort of distinguishing sign like FSCI -- per the exercise just above -- reliably highlighting choice as best explanation. GEM of TKI
kairosfocus
July 9, 2011 at 03:28 PM PDT
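As a side note on the thought experiment just above, here is a minimal Python sketch of its arithmetic, using only the round figures given in the comment (a 128-sided ASCII die, a 73-position tray, and an upper bound of roughly 10^102 tosses for the solar system); the variable names are mine, and nothing new is claimed beyond the comment's own estimates.

from math import log2, log10

SIDES = 128          # one face per 7-bit ASCII character
POSITIONS = 73       # tray length used in the thought experiment
TRIALS = 10 ** 102   # generous upper bound on tosses for the whole solar system

config_space = SIDES ** POSITIONS   # size of W: all possible 73-character strings
bits = POSITIONS * log2(SIDES)      # 73 * 7 = 511 bits of storage capacity
print(f"W is roughly 10^{log10(config_space):.0f} configurations ({bits:.0f} bits)")
print(f"fraction of W sampled is at most about 10^{log10(TRIALS / config_space):.0f}")
# W is roughly 10^154 configurations (511 bits), and the sampled fraction is on
# the order of 10^-52, comfortably below the 1 in 10^48 bound used in the comment.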
Kuartus: Let's see: >> A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. a: misleading and strawmannish, cf NFL 144 and 148 above as well as the OP. Complex specified info has to meet several separate criteria. These numbers measure properties of things of different types: b: pounding on the strawman and pretending or imagining you are answering the real issue. Complexity measures how hard it is to describe an object (such as a bitstring) c: as WD says, information measures how close to uniform a random probability distribution is d: a misleading view of the Hartley-suggested metric I = – log p and improbability measures how unlikely an event is given a probability distribution e: citing a commonplace as if it were a refutation >> And: >> Dembski's calculations show how a simple smooth function cannot gain information. f: distortion. If a search to find a target is beyond search resources, esp. at cosmos level, then a chance based random walk is utterly unlikely to find zones T. He therefore concludes that there must be a designer to obtain CSI. g: misrepresentation; intelligences are routinely observed and are the only observed causes of CSI, such as posts in this blog. However, natural selection has a branching mapping from one to many (replication) followed by pruning mapping of the many back down to a few (selection). h: As Weasel showed inadvertently, the real problem is to get TO shores of islands of function in zones T so that hill climbing can begin. This is a begging of the question of getting to such islands of body plans that work, starting with the embryological development program. i: embryogenesis is known to be highly sensitive to disruption, i.e. we are looking at credible islands of function. j: this starts with the very first body plan, OOL. When information is replicated, some copies can be differently modified while others remain the same, allowing information to increase. k: one may move around within an island of function all one pleases without explaining how one arrives at such an island of function, and of course the GAs show examples of how that is known to happen: they are designed and built by intelligent designers. l: notice the switcheroo on the question to be answered, kept up in the teeth of repeated pointing out that the real issue lies elsewhere, starting with OOL. These increasing and reductional mappings were not modeled by Dembski m: because he was addressing the REAL question, as in the one you have ducked; namely getting to the shores of islands of function in large config spaces utterly dominated by non-function. >> See the same problems again and again? GEM of TKI
kairosfocus
July 7, 2011 at 01:55 PM PDT
Hi kariosfocus, Wikipedia also says here: http://en.wikipedia.org/wiki/Specified_complexity#Criticisms "A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. These numbers measure properties of things of different types: Complexity measures how hard it is to describe an object (such as a bitstring), information measures how close to uniform a random probability distribution is and improbability measures how unlikely an event is given a probability distribution" Also, "Dembski's calculations show how a simple smooth function cannot gain information. He therefore concludes that there must be a designer to obtain CSI. However, natural selection has a branching mapping from one to many (replication) followed by pruning mapping of the many back down to a few (selection). When information is replicated, some copies can be differently modified while others remain the same, allowing information to increase. These increasing and reductional mappings were not modeled by Dembski" Do these criticisms have any weight?kuartus
July 7, 2011 at 01:33 PM PDT
F/N: I have decided to reply to EugenS's question on Wiki's challenge to the CSI concept here, as it better fits. I will notify in the other thread. ES: has anyone addressed in detail the problems identified in the article on CSI in Wikipedia? Quick notes on excerpts from Wiki on CSI in their ID article, showing why Wiki is utterly untrustworthy on this, just going through several examples in succession: >>In 1986 the creationist chemist Charles Thaxton used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. >> 1 --> Thaxton was a design theory pioneer, not a "creationist," this is labelling to smear, poison and dismiss. This is a PhD chemist working on thermodynamics, with a PhD polymer expert [Bradley] and a PhD Geologist/mining engineer [Olsen]. Wiki is grossly disrespectful. 2 --> The concept of specified complexity in the modern era is not from Thaxton -- as he and his co authors cited in their own work, it comes from OOL researcher ORGEL in 1973, which Wiki knows or should know -- notice the article is locked against correction:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]
3 --> the summary of TBO's argument in TMLO [full pdf d/load] is loaded and unfair as well. A complex technical thermodynamic argument on the spontaneous concentration of relevant biopolymers on chemical kinetics to form precursors to life -- this is on OOL which if wiki were honest it would admit is the biggest single hole in the evolutionists' story -- is reduced to a caricature, and a suggestion that the best explanation for such is design given the resulting values that boil down to being zero molecules on a planetary prebiotic soup [conc 10^-338 molar IIRC . . . ], or indeed a cosmic scale soup, with a specific note that this does not warrant inference to designer beyond or within the cosmos is strawmannised. 4 --> This is beyond merely careless, it is willfully distorting and strawmannising. >> Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance. Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature. >> 5 --> this so distorts and strawmannises what WmAD actually gave on pp 144 and 148 of NFL, just for one instance, as to be libellous:
p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ” [cf original post above] p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”
6 --> A cursory glance at what is being described and the description proffered will show that Dembski is NOT giving a tautology, but is identifying that when an event E comes from a narrow and unrepresentative zone T -- one separately and simply describable -- in a field of possibilities W, then no reasonable chance driven process is likely to ever land on T because of the swamping of the search capacity of the observed cosmos by the scope of the challenge posed by T in W, where T is like 1 in 10^ 150 of W. 7 --> Nor do we see a recognition of where that threshold comes from, the exhaustion of the Planck time quantum state resources of the observed cosmos, which comes up to 10^150 states. Remember, the fastest chemical reactions, ionic ones take 10^30 P-times. 8 --> And the issue is not tautology but search challenge on chance plus necessity sans intelligence; with the contrast that 73 ascii characters worth of meaningful information is routinely -- and only -- observed as caused by intelligence. Like this post and your own. 9 --> critics who cry tautology in the face of that discredit anything more they have to say. >> The conceptual soundness of Dembski's specified complexity/CSI argument has been widely discredited by the scientific and mathematical communities.[13][15][54] Specified complexity has yet to be shown to have wide applications in other fields as Dembski asserts. John Wilkins and Wesley Elsberry characterize Dembski's "explanatory filter" as eliminative, because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions. >> 10 --> This first hurls the elephant of the presumed authority of the scientific community then gives a misleading complaint then asserts a dismissal that is wrong. 11 --> In fact we routinely infer to intelligence not lucky noise on encountering say blog comments, or on seeing evident artifacts vs natural stones in an archaeological site, etc etc etc, or even on needing to evaluate on signs whether a patient is conscious and coherent using the Glasgow Coma Scale; and we do so on precisely the intuitive form of the explanatory process that Dembski highlighted. (Cf my per aspect development of it here.) 12 --> the cited objection is little more than the usual stricture that an inductive inference across competing possibilities on signs and statistical patterns will possibly make an error. The issue is not absolute proof but reliability, on pain of selective hyperskepticism. The objectors have for years been unable to provide a clear case where we know the causal story and CSI is the result of chance and necessity without design. A whole internet is there to substantiate the point of the known source of CSI. >> Richard Dawkins, another critic of intelligent design, argues in The God Delusion that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex.[56]>> 13 --> RUBBISH. the issue was, is there an empirically reliable sign of design of THIS object in hand, of THIS process, etc. to that he answer is yes. 14 --> On the strength of the FSCI in DNA, we have good reason to infer on sign to design of life. 
It matters but little at this level that a sufficient cause for the living cell would be a molecular nanotech lab some generations beyond the one run by Venter; after all, intelligent design of DNA is now a matter of published empirical fact, Venter even signs his name in the resulting proteins with a watermark!!! 15 --> this is also grossly ignorant on the cosmological level. On evidence our observed cosmos had a beginning, and is finetuned in a way that sets it at a delicately balanced operating point that supports C-chemistry, cell based life. 16 --> This empirically and logically grounds inference to a cause beyond our cosmos, pointing ultimately to a necessary being with intelligence, power and purpose to create a cosmos of 10^80 atoms and supportive of C-chemistry cell based life. 17 --> That such a necessary being -- one without external necessary causal factors and so without beginning or end [at the simple level, relations of necessary logical truth like 2 + 2 = 4 are of this class] -- may or may not in some sense be more complex than the cosmos is irrelevant, apart from an inference on the greatness of such a necessary being. >> Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers". >> 19 --> This is a willful distortion of the known facts of GAs and the like. Such are intelligently designed and work by moving around in an island of designed function. 20 --> As we can see from NFL pp 144 and 148, the problem tackled by the ID issue is to get to the shores of such islands. The questions are being begged and the results are being willfully distorted. +++++++++++ As shown in outline, Wiki has no credibility on the subject of intelligent design. And since the gross errors, distortions and misrepresentations have been corrected any number of times but have been reverted and are now literally locked in, this is willful. Willful deception. There is another, sharper word for this that is unfortunately well warranted: L--s. Sorry if that offends, but this is blatant and willful. GEM of TKI PS: I don't know if anyone wants to carry this forward further. Feel free, it is time we did a major expose of Wiki on this subject as ES invited us to.
kairosfocus
July 7, 2011 at 12:42 PM PDT
Dr Liddle, I have looked over your post at 17, but to be completely honest with you, I don't see the dire connection between these questions and the conversation we have been having. Perhaps these questions are arising in your mind as wa result of reading "Sig in the Cell" - which is all fine and good - but they don't seem to directly bear on the topics we had been discussing. Perhaps you can set me straight on the implications of these questions to the larger set of topics in our previous posts. To help make the connection, I will post the bulk of your last substantial post from the previous conversation. - - - - - - - - - - - - - - - BIPED: Dr Liddle, To endure the amount of grief that ID proponents have to take, one would think that at the bottom of the theory there would at least be a big booming “tah-dah” and perhaps a crashing cymbal or two. But unfortunately that’s not the case; the theory doesn’t postulate anything acting outside the known laws of the universe. LIDDLE: Cool. Yes, I understand that. BIPED: I bring this up because you want to design a simulation intended to reflect reality to the very best of your ability, and in this simulated reality you want to show something can happen which ID theory says doesn’t happen. Knowing full well that reality can’t be truly simulated, it’s interesting that the closer you get to truly simulating reality, the more stubborn my argument becomes. Only by not simulating reality does your argument have even a chance of being true. LIDDLE: Heh. I recognise the sentiment. The devil is always in the details But we shall see. BIPED: Yet, if ID says that everything in the material universe acts within the laws of the universe, then what is it exactly to be demonstrated within this simulation? In other words, what is the IT? Of course, since this is set up to be a falsification, the IT is for prescriptive information exchange to spontaneously arise from chance and necessity. But that result may be subject to interpretation, and so consequently you want to know exactly what must form in order for me to concede that your falsification as valid. LIDDLE: Thanks. I’m not familiar with the abbreviation “IT” unfortunately, but I think I get your drift. I hope so. I would certainly agree that the Study Hypothesis (H1 in my language) is “for prescriptive information exchange to spontaneously arise from chance and necessity”. And so to falsify the null (that prescriptive information exchange can spontaneously arise from chance and necessity) yes, I want to know the answer to that question. Good! BIPED: I intend to try and fully answer that question in this post. I’m sure you are aware of the Rosetta stone, the ancient stone with the same text written in three separate ancient scripts. Generally, it gave us the ability to decode the meaning of the ancient hieroglyphs by leading us to the discrete protocols behind the recorded symbols. This dovetails precisely with the conversations we’ve had thus far regarding symbols, in that there is a necessary mapping between the symbol and what it is to be symbolized. And in fact, it is the prime characteristic of recorded information that it does indeed always confer that such a mapping exists – by virtue of those protocols it becomes about something, and is therefore recorded information as opposed to noise. LIDDLE: Trying to parse: the prime characteristic of recorded information is that it confers (establishes? requires?) a [necessary] mapping between symbol and what is symbolised. So what about these “protocols”? 
What I’m thinking is that in living things, the big genetic question is: by what means does the genotype impact the phenotype? And the answer is something like a protocol I like. But let me read on…. BIPED: In retrospect, when I stated that recorded information requires symbols in order to exist, it would have been more correct to say that recorded information requires both symbols and the discrete protocols that actualize them. Without symbols, recorded information cannot exist, and without protocols it cannot be transferred. Yet, we know in the cell that information both exists and is transferred. LIDDLE: Yes. And I like that you refer to “the cell” and not simply “the DNA”. BIPED: This goes to the very heart of the claim that ID makes regarding the necessity of a living agent in the causal chain leading to the origin of biological information. LIDDLE: Let me be clear here: by “living agent”, are you referring to the postulated Intelligent Designer[s]? Or am I misunderstanding you? BIPED: ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world). Their very presence reflects a break in the causal chain, where on one side is pure physicality (chance contingency + physical law) and on the other side is formalism (choice contingency + physical law). Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law. LIDDLE: Cool. I like that. BIPED: And therefore, to be an actual falsification of ID, your simulation would be required to demonstrate that indeed symbols and their discrete protocols came into physical existence by nothing more than chance and physical law. LIDDLE: Right. BIPED: The question immediately becomes “how would we know?” How is the presence of symbols and their discrete protocols observed in order to be able to demonstrate they exist? For this, I suggest we can use life itself as a model, since that is the subject on the table. We could also easily consider any number of human inventions where information (symbols and protocols) are used in an “autonomous” (non-conscious) system. LIDDLE: OK. BIPED: For instance, in a computer (where information is processed) we physically instantiate into the system the protocols that are to be used in decoding the symbols. The same can be said of any number of similar systems. Within these systems (highlighting the very nature of information) we can change the protocols and symbols and the information can (and will) continue to flow. Within the cell, the discrete protocols for decoding the symbols in DNA are physically instantiated in the tRNA and its coworkers. (This of course makes complete sense in a self-replicating system, and leads us to the observed paradox where you need to decode the information in DNA to in order to build the system capable of decoding the information in DNA). LIDDLE: Nicely put. And my intention is to show that it is not a paradox – that a beginning consisting of a unfeasibly improbable assemblage of molecules, brought together by no more than Chance (stochastic processes) and Necessity (physical and chemical properties) can bootstrap itself into a cycle of coding:building:coding:building: etc. 
BIPED: Given this is the way in which we find symbols and protocols physically instantiated in living systems (allowing for the exchange of information), it would be reasonable to expect to see these same dynamics at work in your simulation. LIDDLE: Yes, I agree. Cool! BIPED: I hope that helps you “get to the heart of what [I] think evolutionary processes can’t do”. LIDDLE: Yes, I think so. That is enormously helpful and just what I was looking for. - - - - - - - - - - - - - So Dr Liddle, its seems to me we were working through an agreement on exactly what must be demonstrated by your simulation, and how that the presence of each requirement would be observed and/or verified. Is this not where we are at?Upright BiPed
July 6, 2011 at 03:30 PM PDT
a) can self-replicators (with variance) evolve from non-self-replicators?
I'm going to go out on a limb here and say no. :) Evolution requires replication, so replication is not something which can evolve from non-replication. a') Can an evolvable self-replicator magically appear?Mung
July 6, 2011 at 03:04 PM PDT
KF,
it is self replication as an additional facility of something that is separately complex and functional as an automaton.
That could not be more vague. So not only does it have to replicate itself, it has to have a purpose? A function in life? What, to you, counts as a suitable function for a thing such as is being proposed? So that if observed it proves the point one way or another? Is moving towards food/energy sufficiently "complex and functional" to satisfy you in that regard? What about just movement? Or what about replication? Perhaps replicating with just the right error rate, not too much not too little? Or what about having certain attitudes to life? Feelings? The domain is small. A minimal self replicator. What additional functionality other than self replication will have to arise for it to become "relevant" in your eyes? Make a prediction!
WilliamRoache
July 6, 2011 at 02:32 PM PDT
Dr Liddle: I wish you best success on your tour. Please recall, though, it is self replication as an additional facility of something that is separately complex and functional as an automaton. In a context of coded representation. That is what is relevant. GEM of TKI LINKkairosfocus
July 6, 2011 at 01:32 PM PDT
Thank you Dr Liddle for the response. I will consider your questions and return shortly.Upright BiPed
July 6, 2011 at 01:27 PM PDT
Sorry UPD - someone said that comments were closed on that thread, but I haven't forgotten, just been dashing round the country visiting universities with my son. Also, reading Signature in the Cell, as I think I said, because I think it's very useful, and also having a productive (I think) conversation with Mung and kairosfocus about CSI and the EF. I've still got a couple more university visits to do this week, I'm busy all day Saturday, and have my father visiting on Sunday, so still a bit snowed under. What I have done, though, is roughed out three separable issues that we need to disentangle: One is, can we get a self-replicator to arise from scratch (without a specific self-replication algorithm built in) that will be capable of Darwinian evolution (i.e. optimise itself for continuation in its virtual environment)? Second is: how do we measure the information it generates (if it does)? Third is: does something as complicated as a ribosome remain irreducibly complex, and so require an ID, regardless of whether a simpler self-replicator can emerge from a non-self-replicating set of starting items? Which I guess we could express as: a) can self-replicators (with variance) evolve from non-self-replicators? b) If so, can they generate complex specified information? c) If so, can they generate information as complex and specified as that we see in a living cell? I won't attempt c! Given that, do you regard a and b as worth attempting?
Elizabeth Liddle
July 6, 2011 at 01:21 PM PDT
UB: Such a refutation would be well within the ambit of this thread. Let's see . . .kairosfocus
July 6, 2011 at 01:15 PM PDT
Dr Liddle, are you out there? Do you not yet have anything for us to discuss? I have agreed to help you falsify ID with your simulation, and it would seem that we have much to do. I have been waiting since the 17th of June for your next response.Upright BiPed
July 6, 2011 at 12:42 PM PDT