
Is the CSI concept well-founded mathematically, and can it be applied to the real world, giving real and useful numbers?


Those who have been following the recently heated up exchanges on the theory of intelligent design and the key design inference on tested, empirically reliable signs, through the ID explanatory filter, will know that a key move in recent months was the meteoric rise of the mysterious internet persona MathGrrl (who is evidently NOT the Calculus Prof who has long used the same handle).

MG, as the handle is abbreviated, is well known for “her” confident-manner assertion — now commonly stated as if it were established fact in the Darwin Zealot fever swamps that are backing the current cyberbullying tactics that have tried to hold my family hostage — that:

without a rigorous mathematical definition and examples of how to calculate [CSI], the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

As the strike-through emphasises, every one of these claims has long been exploded.

You doubt me?

Well, let us cut down the clip from the CSI Newsflash thread of April 18, 2011, which was further discussed in a footnote thread of May 10th (H’mm, the anniversary of the German attack on France in 1940), and which was clipped again yesterday at fair length.

( BREAK IN TRANSMISSION: BTW, antidotes to the intoxicating Darwin Zealot fever swamp “MG dunit” talking points were collected here — Graham, why did you ask the question but never stopped by to discuss the answer? And the “rigour” question was answered step by step at length here.  In a nutshell, as the real MathGrrl will doubtless be able to tell you, the Calculus itself, historically, was founded on sound mathematical intuitive insights on limits and infinitesimals, leading to the warrant of astonishing insights and empirically warranted success, for 200 years. And when Math was finally advanced enough to provide an axiomatic basis — at the cost of the sanity of a mathematician or two [doff caps for a minute in memory of Cantor] — it became plain that such a basis was so difficult that it could not have been developed in C17. Had there been an undue insistence on absolute rigour as opposed to reasonable warrant, the great breakthroughs of physics and other fields that crucially depended on the power of Calculus, would not have happened.  For real world work, what we need is reasonable warrant and empirical validation of models and metrics, so that we know them to be sufficiently reliable to be used.  The design inference is backed up by the infinite monkeys analysis tracing to statistical thermodynamics, and is strongly empirically validated on billions of test cases, the whole Internet and the collection of libraries across the world being just a sample of the point that the only credibly known source for functionally specific complex information and associated organisation [FSCO/I]  is design.  )

After all, a bit of  careful citation always helps:

_________________

>>1 –> 10^120 ~ 2^398

2 –> Following Hartley and Shannon, information can be measured on the negative log of the probability of an event:
I = – log(p) . . .  eqn n2
3 –> So, we can re-present the Chi-metric:
[where, from Dembski, Specification 2005, χ = – log2[10^120 · ϕS(T) · P(T|H)] . . . eqn n1]
Chi = – log2(2^398 * D2 * p)  . . .  eqn n3
Chi = Ip – (398 + K2) . . .  eqn n4
4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . .
6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, it is so deeply isolated that a chance-dominated process is maximally unlikely to find it, but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed cause of information beyond such a threshold is the action of the now proverbial intelligent semiotic agents.
8 –> Even at 398 bits that makes sense as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.
9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley . . . .
As in (using Chi_500 for VJT’s CSI_lite [UPDATE, July 3: and S for a dummy variable that is 1/0 accordingly as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W]):
Chi_500 = Ip*S – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5
Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6
Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a
[UPDATE, July 3: So, if we have a string of 1,000 fair coins, and toss at random, we will by overwhelming probability expect to get a near 50-50 distribution typical of the bulk of the 2^1,000 possibilities W. On the Chi_500 metric, I would be high, 1,000 bits, but S would be 0, so the value for Chi_500 would be – 500, i.e. well within the possibilities of chance. However, if we came to the same string later and saw that the coins somehow now had the bit pattern of the ASCII codes for the first 143 or so characters of this post, we would have excellent reason to infer that an intelligent designer, using choice contingency, had intelligently reconfigured the coins. That is because, using the same I = 1,000 capacity value, S is now 1, and so Chi_500 = 500 bits beyond the solar system threshold. If the 10^57 or so atoms of our solar system, for its lifespan, were to be converted into coins and tables etc., and tossed at an impossibly fast rate, it would be impossible to sample enough of the possibilities space W to have confidence that something from so unrepresentative a zone T could reasonably be explained on chance. So, as long as an intelligent agent capable of choice is possible, choice — i.e. design — would be the rational, best explanation on the sign observed: functionally specific, complex information.]
10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework . . . .
We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space . . . .
11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log probability metric for information. That is, they use Shannon’s H, the average information per symbol, and address shifts in it from a ground to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:
Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7
The two metrics are clearly consistent . . .  (Think about the cumulative fits metric for the proteins for a cell . . . )
In short one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no.  of  AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]>>

_________________
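For concreteness, the reduced metric in eqn n5 and the Durston application in results n7 can be spelled out in a few lines of Python. This is only an illustrative sketch (the function name chi_500 is mine), using the coin-string example and the three Fit values quoted above:

```python
def chi_500(info_bits, specific):
    # Reduced Dembski-style metric, eqn n5: Chi_500 = Ip*S - 500
    # info_bits: information-carrying capacity Ip, in bits
    # specific:  True if the observed configuration comes from a narrow,
    #            independently describable target zone T (S = 1), else S = 0
    S = 1 if specific else 0
    return info_bits * S - 500

# 1,000 fair coins tossed at random: high capacity, but not specific
print(chi_500(1000, specific=False))   # -500, i.e. well within reach of chance

# The same 1,000 coins later found spelling out ~143 ASCII characters: specific
print(chi_500(1000, specific=True))    # 500 bits beyond the solar-system threshold

# Durston Fit values treated as Ip with S = 1 (results n7 above)
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(name, chi_500(fits, specific=True), "bits beyond the threshold")
```

The role of the dummy variable S is visible at once: raw capacity alone never crosses the threshold; capacity plus independent specification does.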

So, there we have it folks:

I: Dembski’s CSI metric is closely related to standard and widely used work in Information theory, starting with I = – log p

II: It is reducible, on taking the appropriate logs, to a measure of information beyond a threshold value

III: The threshold is reasonably set by referring to the accessible search resources of a relevant system, i.e. our solar system or the observed cosmos as a whole.

IV: Where, once an observed configuration — event E, per NFL — that bears or implies information is from a separately and “simply” describable narrow zone T that is strongly unrepresentative — that’s key — of the space of possible configurations, W, then

V: since the search applied covers only a very small fraction of W, it is unreasonable to expect that chance can account for E in T, rather than the far more typical possibilities in W, which have, in aggregate, overwhelming statistical weight.

(For instance, the 10^57 or so atoms of our solar system will go through about 10^102 Planck-time quantum states in the time since its formation on the usual timeline. 10^150 possibilities [500 bits worth of possibilities] is 48 orders of magnitude beyond that reach, where it takes about 10^30 P-time states to execute the fastest chemical reactions. 1,000 bits worth of possibilities is 150 orders of magnitude beyond the 10^150 P-time Q-states of the roughly 10^80 atoms of our observed cosmos. When you are looking for needles in haystacks, you don’t expect to find them on relatively tiny and superficial searches. The log arithmetic behind these figures is cross-checked in the short sketch after point IX below.)

VI: Where also, in empirical investigations we observe that an aspect of an object, system, process or phenomenon that is controlled by mechanical necessity will show itself in low contingency. A dropped heavy object falls reliably at g. We can make up a set of differential equations and model how events will play out on a given starting condition, i.e. we identify an empirically reliable natural law.

VII: By contrast, highly contingent outcomes — those that vary significantly on similar initial conditions — reliably trace to chance factors and/or choice; e.g. we may drop a fair die and it will tumble to a value essentially by chance. (This is in part an ostensive definition, by key example and family resemblance.) Or, I may choose to compose a text string, writing it this way or the next. Or, as the 1,000-coins-in-a-string example above shows, coins may be strung by chance or by choice.

VIII: Choice and chance can be reliably empirically distinguished, as we routinely do in day to day life, decision-making, the court room, and fields of science like forensics.  FSCO/I is one of the key signs for that and the Dembski-style CSI metric helps us quantify that, as was shown.

IX:  Shown, based on a reasonable reduction from standard approaches, and shown by application to real world cases, including biologically relevant ones.
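As a quick cross-check on the search-resource figures in the parenthesis after point V, the log arithmetic can be run in a couple of lines of Python; a sketch only, with the round values 10^102 and 10^150 taken from the text above:

```python
import math

log10_states_solar  = 102   # ~10^102 Planck-time quantum states for the solar system (as cited above)
log10_states_cosmos = 150   # ~10^150 Planck-time quantum states for the observed cosmos

for bits in (500, 1000):
    log10_configs = bits * math.log10(2)   # 500 bits ~ 10^150.5, 1000 bits ~ 10^301
    print(bits, "bits -> about 10^%.1f configs," % log10_configs,
          "%.1f orders of magnitude past the solar system and" % (log10_configs - log10_states_solar),
          "%.1f past the observed cosmos" % (log10_configs - log10_states_cosmos))
```

The printed 48.5 and 151 correspond to the rounded 48 and 150 orders of magnitude cited above.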

We can safely bet, though, that you would not have known that this was done months ago — over and over again — in response to MG’s challenge, if you were going by the intoxicant fulminations billowing up from the fever swamps of the Darwin zealots.

Let that be a guide to evaluating their credibility — and, since this was repeatedly drawn to their attention and just as repeatedly brushed aside in the haste to go on beating the even more intoxicating talking point drums,  sadly, this also raises serious questions on the motives and attitudes of the chief ones responsible for those drumbeat talking points and for the fever swamps that give off the poisonous, burning strawman rhetorical fumes that make the talking points seem stronger than they are.  (If that is offensive to you, try to understand: this is coming from a man whose argument as summarised above has repeatedly been replied to by drumbeat dismissals without serious consideration, led on to the most outrageous abuses by the more extreme Darwin zealots (who were too often tolerated by host sites advocating alleged “uncensored commenting,” until it was too late), culminating now in a patent threat to his family by obviously unhinged bigots.)

And, now also you know the most likely why of TWT’s attempt to hold my family hostage by making the mafioso style threat: we know you, we know where you are and we know those you care about. END

Comments
Serious: The above was an answer to essentially a mathematical challenge, and is couched in those terms in light of the issues then under contention. It was necessary to connect CSI to the standard results of info theory, and to show how a metric could be deduced and applied in direct response to Patrick May's sock-puppet persona, Mathgrrl. Notice the core challenge he raised:
a rigorous mathematical definition and examples of how to calculate [CSI]
That is what you see above. If you want a somewhat simpler look at the issue, try here. Sorry, the math is necessary when something mathematical has to be shown. And this math is much simpler than the alternative formulations on statistical and classical thermodynamics. Yes the multiverse does imply that the fine tuning point is made, but it is the credibility of CSI as a mathematical quantity that can be used in actual biological contexts which was on the table here. KF kairosfocus
I think 3 points must be made. First, these arguments must be reduced and simplified as the points are lost. There is no point in arguing with someone whose point has a ton of convoluted garbage on top of it. The 2nd is that Cosmology has already moved to the Multiverse -- which is essentially conceding design to any unbiased thinker. The probabilities in Cosmology are actually tiny in comparison to abiogenesis. The 3rd point is that most biologists consider the multiverse to be metaphysical nonsense. So we have a conundrum here. Cosmologists and theoretical physicists accept much lower odds. Essentially, the universe we live in is impossible if there is only one. Yet biologists will not accept what is obvious to everyone else -- indicating a pathological bias. So you are trying to reason with people who have crippled their intellect. So arguments must contain grander premises and not allow this piling on of techno-babble garbage that these crippled minds can hide underneath. serious123
Elizabeth, are crystals self-replicators? Mung
Elizabeth, do you consider crystals to be self-replicators? Mung
"And if every Planck Time Quantum state in this cosmos spawned another universe of same size with the same number of PTQS’s, we go to 10^150 squared or 10^300." Brilliant! Now we're making headway. In such a situation, ten-million, trillion, trillion, trillion universes would be spawned every second, which would be 10^43 UPS (universes per second) and 10^123 atoms (that's a trillion, trillion, trillion, trillion, trillion, trillion, trillion... never mind) per second. It would take a full 15 billion years of 10^43 UPS to approach your 10^300. (I thought your 1000 bits was too generous. Actually, I think 500 bits is also. Then again, that's the point, isn't it.) And we still fall well-short of imagining 10^520, which is the search space of a single solitary protein with a paltry 400 aminos. material.infantacy
Yup. And if every Planck Time Quantum state in this cosmos spawned another universe of the same size with the same number of PTQS's, we go to 10^150 squared, or 10^300. Hence my resort to going to 1,000 bits, not 500. If you cannot sample as much as 1 in 10^150 of a space of possibilities, such a relatively tiny sample can only reasonably hope to capture typical configs. So, if we are looking at events E from UNrepresentative narrow zones T in a possibilities space W, where E is functionally specific, complex and informational, we have good reason to accept that it is only intelligence that is credibly able to land us in T. A blind sample, we have no good reason to hope, will land us in such a narrow and UNrepresentative T. But then, if you are wedded to a different view, this can be hard to see indeed. G kairosfocus
"Tell me, did you actually count off 520 zeros?" Yes. It is my way. Speaking of HP calculators, these numbers choke my old 49g. Properties of logarithms to the rescue. Consider ten raised to the power of five hundred and twenty (10^520). Just how large is this number? If every atom in the universe were another identical universe, we'd only have 10^160 atoms. material.infantacy
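The logarithm trick alluded to here takes one line; a minimal sketch, with no numbers beyond those in the comment:

```python
import math

# log10(20^400) = 400 * log10(20), so even a modest calculator can handle it
print("20^400 is about 10^%.1f" % (400 * math.log10(20)))   # ~10^520.4

# If each of the ~10^80 atoms in the observed universe were itself a universe
# of 10^80 atoms, the exponents simply add: 10^(80 + 80) = 10^160 atoms in all.
print(80 + 80)
```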
MI: Tell me, did you actually count off 520 zeros? Shakkin me haid . . . And you are right, I ent goin to count off 60,000+ zeros! Thank God for old Mr Smith and the Cambridge Elementary mathematical tables. Not to mention a good old circular slide rule! Then there was that fondly remembered HP 21 . . . GEM of TKI kairosfocus
KF, "Sometimes, I think part of the problem lies in a power of the very exponential notation used." I agree. Seeing something like 10^520 hardly does justice the depth of this number. But then again, neither does seeing this: 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000. (I dare you to try that with your numbers.) xp material.infantacy
Chris:
Morning Lizzie, We certainly do have different mental images here! I think that’s because I’m talking about self-replication and you’re talking about something else (I’m trying to think of a technical term for disassembling and reassembling lego blocks… but can’t)!
Well, I'm talking about what I mean by self-replication - when something does something that results in two copies of the original.
Crucially, the ability of the first self-replicating molecule to self-replicate (a small part of which may involve splitting down the middle) comes from its molecular shape and content so, that will include its sequence.
Not necessarily, although possibly.
The process of self-replication does not and cannot rely upon random free-floating monomers being in the right place at the right time: self-replication needs to be far more self-contained than that or else the parent would almost certainly lose the daughter (and itself!) in the very process of reproduction. Besides, if self-replication was merely the “ability to split down the middle, and for both halves to then attract to each now-unmated unit” then surely the most likely outcome would be for the two halves of the peptide chain to merely come back together again.
Possibly. Certainly in the absence of stuff to make a copy out of, a thing won't copy. That's why photocopiers need paper and toner, and people need food :) But yes, you need at least two conditions for self-replication to occur: a mechanism, and resources. You can't spin straw into gold.
So, the self-replicating molecule would (and we're dramatically oversimplifying here) need to start off with something like:
AC
CA
CA
DB
DB
BD
CA
CA
AC
AC
DB
BD
And then, after a self-contained and straightforward process of self-replication, we end up with two:
AC AC
CA CA
CA CA
DB DB
DB DB
BD BD
CA CA
CA CA
AC AC
AC AC
DB DB
BD BD
Yes. The first splits vertically, like this:
A C
C A
C A
D B
D B
B D
C A
C A
A C
A C
D B
B D
Then spare monomers attach to the links of each single chain, resulting in two identical chains, like this:
AC CA
CA AC
CA AC
DB BD
DB BD
BD DB
CA AC
CA AC
AC CA
AC CA
DB BD
BD DB
Flip the second one over and you have:
AC AC
CA CA
CA CA
DB DB
DB DB
BD BD
CA CA
CA CA
AC AC
AC AC
DB DB
BD BD
Given that such a self-replicating molecule must have existed if life just made itself, then I can see no scope for copying error that will not impair or destroy the ability to self-replicate.
Why not? Consider: AC CA CA DB splits to form A C C A C A D B Then the first acquires a B on the end (a passing monomer): A C C D B The second loses a C from one end (bumps into a rock): A A B Each then joins up with free matching monomers giving: AC CA CA DB BD and AC AC BD Which, if we flip the second over are: AC CA CA CA CA DB DB BC both being mutant daughters of the original AC CA CA DB And both being just as capable of reproducing themselves as their parent.
There is only perfect and eternal cloning. And, this heredity is a much more important feature of life than copying errors. So, tell me Lizzie, how can we realistically move beyond this first self-replicating molecule?
But there isn't only "perfect and eternal cloning". There can be lots of imperfect clones. And if some of them turn out to be better than others, more clones of the good cloners will exist. Elizabeth Liddle
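A toy simulation of the template-copying-with-errors picture in this exchange might look like the following sketch; the A-C / B-D pairing rule, the PAIR table, the replicate function and the error model are merely assumptions chosen to mirror the worked example above, not a claim about real chemistry:

```python
import random

PAIR = {"A": "C", "C": "A", "B": "D", "D": "B"}   # complementary monomers, as in the AC/CA/DB/BD units above

def replicate(strand, err_rate=0.1):
    # Build the complementary strand, with an occasional end-loss or end-gain
    # "copying error", then return both strands; each can go on replicating.
    complement = [PAIR[m] for m in strand]
    if random.random() < err_rate and len(complement) > 1:
        complement.pop()                           # loses a monomer (bumps into a rock)
    if random.random() < err_rate:
        complement.append(random.choice("ABCD"))   # picks up a passing monomer
    return [strand, complement]

parent = list("ACCDDBCCAADB")   # left-hand monomers of AC CA CA DB DB BD CA CA AC AC DB BD
population = [parent]
for _ in range(5):
    population = [daughter for s in population for daughter in replicate(s)]

print(len(population), "strands after 5 rounds; some may be imperfect (mutant) copies")
```

Imperfect copies that can still pair up and replicate are the point at issue between the two commenters above.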
I need to learn how to use tags properly (actually, this might work)! Here's a good old fashioned link to the Joe Francis article: http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952959 Chris Doyle
"So why isn't every can of soup you open teeming with novel life forms as a new start world of life?" A very good question, kairosfocus. We've provided a number of major obstacles on this thread - not one of which has been dealt with by our opponents yet. If the argument is not already settled in our favour, then I think this article by Joe Francis surely finishes the job off: Oxygen, Water and Light, Oh My! The Toxicity of Life's Basic Necessities (hopefully that's a link) Chris Doyle
GP: Not only must these paths exist, but they would have to be accessible so that within a few 100 mn yrs of the end of the late bombardment era on the usual timeline, life emerges on earth. This means the search for such has to be MUCH simpler than a search for a 1,000-bit dFSCI object. More like that for a 20 - 30 bit object. So, using the usual RNA world type model, we are looking at 10 - 20 monomer RNAs that self-replicate and template "proteins" at that length that have significant replication-rewardable function etc. There is of course a lot of speculation on this sort of thing, too much of it presented as though it were proved fact. As for the onward imagined process by which such would now create a code for DNA [a near optimal code!], regulatory networks, and the complex machinery of life to implement algorithms expressed in the DNA etc etc, of this we find nowhere the faintest empirical trace in the body of observations. And, we are looking at the issue of 100+ k bits worth of control, data and algorithmic code writing themselves out of lucky noise here on our one little planet or in our solar system. [Has anyone worked out how long drift at any reasonable speed for organic molecules that are fairly fragile -- a supernova would rip 'em up -- takes to bridge interstellar space!] Berlinski's recent rebuke to such unbridled speculation is well worth clipping:
At the conclusion of a long essay, it is customary to summarize what has been learned. In the present case, I suspect it would be more prudent to recall how much has been assumed: First, that the pre-biotic atmosphere was chemically reductive; second, that nature found a way to synthesize cytosine; third, that nature also found a way to synthesize ribose; fourth, that nature found the means to assemble nucleotides into polynucleotides; fifth, that nature discovered a self-replicating molecule; and sixth, that having done all that, nature promoted a self-replicating molecule into a full system of coded chemistry. These assumptions are not only vexing but progressively so, ending in a serious impediment to thought. That, indeed, may be why a number of biologists have lately reported a weakening of their commitment to the RNA world altogether, and a desire to look elsewhere for an explanation of the emergence of life on earth. "It's part of a quiet paradigm revolution going on in biology," the biophysicist Harold Morowitz put it in an interview in New Scientist, "in which the radical randomness of Darwinism is being replaced by a much more scientific law-regulated emergence of life." Morowitz is not a man inclined to wait for the details to accumulate before reorganizing the vista of modern biology. In a series of articles, he has argued for a global vision based on the biochemistry of living systems rather than on their molecular biology or on Darwinian adaptations. His vision treats the living system as more fundamental than its particular species, claiming to represent the "universal and deterministic features of any system of chemical interactions based on a water-covered but rocky planet such as ours." This view of things - metabolism first, as it is often called - is not only intriguing in itself but is enhanced by a firm commitment to chemistry and to "the model for what science should be." It has been argued with great vigor by Morowitz and others. It represents an alternative to the RNA world. It is a work in progress, and it may well be right. Nonetheless, it suffers from one outstanding defect. There is as yet no evidence that it is true . . .
And a look at the video here will help. Folks, we can put this in terms of a pointed question: your friendly local soup can has in it much more than the sort of thing that is being imagined for OOL. So why isn't every can of soup you open teeming with novel life forms as a new start world of life? GEM of TKI kairosfocus
Indium: By the way, I still cannot access your links. Could you please post the complete URL? gpuccio
Indium: "Therefore, the ratio of the target/total search space is irrelevant as long as there is a somehow viable evolutionary pathway." That's the point you have wrong. As far as we know, those pathways don't exist. We have never seen them, and there is absolutely no logical reason, except for the blind faith of darwinists, for them to exist. All that we know about complex information, both in human artifacts and computer programming and biology, tells us that those paths do not exist. These are the facts. So, unless and until someone shows those paths to exist, my argument is completely relevant. Remember again, science is about known facts, not fairy tales or dogmatic hopes and beliefs. gpuccio
MI: Understood. I just wanted to underscore the force of the point. Sometimes, I think part of the problem lies in a feature of the very exponential notation used: its compression. Odds of 1 in 10^60 or a ratio like that are comparable to a grain of sand to the mass of the atomic matter in our observed universe. Or try one atom to our solar system. A genome of 100,000 bases -- too small, actually -- corresponds to a space of configs of order 9.98*10^60,205. That is a staggering number. Maybe if we had to write out pages and pages of zeroes it would help us see the unimaginable scale of such a number. The resources of the observed universe could not begin to even pretend to sample it. And the space we are dealing with for all genomes runs out to about 700 BILLION bases, so far. 3.75*10^421,441,993,929 possibilities swamps the number of individual organisms that have ever existed. Sampling theory tells us loud and clear that when we take a tiny nick out of a huge population, the most we can hope for is that we have something reasonably typical. Even at the 1,000 bits end, our cosmos cannot begin to explore a tiny fraction of the possibilities. So, the notion that a chance-based random walk can land us on the shores of ANY island of function that is even reasonably isolated, is nonsense. We are looking at a very different set of options for evo mat at this point:
1: functional states DOMINATE the space, which is patently not so.
2: Function begins so early, at such a simple level of complexity, and leads through a smoothly branched tree pattern to all major forms, that the first forms are easy to get to and lead on to the others. 80+ years of OOL studies give the lie to this.
3: Life is literally programmed into the laws of physics, so once a suitable environment forms, it is inevitable. (This would mean that every can of soup in the supermarket would be brimming over with spontaneously formed life.)
4: Our understanding of thermodynamics [e.g the implications of diffusion], information theory, chemistry and physics is totally inept.
5: The apparent design is not just apparent. And, actually, 3 would be the same as this.
Perhaps someone can suggest another, but so far it looks like the signs are speaking loud and clear. GEM of TKI kairosfocus
Hi KF, Thanks for taking the time to answer my questions. They were partially of a rhetorical nature; however, the answers are still interesting. 1) Has anyone an idea of how many likely functional targets exist in a space of, say, 20^400? (This represents a protein chain of 400 aminos.) I take it from your reply that it's veritably irrelevant, since the search space is so large and the search resources so small. My point was that we would need a huge number of targets, > 10^370, in order for even this one protein to be feasibly found by a blind process. (20^400 is around 10^520.) I realize this is a drastic understatement of the problem, since it is compounded with folding contingencies, and again with every disparate protein in a cell, and again with every contingent arrangement of proteins into functional parts (organelles in the single-celled case). The magnitude of the problem is staggering. It would literally take a multiverse's worth of probabilistic resources just for a single 400-length chain to be found by a random search.
Actually that falls apart on closer inspection: not least, this quietly slips across the border into metaphysical, philosophical speculation. The empirical, observed evidence for a quasi-infinite, eternal multiverse is NIL.
World views (beliefs) seem to be given top billing as a fact of human nature. There is nothing we observe or reason that cannot be sacrificed to protect our world view. The only hope we have is to believe that truth exists. This is a spiritual matter, of course. I've seen reason itself denied (in the name of reason, go figure) more than once on this blog by those opposed to the possibility of design in nature.
For the 128 ascii characters to be used to make sentences in English, some highly restrictive rules have to be used to select sequences in strings, and this locks out the vast majority of sequences.
Yes, even this paragraph is beyond the search capabilities of the entire universe, multiplied many times. This is the observed reality that must be rejected if one is to maintain a world view that doesn't include a creator.
The gap here is that the problem is to get to the shorelines of those incredibly isolated islands of function, not to move around within such an island. (This has been underscored many, many times, but is routinely ignored or brushed aside. But it is the decisive issue
I think that movement on these islands would be limited. Even if we stipulate to some mysterious process finding the island, I don't believe it's been shown (by anyone, ever) that we can jump to a neighboring function, unless it's within rowing distance. In other words, for a given protein, it's limited to rather minor variations (a few aminos), rather than bridging larger differences in protein structure (and therefore function). If these bridges exist, we should know already. 2) Is there any known probability distribution that would allow bridges between functional targets given a reasonable target estimate? I'm assuming the answer here is plainly no. Any uniform or non-uniform distribution of targets across the search space will yield practically the same results for most proteins, since the search space is so large and the functional permutations likely vanishingly small by comparison. m.i. material.infantacy
MI: Actually, all that is needed for the considerations above to become extremely relevant, is for:
1: the appropriate config space to be extremely large relative to the available search resources so as to reduce a search process on the relevant scale to be not significantly different from zero in a needle in a haystack search [500 bits -- solar system swamped, 1000 bits -- observed cosmos swamped]
2: we have credible reason to hold that the objects in view are functionally specific, and complex so that they are deeply isolated in the space of possibilities for something of that bit depth of complexity
The first criterion is fairly obvious. If you can only do a relatively tiny sample the only things you will credibly pick up by a random walk and trial and error search process -- and something dominated by molecular noise or the like will be like that -- will be something typical of the broad bulk of the distribution. This is not far from the statistical basis for the second law of thermodynamics. It is the second criterion that seems difficult to accept. But the principle can be seen from the text in this thread. For the 128 ascii characters to be used to make sentences in English, some highly restrictive rules have to be used to select sequences in strings, and this locks out the vast majority of sequences. Similarly, when we look at proteins, at length DM seems to have had to acknowledge that the vast majority of AA sequences -- we need not pause to look at issues on chirality, peptide vs non peptide bonds and interference by cross reactions etc -- will be unlikely to fold and function properly in the context of a living cell. (Indeed the whole class of prion diseases is one where propagating misfolds play havoc with properly sequenced proteins. Thence, mad cow disease etc.) Similar constraints affect DNA and RNA chains. So, the blind evolutionary argument [which is not the same as all evolutionary arguments] is in effect asserting that chance sequences can find function. For small changes in things that already work, such may work, but when you are looking at coming up with novel body plans the problems literally exponentiate. Every additional bit of info doubles the space of possible configs. So, once we reach the sort of thresholds above, we run into the problem that the search resources that are relevant -- and for practical purposes our solar system is our chemically available "universe" -- soon become utterly inadequate. Indeed this is a part motivation for the sudden popularity of quasi-infinite multiverse models or rather speculations. Proponents of such, basically hope that the quasi infinite scope they propose will swamp all search space challenges, so that we "should" not be surprised if we see ourselves as the lucky winners. Actually that falls apart on closer inspection: not least, this quietly slips across the border into metaphysical, philosophical speculation. The empirical, observed evidence for a quasi-infinite, eternal multiverse is NIL. And once we are in the province of phil -- despite intemperate objections -- other non-scientific, historical and current evidence becomes very relevant such as the literally millions over the centuries who claim to have met and come to know God in positively life transforming ways. Once the door to phil is open, thank you we can go through it and address the full panoply of comparative difficulties across live option worldviews. But the focus is on scientific matters here. For that, we have empirical evidence of but one observed cosmos, with a credible beginning, and with evidence of locally highly sensitive fine-tuning that puts it at an operating point fitted for C-chemistry, cell based life. We also observe that that life is based on digital code bearing molecular technologies and algorithms. The only empirically credible explanation for such is intelligence, and we have excellent analytical grounds for backing that up. Now that does not tell us whodunit, where, when or how, but it strongly suggests on empirical evidence that twerdun is the best explanation. 
And through that door lie many interesting scientific and even technological possibilities. GEM of TKI PS: Indium in 142 does not seem to recognise a key begged question:
There are huge amounts of possible sequences which most likely don´t make any sense and don´t even result in folded proteins. Evolution does not waste its resources to search these spaces, by definition it only looks in the vicinity of existing genomes. Therefore, the ratio of the target/total search space is irrelevant as long as there is a somehow viable evolutionary pathway.
1 --> The gap here is that the problem is to get to the shorelines of those incredibly isolated islands of function, not to move around within such an island. (This has been underscored many, many times, but is routinely ignored or brushed aside. But it is the decisive issue.)
2 --> Such starts with OOL and goes on to origin of body plans. (Cf here on whales and here on OO life.)
3 --> If you don't have an existing functional genome for a body plan, you don't have a start point for searching for improved performance in its near vicinity.
4 --> So, let us un-beg that question: on empirical evidence, how can chance variations in still warm ponds or unicellular organisms, filtered through trial and error and relative success, originate cell based life and originate embryologically feasible body plans?
5 --> Specifically, how do we account for the origin of the cell and the origin of the whale, on OBSERVED evidence that answers to the functional info origination challenge, within the relevant scopes of resources? Beyond just so stories.
6 --> Where was the work done, by whom, when, and where is it published in the serious literature? kairosfocus
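As a small numerical illustration of the "typical of the bulk" sampling point in the comment above, here is a quick Monte Carlo sketch; the trial count is arbitrary, and the sketch only shows how blind samples behave, nothing more:

```python
import random

# Toss 1,000 fair coins many times and track how far each outcome strays
# from the typical 50-50 split.
trials = 100_000
worst = 0
for _ in range(trials):
    heads = bin(random.getrandbits(1000)).count("1")
    worst = max(worst, abs(heads - 500))

print("largest deviation from 500 heads in", trials, "trials:", worst)
# Typically well under 100: every sample sits in the near-50-50 bulk, nowhere
# near narrow, independently specified zones such as ASCII text strings.
```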
Indium tells me that:
Evolution is not random.
According to the theory, the processes are blind and undirected/unguided -- i.e. accumulations of genetic accidents. And you still cannot provide any evidence for that mechanism constructing new, useful and functional multi-part systems. So the bottom line remains: every time we have observed CSI and knew the cause, it has always been via agency involvement -- always. Therefore it is safe to infer a designing agency was involved when we observe CSI and do not know the cause. Joseph
Mung @ 142: "And that makes your argument a non sequitur, which means you should reformulate it or discard it." Also, if one cares to notice it, one can see that Indium's particular argument you quote and critique is denying that Darwinism is true, is denying that "evolution" has the power to move off those "incredibly tiny" "islands of fitness." Or, to look at it in a slightly different direction, he is denying that "evolution" has the power to create new "islands of fitness." Ilion
"There are huge amounts of possible sequences which most likely don´t make any sense and don´t even result in folded proteins."
Has anyone an idea of how many likely functional targets exist in a space of, say, 20^400? It seems that if finding any functional target in the first place is to be considered plausible, then somebody should have an idea. That's not to say we can start with the proposition that *evolution* is true and then reason backwards that there are at least 20^400 / 10^150 functional targets in that sequence space (~10^520 / 10^150 = 10^370), although I can see where that would be tempting for some. I'd wager it's the other way around, that in a 20^400 space there are fewer than 10^150 functional permutations, leaving an effective search space of 10^370 (prohibitively large). Islands of function in a sea of configurations doesn't do this problem justice. We'd be better off thinking of this as atoms of function in a multiverse of configurations.
"Evolution does not waste its ressources to search these spaces, by definition it only looks in the vicinity of existing genomes."
I'm not sure what "by definition" means here. Rather, if variations are confined around existing function, it would suggest a mechanism for a non-random search.
"Therefore, the ratio of the target/total search space is irrelevant as long as there is a somehow viable evolutionary pathway."
A viable pathway would seem a requisite condition for accepting the validity of a variation mechanism independent of design. Also, the target/space ratio is still completely relevant, since reasonable estimates of the distance between functional intermediates would depend on knowing both the search space and the likely number of permutations that would yield valid targets. Is there any known probability distribution that would allow bridges between functional targets given a reasonable target estimate? I'd wager again that in order for functional targets to be bridged by any Darwinian variation mechanism, all reachable functions couldn't be more than a couple aminos apart. If each permutation in the target space were a vertex in a graph, the graph's edges would be formed by any two vertices where the cost of traversal was within either the known abilities of Darwinian evolution, or within an exploration bound reasonable to the size and age of the universe (read UPB) for a random search. That's not even considering the difficulties of trait fixation and selective advantage.
"...and for many or even most cases we may never recover these pathways."
That's a little too convenient for a purported biological theory of everything.
"...at this point your CSI argument is more or less completely irrelevant."
It couldn't be more relevant, especially given the statement that "There are huge amounts of possible sequences which most likely don´t make any sense and don´t even result in folded proteins." The very concept of CSI is present in the analysis of the search space vs the functional target space. If SS/TS > 10^150, there's not even a prayer of a blind mechanism driving novelty in biology, and that's being extremely generous. --- m.i. --- material.infantacy
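The vertices-and-edges picture sketched in this comment can be made concrete in a few lines; the hamming and bridge_edges helpers, the toy sequences and the two-substitution step limit below are purely hypothetical placeholders, chosen only to show the idea:

```python
from itertools import combinations

def hamming(a, b):
    # number of positions at which two equal-length sequences differ
    return sum(x != y for x, y in zip(a, b))

def bridge_edges(functional_seqs, max_step=2):
    # vertices are the functional sequences; an edge joins any two whose
    # separation (the "cost of traversal") is within max_step substitutions
    return [(a, b) for a, b in combinations(functional_seqs, 2)
            if hamming(a, b) <= max_step]

# purely hypothetical toy "functional" sequences, for illustration only
targets = ["ACDEFG", "ACDEFY", "MKLPQR", "MKLPQW"]
print(bridge_edges(targets))
# [('ACDEFG', 'ACDEFY'), ('MKLPQR', 'MKLPQW')]: two clusters with no short
# "bridge" between them at max_step = 2
```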
Indium:
There are huge amounts of possible sequences which most likely don´t make any sense and don´t even result in folded proteins.
That is correct. And the possible sequences are what define the size of the search space. Agreed?
Evolution does not waste its resources to search these spaces
Evolution does not know whether it is wasting resources or not. But you made the following claim: you guys often vastly overestimate the search space. How so? And you appeared to reason that this is the case because: Evolution searches in the vicinity of viable and reproducing organisms and therefore only “tests” an incredibly tiny amount of the total possible DNA sequences. So I ask again, how does the fact that evolution can only search an “incredibly tiny amount” of the search space somehow change the size of the search space? The answer is simple, it does not. Why not just say so? And that makes your argument a non sequitur, which means you should reformulate it or discard it.
Therefore, the ratio of the target/total search space is irrelevant as long as there is a somehow viable evolutionary pathway.
A completely different argument. If that is what you meant you should have said so rather than accusing us of vastly overestimating the search space. I mean you should hear yourself. First you say we often vastly overestimate the search space. Then you say it's irrelevant. So you say what's important is whether there is a pathway. Fine. How many pathways are there? Call each pathway a target. You're still faced with the same problem. You make it sound like viable pathways are no problem. And you make it seem like finding a viable pathway is similarly no problem. The only way you have to look for a viable pathway is a toss of the proverbial dice. So you're right back where we started, with no counter-argument. Mung
Gpuccio:
Have you models for more recent protein families?
I will probably not be able to match the level of detail you are used to from your design hypotheses, but a quick search shows things like this. But maybe you can first comment on the previous paper I linked? Or you can link me to a similarly detailed design hypothesis and we might argue about that? Mung:
How does the fact that evolution can only search an “incredibly tiny amount” of the search space somehow change the size of the search space?
There are huge amounts of possible sequences which most likely don't make any sense and don't even result in folded proteins. Evolution does not waste its resources to search these spaces; by definition it only looks in the vicinity of existing genomes. Therefore, the ratio of the target/total search space is irrelevant as long as there is a somehow viable evolutionary pathway. You may dispute this, and for many or even most cases we may never recover these pathways. In any case, at this point your CSI argument is more or less completely irrelevant. Joseph: Evolution is not random. Indium
Onlookers: First, within an island of existing function, you can indeed move around by hill climbing. That is common ground that even the despised Young Earth Creationists would agree to. The issue is, to get to islands of function, and to do so from initial reasonable starting points. To give an idea of what is going on, observe above, whether there is any darwinian answer to say transforming a wolf or a cow into a whale. Plainly not. So why is there such a confident presentation of the sequences alleged to show this? (Cf Sternberg's video here, recall this is an evolutionary biologist speaking. And while you are at it, take a look at Paley's self replicating watch case in light of the key point on ADDITIONAL capacity of self-replication -- the watch story has been routinely stramannised in the common discussions by Darwinists; they only discuss Ch I, and skip over Ch II. That would cover a self replicating house too, complete with sprinkler; cf. the vNSR discussion here.) Remember, the claim is, functional small steps from one end to the other. Where are they? I mean, observationally? Who has actually OBSERVED such, where, when, published where? Without empirical facts in evidence as an answer to empirical tests are we not really talking philosophical speculation, not science? Chirp, chirp, chirp . . . Now, if we turn instead to moving from a unicellular animal to get a multicellular one, we are talking of moving the genome from about 1 million bases, lets round town to 1 mn bits. Going to about 10 - 100+ mn, on observed genome sizes. How are you going to by random variation, innovate then fix then re innonvate to get embryologically feasible body plans, 9 mn bases -- lets call them bits to allow for various issues -- worth? As a matter of fact, we do not have even a good account of the origin of the flagellum on darwinian terms, only a just so story on something that looks very much like a derivative of the flagellum, the T3SS. In short, this stuff is v long on speculation, very short on empirical data to show the alleged branching tree pattern of small incremental variations at genetic level. We do directly know that intelligences routinely generate 1 mn to 10 or 100 mn bits worth of functionally specific complex information. So the only leap there is like causes like, and the problem of making a self replicating automaton. We can analyse them, but have yet to build a kinematic self replicator. So, we can easily infer to best explanation on known capacity and known lack of capacity. Design is not a gaps inference, but an inference from what has known capacity as opposed to what has known challenges to do the task. Have you ever seen complex functional digital codes, algorithms, and implementing machines originate from scratch, without intelligent input? Life forms self replicate such systems, but -- even as Paley also pointed out 200 years ago -- replication is not origination. It is origination that needs to be answered to, and complex information bearing codes are not observed to occur by lucky noise captured by success. They are designed. So much so that we are entitled to infer from these as reliable signs to their most credible cause, per inference to best explanation. This is not about geochronology or the debated interpretations of religious texts. Nope, this is on the conventional timeline and sequences of life forms, starting with the Cambrian explosion and going on to other significant body plan innovations. 
The fossil timeline -- since the days of Darwin -- says we have had a burst of dozens of body plans some 500 - 600 MYA. That means, dozens of times over, accounting for 10+ mn bits of fresh functionally specific info in the genome, preferably with fossil evidence to back up such a claim. Wasn't there in Darwin's day, still isn't there. And we know that the atomic resources of our observed cosmos are insufficient to search out the set of configs for just 1,000 bits of storage. We are dealing with UN-representative, highly specific, organised informational patterns. So, the needle-in-the-haystack challenge, with an effectively zero-scope sample, obtains for the resources of the observed cosmos. But, just considering this post, which is well past the search capacity of the observed cosmos, the author as an intelligent designer was able to type it up in some minutes on a computer, which weighs about 2 lbs. That should tell us the difference between random walks rewarded by successful trial and error and intelligence. So plain is the point that it is clear the reason it is resisted is, as Lewontin et al pointed out: ideological captivity of science to a priori evolutionary materialism. So, let Philip Johnson have the last word for the moment:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
GEM of TKI kairosfocus
Clearly there’s more that doesn’t work in that analogy than the fact that houses don’t reproduce and sprinkler systems aren’t passed on as traits, but at least adding such would address Chris’ issue.
You're right. The analogy fails. Every analogy fails. The trouble is that we can't find anything in the same league as biology to compare to it. We could compare it to houses, satellites, supercomputers, literature, in fact the sum of all human knowledge (and let's not forget sprinkler systems.) But then we'd be oversimplifying.
Of course, I didn’t say anything about dropping in the sprinkler system fully formed
No, you didn't. You said "randomly generated," which I suppose is much more plausible. After all, that is the underlying principle currently used to explain all of life. ScottAndrews
Hi Doveton, The thing is, the human body is vastly more sophisticated than a house and our immune system is vastly more sophisticated than a sprinkler system. If the analogy fails, it is only because it doesn't do justice to say, human physiology and biochemistry. When evolutionists try to downplay impossible probabilities and buy themselves near infinite time and resources to explore the sequence space they haven't managed to sweep under the carpet with the preceding downplaying, they are forgetting just how urgent and inexplicably interdependent these amazingly advanced functions are. Frankly, you've got more chance of success dealing with spontaneous randomly appearing sprinkler systems, than you do with dealing with real-life biology! Chris Doyle
Blind and unguided processes for a search? Now that is funny... Joseph
Indium:
At the same time, you guys often vastly overestimate the search space. Evolution searches in the vicinity of viable and reproducing organisms and therefore only “tests” an incredibly tiny amount of the total possible DNA sequences for example.
So you agree the size of the search space is very large and is related to the total possible DNA sequences, correct? How does the fact that evolution can only search an "incredibly tiny amount" of the search space somehow change the size of the search space? Mung
ScottAndrews,
At first I thought you were making a very funny joke. “Randomly generated sprinkler system?”
I think people get their ideas about evolution from X-Men and Heroes. It’s this magic force that randomly drops fully-developed, useful gifts on unsuspecting individuals.
I'm just running with the inaccurate sprinkler system analogy as presented. Clearly there's more that doesn't work in that analogy than the fact that houses don't reproduce and sprinkler systems aren't passed on as traits, but at least adding such would address Chris' issue. Of course, I didn't say anything about dropping in the sprinkler system fully formed, so I'm not sure why you think I presume such represents how evolution actually works. I pointed out one aspect of actual evolution that would have to be in place to overcome Chris' problem. If you want to discuss how organisms might actually gain a sprinkler system in a biological context, that's a different discussion. Doveton
Indium:
a) "The target space designation is post hoc and arbitrary."
It is based on what we observe and must explain. That's what science does.
b) "The search space volume is vastly exaggerated. Evolution does not have to explore the full sequence space when there are intermediate bridges."
Those bridges don't exist. Show them if you can. Therefore, the search space is not exaggerated at all.
c) "So the only question that remains is this: Can we construct such bridges or not."
I say you can't. But you are free to try. After all, it's you who have faith that they exist. I don't.
d) "Your fancy numbers however have no meaning at all, they can tell us nothing regarding the question whether or how some sequence evolved. You will always have to look at potential precursors to do an exact calculation. This is probably why you are concentrating on RP S2: It is so old that it is quite unlikely that we will be able to build a model of its evolutionary history."
Have you models for more recent protein families?
e) I can't access the link.
f) "On the other hand I am happy to accept that we don't know exactly how these extremely old parts of the genome evolved. I accept your result that they did not form randomly, so, ahem, well done!"
Thank you. gpuccio
Indium:
Ok, what about the search space? You seem to assume that the search space is based on the total number of AA acids.
How do you calculate the size of a search space? One poster here claimed that because there were only four possible bases in DNA the size of the search space was 4. Do you agree? Mung
Doveton @130 At first I thought you were making a very funny joke. "Randomly generated sprinkler system?" I think people get their ideas about evolution from X-Men and Heroes. It's this magic force that randomly drops fully-developed, useful gifts on unsuspecting individuals. ScottAndrews
The search space volume is vastly exaggerated. Evolution does not have to explore the full sequence space when there are intermediate bridges.
IOW, it doesn't have to search the ocean for islands. It just follows bridges. Leaving aside that the bridges are purely hypothetical, this does nothing to mitigate the problem. If points B, C, D, E, etc. are only accessible by bridges originating from point A, then A is still a primary target. (Not to mention that evolution must create its bridges, not find them.) You could explain this away by reasoning that point A is also arbitrary. After all, maybe biology could have taken some different form. But that's just supporting speculation with more speculation. Let's explain the natural occurrence of current biology before we start imagining new ones. ScottAndrews
Chris,
But if the need to install a sprinkler system meant the difference between life and death, you wouldn’t have much of an opportunity for a random search before the house burnt down!
You would if all the houses in a given neighborhood had reproductive capability, and if the houses that didn't burn down all the way, thanks to the randomly generated sprinkler system in the parent stock, could pass that improvement on to the next generation of houses. Doveton
Gpuccio It is almost funny how you dance around the main points: 1. The target space designation is post hoc and arbitrary. 2. The search space volume is vastly exaggerated. Evolution does not have to explore the full sequence space when there are intermediate bridges. So the only question that remains is this: can we construct such bridges or not? Your fancy numbers however have no meaning at all; they can tell us nothing regarding the question whether or how some sequence evolved. You will always have to look at potential precursors to do an exact calculation. This is probably why you are concentrating on RP S2: It is so old that it is quite unlikely that we will be able to build a model of its evolutionary history. In any case there *are* models for the origin of the superfamilies, for example this paper. On the other hand I am happy to accept that we don't know exactly how these extremely old parts of the genome evolved. I accept your result that they did not form randomly, so, ahem, well done! Indium
But if the need to install a sprinkler system meant the difference between life and death, you wouldn't have much of an opportunity for a random search before the house burnt down! Chris Doyle
There are a number of beneficial enhancements I could make to my home. (Within the context, those would include giving it mobility, sentience, and the ability to self-repair.) Can I mitigate the absurdity of finding a new bathroom via a random search by reasoning that it's not the only possible improvement? After all, that random search could find a pool table or a second floor instead. ScottAndrews
Joseph: "All this anti-ID CSI bashing and there STILL isn't any evidence that genetic accidents can accumulate in such a way as to give rise to new, useful and functional multi-part systems." A great truth indeed! Perhaps if darwinists spent less time bashing ID, and more time on imagination, we could have a greater number of just-so stories about why the darwinian theory works :) . gpuccio
Indium: I am horrified at your epistemology, but anyway, what can I expect from a convinced darwinist? Durston is making an estimate of the target space, not an exact measure. A margin of error is possible, in both directions. It is anyway a reasonable estimate, the best I have found up to now. Darwinists don't even try, even though that estimate is crucial for their theory. They seem to prefer ignorance, and arrogance. Or they just cheat. The best attempt at an estimate from the darwinian side is the famous Szostak paper, which is methodologically completely wrong.

About the abundance of "solutions": the fact remains that each solution must be found in a specific context (organism, environment). As I have tried to show, in a highly organized and complex organism, like a bacterium, whatever the environment, new solutions of some importance are usually few and complex. That's because a really new function must not only work, but also be compatible with the organism, and integrated with its already existing machinery. For instance, in this discussion I am deliberately avoiding higher levels of complexity and improbability, for instance that the new protein must be correctly transcribed and translated, at the right time and in the right quantity; in other words, it must be integrated into the complex regulation system of the cell. I am discussing only the sheer improbability of the basic sequence.

But going back to the discussion: let's imagine that, in a certain cell and in a certain environment, there may be, say, 10^3 (about 2^10) new complex proteins that, as they are, could confer by themselves a reproductive advantage (IMO, I am very generous here). Let's assume that each of them has a probability of being generated by chance of 1:2^462, that is 462 bits. Well, what would the probability be of hitting at least one of the 1000 solutions? You must sum the target spaces of the solutions, while the search space remains the same. So, the general probability would still be 1:2^452, that is 452 bits. Not a great improvement, as you can see. So, unless you believe in forests of functional solutions everywhere (well, you are a darwinist, it should not be difficult for you), your position is still rather questionable.

You say: "Before a lottery every single person has a very small probability of winning; after the lottery one person has won it with a quite high probability." Well, this is really simple ignorance or just, as you say, "a cheap trick". In a lottery, the probability that one ticket wins is 1 (100%). The victory of a ticket is a necessary event. In the scenario I described, the probability that at least one of the 1000 functional solutions is found is 1:2^452. We are near the UPB of 500 bits. I would say that it is not the same thing. Maybe even a darwinist should understand that. And yet, the "lottery" argument comes back once in a while, together with other ridiculous "pseudo-darwinian" arguments (I am trying here not to offend the few serious darwinists, who would never use such propaganda tools).

You say: "You seem to assume that the search space is based on the total number of AAs." It is. "But this is quite obviously false, since, as you admit, in reality you have to look at the number of 'new' AAs." No, if you have read my posts carefully, the meaning should be clear. But we can make it clearer.
If we are evaluating the dFSCI of a whole molecule, such as Ribosomal S2 in the Durston table, we refer to all the AAs, calculate the reduction of uncertainty for each AA position over the total number of aligned sequences, and then sum the results. The global functional complexity of the molecule is the improbability of the whole molecule emerging from a random walk. If, instead, you want to compute the functional complexity of a transition, you take into account only the new AAs. That point is very clear in the Durston paper. Have you read it? So, please let us know what the precursor of the Ribosomal S2 family is, and we can do a transition calculation. The only reason why we calculate the whole functional complexity for protein families is that no precursor is known. Indeed, I would say that no precursor exists. Please remember that the basic protein superfamilies are totally unrelated at the sequence level. Therefore, using a protein from one superfamily as a precursor for another superfamily is no different from starting from a random sequence. It is, however, a random walk.

"Every selectable intermediate step makes your argument weaker and weaker. If there is a viable evolutionary series from a starting to the end point, your argument breaks down (as you admit)." Yes. You are right. Like all scientific arguments, my argument can be falsified. That is the reason why it is a scientific argument (let me be Popperian, once in a while :) ). So, please falsify it.

"This means that in the end you are just making a quite elaborate god of the gaps argument." No. I am making a quite elaborate scientific and falsifiable argument. Please review your epistemology, and stop the propaganda.

"If nobody can show you with a pathetic level of detail how something evolved you just assume design." This is not correct. If you have followed the argument, other things are necessary. And why "pathetic"? A credible level of detail would do. Never seen any in darwinist reasoning.

"As kf says, it's just Paley's watch all over again." Kf forever! And Paley's watch is a very good argument. Always has been. Maybe not as "quite elaborate" as mine :) . But very good just the same. gpuccio
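(An illustrative aside, not part of gpuccio's comment: the "1000 solutions" arithmetic above can be checked with a few lines of Python. The figures used, 1000 candidate solutions at 462 bits each, are the comment's assumptions, not measured values.)
_________
import math

n_solutions = 1000        # assumed number of selectable new proteins (about 2^10), per the comment
bits_per_solution = 462   # assumed functional complexity of each solution, in bits

# Summing the target spaces over an unchanged search space:
p_any = n_solutions * 2.0 ** (-bits_per_solution)
print(f"combined improbability: {-math.log2(p_any):.1f} bits")  # ~452 bits
_________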
PS: Durston et al:
_________
>> The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or ζ = ΔH(Xg(ti), Xf(tj)). (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered. This problem of estimating the functionality as well as where it is expressed at the sub-molecular level is currently an active area of research in our group. To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule, is then the total sum of the measured FSC for each site in the aligned sequences. Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure. Since the functional uncertainty, as defined by Eqn. (1) is proportional to the -log of the probability, we can see that the cost of a linear increase in FSC is an exponential decrease in probability. For the current approach, both equi-probability of monomer availability/reactivity and independence of selection at each site within the strand can be assumed as a starting point, using the null state as our ground state. 
For the functional state, however, an a posteriori probability estimate based on the given aligned sequence ensemble must be made. Although there are a variety of methods to estimate P(Xf(t)), the method we use here, as an approximation, is as follows. First, a set of aligned sequences with the same presumed function, is produced by methods such as CLUSTAL, downloaded from Pfam. Since real sequence data is used, the effect of the genetic code on amino acid frequency is already incorporated into the outcome. Let the total number of sequences with the specified function in the set be denoted by M. The data set can be represented by the N-tuple X = (X1, ... XN) where N denotes the aligned sequence length as mentioned earlier. The total number of occurrences, denoted by d, of a specific amino acid "aa" in a given site is computed. An estimate for the probability that the given amino acid will occur in that site Xi, denoted by P(Xi = "aa") is then made by dividing the number of occurrences d by M, or, P(Xi = "aa") = d/M. (7) For example, if in a set of 2,134 aligned sequences, we observe that proline occurs 351 times at the third site, then P ("proline") = 351/2,134. Note that P ("proline") is a conditional probability for that site variable on condition of the presumed function f. This is calculated for each amino acid for all sites. The functional uncertainty of the amino acids in a given site is then computed using Eqn. (1) using the estimated probabilities for each amino acid observed. The Fit value for that site is then obtained by subtracting the functional uncertainty of that site from the null state, in this case using Eqn. (4), log20. The individual Fit values for each site can be tabulated and analyzed. The summed total of the fitness values for each site can be used as an estimate for the overall FSC value for the entire protein and compared with other proteins. >> __________ All of this is reasonable, and is related to how information content of real world codes or examples of languages generating text is estimated. kairosfocus
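(Editorial sketch, for readers who want to see the per-site bookkeeping of the excerpt above in runnable form. The three short "aligned sequences" below are invented toy data, not from Pfam; the code only illustrates the procedure as described: estimate P(aa) = d/M at each site, compute the site's functional uncertainty H, and take Fit = log2(20) - H.)
_________
import math
from collections import Counter

# Toy alignment: three invented 3-residue "sequences" with the same presumed function.
alignment = ["MKV", "MRV", "MKI"]
M = len(alignment)                      # number of aligned sequences
ground_state = math.log2(20)            # null state: 20 equiprobable amino acids (~4.32 bits)

def site_fit(column):
    # P(aa) = d/M for each amino acid observed at this site; H is the site's
    # functional uncertainty; the Fit value is log2(20) - H.
    h = -sum((d / M) * math.log2(d / M) for d in Counter(column).values())
    return ground_state - h

fits = [site_fit([seq[i] for seq in alignment]) for i in range(len(alignment[0]))]
print([round(f, 2) for f in fits])      # per-site Fits
print(round(sum(fits), 2))              # summed FSC estimate for the whole toy sequence
_________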
Indium: Pardon, but I must be direct. Why is it that you keep on dodging easily accessible corrective information? For instance, kindly cf 111 above, where you will see as a PS:
Indium, sampling theory will tell us that a random or reasonably random sample will be representative of a space. Have you seen how the relative frequency patterns of symbols in codes are identified? By precisely the sort of sampling approach Durston et al took; even though someone out there actually composed and published a whole book in which the letter E never occurs. You are being selectively hyperskeptical.
The question you raise as though it were a crippling objection to what Durston et al did is in fact an objection to a reasonable extension of a standard practice in evaluating the information content of coded messages: observe symbol frequencies and use them to estimate the probabilities needed in Shannon's H-metric of average info per symbol. In the case of Durston et al, if say 5 of the 20 AAs can work -- functional state as opposed to ground state -- in a given position, across the world of life, then we see that the relevant ratio for the information measure is 1 in 4, not 1 in 20 [so, less info per symbol at that point: 2 functional bits, not 4.32, for that position . . . and in my own look-see on Cytochrome C, that was the sort of range, up to positions that were pretty well fixed to just one AA], and we can go across the aligned segments of the relevant protein families like that. Why not look at point 9 here [scroll down], for a bit of a discussion? Bradley:
9] Recently, Bradley has done further work on this, using Cytochrome C, which is a 110-monomer protein. He reports, for this case (noting along the way that Shannon information is of course really a metric of information-carrying capacity and using Brillouin information as a measure of complex specified information, i.e IB = ICSI below), that:

Cytochrome c (protein) -- chain of 110 amino acids of 20 types

If each amino acid has pi = .05, then average information “i” per amino acid is given by log2 (20) = 4.32

The total Shannon information is given by I = N * i = 110 * 4.32 = 475, with total number of unique sequences “W0” that are possible is W0 = 2^I = 2^475 = 10^143

Amino acids in cytochrome c are not equiprobable (pi ≠ 0.05) as assumed above. If one takes the actual probabilities of occurrence of the amino acids in cytochrome c [i.e. by observing relative frequencies across the various forms of this protein across the domain of life], one may calculate the average information per residue (or link in our 110 link polymer chain) to be 4.139 using i = - Σ pi log2 pi [TKI NB: which is related of course to the Boltzmann expression for S]

Total Shannon information is given by I = N * i = 4.139 x 110 = 455. The total number of unique sequences “W0” that are possible for the set of amino acids in cytochrome c is given by W0 = 2^455 = 1.85 x 10^137 . . . .

Some amino acid residues (sites along chain) allow several different amino acids to be used interchangeably in cytochrome-c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey)

M = 2^310 = 2.1 x 10^93 = W1

Wo / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^44

Recalculating for a 39 amino acid racemic prebiotic soup [as Glycine is achiral] he then deduces (appar., following Yockey):

W1 is calculated to be 4.26 x 10^62

Wo/W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74

ICSI = log2 (4.35 x 10^74) = 248 bits

He then compares results from two experimental studies: Two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found 1 in 10^75 (Strait and Dewey, 1996) and 1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).

--> Of course, to make a functioning life form we need dozens of proteins and other similar information-rich molecules all in close proximity and forming an integrated system, in turn requiring a protective enclosing membrane.

--> The probabilities of this happening by the relevant chance conditions and natural regularities alone, in aggregate are effectively negligibly different from zero in the gamut of the observed cosmos.

--> But of course, we know that agents, sometimes using chance and natural regularities as part of what they do, routinely produce FSCI-rich systems. [Indeed, that is just what the Nanobots and Micro-jets thought experiment shows by a conceivable though not yet technically feasible example.]
GEM of TKI kairosfocus
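(Editorial sketch of the Shannon-style bookkeeping in the Cytochrome C discussion above, for the equiprobable case only. The equal frequencies are a placeholder; the 4.139 bits/residue figure quoted from Bradley comes from observed frequencies, which are not reproduced here.)
_________
import math

N = 110                          # residues in cytochrome c
freqs = [1 / 20] * 20            # placeholder: 20 equiprobable amino acids

i_per_residue = -sum(p * math.log2(p) for p in freqs)   # 4.32 bits/residue for this case
I_total = N * i_per_residue                             # ~475 bits
log10_W0 = I_total * math.log10(2)                      # log10 of W0 = 2^I, to avoid huge integers

print(f"i = {i_per_residue:.3f} bits/residue, I = {I_total:.0f} bits, W0 ~ 10^{log10_W0:.0f}")
_________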
All this anti-ID CSI bashing and there STILL isn't any evidence that genetic accidents can accumulate in such a way as to give rise to new, useful and functional multi-part systems. Joseph
Indium,
Evolution searches in the vicinity of viable and reproducing organisms and therefore only “tests” an incredibly tiny amount of the total possible DNA sequences for example. And as long as there is a working path between a starting and an end point, evolution might have the resources to find it.
You're describing a speculative hypothesis - that between two points there is an evolutionary path - as if it were an observed phenomenon like cloud formation or digestion. It's a bit clearer if we refer to these "searches" and paths as what they are - speculative hypotheses. Otherwise we risk giving them too much weight and credibility. ScottAndrews
Gpuccio: I find this
Durston need not know anything like that. What Durston says is very simple: given the existing variants of that protein with that function in the whole proteome, the functional complexity computed by calculating the reduction of uncertainty for each AA position, derived from the existing sequences, is such and such.
a bit hard to parse, but anyway: you explicitly said that to determine dFSCI one has to determine the target space (the number of sequences performing a specific function). Durston cannot exhaust the total target space for the function he is looking at, since he doesn't know which other, maybe even much shorter, sequences might have a similar function, or the same function in a different environment. But we are repeating ourselves here. And of course the constraints on evolution are huge. But to look at just one function and then declare the evolution of *exactly* this solution incredibly unlikely is kind of a cheap trick. Before a lottery every single person has a very small probability of winning; after the lottery one person has won it with a quite high probability. Ok, what about the search space? You seem to assume that the search space is based on the total number of AAs. But this is quite obviously false, since, as you admit, in reality you have to look at the number of "new" AAs. Every selectable intermediate step makes your argument weaker and weaker. If there is a viable evolutionary series from a starting to the end point, your argument breaks down (as you admit). This means that in the end you are just making a quite elaborate god of the gaps argument. If nobody can show you with a pathetic level of detail how something evolved you just assume design. As kf says, it's just Paley's watch all over again. Indium
GP: Excellent. G kairosfocus
Indium: A few simple comments for you:

1) You ask: "How does Durston know he has exhausted all possible ways to generate the same function in the phenotype?" Durston need not know anything like that. What Durston says is very simple: given the existing variants of that protein with that function in the whole proteome, the functional complexity computed by calculating the reduction of uncertainty for each AA position, derived from the existing sequences, is such and such. I have commented, in my post #103 (to you): "The Durston method, applying the Shannon reduction of uncertainty to single amino acid positions in large and old protein families, is a reasonable method to approximate the target space, provided we assume that in those cases the functional space has been almost completely traversed by neutral evolution, which is a very reasonable assumption, supported by all existing data." The fact is, proteins diverge in a protein family because of neutral evolution. So, they "explore" their target space in the course of natural history. That is the big bang theory of protein evolution, and it is supported by observed facts. It is true that many protein families, even old ones, are still diverging, but the general idea is that in old families most of the target space has been explored. This is a reasonable, empirically based assumption.

2) You ask: "How does Durston know which other functions could have evolved instead of the one he is looking at? Which other "targets" might there have been?" Durston need not know anything like that. He is just measuring functional complexity in protein families. My definition of dFSCI, too, has no relationship with this "problem". I have explicitly stated that dFSCI must be calculated for one explicitly defined function. So, what do you mean by your question? Well, I will try to state it more consistently for you. It is a form of an old objection, which I usually call the "any possible function" objection. I have answered that objection many times, and I will do that again for you now. The objection, roughly stated, goes as follows: "But evolution need not find a specific functional target. It can reach any possible functional target, any possible function. So, its chances of succeeding are huge." Well, that is not true at all. Darwinian evolution has two severe constraints: a) it must reach targets visible to NS; b) it must reach targets visible to NS in an already existing, very complex and very integrated system (the replicator). Obviously, b) is valid only for darwinian evolution "after" OOL, let's say certainly after LUCA. If you prefer to discuss OOL, we can do that, but I am not sure it would be better for you. :) So, what does that mean? It means that, unless and until a target which is naturally selectable in a specific replicator emerges, NS cannot "come into action", and therefore the only "tool" acting according to darwinian theory is Random Variation. So, let's try to define how you should formulate a calculation of dFSCI for the emergence of some new functionally complex property in some bacterium species. What you simply should do is: a) Define a series of steps, one leading to the other, that are naturally selectable. b) Each step must have the following properties: 1) It has a definite reproductive advantage versus the previous step (it is naturally selectable). IOWs, adding that variation to an existing bacterial population (the previous step), the new trait will rapidly expand to all or most of the population. 
2) The transition from each step to the following one must not be too complex (certainly, it must not be of the dFSCI order of magnitude), so that it is in the range of random variation. 3) The final transition, from the initial state to the final state, whatever the number of intermediate steps, must be complex enough to exhibit dFSCI. IOWs, the new function emerging as the final result of all the transitions must have a functional complexity of, let's say, at least 150 bits. For simplicity, let's say that it must include at least 35 new AAs, each of them necessary for the new function. Well, show me such a path, explicitly proposed and if possible verified in the lab, for any of the existing complex protein families. That is what is meant by a serious scientific theory. If that does not exist, not even for one case, then the darwinian theory is what it really is: a just-so story.

c) You say: "I don't know how big the target space is, and neither do you, I guess. But when you follow Gpuccio's algorithm to determine dFSCI you have to calculate it. And my point is that if you assume there was just this "WEASEL" that had to be reached, you're underestimating the target space by a vast amount. Since this concentration on just one target is one of the weaknesses of Dawkins' WEASEL, I don't understand why ID people would make this error." First of all, nobody is supposing that there is just one "WEASEL". The Durston method approximates the target space for one function, and that target space is usually very big. Take the values for Ribosomal S2, for instance, a protein of 197 AAs, analyzed in a family of 605 sequences. The search space is 851 bits. The functional complexity is "only" 462 bits. That means that the method computes the target space for that protein at 389 bits, that is 2^389 functional sequences. That is hardly a small functional space, I would say. The method works, and it works very well. Moreover, the weakness of the WEASEL example is a different one, and very simple: the algorithm already knows the solution. It is a very trivial example of intelligent selection. gpuccio
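(Editorial sketch of the Ribosomal S2 bookkeeping just cited: given a 197 AA protein and the 462 Fits value quoted from Durston, the implied target space is simply the difference in bits. Both input figures are taken from the comment above.)
_________
import math

length_aa = 197                  # Ribosomal S2 length, per the comment above
fsc_fits = 462                   # Durston's functional complexity value, in Fits

search_space_bits = length_aa * math.log2(20)       # ~851 bits
target_space_bits = search_space_bits - fsc_fits    # ~389 bits, i.e. ~2^389 functional sequences
print(f"search ~{search_space_bits:.0f} bits, implied target ~{target_space_bits:.0f} bits")
_________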
Morning Lizzie, We certainly do have different mental images here! I think that's because I'm talking about self-replication and you're talking about something else (I'm trying to think of a technical term for disassembling and reassembling lego blocks… but can't)! Crucially, the ability of the first self-replicating molecule to self-replicate (a small part of which may involve splitting down the middle) comes from its molecular shape and content, so that will include its sequence. The process of self-replication does not and cannot rely upon random free-floating monomers being in the right place at the right time: self-replication needs to be far more self-contained than that, or else the parent would almost certainly lose the daughter (and itself!) in the very process of reproduction. Besides, if self-replication were merely the "ability to split down the middle, and for both halves to then attract to each now-unmated unit" then surely the most likely outcome would be for the two halves of the peptide chain to merely come back together again. So, the self-replicating molecule would (and we're dramatically oversimplifying here) need to start off with something like: AC CA CA DB DB BD CA CA AC AC DB BD And then, after a self-contained and straightforward process of self-replication, we end up with two: AC AC CA CA CA CA DB DB DB DB BD BD CA CA CA CA AC AC AC AC DB DB BD BD Given that such a self-replicating molecule must have existed if life just made itself, I can see no scope for copying error that will not impair or destroy the ability to self-replicate. There is only perfect and eternal cloning. And this heredity is a much more important feature of life than copying errors. So, tell me Lizzie, how can we realistically move beyond this first self-replicating molecule? Chris Doyle
ScottAndrews:
It’s misleading to imply that the big picture changes just because there are multiple potential targets.
I don't know how big the target space is, and neither do you, I guess. But when you follow Gpuccio's algorithm to determine dFSCI you have to calculate it. And my point is that if you assume there was just this "WEASEL" that had to be reached, you're underestimating the target space by a vast amount. Since this concentration on just one target is one of the weaknesses of Dawkins' WEASEL, I don't understand why ID people would make this error. At the same time, you guys often vastly overestimate the search space. Evolution searches in the vicinity of viable and reproducing organisms and therefore only "tests" an incredibly tiny amount of the total possible DNA sequences for example. And as long as there is a working path between a starting and an end point, evolution might have the resources to find it. Of course it might still be the case that you can in some way prove that certain developments biologists think have happened are extremely unlikely. But you cannot prove this by vastly overestimating the search space and at the same time vastly underestimating the potential target space to arrive at a magic ultra-small probability. Indium
The concept of a target is highly misleading anyway. Whatever biological structure you look at, it was never a target that had to be reached.
That makes it sound a bit like you can't spit without hitting a potential biological structure. The truth is that every known biological structure and anything we can imagine, when combined, amount to a really tiny target, like a pinhead on the moon. It's misleading to imply that the big picture changes just because there are multiple potential targets. ScottAndrews
Elizabeth Liddle:
Each of these then “mates” with the appropriate A, B, C and D monomers in the environment, resulting in two chains that are identical to the parent chain.
How long will the chain need to be? Why can't it just be one monomer long? Why do you think a chain of monomers like that would contain any information? How much information would the chain contain? How do you propose to measure the amount of information? Mung
3. The concept of a target is highly misleading anyway. Whatever biological structure you look at, it was never a target that had to be reached.
Neither was the watch that Paley stumbled upon while out walking on the heath.
But that's simply not true. Human designed objects are the result of striving for a target. Biological structures are not. It's a key difference. It's the most important difference between things that are designed and things that have the appearance of design. The chain of descent means that there is no individual that can be said to be an intermediate between species, because every individual is a transitional. There is no abstract form involved, just instances of living things that have descended continuously from other living things. Petrushka
3. The concept of a target is highly misleading anyway. Whatever biological structure you look at, it was never a target that had to be reached.
Neither was the watch that Paley stumbled upon while out walking on the heath. Mung
Even if a mutant could still self-replicate, if there is no real competition for resources (because the original strain flourishes along with the mutant strain), then why should the original strain die out?
Why didn't the most efficient self-replicator gobble up all the resources required for it to self-replicate? Mung
Mung: Yup, searches can do better than average. The problem comes in when the search is next to nil relative to the space and is looking for a needle in the haystack, as I showed for Indium above. The average search under those conditions has next to no chance of catching something that is deeply isolated. The NFL does not FORBID, but the circumstances create a practical impossibility, quite similar to how the statistical form of the 2nd law of thermodynamics does not forbid classical 2nd law violations, but the rarity in the space of possibilities locks it out for all practical purposes once you are above a reasonable threshold of system scale/complexity. And, there is no evidence -- Zachriel's little games notwithstanding, Indium -- that we have the sort of abundance that would make functional states TYPICAL of the config space. And, remember that starts at 500 - 1,000 bits worth of configs. GEM of TKI PS: Indium, sampling theory will tell us that a random or reasonably random sample will be representative of a space. Have you seen how the relative frequency patterns of symbols in codes are identified? By precisely the sort of sampling approach Durston et al took; even though someone out there actually composed and published a whole book in which the letter E never occurs. You are being selectively hyperskeptical. kairosfocus
You don't need to respond Lizzie. Carry on with Chris and gpuccio. Mung
Elizabeth Liddle:
It’s also the title of a pair of theorems by Wolpert and MacReady which don’t apply to evolutionary algorithms, and which Dembski tries to apply to evolutionary algorithms here...
Another source:
What is true, though, is that the NFL theorems, while perfectly applicable to all kinds of algorithms including the Darwinian evolutionary algorithms (with a possible exception for co-evolution), contrary to Dembski's assertions, do not in any way prohibit Darwinian evolution. The NFL theorems do not at all prevent evolutionary algorithms from outperforming a random sampling (or blind search) because these theorems are about performance averaged over all possible fitness functions. They say nothing about performance of different algorithms on specific fitness landscapes. In real-life situations, it is the performance on a specific landscape that counts and this is where evolutionary algorithms routinely outperform random searches and do so very efficiently, both when the processes are targeted (as in Dawkins's algorithm –see [8]) and when they are non-targeted (as Darwinian evolution is). here
Mung
It may be true that we cannot both be right; however it is not necessarily true that one of us is lying. Do try to remember that there is a difference between a mistake and a lie, Mung, it is quite important. But you'll have to wait for my response as I've got something I need to do urgently which might take a few days. See you later. Elizabeth Liddle
Elizabeth Liddle:
It’s also the title of a pair of theorems by Wolpert and MacReady which don’t apply to evolutionary algorithms, and which Dembski tries to apply to evolutionary algorithms here...
LOL! You're so amazing at times. Simply amazing. You just spout off without having a clue about what you're talking about. And if you will just say anything, and even claim to believe it to be true, though it is false, what am I supposed to call that? One of the first objections to Dembski was that while NFL theorems are applicable to EAs, evolution is not a search, therefore Dembski is wrong. [A non sequitur, at that.] The question is: is he wrong about EAs? H. Allen Orr:
The NFL theorems compare the efficiency of evolutionary algorithms; roughly speaking, they ask how often different search algorithms reach a target within some number of steps.
And then:
The problem with all this is so simple that I hate to bring it up. But here goes: Darwinism isn't trying to reach a prespecified target...Evolution isn't searching for anything and Darwinism is not therefore a search algorithm.
One of you has to be wrong. One of you is not telling the truth. Orr then goes on to say:
The proper conclusion is that evolutionary algorithms are flawed analogies for Darwinism.
You Darwinists really ought to get your stories straight. For not only are you wrong about Dembski and NFL, you've argued repeatedly here at UD that evolutionary algorithms are a great analogy for Darwinian evolution. You and H. Allen Orr cannot both be right. http://bostonreview.net/BR27.3/orr.html See more: http://www.iscid.org/boards/ubb-get_topic-f-6-t-000240.html Mung
How does Durston know he has exhausted all possible ways to generate the same function in the phenotype? How does Durston know which other functions could have evolved instead of the one he is looking at? Which other "targets" might there have been? Indium
By the way, I would say that Indium has really done me a favor, repeating essentially the objections I had anticipated in my post 96. It seems that I know my darwinists well! :) gpuccio
Indium: Why is it you "never" follow the links or pointers to the places where I do deal with biosystems, e.g. the citations on Durston et al and related calcs in the OP above? Why do you erect and knock over strawman arguments, in other words? I give cases from text to illustrate patterns that coded text faces whether it is biological or technological or human in origin. The relative rarity of meaningful or functional complex coded strings is a capital case in point, as compared to the space of all possible configs. Remember, for a conceptual 100 k base genome, we are looking at 4^100,000 = 9.98*10^60,205 possibilities. The Planck time resources of the observed cosmos -- 10^150 states -- could not sample more than 1 in 10^60,000+ of that. So, unless functional states are absolutely overwhelmingly abundant, to the point where they are practically falling off the tree into our hands, we have a zero-scope search for a needle-in-a-haystack problem. And, we know from the decades of observation of coded digital strings that meaningful strings are credibly going to be quite rare. I think you will see that if you start with say 3-letter clusters you can easily go like: rat -- cat -- bat -- mat -- eat -- ear -- car, etc. (I have given this or similar examples many times. The fatal errors in Zachriel's example are that he is plainly intelligently directing the process and is relying on short words where the function is easy to bridge; so the analogy breaks down very rapidly. When you have to do real-world system control, you are not going to get it to fit into 70 or 130 bytes or so, not to control a serious system, never mind one that is going to be self-replicating. I am sick of strawman arguments.) But just you try the same with 73 ASCII letter strings that need to be meaningful every time. Try changing the first 73 letters of this post into, say, the last 73, preserving meaningfulness every step of the way. In short, as string length -- a measure of complexity -- is increased, and functional specificity is required, the functional strings naturally become far more isolated. GEM of TKI kairosfocus
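(Editorial sketch, checking the two large numbers above in log10 terms: the count of 100,000-base sequences, and the fraction of that space that 10^150 states could sample.)
_________
import math

log10_configs = 100_000 * math.log10(4)   # log10 of 4^100,000 -> ~60,206
log10_states = 150                        # the quoted Planck-time state resource, 10^150

print(f"4^100,000 ~ 10^{log10_configs:.0f}")
print(f"maximum sampled fraction ~ 10^{log10_states - log10_configs:.0f}")
_________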
Indium: Briefly: 1) Again, look at the Durston paper. 2) The Durston method, applying the Shannon reduction of uncertainty to single amino acid positions in large and old protein families, is a reasonable method to approximate the target space, provided we assume that in those cases the functional space has been almost completely traversed by neutral evolution, which is a very reasonable assumption, supported by all existing data. IMO, the Durston method tends rather to overestimate the target space, and if you look at the estimated target spaces, they are very big. 3) The concept of a target is not misleading at all. We do observe functional proteins, thousands of different superfamilies of them, highly integrated in living beings. "Target" just means a specific function, the function that is needed in each specific context to implement a real novelty. That is what we have to explain. We see the results, and we don't know how they were reached. With a dumb theory reigning in the academic world, which attributes to random variation the generation of biological information, and to NS its expansion, we do need to calculate whether what is attributed to random variation is really within its reach. And it is not. Darwinists are not troubled by that, but sincere scientists certainly should be. 4) You say: "The total ratio of target space to search space is mostly irrelevant if there is a path of viable organisms from some starting point to the target." That's exactly the point. That path does not exist. It is a complete fairy tale, unsupported by logic and unsupported by facts. Can you show me a path to any of the basic protein superfamilies? Starting from whatever you like, and specifying the supposed naturally selected intermediates, each of which is within the range of a microevolutionary variation from the previous one, and each of which confers a reproductive advantage. I am waiting. gpuccio
KF: Obviously. Inferences by analogy are the best, and are the basis of all our knowledge. gpuccio
kf Why do you always talk about letters and not about real biological objects? Go ahead and demonstrate how gpuccio's dFSCI definition can be put to work. Or maybe you can share your thoughts regarding my questions? In any case, you can generate some quite interesting results with words... Word Mutagenation Indium
Indium: Kindly look at the original post, to see what is feasible and achieved. The analytical context leads to an empirically valid procedure, per standard techniques commonly used in info theory. Are you willing to argue that a typical at-random 73 or worse 143 ASCII character string will be reasonably likely to be a valid text in English? Similarly, when you look at the constraints to be met by protein sequences to fold and function in a key-lock fit context, it is quite reasonable that these will come from narrow and unrepresentative zones in AA sequence space. Why don't you show us a few cases of, say, 300 AA biofunctional proteins formed successfully through random AA chaining, or through converting one protein, one or a few AAs at a time, at random then filtered for function, into a completely different protein? (That is a good analogy to the job of converting, say, this paragraph at random steps filtered for function, into a completely different one.) GEM of TKI kairosfocus
GP: Analogy, only in the sense that inductions are rooted in analogies. The argument is strictly an inference to best explanation, on empirical evidence. An abduction in the sense of Peirce. GEM of TKI kairosfocus
gpuccio: You said
2) Compute as well as possible the target space for that function and the search space, and calculate the ratio, expressing it as -log P (in base 2)
I have a few questions/remarks: 1. Can you give an example calculation of the target and search space for a real biological object? 2. In your calculation, how do you know you have exhaustively described the target space? 3. The concept of a target is highly misleading anyway. Whatever biological structure you look at, it was never a target that had to be reached. 4. Evolution does not have to reach a specific target at once. The total ratio of target space to search space is mostly irrelevant if there is a path of viable organisms from some starting point to the target. Evolution never "searches" in the whole search space; it always just looks in a very small "shell" around the existing sphere of genomes (stretching high-order geometry here a bit, but anyway). Indium
Elizabeth: Now, briefly, the last step, and then I can rest. f) A final inference, by analogy, of a design process as the cause of dFSCI in living beings. That should be simple now. But please note: 1) The design inference is an inference, not a logical deduction. 2) It is an inference by analogy. As the design process by a conscious intelligent being is the only process connected, as far as we know, to the emergence of dFSCI, and as we observe dFSCI in abundance in biological beings, of which we have no definite observed experience of the origin, it is perfectly natural to hypothesize a process of design by a conscious intelligent being as the explanation for a very important part of reality that, otherwise, we can in no way explain. Inferences are not dogmas. Nobody must necessarily accept them. That is true for any scientific inference, that is, for all empirical science. But those who do not accept the design inference have the duty to try to really explain the origin of dFSCI in biological information. Dogmatic prejudices ("only humans are conscious intelligent beings, and there were no humans there", or "the design inference implies a god, and science cannot accept that", and so on) will not do. Nothing of that is true. The only thing the design inference implies is a conscious intelligent designer as the origin of biological information. Denying the possibility that conscious intelligent agents may be involved in the origin of something we observe and cannot explain differently is simply dogma. Conscious intelligent agents generate dFSCI all the time. Whoever is certain that humans are the only conscious intelligent agents in reality is simply expressing his own religion, not making science. gpuccio
Elizabeth: The following point is: e) An appreciation of the existence of vast quantities of objects exhibiting dFSCI in the biological world, and nowhere else (excluding designed objects). I will not go into detail about that now. We have in part already discussed that. Let's say that we in ID believe that there is already a lot of evidence that most basic biological information, and certainly all basic protein coding genes, and especially basic protein superfamilies, abundantly exhibit dFSCI. Darwinists do not agree. Confrontation on this point is certainly useful and will go on as new data come from research. The only "reasonable" objections, which you have already embraced, are IMO two: 1) Biological information does not exhibit dFSCI because the ID ideas about the target space are wrong, and the target space is much bigger than we in ID believe, either because the single functional islands are much bigger than we think, or because there are many more functional islands than we think, and many more naturally selectable functions than we think. 2) Biological information does not exhibit dFSCI because there is a specific necessity mechanism that can explain it, that is, the neo-darwinian mechanism: macroevolution can be deconstructed into microevolutionary steps, each of them visible to NS. Well, I believe that all these arguments are deeply flawed, and that there is really no empirical support in their favour. But, obviously, that is the field where healthy scientific confrontation should take place. If you are aware of other fundamental objections, please let me know. gpuccio
Elizabeth: So, the next point: d) An empirical appreciation of the correlation between dFSCI and design in all known cases. Here I would like to be very clear about where we stand. We have an explicit definition of dFSCI. You have objected that the value of complexity is usually very difficult to compute, and I agree. But that does not mean that the concept is not completely defined, or that we cannot attempt calculations, even approximate ones, in specific contexts. Now, we have defined dFSCI so that its presence can reasonably exclude a random origin of functional information, and at the same time a necessity explanation must not be available. At this point, dFSCI could well, in principle, not exist in the universe. But we know that it exists. We can easily observe it around us. A lot of objects in our world certainly exhibit dFSCI: practically all writings longer than one page, and practically all software programs, can easily be shown to exhibit dFSCI, even using an extreme threshold like the UPB. The simple fact is: all these kinds of objects are human artifacts, and all of them are designed (according to our initial definition). Please note that even for language and computer programs, the computation of the target space is difficult. And yet, by a simple approximation, we can easily become convinced that they certainly exhibit dFSCI. For instance, I once discussed dFSCI in relation to Hamlet, defining the function as the ability of a text of similar length to give a reader a complete understanding of the story, the characters, the meaning and themes of the drama, and so on. I believe that there cannot be any doubt that Hamlet exhibits dFSCI even in relation to the UPB. Hamlet is about 140,000 characters long. Taking the alphabet at 26 values, the search space is 26^140,000, that is, if I am not wrong, about 658,000 bits. So, unless you believe that about 2^657,500 texts of that length could fully convey the plot and meaning of Hamlet, then you have to admit that Hamlet exhibits dFSCI, tons of it. The same argument could be made for some simple computer program, let's say a simple spreadsheet working in Windows. Let's say its length is 1 Mbyte, more or less 8,000,000 bits. How many sequences do you believe will work as spreadsheets in Windows? So, we can well say that language and computer programs are objects that very easily exhibit dFSCI. So, at this point we can simply make an empirical evaluation of where dFSCI can be observed. Let's start with human artifacts. They are designed by definition (observed design). Well, do they all exhibit dFSCI? Absolutely not. Many of them are simple, even those in digital form. If I write the message "I am here", it is certainly functional (it transmits a specific meaning), but its maximum complexity (about 40 bits if considered as one functional sequence, just for simplicity) does not qualify it as dFSCI in any relevant context. So, designed things are of two kinds: simple and complex, and if we choose some specific threshold, we can try to separate the simple ones from the complex ones. What about non-designed objects? Well, I affirm here that no non-designed object found in nature, of which we can understand the origin well enough to be sure that it is not a human artifact, exhibits dFSCI, not even with the lower threshold of 150 bits I have proposed. With one exception, and only one: biological information. This point we can obviously discuss. I am ready to take into consideration any counter-example you may want to present. 
So, if we exclude biological information (which is exactly the object of our research), all things existing in the known universe (and which can be read as digital sequences) can be roughly categorized in three empirical classes: 1) They are designed (observed design) and they do exhibit dFSCI. 2) They are designed (observed design) and they do not exhibit dFSCI. 3) They are not designed (observed origin which does not imply a conscious intervention) and do not exhibit dFSCI. So we can empirically say that there is a correlation between the presence of dFSCI and designed things. If the threshold is high enough, the correlation will be as follows: a) All things exhibiting dFSCI are designed. No false positives. b) Many designed things do not exhibit dFSCI. Many false negatives. The absence of false positives, and the presence of a lot of false negatives, is the consequence of having chosen an extreme threshold (even at the 150 bit level). If we lowered the threshold, we would certainly have fewer false negatives, but we could start to observe false positives. Please note that the correlation between dFSCI and design can be empirically verified for human artifacts. We can well prepare a range of digital sequences, some of them designed and complex, others designed and simple, and others non-designed (generated in a random system) or derived from any natural, non-biological system. We know in advance which are designed and which are not. Then we ask some observer to assess dFSCI in them. I affirm that the observer can have as many false negatives as we can imagine, but if he rightly assesses dFSCI in some cases, none of them will be a false positive. They will all be designed sequences. gpuccio
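(Editorial sketch of the Hamlet estimate in the comment above: the search space for a 140,000-character text over a 26-letter alphabet, expressed in bits. Both figures are the round numbers used in the comment, not measurements of the actual play.)
_________
import math

length_chars = 140_000
alphabet_size = 26

search_space_bits = length_chars * math.log2(alphabet_size)
print(f"search space ~ {search_space_bits:,.0f} bits")   # ~658,000 bits
_________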
Elizabeth: I have been too busy the last two days! I have not checked the previous discussion, but first of all I would like to go on with the points I had outlined (I hate to leave things unfinished). So, to complete the point about complexity and the definition of dFSCI, I have to discuss the threshold of functional complexity. Well, in principle we could just compute the complexity, and say that a certain object has a digital Functionally Specified Information, for a certain function, of, say, a complexity amounting to 132 bits. That's perfectly correct. What it means is that you need 132 bits of specific information to code for that function. Usually, for the discussions related to design inference, it is useful to fix a threshold of complexity. Now, an important point is that the threshold is arbitrary, or rather conventional: it must be chosen so that, for functional information higher than that value, generation in a random system, in a specific context, is empirically impossible. It is important to emphasize that the threshold must be appropriate for the context. That's because the probability of a certain result emerging randomly depends not only on the absolute probability of the result (the functional complexity), but also on what Dembski calls the probabilistic resources. So, a certain result can have a probability of, say, 1:10^5, but if my system can try 10^8 times to get the result, it is almost certain that it will emerge. That's the real meaning of the threshold: it must be high enough that the result remains virtually impossible given the probabilistic resources of the system we are considering. Now, as you probably know, Dembski has often referred to the UPB (about 500 bits of complexity) as the level of specified complexity which guarantees that a result remains completely unlikely even given all the probabilistic resources available in the whole universe and in its whole span of existence. That's fine if we want to define an event as absolutely unlikely, but for our biological reasoning it's certainly too much. That's why I have suggested a lower threshold that gives reliable improbability in any biological system on our planet, given its span of existence, and referring to the replicators with the highest population number and the highest reproduction rate (bacteria). I have even tried to calculate a reasonable threshold for that. I don't remember now the details of those calculations, but if I remember the result well, I would suggest a biological threshold of complexity of about 150 bits (more or less 35 AAs). I can obviously accept any other value, if a better computation of the maximum probabilistic resources in a biological setting on our planet is done. So, if we want to affirm that dFSCI is present in an object, for a design inference in a biological context, we have to do the following things: 1) Define a function. 2) Compute as well as possible the target space for that function and the search space, and calculate the ratio, expressing it as -log P (in base 2). 3) Verify that the digital information is scarcely compressible, and that no explicit necessity algorithm is known that can act as an alternative to random generation. 4) If all the above is satisfied, and the specified complexity is higher than our threshold, we say that the object exhibits dFSCI. If our context is different, for instance cosmological, then it will probably be more appropriate to use the UPB as the threshold for assessing dFSCI. gpuccio
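(Editorial sketch of the "probabilistic resources" point above: an outcome with probability 1 in 10^5 becomes practically certain when the system can try 10^8 times.)
_________
p_single = 1e-5                  # the comment's example probability for one trial
n_trials = 10 ** 8               # the system's probabilistic resources

p_at_least_once = 1 - (1 - p_single) ** n_trials
print(f"P(at least one success) = {p_at_least_once:.6f}")   # ~1.000000
_________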
Ah - gap is only visible at preview. Seems to be OK in the post. Ignore my PS :) Elizabeth Liddle
Chris, I think we have different mental images here. I am visualising a peptide chain that can split in two, each half then attracting the monomers that will complete it, resulting in two chains where first there was one. Now the sequence of units in that chain may be completely irrelevant to its ability to self-replicate - what makes it a self-replicator is not a specific sequence, but its doubleness. Let's say A mates to C and B mates to D. And we have two chains: AC CA CA DB DB BD and CA CA AC AC DB BD Both are equally capable of self-replication, because what gives them their self-replicating property is their ability to split down the middle, and for both halves to then attract, to each now-unmated unit, the corresponding unit. So the first chain splits into: A C C D D B and C A A B B D Each of these then "mates" with the appropriate A, B, C and D monomers in the environment, resulting in two chains that are identical to the parent chain. But this self-replicative property does not derive from the sequence of units, but from the pairing properties of the units. So the second chain will be just as capable of self-replication as the first, and so will any daughter, no matter how many copying "errors" take place in the sequence, because the self-replicative properties are not derived from the sequence but from the doubleness of the chain! So as it stands, these peptides do not meet the minimum criterion for Darwinian evolution - there is no phenotypic consequence of the transmitted sequence. Got to go and collect some data now! See you later. Cheers Lizzie PS: there seems to be a stray gap before the last term in each of my sequences - I can't seem to get rid of it - please ignore it! Seems to be a replication error.... Elizabeth Liddle
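(Editorial sketch of the pairing scheme Elizabeth describes, A mating with C and B with D: split the double chain into single strands, let each strand attract complementary monomers, and note that both daughters match the parent whatever the sequence happens to be. The unit notation follows her example.)
_________
MATE = {"A": "C", "C": "A", "B": "D", "D": "B"}

def replicate(parent):
    # parent is a list of two-character mated units, e.g. ["AC", "CA", ...]
    left = [unit[0] for unit in parent]          # one single strand after the split
    right = [unit[1] for unit in parent]         # the other single strand
    d1 = [u + MATE[u] for u in left]             # each unmated unit attracts its mate
    d2 = [MATE[u] + u for u in right]
    return d1, d2

parent = ["AC", "CA", "CA", "DB", "DB", "BD"]
d1, d2 = replicate(parent)
print(d1 == parent, d2 == parent)                # True True: the sequence itself is irrelevant
_________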
But if the daughter molecules are only “more like the parent molecule than a randomly selected molecule would be” then why would the daughter molecules necessarily have the ability to self-replicate? Given that the first self-replicating molecule would have done virtually nothing else but self-replicate, any ‘copying errors’ in the daughters would almost certainly lead to the impairment or loss of the ability to self-replicate. How can it be otherwise? You must also agree that perfect cloning of the parent molecule is all that we need for perfect, eternal self-replication. So where does the variance come into it? And if variance does come into it, how can you be so sure that it wouldn’t lead to impairment or loss of self-replication: particularly given the fact that the first self-replicating molecule is doing little else apart from self-replicating in the first place! I can certainly see how you can ‘imagine’ overcoming these insurmountable obstacles, but we need to be realistic otherwise there is no truth value to be found here. Chris Doyle
NFL is accessible at Google Books, in a good slice preview. kairosfocus
Chris:
Hiya Lizzie, To my mind, the first successful self-replicating molecule needs to be: 1. Stable enough at all times to survive for long enough to self-replicate correctly. That means protected from detrimental chemical reactions.
Well, I'd say that it simply needs to replicate with sufficient fidelity that the daughter molecules are more like the parent molecule than a randomly selected molecule would be. Then we can say that some minimal "self-replication" has occurred.
2. Involve a form of self-replication that perfectly cloned the original. That means true self-replication or else the ability to self-replicate would be impaired or lost within a few generations.
No, I don't think that is a necessary requirement for natural selection to occur. What is necessary, as I said above, is that the sequence itself has to affect replication probability. This is unlikely, I think, unless the molecules are enclosed in some way (although I could be wrong). In other words, the genotype needs to affect the self-replicative capacity of a phenotype. Until we have a phenotype, as well as a genotype, we don't have the necessary conditions for Darwinian evolution. Perfect fidelity of self-replication is not a condition however - indeed Darwinian evolution presupposes self-replication with variance, and that variance has to include phenotypic effects that affect the probability of further self-replication. Elizabeth Liddle
Hiya Lizzie, To my mind, the first successful self-replicating molecule needs to be: 1. Stable enough at all times to survive for long enough to self-replicate correctly. That means protected from detrimental chemical reactions. 2. Involve a form of self-replication that perfectly cloned the original. That means true self-replication or else the ability to self-replicate would be impaired or lost within a few generations. Do you agree that these are necessary conditions? Chris Doyle
Well, there's plenty of scope for copying error, but I think what you may be getting at is: "what scope is there for copying error that will make a difference to the ability to self-replicate?" If a double strand of a sequence of mating base pairs splits, and each single strand attracts loose monomers to its now-unmated bases, so that we have two identical strands in place of the original, then it may well be that the sequence itself is sometimes disrupted (perhaps the end of the strand detaches itself; perhaps it joins to another floating strand), but the sequence may not be important to the self-replicating properties; the critical property may simply be the double strandedness, and the mating properties of the bases. And your question would then be: how could it ever improve its self-replicating properties? Well, alone, it probably couldn't. But recall that selection operates at the level of the "phenotype", not the level of the genome. What we have here is a phenotype-free genome, essentially. But if those self-replicating peptides find themselves captured by a lipid vesicle, moreover one that tends to grow to a critical size then subdivide, then we have a potential "phenotype". If the sequence of bases in the peptide, for instance, is such that, say, the odd rRNA molecule tends to form with properties that, for example, increase the critical size at which the vesicle divides, allowing more replications of the peptide, and a greater chance that the daughter vesicles will both contain copies of the peptide, then we do have a selectable phenotype - a population of vesicles in which some grow larger before subdividing than others, and therefore stand a greater probability of forming two daughter vesicles containing matching daughter peptides. Does that make more sense? Elizabeth Liddle
But hang on, Lizzie. We're just talking about the first self-replicating molecule. This must have had the ability to replicate (along with its clone descendants) without a "lipid vesicle". And the only way the first ever self-replicating molecule could self-replicate was the way it self-replicated: that's axiomatic. Where is the scope for copying error? How could copying error not result in impairment or loss of the ability to self-replicate given that that is the only method available to the first self-replicating molecule? Chris Doyle
Because self-replication is the only capability of the first self-replicating molecule and, as far as we can tell, there is only one way to self-replicate.
Well, firstly, I'd say that there are lots of ways that a thing, or assemblage of things, might self-replicate. Secondly, I'd say that even if they all did it the same way, they wouldn't necessarily be identical in terms of their likelihood of surviving to do it time and time again. For instance a self-replicating peptide loose in the soup might be far more vulnerable to disintegration than one that found itself inside a lipid vesicle. And if that vesicle itself tended to subdivide (as vesicles do) you've now got a peptide-vesicle combo that reproduces more efficiently than a peptide alone. Elizabeth Liddle
Because self-replication is the only capability of the first self-replicating molecule and, as far as we can tell, there is only one way to self-replicate. Chris Doyle
Chris:
Any deviation from the clone of this first self-replicant is only going to impair or remove the ability to self-replicate, surely?
Why? Elizabeth Liddle
Hiya Lizzie, I'm glad that RSS is proving useful. I'm wary of rushing into paper chemistry when some big questions remain unanswered and we're completely lacking an empirical basis to proceed. Let's come back to that first self-replicating molecule: the descendants of which are clones. Now, I'm still not clear on this point: given that the urge to reproduce has been satisfied by this first self-replicating molecule, which must have been able to survive long enough to replicate itself (as must its descendants) why should it have evolved at all? The simpler the better, surely? On what grounds can we believe that the first self-replicating molecule actually had any scope for copying error? Again, eternal, perfect cloning is sufficient. Any deviation from the clone of this first self-replicant is only going to impair or remove the ability to self-replicate, surely? Even if a mutant could still self-replicate, if there is no real competition for resources (because the original strain flourishes along with the mutant strain), then why should the original strain die out? Chris Doyle
Chris:
Don’t worry, I’ll remind you of the unanswered posts if the conversation returns to those territories!
Thanks! And I'm finding the RSS feed a great boon.
Given that we’ve already got successful survival of clones (ie. of the first self-replicating molecule) then why would mutants arise and then be selected? The monkey god is being served nicely, thank-you very much. Given that the most sophisticated part of a self-replicating molecule is the ability to self-replicate, surely any mutation of that molecule will result in the impairment or even loss of the ability to self-replicate? Whatever resources the first self-replicating molecule depended on (what were they?) must have been absolutely abundant in order for it to arise in the first place. If they were scarce, then it would never have lasted long enough to evolve. So, you have not yet established if there was competition for resources at all. Also, Lenski’s LTEE showed that when a previously untapped resource enters into the equation then both strains thrive (because of the reduced competition for the original resource), not just the new one.
Well, that last point is a good answer to your question about why diversity occurs, I would have thought. As for your other points - yes, I would agree that for the limits of evolution to be expanded, there has to be room, as it were, for improvement. However, there is (as you point out, wrt Lenski's evidence) more than one dimension along which things can be improved. So "the ability to self-replicate" isn't just the physical ability to do so (which in some ways is quite simple, once you have an entity with two halves that can split) but the ability to survive long enough to do so again and again. And that may depend on the specific properties of the self-replicator. Which is why, I think, OOL researchers have looked at the combination of self-replicating peptides AND lipid vesicles that could enclose them. If I ever get round to my simulation, that's what I hope will happen. Then, the longevity and self-replication capacity of the whole combo - vesicle+peptide - has more chance of depending on the specific properties of the peptide.
So, let’s not go into just-so stories until we have an empirical foundation to build them on. Especially when there’s an enormous difference between “swallowing something” and “metabolising something”.
Sure. But it's also good to get back to basics - the reason a protobiont needs "resources" in the first place is to build copies of itself. It may have all the self-replicating power you could shake a stick at, but it can't put it into practice unless it has access to the bits it needs to build its replica with. The most fundamental thing about organisms is that you end up with more of the stuff than you started with. That stuff has to come from somewhere. And in the early days, simply "swallowing" it - useful monomers, maybe bits of old organisms, or just more bits of the original "soup" - may be all it needed - stuff to be chemically attracted to the naked bases of a split peptide to make two whole ones. That's what I hoped would happen in my sim (and still do!) - from a "primordial soup" of monomers with certain "chemical" properties, I hope that vesicles will form that will enclose polymers that will tend to replicate themselves, given a rich enough supply of monomers; then the whole thing will get so big it divides in two, leaving us with two vesicles containing copies of the parent peptides. And, after a while, I hope that vesicles with peptides that for some reason help the vesicle to maintain its integrity for longer, maybe by acting as a template for the generation of some kind of "protein" liner product from more floating stuff, will produce more copies of themselves and will come to dominate the population, leaving me with a population of self-replicating peptide-containing vesicles in which the particular peptides have the best vesicle-preserving properties. It'll be a challenge though :) Elizabeth Liddle
Hiya Lizzie, Don’t worry, I’ll remind you of the unanswered posts if the conversation returns to those territories! Given that we’ve already got successful survival of clones (ie. of the first self-replicating molecule) then why would mutants arise and then be selected? The monkey god is being served nicely, thank-you very much. Given that the most sophisticated part of a self-replicating molecule is the ability to self-replicate, surely any mutation of that molecule will result in the impairment or even loss of the ability to self-replicate? Whatever resources the first self-replicating molecule depended on (what were they?) must have been absolutely abundant in order for it to arise in the first place. If they were scarce, then it would never have lasted long enough to evolve. So, you have not yet established if there was competition for resources at all. Also, Lenski’s LTEE showed that when a previously untapped resource enters into the equation then both strains thrive (because of the reduced competition for the original resource), not just the new one. So, let’s not go into just-so stories until we have an empirical foundation to build them on. Especially when there’s an enormous difference between “swallowing something” and “metabolising something”. Chris Doyle
Chris: (oops, I owe you another response as well....)
Good Afternoon Lizzie, If the monkey god (wow, things are getting weird around here!) only cares about monkeys typing, then, coming back to the real world for a minute, why did we ever evolve beyond the first self-replicating molecule in the first place (or even evolve beyond unicellular organisms)? Clearly the first self-replicating molecule (and unicellular organisms) can just go on typing forever and ever and ever. Things which complicate that (such as multicellularity and sexual reproduction) should have been weeded out by the monkey god from the very beginning.
Good question - let's abandon the monkeys though! Firstly, the unicellular organisms will tend to form a family tree - a mutant that survives will start a lineage of descendants bearing that mutation. So after a short time there will be many strains of unicellular forms all competing for resources. Any strain (any population of one lineage) that finds itself able to exploit a new resource will do better than strains competing for the same resource. So we have the beginnings of, if not technically "species" (because they do not interbreed), at least of differentiated populations, adapted to utilise different resources. Now, let's say that one population (this is purely a just-so story, of course, I just made it up, but it might work) utilises another population as a resource. Eats it. And that other population mutates away, until a sub-strain happens to emerge where the offspring tend to stick together, making them harder for the predator strain to eat. The outer ones will be picked off, of course, but the inner ones will be protected, and as long as they keep breeding as fast as the outer ones are being picked off, the whole colony will survive. Now, let's say one of these colonies comes up with a mutation that means that some protein that aids utilisation of some resource is only made if the cell in question is surrounded on all sides by other cells, enabling it to absorb enough of some important enzyme from its neighbours. Now, such a mutation would obviously be highly deleterious in a non-colony; however, in a colony, it's only going to harm the cells near the edge. And that won't matter as long as the cells in the middle keep dividing as fast as the ones at the edge keep dying off. However, an unexpected advantage shows up: the dead cells at the edge of the colony prove to be inedible to the predator. So the edge of the colony now not only protects the inner members by being sacrificial victims, giving them time to replicate before the predator population reaches them, but actually keeps the predator population off. The colony has, in fact, become a single organism, with two organs - a skin, consisting of dead cells, and a middle, consisting of live ones. And these two organs are differentiated by a primitive kind of cell signalling - the leakage of enzymes from one cell to another. If enough enzymes get through, the cell goes on metabolising and dividing; if not, the cell dies and protects the colony, and this happens only at the edge, or, as we can now call it, "skin". This enables the colony to keep growing, indefinitely, "skin" forming whenever the inside grows so big it tears the outside. However, there is probably an upper limit to the size - beyond a certain size, the thing will tend to break up, just because of basic structural forces, including bending moments. At which point, the pieces rapidly grow a "skin", and continue to grow, and self-replicate in turn. Now, selection may kick in at the level of the colony. Colonies that happen to have mutations in which other proteins are dependent on the concentration of enzymes leaching across from neighbouring cells may start to differentiate still more - perhaps cavities appear in the structure, to increase the exposure of the centre to nutrients. We now have the beginning of something like a sponge. And so on. OK, I just made all that up. But then that's the beginning of the scientific process: Just So story -> speculation -> explanatory theory -> hypothesis -> prediction -> new data conforms to/violates prediction.
I guess my point is simply that speculation is at least possible, and that there are actual theories out there, including sponges. As for "why are there still sponges" - well I guess some other colony found a new niche :) Be back later, gotta run. Elizabeth Liddle
Joseph: That we don't know yet (although we have some ideas). But first, let's get clear: given the monkeys, and the typewriters etc., i.e. given the minimal entity capable of replication with variance in the ability to self-replicate, people - including Shakespeare - are one of many possible outcomes. Then (or even simultaneously) we can discuss where that first entity came from. Elizabeth Liddle
Where did the monkeys get the typewriters, the ribbons and the paper? Joseph
Good Afternoon Lizzie, If the monkey god (wow, things are getting weird around here!) only cares about monkeys typing, then, coming back to the real world for a minute, why did we ever evolve beyond the first self-replicating molecule in the first place (or even evolve beyond unicellular organisms)? Clearly the first self-replicating molecule (and unicellular organisms) can just go on typing forever and ever and ever. Things which complicate that (such as multicellularity and sexual reproduction) should have been weeded out by the monkey god from the very beginning. Chris Doyle
No, I ran out of coffee. No problem though: the monkey god does not have to know Shakespeare. You've underestimated the target space again. All the monkey god cares about is that the monkeys go on typing, and anything they type that increases the probability that they will go on typing will, of course, be typed more often. Differential repetition, if you like. As you say - that's what the monkey god is. Elizabeth Liddle
Fine. But that's what enables the monkeys to type Shakespeare.
Have you had your morning caffeine yet? The monkeys don't type Shakespeare. They type whatever they type, mostly nonsense. But how is it that the monkey god knows Shakespeare? Isn't that horribly non-Darwinian? Mung
Mung:
No Free Lunch is a book written by Dembski which you haven’t even read.
It's also the title of a pair of theorems by Wolpert and Macready which don't apply to evolutionary algorithms, and which Dembski tries to apply to evolutionary algorithms here: http://www.designinference.com/documents/2005.03.Searching_Large_Spaces.pdf defending his application by appeal to what he calls a "no free lunch regress". His defense is faulty. Elizabeth Liddle
Fine. But that's what enables the monkeys to type Shakespeare. It's The Whole Point. As you rightly point out. Elizabeth Liddle
Elizabeth Liddle:
Darwinian theory is not a monkeys-at-keyboard theory.
Yes, it is. It just adds a monkey god to determine which sequences typed by the monkeys ought to be saved and which should be discarded. Mung
This is what is wrong with Dembski’s application of No Free Lunch in fact, but we needn’t worry about that here.
No Free Lunch is a book written by Dembski which you haven't even read. Mung
Elizabeth: I would like anyway to go on with the remaining parts of the procedure. Maybe later, or tomorrow. gpuccio
Elizabeth: I must say that you follow my reasoning very well. Your comments are relevant, and good. I don't know if I have the time to go on for much today. I will try to make some clarifications which are simple enough: You say: "Ah, OK. You actually want to use compressibility to exclude "Necessity" - simple algorithms. That's sort of the opposite of Dembski" Yes, I know. And that is exactly one of the main differences. That is, mainly, because I am interested in a direct application of the concept to a real biological context, rather than in a general logical definition of CSI. So, if you agree, we can stick to my concept, for now. I agree with you that the computation of the target space is the most difficult point. And I agree that it is a still unsolved point. We have already discussed that in part. For me, the Durston method remains at present the best approximation available for the target space. I do believe, for various reasons, that it is a good approximation, but I agree that it rests on some assumptions (very reasonable ones, anyway). One of them is that existing protein families have, more or less, in the course of evolution, traversed the target space, if not completely, in great part. That is consistent with the other paper about the protein big bang model. Remember that we need not a precise value, but a reasonable order of magnitude. Research on the size of the protein target space must go on, and is going on, on both sides. The fact that a measure is difficult does not make it impossible. But, in principle, the target space is measurable, and the concept of dFSCI is perfectly workable. The "fixed length" issue is not really important, IMO. It is just a way to fix the computation to a tractable model. Shorter sequences with the same function can exist in some cases, but in general we have no reason to believe that they change the ratio much, especially considering that for longer sequences the search is almost always less favourable. Anyway, the dFSCI is always computed only for a specific function. Your objection of "all possible functions" that evolution can access, instead, is a more general one, and IMO scarcely relevant. I have already answered it once with you, and I can do that again, if you want. gpuccio
PS: Here's my FSCI test. Generate true and flat random binary digits, feeding them into an ASCII text reader. Compare against, say, the Gutenberg library for code strings. Find out how long a valid string you are going to get. So far, the results are up to 20 - 24 characters, picking from a space of 10^50 or so. We are looking at spaces of 10^150 and more. kairosfocus
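A rough sketch of that test in Python, with a tiny built-in word list standing in for the Gutenberg corpus (that substitution, and the use of random lowercase letters plus spaces instead of raw binary fed to an ASCII reader, are simplifying assumptions of this illustration):

```python
import random
import string

# Stand-in for the proposed Gutenberg corpus; a real run would load a large word list.
ENGLISH_WORDS = {"the", "and", "of", "to", "in", "it", "is", "was", "he", "that"}

def longest_valid_run(text, words=ENGLISH_WORDS):
    """Length in characters of the longest run of consecutive tokens that are all valid words."""
    best = run = 0
    for token in text.split():
        run = run + len(token) + 1 if token in words else 0
        best = max(best, run)
    return best

random.seed(0)
noise = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(100_000))
print(longest_valid_run(noise))  # random text yields only very short valid runs
```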
Dr Liddle: In short, you don't trust the search of life on the ground to adequately sample the space successfully and empirically! (Remember my darts dropped on charts example? The typical pattern of a population will usually show itself soon enough!) And, the table is hardly in isolation from the paper and its context in the wider discussion. Have you worked through it? What are your conclusions? On what grounds? Mine are that they have a reasonable method to estimate the information per AA in the protein families, reduced from the 4.32 bits per symbol that a flat random distribution would give, based on actual patterns of usage; similar to methods that a Yockey or the like, or even a Shannon, might use; or your intelligent code cracker. There may be outliers out there, but if you were to go into the business of synthesising, this is what I would lay in to know how much of what to stock up on. Just as printers did in the old days of lead type -- Morse, I gather, sent an assistant over to a printer's to make up a table of frequencies of use of typical letters. BTW, that is how comms theory generally works in assigning bit values per symbol in typical message patterns on traffic analysis. Objections as to what is logically possible don't make much difference to what is seen as the typical pattern. And that pattern is one of islands of function, falling into fold domains. GEM of TKI kairosfocus
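For the frequency-based bit values mentioned here, a sketch of the kind of per-site calculation involved (this illustrates the general Shannon approach only, and is not a reproduction of Durston et al.'s exact method; the alignment column is made up):

```python
import math
from collections import Counter

def functional_bits_per_site(column, alphabet_size=20):
    """Drop from the flat-random log2(20) = 4.32 bits per amino acid to the
    Shannon entropy of the residues actually observed at one aligned position."""
    counts = Counter(column)
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return math.log2(alphabet_size) - entropy

# A made-up aligned column from a protein family: strongly but not fully conserved.
print(round(functional_bits_per_site(list("LLLLLLLLIM")), 2))  # about 3.4 of the maximum 4.32 bits
```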
Dr Liddle: Remember, the life function problem starts with OOL. The relevant functional polymers would have had to be structured on the thermodynamics of chaining reactions, which ARE a random walk driven challenge, i.e. monkeys at keyboards are a good illustration. Then, to create embryologically feasible new body plans, we have to go to changes that are complex -- 10 - 100+ million bits, with no intelligent guidance to structure proteins to form cell types and tissues, or regulatory circuits etc. to specify structures to make up complex organisms. The recent Marks-Dembski work has shown that a blind search will on average do at most as well as chance based random walks rewarded by trial and error. So there is method to the madness. GEM of TKI PS: In short, to get to the concepts we have to use ostensive definitions, and the operationalisation, modelling, application and quantification follow from the conceptualisation. Just as it took from the 1660s to the 1960s to find a way to mathematicise infinitesimals coherently, by creating an extension to the reals, the hyper-reals. I suspect my learning of calculus would have been eased by that, but that would have been cutting edge at the time. kairosfocus
Yes, kf, and I looked at it. But that was simply a table of extant family members. It doesn't tell us a) how many possible other family members there might be (combos that would result in an equally viable protein), nor does it tell us how many other genes, even, might achieve the same phenotypic function. So it must be an underestimate of the target space. So the cited table does not tell us that the proteins are "deeply isolated islands". It tells us nothing about how isolated they are. It just tells us that in general there are a lot of ways of producing a functional protein, and that some of them are observed doing so. Elizabeth Liddle
Dr Liddle: Over some months you have been repeatedly directed to Durston et al and their table of 35 protein families, joined to the metric they developed. They have empirically identified, per study, the range of values for which aligned proteins will still function. So, life itself has worked out the reasonable range. That is why I used their table. And, as proteins are expressed from DNA etc., this is an answer on the reasonable range that works for DNA. More broadly, we know that proteins are restricted by folding and functional requirements, and fall in fold domains, which are deeply isolated in sequence space. Some positions are fairly non-critical, but others are just the opposite. All in all, we end up with deeply isolated islands of function, i.e. an arbitrary sequence of AAs will not typically be a functional protein. GEM of TKI kairosfocus
Oh, and of course operational definitions are derivative. They have to be, or they would be useless. We start with a conceptual hypothesis, then we operationalise it so that we have a way of testing whether our conceptual hypothesis is supported. It may be derivative but that does not mean we don't need it - it's essential! Elizabeth Liddle
But kf: no-one is proposing monkeys-at-keyboards. This is a really, really fundamental point. Darwinian theory is not a monkeys-at-keyboard theory. It assumes that near-neutral and slightly beneficial combinations accumulate, while deleterious ones, relatively as well as absolutely, are purged. This hugely affects the search space, and thus your calculations. And that's before we tackle the target space itself, which is, I suggest, far larger than you are calculating. Elizabeth Liddle
Dr Liddle: Operational definitions are at best derivative. If one makes the error that the logical positivists did, one will fall into self-referential incoherence. Ostensive definitions are the first ones we apply to rough out concepts, by key example and material family resemblance. For instance, this is the only definition we can make of life, the general subject matter. Does that mean that absent an operational definition, life is meaningless or unobservable and untestable? Plainly not, or biology as a science collapses. In any case, I have given you an actual quantification of dFSCI, above; one that implies a process of observation and measurement that applies well known techniques in engineering relevant to information systems and communication systems. Processes that have been in use, literally, for decades, and are as familiar as the measure of information in files in your PC. The fake Internet personality MG was trying to make a pretended mountain out of a molehill, to advance a rhetorical agenda. He -- most likely -- was answered repeatedly, but refused any and all answers. Please do not go down that road. Or if you do insist on another loop around a wearing path, kindly first look at the OP above. You will find there all the answer a reasonable person needs on what CSI is, and FSCI as the key part of it, and how it can be reduced to a mathematically based measure, then applied to even biological systems. There are a great many now barking out the mantra that CSI/FSCI is ill defined and meaningless, but they do so in the teeth of plain and easily accessible evidence. Just as is the case with those who are still trotting out the calumny that Design theory is merely repackaged creationism. GEM of TKI kairosfocus
OK, gpuccio: So, complexity - the nub of the problem:
Well, we are dealing with digital strings, so we will define complexity, in general, as the probability of that sequence versus the total number of possible sequences of the same length. For simplicity, we can express that in bits, like in Shannon's theory, by taking the negative base 2 log.
OK.
So, each specific sequence in binary language has a generic complexity of 1 : 2^length.
Right. So, for a DNA strand, it will be 1:4^length, right?
But we are interested in the functional complexity, that is the probability of any string of that length which still expresses the function, as defined.
So we could be talking about a gene, with a known function?
So we have to compute, or at least approximate, the "target space", the functional space: the total number of sequences of that length which still express the function, as defined.
Well, I have a question here: how do we find out how many sequences of that length will still express the target function? And why that length? We know that genes can vary in length and still perform their function - either express the same protein or an equally functional one. We often know how many alleles (variants of a gene) are actually found in a population, but how do we know what the total possible number of variants are?
The ratio of the target space to the search space, expressed in bits, is defined as the functional complexity of that string for that function, with the definition and assessment method we have taken into consideration.
Well, I still don't know how you are defining the target space.
In reality, we have to exclude any known algorithm that can generate the functional string in a simpler way. That means, IOWs, that the string must be scarcely compressible, at least for what we know. This point is important, and we can discuss it in more detail.
Well, as you have defined your target space in terms of the number of sequences [of the same length] that will serve that function (I put "of the same length" in square brackets because it seems to me that is an invalid constraint), then why do we also need to worry about "compressibility"? We have our specification right there. On the other hand, compressibility may be relevant for identifying our target space (because right now I don't know how to get a number on your target space).
In general, protein sequences are considered as scarcely compressible, and for the moment that will do. If any explicit necessity mechanism can be shown to generate the string, even in part, the evaluation of complexity must be done again, limiting the calculation to the functional complexity that cannot be explained by the known necessity mechanism.
Ah, OK. You actually want to use compressibility to exclude "Necessity" - simple algorithms. That's sort of the opposite of Dembski :) But yes, if an algorithm can produce the sequence, then I agree that we can exclude Design (if that is what you are saying). But I then think that the evolutionary algorithm can probably produce your pattern :) However, I'm bothered by your target space issue. Ironically we are in danger of an inverse WEASEL here: of defining complexity in terms of too small a target. Firstly, it is not clear to me how to compute the target space for a single protein, because we'd still have to model a probability distribution for introns. Then we don't know how many protein variants would be equally functional, val/met substitutions for instance. Nor do we know whether a quite different protein might do the equivalent job in a different way, or whether an entire set of proteins, expressed at different times, might be equally effective in "solving" the problem that the protein in question forms a part-solution to. And this gets to the heart of the evolutionary story - that Darwinian evolution makes no prediction about what "solutions" a population will find to survive in a particular environment - all it predicts is that adaptation will tend to occur, not what form that adaptation will take. So the "target space" is absolutely critical. And so indeed is the "search space", which is not the kind of search you get by rolling four-sided dice N times, where N equals the length of a candidate sequence, and repeating the process until you hit a useful protein. This is what is wrong with Dembski's application of No Free Lunch in fact, but we needn't worry about that here. What concerns us more is how many possible sequences offer a slight phenotypic advantage of whatever kind, how many have a near neutral effect, and of those, ditto, and of those, ditto, and so on. So my concern is that you have a) underestimated the target space and b) overestimated the search space. Shall we try your claim out with some real data and see if I'm right? Elizabeth Liddle
KF: thank you for your wonderful and more technical contribution. I am not a mathematician, so I am trying to stay as simple as possible. I have not yet discussed the threshold for a biological context, but as you know I usually put it at a much lower level of complexity. That will be the subject of my next post. gpuccio
Elizabeth:
Well, I mean that the patterns deemed to have CSI (e.g. the patterns in living things) can be created by Chance and Necessity.
Yet that has never been observed. And there isn't any evidence that genetic accidents can accumulate in such a way as to give rise to new, useful and functional multi-part systems. Joseph
GP: Here's how I build on WD. Conceive of an observed event E, for specificity a string of digital elements (as every other data structure can be broken into connected organised strings, and systems that are not digital can be captured as digital reps). Now, define a set -- a separately definable collection -- T, where the E's of relevance to our interest come from T. Further, T is itself a narrow and UN-representative set in a space of possibilities W. That is, if you were to take small samples of W, you would be unlikely to end up in T, a special zone. Now, let the elements of T store at least 500 bits of info, info that has to meet certain specifications to be in T. (E.g. T for this thread would be posts longer than 73 ASCII characters in English and contextually responsive to the theme of the thread.) Now, the zone of possibilities for at least 500 bits is vast. Indeed, at the 500 bit limit, we are looking at 3 * 10^150 possibilities. That is, on the usual estimates, 10^48 times the number of Planck time quantum states of the atoms in our solar system since its birth. No random walk sample from W on the gamut of the 10^57 or so atoms in our solar system would be reasonably likely to land in T by chance. T, the island of function, is deeply isolated in and unrepresentative of the possibilities W. The typical at-random 500 bit string would be 73 characters of garbled rubbish. And yet, each participant, in a matter of a few minutes at most, well within the resources of the solar system [our bodies are about 10^27 atoms], pounded out 10-word strings. Monkeys at keyboards could not reasonably do the same in anything like a reasonable scope of resources. A whole internet is there to back up the point. We have a working definition and rationale for the metric in the OP: Chi_500 = I*S - 500, in bits beyond the solar system threshold, with I based on I = -log p, or extensions or applications thereof, and S a dummy variable that goes to 1 if Es can be spotted in a defined T in a much wider W. We can go up to 1,000 bits easily, if you need an observed cosmos threshold. Remember 10^30 PTQSs for the fastest chemical reactions. GEM of TKI kairosfocus
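Taking the definitions in this comment at face value, Chi_500 is straightforward to compute once I (in bits) and the dummy variable S are in hand. A minimal sketch, with purely illustrative inputs:

```python
def chi_500(info_bits, in_specified_zone, threshold=500):
    """Chi_500 = I*S - 500: information I in bits, S = 1 when the observed event E
    falls in a separately definable zone T within a much larger space W (else 0),
    minus the 500-bit solar-system threshold."""
    s = 1 if in_specified_zone else 0
    return info_bits * s - threshold

# A 73-character ASCII string carries roughly 73 * 7 = 511 bits.
print(chi_500(511, in_specified_zone=True))    # 11 bits beyond the threshold
print(chi_500(511, in_specified_zone=False))   # -500: without specification, no design inference
```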
Elizabeth: I agree with your definition of an operational definition. gpuccio
kf: I don't know what an "ostensive" definition is, I'm afraid, but I think that operational definitions are vital if a claim is to be put to the test. Indeed, they are vital by definition! Wiki has a good definition at present:
An operational definition defines something (e.g. a variable, term, or object) in terms of the specific process or set of validation tests used to determine its presence and quantity. That is, one defines something in terms of the operations that count as measuring it.
If the claim is that a certain kind of pattern (a pattern with certain properties) can only be created by Design, then it is important to have a set of validation tests in order to "determine its presence" if the claim is to be tested. If I produce a simulation, where the only inputs are Chance and a set of Rules of Necessity, then if I end up with a candidate pattern, we need to be able to determine the presence or otherwise of the claimed Design signature. This is what an operational definition is. Elizabeth Liddle
PS: by consensus and experience, we are intelligent, conscious, purposeful and designing. The relevant set is non empty and by material family resemblance other cases would be recognised; as we moved from earth's moon to those first four of Jupiter and onwards. kairosfocus
Elizabeth at 49: OK, more or less I would say it is correct. Go on. gpuccio
Elizabeth: So, complexity. Well, we are dealing with digital strings, so we will define complexity, in general, as the probability of that sequence versus the total number of possible sequences of the same length. For simplicity, we can express that in bits, like in Shannon's theory, by taking the negative base 2 log. So, each specific sequence in binary language has a generic complexity of 1 : 2^length. But we are interested in the functional complexity, that is the probability of any string of that length which still expresses the function, as defined. So we have to compute, or at least approximate, the "target space", the functional space: the total number of sequences of that length which still express the function, as defined. The ratio of the target space to the search space, expressed in bits, is defined as the functional complexity of that string for that function, with the definition and assessment method we have taken into consideration. In reality, we have to exclude any known algorithm that can generate the functional string in a simpler way. That means, IOWs, that the string must be scarcely compressible, at least for what we know. This point is important, and we can discuss it in more detail. In general, protein sequences are considered as scarcely compressible, and for the moment that will do. If any explicit necessity mechanism can be shown to generate the string, even in part, the evaluation of complexity must be done again, limiting the calculation to the functional complexity that cannot be explained by the known necessity mechanism. gpuccio
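gpuccio's verbal definition can be written compactly as follows (the notation here is mine, not his):

```latex
% Functional complexity of a digital string of length L, for a defined function f:
%   T_f : the target space, all length-L sequences that still express f
%   S   : the search space, all length-L sequences over an alphabet of size a
\mathrm{FC}_f \;=\; -\log_2 \frac{|T_f|}{|S|}, \qquad |S| = a^{L}
% with a = 2 for binary strings, 4 for DNA, 20 for proteins.
```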
Dr Liddle: On definitions of CSI cf OP and onward links to April 18 and May 10 threads. Do not overlook provided examples for families of proteins. BTW, ostensive definitions are the most relevant, not so much, operational or precising denotative ones. GEM of TKI kairosfocus
Hang on, gpuccio, we really do need to go step by step. Let me try to summarise: Step 1: A conscious being is a being that many human beings agree is conscious. Step 2: Design is the purposeful imposition of form on an object by a conscious being. Step 3: digital Functionally Specified Complex Information (dFSCI) is: 1. Digital: can be read by an observer as a series of digital values e.g. DNA. 2. Functionally specified: e.g. has a function that can be described by a conscious observer (e.g. us). 3. Complexity: tba. Can you let me know if any of that is incorrect? Elizabeth Liddle
Elizabeth: Ah, the executive power. Well, it is not important. If we observe a designer in the process of designing, that means that he has the executive power. That power can be different, according to the method of implementation of design. But at present this is not important. For defining a function, the only executive power is that the observer must recognize a function, define it and give a method to assess it. Maybe I must be more clear. I am not saying that, if a certain observer cannot see any function in an object, that means that no function exists for the object. Another observer can recognize a function that the first observer missed. All that has no consequences for the discussion. All I am saying is that, if a conscious observer can explicitly define a function for the object, and give an objective way to assess it, we will take that function into consideration as a specification for that object. Nothing more. gpuccio
Elizabeth: Your objections may have value or not, but they are not relevant to my procedure. I need the concept of conscious intelligent being only for two steps of the procedure: 1) To define design. So, do you agree that humans are, in general, conscious intelligent agents? That's the important part, because we will consider exactly human artifacts as examples of design. It is not important, for now, to know if a computer is conscious or not. If it is conscious, we could also consider its outputs as designed, but that is not really important, because the computer, being designed by humans, implies anyway the purposeful intervention of a designer for its existence, and so that does not change anything. So, if you agree that humans are conscious intelligent agents, we can go on with the discussion, and leave for the moment unresolved whether computers, or aliens, are. 2) The second point is to define functional specification. Here, too, all we need is that a human observer may define a function for the object explicitly, and give a way to assess it, so that other conscious intelligent beings, like other humans, may agree. Again, it is of no relevance whether computers or aliens are conscious. If they are, they can certainly follow the reasoning with us :) gpuccio
Elizabeth: d = digital. We will consider only objects in which some sequence can be read by an observer as a digital sequence of values. It does not matter if those aspects of the object (points on paper, amino acids in a protein, nucleotides in a DNA molecule) are really symbols of something or not; the only thing that matters is that we, as observers, can give some numeric value to the sequence we observe, and describe it as a digital sequence. We can certainly do that for nucleotides in DNA protein coding genes, for instance. They are sequences of 4 different nucleotides, and the sequence can easily be written as a sequence of A, T, C and G on paper, conserving essentially the same digital form of the original molecule. FS = functionally specified. To assess our specification, all we need is a simple empirical procedure. If a conscious intelligent agent can define a function for the object, we say that the object is specified relative to that function, provided that: 1) The function can be explicitly defined, so that anyone can understand it and verify it. 2) Some explicit method is given to measure the function and assess its presence or absence, preferably by a quantitative threshold. Please note that it is perfectly possible that more than one function is defined for the same object. A good example is a PC. We can define it as a useful paperweight, and indeed it can certainly be used as such (if we do not have excessive expectations). But we can certainly define it as capable of computing, and give methods to verify that function. There is no problem there. Any function which can be explicitly defined and objectively assessed can be considered. Each function, however, has to be considered separately. Let's leave complexity to the next post. gpuccio
I like your definition in Step 2, I think. It's good to include purpose explicitly. However, it is absolutely dependent on Step One, and, moreover, requires an additional criterion - that the being is not only conscious but able to execute intentions (so it would rule out a conscious being with no executive power, e.g. a conscious but paralysed being). So we have to find an empirical method for determining a) consciousness and b) executive power. We don't have either yet :) Elizabeth Liddle
gpuccio:
Elizabeth: Well, here is step one. a) An empirical definition of conscious intelligent being. That's easy. Just any being we know to have conscious representation, including the basic cognitive functions, like abstract thought, logical thinking, and so on. No need of special definitions or theories here. The important thing is that we agree that humans are conscious intelligent beings (in general), due to the fundamental inference that they are subjectively conscious as each of us is. The requisite of agreement on conscious representations is fundamental here. A computer would not apply, because it is not conscious (or at least there certainly is not agreement that it is conscious). Non-human beings (let's say aliens) on which a general agreement were reached that they were conscious and intelligent would qualify. The definition is purely empirical, but it requires a shared inference that the being is conscious. For the moment, we will apply it to humans, very simply.
Well, it's not really empirical, gpuccio. To be an empirical criterion for conscious capacity, it would have to enable us to determine, entirely through empirical testing, whether the property of being conscious can be inferred - i.e. by objective observation. So, for example, if a candidate being was able to articulate abstract concepts, perform logical functions, respond flexibly to its surroundings, recognise objects, avoid moving obstacles etc., one might say it was conscious. But you explicitly rule this out - you say we should rule out a computer as being conscious - simply because everyone agrees it is not conscious! But isn't that begging the question? On what basis could they decide it was not conscious if not on empirical grounds? And if not on empirical grounds, then it isn't an empirical criterion. Can you see the problem? Elizabeth Liddle
Elizabeth: Well, while I await your objections to steps one and two, I can as well go on to step three, which is probably the most important. I may need to split it into parts, let's see. c) A definition of a specific property, called dFSCI (digital Functionally Specified Complex Information), and of how to evaluate it in an object. That includes a definition of functional specification, and a definition of complexity. It also includes a discussion of the problem of compressibility and of necessity based explanations. First I would like to briefly explain why I use the concept of dFSCI instead of the more general concept of CSI. The concept is essentially the same (information that is specified and complex at the same time). But I use only one type of specification: functional specification. The reason is that this way I can define it empirically, and I don't need complex mathematical or philosophical discussions. I will define functional specification in a moment. The second reason is that I limit the discussion to digital information. That makes the quantification much simpler. IOWs, dFSCI is a subset of CSI, where the information is in digital form, and the specification is functional. Whatever CSI may be in general, dFSCI is a subset that can be defined with greater precision. Moreover, there is no problem in limiting our treatment of CSI to dFSCI, because the final purpose of our discussion is to infer something about biological information, and biological information, at least the kind we will discuss (protein and DNA sequences), is both digital and functionally specified. So, let's see the components of the definition one by one. It is important to remember that here we are just defining a formal property of objects. We are not creating any theory of what dFSCI really is, of its meaning, or of anything else. We just want to define a property which can be present in an object or not, and then be able to say, for existing objects, if that property can be observed or not. It is a purely practical procedure. I will go on in the next post. gpuccio
Elizabeth: And, just to enter a little more into argument, the second step: b) An explicit definition of design, of the process of design, and of designed object. We define design as the purposeful imprint of some consciously represented form from a conscious intelligent being to a material object. The conscious intelligent being is called "designer", the process by which he imprints a special form to the object is called "process of design", and the object itself is called "designed object". The only important point here is that design is defined as any situation where a conscious intelligent being represents a form and purposefully outputs it to an external object. The conscious representation and the intent must be present, otherwise we don't call it design. Moreover, the designer can contribute even only in part to the final form of the object, but that part must be guided by his conscious and purposeful representations. IOWs, the designer must know what he wants to obtain, and must have the intent to obtain the result. The result can correspond more or less perfectly to the intentions of the designer, but the object is anyway more or less shaped by the design process. gpuccio
Elizabeth: Well, here is step one. a) An empirical definition of conscious intelligent being. That's easy. Just any being we know to have conscious representation, including the basic cognitive functions, like abstract thought, logical thinking, and so on. No need of special definitions or theories here. The important thing is that we agree that humans are conscious intelligent beings (in general), due to the fundamental inference that they are subjectively conscious as each of us is. The requisite of agreement on conscious representations is fundamental here. A computer would not apply, because it is not conscious (or at least there certainly is not agreement that it is conscious). Non-human beings (let's say aliens) on which a general agreement were reached that they were conscious and intelligent would qualify. The definition is purely empirical, but it requires a shared inference that the being is conscious. For the moment, we will apply it to humans, very simply. gpuccio
Well, I mean that the patterns deemed to have CSI (e.g. the patterns in living things) can be created by Chance and Necessity. And I follow, in principle, what CSI is. I find its mathematical definition a little odd, as it wraps what is normally called the "alpha" value into the definition. That part doesn't concern me though. What I'd like to know is how to compute the other two terms for a given pattern. Then I could try and produce it, using only Chance and Necessity. Elizabeth Liddle
Complex Specified Information is Shannon Information, of a specified complexity, with meaning/ function. But anyway if that is your position then when you said:
Well, I think your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
what did you mean if you don't even know what CSI is? Joseph
Joseph: I'd love to. But we'd still need an operational definition of CSI - I believe vjtorley had a go, perhaps someone could link to it. Elizabeth Liddle
Chris - thanks! No, no eggs involved! And it wouldn't matter if there were! That will be hugely helpful. Elizabeth Liddle
Yes indeed. That would be an excellent plan gpuccio. Step by step sounds good :) Fire away. Elizabeth Liddle
Elizabeth: Let's go this way. I will try to express here, if you follow me, a simple, empirical and, I believe, complete concept of CSI and its application to design inference in biological information. The terminology is not exactly identical to Dembski's, but it will be defined explicitly step by step. I would start by affirming that the concept of CSI is simple and intuitive, although its rigorous definition is more difficult, and must be done according to a specific context. In general, we can say that CSI is the presence in an object of information that is both specified and complex. But we will go back to that in due time. Here I will just outline the general form of my reasoning, and the steps of the argument. The argument is completely empirical. a) An empirical definition of conscious intelligent being. b) An explicit definition of design, of the process of design, and of designed object. c) A definition of a specific property, called dFSCI (digital Functionally Specified Complex Information), and of how to evaluate it in an object. That includes a definition of functional specification, and a definition of complexity. It also includes a discussion of the problem of compressibility and of necessity based explanations. d) An empirical appreciation of the correlation between dFSCI and design in all known cases. e) An appreciation of the existence of vast quantities of objects exhibiting dFSCI in the biological world, and nowhere else (excluding designed objects). f) A final inference, by analogy, of a design process as the cause of dFSCI in living beings. These are essentially the steps. We can go step by step, if you are interested. gpuccio
Elizabeth:
Well, I think your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
That is based on our experience with cause and effect relationships. So have at it- perhaps YOU can be the FIRST person on this planet to demonstrate that CSI can arise via necessity and chance. Joseph
Hi Lizzie, Forgive me if I'm teaching a granny to suck eggs, but if you enter the word 'feed' at the end of the web address (right after the final '/') you can then subscribe to all of the comments that are made on that specific thread. I use Internet Explorer here at work, so any web feeds I subscribe to appear under the 'Favorites' button on the toolbar. But you should be able to use any feed reader really (Google Reader is quite good). This will allow you to easily keep track of any new comments without having to manually check every single thread you bookmark. If you're not doing something like this already, you will find it particularly useful given that you are deservedly the centre of attention here at the moment! Chris Doyle
I should also say, I'm pretty busy this week. But I've bookmarked this thread, and will keep checking in. Elizabeth Liddle
I'm grateful to you, Mung, for keeping track. Boy is this site awkward (has it occurred to anyone at UD to move to a forum instead of a blog format? Or even Scoop?) OK. You've stated at least three reasons why you reject intelligent design.
One of them you’ve stated as follows:
the hypothesis as put forward by Dembski, for example, I think, is incorrectly operationalised. Specifically, I think the null hypothesis is wrongly formulated, and that this invalidates the design inference.
gpuccio: Could you please elucidate? I am very interested.
The above statement can be found in your post HERE.
Well, I am all prepared to elucidate my statement with reference to Dembski's paper "Specification: the pattern that signifies intelligence", which Dembski himself seems to consider a summary, clarification and extension of his previous work. However, gpuccio shares my view that this is a poor piece of work. So, either we can take that as agreed, or choose another paper by Dembski that someone thinks is a better formulation of his hypothesis. He has written many.
An additional objection against ID you’ve stated is that chance + necessity can generate information, therefore no intelligent cause is required.
Perhaps you can tell us what definition of information you have in mind. I expect that you and Chris will be discussing this.
By a number of definitions. I had in mind Dembski's CSI, but my claim probably holds for others as well.
How are you coming on Signature in the Cell?
About half way through.
Actually, I think you’ll find that most of us here at UD are truly interested in valid objections to ID.
Excellent :) But it needs to be treated as specific hypotheses to be properly critiqued. Some have more weight than others IMO. So far, I'd say Meyer has the best point.
But like Upright BiPed has said, it has been some two months now that we've been waiting for you to support your second objection to ID.
I've presented what I hope is now a viable operationalisation of my claim that operationalises UPD's conceptual definition of information. When he, or someone else, agrees that it is sufficient to enable the results of my project to be evaluated, I am ready to begin. But obviously I won't begin until then. It is here:
You stated an additional objection:
we have no trace of a mechanism by which an external designer might have designed living things. We do, in contrast, have many traces of an intrinsic design mechanism (essentially, Darwin’s).
I'd split that into two: 1. No "design mechanism" + "no external designer." 2. Darwinism offers a design mechanism. So I'll ask if you have any more objections to ID.
I do not have a global objection to ID in principle. I have specific objections to specific ID arguments and inferences. I have not read any ID argument that I have yet found persuasive, though I have read some that point to gaps in the history of life that have not yet been convincingly filled - OOL being the obvious example.
And then perhaps ask you to number or label the objections you do have, or if you wish you can restate them, and then we can better keep track of them.
Alternatively, we could take specific ID arguments (this would be a much better way of sifting them IMO). For example: CSI, the EF, Irreducible Complexity, or Meyer's argument.
How’s that sound?
In principle, good. I'd still prefer to start with what seems to be Dembski's most recent (and, according to him, most refined) exposition of his idea. If we can agree on where it falls down, that might lead us to the point at which he took a wrong turn. If we find it does not, then we have all learned something. Elizabeth Liddle
Yes, Lizzie, some of us have nothing better to do than keep track of stuff like this. :) Actually, I think you'll find that most of us here at UD are truly interested in valid objections to ID. But like Upright BiPed has said, it has been some two months now that we've been waiting for you to support your second objection to ID. You stated an additional objection:
we have no trace of a mechanism by which an external designer might have designed living things. We do, in contrast, have many traces of an intrinsic design mechanism (essentially, Darwin’s).
I'd split that into two: 1. No "design mechanism" + "no external designer." 2. Darwinism offers a design mechanism. So I'll ask if you have any more objections to ID. And then perhaps ask you to number or label the objections you do have, or if you wish you can restate them, and then we can better keep track of them. How's that sound? Cheers. Mung
Elizabeth Liddle:
OK, Mung and gpuccio: where do you want to start?
You've stated at least three reasons why you reject intelligent design. One of them you've stated as follows:
the hypothesis as put forward by Dembski, for example, I think, is incorrectly operationalised. Specifically, I think the null hypothesis is wrongly formulated, and that this invalidates the design inference. gpuccio: Could you please elucidate? I am very interested.
The above statement can be found in your post HERE. An additional objection against ID you've stated is that chance + necessity can generate information, therefore no intelligent cause is required. Perhaps you can tell us what definition of information you have in mind. I expect that you and Chris will be discussing this. How are you coming on Signature in the Cell? Mung
Try the OP above. With a dash of here and here. Also cf review article here. kairosfocus
OK, Mung and gpuccio: where do you want to start? I thought Dembski's paper here: http://www.designinference.com/documents/2005.06.Specification.pdf was a pretty good place. But if someone would like to point me to an alternative (preferably just one, to start with!), that's cool. Elizabeth Liddle
Thought experiment: To illustrate necessity vs chance vs choice, in a sense relevant to the concept of CSI, and more particularly FSCI.
1: Imagine a 128-sided die (similar to the 100-sided Zocchihedra that have been made, but somehow made to be fully fair) with the character set for the 7-bit ASCII codes on its faces.
2: Set up a tray holding a string of 73 such dice, equivalent to about 500 bits of info storage, or roughly 10^150 possibility states.
3: Convert the 10^57 atoms of our solar system into such trays, and toss them for c. 5 bn years, a typical estimate for the age of the solar system, scanning each time for a coherent 73-character message in English, such as the first 73 characters of this post or a similar message.
4: The number of runs will be well under 10^102, and so the trays could not sample as much as 1 in 10^48 of the roughly 10^150 configs of the system.
5: So, if something is significantly rare in the space W of possibilities -- i.e. it is a cluster of outcomes E comprising a narrow and unrepresentative zone T -- the set of samples is maximally unlikely to hit on any E in T.
6: And yet the first 73 characters of this post were composed by intelligent choice in a few minutes; indeed, I think in less than one.
7: We thus see how chance contingency, on the scope of the resources of our solar system, is deeply challenged to create an instance of such FSCI, while choice directed by purposeful intelligence routinely does so. So we see why FSCI is taken as a reliable sign of choice, not chance.
8: Likewise, if we were to take the dice and drop them, they would reliably fall. Indeed, we can write down the relevant differential equations that, under given initial circumstances, will reliably predict the observed natural regularity of falling.
9: Thus we see mechanical necessity, characterised by natural regularities of low contingency.
10: This leads to the explanatory filter used in design theory, whereby natural regularity in an aspect of a phenomenon leads to the explanation "law."
11: High contingency indicates the relevant aspect is driven by chance and/or choice, with a distinguishing sign like FSCI -- per the exercise just above -- reliably highlighting choice as the best explanation. GEM of TKI kairosfocus
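A minimal numerical sketch of the tally behind points 2 to 5 above. The one-toss-per-tray-per-second rate is an illustrative assumption (the faster rates usually assumed give the larger 10^102 figure quoted in point 4), and either way the sample remains a vanishing fraction of the space. Note that 73 seven-bit characters strictly give 511 bits and about 10^154 configurations; the 500-bit / 10^150 figures in the comment are the rounded-down threshold values.

import math
import random
import string

CHARS_PER_TRAY = 73                       # dice per tray
ALPHABET = 128                            # 7-bit ASCII faces per die
W = ALPHABET ** CHARS_PER_TRAY            # size of the configuration space
TRAYS = 10 ** 57                          # one tray per atom of the solar system
SECONDS = int(5e9 * 3.156e7)              # ~5 billion years, in seconds
TOSSES = TRAYS * SECONDS                  # assumed rate: one toss per tray per second

print(f"Configuration space W ~ 10^{math.log10(W):.0f} ({math.log2(W):.0f} bits)")
print(f"Tosses available      ~ 10^{math.log10(TOSSES):.0f}")
print(f"Fraction of W sampled ~ 10^{math.log10(TOSSES) - math.log10(W):.0f}")

# Three random 73-character draws, to show what blind sampling of W actually yields:
faces = string.printable[:95]             # printable ASCII, including space
for _ in range(3):
    print("".join(random.choice(faces) for _ in range(CHARS_PER_TRAY)))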
Kuartus: Let's see:
>> A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. >>
a: Misleading and strawmannish, cf. NFL 144 and 148 above, as well as the OP. Complex specified info has to meet several separate criteria.
>> These numbers measure properties of things of different types: >>
b: Pounding on the strawman and pretending or imagining you are answering the real issue.
>> Complexity measures how hard it is to describe an object (such as a bitstring) >>
c: As WD says.
>> information measures how close to uniform a random probability distribution is >>
d: A misleading view of the Hartley-suggested metric, I = – log p.
>> and improbability measures how unlikely an event is given a probability distribution >>
e: Citing a commonplace as if it were a refutation.
And:
>> Dembski's calculations show how a simple smooth function cannot gain information. >>
f: Distortion. If a search to find a target is beyond the search resources, esp. at the cosmos level, then a chance-based random walk is utterly unlikely to find zones T.
>> He therefore concludes that there must be a designer to obtain CSI. >>
g: Misrepresentation; intelligences are routinely observed, and are the only observed causes of CSI, such as the posts in this blog.
>> However, natural selection has a branching mapping from one to many (replication) followed by pruning mapping of the many back down to a few (selection). >>
h: As Weasel showed inadvertently, the real problem is to get TO the shores of islands of function in zones T, so that hill climbing can begin. This begs the question of getting to such islands of body plans that work, starting with the embryological development program.
i: Embryogenesis is known to be highly sensitive to disruption, i.e. we are looking at credible islands of function.
j: This starts with the very first body plan, OOL.
>> When information is replicated, some copies can be differently modified while others remain the same, allowing information to increase. >>
k: One may move around within an island of function all one pleases without explaining how one arrives at such an island in the first place; and of course the GAs show examples of how that is known to happen: they are designed and built by intelligent designers.
l: Notice the switcheroo on the question to be answered, kept up in the teeth of repeated pointing out that the real issue lies elsewhere, starting with OOL.
>> These increasing and reductional mappings were not modeled by Dembski >>
m: Because he was addressing the REAL question, as in the one you have ducked; namely, getting to the shores of islands of function in large config spaces utterly dominated by non-function.
See the same problems again and again? GEM of TKI kairosfocus
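A note on (c) through (e) above: on the standard Hartley/Shannon self-information metric these are not three unrelated quantities but two faces of one measure. The uniform 7-bit ASCII figures below are illustrative assumptions carried over from the dice thought experiment earlier in the thread:

I(E) \;=\; -\log_2 P(E \mid H)\ \text{bits}, \qquad
P(\text{a given ASCII character}) = \tfrac{1}{128} \;\Rightarrow\; I = 7\ \text{bits}, \qquad
P(\text{a given 73-character string}) = 128^{-73} \;\Rightarrow\; I = 73 \times 7 = 511\ \text{bits} > 500.

So a low probability on the chance hypothesis and a high information measure are the same fact stated two ways, which is the point notes (c) through (e) are making.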
Hi kairosfocus, Wikipedia also says here: http://en.wikipedia.org/wiki/Specified_complexity#Criticisms "A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. These numbers measure properties of things of different types: Complexity measures how hard it is to describe an object (such as a bitstring), information measures how close to uniform a random probability distribution is and improbability measures how unlikely an event is given a probability distribution" Also, "Dembski's calculations show how a simple smooth function cannot gain information. He therefore concludes that there must be a designer to obtain CSI. However, natural selection has a branching mapping from one to many (replication) followed by pruning mapping of the many back down to a few (selection). When information is replicated, some copies can be differently modified while others remain the same, allowing information to increase. These increasing and reductional mappings were not modeled by Dembski" Do these criticisms have any weight? kuartus
F/N: I have decided to reply to EugenS's question on Wiki's challenge to the CSI concept here, as it better fits. I will notify in the other thread. ES: has anyone addressed in detail the problems identified in the article on CSI in Wikipedia? Quick notes on excerpts from Wiki on CSI in their ID article, showing why Wiki is utterly untrustworthy on this, just going through several examples in succession: >>In 1986 the creationist chemist Charles Thaxton used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. >> 1 --> Thaxton was a design theory pioneer, not a "creationist," this is labelling to smear, poison and dismiss. This is a PhD chemist working on thermodynamics, with a PhD polymer expert [Bradley] and a PhD Geologist/mining engineer [Olsen]. Wiki is grossly disrespectful. 2 --> The concept of specified complexity in the modern era is not from Thaxton -- as he and his co authors cited in their own work, it comes from OOL researcher ORGEL in 1973, which Wiki knows or should know -- notice the article is locked against correction:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]
3 --> The summary of TBO's argument in TMLO [full pdf d/load] is loaded and unfair as well. A complex technical argument in thermodynamics and chemical kinetics about the spontaneous concentration of relevant biopolymers to form precursors to life -- this is OOL, which, if Wiki were honest, it would admit is the biggest single hole in the evolutionary story -- is reduced to a caricature. The conclusion -- that design is the best explanation, given resulting values that boil down to effectively zero molecules in a planetary prebiotic soup [conc. 10^-338 molar IIRC . . . ] or indeed a cosmic-scale soup, with the specific note that this does not warrant an inference to a designer beyond or within the cosmos -- is strawmannised.
4 --> This is beyond merely careless; it is willfully distorting and strawmannising.
>> Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance. Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature. >>
5 --> This so distorts and strawmannises what WmAD actually gave on pp. 144 and 148 of NFL, just for one instance, as to be libellous:
p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ” [cf original post above]
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”
6 --> A cursory glance at what is being described and the description proffered will show that Dembski is NOT giving a tautology, but is identifying that when an event E comes from a narrow and unrepresentative zone T -- one separately and simply describable -- in a field of possibilities W, then no reasonable chance-driven process is likely ever to land on T, because the scope of the challenge posed by T in W swamps the search capacity of the observed cosmos, T being something like 1 in 10^150 of W.
7 --> Nor do we see a recognition of where that threshold comes from: the exhaustion of the Planck-time quantum state resources of the observed cosmos, which come to about 10^150 states (a worked tally follows after this comment). Remember, the fastest chemical reactions, ionic ones, take about 10^30 Planck times.
8 --> And the issue is not tautology but search challenge on chance plus necessity sans intelligence; with the contrast that 73 ASCII characters' worth of meaningful information is routinely -- and only -- observed as caused by intelligence. Like this post, and your own.
9 --> Critics who cry tautology in the face of that discredit anything more they have to say.
>> The conceptual soundness of Dembski's specified complexity/CSI argument has been widely discredited by the scientific and mathematical communities.[13][15][54] Specified complexity has yet to be shown to have wide applications in other fields as Dembski asserts. John Wilkins and Wesley Elsberry characterize Dembski's "explanatory filter" as eliminative, because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions. >>
10 --> This first hurls the elephant of the presumed authority of the scientific community, then gives a misleading complaint, then asserts a dismissal that is wrong.
11 --> In fact we routinely infer to intelligence, not lucky noise, on encountering, say, blog comments, or on seeing evident artifacts vs. natural stones at an archaeological site, etc., or even on needing to evaluate on signs whether a patient is conscious and coherent using the Glasgow Coma Scale; and we do so on precisely the intuitive form of the explanatory process that Dembski highlighted. (Cf. my per-aspect development of it here.)
12 --> The cited objection is little more than the usual stricture that an inductive inference across competing possibilities on signs and statistical patterns may possibly make an error. The issue is not absolute proof but reliability, on pain of selective hyperskepticism. The objectors have for years been unable to provide a clear case where we know the causal story and CSI is the result of chance and necessity without design. A whole Internet is there to substantiate the point about the known source of CSI.
>> Richard Dawkins, another critic of intelligent design, argues in The God Delusion that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex.[56] >>
13 --> RUBBISH. The issue was: is there an empirically reliable sign of design of THIS object in hand, of THIS process, etc.? To that, the answer is yes.
14 --> On the strength of the FSCI in DNA, we have good reason to infer on sign to design of life.
It matters but little at this level that a sufficient cause for the living cell would be a molecular nanotech lab some generations beyond the one run by Venter; after all, intelligent design of DNA is now a matter of published empirical fact -- Venter even signs his name in the resulting proteins with a watermark!
15 --> This is also grossly ignorant on the cosmological level. On evidence, our observed cosmos had a beginning, and is fine-tuned in a way that sets it at a delicately balanced operating point that supports C-chemistry, cell-based life.
16 --> This empirically and logically grounds an inference to a cause beyond our cosmos, pointing ultimately to a necessary being with the intelligence, power and purpose to create a cosmos of 10^80 atoms supportive of C-chemistry, cell-based life.
17 --> That such a necessary being -- one without external necessary causal factors and so without beginning or end [at the simple level, relations of necessary logical truth like 2 + 2 = 4 are of this class] -- may or may not in some sense be more complex than the cosmos is irrelevant, apart from an inference to the greatness of such a necessary being.
>> Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers". >>
19 --> This is a willful distortion of the known facts of GAs and the like. Such are intelligently designed and work by moving around in an island of designed function.
20 --> As we can see from NFL pp. 144 and 148, the problem tackled by the ID case is to get to the shores of such islands. The questions are being begged and the results are being willfully distorted.
+++++++++++
As shown in outline, Wiki has no credibility on the subject of intelligent design. And since the gross errors, distortions and misrepresentations have been corrected any number of times but have been reverted and are now literally locked in, this is willful. Willful deception. There is another, sharper word for this that is unfortunately well warranted: L--s. Sorry if that offends, but this is blatant and willful. GEM of TKI
PS: I don't know if anyone wants to carry this forward further. Feel free; it is time we did a major exposé of Wiki on this subject, as ES invited us to. kairosfocus
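On point 7 above: the 10^150 figure is normally tallied from round numbers -- roughly 10^80 atoms in the observed cosmos, at most about 10^45 Planck-time state changes per second, over a generously long 10^25 seconds. These are the round figures commonly used in this literature rather than anything derived here, so treat the tally as an order-of-magnitude sketch:

N_{\mathrm{max}} \;\approx\; 10^{80} \times 10^{45}\,\mathrm{s^{-1}} \times 10^{25}\,\mathrm{s} \;=\; 10^{150}, \qquad \log_2\!\big(10^{150}\big) \;\approx\; 498\ \text{bits}.

Hence the roughly 500-bit threshold: an outcome whose probability on the relevant chance hypothesis is below about 1 in 10^150 is beyond the reach of a blind sample of that many states.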
Dr Liddle, I have looked over your post at 17, but to be completely honest with you, I don't see the direct connection between these questions and the conversation we have been having. Perhaps these questions are arising in your mind as a result of reading "Sig in the Cell" - which is all fine and good - but they don't seem to directly bear on the topics we had been discussing. Perhaps you can set me straight on the implications of these questions for the larger set of topics in our previous posts. To help make the connection, I will post the bulk of your last substantial post from the previous conversation.
- - - - - - - - - - - - - - -
BIPED: Dr Liddle, To endure the amount of grief that ID proponents have to take, one would think that at the bottom of the theory there would at least be a big booming "tah-dah" and perhaps a crashing cymbal or two. But unfortunately that's not the case; the theory doesn't postulate anything acting outside the known laws of the universe.
LIDDLE: Cool. Yes, I understand that.
BIPED: I bring this up because you want to design a simulation intended to reflect reality to the very best of your ability, and in this simulated reality you want to show something can happen which ID theory says doesn't happen. Knowing full well that reality can't be truly simulated, it's interesting that the closer you get to truly simulating reality, the more stubborn my argument becomes. Only by not simulating reality does your argument have even a chance of being true.
LIDDLE: Heh. I recognise the sentiment. The devil is always in the details. But we shall see.
BIPED: Yet, if ID says that everything in the material universe acts within the laws of the universe, then what is it exactly to be demonstrated within this simulation? In other words, what is the IT? Of course, since this is set up to be a falsification, the IT is for prescriptive information exchange to spontaneously arise from chance and necessity. But that result may be subject to interpretation, and so consequently you want to know exactly what must form in order for me to concede that your falsification is valid.
LIDDLE: Thanks. I'm not familiar with the abbreviation "IT" unfortunately, but I think I get your drift. I hope so. I would certainly agree that the Study Hypothesis (H1 in my language) is "for prescriptive information exchange to spontaneously arise from chance and necessity". And so to falsify the null (that prescriptive information exchange can spontaneously arise from chance and necessity) yes, I want to know the answer to that question. Good!
BIPED: I intend to try and fully answer that question in this post. I'm sure you are aware of the Rosetta stone, the ancient stone with the same text written in three separate ancient scripts. Generally, it gave us the ability to decode the meaning of the ancient hieroglyphs by leading us to the discrete protocols behind the recorded symbols. This dovetails precisely with the conversations we've had thus far regarding symbols, in that there is a necessary mapping between the symbol and what is to be symbolized. And in fact, it is the prime characteristic of recorded information that it does indeed always confer that such a mapping exists - by virtue of those protocols it becomes about something, and is therefore recorded information as opposed to noise.
LIDDLE: Trying to parse: the prime characteristic of recorded information is that it confers (establishes? requires?) a [necessary] mapping between symbol and what is symbolised. So what about these "protocols"?
What I'm thinking is that in living things, the big genetic question is: by what means does the genotype impact the phenotype? And the answer is something like a protocol. I like. But let me read on….
BIPED: In retrospect, when I stated that recorded information requires symbols in order to exist, it would have been more correct to say that recorded information requires both symbols and the discrete protocols that actualize them. Without symbols, recorded information cannot exist, and without protocols it cannot be transferred. Yet, we know in the cell that information both exists and is transferred.
LIDDLE: Yes. And I like that you refer to "the cell" and not simply "the DNA".
BIPED: This goes to the very heart of the claim that ID makes regarding the necessity of a living agent in the causal chain leading to the origin of biological information.
LIDDLE: Let me be clear here: by "living agent", are you referring to the postulated Intelligent Designer[s]? Or am I misunderstanding you?
BIPED: ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world). Their very presence reflects a break in the causal chain, where on one side is pure physicality (chance contingency + physical law) and on the other side is formalism (choice contingency + physical law). Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law.
LIDDLE: Cool. I like that.
BIPED: And therefore, to be an actual falsification of ID, your simulation would be required to demonstrate that indeed symbols and their discrete protocols came into physical existence by nothing more than chance and physical law.
LIDDLE: Right.
BIPED: The question immediately becomes "how would we know?" How is the presence of symbols and their discrete protocols observed in order to be able to demonstrate they exist? For this, I suggest we can use life itself as a model, since that is the subject on the table. We could also easily consider any number of human inventions where information (symbols and protocols) is used in an "autonomous" (non-conscious) system.
LIDDLE: OK.
BIPED: For instance, in a computer (where information is processed) we physically instantiate into the system the protocols that are to be used in decoding the symbols. The same can be said of any number of similar systems. Within these systems (highlighting the very nature of information) we can change the protocols and symbols and the information can (and will) continue to flow. Within the cell, the discrete protocols for decoding the symbols in DNA are physically instantiated in the tRNA and its coworkers. (This of course makes complete sense in a self-replicating system, and leads us to the observed paradox where you need to decode the information in DNA in order to build the system capable of decoding the information in DNA.)
LIDDLE: Nicely put. And my intention is to show that it is not a paradox - that a beginning consisting of an unfeasibly improbable assemblage of molecules, brought together by no more than Chance (stochastic processes) and Necessity (physical and chemical properties), can bootstrap itself into a cycle of coding:building:coding:building: etc.
BIPED: Given this is the way in which we find symbols and protocols physically instantiated in living systems (allowing for the exchange of information), it would be reasonable to expect to see these same dynamics at work in your simulation.
LIDDLE: Yes, I agree. Cool!
BIPED: I hope that helps you "get to the heart of what [I] think evolutionary processes can't do".
LIDDLE: Yes, I think so. That is enormously helpful and just what I was looking for.
- - - - - - - - - - - - -
So Dr Liddle, it seems to me we were working through an agreement on exactly what must be demonstrated by your simulation, and how the presence of each requirement would be observed and/or verified. Is this not where we are at? Upright BiPed
a) can self-replicators (with variance) evolve from non-self-replicators?
I'm going to go out on a limb here and say no. :) Evolution requires replication, so replication is not something which can evolve from non-replication. a') Can an evolvable self-replicator magically appear? Mung
KF,
it is self replication as an additional facility of something that is separately complex and functional as an automaton.
That could not be more vague. So not only does it have to replicate itself, it has to have a purpose? A function in life? What, in your view, counts as a suitable function for a thing such as is being proposed, so that if observed it proves the point one way or another? Is moving towards food/energy sufficiently "complex and functional" to satisfy you in that regard? What about just movement? Or what about replication? Perhaps replicating with just the right error rate, not too much, not too little? Or what about having certain attitudes to life? Feelings? The domain is small. A minimal self-replicator. What additional functionality, other than self-replication, will have to arise for it to become "relevant" in your eyes? Make a prediction! WilliamRoache
Dr Liddle: I wish you best success on your tour. Please recall, though, it is self replication as an additional facility of something that is separately complex and functional as an automaton. In a context of coded representation. That is what is relevant. GEM of TKI LINK kairosfocus
Thank you Dr Liddle for the response. I will consider your questions and return shortly. Upright BiPed
Sorry UPD - someone said that comments were closed on that thread, but I haven't forgotten, just been dashing round the country visiting universities with my son. Also, reading Signature in the Cell, as I think I said, because I think it's very useful, and also having a productive (I think) conversation with Mung and kairosfocus about CSI and the EF. I've still got a couple more university visits to do this week, I'm busy all day Saturday, and have my father visiting on Sunday, so still a bit snowed under. What I have done, though, is roughed out three separable issues that we need to disentangle:
One is, can we get a self-replicator to arise from scratch (without a specific self-replication algorithm built in) that will be capable of Darwinian evolution (i.e. optimise itself for continuation in its virtual environment)?
Second is: how do we measure the information it generates (if it does)?
Third is: does something as complicated as a ribosome remain irreducibly complex, and so require an ID, regardless of whether a simpler self-replicator can emerge from a non-self-replicating set of starting items?
Which I guess we could express as:
a) can self-replicators (with variance) evolve from non-self-replicators?
b) If so, can they generate complex specified information?
c) If so, can they generate information as complex and specified as that we see in a living cell?
I won't attempt c! Given that, do you regard a and b as worth attempting? Elizabeth Liddle
UB: Such a refutation would be well within the ambit of this thread. Let's see . . . kairosfocus
Dr Liddle, are you out there? Do you not yet have anything for us to discuss? I have agreed to help you falsify ID with your simulation, and it would seem that we have much to do. I have been waiting since the 17th of June for your next response. Upright BiPed
Notice: no explanation for the trumpeted, drumbeat assertions against CSI, and the Chi metric, or its reduction. G kairosfocus
As ever, I point out re: "Chance and necessity" that the 'chance' is assumed, from the point of view of science. Intelligent agents are capable of making large, sudden changes, I admit. They're also capable of making small, incremental changes, or of employing such changes. That said, it's also reasonable to suppose real and practical limits to the small and incremental. nullasalus
Liz: And so my answer would be that your brain has been “programmed” (scare quotes deliberate) both by evolution and by your own actions and experiences to do what you rightly say it can do.
Prove it. mike1962
Elizabeth, I think I've been honest and fair towards you but this is the weakest argument I've seen from you, besides "thriving life = Genetic Entropy is not true". You are simply begging the question of how/when chance and necessity have ever created CSI. If you can provide evidence of this you could single-handedly shut down the ID movement.
your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
It's not an assumption, it's an observation from decades of experiments and computer simulations and thousands of years of human experience, as well as the extinguished fantasies of many a computer programmer.
your brain has been “programmed” (scare quotes deliberate) both by evolution and by your own actions and experiences to do what you rightly say it can do
Who/what is "you" and what does it have to do with mindless materialism? If my brain is simply an adaptable algorithm, what does it matter what "I" think my brain can do? Are you not simply dumping the CSI-production into the mysterious algorithm "you"? And would you mind linking some of the "systems materialism" information you speak of? uoflcard
Elizabeth Liddle:
Well, I think your error is the basic ID error – the assumption Chance and Necessity can’t create CSI!
Do you deny that humans do so all the time? Do you agree that humans can and do create computer programs, such as genetic algorithms? Do these programs require CSI to function? Do you think genetic algorithms can generate CSI? If so, where does the CSI come from?
I don’t think this is anything like demonstrated, and, indeed, I think it is demonstrably untrue.
So now you want the ID theorist to demonstrate that Chance + Necessity could not possibly create CSI? Where have we seen this before?
Bring on “systems materialism”!
Does that mean the old materialism has been falsified? Mung
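To make the questions just above about genetic algorithms concrete, here is a minimal GA sketch (the OneMax task, the population size, mutation rate and selection rule are all illustrative assumptions chosen by the programmer, which is precisely the point at issue). Whether its output counts as freshly generated CSI, or as cashing out what the designer of the representation and fitness function built in, is exactly what the two sides in this thread dispute.

import random

LENGTH, POP, GENERATIONS, MUTATION = 64, 50, 200, 0.01

def fitness(genome):
    # The "function" being optimised -- supplied by the programmer.
    return sum(genome)

def mutate(genome):
    # Variation operator -- its form and rate are also design decisions.
    return [bit ^ (random.random() < MUTATION) for bit in genome]

# Random starting population of bit strings.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]        # selection rule -- also designed
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

print("Best fitness reached:", max(map(fitness, population)), "out of", LENGTH)

With these settings the best genome typically nears 64 ones within a few dozen generations; note (k) in the Wikipedia reply above would attribute that success to the designed fitness function and representation, while Dr Liddle's position is that the increase counts as information produced by chance and necessity.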
Quick notes: ULC: Please cf the discussion of the Glasgow coma scale here. Dr Liddle: Kindly provide a case of chance and mechanical necessity without intelligent direction, producing CSI, especially FSCI. (The just linked would also help, I think.) UB: Indeed, I would like to see that response. GEM of TKI kairosfocus
Hello Dr Liddle, I have returned, and have looked around for your response to our previous conversation (where you were going to demonstrate the rise of information processing from chance and necessity). The last post to which you said you would address yourself was dated June 17th. If you have made such a response, then please provide a link. If you have not, then, what have you got for me? Upright BiPed
Well, I think your error is the basic ID error - the assumption Chance and Necessity can't create CSI! I don't think this is anything like demonstrated, and, indeed, I think it is demonstrably untrue. And so my answer would be that your brain has been "programmed" (scare quotes deliberate) both by evolution and by your own actions and experiences to do what you rightly say it can do. We even know a lot about just how those programs work. And yes, although it sounds circular, I don't think it is - a spiral is not a circle, and bootstrapping is of course possible. I'd say that we bootstrap ourselves into consciousness, will, and identity, and it is our brains, honed by billions of years of evolution, that allow us to do this. But another flaw in your logic (from my PoV!) is to characterise materialism as "reducing" our brains and bodies to atoms. As I've said a few times now, reductive materialism is neither effective nor necessary. Bring on "systems materialism"! It's what we do in any case. And the systems by which we make sense of our environment, and decide on appropriate action in light not just of our immediate needs, but of our long-term goals, desires, and abstract principles, are not only fascinating, but well studied :) (And interesting, whether or not you buy the Ghost in the Machine.) Elizabeth Liddle
Here is something I've been wondering about lately. I think it's on topic, but sorry if it's not... Our minds output CSI constantly while conscious. This is completely undeniable, even more undeniable than CSI in a genome. How can a materialist possibly account for this? If ultimately our entire body ("mind" included) is reducible to atoms, and the arrangement of these atoms are all reducible to DNA (at least "templates" are reducible to DNA), then our minds must be in our DNA. But how do we get so much CSI (thoughts) out of, relatively, so little (genome)? Shouldn't our brains need to be at least as complex and specified as any thought that comes out of it? But then how do they get programmed to be this complex and specified? It's common knowledge among computer programmers that whatever intelligent information comes out of a program needed to be previously programmed into it, yet this is exactly what "mainstream" atheis...err, I mean biologists must violate in order for their assumptions to be accommodated. And input from the environment doesn't seem to explain it because an input, for a purely material system, must have a pre-programmed "decoder" in order to make sense of it. A security scanner at an airport is great at recognizing guns hidden in suitcases because it has been programmed to look for certain features of known guns. But if I hook a microphone up to it (assuming it even had a port for one), I could tell it I was going to hi-jack flight # xxxx and it wouldn't do a thing. Why? Because it hadn't been programmed to interpret such information. So how does a mindless brain accomplish this task? What is the mainstream response to this prompt? There might be some error in my logic because I'm a little out of my comfort zone with this topic, but I'm curious for a response from both sides of the debate. uoflcard
F/N: Did a bit of checking on the Chi_500 expression and saw that some are trying the old "painting the target after the arrow hits" talking point. This is doubly erroneous. First, the observed event E is recognised as coming from a separately definable zone of interest T that must, on good grounds, be seen as narrow and unrepresentative of the bulk of the possibilities W. Second, the detection of the specificity of such a configuration is amenable to empirical or analytical test. That is:
a: We can simply perturb the explicit or implied data string, on the ground or in a validated simulation, and see if it makes a difference -- direct or simulation testing of the island-of-function type case.
b: We can see whether valid values for T are constrained in such a way as will make them come from a narrow and unrepresentative cluster in W.
c: For instance, if the 1,000-coin string is such that the ASCII code equivalents are going to be -- as I have often given as an example, and as has been used as a classic example since Thaxton et al. in TMLO, 1985, the very first ID technical work [there is no excuse] -- a contextually responsive string in English, then it is sharply constrained by the requirements of English spelling, grammar and, of course, the issue of being meaningful and related to a context. So, beyond dispute, only a narrow and unrepresentative cross-section of the field of possibilities W will be acceptable.
d: Similarly, computer code or data structures, starting with strings used in information systems, must conform to the requirements of symbolisation conventions and rules for expressing meaningful and functional possibilities.
e: Going further, something like a blueprint, a wiring diagram, an exploded diagram or the specs for a part will be reducible to a set of structured strings, e.g. as CAD software routinely does [PC active storage is based on ordered, stacked strings, AKA memory].
f: Taking such an entity, one may perturb the string by injecting noise, triggering a random walk in a config space. Tests can then show whether the resulting varied item will fit within an island of function, or whether there is a wide variability that will still allow for a working part. (A short sketch of such a perturbation test follows after this comment.)
g: We already know -- or should know -- the general result. While there are tolerances, parts in complex systems generally need to be within such ranges and to be appropriately connected to the other parts.
h: Just to pick a jet aircraft case: back in the 1950s, for the old F-86 Sabre jet, IIRC, there was a particular bolt in the wing that needed to be put in in a way that was opposite to what had been the usual way around in previous aircraft. There were several fatal crashes, and on tracing, it was found that the problems correlated with shifts worked by a particular older worker. He had followed his habit, not the blueprint, and had unintentionally killed several men. (I gather the investigators spared him the horrific consequences of that error, but made sure it did not happen again.)
i: This is a case where the tolerance for error was zero bits.
j: Similarly, it is notorious in military aircraft maintenance that an aircraft that is unfit for service may have a cluster of parts that are all individually in-tolerance but collectively yield a non-functional subsystem.
k: In living systems, the embryological feasibility of body-plan-affecting mutations -- which must be expressed early in development, while the body plan is being laid down in the forming embryo -- is a notorious source of mutational lethality. (That is, body plans, per evidence, come in islands of function.)
In short, this is not a case of painting the target after the arrow hits. And, BTW, Dembski pointed this out in NFL a decade or more ago. There is a reason why T is independently and simply describable. GEM of TKI
PS: It seems that some have taken up the mistaken epistemological view that if one can object, one may then easily dismiss. This is self-referentially incoherent selective hyperskepticism. Before one justifiably claims to know that A, or to know that not-A, one has the same duty of warrant; for Not-A is just as much a commitment as A. And, for things that are really important, "I don't know, so I dismiss" is even less defensible. There are things that one knows or SHOULD know. And if one is actually accepting substantially similar cases but rejects a particular example on prejudice, that is irresponsible. Let's make this concrete: we routinely read text and confidently know it comes from an intelligent source, precisely because of the utter unlikelihood that such would result from blind chance and/or necessity. We have in hand something that draws out the reason behind this. To then turn around and say "I don't like that this might let a Divine Foot in the door, so I dismiss" is prejudice, not responsible thought. (Especially as Lewontin's excuse -- that a world in which miracles are possible is a chaos in which science is impossible -- is flatly contradicted by the fact that it was theistic thinkers who founded modern science, and did so on the understanding that God is a God of order who sustains the cosmos by his powerful word, which is precisely why we use the odd little term: "LAW of nature." Miracles, as C S Lewis often pointed out, will only work as signs pointing beyond the usual course of the world if there is such a usual course; i.e., a world in which miracles are possible is ALSO a world in which science is possible.) kairosfocus
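As flagged at point (f) above, a minimal sketch of the perturbation test. The sentence, the tiny word list and the all-words-recognised criterion are crude illustrative assumptions standing in for a real functional test:

import random

SENTENCE = "the quick brown fox jumps over the lazy dog and runs back home"
WORDS = set(SENTENCE.split()) | {"a", "an", "it", "is", "was", "on", "in"}

def functional(text):
    # Crude specification: every token must be a recognised word.
    tokens = text.split()
    return bool(tokens) and all(t in WORDS for t in tokens)

def perturb(text, flips):
    # Inject noise: replace `flips` randomly chosen characters with random letters.
    chars = list(text)
    for i in random.sample(range(len(chars)), flips):
        chars[i] = chr(random.randrange(97, 123))   # random lowercase letter
    return "".join(chars)

for flips in (0, 1, 2, 4, 8, 16):
    trials = 2000
    survived = sum(functional(perturb(SENTENCE, flips)) for _ in range(trials))
    print(f"{flips:2d} random character changes -> "
          f"{100 * survived / trials:5.1f}% of perturbed strings still pass the test")

The survival fraction collapses after only a handful of random character changes, which is the island-of-function behaviour the comment describes.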
Arkady: Thanks. Given your likely background, do you have some counsel for us? I would love to hear it. GEM of TKI kairosfocus
Historical footnote. kairosfocus
KF: "(H’mm, anniversary of the German Attack in France in 1940)" is that just an aside for the history buffs here or is it more significant? paragwinn
This situation has occurred, I think, because this is not about science, but about maintaining control of a system that depends upon accepting a consensus of opinion without question. Unfortunately, it's a cultural hegemony. Lay-persons are as dependent on the system as are the hard-line professional contenders for it. I suspect this is because the philosophical implications of the consensus are congenial to the majority stakeholders' assumptions about the nature of life at a social level. In my opinion, with the exception of a small minority, this is unlikely to change, even in the presence of overwhelming evidence. Anyone with a substantial scientific argument for design will likely, and sadly, be subjected to the same types of attack, and the virulence of those attacks will likely be in direct proportion to the strength of the evidence supporting the design inference. arkady967
