
Falsification of certain ID hypotheses for remotely controllable “fair” dice and chemical homochirality


Even though I’m an Advantage Player, I would never dream of hosting illegal dice games and fleecing people (I swear never, never). But, ahem, for some reason I did take an interest in this product that could roll 6 and 8 at will!

[youtube 3MynUHA6DTs]

Goodness, that guy could earn a mint in the betting on 6 and 8! 😈

The user can use his key chain remote to force the dice into certain orientations. As far as I know, the dice behave as if they are fair when the remote control is not in use. For the sake of this discussion, let us suppose the dice roll fairly when the remote control is not in use.

Suppose for the sake of argument I made this claim: “CSI indicates intelligent agency.”

Suppose further someone objected, “Sal, that’s an unprovable, meaningless claim, especially since you can’t define what ‘intelligent agency’ is.”

I would respond by saying, “For the sake of argument, suppose you are right. I can still falsify the claim of CSI for certain events, and therefore falsify the claim of intelligent agency, or at least render the claim moot or irrelevant.”

Indeed, that is how I can assert that, in specialized cases, the ID claim can be falsified, or at least rendered moot, by falsifying the claim that an artifact or event exhibited CSI to begin with.

To illustrate further, suppose hypothetically someone (let us call him Mr. Unsuspecting) was unfamiliar with, and naïve about, the fine points of high-tech devices such as these dice. One could conceivably mesmerize Mr. Unsuspecting into thinking some paranormal intelligence was at play. We let Mr. Unsuspecting play with the dice while the remote control is off, and thus Mr. Unsuspecting convinces himself the dice are fair. Say further that Mr. Unsuspecting hypothesizes: “if the dice roll certain sequences of numbers, a paranormal intelligence was in play.”

We then let the magician run the game and “magically” call out the numbers before the rolls: 6 8 6 8 6 8 ….

When the remote control is running the show, the distribution function changes as a result of the engineering of the dice and the remote control mechanism. The observer thus infers CSI by comparing the actual outcome (6 8 6 8 6 8….) against the equiprobable chance hypothesis.
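To make the distribution-dependence concrete, here is a minimal sketch in R. It assumes the bet is on the sum of two six-sided dice, and it assumes, purely for illustration (the video gives no figure), that the remote-controlled dice hit the called sum 95% of the time:

bits  <- function(p) -log2(p)                             # surprisal in bits
p_sum <- function(s) sum(outer(1:6, 1:6, "+") == s) / 36  # fair two-dice pmf

n_rolls <- 20                  # twenty called rolls: 6 8 6 8 ...
n_rolls * bits(p_sum(6))       # ~57 bits of surprisal if the dice were fair
n_rolls * bits(0.95)           # ~1.5 bits under the assumed rigged distribution

The same outcome scores as wildly improbable under the fair-dice distribution and as nearly certain under the rigged one, which is exactly why the CSI verdict hangs on which distribution the observer assumes.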

The magician then explains what was really going on: no paranormal intelligence was involved. Hence Mr. Unsuspecting’s original hypothesis of a paranormal intelligence was falsified; there was no paranormal intelligence as he had initially supposed.

It would be fair to say Mr. Unsuspecting should then formulate an amended CSI hypothesis, given that the whole charade was intelligently designed with modern technology, and further that the designer of the charade was available to explain it all. Mr. Unsuspecting’s original distribution function (equiprobable outcomes) was wrong, so he inferred CSI for the wrong reasons. His original inference to CSI is faulty not because his conclusion was incorrect (in fact his conclusion of CSI was correct, but for the wrong reasons) but because his inferential route was wrong. Further, his hypothesis of the paranormal designer was totally false; a more accessible human designer was the cause.

The point is that the original hypothesis of CSI, or any claim that an object evidences CSI, can be falsified or amended by a future discovery, at least in principle. The whole insistence by Darwinists that IDists get the right distribution before making a claim is misplaced. Claims can be put on the table to be falsified or amended, and there could be many nuances that amend the reading of the situation in light of new discoveries.

IDists can claim that Darwinian evolution in the wild today will not increase complexity on average. They can say that an observed increase in complexity in the present day would falsify some of ID’s claims about biology. That claim can be falsified. FWIW, it doesn’t look like it will be falsified; it’s actually being validated, at least at first glance:
The price of cherry picking for addicted gamblers and believers in Darwinism

Suppose we presumed some paranormal or supernatural disembodied intelligence was responsible for homochirality in the first life. If some chemist figures out a plausible route to homochirality, then the CSI hypothesis for homochirality can be reasonably, or at least provisionally, falsified, and hence the presumed intelligent-agency hypothesis for the homochirality of life (even if intelligence is poorly defined to begin with) is also falsified.

Does it bother me that the CSI of homochirality could be falsified? Yes, inasmuch as I’d like to know for sure the Designer exists. But I’m not betting on its falsification anytime soon. And formally speaking, there could have been a Designer designing the laws of chemistry, so even if the original CSI hypothesis was formulated with the wrong distribution function, there could still be an Intelligent Designer involved….

The essay was meant to capture the many nuances of the Design debate. It’s far more nuanced than I supposed at first. That said, I’d rather wager on the Designer than Darwin, any day…

ACKNOWLEDGEMENTS

RDFish for spawning this discussion. Mark Frank and Elizabeth Liddle for their criticism that other distribution functions are possible rather than just a single presumed one. And thanks to all my ID colleagues and supporters.

[Denyse O’Leary requested I post a little extra this week to help alleviate the news desk. I didn’t have any immediate news at this time, so I posted this since it seemed of current interest.]

Comments
"And as to wd400's demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms."
Nah, it was an attempt to make you stop for just one second and see that you can't justify the things you've been saying about the power of log-transforming a number. I'm sure those onlookers you are so concerned about can see that well enough, so I'll take my leave from this. wd400
KF,
I used the known relationship from Info to probabilities to infer the relevant probabilities based on non-intelligent stochastic processes... I took time to go back to the root of the situation, something studiously dodged above, and grounded the fact that under abiotic circumstances we normally see racemic forms of organic molecules formed.
Whether we are talking about evolution or OOL makes no difference. Pure chance and design do not exhaust the possibilities. Evolution is obviously more than pure chance since selection is nonrandom. But OOL is also nonrandom, because chemistry is not the random assembly of atoms into molecules. CH4 is a possible molecule; CH6 isn't. Chemistry involves nonrandom rules and very strong nonrandom electrical forces. You can't model it with a flat distribution.
Blind statistics based on biases will lead to gibberish with high reliability...
True, and that's a pretty accurate assessment of your argument.
Now as for what the chance based hyps are, obviously they are blind search mechanisms, if design is excluded...
Untrue. Neither evolution nor OOL is a blind search. Blind search is when you pick search points completely randomly out of the entire search space, then turn around and do the same thing again. In evolution, by contrast, you start from wherever you are in the search space and search only those areas that are within the reach of mutation -- a tiny subset of the entire search space. If any of those small areas contains a viable configuration, then you repeat the process, starting from that configuration and searching only the tiny subset of the search space that is reachable from it by mutation. It's highly nonrandom and nothing like a true blind search, though there is a random component to it. OOL is the same. You don't pick a spot in the search space by taking a large number of atoms at random and blindly throwing them together, then repeating the process. You start from whatever molecules you already have, and you see which tiny portions of the search space you can reach from there. Then you repeat the process. There's randomness involved, but you are not searching the entire space -- only a tiny subset. All of your emphasis on the gargantuan size of the search space is therefore misplaced. It's not the overall size of the space that matters, but the size of the space being searched at each step.
Now as to specifics, it is well known that evolutionary mechanisms warranted from empirical grounds relate to chance variations at mutation level and at expression and organisation level...
Mutations are random with respect to fitness. Selection isn't.
And as to wd400's demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms.
He's asking you to enumerate the contents of H because it is apparent that you have neglected to include anything but pure chance. The fact that you won't answer his question is an implicit admission that you cannot justify your CSI and P(T|H) values.
In any case the info to antilog transformation step says in effect that per the statistics [especially redundancy in proteins and the like that reflect their history and however much of chance processes have happened, and whatever survival filtering happened that is traced in the statistics] the information content implies that BLIND processes capable of such statistics will face a probabilistic hurdle of the magnitude described.
Evolution and OOL are blind, but not blind in the way you are using the term above. See my remarks above on why evolution and OOL are not blind searches. keiths
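keiths's contrast between blind search and mutation-limited search can be made concrete with a toy sketch in R. It is illustrative only: "fitness" here is just the count of 1s in a binary string and stands in for nothing biological.

L <- 100
fitness <- function(x) sum(x)

blind_search <- function(max_tries = 1e5) {
  for (t in 1:max_tries) {
    x <- sample(0:1, L, replace = TRUE)   # fresh random point every try
    if (fitness(x) == L) return(t)
  }
  NA   # all-ones is 1 string in 2^100: blind draws never find it
}

local_search <- function() {
  x <- sample(0:1, L, replace = TRUE)
  steps <- 0
  while (fitness(x) < L) {
    y <- x
    i <- sample(L, 1)
    y[i] <- 1 - y[i]                      # single-site mutation
    if (fitness(y) >= fitness(x)) x <- y  # selection: keep if no worse
    steps <- steps + 1
  }
  steps   # typically a few hundred steps despite the 2^100-point space
}

The blind version samples the whole space anew on each try; the local version only ever examines the one-mutation neighbourhood of its current string, which is keiths's point about the size of the space actually being searched at each step.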
Onlookers: it would be amusing, if it were not so sad, to see the red herrings and strawman arguments above. Since I am busy elsewhere, I will note that I first started from the empirically known fact, the info content of biomolecules, which can be evaluated per Durston et al, as published six years ago. I used the known relationship from Info to probabilities to infer the relevant probabilities based on non-intelligent stochastic processes (and took time to lay out a more familiar illustration of why it is legitimate to do this).

I took time to go back to the root of the situation, something studiously dodged above, and grounded the fact that under abiotic circumstances we normally see racemic forms of organic molecules formed. That applies to the OOL challenge, and I then worked on just one facet of the info present there, leading to the unsurprising answer: not credible by blind chance and mechanical necessity. We therefore have the only known, empirically grounded, reliably known cause of FSCO/I sitting at the table from first life on up. I pointed out the utter absence of empirical evidence for formation of protein families by observed Darwinian processes. I took the generic route of applying the known tracers of chance processes of relevant kinds in an information-bearing outcome.

Blind statistics based on biases will lead to gibberish with high reliability on the gamut of our solar system, for reasons laid out above: deep isolation of zones of FSCO/I in config spaces, as parts are required to be matched, arranged and coupled correctly to work, but there is no good way to blindly match search to locations of functional zones. The logical answer is the one unacceptable to the objectors for fundamentally ideological reasons, as also outlined and cited above.

Now, as for what the chance-based hyps are, obviously they are blind search mechanisms, if design is excluded, so they will by definition fit in with chance; whether flat random or biased towards particular zones makes little difference once we realise that we are dealing with hundreds of proteins, linked hundreds of DNA specifications, huge amounts of regulatory and co-ordinating info and more. Search for search will get you every time. The only viable way that blind mechanisms worked is if they were well matched to the spaces, and those would again point to design.

Now, as to specifics, it is well known that evolutionary mechanisms warranted on empirical grounds relate to chance variations at mutation level and at expression and organisation level; I think it was 47 engines of variation suggested some years back. These are not mysterious, and there is no warrant to go off imagining unknown major mechanisms. The whole incrementalist scheme then founders on the point that we know we are credibly dealing with islands of function, deeply isolated in the config space. So, we have to bridge info gaps of 10 - 100+ mn bits dozens of times over. Well beyond the search capacity of the observed cosmos.

Until the chance and necessity advocates can bridge the gap from ponds with plausible chemistry to life forms, they are dead on arrival. Until they can bridge the gaps from simple cells to complex body plans, they are dead again coming out of the starting gates. Beyond that, all else is making stuff up out of thin air. A notorious Darwinist speciality. And as to wd400's demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms.
In any case the info to antilog transformation step says in effect that per the statistics [especially redundancy in proteins and the like that reflect their history and however much of chance processes have happened, and whatever survival filtering happened that is traced in the statistics] the information content implies that BLIND processes capable of such statistics will face a probabilistic hurdle of the magnitude described. A hurdle consistent with the finding that such is beyond the available atomic and temporal resources in our solar system. KF kairosfocus
Onlookers- keiths the totally clueless:
KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
keiths, no one can demonstrate blind and undirected chemical processes can produce ONE protein on a planet that didn't have any. IOW it is just as I said: you clowns cannot give us any numbers for the probability wrt your position. Joe
KF:
And the precise point of the log-antilog process is that it allows us to move from info content ascertainable empirically to the relevant probability that you had made ever so much rhetorical hay out of, which now turned out to be a straw hut in a storm.
No, it just allows us to add logs instead of multiply probabilities, and then convert the result into a probability again at the end. If a probability is information, it will remain information whether or not you transform it into bits. If it isn't, transforming into bits won't turn it into information. And it certainly won't transform one hypothesis into another. Elizabeth B Liddle
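Elizabeth's point is easy to verify numerically; a three-line sketch in R, with arbitrary example probabilities:

p    <- c(5/36, 1/2, 0.001)          # any probabilities at all
bits <- -log2(p)                     # transform each into information
all.equal(2^(-sum(bits)), prod(p))   # TRUE: adding bits = multiplying p

Whatever hypothesis H produced the p values going in is the same H governing the probability coming out; the transform never touches it.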
KF:
What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of. It is nothing like the much lower value that the likes of Dawkins have wanted to suggest as accessible by incrementalism per alleged Darwinian mechanisms.
So the way you compute P(T|H), where H takes into account "Darwinian and other material mechanisms", is to compute it for H = random independent draw, and then increase it to a "reasonable" value, but not by as much as Dawkins would? And then conclude that the negative base-2 log of your P is greater than 500, and therefore we can reject H? This seems to boil down to a mere assertion that your reasonable value of P is more reasonable than my (or Dawkins') reasonable value of P! If not, and it is the result of an actual calculation, could you please show me HOW you adjust P to take into account Darwinian and other material mechanisms. I don't mind how approximate your values are, but I do want to see how you derive them from empirical data, and where you plug them into your calculation. Thanks. Elizabeth B Liddle
KF, if you think the log-transform gets you to P(T|H), what is "H"? wd400
KS: Your rhetorical twist-about stunts have provided no answers to the matter on the merits. And the precise point of the log-antilog process is that it allows us to move from info content ascertainable empirically to the relevant probability that you had made ever so much rhetorical hay out of, which now turned out to be a straw hut in a storm. Good bye. KF kairosfocus
Onlookers, note the continued bluffing from KF:
By now it should be obvious that by doing an analysis of protein families and the redundancies involved in these, Durston et al published empirically grounded measures for info content of same. This boils down to doing a measure of info content per symbol in light of redundancies across observed life forms.
All of which is completely irrelevant to the problem of determining P(T|H), which is the probability of obtaining the protein family in question by all possible "Darwinian and material mechanisms."
Next, it is not even an issue that information in bits is a metric of form I = – log_2 p.
Exactly right. As we've told you repeatedly, there is nothing magic about taking a log or an antilog. The information remains the same, just expressed in a different form.
What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of.
But your I value isn't credible at all because it does not take all "Darwinian and material mechanisms" into account. To come up with a credible I value, you need a credible P(T|H) value. To obtain a credible P(T|H) value, you would need to
...pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
KF:
Materialist rhetorical game over.
Do you think anyone is buying your bluff, KF, or that an outpouring of words can obscure the gaping hole in your argument where P(T|H) is supposed to be? Dualist obfuscatory and evasive game over. keiths
Onlookers: By now it should be obvious that by doing an analysis of protein families and the redundancies involved in these, Durston et al published empirically grounded measures for info content of same. This boils down to doing a measure of info content per symbol in light of redundancies across observed life forms. Thus, we have in hand a somewhat more nuanced metric than the 4.32 bits per character that a flat random assumption would have given for 20 possibilities per AA, due to redundancies. So when EL suggests that I am inferring 4.32 bits/character, she is flat wrong; that would be the null state, TWO analytical states away from their functional state. Her error is a typical example of the strawman tactic I pointed out above.

Next, it is not even an issue that information in bits is a metric of form I = - log_2 p. What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of. It is nothing like the much lower value that the likes of Dawkins have wanted to suggest as accessible by incrementalism per alleged Darwinian mechanisms. Of course, at no point has the Darwinist establishment actually empirically shown a typical-sized protein of say 300 AA in a novel fold domain -- these are known to be deeply isolated islands in AA sequence space -- originating by blind chance and mechanical necessity in either the pre-life or the body-plan evo settings. I can comfortably say that, noting as a quick cross-check the absence of the Nobel Prize for doing that. (So, they are already blowing blue smoke and using trick mirrors.)

Next, EL and KS ignore the root case, the pivotal one, which I addressed already. OOL: no root, no shoot, no tree. I simply took up the massive fact of homochirality and geometric dependence leading to key-lock fitting as a major feature of life chemistry. That alone, on the fact that non-biosyntheses generate the thermodynamically expected result, racemic mixes, gives us 1 bit per monomer, roughly. (Glycine is achiral.) So we can easily see that a typical protein of 300 AA, or the RNA that codes for it, will dispose of 300 or 900 bits. And we need hundreds in proximity, properly arranged and coupled, for life. Simply on chirality, OOL is inexplicable on blind chance and mechanical necessity. That leaves only one empirically grounded source of high contingency on the table. Which also happens to be the only empirically grounded source of FSCO/I: design. Design is sitting at the table, from the root on up.

When it comes to body-plan level macro evo, you need new cell types, tissues, organs, organisation and phased regulation. That pushes up genome size -- an index of the info required -- by some 10 - 100+ mn bits per main body plan, as can easily be checked. And we have the well-known point that complex functions that depend on particular organisation are naturally tightly coupled and specific; that is, you have to get a lot of things just right or no function emerges. This implies isolated islands of function and, again, high info costs, which transform over to very low probabilities. This is already evident at the level of making required proteins from AA monomers. All of this is multiplied by a known feature of searches: in hard-to-find cases, searches have to be well matched to the space or they will on average be no better than blind random walks across a config space. Where search for search is a much harder problem, in the space of possible searches of a space -- a much larger space.

In short, blind searches tend to fail in cases where there is a lot of hay, needles are isolated, and there are few resources. (This we have strongly shown with the 500 H toy example, and predictably the usual objectors refuse to acknowledge the massively evident point.) Multiply by the lack of evidence of a vast continent of functional forms bridgeable in incremental steps. (The Cambrian life revo is just one case among many; there simply is not the sort of overwhelming abundance of transitionals that should be there on Darwinian implications, in a context where the sampling by now should be good enough to capture dominant patterns. A relatively few cases that through the eye of Darwinist faith -- which demonstrably often runs in circles -- could be construed as transitionals have got headlines out of all due proportion to the weight of the evidence.) In short, we have good reason to see that the high info content of functional forms and organisation for complex body plans is also real. And this again transforms over to exceedingly low probabilities.

Maybe this would help. Back in the days when Morse made up his code, he talked with printers about the frequency statistics of English, and learned from the counts they used to tell them how many E's and Z's etc. they needed. Now imagine that on any reasonable blind search algorithm in such a printer's case of letters, a text was composed letter by letter, with replacement. A flat random sample would have the statistics of typical English, but would be predictably gibberish. A biased sample would have different statistics, but again would be predictably gibberish, non-functional. In either case, the statistics of the text would reflect the probability patterns -- the relevant chance hyps -- at work. Now, impose a criterion that if legitimate words are formed they can be kept. This will easily give us things like is, and, the, etc., but not so easily longer words. Now, we can then try combining words at random in various ways. Predictably, this will fail to give us significantly functional long texts unless there is some sort of targeting. And if words may decay, and garbled texts with short words in them are lost, we run into the barriers of isolation of complex structures with functional specificity. This extends to computer code, and so to genomes, which have that sort of code. In other words, the underlying point that the statistics of the result will reflect the chance processes that are relevant is reasonable. And the basic problem the info metrics are trying to tell KS and EL etc. about is also underscored.

Of course, the evidence is that first viable life forms need genomes of order 100 - 1000 kbits. If EL has an empirically grounded, observed counter-example, then let her put it on the table: _________________ Prediction: she cannot. Similarly, if she has an empirically warranted -- observed -- case of blind chance and mechanical necessity generating the FSCO/I for a successful novel body plan, let her give it: ________________ Prediction: she cannot. The same for KS. This very post is yet another among billions, nay trillions, of examples that FSCO/I is routinely and habitually observed to be produced only by design. The inductively well-warranted conclusion (subject to empirical counter-example, but not speculations or question-begging or ideological a prioris) is that FSCO/I is a reliable empirical sign or marker of design as cause. Materialist rhetorical game over. KF kairosfocus
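KF's printer's-case illustration is easy to simulate; a rough sketch in R, using approximate English letter frequencies for a handful of common letters (illustrative values, not a full table):

ltrs  <- c("e","t","a","o","i","n","s","h","r","d")
freqs <- c(12.7, 9.1, 8.2, 7.5, 7.0, 6.7, 6.3, 6.1, 6.0, 4.3)
paste(sample(ltrs, 60, replace = TRUE, prob = freqs), collapse = "")
# e.g. "tnaeoshieath..." -- English-like letter statistics, but gibberish

Which is the uncontested half of the example: matching the letter statistics of English does nothing, by itself, to produce functional text.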
Kairosfocus:
KS: Why do you insist on making a strawman?
It's not a "strawman" - we are not attacking a misrepresentation of your hypothesis, we are asking you fix your misrepresentation of ours! What you are, formally, rejecting, when you assume random draw as your null, is not the hypothesis anybody proposes. In other words you really have rejected a "strawman". I'm not clear whether you really think that the Darwinian hypothesis is random draw, or whether you somehow think that something you've done "automatically" extends your null to include "Darwinian and other mechanisms". But it doesn't. Just consider, for a couple of minutes, the possibility that you have made an error here.
The pivotal issue is that the information is empirically evaluable [as Durston et al published six years ago], which gives us a basis to address the problem scientifically, once transformed into information.
No. Durston et al computed the sizes of subsets of functional proteins and, using reasonable assumptions regarding the distribution of amino acids in functional proteins and the distribution of sequences among functional proteins, computed what the probability of each protein would be on the assumption of "random independent draw", i.e. N(functional variants)/N(possible comparable sequences). They then took the negative base-2 log of this value and called it "Fits". Fine. But those Fits represent the probability of getting such a protein by random independent draw, NOT the probability that such a protein will evolve by "Darwinian and other material mechanisms". The log transform makes no difference; it simply allows you to add instead of multiply.
I did so, and on seeing how insistent you all were on evaluating p(T|H) I showed how the empirical data allows that to be done by back-transforming the info metric into a probability metric;
No. Sure, you can get the p back by untransforming the log transform, but that doesn't alter what that p is a probability OF! It still doesn't give you p(T|H) where H is the hypothesis that the sequence arose by "Darwinian or other material mechanisms"! It is still the probability of getting that sequence by independent random draw. Or do you think that "Darwinian and other material mechanisms" is the same as "random independent draw"? Elizabeth B Liddle
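A numerical gloss on the Fits definition Elizabeth describes, as a sketch: the 242-AA length and 832-Fits figure are the RecA values quoted elsewhere in this thread, and the equiprobable 20-letter amino-acid alphabet is a simplification.

log2_space <- 242 * log2(20)   # ~1045.9 bits of raw sequence space
fits       <- 832              # Durston's published RecA value
log2_space - fits              # ~213.9: log2 of the functional subset's size
# Fits = -log2(N_functional / N_possible): a rarity measure under random
# independent draw, not P(T|H) over Darwinian mechanisms.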
I wonder if KF has misread or misunderstood Durston, and thinks that because they considered the possibility that the amino acids are not equiprobable, and also noted that functional sequences are not significantly different from those that would be produced by random independent draw, they have "automatically" dealt with the issue of whether H need be anything other than random independent draw (of course Durston et al do not claim in that paper that 500+ Fits renders a protein unevolvable). Hence my toy example. A random independent draw is a random independent draw, whether or not the items in the draw are equiprobable, and whether or not the target sequences are indistinguishable from randomly independently drawn sequences (in fact I randomly drew them from their respective distributions). If KF wants to reject the null of "Darwinian and other material mechanisms", as Dembski specifies, he needs to do keiths's calculation, not use "independent random draw". And why he thinks that log-transforming the p values into bits does any more for the problem than z-transforming them into sigmas is really beyond me. It's just a rejection criterion however you transform it. Elizabeth B Liddle
KF, if you weren't bluffing, then you would be able to do what I asked:
KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
You can't do it. keiths
KS: Why do you insist on making a strawman? The pivotal issue is that the information is empirically evaluable [as Durston et al published six years ago], which gives us a basis to address the problem scientifically, once transformed into information. I did so, and on seeing how insistent you all were on evaluating p(T|H) I showed how the empirical data allows that to be done by back-transforming the info metric into a probability metric; and how it underscores the needle in haystack search conundrum. Whatever the history of life is, and however much of chance processes were involved, the information based on the statistics of protein families enfolds that, especially the degree of redundancy found in locations on the AA string, which is what is brought out in Durston's move from null to ground to functional state. It shows what can change, what cannot -- based on the actual result of origin of diverse life forms across the history of life, and how much difference that makes. The end result is that we are looking at needle in haystack searches beyond the reasonable capacity of our solar system, just for protein molecules. Beyond, when it comes to assembling a living cell and getting it to be a code based self replicating system, the problem just gets worse and worse. And to move further beyond that to the much larger information gaps implied by complex body plans, that is much worse yet again. Though you will refuse to acknowledge it, P(T|H) rhetorical game over. KF kairosfocus
KF to Lizzie:
On the matter in hand: changing the subject away from first pretending that the un-addressed elephant in the room -- recall your attempted cute rendering of P(T|H) as elephant? -- was P(T|H), then refusing to accept that this is a transform away from an info metric that is more tractable, then now refusing to accept that it is a reasonable step of analysis to work back from that which is observable, through a transform relationship, to that which was deemed a puzzle, simply cemented your discredit.
KF, You're really floundering here. Every mathematically competent person reading this knows that taking a logarithm (or an antilogarithm) does not magically solve your problem. It just represents the same information in a different form. You need P(T|H) in order to determine CSI. It's right there in the equation. Dembski himself specifies that P(T|H) must encompass all "Darwinian and material mechanisms". If you can't come up with a P(T|H) value (or upper bound) based on all "Darwinian and natural mechanisms" and justify it, then the game is over. An ersatz P(T|H) based on random draw won't cut it. You're bluffing, and it isn't pretty. keiths
Dr Liddle: Your predictable refusal to acknowledge any point made by a design thinker at this stage has zero impression on me, especially coming from someone who has been harbouring slander, then denying it, then defending it, then pretending that nothing is wrong. Going further, the response to the case of discovering a box of 500 coins all reading H tells us that you are unwilling to acknowledge the fundamental challenge of searching for a needle in a haystack.

On the matter in hand: changing the subject away from first pretending that the un-addressed elephant in the room -- recall your attempted cute rendering of P(T|H) as elephant? -- was P(T|H), then refusing to accept that this is a transform away from an info metric that is more tractable, then now refusing to accept that it is a reasonable step of analysis to work back from that which is observable, through a transform relationship, to that which was deemed a puzzle, simply cemented your discredit. Your current talking point that I am not answering issues you have raised is to be seen in that light, and it carries no weight.

The subsequent exercise on balls is simply a red herring led away to a strawman. (And BTW, until the pool of undrawn balls is very large indeed, knowing the size of the pool makes a potentially significant difference. But I am not playing along with your latest red herring. The real issue on the table is back to where it was two years ago. WmAD's 2005 metric can be shown to be transformable into a measure of info beyond a threshold. You have spent months trying to hammer away at the pretence that until one can assess P(T|H) one has nothing. I have shown that the information is measurable and can then be transformed into P(T|H) if desired. So, P(T|H) is the steep-cliff solution, when a transform away is an easier and empirically more tractable approach. In addition, I have shown how for OOL, the pivotal case for both design and Darwinian tree-of-life evo -- no roots, no shoot and no tree -- we do have a pretty good idea of the relevant distributions for chirality, as a beginning, and the result already shows us that first life is not plausibly a product of blind chance and mechanical necessity, while design is known to produce FSCO/I. Beyond that, we have at least outlined how chance-based searches are the pivot of the proposed increase of bio-info, which runs into both the islands-of-function problem and the search-for-search problem. S4S points to how, on average, no blind search will be better than simple blind chance, which takes us back to the same point. And remember, chirality is distinct from protein families, and each such family is distinct. They are all pointing, as convergent compass needles, to a common pole. Design is the best explanation of the world of life, but that is where you are determined not to go.)

The conclusion has been written; you are just looking for an argument to make it seem plausible to the Darwinist choir. Rhetorical game over. G'day. KF kairosfocus
KF, really, you have NOT "adequately answered" this objection. That's why we keep asking you. I think we have somehow not managed to make our objection clear. The reason I think this is that you keep presenting answers to objections we haven't actually made. Let me try to clarify by asking some questions.

Let's say that we have a large bag of coloured balls: red, blue, green and yellow. Let's say that there are an equal number of each colour in the bag. Let's also say that I have written on a piece of paper the following five-ball sequences:

'Red', 'Blue', 'Yellow', 'Red', 'Red'
'Red', 'Green', 'Green', 'Green', 'Blue'
'Green', 'Blue', 'Green', 'Red', 'Green'
'Red', 'Blue', 'Green', 'Yellow', 'Red'
'Yellow', 'Yellow', 'Blue', 'Blue', 'Blue'
'Blue', 'Green', 'Green', 'Yellow', 'Yellow'
'Green', 'Blue', 'Yellow', 'Green', 'Blue'
'Yellow', 'Yellow', 'Green', 'Green', 'Green'
'Red', 'Blue', 'Blue', 'Red', 'Yellow'
'Red', 'Red', 'Red', 'Red', 'Blue'

I invite you to pull balls five at a time out of the bag, then put them back and shake the bag. So my first question is: what is the probability that you will draw one of the above sequences on any one trial?

Now, repeat the exercise, only this time there are twice as many red balls as any other, and the sequences I have written on my piece of paper are these:

'Yellow', 'Blue', 'Red', 'Red', 'Blue'
'Blue', 'Red', 'Green', 'Green', 'Red'
'Red', 'Red', 'Red', 'Blue', 'Blue'
'Red', 'Red', 'Yellow', 'Red', 'Yellow'
'Green', 'Blue', 'Blue', 'Red', 'Blue'
'Yellow', 'Blue', 'Blue', 'Red', 'Blue'
'Green', 'Green', 'Red', 'Red', 'Yellow'
'Red', 'Yellow', 'Yellow', 'Green', 'Red'
'Red', 'Red', 'Green', 'Red', 'Green'
'Red', 'Green', 'Blue', 'Green', 'Green'

Again, what is the probability that you will draw one of these sequences on any one trial? Final question: are both exercises examples of "random independent draw", despite the fact that in the first, each colour is equiprobable, and in the second, reds are more probable than any other? Elizabeth B Liddle
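For readers who want the arithmetic, here is a sketch in R working both of Elizabeth's draws (sequences copied from her comment above):

# Case 1: four equiprobable colours, ten target sequences of length 5
10 * (1/4)^5                                   # = 10/1024 ~ 0.0098

# Case 2: reds twice as likely, so P(R) = 2/5 and 1/5 for each other colour
seqs <- list(
  c("Y","B","R","R","B"), c("B","R","G","G","R"), c("R","R","R","B","B"),
  c("R","R","Y","R","Y"), c("G","B","B","R","B"), c("Y","B","B","R","B"),
  c("G","G","R","R","Y"), c("R","Y","Y","G","R"), c("R","R","G","R","G"),
  c("R","G","B","G","G"))
p_seq <- function(s) prod(ifelse(s == "R", 2/5, 1/5))  # independent per-ball
sum(sapply(seqs, p_seq))                       # = 46/3125 ~ 0.0147

Both computations are straight products of independent per-ball probabilities, which is her final point: unequal colour weights do not stop a draw from being a random independent draw.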
KS: You are simply repeating adequately answered objections and ducking the fact that I ties back to the P(T|H) you tried to use as a rhetorical bludgeon and roadblock. But I is pretty easily accessible by inspection and evaluation of empirical observations. (Indeed, the use of families of proteins is in reality a refinement, to try to factor in redundancies as being of less info.) Once we do have an info metric, we can then evaluate back to the relevant probability using a straightforward relationship. That probability automatically brings to bear whatever actual chance/noise influences were at work, and also allows us to evaluate the chance of something with that sort of message statistics coming about by chance in any of its forms. The answer is well below 1 in 10^150. Rhetorical game over. KF kairosfocus
keiths:
There are many possible evolutionary pathways to the same endpoint.
How many blind watchmaker pathways? And how did you determine that? THAT is the problem: you equivocate, pontificate, and never, ever produce any evidence to support your tripe. Joe
keiths:
Your derivation doesn’t take all “Darwinian and material mechanisms” into account, as required by the definition of CSI.
No one can demonstrate that “Darwinian and material mechanisms” can construct anything. YOU can't even produce a replicator from those processes, keiths. And you sure as hell cannot produce anything else from a simple replicator. Evidence, keiths, your position doesn't have any to support it. Joe
KF,
KS is refusing to see that there is more than one way to skin a catfish.
No, I'm suggesting that a hand drill isn't the right tool for the job.
Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families. So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted.
You're bluffing, KF, and I'm calling you on it. Your vague and evasive wording gives you away:
Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families.
Not true. There are many possible evolutionary pathways to the same endpoint. All of them must be considered and quantified even though only one of them actually occurred. It's even worse than that, in fact, because you, as an IDer, don't think that any of them actually happened. Your statement directly contradicts the ID hypothesis!
So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted.
Same error. According to you, design acted, not evolution. KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value. You can't do it. You're bluffing. keiths
PS: I should add one more thing. Materialistic evolutionary mechanisms come in two main stripes, pre-life and post-OOL. Pre-OOL, we are looking at chemical dynamics and physics, which pose so many challenges to forming a living cell, directly or indirectly, that apart from paper fantasies and their computer kin there is no serious answer. Just the chirality challenge alone is enough to scupper them. And that is just the beginning.

Once we are dealing with cell-based life, we then have the point that chance variation [non-foresighted variation] is the creative means, and the consequence of injecting such into an environment is differential reproductive success. This last is not an active agent; it has no creative powers, even in hopes. It simply means that what is not fit is subtracted from the population. An information sink, not a source. So we are left with whatever variety of blind search, in a context where, as already highlighted, we have good reason to expect to see deeply isolated islands of function in config spaces that are beyond astronomically large. (Just 1,000 bits, much less 100 - 1,000 kbits or 10 - 100+ M bits, is already beyond the capacity of the observed cosmos. 500 bits is enough to swamp the solar system, and the reality is earth's biosphere is MUCH smaller than that.)

And, since we are looking at blind mechanisms, we face the problem of blind search for a well-fitted search. Where the number of possible searches on a space of size W is hugely more than |W|. Many of those searches will be hopeless or worse than hopeless, in a situation that was not encouraging to begin with. A result we can accept is that a direct flat random search of the space, on average, will be as good as any other. Those who propose a BLIND search, blindly chosen, that is well matched to the space and capable of finding islands of function within relevant resources, need to justify such a claim.

Actually, they don't; there is a tendency to assume a vast continent of incrementally traversable, ever more complex function, and there is a tendency to suggest that incremental hill-climbing is good enough as a result. The first of these is not justified, never mind the brazen attempt to treat it as the default and demand that we show why we have to look at islands of function. (To which the proper response is: look all around you at things that have to work based on multiple well-matched, properly arranged and coupled parts. Starting with things like this post's strings of symbols in ASCII code. Requisites of function tie us down sharply to restricted, specific zones in config spaces. And a solar-system-scope search would in effect be picking from the sun to Pluto, 4 - 7 bn km away. That is a lot of space in which to arrange things.) The second begs the question of getting to shores of function, and massively extrapolates local hill-climbing to a scope that simply lacks empirical warrant.

In short, we have no reason to think that there are going to be blind searches that, for hundreds of genes and proteins etc., will so frequently beat blind flat random chance. Which brings us right back to the relevance of the empirical info-derived metrics above for P(T|H) -- in effect for one protein family among hundreds each -- and their message: nope, blind chance is not credible. (And remember, the method used, which is ex post facto, will factor in relevant chance processes automatically.) KF kairosfocus
Onlookers: KS is refusing to see that there is more than one way to skin a catfish. We have pretty direct empirical access to the info content of key biomolecules, and we have a relationship that links that to the probabilities that he and others have been pushing as a talking point, as they imagine it is an obstacle. All I have done is use the bridge.

Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families. So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted. The message is that the resulting quantum of info is beyond a reasonable plausibility for blind chance and mechanical necessity to have been the decisive input. The only known source for accessing configs that isolated in possibility spaces is imaginative, creative design.

Only, that does not fit the ideology, so it is dismissed as unimpressive. (And BTW, that sort of ideologically loaded "reasoning" by evo mat advocates is much of why design-supportive papers or arguments in general are so often unjustly dismissed as dumb or stupid or blunders or deceptive. A man convinced against his will is of the same opinion still.) Rhetorical game over. KF kairosfocus
PS: I should briefly note why FSCO/I is naturally found in narrow islands in wide config spaces. First, the complexity means the number of Y/N questions to specify the config is large, i.e. we are beyond 500 - 1,000 bits. Second, to function in specific ways we have multiple well-matched, properly aligned, coupled and configured parts. This is easy to see with text such as this post, and since AutoCAD etc. show us that complex 3-d functioning systems can be reduced to strings of descriptive symbols in accordance with a convention, this focus on strings is WLOG. So, as say a car engine shows, or the code string to specify a protein that folds, fits and functions, we will expect specific configuration, which at once locks us into islands in the space of possibilities.

This is actually a further challenge to the Darwinist view, as it imagines a vast continent of function incrementally traversable by steps thrown out at random and filtered for hill-climbing by sub-population competition for scarce resources or the like. Which already points to another problem: time to fix variations given pop sizes, incidence of actually advantageous mutations [the rates issue] and more. In short, we have a theory that can make arguments for variations within islands of function being extrapolated, without adequate warrant on empirical evidence, into a theory presumed practically certain about much larger changes which we have every good empirical reason to see will require jumps across large Hamming distances in spaces of possible configs. KF kairosfocus
KF,
Predictably, not even the actual presentation of the derivation of P(T|H) per the empirical evidence on the information content of protein families makes any impression on the likes of KS.
It actually did make an impression. Just not a good one. Your derivation doesn't take all "Darwinian and material mechanisms" into account, as required by the definition of CSI. As you like to say, GIGO -- Garbage In, Garbage Out. END :) keiths
Onlookers: Predictably, not even the actual presentation of the derivation of P(T|H) per the empirical evidence on the information content of protein families makes any impression on the likes of KS. And now we see the lame argument that, well, we don't know just what the very first life form was like, whether it was homochiral. We have a world of life out there that is based on the key-lock fitting of macromolecules, in a system where that fold-fit-and-function pattern is critically and overwhelmingly dependent on just that one-handedness of the right sort for the key biomolecules, and would be drastically deranged by a shift to racemic mixes on grounds of geometry and resulting forces being wrong for the fold and function by fitting. That is what the empirical evidence points to. But the objection is tossed up in the teeth of that evidence as if it trumps all. In short, that was never the topic, was it. The real topic was that, for such objectors, science is a handy way to promote a priori materialism, by pushing a redefinition that takes advantage of the prestige of science.

Next we see some red herring and strawman debate points on the information content of 500 coins, which, holding 2 states each, have 500 bits of info storage capacity. From this, we can see that we have 2^500 possibilities in a space of possible configs, leading to deep isolation of significant specific forms such as 500 H, or more relevantly 72 or so ASCII characters in English or object code or the like. The point of which is that since the resources of our solar system -- 10^57 atoms and 10^17 s at chem rxn rates -- could only sample as the size of 1 straw to a cubical haystack 1,000 light years across, so that if superposed on our galactic neighbourhood, with all but absolute certainty, we would only pick up the bulk, straw, we have every reason to see why, on uniform, regularly repeated experience, cases of FSCO/I are characteristic signs of design as cause. If one saw a box of coins all H, one would know to empirically warranted practical certainty that it was set like that by design. If the coins were in a line spelling out the first 72 characters of this post in ASCII code, or a Hello World program, etc., we would have high confidence to infer the same.

Now, the next problem is that on the evidence of the world of life we do see (the empirical basis that is supposedly the framework for science), we know that a first cell-based life form -- the only biological C-chemistry life we have evidence of -- credibly requires 100 - 1,000 kbits of genetic info, orders of magnitude beyond the FSCO/I threshold. All that empty speculation about self-replicators and hill-climbing begs the question of getting first to the shores of islands of exceedingly complex function. At best, Darwinian mechanisms have some power to explain some types of micro evo, but none at all to explain OOL -- the root of the tree of life. That is BEFORE the code-based replication systems we see originated, which the evo mat advocate needs to explain too. On empirical evidence. Ducked. As usual. KF kairosfocus
I meant to say, the drift example was motivated by a comment by Liz in another thread. And if you want to see it in action, it's easily coded in the statistical language R:

sim <- function() {
  # start from 500 fair coin tosses
  coins <- ifelse(rbinom(500, 1, prob = 0.5), "H", "T")
  gen <- 1
  # each generation, re-toss all 500 coins with P(H) equal to the
  # current frequency of heads -- drift by resampling
  while (max(table(coins)) != 500) {
    coins <- ifelse(rbinom(500, 1, prob = mean(coins == "H")), "H", "T")
    gen <- gen + 1
  }
  return(gen)  # generations until fixation: all "H" or all "T"
}

wd400
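A quick way to check the timing claim, for anyone trying it (a usage sketch; exact numbers vary with the random seed):

summary(replicate(100, sim()))  # fixation typically within a few hundred
                                # generations, rarely far past a thousand

which matches wd400's description below that runs "usually happen within a thousand sets of coin flips."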
KF, How much information is there in a series of 500 heads or tails from coin flips? So much that you'd be throwing sets of 500 coins forever and never see one. So much, in fact, that you'd rule out that result as plausibly coming from a fair coin, I guess. Now, instead of throwing sets of 500 coins, let's take a random walk. Throw your first 500 coins and record your "H"s and "T"s as usual. But this time your next round of tosses will be influenced by the last. The very special coins you will throw will have a probability of coming up "H" exactly equal to the frequency of "H" in the last round. This may sound far-fetched for coins, but it's exactly how genetic drift works (sampling from a population of gametes). The end result of these runs is always a set of 500 "H"s or "T"s, and instead of taking more than the age of the universe for that result to arise, it usually happens within a thousand sets of coin flips. So, what I'm trying to ask is: does taking the log of p(500_H|random_coin_tosses) help us understand the 500 "H" sequence when we know the hypothesis (random coin tosses) isn't the one that's generating the sequence? That's what you've argued so far, which is very strange. wd400
KF,
First, it does not seem to have registered that I have addressed the root problem, as the decisive case, forming the molecules of life.
It hasn't registered because you haven't done it.
And, thanks to the racemic forms that routinely form in non biological syntheses we DO know the relevant distribution, which already shows that the proposed Darwinian path cannot get going without a blind chance and necessity answer to the origin of the info in just the fact of homochirality.
You haven't established that the original replicator must have been homochiral (or even that it was chiral at all), so you don't know the relevant distribution. Even if you could prove that it was homochiral, you would also have to know the size of the replicator in order to establish the distribution. You don't know that either. Nobody currently knows the distribution. You certainly don't.
That is, the power of the [log] transform allows us to apply an empirical value to what is a more difficult-to-solve problem the other way. Once we do know the info content of the protein families by a reasonable method, we can then work the expression backwards to see the value of P(T|H). And so, lo and behold, we do not actually have to have detailed expositions on H to do so; once we have the information value, we automatically cover the effect of H etc.
KF, logs and antilogs are trivial. It's the same information expressed in a different way. To prove this to yourself, take the log of a number, then take the antilog of the log. You get the original number back. Obviously. So whether you are computing P(T|H) or CSI, you have to consider all "Darwinian and material mechanisms", as Dembski said. There are no shortcuts, and the log transform does not provide any. And since neither you, nor Sal, nor anyone else can calculate P(T|H), you can't make the design inference. At least not rationally. Not to mention the fact, as Lizzie and I keep telling all of you, that the argument from CSI is a circular argument. You already have to know that something couldn't have evolved before you attribute CSI to it. Therefore, using CSI to determine that something couldn't have evolved is useless. It tells you nothing that you didn't already know. It's a waste of time. keiths
PS: Remember the above info metric does not include the additional info included in homochirality. The issues are distinct. kairosfocus
F/N: The above responses by EL and KS leave me shaking my head.

First, it does not seem to have registered that I have addressed the root problem, as the decisive case, forming the molecules of life. And, thanks to the racemic forms that routinely form in non biological syntheses we DO know the relevant distribution, which already shows that the proposed Darwinian path cannot get going without a blind chance and necessity answer to the origin of the info in just the fact of homochirality. For which no such answer is reasonable. The only empirically warranted source for FSCO/I as is required here is design. So, immediately, there is design sitting at the table from the root up.

Next, it does not seem to register that oftentimes a problem that is hard when phrased one way becomes much easier when transformed. In this case, going to information allows us to use the empirically evaluated information values to resolve the matter, e.g. those of Durston et al. As was worked out here, we know we need info beyond a threshold, we have conservative thresholds at 500 - 1,000 bits, and we know the info content of functional protein families of interest. We deduced:

Chi_500 (solar system) = I*S - 500, in bits beyond the solar-system threshold

where also I = - log_2(p).

The answer from the information-threshold expression is, again, that we can easily see that protein families are well beyond the FSCO/I threshold up to which blind chance and mechanical necessity are plausible explanations of the functional molecules of life. Using the three examples from Durston I have commonly cited, and the solar system threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
But that is not all. Since we know the info values empirically, and we know the relationship I = - log_2(P), we can deduce the P(T|H) values for all relevant hypotheses that may have acted, by simply working back from I:
RecA: 242 AA, 832 fits, P(T|H) = 3.49 * 10^-251
SecY: 342 AA, 688 fits, P(T|H) = 7.79 * 10^-208
Corona S2: 445 AA, 1285 fits, P(T|H) = 1.50 * 10^-387
That is, the power of the transform allows us to apply an empirical value to what is a more difficult-to-solve problem the other way. Once we do know the info content of the protein families by a reasonable method, we can then work the expression backwards to see the value of P(T|H). And so, lo and behold, we do not actually have to have detailed expositions on H to do so; once we have the information value, we automatically cover the effect of H etc. As was said long since, but dismissively brushed aside by EL and KS.

And consistently these are probabilities far too low to be plausible on the gamut of our solar system, which is the ambit in which body-plan level evolution would have had to happen. (Indeed, I could reasonably use a much tighter threshold, the resources of earth's biosphere, but that would be overkill.)

Now, do I expect EL and KS to accept this result, which boils down to evaluating the value of 2^-I, as we have I in hand empirically? Not at all; they have long since shown themselves to be ideology-driven and resistant to reason (not to mention enabling of slander), as the recent example of the 500 H coin-flip exercise showed to any reasonable person.

But this does not stop here. Joe is right: there is NO empirical evidence that Darwinian mechanisms are able to generate significant increments in biological information and thence new body plans. All of this -- things that are too often promoted as being as certain as, say, the orbiting of planets around the sun, or gravity -- is extrapolation from small changes, most often loss of function that happens to confer an advantage in a stressed environment, such as under insecticide, or sickle cells that malaria parasites cannot take over. Of course, such is backed by the sort of imposed a priori materialism I highlighted earlier today. What is plain is that the whole evolutionary materialist scheme for the origin of the world of life, from OOL to OO body plans and onwards to our own origin, cannot stand, from the root on up. KF kairosfocus
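For anyone checking the arithmetic in the table above, the back-transform is mechanical; a sketch in R, done in log10 space because 2^-832 underflows ordinary doubles:

fits   <- c(RecA = 832, SecY = 688, CoronaS2 = 1285)
log10p <- -fits * log10(2)              # log10 of P = 2^-I
mant   <- 10^(log10p - floor(log10p))   # mantissa of each probability
expo   <- as.integer(floor(log10p))     # power-of-ten exponent
sprintf("%s: P = %.2f x 10^%d", names(fits), mant, expo)
# "RecA: P = 3.49 x 10^-251" etc., matching the figures quoted above

The conversion itself is just 2^-I; which hypothesis H that probability belongs to is the point contested throughout the surrounding comments.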
It has become very clear that we do not need to determine P(T|H) for all "Darwinian and material mechanisms", because no one from the Darwinian camp can even show that Darwinian and material mechanisms are feasible... Joe
Sal,

The entire premise of this thread is odd. You seem to be saying, in effect:

"I can't actually determine whether X has CSI, because I can't determine P(T|H) for all 'Darwinian and material mechanisms'. However, I can hypothesize that it does by assuming a single distribution and computing CSI from that assumed distribution. I can then infer design. But that's okay, because my hypothesis can be falsified."

Well, sure, you could do that. But who is going to be persuaded if you can't calculate the actual CSI value, or at least establish a believable lower bound for it? If you can't determine P(T|H) (or establish an upper bound on it), then you can't justify your CSI value. If you can't justify your CSI value (or lower bound), then you can't justify the design inference.

P(T|H) is the key, and you can't get P(T|H) by looking at just one distribution (and the associated hypothetical mechanism). You have to consider all possible "Darwinian and material mechanisms", or at least all that have sufficiently high probabilities to make a difference in the final P(T|H). keiths
Lizzie,
A null hypothesis doesn’t magically become a different null hypothesis when you log transform the probability distribution.
You underestimate the power of the Designer. Poof! keiths
Not a "cheap rhetorical shot" at all, KF. You have shown no evidence at all that you understand the fundamental principle of null hypothesis testing, namely, that you must compute the expected distribution of your data under the null you want to reject. If the only null you have computed the expected probability distribution for is "random independent draws", then that is the only null you are entitled to reject if you make an observation that falls in your rejection region. You can't reject all other nulls just because you have rejected that one, and "Darwinian mechanisms" are not "random independent draws". Elizabeth B Liddle
EL: Cheap, predictable rhetorical shot, and wrong. Notice, I am starting with an upper bound on the probabilities / lower bound on the info, based on the well-known constraints imposed by thermodynamics and reaction kinetics at the formation of relevant molecules for OOL. No root, no shoot, and no tree. The challenges get much steeper from there on, and, as you by now know but as usual are denying, a needle-in-haystack sampling challenge like this is decisive. KF kairosfocus
And still no positive evidence to support Darwinian evolution. Joe
As I rethought some issues, I realized this was an instructive illustration for me. Mr. Unsuspecting is like us. We have limited knowledge; we grope around in the dark. The equiprobable hypothesis is reasonable given that Mr. Unsuspecting examined the dice and put forward his best hypothesis for a distribution. It turned out the distribution was wrong in one sense, and right in another. When the intelligent designer in this case wanted to act, he was able to change the distribution, so in a sense the original CSI inference was correct (and yes, I'm somewhat retracting my OP in that sense).

Further, it shows why the inference to who the Designer is, is unwarranted from CSI. Many at UD believe the Intelligent Designer of life is God, but formally speaking that cannot be inferred from CSI. Even if it is a true statement, formally it would be a non-sequitur, where the conclusion does not follow from the limited set of premises.

I felt it was also important to raise the question of which distribution is chosen. I tried to explain that one can start with a working hypothesis of what distribution is correct, or even approximately correct. CSI can be asserted with respect to:

1. a presumed distribution
2. a given recognizable pattern

It does not mean the presumed distribution is correct. If it badly approximates real probabilities, the presumption can in principle be falsified, and possibly the CSI inference itself. In some cases, a modified distribution will still result in CSI.

I also was trying to address some concerns which I felt were accurate, namely by RDFish. In my opinion he is right to be concerned with the definition of intelligence and to point out that "CSI implies intelligence" is an axiomatic belief, not a formally provable statement. I think it is a reasonable belief; I accept it personally as true; it can at least be a working hypothesis, but it doesn't have the strength of a math theorem. The connection is also not falsifiable, but that doesn't bother me. Operationally speaking, a claim of CSI for an object is falsifiable, and that is good enough for that part of ID to be science. After all, it is perfectly legitimate scientifically to make an observation, provide an estimated distribution, and then expose the hypothesized distribution to falsification. That is science. I have no problem calling that science. I think that's what Bill had in mind when he said there is no mandate that one has to proceed from an assumption the Designer is real (even though he, and many IDists who are creationists, believe He is real):
Thus, a scientist may view design and its appeal to a designer as simply a fruitful device for understanding the world, not attaching any significance to questions such as whether a theory of design is in some ultimate sense true or whether the designer actually exists. Philosophers of science would call this a constructive empiricist approach to design
I agree with that. Even though I feel the Designer is real, in my view that conclusion is formally unprovable; it is one that is assumed by many IDists, and it is reasonable given the appearance of design, as even Dawkins said:
Some of the greatest scientists who have ever lived -- including Newton, who may have been the greatest of all -- believed in God. But it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming. Richard Dawkins
CSI formally demonstrates that resemblance, even if the distribution function is wrong. If the distribution function used to infer CSI is wrong, then the CSI hypothesis can be falsified. A good example is the craters on the moon. Some scientist long ago saw the craters looking like perfect circles, and inferred design. The CSI inference was faulty and was then falsified. The same could be argued for the Chladni plate experiment, if one declared CSI to explain the patterns. See: https://uncommondesc.wpengine.com/intelligent-design/order-is-not-the-same-thing-as-complexity-a-response-to-harry-mccall/

An ID hypothesis can be falsified by falsifying the CSI claim that underlies it. The assertion that "CSI can only be generated by intelligence" is assumed even if:

1. intelligence is left undefined, or even poorly defined
2. the statement is wrong to begin with
3. the statement is unprovable

That's not the claim that is empirically important; the claim that is empirically important is the CSI claim for an object. That claim can be falsified. And if that claim is falsified, the ID claim for that object is potentially (not necessarily) falsified as well. That is definitely the case for homochirality. scordova
KF: it is plain to me that you are speaking in the way that a person unfamiliar with null hypothesis testing would speak. Please explain how setting H as "random independent draw" "automatically" also "take[s] into account all real world relevant processes." How can it? A null hypothesis doesn't magically become a different null hypothesis when you log transform the probability distribution. Elizabeth B Liddle
EL and KS: It seems you are both speaking in the way that one unfamiliar with information metrics would speak, or else the way that one ruthlessly seeking to exploit the ignorance of those unfamiliar with such would speak. The point of such a metric is that its statistical base and assessment of redundancies in a space of possibilities will automatically capture the pattern of possible outcomes. In the relevant case of Durston et al, they looked at the flat random case as the null, then progressively applied the observed, empirically grounded frequencies to see the info content of proteins known to be formed by expressing genetic codes. (As you will both recall, probabilities will express themselves stochastically, so it is a valid approach to look back from the statistics. A simple case is the known statistical pattern of English, such as that E is normally about 1/8 of text.) If you want an a priori approach, it is quite obvious that you have applied Bernoulli indifference in cases where it suits your rhetorical agenda, e.g. on the 500 H exercise.

Beyond that, there are no known merely physical energetic preferences that drive homochirality of the monomers of either proteins or R/DNA, as the issue is a matter of geometry. A spark-in-gas exercise will form a racemic mix, as will any normal synthesis. That 50:50 RH/LH pattern already tells us equiprobable: information content, 1 bit per monomer. It is the biological world that makes homochiral molecules, and it does so by complex assembly processes -- begging the question of that warm little pond or the like. So, already we have one bit of info per monomer in an informational, homochiral protein or D/RNA. This, BTW, is not usually reckoned with in calcs. But for OOL on up, it is vital. The geometry is vital, and a racemic mix -- what should be expected on energy -- is not going to work.

That is, for a 300-monomer protein we have 300 expressed bits, and a lot more if we look at the system that normally makes such. The RNA that codes for it, at 3 monomers per AA-specifying character, carries 3 chirality bits per character right there on known energetics. So, just to get to a system that gives a coded string to specify ONE typical protein, we are already past the solar system FSCO/I threshold. And we need many hundreds of complex polymers to be in shouting distance of a functional living cell with gated membrane, metabolism and von Neumann self-replicator. Say, 300 proteins and 300 coded RNAs:
300 * 300 = 9.0 * 10^4 bits
300 * 900 = 2.7 * 10^5 bits
____________________________
Total, already: 3.6 * 10^5 bits
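The tally, in code form (a sketch using only the assumed figures from the comment: 300 proteins of 300 AA at 1 chirality bit per residue, and 300 RNAs of 900 nucleotides at 1 bit per nucleotide):

```python
# Chirality-bit tally from the figures above.
protein_bits = 300 * 300   # 9.0e4 bits (300 proteins x 300 AA x 1 bit)
rna_bits = 300 * 900       # 2.7e5 bits (300 RNAs x 900 nt x 1 bit)
total_bits = protein_bits + rna_bits
print(total_bits)          # 360000, i.e. 3.6e5 bits vs. a 1,000-bit cap
```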
This is already two orders of magnitude beyond the threshold at which production by blind chance and mechanical necessity ceases to be credible, on needle-in-haystack grounds. Coming out the gate at OOL, on simple energy considerations driving homochirality alone. I could go on to talk about the problems of getting peptide bonds, known to be about 50% of bonds for AA chains formed outside biological assembly control -- another bit per monomer at 300 characters per typical protein. But this is surplus to needs already. We only need an upper bound, and we are beyond that already by orders of magnitude.

Do I need to remind that for each additional bit of info the needle-in-haystack search space DOUBLES? (At 500 bits, on solar system resources of 10^57 atoms and 10^17 s -- our effective cosmos for atomic interactions -- we are already looking at searching a cubical haystack 1,000 LY on the side at the level of taking just one straw-sized sample. At 1,000 bits, the blind search resources of the observed cosmos are swallowed up even more spectacularly.)

Why am I insisting on starting with OOL? Because it is the root of the TOL, and without a root there is no tree. We already see that the probability of cell-based life / the info content of cell-based life -- the two are just a log transform apart and so are conceptually equivalent; log transforming just gives a more familiar and easy-to-work-with form -- is such that no blind chance and mechanical necessity process on the gamut of our observed cosmos is an empirically credible source.

Now, we do have a single, massively empirically warranted source of FSCO/I: design. So reliable is this in cases where we can directly check that we are logically justified on induction to take this as a reliable sign of design. So, regardless of the talking points of ideologues a priori committed to materialism and padlocked in mind, and their fellow travellers, I conclude the obvious: life from the ground up is designed.

So, design is on the table from the root on up, and that then makes sense of how we have sudden appearance in the fossil record of major new forms that can be shown to need about 10 - 100+ mn additional bits of info in genomes, to cover cell types, tissues and systems plus regulation, to unfold from an embryonic or equivalent state. Life is full of FSCO/I; and save to those locked in mind into a system that is self-referentially incoherent already on worldview considerations, and necessarily false as a result, or their fellow travellers, life is chock full of signs of design. And it is plain that it is question-begging a prioris that are driving resistance to this obvious result in an information age.

Since people are liable to try to falsely assert that the Lewontin remark is quote-mined, let me here cite instead the US NSTA board, in 2000:
The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts . . . . [[S]cience, along with its methods, explanations and generalizations, must be the sole focus of instruction in science classes to the exclusion of all non-scientific or pseudoscientific methods, explanations, generalizations and products [--> atmosphere poisoning]. . . . Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence [--> which means anything that can be grossly extrapolated like pepper moths and finch beaks or antibiotic or insecticide resistance without regard to informational barriers] that are, at least in principle, testable against the natural world. Other shared elements include observations, rational argument, inference, skepticism, peer review and replicability of work . . . . Science, by definition, is limited to naturalistic methods and explanations [--> question-begging radical ideologically driven redefinition of science with no proper basis in history or phil of sci or inductive logic] and, as such, is precluded from using supernatural elements in the production of scientific knowledge. [[NSTA, Board of Directors, July 2000. Emphases and comments in brackets added.]
That is what the radicals want to indoctrinate our children in, and in Kansas they have already threatened to hold children hostage to push it. (The letters on record are plain about that. Don't make me cite and discuss these in extenso; I am fully prepared to do so at a moment's notice, as a look at that part of my always-linked note will show.) Game over. KF kairosfocus
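The needle-in-haystack bookkeeping in the comment above can be made concrete (a sketch using the assumed figures quoted there: 10^57 solar-system atoms, 10^17 s of time, and one observation per atom every 10^-14 s):

```python
# Fraction of a 500-bit configuration space that the assumed
# solar-system resources could sample, even exhaustively.
samples = 10**57 * 10**17 * 10**14   # ~10^88 possible observations
config_space = 2**500                # ~3.27e150 configurations at 500 bits
print(samples / config_space)        # ~3.1e-63: a vanishing fraction
```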
KF: you may have been confused by this passage in Durston et al's paper:
Physical constraints increase order and change the ground state away from the null state, restricting freedom of selection and reducing functional sequencing possibilities, as mentioned earlier. The genetic code, for example, makes the synthesis and use of certain amino acids more probable than others, which could influence the ground state for proteins. However, for proteins, the data indicates that, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids [30], actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than ordered [31]. For this reason, the ground state for biosequences can be approximated by the null state. The value for the measured FSC of protein motifs can be calculated by relating the joint (X, F) pattern to a stochastic ensemble, the null state in the case of biopolymers that includes any random string from the sequence space.
In fact, what reference 31 (Weiss O, Jimenez-Montano MA, Herzel H: Information content of protein sequences. Journal of Theoretical Biology 2000, 206:379-386) shows is that functional proteins are highly incompressible sequences (the opposite of Dembski's criterion for Specification, interestingly). So what Durston et al did was to assume random independent draw from a flat (equiprobable) distribution of sequences. That seems reasonable as far as it goes. But their Fits calculation was still based on P(T|H) where H is random independent draw. Nothing in that paper indicates that somehow the calculation "automatically take[s] into account all real world relevant processes". Clearly it does not, and the Darwinian hypothesis is NOT a hypothesis of "random independent draw". And, as Dembski says, "H" in P(T|H) is "the relevant chance hypothesis taking into account Darwinian and other material mechanisms". Elizabeth B Liddle
KF:
This is an empirical, post facto metric that will automatically take into account all real world relevant processes.
This is key. Can you explain how defining H as random draw "automatically" takes into account "all real world relevant processes"? The Durston metric assumes random draw. Elizabeth B Liddle
KF, doing a log transform does nothing for the argument; it just means you can add instead of multiply. You seem to think that doing a log transform of a probability magically converts it into "information". An improbable event under a given hypothesis remains simply an improbable event under that hypothesis, whether you log transform the probability or not. What the transform doesn't do is render an event that is improbable under that one hypothesis improbable under all hypotheses except Design. If you want to reject other hypotheses, then you have to show that the event is improbable under those hypotheses too. And it doesn't matter whether you use a log transform or not - the log transform makes no difference to the answer. Elizabeth B Liddle
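Her point that the transform changes nothing can be checked in two lines (a sketch):

```python
import math

# -log2 is strictly monotonic, so thresholding the information value
# is exactly the same test as thresholding the probability; the
# transform does not change which hypothesis is being tested.
p = 2.0 ** -520                            # some event probability under H
info = -math.log2(p)                       # 520 bits
assert (info > 500) == (p < 2.0 ** -500)   # same test, either way
```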
KF, In any case, your comment makes no sense. I have no objection to "carrying out the log transformation", but to do that I need to know the value of P(T|H). That's what I'm taking the log of. As I said to Sal:
To come up with a CSI value you need to compute P(T|H), which is the probability of getting homochirality via “Darwinian and other material mechanisms” (Dembski’s words). How on earth are you going to compute that probability? You certainly can’t model it as a coin flip scenario — that would be pure chance.
What is the probability of getting homochirality via evolution? Please show your work. P.S. Does the word 'homochirality' make you a little nervous? :) keiths
KF, Why are you addressing RDF when he hasn't even commented on this thread? keiths
RDF et al: Why do you insist on refusing to carry out the log transformation that renders - log_2(P(T|H)) into what it means: INFORMATION content? Immediately, as that is done, we see that the pivotal point of the Dembski 2005 metric is information beyond a threshold that can reasonably be shown to be less than or about 500 - 1,000 bits. That, in a context where the relevant "observers" are every atom in our solar system, every 10^-14 s, or every atom in our observed cosmos, every 10^-45 s. You know or should know that the log reduction and simplification have long been before us all, cf here. And in that context, information content, which for something like DNA can be directly observed, is readily evaluated and can be seen as automatically taking into account relevant probabilistic hyps; e.g., try the Durston metrics of null, ground and functional states for proteins assembled using genes and clustered into families across the domain of life. This is an empirical, post facto metric that will automatically take into account all real world relevant processes. And the verdict of this metric is plain: well beyond the threshold where design is the only credible causal explanation. KF kairosfocus
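For readers unfamiliar with the Durston metrics mentioned here, a toy per-column calculation gives the flavor (a sketch only; the published method works with null, ground and functional states over aligned protein families, which this collapses to a flat null over the 20 amino acids):

```python
import math

# Toy Durston-style functional bits for one alignment column:
# bits = H(null) - H(observed), with the null flat over 20 amino acids.
def column_fits(counts):
    total = sum(counts)
    h_observed = -sum(n / total * math.log2(n / total) for n in counts if n)
    return math.log2(20) - h_observed  # bits relative to the flat null

print(column_fits([97, 1, 1, 1]))  # highly conserved column: ~4.08 bits
print(column_fits([5] * 20))       # unconstrained column: ~0 bits
```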
F/N 1: This sort of thing is why houses now tend to insist on transparent dice, tossed against a studded wall so that they bounce off and roll on the table. I would not now trust anything that looks like opaque dice, and I would be wary of magnetisable dice dots and surfaces (even with transparent dice). KF kairosfocus
Sal, The real question is why you would infer the presence of CSI, and therefore design, from homochirality in the first place.
We infer it based on certain assumptions; those assumptions could be wrong, or they could be good approximations. Nothing stops anyone from putting a CSI claim on the table and exposing the claim to falsification by future discovery. The simple claim is that homochirality is inconsistent with the chance hypothesis starting from a pre-biotic soup.
How on earth are you going to compute that probability? You certainly can’t model it as a coin flip scenario — that would be pure chance.
We know the probabilities for soups: 50% of each handedness, or close to it. Further, in the polymerized state, we know empirically it approaches 50% over time unless there is a maintenance mechanism, since polymerized amino acids racemize, as seen in the lab. (For most amino acids, anyway; I seem to recall one amino acid did strongly favor one orientation, though I can't remember which one.)

Obviously a material mechanism can create homochirality, namely a living organism. But in the specific prebiotic soups that have been conceived so far? No. The CSI claim might be wrong, but it can be justifiably suggested and later falsified.

What proof do Darwinists have that complexity increases in the wild? We sure don't see average increases in the wild today, but it doesn't stop Darwinists from accepting this claim in spite of disagreeable observations. Contrast this with the fact that IDists propose a reasonable distribution based on chemistry, yet I see Darwinists ignore obvious distributions from data in the wild not consistent with their theory. Sorry, I can't help noticing the double standard.

OOL researchers are certainly invited to keep trying. I hope they will. Some IDists think it won't be falsified as CSI; a few, Dembski included, think there might be a simple chemical route to homochirality. I think even if there were, it's a moot point, since the amino acids racemize in the polymerized state anyway. Thanks for your comment. scordova
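The chance hypothesis being exposed to falsification here can be stated in one line (an assumption-laden sketch: each residue's handedness modelled as an independent 50:50 draw, per the racemic soup and racemization data above):

```python
# Probability that a 300-residue chain is homochiral (all L) if each
# residue's handedness is an independent fair coin flip.
p_homochiral_300mer = 0.5 ** 300   # = 2^-300, ~4.9e-91
print(p_homochiral_300mer)
```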
Don't Darwinists face a similar problem when confronted with the possibility that the Earth might have originally been seeded with some type of life?
A new study suggests that there are as many as 60 billion habitable planets orbiting red dwarf stars in the Milky Way alone—twice the number previously thought and strong evidence to hint that we may not be alone.
Wow, it's getting crowded out there! Who knows, maybe geogenic OOL will be thrown under the bus soon. An exciting new bus is coming that will jump-start evolution! All aboard! ;-) Querius
Sal, The real question is why you would infer the presence of CSI, and therefore design, from homochirality in the first place. To come up with a CSI value you need to compute P(T|H), which is the probability of getting homochirality via "Darwinian and other material mechanisms" (Dembski's words). How on earth are you going to compute that probability? You certainly can't model it as a coin flip scenario -- that would be pure chance. keiths
