Uncommon Descent Serving The Intelligent Design Community

Falsification of certain ID hypotheses for remotely controllable “fair” dice and chemical homochirality


Even though I’m an Advantage Player, I would never dream of hosting illegal dice games and fleecing people (I swear never, never). But, ahem, for some reason I did take an interest in this product that could roll 6 and 8 at will!

[youtube 3MynUHA6DTs]

Goodness, that guy could earn a mint in the betting on 6 and 8! 😈

The user can use his key chain remote to force the dice into certain orientations. As far as I know, the dice behave as if they are fair when the remote control is not in force. For the sake of this discussion, let us suppose the dice behave fairly when the remote control is not in force.

Suppose for the sake of argument I made this claim: “CSI indicates intelligent agency.”

Suppose further someone objected, “Sal, that’s an unprovable, meaningless claim, especially since you can’t define what ‘intelligent agency’ is.”

I would respond by saying, “For the sake of argument, suppose you are right. I can still falsify the claim of CSI for certain events, and therefore falsify the claim of intelligent agency, or at least render the claim moot or irrelevant.”

Indeed, that’s how I can assert that, in specialized cases, the ID claim can be falsified, or at least rendered moot, by falsifying the claim that an artifact or event was CSI to begin with.

To illustrate further, suppose hypothetically someone (let us call him Mr. Unsuspecting) was unfamiliar with and naïve about the fine points of high tech devices such as these dice. One could conceivably mesmerize Mr. Unsuspecting into thinking some paranormal intelligence was at play. We let Mr. Unsuspecting play with the dice while the remote control is off, and thus Mr. Unsuspecting convinces himself the dice are fair. Say further that Mr. Unsuspecting hypothesizes: “if the dice roll certain sequences of numbers, a paranormal intelligence was in play”.

We then have the magician run the game and “magically” call out the numbers before the rolls: 6 8 6 8 6 8 ….

When the remote control is running the show, the distribution function is changed as a result of the engineering of the dice and the remote control mechanism. The observer thus concludes CSI by comparing the chance hypothesis against the actual outcome: 6 8 6 8 6 8….
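To put a rough number on what the observer would (wrongly) compute under the fair-dice chance hypothesis, here is a sketch in R. I am assuming two-dice sums as in craps, where P(6) = P(8) = 5/36, and a 20-roll run purely for illustration:

# Surprisal of the called run 6 8 6 8 ... under the fair-dice hypothesis.
# Assumes craps-style two-dice sums: P(sum = 6) = P(sum = 8) = 5/36.
p_call  <- 5/36
n_rolls <- 20
-log2(p_call^n_rolls)   # ~2.85 bits per roll, ~57 bits for the run

The large bit count is exactly why Mr. Unsuspecting rejects chance; the error lies in the distribution he assumed, not in the arithmetic.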

The magician then explains what was really going on and that no paranormal intelligence was involved. Hence, Mr. Unsuspecting’s original hypothesis of a paranormal intelligence was falsified: there was no paranormal intelligence as he initially supposed.

It would be fair to say Mr. Unsuspecting should then formulate an amended CSI hypothesis, given that the whole charade was intelligently designed with modern technology and that the designer of the charade was available to explain it all. Mr. Unsuspecting’s original distribution function (equiprobable outcomes) was wrong, so he inferred CSI for the wrong reasons; his original inference to CSI was faulty not because his conclusion was incorrect (in fact his conclusion of CSI was correct, but for the wrong reasons) but because his inferential route was wrong. Further, his hypothesis of a paranormal designer was totally false; a more accessible human designer was the cause.

The point is that the original hypothesis of CSI, or any claim that an object evidences CSI, can at least in principle be falsified or amended by a future discovery. The whole insistence by Darwinists that IDists get the right distribution before making a claim is misplaced. Claims can be put on the table to be falsified or amended, and many nuances may amend our understanding of the situation in light of new discoveries.

IDists can claim that Darwinian evolution in the wild today will not, on average, increase complexity, and that an observed present-day increase in complexity would falsify some of ID’s claims about biology. That claim can be falsified. FWIW, it doesn’t look like it will be falsified; it’s actually being validated, at least at first glance:
The price of cherry picking for addicted gamblers and believers in Darwinism

Suppose we presumed some paranormal or supernatural disembodied intelligence was responsible for homochirality in the first life. If some chemist figures out a plausible chemical route to that homochirality, then the CSI hypothesis for homochirality can be reasonably, or at least provisionally, falsified, and hence the presumed intelligent agency hypothesis for the homochirality of life (even if intelligence is poorly defined to begin with) is also falsified.
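For a sense of the numbers involved, here is a toy R calculation using the 1-bit-per-chiral-monomer figure discussed in the comments below; the 300-residue length is my assumption, chosen only for scale:

# Probability that a 300-residue chain drawn from a racemic mix is
# homochiral (all-L or all-D); glycine's achirality is ignored here.
n <- 300
p_homochiral <- 2 * 0.5^n     # ~9.8e-91
-log2(p_homochiral)           # 299 bits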

Does it bother me that CSI of homochirality could be falsified? Yes, inasmuch as I’d like to know for sure that the Designer exists. But I’m not betting on its falsification anytime soon. And formally speaking, there could have been a Designer designing the laws of chemistry, so even if the original CSI hypothesis was formulated with the wrong distribution function, there could still be an Intelligent Designer involved….

The essay was meant to capture the many nuances of the Design debate. It’s far more nuanced than I supposed at first. That said, I’d rather wager on the Designer than Darwin, any day…

ACKNOWLEDGEMENTS

RDFish for spawning this discussion. Mark Frank and Elizabeth Liddle for their criticisms regarding other possible distribution functions rather than just a single presumed one. And thanks to all my ID colleagues and supporters.

[Denyse O’Leary requested I post a little extra this week to help lighten the news desk’s load. I didn’t have any immediate news at this time, so I posted this since it seemed of current interest.]

Comments
And as for WD400’s demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms.
Nah - it was an attempt to make you stop for just one second and see that you can’t justify the things you’ve been saying about the power of log-transforming a number. I’m sure those onlookers you are so concerned about can see that well enough, so I’ll take my leave from this.
wd400
July 5, 2013 at 4:18 PM PDT
KF,
I used the known relationship from Info to probabilities to infer the relevant probabilities based on non-intelligent stochastic processes... I took time to go back to the root of the situation -- something studiously dodged above -- and grounded the fact that under abiotic circumstances we normally see racemic forms of organic molecules formed.
Whether we are talking about evolution or OOL makes no difference. Pure chance and design do not exhaust the possibilities. Evolution is obviously more than pure chance since selection is nonrandom. But OOL is also nonrandom, because chemistry is not the random assembly of atoms into molecules. CH4 is a possible molecule; CH6 isn't. Chemistry involves nonrandom rules and very strong nonrandom electrical forces. You can't model it with a flat distribution.
Blind statistics based on biases will lead to gibberish with high reliability...
True, and that's a pretty accurate assessment of your argument.
Now as for what the chance based hyps are, obviously they are blind search mechanisms, if design is excluded...
Untrue. Neither evolution nor OOL is a blind search. Blind search is when you pick search points completely randomly out of the entire search space, then turn around and do the same thing again. In evolution, by contrast, you start from wherever you are in the search space and search only those areas that are within the reach of mutation -- a tiny subset of the entire search space. If any of those small areas contains a viable configuration, then you repeat the process, starting from that configuration and searching only the tiny subset of the search space that is reachable from it by mutation. It's highly nonrandom and nothing like a true blind search, though there is a random component to it. OOL is the same. You don't pick a spot in the search space by taking a large number of atoms at random and blindly throwing them together, then repeating the process. You start from whatever molecules you already have, and you see which tiny portions of the search space you can reach from there. Then you repeat the process. There's randomness involved, but you are not searching the entire space -- only a tiny subset. All of your emphasis on the gargantuan size of the search space is therefore misplaced. It's not the overall size of the space that matters, but the size of the space being searched at each step.
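The difference is easy to see in a toy R sketch. This is a Weasel-style illustration of neighborhood search only; the fixed target string is an assumption of the toy, not a claim about real fitness landscapes:

set.seed(1)
target   <- "METHINKS"
alphabet <- LETTERS
L        <- nchar(target)
tvec     <- strsplit(target, "")[[1]]
score    <- function(s) sum(s == tvec)   # count of matching positions

# Blind search would draw fresh L-letter strings independently each trial;
# expected trials to hit the target that way: 26^8, about 2e11.

# Local search: mutate one position of the current string and keep the
# change only if the match count does not drop.
local_search <- function(max_steps = 100000) {
  cur <- sample(alphabet, L, replace = TRUE)
  for (i in seq_len(max_steps)) {
    cand      <- cur
    pos       <- sample(L, 1)
    cand[pos] <- sample(alphabet, 1)
    if (score(cand) >= score(cur)) cur <- cand
    if (all(cur == tvec)) return(i)
  }
  NA
}
local_search()   # typically a few hundred steps, vs ~2e11 blind draws

Each step examines only the single-mutation neighborhood of the current string (a few hundred configurations), never the whole 26^8 space, which is the point made above.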
Now as to specifics, it is well known that evolutionary mechanisms warranted from empirical grounds relate to chance variations at mutation level and at expression and organisation level...
Mutations are random with respect to fitness. Selection isn't.
And as for WD400’s demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms.
He's asking you to enumerate the contents of H because it is apparent that you have neglected to include anything but pure chance. The fact that you won't answer his question is an implicit admission that you cannot justify your CSI and P(T|H) values.
In any case the info to antilog transformation step says in effect that per the statistics [especially redundancy in proteins and the like that reflect their history and however much of chance processes have happened, and whatever survival filtering happened that is traced in the statistics] the information content implies that BLIND processes capable of such statistics will face a probabilistic hurdle of the magnitude described.
Evolution and OOL are blind, but not blind in the way you are using the term above. See my remarks above on why evolution and OOL are not blind searches.
keiths
July 5, 2013 at 2:46 PM PDT
Onlookers: it would be amusing if it were not so sad to see the red herrings and strawman arguments above. Since I am busy elsewhere, I will note that I first started from the empirically known fact, the info content of bio molecules, which can be evaluated per Durston et al, as published six years ago. I used the known relationship from Info to probabilities to infer the relevant probabilities based on non-intelligent stochastic processes (and took time to lay out a more familiar illustration of why it is legitimate to do this).
I took time to go back to the root of the situation -- something studiously dodged above -- and grounded the fact that under abiotic circumstances we normally see racemic forms of organic molecules formed. That applies to the OOL challenge, and I then worked on just one facet of the info present there, leading to the unsurprising answer: not credible by blind chance and mechanical necessity. We therefore have the only known, empirically grounded, reliably known cause of FSCO/I sitting at the table from first life on up. I pointed out the utter absence of empirical evidence for formation of protein families by observed Darwinian processes. I took the generic route of applying the known tracers of chance processes of relevant kinds in an information-bearing outcome.
Blind statistics based on biases will lead to gibberish with high reliability on the gamut of our solar system, for reasons laid out above: deep isolation of zones of FSCO/I in config spaces, as parts are required to be matched, arranged and coupled correctly to work, but there is no good way to blindly match search to locations of functional zones. The logical answer is the one unacceptable to the objectors for fundamentally ideological reasons, as also outlined and cited above.
Now, as for what the chance based hyps are, obviously they are blind search mechanisms, if design is excluded, so they will by definition fit in with chance; whether flat random or biased towards particular zones makes little difference once we realise that we are dealing with hundreds of proteins, linked hundreds of DNA specifications, huge amounts of regulatory and co-ordinating info and more. Search for search will get you every time. The only viable way that blind mechanisms could have worked is if they were well matched to the spaces, and those would again point to design.
Now as to specifics, it is well known that evolutionary mechanisms warranted on empirical grounds relate to chance variations at the mutation level and at the expression and organisation level; I think it was 47 engines of variation suggested some years back. These are not mysterious, and there is no warrant to go off imagining unknown major mechanisms. The whole incrementalist scheme then founders on the point that we know we are credibly dealing with islands of function, deeply isolated in the config space. So, we have to bridge info gaps of 10 - 100+ mn bits dozens of times over. Well beyond the search capacity of the observed cosmos.
Until the chance and necessity advocates can bridge the gap from ponds with plausible chemistry to life forms, they are dead on arrival. Until they can bridge the gaps from simple cells to complex body plans, they are dead again coming out of the starting gates. Beyond that, all else is making stuff up out of thin air. A notorious Darwinist speciality. And as for WD400’s demand that I identify the contents of H, that is funny; it is an implicit admission of absence of empirically warranted mechanisms.
In any case the info to antilog transformation step says in effect that, per the statistics [especially redundancy in proteins and the like that reflect their history, however much of chance processes have happened, and whatever survival filtering happened that is traced in the statistics], the information content implies that BLIND processes capable of such statistics will face a probabilistic hurdle of the magnitude described. A hurdle consistent with the finding that such is beyond the available atomic and temporal resources in our solar system. KF
kairosfocus
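Mechanically, the antilog step described here is just the following (a sketch in R; the 500-bit threshold is the one cited in this thread):

I_bits <- 500
P <- 2^(-I_bits)   # ~3.05e-151
P < 1e-150         # TRUE: beyond the 1-in-10^150 solar-system bound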
July 5, 2013 at 9:18 AM PDT
Onlookers- keiths the totally clueless:
KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
keiths, no one can demonstrate that blind and undirected chemical processes can produce ONE protein on a planet that didn’t have any. IOW it is just as I said: you clowns cannot give us any numbers for the probability wrt your position.
Joe
July 5, 2013 at 7:16 AM PDT
KF:
And the precise point of the log-antilog process is that it allows us to move from info content ascertainable empirically to the relevant probability that you had made ever so much rhetorical hay out of, which now turned out to be a straw hut in a storm.
No, it just allows us to add logs instead of multiplying probabilities, and then convert the result back into a probability at the end. If a probability is information, it will remain information whether or not you transform it into bits. If it isn’t, transforming it into bits won’t turn it into information. And it certainly won’t transform one hypothesis into another.
Elizabeth B Liddle
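The round trip is trivial to check in R (any probability will do):

p      <- 5.3e-12
I      <- -log2(p)       # ~37.5 bits
p_back <- 2^(-I)         # antilog of the log
all.equal(p, p_back)     # TRUE: same number, different representation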
July 5, 2013 at 6:02 AM PDT
KF:
What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of. It is nothing like the much lower value that the likes of Dawkins have wanted to suggest as accessible by incrementalism per alleged Darwinian mechanisms.
So the way you compute P(T|H), where H takes into account “Darwinian and other material mechanisms”, is to compute it for H = random independent draw, and then increase it to a “reasonable” value, but not by as much as Dawkins would? And then conclude that the negative base-2 log of your P is greater than 500, and therefore we can reject H? This seems to boil down to mere assertion that your reasonable value of P is more reasonable than my (or Dawkins’) reasonable value of P! If not, and it is the result of an actual calculation, could you please show me HOW you adjust P to take into account Darwinian and other material mechanisms. I don’t mind how approximate your values are, but I do want to see how you derive them from empirical data, and where you plug them into your calculation. Thanks.
Elizabeth B Liddle
July 5, 2013 at 5:56 AM PDT
KF, if you think the log transform gets you to P(T|H), what is “H”?
wd400
July 5, 2013 at 2:39 AM PDT
KS: Your rhetorical twist-about stunts have provided no answers to the matter on the merits. And the precise point of the log-antilog process is that it allows us to move from info content ascertainable empirically to the relevant probability that you had made ever so much rhetorical hay out of, which now turned out to be a straw hut in a storm. Good bye. KF
kairosfocus
July 5, 2013 at 1:48 AM PDT
Onlookers, note the continued bluffing from KF:
By now it should be obvious that by doing an analysis of protein families and the redundancies involved in these, Durston et al published empirically grounded measures for info content of same. This boils down to doing a measure of info content per symbol in light of redundancies across observed life forms.
All of which is completely irrelevant to the problem of determining P(T|H), which is the probability of obtaining the protein family in question by all possible "Darwinian and material mechanisms."
Next, it is not even an issue that information in bits is a metric of form I = – log_2 p.
Exactly right. As we've told you repeatedly, there is nothing magic about taking a log or an antilog. The information remains the same, just expressed in a different form.
What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of.
But your I value isn't credible at all because it does not take all "Darwinian and material mechanisms" into account. To come up with a credible I value, you need a credible P(T|H) value. To obtain a credible P(T|H) value, you would need to
...pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
KF:
Materialist rhetorical game over.
Do you think anyone is buying your bluff, KF, or that an outpouring of words can obscure the gaping hole in your argument where P(T|H) is supposed to be? Dualist obfuscatory and evasive game over.
keiths
July 4, 2013 at 5:36 PM PDT
Onlookers: By now it should be obvious that by doing an analysis of protein families and the redundancies involved in these, Durston et al published empirically grounded measures for the info content of same. This boils down to a measure of info content per symbol in light of redundancies across observed life forms. Thus, due to redundancies, we have in hand a somewhat more nuanced metric than the 4.32 bits per character that a flat random assumption would have given for 20 possibilities per AA. So when EL suggests that I am inferring to 4.32 bits/character, she is flat wrong; that would be the null state, TWO analytical states away from their functional state. Her error is a typical example of the strawman tactic I pointed out above.
Next, it is not even an issue that information in bits is a metric of the form I = -log_2(p). What I did above is to use a credible I value to give a reasonable value of the related P that EL et al have been trying to make so much of. It is nothing like the much lower value that the likes of Dawkins have wanted to suggest as accessible by incrementalism per alleged Darwinian mechanisms. Of course, at no point has the Darwinist establishment actually empirically shown a typical sized protein of say 300 AA in a novel fold domain -- these are known to be deeply isolated islands in AA sequence space -- originating by blind chance and mechanical necessity in either the pre-life or the body plan evo settings. I can comfortably say that, noting as a quick cross check the absence of the Nobel Prize for doing that. (So, they are already blowing blue smoke and using trick mirrors.)
Next, EL and KS ignore the root case, the pivotal one, which I addressed already: OOL. No root, no shoot, no tree. I simply took up the massive fact of homochirality and geometric dependence leading to key-lock fitting as a major feature of life chemistry. That alone, on the fact that non-bio syntheses generate the thermodynamically expected result, racemic mixes, gives us 1 bit per monomer, roughly. (Glycine is achiral.) So we can easily see that a typical protein of 300 AA, or the RNA that codes for it, will dispose of 300 or 900 bits. And we need hundreds in proximity, properly arranged and coupled, for life. Simply on chirality, OOL is inexplicable on blind chance and mechanical necessity. That leaves only one empirically grounded source of high contingency on the table. Which also happens to be the only empirically grounded source of FSCO/I: design. Design is sitting at the table, from the root on up.
When it comes to body plan level macro evo, you need new cell types, tissues, organs, organisation and phased regulation. That pushes up genome size -- an index of the info required -- by some 10 - 100+ mn bits per main body plan, as can easily be checked. And we have the well known point that complex functions that depend on particular organisation are naturally tightly coupled and specific; that is, you have to get a lot of things just right or no function emerges. This implies isolated islands of function, and again high info costs, which transform over to very low probabilities. This is already evident at the level of making required proteins from AA monomers. All of this is multiplied by a known feature of searches: in hard to find cases, searches have to be well matched to the space or they will on average be no better than blind random walks across a config space. Where search for search is a much harder problem, in the space of possible searches of a space -- much larger.
In short, blind searches tend to fail in cases where there is a lot of hay, needles are isolated, and there are few resources. (This we have strongly shown with the 500 H toy example, and predictably the usual objectors refuse to acknowledge the massively evident point.) Multiply by the lack of evidence of a vast continent of functional forms bridgeable in incremental steps. (The Cambrian life revolution is just one case among many; there simply is not the sort of overwhelming abundance of transitionals that should be there on Darwinian implications, in a context where the sampling by now should be good enough to capture dominant patterns. A relatively few cases that through the eye of Darwinist faith -- which demonstrably often runs in circles -- could be construed as transitionals have got headlines out of all due proportion to the weight of the evidence.) In short, we have good reason to see that the high info content of functional forms and organisation for complex body plans is also real. And this again transforms over to exceedingly low probabilities.
Maybe this would help. Back in the days when Morse made up his code, he talked with printers about the frequency statistics of English, and learned from the counts they used to tell them how many E's and Z's etc they needed. Now imagine that on any reasonable blind search algorithm in such a printer's case of letters, a text was composed letter by letter, with replacement. A flat random sample would have the statistics of typical English, but would be predictably gibberish. A biased sample would have different statistics, but again would be predictably gibberish, non-functional. In either case, the statistics of the text would reflect the probability patterns -- the relevant chance hyps -- at work. Now, impose a criterion that if legitimate words are formed they can be kept. This will easily give us things like is, and, the etc, but not so easily longer words. Now, we can then try combining words at random in various ways. Predictably, this will fail to give us significantly functional long texts unless there is some sort of targeting. And if words may decay and garbled texts with short words in them are lost, we run into the barriers of isolation of complex structures with functional specificity. This extends to computer code, and so to genomes which have that sort of code. In other words, the underlying point, that the statistics of the result will reflect the chance processes that are relevant, is reasonable. And the basic problem the info metrics are trying to tell KS and EL etc is also underscored.
Of course, the evidence is that first viable life forms need genomes of order 100 - 1000 kbits. If EL has an empirically grounded, observed counter-example then let her put it on the table: _________________ Prediction -- she cannot. Similarly, if she has an empirically warranted -- observed -- case of blind chance and mechanical necessity generating the FSCO/I for a successful novel body plan, let her give it: ________________ Prediction -- she cannot. The same for KS.
This very post is yet another among billions, nay trillions, of examples that FSCO/I is routinely and habitually observed to be produced only by design. The inductively well warranted conclusion (subject to empirical counter-example but not speculations or question begging or ideological a prioris) is that FSCO/I is a reliable empirical sign or marker of design as cause. Materialist rhetorical game over. KF
kairosfocus
July 4, 2013 at 3:45 PM PDT
Kairosfocus:
KS: Why do you insist on making a strawman?
It's not a "strawman" - we are not attacking a misrepresentation of your hypothesis; we are asking you to fix your misrepresentation of ours! What you are formally rejecting, when you assume random draw as your null, is not the hypothesis anybody proposes. In other words, you really have rejected a "strawman". I'm not clear whether you really think that the Darwinian hypothesis is random draw, or whether you somehow think that something you've done "automatically" extends your null to include "Darwinian and other mechanisms". But it doesn't. Just consider, for a couple of minutes, the possibility that you have made an error here.
The pivotal issue is that the information is empirically evaluable [as Durston et al published six years ago], which gives us a basis to address the problem scientifically, once transformed into information.
No. Durston et al computed the size of subsets of functional proteins and, using reasonable assumptions regarding the distribution of amino acids in functional proteins and the distribution of sequences among functional proteins, what the probability of each protein would be on the assumption of "random independent draw", i.e. N(functional variants)/N(possible comparable sequences). They then took the negative base-2 log of this value and called it "Fits". Fine. But those Fits represent the probability of getting such a protein by random independent draw, NOT the probability that such a protein will evolve by "Darwinian and other material mechanisms". The log transform makes no difference - it simply allows you to add instead of multiply.
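In R, the Fits arithmetic described here looks like the following; the counts are made up for illustration and are not values from Durston et al:

# Functional fraction of sequence space under random independent draw,
# then the negative base-2 log ("Fits"). Illustrative numbers only.
N_functional <- 1e20       # hypothetical number of functional variants
N_possible   <- 20^100     # 100-residue chains over 20 amino acids
-log2(N_functional / N_possible)   # ~366 Fits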
I did so, and on seeing how insistent you all were on evaluating p(T|H) I showed how the empirical data allows that to be done by back-transforming the info metric into a probability metric;
No. Sure, you can get the p back by untransforming the log transform, but that doesn't alter what that p is a probability OF! It still doesn't give you p(T|H) where H is "Darwinian or other material mechanisms". It is still the probability of getting that sequence by independent random draw. Or do you think that "Darwinian and other material mechanisms" is the same as "random independent draw"?
Elizabeth B Liddle
July 4, 2013 at 12:45 PM PDT
I wonder if KF has misread, or misunderstood, Durston, and thinks that because they considered the possibility that the possible amino acids are not equiprobable, and also note that functional sequences are not significantly different from those that would be produced by random independent draw, they have "automatically" dealt with the issue of whether H need be anything other than random independent draw (of course Durston et al do not claim in that paper that 500+ Fits renders the protein unevolvable). Hence my toy example. A random independent draw is a random independent draw, whether or not the items in the draw are equiprobable, and whether or not the target sequences are indistinguishable from randomly independently drawn sequences (in fact I randomly drew them from their respective distributions). If KF wants to reject the null of "Darwinian and other material mechanisms", as Dembski specifies, he needs to do your calculation, not use "independent random draw". And why he thinks that log transforming the p values into bits does any more for the problem than z transforming them into sigmas is really beyond me. It's just a rejection criterion however you transform it.
Elizabeth B Liddle
July 4, 2013 at 12:25 PM PDT
KF, If you weren't bluffing, then you would be able to do what I asked:
KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value.
You can't do it.
keiths
July 4, 2013 at 12:14 PM PDT
KS: Why do you insist on making a strawman? The pivotal issue is that the information is empirically evaluable [as Durston et al published six years ago], which gives us a basis to address the problem scientifically, once transformed into information. I did so, and on seeing how insistent you all were on evaluating p(T|H), I showed how the empirical data allows that to be done by back-transforming the info metric into a probability metric; and how it underscores the needle in haystack search conundrum.
Whatever the history of life is, and however much of chance processes were involved, the information based on the statistics of protein families enfolds that, especially the degree of redundancy found in locations on the AA string, which is what is brought out in Durston's move from null to ground to functional state. It shows what can change, what cannot -- based on the actual result of the origin of diverse life forms across the history of life -- and how much difference that makes.
The end result is that we are looking at needle in haystack searches beyond the reasonable capacity of our solar system, just for protein molecules. Beyond that, when it comes to assembling a living cell and getting it to be a code based self replicating system, the problem just gets worse and worse. And to move further beyond that to the much larger information gaps implied by complex body plans, that is much worse yet again. Though you will refuse to acknowledge it, P(T|H) rhetorical game over. KF
kairosfocus
July 4, 2013 at 12:09 PM PDT
KF to Lizzie:
On the matter in hand, changing the subject away from first pretending that the un-addressed elephant in the room — recall your attempted cute rendering of P(T|H) as elephant? — was P(T|H), then refusing to accept that this is a transform away from an info metric that is more tractable, then now refusing to accept that it is a reasonable step of analysis to work back from that which is observable through a transform relationship to that which was deemed a puzzle, simply cemented your discredit.
KF, You're really floundering here. Every mathematically competent person reading this knows that taking a logarithm (or an antilogarithm) does not magically solve your problem. It just represents the same information in a different form. You need P(T|H) in order to determine CSI. It's right there in the equation. Dembski himself specifies that P(T|H) must encompass all "Darwinian and material mechanisms". If you can't come up with a P(T|H) value (or upper bound) based on all "Darwinian and material mechanisms" and justify it, then the game is over. An ersatz P(T|H) based on random draw won't cut it. You're bluffing, and it isn't pretty.
keiths
July 4, 2013 at 11:58 AM PDT
Dr Liddle: Your predictable refusal to acknowledge any point made by a design thinker at this stage makes zero impression on me, especially coming from someone who has been harbouring slander, then denying it, then defending it, then pretending that nothing is wrong. Going further, the response to the case of discovering a box of 500 coins all reading H tells us that you are unwilling to acknowledge the fundamental challenge of searching for a needle in a haystack.
On the matter in hand: changing the subject away from first pretending that the un-addressed elephant in the room -- recall your attempted cute rendering of P(T|H) as elephant? -- was P(T|H), then refusing to accept that this is a transform away from an info metric that is more tractable, then now refusing to accept that it is a reasonable step of analysis to work back from that which is observable through a transform relationship to that which was deemed a puzzle, simply cemented your discredit. Your current talking point, that I am not answering issues you have raised, is to be seen in that light, and it carries no weight.
The subsequent exercise on balls is simply a red herring led away to a strawman. (And BTW, until the pool of undrawn balls is very large indeed, knowing the size of the pool makes a potentially significant difference. But I am not playing along with your latest red herring. The real issue on the table is back to where it was two years ago. WmAD's 2005 metric can be shown to be transformable into a measure of info beyond a threshold. You have spent months trying to hammer away at the pretence that until one can assess P(T|H) one has nothing. I have shown that the information is measurable and can then be transformed into P(T|H) if desired. So, P(T|H) is the steep cliff solution, when a transform away is an easier and empirically more tractable approach. In addition, I have shown how for OOL, the pivotal case for both design and Darwinian, tree of life evo -- no roots, no shoot and no tree -- we do have a pretty good idea of the relevant distributions for chirality, as a beginning, and the result already shows us that first life is not plausibly a product of blind chance and mechanical necessity, but design is known to produce FSCO/I. Beyond that, we have at least outlined how chance based searches are the pivot of the proposed increase of bio info, which runs into both the islands of function problem and the search for search problem. S4S points to how on average no blind search will be better than simple blind chance, which takes us back to the same point. And remember, chirality is distinct from protein families, and each such family is distinct. They are all pointing as convergent compass needles to a common pole. Design is the best explanation of the world of life, but that is where you are determined not to go.)
The conclusion has been written; you are just looking for an argument to make it seem plausible to the Darwinist choir. Rhetorical game over. G'day, KF
kairosfocus
July 4, 2013 at 11:39 AM PDT
KF, really, you have NOT "adequately answered" this objection. That's why we keep asking you. I think we have somehow not managed to make our objection clear. The reason I think this is that you keep presenting answers to objections we haven't actually made. Let me try to clarify, by asking some questions.
Let's say that we have a large bag of coloured balls - red, blue, green and yellow. Let's say that there are an equal number of each colour in the bag. Let's also say that I have written on a piece of paper the following five-ball sequences:
'Red', 'Blue', 'Yellow', 'Red', 'Red'
'Red', 'Green', 'Green', 'Green', 'Blue'
'Green', 'Blue', 'Green', 'Red', 'Green'
'Red', 'Blue', 'Green', 'Yellow', 'Red'
'Yellow', 'Yellow', 'Blue', 'Blue', 'Blue'
'Blue', 'Green', 'Green', 'Yellow', 'Yellow'
'Green', 'Blue', 'Yellow', 'Green', 'Blue'
'Yellow', 'Yellow', 'Green', 'Green', 'Green'
'Red', 'Blue', 'Blue', 'Red', 'Yellow'
'Red', 'Red', 'Red', 'Red', 'Blue'
I invite you to pull balls five at a time out of the bag, then put them back and shake the bag. So my first question is: what is the probability that you will draw one of the above sequences on any one trial?
Now, repeat the exercise, only this time there are twice as many red balls as any other, and the sequences I have written on my piece of paper are these:
'Yellow', 'Blue', 'Red', 'Red', 'Blue'
'Blue', 'Red', 'Green', 'Green', 'Red'
'Red', 'Red', 'Red', 'Blue', 'Blue'
'Red', 'Red', 'Yellow', 'Red', 'Yellow'
'Green', 'Blue', 'Blue', 'Red', 'Blue'
'Yellow', 'Blue', 'Blue', 'Red', 'Blue'
'Green', 'Green', 'Red', 'Red', 'Yellow'
'Red', 'Yellow', 'Yellow', 'Green', 'Red'
'Red', 'Red', 'Green', 'Red', 'Green'
'Red', 'Green', 'Blue', 'Green', 'Green'
Again, what is the probability that you will draw one of these sequences on any one trial?
Final question: are both exercises examples of "random independent draw", despite the fact that in the first each colour is equiprobable, and in the second reds are more probable than any other?
Elizabeth B Liddle
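For concreteness, the arithmetic behind these questions can be sketched in R, treating each five-ball draw as independent, as the exercise intends:

# First bag: four equiprobable colours, so any listed 5-ball sequence
# has probability (1/4)^5, and the ten sequences together give:
10 * (1/4)^5                     # ~0.00977

# Second bag: red twice as common, so P(Red) = 2/5, others 1/5.
p2 <- c(Red = 2/5, Blue = 1/5, Green = 1/5, Yellow = 1/5)
prod(p2[c("Yellow", "Blue", "Red", "Red", "Blue")])   # first listed: 0.00128
prod(p2[c("Blue", "Red", "Green", "Green", "Red")])   # second listed: 0.00128
# Summing such products over all ten sequences gives the hit probability.
# Both exercises are random independent draws; only the per-colour
# probabilities differ.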
July 4, 2013 at 10:10 AM PDT
KS: You are simply repeating adequately answered objections and ducking the fact that I ties back to the P(T|H) you tried to use as a rhetorical bludgeon and roadblock. But I is pretty easily accessible by inspection and evaluation of empirical observations. (Indeed, the use of families of proteins is in reality a refinement, to try to factor in redundancies as carrying less info.) Once we do have an info metric, we can then evaluate back to the relevant probability using a straightforward relationship. That probability automatically brings to bear whatever actual chance/noise influences were at work, and also allows us to evaluate the chance of something with that sort of message statistics coming about by chance in any of its forms. The answer is well below 1 in 10^150. Rhetorical game over. KF
kairosfocus
July 4, 2013 at 8:40 AM PDT
keiths:
There are many possible evolutionary pathways to the same endpoint.
How many blind watchmaker pathways? And how did you determine that? THAT is the problem: you equivocate, pontificate and never, ever produce any evidence to support your tripe.
Joe
July 4, 2013 at 7:29 AM PDT
keiths:
Your derivation doesn’t take all “Darwinian and material mechanisms” into account, as required by the definition of CSI.
No one can demonstrate that “Darwinian and material mechanisms” can construct anything. YOU can't even produce a replicator from those processes, keiths. And you sure as hell cannot produce anything else from a simple replicator. Evidence, keiths, your position doesn't have any to support it.
Joe
July 4, 2013 at 7:27 AM PDT
KF,
KS is refusing to see that there is more than one way to skin a catfish.
No, I'm suggesting that a hand drill isn't the right tool for the job.
Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families. So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted.
You're bluffing, KF, and I'm calling you on it. Your vague and evasive wording gives you away:
Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families.
Not true. There are many possible evolutionary pathways to the same endpoint. All of them must be considered and quantified even though only one of them actually occurred. It's even worse than that, in fact, because you, as an IDer, don't think that any of them actually happened. Your statement directly contradicts the ID hypothesis!
So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted.
Same error. According to you, design acted, not evolution. KF, pick a protein family. Show us all possible evolutionary pathways leading to that protein family. Identify all the necessary mutations for each pathway. Quantify the probability for each of those mutations. Specify the exact fitness landscape that is in effect at the time of each mutation, and show us (based on the fitness landscape) the probability that each mutation is selected for. Now combine all of this information into a single P(T|H) value. You can't do it. You're bluffing.
keiths
July 4, 2013 at 7:21 AM PDT
PS: I should add one more thing. Materialistic evolutionary mechanisms come in two main stripes, pre-OOL and post-OOL. Pre-OOL, we are looking at chemical dynamics and physics, which pose so many challenges to forming a living cell, directly or indirectly, that apart from paper fantasies and their computer kin there is no serious answer. Just the chirality challenge alone is enough to scupper them. And that is just the beginning.
Once we are dealing with cell based life, we then have the point that chance variation [non-foresighted variation] is the creative means, and the consequence of injecting such into an environment is differential reproductive success. This last is not an active agent; it has no creative powers, even in hopes. It simply means that what is not fit is subtracted from the population. An information sink, not a source. So we are left with whatever variety of blind search, in a context where, as already highlighted, we have good reason to expect to see deeply isolated islands of function in config spaces that are beyond astronomically large. (Just 1,000 bits, much less 100 - 1000 k or 10 - 100+ M bits, is already beyond the capacity of the observed cosmos. 500 bits is enough to swamp the solar system, and the reality is earth's biosphere is MUCH smaller than that.)
And, since we are looking at blind mechanisms, we face the problem of blind search for a well-fitted search. Where the number of possible searches on a space of size W is hugely more than |W|. Many of which searches will be hopeless or worse than hopeless, in a situation that was not encouraging to begin with. A result we can accept is that a direct flat random search of the space, on average, will be as good as any other. Those who propose a BLIND search, blindly chosen, that is well matched to the space and capable of finding islands of function within relevant resources, need to justify such a claim.
Actually, they don't: there is a tendency to assume a vast continent of incrementally traversable, ever more complex function, and there is a tendency to suggest that incremental hill-climbing is good enough as a result. The first of these is not justified, never mind the brazen attempt to say this is the default: show us why we have to look at islands of function. (To which the proper response is: look all around you at things that have to work based on multiple well matched, properly arranged and coupled parts. Starting with things like this post's strings of symbols in ASCII code. Requisites of function tie us down sharply to restricted, specific zones in config spaces. And a solar system scope search would in effect be picking from the sun to Pluto, 4 - 7 bn km away. That is a lot of space in which to arrange things.) The second begs the question of getting to shores of function, and massively extrapolates local hill climbing to a scope that simply lacks empirical warrant.
In short, we have no reason to think that there are going to be blind searches that, for hundreds of genes and proteins etc, will so frequently beat blind flat random chance. Which brings us right back to the relevance of the empirical info derived metrics above for P(T|H) -- in effect for one protein family among hundreds each -- and their message: nope, blind chance is not credible. (And remember, the method used, which is ex post facto, will factor in relevant chance processes automatically.) KF
kairosfocus
July 4, 2013 at 6:40 AM PDT
Onlookers: KS is refusing to see that there is more than one way to skin a catfish. We have pretty direct empirical access to the info content of key biomolecules, and we have a relationship that links that to the probabilities that he and others have been pushing as a talking point, as they imagine it is an obstacle. All I have done is to use the bridge.
Whatever hyps may be empirically warranted, they have certainly been taken up in the processes that have created the actually observed protein families. So, we can work back and give an upper bound to the effect of blind chance and mechanical necessity, from the known result of whatever processes have acted. The message is that the resulting quantum of info is beyond a reasonable plausibility for blind chance and mechanical necessity to have been the decisive input. The only known source of accessing configs that isolated in possibility spaces is imaginative, creative design.
Only, that does not fit the ideology, so it is dismissed as unimpressive. (And BTW, that sort of ideologically loaded "reasoning" by evo mat advocates is much of why design-supportive papers or arguments in general are so often unjustly dismissed as dumb or stupid or blunders or deceptive. A man convinced against his will is of the same opinion still.) Rhetorical game over. KF
kairosfocus
July 4, 2013 at 2:21 AM PDT
PS: I should briefly note on why FSCO/I is naturally found in narrow islands in wide config spaces. First, the complexity means the number of Y/N questions to specify the config is large; i.e. we are beyond 500 - 1,000 bits. Second, to function in specific ways we must have multiple well matched, properly aligned, coupled and configured parts. This is easy to see with text such as this post, and since AutoCAD etc show us that complex 3-d functioning systems can be reduced to strings of descriptive symbols in accordance with a convention, this focus on strings is WLOG. So, as say a car engine shows, or the code string to specify a protein that folds, fits and functions, we will expect specific configuration, which at once locks us into islands in the space of possibilities.
This is actually a further challenge to the Darwinist view, as it imagines a vast continent of function incrementally traversable by steps thrown out at random and filtered for hill-climbing by sub-population competition for scarce resources or the like. Which already points to another problem: time to fix variations given pop sizes, incidence of actually advantageous mutations [the rates issue] and more. In short, we have a theory that can make arguments for variations within islands of function being extrapolated, without adequate warrant on empirical evidence, into a theory presumed practically certain about much larger changes which we have every good empirical reason to see will require jumps across large Hamming distances in spaces of possible configs. KF
kairosfocus
July 4, 2013 at 2:14 AM PDT
KF,
Predictably, not even the actual presentation of the derivation of P(T|H) per the empirical evidence on the information content of protein families, makes any impression on the likes of KS.
It actually did make an impression. Just not a good one. Your derivation doesn't take all "Darwinian and material mechanisms" into account, as required by the definition of CSI. As you like to say, GIGO -- Garbage In, Garbage Out. END :)
keiths
July 4, 2013 at 1:08 AM PDT
Onlookers: Predictably, not even the actual presentation of the derivation of P(T|H), per the empirical evidence on the information content of protein families, makes any impression on the likes of KS. And now we see the lame argument that, well, we don't know just what the very first life form was like, whether it was homochiral. We have a world of life out there that is based on the key-lock fitting of macromolecules, in a system where that fold, fit and function pattern is critically and overwhelmingly dependent on just that one-handedness of the right sort for the key biomolecules, and which would be drastically deranged by the shift to racemic mixes, on grounds of geometry and resulting forces being wrong for the fold and function by fitting. That is what the empirical evidence points to. But the objection is tossed up in the teeth of that evidence as if it trumps all. In short, that was never the topic, was it. The real topic was that, for such objectors, science is a handy way to promote a priori materialism, by pushing a redefinition that takes advantage of the prestige of science.
Next we see some red herring and strawman debate points on the information content of 500 coins, which, holding 2 states each, have 500 bits of info storage capacity. From this, we can see that we have 2^500 possibilities in a space of possible configs, leading to deep isolation of significant specific forms such as 500 H, or more relevantly 72 or so ASCII characters in English or object code or the like. The point of which is that since the resources of our solar system, 10^57 atoms and 10^17 s at chem rxn rates, could only sample the equivalent of one straw from a cubical haystack 1,000 light years across, so that if superposed on our galactic neighbourhood we would with all but absolute certainty only pick up the bulk, straw, we have every reason to see why cases of FSCO/I, on uniform, regularly repeated experience, are characteristic signs of design as cause. If one saw a box of coins all H, one would know to empirically warranted practical certainty that it was set like that by design. If the coins were in a line spelling out the first 72 characters of this post in ASCII code, or a Hello World program etc, we would have high confidence to infer the same.
Now, the next problem is that, on the evidence of the world of life we do see (the empirical basis that is supposedly the framework for science), we know that for a first cell based life form -- the only biological C-chemistry life we have evidence of -- we credibly start out at requiring 100 - 1,000 kbits of genetic info, orders of magnitude beyond the FSCO/I threshold. All that empty speculation about self replicators and hill climbing begs the question of getting first to the shores of islands of exceedingly complex function. At best, Darwinian mechanisms have some power to explain some types of micro evo, but none at all to explain OOL -- the root of the tree of life. That is BEFORE the code based replication systems we see originated, and the evo mat advocate needs to explain that too. On empirical evidence. Ducked. As usual. KF
kairosfocus
July 4, 2013 at 12:55 AM PDT
(I meant to say, the drift example was motivated by a comment by Liz in another thread.) And if you want to see it in action, it's easily coded in the statistical language R:

sim <- function(){
  # start with 500 fair coin flips
  coins <- ifelse(rbinom(500, 1, prob = 0.5), "H", "T")
  gen <- 1
  # resample with P(H) equal to the current frequency of "H",
  # until one face is fixed across all 500 coins
  while (max(table(coins)) != 500) {
    coins <- ifelse(rbinom(500, 1, prob = mean(coins == "H")), "H", "T")
    gen <- gen + 1
  }
  return(gen)   # generations until fixation
}
wd400
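A quick way to check wd400's "usually happens within a thousand sets" claim, once sim() is defined:

runs <- replicate(50, sim())
summary(runs)   # fixation times, typically a few hundred generations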
July 3, 2013 at 7:02 PM PDT
KF, How much information is there in a series of 500 heads or tails from coin flips? So much that you'd be throwing sets of 500 coins for ever and never see one. So much, in fact, that you'd rule out that result as plausibly coming from a fair coin, I guess.
Now, instead of throwing sets of 500 coins, let's take a random walk. Throw your first 500 coins and record your "H"s and "T"s as usual. But this time your next round of tosses will be influenced by the last: the very special coins you will throw will have a probability of coming up "H" exactly equal to the frequency of "H" in the last round. This may sound far-fetched for coins, but it's exactly how genetic drift works (sampling from a population of gametes). The end result of these runs is always a set of 500 "H"s or "T"s, and instead of taking more than the age of the universe to arise, it usually happens within a thousand sets of coin flips.
So, what I'm trying to ask is: how is taking the log of p(500_H|random_coin_tosses) helping us understand the 500 "H" sequence when we know the hypothesis (random coin tosses) isn't the one that's generating the sequence? That's what you've argued so far, which is very strange.
wd400
July 3, 2013 at 7:00 PM PDT
KF,
First, it does not seem to have registered that I have addressed the root problem, as the decisive case, forming the molecules of life.
It hasn't registered because you haven't done it.
And, thanks to the racemic forms that routinely form in non biological syntheses we DO know the relevant distribution, which already shows that the proposed Darwinian path cannot get going without a blind chance and necessity answer to the origin of the info in just the fact of homochirality.
You haven't established that the original replicator must have been homochiral (or even that it was chiral at all), so you don't know the relevant distribution. Even if you could prove that it was homochiral, you would also have to know the size of the replicator in order to establish the distribution. You don't know that either. Nobody currently knows the distribution. You certainly don't.
That is, the power of the [log] transform allows us to apply an empirical value to what is a more difficult problem to solve the other way. Once we do know the info content of the protein families by a reasonable method, we can then work the expression backwards to see the value of P(T|H). And so, lo and behold, we do not actually have to have detailed expositions on H to do so; once we have the information value, we automatically cover the effect of H etc.
KF, logs and antilogs are trivial. It's the same information expressed in a different way. To prove this to yourself, take the log of a number, then take the antilog of the log. You get the original number back. Obviously. So whether you are computing P(T|H) or CSI, you have to consider all "Darwinian and material mechanisms", as Dembski said. There are no shortcuts, and the log transform does not provide any. And since neither you, nor Sal, nor anyone else can calculate P(T|H), you can't make the design inference. At least not rationally. Not to mention the fact, as Lizzie and I keep telling all of you, that the argument from CSI is a circular argument. You already have to know that something couldn't have evolved before you attribute CSI to it. Therefore, using CSI to determine that something couldn't have evolved is useless. It tells you nothing that you didn't already know. It's a waste of time.
keiths
July 3, 2013 at 4:51 PM PDT
PS: Remember the above info metric does not include the additional info included in homochirality. The issues are distinct.
kairosfocus
July 3, 2013 at 4:19 PM PDT
