Uncommon Descent Serving The Intelligent Design Community

Falsification of certain ID hypotheses for remotely controllable “fair” dice and chemical homochirality


Even though I’m an Advantage Player, I would never dream of hosting illegal dice games and fleecing people (I swear never, never). But, ahem, for some reason I did take an interest in this product that could roll 6 and 8 at will!

[youtube 3MynUHA6DTs]

Goodness, that guy could earn a mint in the betting on 6 and 8! 😈

The user can use his key-chain remote to force the dice into certain orientations. As far as I know, the dice behave as if they are fair when the remote control is not in force. For the sake of this discussion, let us suppose the dice behave fairly when the remote control is not in force.

Suppose for the sake of argument I made this claim: “CSI indicates intelligent agency.”

Suppose further someone objected, "Sal, that's an unprovable, meaningless claim, especially since you can't define what 'intelligent agency' is."

I would respond by saying, "For the sake of argument, suppose you are right. I can still falsify the claim of CSI for certain events, and therefore falsify the claim of intelligent agency, or at least render the claim moot or irrelevant."

Indeed, that is how, in specialized cases, the ID claim can be falsified, or at least rendered moot, by falsifying the claim that an artifact or event exhibited CSI to begin with.

To illustrate further, suppose hypothetically someone (let us call him Mr. Unsuspecting) was unfamiliar with and naïve about the fine points of high-tech devices such as these dice. One could conceivably mesmerize Mr. Unsuspecting into thinking some paranormal intelligence was at play. We let Mr. Unsuspecting play with the dice while the remote control is off, and thus Mr. Unsuspecting convinces himself the dice are fair. Suppose further that Mr. Unsuspecting hypothesizes: "if the dice roll certain sequences of numbers, a paranormal intelligence was in play."

We then let the magician run the game and "magically" call out the numbers before the rolls: 6 8 6 8 6 8 ….

When the remote control is running the show, the distribution function is changed by the engineering of the dice and the remote-control mechanism. The observer thus concludes CSI by comparing the chance hypothesis against the actual outcome: 6 8 6 8 6 8….
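The shift in distribution can be sketched in code. This is a hypothetical simulation, not a model of the actual device: a pair of fair dice produce the familiar triangular distribution of sums, while a "remote-controlled" pair simply emits whatever sums the operator wants.

```python
import random
from collections import Counter

rng = random.Random(0)
N = 36_000

# Remote off: two fair dice; sums follow the triangular distribution,
# with P(sum=6) = P(sum=8) = 5/36.
fair_sums = Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(N))

# Remote on (hypothetical model of the device): the operator forces the
# sums to alternate 6, 8, 6, 8, ... so the distribution collapses.
forced_sums = Counter(6 if i % 2 == 0 else 8 for i in range(N))

print(round(fair_sums[6] / N, 3), round(fair_sums[8] / N, 3))  # each near 5/36 ≈ 0.139
print(forced_sums[6], forced_sums[8])
```

An observer who estimated the distribution only from remote-off play would have no reason to suspect the remote-on distribution even exists.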

The magician then explains what was really going on and that no paranormal intelligence was involved. Hence, the original hypothesis of a paranormal intelligence (by Mr. Unsuspecting) was falsified, and there was no paranormal intelligence as he supposed initially.

It would be fair to say Mr. Unsuspecting should then formulate an amended CSI hypothesis, given that the whole charade was intelligently designed with modern technology and that the designer of the charade was available to explain it all. Mr. Unsuspecting's original distribution function (equiprobable outcomes) was wrong, so he inferred CSI for the wrong reasons. His original inference to CSI is faulty not because his conclusion was incorrect (in fact his conclusion of CSI was correct, but for the wrong reasons) but because his inferential route was wrong. Further, his hypothesis of a paranormal designer was totally false; a more accessible human designer was the cause.

The point is that the original hypothesis of CSI, or any claim that an object evidences CSI, can be falsified or amended by a future discovery, at least in principle. The whole insistence by Darwinists that IDists get the right distribution before making a claim is misplaced. Claims can be put on the table to be falsified or amended, and there may be many nuances that amend our understanding of the situation in light of new discoveries.

IDists can claim that Darwinian evolution in the wild today will not increase complexity on average; an observed increase in complexity in the present day would falsify some of ID's claims about biology. That claim can be falsified. FWIW, it doesn't look like it will be falsified; it is actually being validated, at least at first glance:
The price of cherry picking for addicted gamblers and believers in Darwinism

Suppose we presumed some paranormal or supernatural disembodied intelligence was responsible for homochirality in the first life. If some chemist figures out a plausible route to homochirality, then the CSI hypothesis for homochirality can be reasonably, or at least provisionally, falsified, and hence the presumed intelligent-agency hypothesis for the homochirality of life (even if intelligence is poorly defined to begin with) is also falsified.
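For concreteness, here is the kind of chance-hypothesis arithmetic the post alludes to, as a hedged sketch: assume each residue in a chain is independently L- or D-handed with probability 1/2 (the racemic expectation); then a fully homochiral chain of n residues has probability 2 × 2^−n (all-L or all-D). The chain length of 300 is an illustrative figure, not a measured one.

```python
import math
from fractions import Fraction

def p_homochiral(n):
    """Probability that an n-residue chain is all-L or all-D, assuming each
    residue is independently L or D with probability 1/2 (racemic chance)."""
    return 2 * Fraction(1, 2) ** n

p = p_homochiral(300)            # illustrative chain length
bits = -math.log2(float(p))      # surprisal of the observation under this hypothesis
print(bits)  # 299.0 bits
```

Under this (assumed) distribution, observing homochirality carries 299 bits of surprisal; a different chemical route would imply a different distribution and a different number, which is exactly the sense in which the claim is open to falsification.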

Does it bother me that the CSI of homochirality could be falsified? Yes, inasmuch as I'd like to know for sure the Designer exists. But I'm not betting on its falsification anytime soon. And formally speaking, there could have been a Designer designing the laws of chemistry, so even if the original CSI hypothesis was formulated with the wrong distribution function, there could still be an Intelligent Designer involved….

The essay was meant to capture the many nuances of the Design debate. It’s far more nuanced than I supposed at first. That said, I’d rather wager on the Designer than Darwin, any day…

ACKNOWLEDGEMENTS

RDFish for spawning this discussion. Mark Frank and Elizabeth Liddle for their criticisms concerning other possible distribution functions rather than just a single presumed one. And thanks to all my ID colleagues and supporters.

[Denyse O’Leary requested I post a little extra this week to help out the news desk. I didn’t have any immediate news at this time, so I posted this since it seemed of current interest.]

Comments
F/N: The above responses by EL and KS leave me shaking my head. First, it does not seem to have registered that I have addressed the root problem, as the decisive case: forming the molecules of life. And, thanks to the racemic forms that routinely form in non-biological syntheses, we DO know the relevant distribution, which already shows that the proposed Darwinian path cannot get going without a blind chance and necessity answer to the origin of the info in just the fact of homochirality. No such answer is reasonable. The only empirically warranted source for FSCO/I, as is required here, is design. So, immediately, there is design sitting at the table from the root up.

Next, it does not seem to register that oftentimes a problem that is hard when phrased one way becomes much easier when transformed. In this case, going to information allows us to use the empirically evaluated information values to resolve the matter, e.g. those of Durston et al. As was worked out here, we know we need info beyond a threshold, we have conservative thresholds at 500 - 1,000 bits, and we know the info content of functional protein families of interest. We deduced:

Chi_500 (solar system) = I*S - 500, in bits beyond the solar-system threshold,

where also I = - log_2(p).

The answer from the information-threshold expression is that, again, we can easily see that protein families are well beyond the FSCO/I threshold within which blind chance and mechanical necessity are plausible explanations of the functional molecules of life. Using the three examples from Durston I have commonly cited, and the solar-system threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
But that is not all, since we know the info values empirically, and we know the relationship that I = - log_2 (P), we can deduce the P(T|H) values for all relevant hypotheses that may have acted by simply working back from I:
RecA: 242 AA, 832 fits, P(T|H) = 3.49 * 10^-251
SecY: 342 AA, 688 fits, P(T|H) = 7.79 * 10^-208
Corona S2: 445 AA, 1285 fits, P(T|H) = 1.50 * 10^-387
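The arithmetic in the two lists can be checked mechanically. Assuming the fits value is taken as the information I in bits, Chi is I minus the 500-bit threshold and P(T|H) = 2^−I; since 2^−832 underflows an ordinary float, the sketch below carries the value as a mantissa and a base-10 exponent.

```python
import math

LOG10_2 = math.log10(2)

def chi_500(fits):
    """Bits beyond the 500-bit solar-system threshold used in the comment."""
    return fits - 500

def p_from_bits(fits):
    """P(T|H) = 2^-fits, returned as (mantissa, base-10 exponent) to avoid underflow."""
    e = -fits * LOG10_2
    exp10 = math.floor(e)
    return 10 ** (e - exp10), exp10

for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    mant, exp10 = p_from_bits(fits)
    print(f"{name}: Chi = {chi_500(fits)} bits beyond, P(T|H) = {mant:.2f} * 10^{exp10}")
```

The printed values match the figures quoted in the comment, which is only to say the log transform was applied consistently; it says nothing about whether the fits values themselves capture the relevant hypotheses, which is the point in dispute below.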
That is, the power of the transform allows us to apply an empirical value to what is a more difficult problem to solve the other way. Once we do know the info content of the protein families by a reasonable method, we can then work the expression backwards to see the value of P(T|H). And so, lo and behold, we do not actually have to have detailed expositions on H to do so; once we have the information value, we automatically cover the effect of H etc. As was said long since, but dismissively brushed aside by EL and KS.

And consistently these are probabilities that are far too low to be plausible on the gamut of our solar system, which is the ambit in which body-plan-level evolution would have had to happen. (Indeed, I could reasonably use a much tighter threshold, the resources of earth's biosphere, but that would be overkill.)

Now, do I expect EL and KS to accept this result, which boils down to evaluating the value of 2^-I, as we have I in hand empirically? Not at all; they have long since shown themselves to be ideology-driven and resistant to reason (not to mention enabling of slander), as the recent example of the 500-H coin flip exercise showed to any reasonable person.

But this does not stop here. Joe is right: there is NO empirical evidence that Darwinian mechanisms are able to generate significant increments in biological information and thence new body plans. All of this -- things that are too often promoted as being as certain as, say, the orbiting of planets around the sun, or gravity -- is extrapolation from small changes, most often loss of function that happens to confer an advantage in a stressed environment, such as under insecticide, or sickle cells that malaria parasites cannot take over. Of course, such is backed by the sort of imposed a priori materialism I highlighted earlier today.
What is plain is that the whole evolutionary materialist scheme for origin of the world of life, from OOL to OO body plans and onwards to our own origin, cannot stand from the root on up. KF

kairosfocus
July 3, 2013 at 04:07 PM PDT
It has become very clear that we do not need to determine P(T|H) for all “Darwinian and material mechanisms”, because no one from the Darwinian camp can even say there is a feasibility for Darwinian and material mechanisms...

Joe
July 3, 2013 at 01:01 PM PDT
Sal,

The entire premise of this thread is odd. You seem to be saying in effect: "I can't actually determine whether X has CSI, because I can't determine P(T|H) for all 'Darwinian and material mechanisms'. However, I can hypothesize that it does by assuming a single distribution and computing CSI from that assumed distribution. I can then infer design. But that's okay because my hypothesis can be falsified."

Well, sure, you could do that. But who is going to be persuaded if you can't calculate the actual CSI value, or at least establish a believable lower bound for it? If you can't determine P(T|H) (or establish an upper bound on it), then you can't justify your CSI value. If you can't justify your CSI value (or lower bound), then you can't justify the design inference.

P(T|H) is the key, and you can't get P(T|H) by looking at just one distribution (and the associated hypothetical mechanism). You have to consider all possible "Darwinian and material mechanisms", or at least all that have sufficiently high probabilities to make a difference in the final P(T|H).

keiths
July 3, 2013 at 11:34 AM PDT
Lizzie,
A null hypothesis doesn’t magically become a different null hypothesis when you log transform the probability distribution.
You underestimate the power of the Designer. Poof!

keiths
July 3, 2013 at 10:55 AM PDT
Not a "cheap rhetorical shot" at all, KF. You have shown no evidence at all that you understand the fundamental principle of null hypothesis testing, namely, that you must compute the expected distribution of your data under the null you want to reject. If the only null you have computed the expected probability distribution for is "random independent draws", then that is the only null you are entitled to reject if you make an observation that falls in your rejection region. You can't reject all other nulls just because you have rejected that one, and "Darwinian mechanisms" are not "random independent draws".

Elizabeth B Liddle
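A toy computation of the point about nulls, using the dice from the OP (the sequence length and the rigged mechanism are hypothetical): the same observed sequence can be astronomically improbable under one null and certain under another, so rejecting the fair-dice null says nothing about other mechanisms.

```python
# The observed sequence of sums from the OP: 6 8 6 8 6 8 ... (10 rolls here).
seq = [6, 8] * 5

# Null 1: two fair dice rolled independently; P(sum=6) = P(sum=8) = 5/36.
p_fair = (5 / 36) ** len(seq)

# Null 2 (a different, hypothetical mechanism): a rigged device that
# deterministically alternates 6 and 8. This exact sequence then has
# probability 1, so this null is NOT rejected by the same observation.
p_rigged = 1.0

print(f"P(seq | fair dice)   = {p_fair:.2e}")
print(f"P(seq | rigged dice) = {p_rigged}")
```

Only the first null had its expected distribution computed, so only the first null can be rejected; the rejection region for one hypothesis carries no verdict on the other.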
July 3, 2013 at 06:39 AM PDT
EL: Cheap, predictable rhetorical shot, and wrong. Notice, I am starting with an upper threshold on the probabilities/lower threshold on the info, based on the well-known constraints imposed by thermodynamics and reaction kinetics at the formation of relevant molecules for OOL. No root, no shoot, and no tree. The challenges get much steeper from there on, and as you by now know, but as usual are denying, a needle-in-haystack sampling challenge like this is decisive. KF

kairosfocus
July 3, 2013 at 06:06 AM PDT
And still no positive evidence to support Darwinian evolution.

Joe
July 3, 2013 at 05:55 AM PDT
As I rethought some issues, I realized this was an instructive illustration for me. Mr. Unsuspecting is like us. We have limited knowledge; we grope around in the dark. The equiprobable hypothesis is reasonable given that Mr. Unsuspecting examined the dice and put forward his best hypothesis for a distribution. It turned out the distribution was wrong in a sense, and right in another. When the intelligent designer in this case wanted to act, he was able to change the distribution, so in a sense the original CSI inference was correct (and yes, I'm somewhat retracting my OP in that sense).

Further, it shows why the inference of who the Designer is, is unwarranted from CSI. Many at UD believe the Intelligent Designer of life is God, but formally speaking that cannot be inferred from CSI; even if it is a true statement, formally it would be a non-sequitur, where the conclusion does not follow from the limited set of premises.

I felt it was also important to raise the question of which distribution is chosen. I tried to explain that one can start with a working hypothesis of which distribution is correct or even approximately correct. CSI can be asserted with respect to:

1. a presumed distribution
2. a given recognizable pattern

It does not mean the presumed distribution is correct. If it badly approximates the real probabilities, the presumption can in principle be falsified, and possibly the CSI inference itself. In some cases, a modified distribution will still result in CSI.

I was also trying to address some concerns which I felt were accurate, namely by RDFish. In my opinion he is right to be concerned with the definition of intelligence and to point out that "CSI implies intelligence" is an axiomatic belief, not a formally provable statement. I think it is a reasonable belief; I accept it personally as true; it can at least be a working hypothesis, but it doesn't have the strength of a math theorem. The connection is also not falsifiable, but that doesn't bother me.
Operationally speaking, a claim of CSI for an object is falsifiable, and that is good enough for that part of ID to be science. After all, it is perfectly legitimate scientifically to make an observation, provide an estimated distribution, and then expose the hypothesized distribution to falsification. That is science. I have no problem calling that science. I think that's what Bill had in mind when he said there is no mandate that one has to proceed from the assumption the Designer is real (even though he, and many IDists who are creationists, believe He is real):
Thus, a scientist may view design and its appeal to a designer as simply a fruitful device for understanding the world, not attaching any significance to questions such as whether a theory of design is in some ultimate sense true or whether the designer actually exists. Philosophers of science would call this a constructive empiricist approach to design
I agree with that. Even though I feel the Designer is real, in my view that conclusion is formally unprovable; it is one assumed by many IDists, and it is reasonable given the resemblance of design, as even Dawkins said:
Some of the greatest scientists who have ever lived, including Newton, who may have been the greatest of all, believed in God. But it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming. (Richard Dawkins)
CSI formally demonstrates that resemblance, even if the distribution function is wrong. If the distribution function used to infer CSI is wrong, then the CSI hypothesis can be falsified. A good example is the craters on the moon. Some scientist long ago saw the craters looking like perfect circles and inferred design. The CSI inference was faulty and was then falsified. The same could be argued of the Chladni plate experiment, if one declared CSI to explain the patterns. See: https://uncommondescent.com/intelligent-design/order-is-not-the-same-thing-as-complexity-a-response-to-harry-mccall/

An ID hypothesis can be falsified by falsifying the CSI claim that underlies it. The assertion that "CSI can only be generated by intelligence" is assumed even if:

1. intelligence is left undefined, or even poorly defined
2. the statement is wrong to begin with
3. the statement is unprovable

That's not the claim that is empirically important; the claim that is empirically important is the CSI claim for an object. That claim can be falsified. And if that claim is falsified, the ID claim for that object is potentially (not necessarily) falsified as well. That is definitely the case for homochirality.

scordova
July 3, 2013 at 05:27 AM PDT
KF: it is plain to me that you are speaking in the way that a person unfamiliar with null hypothesis testing would speak. Please explain how setting H as "random independent draw" "automatically" also "take[s] into account all real world relevant processes." How can it? A null hypothesis doesn't magically become a different null hypothesis when you log transform the probability distribution.

Elizabeth B Liddle
July 3, 2013 at 05:12 AM PDT
EL and KS: It seems you are both speaking in the way that one unfamiliar with information metrics would speak, or else the way that one ruthlessly seeking to exploit the ignorance of those unfamiliar with such would speak. The point of such a metric is that its statistical base and assessment of redundancies in a space of possibilities will automatically capture the pattern of possible outcomes. In the relevant case of Durston et al, they looked at the flat random case as the null, then progressively applied the observed, empirically grounded frequencies to see the info content of proteins known to be formed by expressing genetic codes. (As you will both recall, probabilities will express themselves stochastically, so it is a valid approach to look back from the statistics. A simple case is the known statistical pattern of English, such as that E is normally about 1/8 of text.) If you want an a priori approach, it is quite obvious that you have applied Bernoulli indifference in cases where it suits your rhetorical agenda, e.g. on the 500H exercise.

Beyond that, there are no known merely physical energetic preferences that drive homochirality of monomers of either proteins or R/DNA, as the issue is a matter of geometry. A spark-in-gas exercise will form a racemic mix, as will any normal synthesis. That 50:50 RH/LH pattern already tells us equiprobable. Information content: 1 bit per monomer. It is the biological world that makes homochiral molecules, and it does so by complex assembly processes -- begging the question of that warm little pond or the like.

So, already we have one bit of info per monomer in an informational, homochiral protein or D/RNA. This, BTW, is not usually reckoned with in calcs. But for OOL on up, it is vital. The geometry is vital, and a racemic mix -- what should be expected on energy -- is not going to work. That is, for a 300-monomer protein we have 300 expressed bits, and a lot more if we look at the system that normally makes such.
The RNA that codes for it, at 3 monomers per AA-specifying character, is 3 bits per character right there on known energy. So, just to get to a system that gives a coded string to specify ONE typical protein, we are already past the solar-system FSCO/I threshold. And we need many hundreds of complex polymers to be in shouting distance of a functional living cell with gated membrane, metabolism and von Neumann self-replicator. Say, 300 proteins and 300 coded RNAs.
300 * 300 = 9 * 10^4 bits
300 * 900 = 2.7 * 10^5 bits
Total, already: 3.6 * 10^5 bits
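The tally above can be reproduced directly; the inputs are this comment's own assumptions (300 proteins at 1 chirality bit per monomer over 300 monomers, and 300 coding RNAs at 900 bits each), not measured values.

```python
# Assumptions as stated in the comment, not measured values.
protein_bits = 300 * 300   # 300 proteins x 300 chirality bits each
rna_bits = 300 * 900       # 300 coding RNAs x 900 bits each
total_bits = protein_bits + rna_bits

print(f"{protein_bits:.1e} + {rna_bits:.1e} = {total_bits:.1e} bits")
print(f"ratio to 500-bit threshold: {total_bits / 500:.0f}x")
```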
This is already two orders of magnitude beyond the threshold for what is credibly produced by blind chance and mechanical necessity, on needle-in-haystack grounds. Coming out of the gate at OOL, on simple energy considerations driving homochirality alone. I could go on to talk about the problems of getting peptide bonds, known to be about 50% of bonds for AA chains formed outside biological assembly control. Another bit per monomer at 300 characters per typical protein. But this is surplus to needs already. We only need an upper bound, and we are well beyond that already by orders of magnitude.

Do I need to remind that for each additional bit of info the needle-in-haystack search space DOUBLES? (At 500 bits, on solar-system resources of 10^57 atoms and 10^17 s -- our effective cosmos for atomic interactions -- we are already looking at searching a cubical haystack 1,000 LY on the side at the level of taking just one straw-sized sample. At 1,000 bits, the blind search resources of the observed cosmos are swallowed up even more spectacularly.)

Why am I insisting on starting with OOL? Because it is the root of the TOL, and without a root there is no tree. We already see that the probability of cell-based life/info content of cell-based life -- the two are just a log transform apart and so are conceptually equivalent; log transforming just gives a more familiar and easy-to-work-with form -- is such that no blind chance and mechanical necessity process on the gamut of our observed cosmos is an empirically credible source.

Now, we do have a single, massively empirically warranted source of FSCO/I: design. So reliable is this in cases where we can directly check that we are logically justified, on induction, in taking this as a reliable sign of design. So, regardless of the talking points of ideologues a priori committed to materialism and padlocked in mind, and their fellow travellers, I conclude the obvious: life from the ground up is designed.
So, design is on the table from the root on up, and that makes sense then of how we have sudden appearance in the fossil record of major new forms that can be shown to need about 10 - 100+ mn additional bits of info in genomes to cover cell types, tissues and systems plus regulation to unfold from an embryonic or equivalent state. Life is full of FSCO/I, and save to those locked in mind into a system that is self-referentially incoherent already on worldview considerations, and necessarily false as a result, or their fellow travellers, life is chock full of signs of design. And it is plain that it is question-begging a prioris that are driving resistance to this obvious result in an information age.

Since people are liable to try to falsely assert that the Lewontin remark is quote-mined, let me here cite instead the US NSTA board, in 2000:
The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts . . . . [S]cience, along with its methods, explanations and generalizations, must be the sole focus of instruction in science classes to the exclusion of all non-scientific or pseudoscientific methods, explanations, generalizations and products [--> atmosphere poisoning]. . . . Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence [--> which means anything that can be grossly extrapolated like pepper moths and finch beaks or antibiotic or insecticide resistance without regard to informational barriers] that are, at least in principle, testable against the natural world. Other shared elements include observations, rational argument, inference, skepticism, peer review and replicability of work . . . . Science, by definition, is limited to naturalistic methods and explanations [--> question-begging radical ideologically driven redefinition of science with no proper basis in history or phil of sci or inductive logic] and, as such, is precluded from using supernatural elements in the production of scientific knowledge. [NSTA, Board of Directors, July 2000. Emphases and comments in brackets added.]
That is what the radicals want to indoctrinate our children in, and they have already, in Kansas, threatened to hold children hostage to push it in. (The letters on record are plain about that. Don't make me cite and discuss these in extenso. I am fully prepared to do so at a moment's notice, as a look at that part of my always-linked note will show.) Game over. KF

kairosfocus
July 3, 2013 at 05:04 AM PDT
KF: you may have been confused by this passage in Durston et al's paper:
Physical constraints increase order and change the ground state away from the null state, restricting freedom of selection and reducing functional sequencing possibilities, as mentioned earlier. The genetic code, for example, makes the synthesis and use of certain amino acids more probable than others, which could influence the ground state for proteins. However, for proteins, the data indicates that, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids [30], actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than ordered [31]. For this reason, the ground state for biosequences can be approximated by the null state. The value for the measured FSC of protein motifs can be calculated by relating the joint (X, F) pattern to a stochastic ensemble, the null state in the case of biopolymers that includes any random string from the sequence space.
In fact, what reference 31 (Weiss O, Jimenez-Montano MA, Herzel H: Information content of protein sequences. Journal of Theoretical Biology 2000, 206:379-386) shows is that functional proteins are highly incompressible sequences (the opposite of Dembski's criterion for Specification, interestingly). So what Durston et al did was to assume random independent draw from a flat (equiprobable) distribution of sequences. That seems reasonable as far as it goes. But their Fits calculation was still based on P(T|H) where H is random independent draw. Nothing in that paper indicates that somehow the calculation "automatically take[s] into account all real world relevant processes". Clearly it does not, and the Darwinian hypothesis is NOT a hypothesis of "random independent draw". And, as Dembski says, "H" in P(T|H) is "the relevant chance hypothesis taking into account Darwinian and other material mechanisms".

Elizabeth B Liddle
July 3, 2013 at 04:47 AM PDT
KF:
This is an empirical, post facto metric that will automatically take into account all real world relevant processes.
This is key. Can you explain how defining H as random draw "automatically" takes into account "all real world relevant processes"? The Durston metric assumes random draw.

Elizabeth B Liddle
July 3, 2013 at 03:21 AM PDT
KF, doing a log transform does nothing for the argument; it just means you can add instead of multiply. You seem to think that doing a log transform of a probability magically converts it into "information". An improbable event under a given hypothesis remains simply an improbable event under that hypothesis, whether you log transform the probability or not. What the transform doesn't do is render an event that is improbable under that one hypothesis improbable under all hypotheses except Design. If you want to reject other hypotheses, then you have to show that the event is also improbable under those hypotheses too. And it doesn't matter whether you use a log transform or not: the log transform makes no difference to the answer.

Elizabeth B Liddle
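The monotonicity point can be made concrete in a few lines, with the 500-bit figure used elsewhere in the thread as the example threshold: rejecting when p < alpha and rejecting when -log2(p) > -log2(alpha) are the same rule.

```python
import math

THRESHOLD_BITS = 500
alpha = 2.0 ** -THRESHOLD_BITS  # the same threshold expressed as a probability

def reject_by_probability(p):
    return p < alpha

def reject_by_information(p):
    return -math.log2(p) > THRESHOLD_BITS

# Because -log2 is strictly decreasing, the two rules agree for every p:
# the transform changes multiplication into addition, not the verdict.
for p in (2.0 ** -10, 2.0 ** -499, 2.0 ** -501, 2.0 ** -600):
    assert reject_by_probability(p) == reject_by_information(p)
print("verdicts identical under both formulations")
```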
July 3, 2013 at 02:58 AM PDT
KF, In any case, your comment makes no sense. I have no objection to "carrying out the log transformation", but to do that I need to know the value of P(T|H). That's what I'm taking the log of. As I said to Sal:
To come up with a CSI value you need to compute P(T|H), which is the probability of getting homochirality via “Darwinian and other material mechanisms” (Dembski’s words). How on earth are you going to compute that probability? You certainly can’t model it as a coin flip scenario — that would be pure chance.
What is the probability of getting homochirality via evolution? Please show your work.

P.S. Does the word 'homochirality' make you a little nervous? :)

keiths
July 3, 2013 at 02:56 AM PDT
KF,

Why are you addressing RDF when he hasn't even commented on this thread?

keiths
July 3, 2013 at 02:51 AM PDT
RDF et al: Why do you insist on refusing to carry out the log transformation that renders -log_2(P(T|H)) into what it means, INFORMATION content? Immediately as that is done, we see that the pivotal point of the Dembski 2005 metric is information beyond a threshold that can reasonably be shown to be less than or about 500 - 1,000 bits. That, in a context where the relevant "observers" are every atom in our solar system, every 10^-14 s, or every atom in our observed cosmos, every 10^-45 s.

You know or should know that the log reduction and simplification have long been before us all, cf. here. And in that context, information content, which for something like DNA can be directly observed, is readily evaluated and can be seen as automatically taking into account relevant probabilistic hyps; e.g., try the Durston metrics of null, ground and functional states for proteins assembled using genes and clustered into families across the domain of life. This is an empirical, post facto metric that will automatically take into account all real-world relevant processes. And the verdict of this metric is plain: well beyond the threshold where design is the only credible causal explanation. KF

kairosfocus
July 3, 2013 at 02:48 AM PDT
F/N 1: This sort of thing is why houses now tend to insist on transparent dice, tossed against a studded wall that the dice then bounce off to roll on the table. I would not now trust anything that looks like opaque dice, and I would be wary of magnetisable dice dots and surfaces (even with transparent dice). KF

kairosfocus
July 3, 2013 at 02:39 AM PDT
Sal, The real question is why you would infer the presence of CSI, and therefore design, from homochirality in the first place.
We infer it based on certain assumptions; those assumptions could be wrong, or they could be good approximations. Nothing stops anyone from putting a CSI claim on the table and exposing it to falsification by future discovery. The simple claim is that homochirality is inconsistent with the chance hypothesis, starting from a pre-biotic soup.
How on earth are you going to compute that probability? You certainly can’t model it as a coin flip scenario — that would be pure chance.
We know the probabilities for soups being 50% or close to it; further, in the polymerized state, we know empirically it approaches 50% (for most amino acids; I seem to recall one amino acid did strongly favor one orientation, though I can't remember which one) over time unless there is a maintenance mechanism, since polymerized amino acids racemize, as seen in the lab. Obviously a material mechanism can create homochirality, namely a living organism. But in the specific prebiotic soups that have been so far conceived? No. The CSI claim might be wrong, but it can be justifiably suggested and later falsified.

What proof do Darwinists have that complexity increases in the wild? We sure don't see average increases in the wild today, but it doesn't stop Darwinists from accepting this claim in spite of disagreeable observations. Contrast this with the fact that IDists propose a reasonable distribution based on chemistry, yet I see Darwinists ignore obvious distributions from data in the wild not consistent with their theory. Sorry, I can't help noticing the double standard.

OOL researchers are certainly invited to keep trying. I hope they will. Some IDists think it won't be falsified as CSI; a few, Dembski included, think there might be a simple chemical route to homochirality. I think even if there were, it's a moot point, since the amino acids racemize in the polymerized state anyway.

Thanks for your comment.

scordova
July 2, 2013 at 09:31 PM PDT
Don't Darwinists face a similar problem when confronted with the possibility that the Earth might have originally been seeded with some type of life?
A new study suggests that there are as many as 60 billion habitable planets orbiting red dwarf stars in the Milky Way alone—twice the number previously thought and strong evidence to hint that we may not be alone.
Wow, it's getting crowded out there! Who knows, maybe geogenic OOL will be thrown under the bus soon. An exciting new bus is coming that will jump-start evolution! All aboard! ;-)

Querius
July 2, 2013 at 09:20 PM PDT
Sal,

The real question is why you would infer the presence of CSI, and therefore design, from homochirality in the first place. To come up with a CSI value you need to compute P(T|H), which is the probability of getting homochirality via "Darwinian and other material mechanisms" (Dembski's words). How on earth are you going to compute that probability? You certainly can't model it as a coin flip scenario -- that would be pure chance.

keiths
July 2, 2013 at 09:02 PM PDT
