Uncommon Descent - Serving The Intelligent Design Community

The Effect of Infinite Probabilistic Resources on ID and Science (Part 1)

One common critique of intelligent design is that, since it is based on probabilities, with enough probabilistic resources it is possible to make random events appear designed. For instance, suppose that we live in a universe with infinite time, space, and matter. Now suppose we've found an artifact that, to the best of our knowledge (assuming finite probabilistic resources), passes the explanatory filter and exhibits CSI. However, one of the terms in the CSI calculation is the probabilistic resources available. If the resources are indeed infinite, then the calculation will never give a positive result for design. Consequently, if the infinite-universe critique holds, then not only does it undermine ID, but every huckster, conman, and scam artist will have a field day.
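To make that term concrete, here is a minimal Python sketch, assuming the specified-complexity formulation from Dembski's 2005 "Specification" essay, chi = -log2(R * phi_S(T) * P(T|H)), with R playing the role of the probabilistic resources. The 1,000-tails event, the resource figures, and the choice to set the specification-count term to 1 are illustrative assumptions, not something taken from the post:

import math

def chi_bits(log2_p_event, log2_resources, log2_spec_count=0.0):
    # chi = -log2(R * phi_S(T) * P(T|H)), computed in log space to avoid underflow.
    # Positive chi -> design is inferred; negative -> the filter never triggers.
    return -(log2_resources + log2_spec_count + log2_p_event)

log2_p = -1000.0                          # P(T|H) = 2**-1000 for 1,000 tails in a row
log2_R_finite = 120 * math.log2(10)       # Dembski's ~10**120 universal probability bound

print(chi_bits(log2_p, log2_R_finite))           # ~ +601 bits: design inferred
print(chi_bits(log2_p, 10_000 * math.log2(10)))  # much larger R swamps the event: chi is negative

As R is allowed to grow without bound, chi goes negative for any fixed event, which is the sense in which infinite probabilistic resources block every design inference.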

Say you and I have a bet: I flip a coin, and whenever it comes up heads I pay you $100, and whenever it comes up tails you pay me $1. Seems like a safe bet, right? Now, say that I flip 100 tails in a row, so you now owe me $100. Would you be suspicious? I might say that 100 tails is just as likely, probabilistically speaking, as 50 tails followed by 50 heads, or alternating tails and heads, or any other permutation of 100 flips, which would be mathematically correct. To counter me, you bring in the explanatory filter and say, “Yes, 100 tails is equally probable, but it also exhibits CSI because there is a pattern it conforms to.” In a finite universe, this counter would also be mathematically valid. I'd be forced to admit foul play. But if we lived in an infinite universe, then even events that seem to exhibit CSI will turn up by chance, and I could claim there is no rational reason to suspect foul play. I could keep this up for 1,000 or 1,000,000 or 1,000,000,000,000 tails in a row, and you'd still have no rational reason to call foul play (though you may have rational reason to question my sanity).
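A quick sketch of why both claims in that exchange can be true at once; the numbers below simply assume a fair coin:

from math import comb

p_specific = 0.5 ** 100                 # any one fully specified 100-flip sequence,
print(p_specific)                       # all-tails or otherwise: ~7.9e-31

# What the explanatory filter keys on is how many sequences match a simple
# specification, not the probability of one particular sequence:
p_all_tails  = 1 * p_specific                  # exactly one sequence is "100 tails"
p_half_tails = comb(100, 50) * p_specific      # ~1.01e29 sequences have 50 tails and 50 heads
print(p_all_tails, p_half_tails)               # ~7.9e-31 vs ~0.08

Every individual sequence is equally improbable, but the event "matches the short pattern 'all tails'" is not.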

Not only do many incredible events become reality, but we begin to lose our grip on reality itself. For instance, it is much more likely, in terms of a priori probability, that we are merely Boltzmann brains [2], instantiated with a momentary existence only to disappear the next instant. Furthermore, it is much more likely that our view of reality itself is an illusion and that the objective world is merely a random configuration that just happens to give us a coherent perception. As a result, in an infinite universe, our best guess is that we are hallucinating, instantaneous brains floating in space, or perhaps worse.

A more optimistic person might say, “Yes, but such a pessimistic situation only exists if we make assumptions about the a priori probability, such as that it is a uniform or nearly uniform distribution. There are many other distributions that lead to a coherent universe where we are persistent beings with a grasp on objective reality. Why make the pessimistic assumption instead of the optimistic one?”

Of course, this is good advice whenever we have such a choice of alternatives. Unfortunately, it ignores the mathematical structure of the problem. The proportion of coherent distributions to incoherent distributions drops off exponentially, and an exponential drop-off becomes effectively binary as its argument grows. This means that as probabilistic resources approach infinity, the proportion of coherent distributions approaches zero. Nor does the situation improve if we move to probability distributions over probability distributions: the problem remains unchanged, or even gets exponentially worse, with every additional layer.
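The post does not define "coherent" formally, so the following is only an illustrative sketch of the shape of that argument: if coherence required each of N independent degrees of freedom to fall in a favourable region of prior probability p < 1, the coherent fraction would be p to the power N, which collapses toward zero as N grows.

# Illustration only: exponential decay of a "coherent" fraction p**N.
p = 0.9                      # assumed per-component chance of landing in the favourable region
for n in (10, 100, 1_000, 10_000):
    print(n, p ** n)
# 10 -> ~0.35, 100 -> ~2.7e-05, 1000 -> ~1.7e-46, 10000 -> underflows to 0.0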

The end result is that with infinite probabilistic resources the case for ID may be discredited, but so is every other scientific theory.

However, perhaps there is a rational way to preserve science even if there are infinite probabilistic resources. If so, what effect does this have on ID? Maybe ID even has a hand in saving science? More to follow…

[1] http://en.wikipedia.org/wiki/Law_of_large_numbers
[2] http://en.wikipedia.org/wiki/Boltzmann_brain

Comments
Part of our misconceptions about ID is rooted in the fact that it is described and treated by most of us as a theory that has to be tested or proven true. In reality, ID is not a theory, and it is not a religious concept either. It is a more or less unfortunate label we put on a fact of reality any one of us can observe: while not perfect, everything in the universe is put together amazingly well, the best way possible under the circumstances. You do not have to test the fact that the cat has four legs, or that the four figures of former presidents on Mount Rushmore are not a product of nature. "Intelligent design" is everything we see around us. Of course, the question is what has caused these amazing assemblages of particles to exist and to function the way they do? Well, everything has a purpose and it is up to us to find out what that purpose is. Religious people say there is a supernatural creator god that governs their business and was in charge of the making of the world. Their proof is 'the Bible says so.' In other words, a huge amount of nothing. In Buddhism, though, there is no such thing as a creator god but a source everything there is comes from. And that makes sense, while quantum physics more than supports the idea. Everything is energy and information, and energy and information must be emitted by something, something we know nothing about for now. So let's not dismiss ID because we do not understand what caused our world to be what it is, the same way we don't dismiss gravity even if we don't know yet what makes gravity gravity. All we can do is to observe and acknowledge the reality of what we see and make an effort to understand it. That said, neither religion nor the theory of evolution is of help with that. http://www.atimeofchange.net/2012/05/intelligent-design-aspect-of-reality-no.html
Gnosius
July 25, 2012 at 11:52 AM PDT
Hardly the most favorable? I've assumed everything you could possibly identify to get OOL off the ground. I'm all ears as to ideas for any intermediary stages on the way to a single protein or DNA/RNA string. What is proposed? Part of a protein? Part of a DNA/RNA string? In reality, this is all way too generous for abiogenesis anyway. Even a protein or several proteins don't give you life; nor does a complete DNA sequence. Focusing on the smallest, simplest, easiest known item is more than fair -- it is overly generous. And the calculations are pretty sobering.
Eric Anderson
July 30, 2011 at 10:31 AM PDT
#51 Eric - I admit that what you describe could be seen as a set of conditions. But they are hardly the most favourable possible. They assume that life started right off with a protein or DNA/RNA string - the constituents were all mixed together and happened to form such a string. They omit any consideration of intermediary stages.
markf
July 29, 2011 at 11:09 AM PDT
markf @47: "The best you can talk of is the CSI relative to conditions X,Y and Z. But we never hear what the conditions are." Agreed, that conditions need to be specified. We do hear of them. As an example, for OOL we can assume the most favorable conditions possible and then run a probability analysis. Indeed, this is typically what Dembski and other major ID proponents do. Assume you have all the amino acids or nucleotides you want, in the right proportions, in one place, at one time, with favorable reaction conditions, with no interfering cross reactions, with stability and lack of breakdown once formed, in whatever favorable location you want (tide pools, deep sea vents, mud globules, take your pick), with just the right amount of energy to catalyze the reactions, but not too much to destroy the nascent formations (volcanic heat, lightning, take your pick). Now, given all those concessions and assuming all those favorable conditions are met, what are the odds of a single protein forming, or what are the odds of a string of DNA or RNA forming that could code for a single protein? It is precisely this probability calculation that has driven OOL researchers to acknowledge that it won't happen without some as-yet-undiscovered element added to the mix. If we step back and add back in all the conditions we assumed, the odds become many times worse. Even if we have uncertainty in any of these prior conditions, *by definition,* they cannot make the odds more favorable -- only less favorable or the same. Any uncertainty about the prior conditions doesn't mean our basic calculation of the probabilities of the chemical components coming together in a specific way is inaccurate. It can be very accurate and give us a good feel for what we are up against; however, because we've assumed all the prior conditions, we just need to keep in mind that our calculation is also too conservative.Eric Anderson
July 29, 2011 at 08:39 AM PDT
oops: from after the first smiley to "revise that probability" are Scott's words.Elizabeth Liddle
July 29, 2011 at 01:31 AM PDT
ScottAndrews:
Elizabeth, I’m trying very hard to get my head around what you are saying, and I don’t mean that in a knee-jerk critical way. (After visiting other forums I appreciate the civil discussions here so much more.)
:) I suppose where I'm stuck is this: So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it “highly probable” that event A occurred; however new data could cause you to revise that probability. Yes. Indeed there was an interesting example in connection with the dinosaur-bird transition (although that's pre-history, of course) this week. A new fossil has altered the priors regarding Archaeopteryx's position in the lineage.
Must we then exclude probability when evaluating hypothetical events that have never been observed, since there is no data? That doesn’t seem right. For one thing, it leaves the door open to an infinite number of possible explanations, each of which gains credibility because it has never been observed.
Well, a "hypothetical event" is, presumably, an event hypothesised to have taken place on the basis of some data. Otherwise it wouldn't be a hypothesis, it would be fantasy :) A hypothesis is something we propose to account for data. For example, the 65 mya change in fossil biota known as the "KT boundary" is hypothesised to have been the result of an unobserved event (a meteor strike). And, indeed, there is evidence to support such an event (globally iridium-enriched strata at that point in the geologic column) and, furthermore, evidence of a huge crater. So our priors at this point are high that these pieces of evidence are evidence of a massive meteor strike that left a crater, a global iridium deposit, and was instrumental in world-wide extinctions. But if, for example, we were to find evidence that the extinctions were much more gradual than we currently infer, that Chicxulub is actually a vast sinkhole, not a meteor crater, and that the iridium layer has some quite other, non-destructive cause, then that "hypothetical event" would be downgraded as a probability, which is to say, we would have less confidence that that event had occurred. And this comes back to the point I'm afraid I keep banging on about, which is that scientific inferences are always provisional. They are generally the best current model given current data, but always subject to revision given new data. Rarely to radical revision, but constantly to elaboration and minor correction.
Obviously we can’t dismiss any event that has not been observed, otherwise we cannot exist. So that leaves the question, how do we judge between two or more historical possibilities without considering probability?
We do consider probability! I'm sorry if I inadvertently implied otherwise. It's just that frequentist probability is rarely applicable, as it assumes a random sampling from a known population. What we use intuitively, every day, and explicitly, in science, is conditional probability - probability contingent on what we know, and that probability will be subject to constant revision given new data, just as the guy forced to guess the colour of the balls in the bag. I guess my main point (my soap box point!) is that for making scientific inferences, probability is better, in general, conceived in a Bayesian framework, in terms of our confidence in our best current guess, than in frequentist terms i.e. what is the known frequency of this event (as in the frequency of a flipped coin landing on its side). They are actually very different concepts. Which is why Bayesians and Frequentists have such horrible fights.Elizabeth Liddle
July 29, 2011 at 01:27 AM PDT
kairosfocus
Dr Liddle: Quite often a vague estimate, a sign, or an order of magnitude are quite good enough to make a decision.
Often they have to be, kairosfocus. Very often we have to make decisions on sparse information. In fact there is good neuroscience data to support the view that our brain architecture is designed (heh) to make Bayesian probability inferences, and it is fundamental to perception. Our perceptions, in other words, are strongly influenced by our priors, which are continuously adjusted in light of new data. However, at the point of the decision, we have to go with the most likely outcome, given the data currently available to us. But the more data we have, the more likely our decision is to be right!
Sampling theory also tells us that most reasonable samples of a large enough population will be at least roughly typical.
Random samples, yes.
In the case of in-principle infinite pops, the population of text in English is in principle infinite, or at least quasi-infinite.
Yes.
But we can credibly and usefully infer from reasonable samples, as Printers knew long before modern codes were invented.
Infer what?
Info theorists have long made estimates of probabilities of symbols and used them in information metrics, from Shannon’s original paper onwards.
Yes.
We have excellent reason to see that we are well warranted to infer to design on FSCI, analytically and inductively.
But that doesn't follow from what you've said! What is the population from which you have randomly sampled to make what inference?Elizabeth Liddle
July 29, 2011 at 12:29 AM PDT
Eric #45 The point is that the probability of an outcome depends on which prior conditions you include. CSI is calculated using a probability. Therefore the CSI of an outcome depends on the prior conditions that are included. So to talk of "the CSI" of an outcome is nonsense. The best you can talk of is the CSI relative to conditions X,Y and Z. But we never hear what the conditions are. (The CSI is also relative to the specification - but that is a different problem)markf
July 28, 2011 at 10:40 PM PDT
Dr. Liddle @39: Thank you for the additional thoughts and examples. Definitely interesting to think about. I'll have to think about it a bit more, but I guess I don't have a problem with the idea that a probability calculation is only valuable to the extent that we have some data to put in the calculation (hence, for example, the source of much of the criticism of the Drake Equation). Based on what we do know, however, not what we don't know, it seems some pretty good calculations can be done for the probability of OOL. Indeed, it seems we can do a much better job of calculating the odds today than 50 years ago. For example, several decades ago ideas were floated about DNA coming together as a natural result of chemical attractions. With that idea, the probability seemed pretty good. Now we have lots of data to show that isn't the case, so we are much better able to ascertain the probabilities associated with a random association of nucleotides. The calculation may not be perfect in an absolute sense, but based on our current state of knowledge we can say with some confidence that the probability is x. I do agree with you, however, that we have to define the parameters of the calculation carefully and with respect to what we do know.
Eric Anderson
July 28, 2011 at 11:27 AM PDT
markf @37: "You are assuming there is some coherent concept of the probability of an event independent of all prior conditions. But actually this makes no sense. All probabilities are conditional probabilities . . ." No, not independent of all prior conditions. There are plenty of prior conditions that exist and we have to work within the confines of those conditions. What I'm saying is that when looking at a scenario that requires a long string of events/contingencies to get to the final result we cannot just look at the last event in the chain and proclaim that the whole process was likely because, gee, the other prior events already occurred. We don't just get to assume all the prior events when running a probability calculation for the entire scenario. I can't tell if this is what Elizabeth is referring to when she says that the CSI calculation is too conservative.Eric Anderson
July 28, 2011 at 11:10 AM PDT
Perhaps a better approach is to infer the probability of unobserved events from the frequency of observed events. There's no guarantee of 100% accuracy, but what would we do with 100% accuracy if we had it? For example, say I've seen a Croatian kuna coin, but I've never flipped one. I could reason that the odds of it landing on one side or the other or its edge are unknowable given my lack of data, or I could take a pretty good stab based on my history of flipping US quarters. Even if we allowed for gross inconsistencies, such as flipping dominoes instead of coins, we might be able to accurately assess whether it's plausible for the coin to land on its edge 100 times.ScottAndrews
July 28, 2011 at 10:37 AM PDT
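A minimal sketch of the kind of frequency-based estimate ScottAndrews describes above; the flip counts are hypothetical, and the "rule of three" bound is a standard approximation for the case where zero events have been observed:

import math

# Hypothetical record of US quarter flips, used as a stand-in for the kuna.
edge_landings = 0
total_flips = 10_000

point_estimate = edge_landings / total_flips     # observed frequency: 0.0
upper_bound_95 = 3 / total_flips                 # "rule of three": ~95% upper bound when 0 events seen

# Even at the generous upper bound, 100 edge landings in a row is negligible:
log10_p_100_in_a_row = 100 * math.log10(upper_bound_95)
print(point_estimate, upper_bound_95, log10_p_100_in_a_row)   # 0.0, 0.0003, ~ -352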
Elizabeth, I'm trying very hard to get my head around what you are saying, and I don't mean that in a knee-jerk critical way. (After visiting other forums I appreciate the civil discussions here so much more.) I suppose where I'm stuck is this: So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it “highly probable” that event A occurred; however new data could cause you to revise that probability. Must we then exclude probability when evaluating hypothetical events that have never been observed, since there is no data? That doesn't seem right. For one thing, it leaves the door open to an infinite number of possible explanations, each of which gains credibility because it has never been observed. Obviously we can't dismiss any event that has not been observed, otherwise we cannot exist. So that leaves the question, how do we judge between two or more historical possibilities without considering probability?ScottAndrews
July 28, 2011 at 08:54 AM PDT
And most probability estimates are simply based on observed frequencies.
I think I'll go to Vegas and watch numerous spins of the roulette wheel, see if I can observe the frequencies and come up with a probability estimate.Mung
July 28, 2011 at 08:32 AM PDT
MF: If all probabilities are conditional, that leads to infinite regress. Some probabilities are not conditional probabilities, and are reasonable estimates on things like Laplace indifference, or frequency studies on reasonable samples etc. Some are epistemic as well, i.e. degrees of confidence in a knowledge claim. GEM of TKIkairosfocus
July 28, 2011 at 07:15 AM PDT
Dr Liddle: Quite often a vague estimate, a sign, or an order of magnitude are quite good enough to make a decision. Sampling theory also tells us that most reasonable samples of a large enough population will be at least roughly typical. In the case of in-principle infinite pops, the population of text in English is in principle infinite, or at least quasi-infinite. But we can credibly and usefully infer from reasonable samples, as Printers knew long before modern codes were invented. Info theorists have long made estimates of probabilities of symbols and used them in information metrics, from Shannon's original paper onwards. We have excellent reason to see that we are well warranted to infer to design on FSCI, analytically and inductively. GEM of TKIkairosfocus
July 28, 2011 at 06:56 AM PDT
ScottAndrews: well, the answer to your question depends, I think, on a precise articulation of what you are calculating the probability of, which includes assumptions of what you are given. And most probability estimates are simply based on observed frequencies. After all, given enough information about the trajectory of your coin flip, you could probably predict the chances of it landing on its edge with considerable accuracy, and even, for a particular flip, conclude that the probability of it landing on its edge was near 1. In practice we don't know all this (aren't given it) so we might, if we were interested, flip a vast number of coins and see how often they landed on edge. Then divide that number by the total number of flips to get the probability. But what would that actually tell you? It would tell you that the conditions under which a flipped coin lands on its edge are very rare. It doesn't really tell you much else. And even that information comes with a heavy caveat - would it apply to coins flipped by a crack team of coin flippers who had trained for years to get them to land on edge? Or British pound coins (which are quite thick and not very large)? And how much of this information do you know? So to return to your question - probability isn't a lot of use in explaining historical events, without a great deal of additional information. Or, rather, the probability of a particular historical event depends on the probabilities of other events on which that event was contingent. So rather than being an informative calculation, it simply tells you how much you don't know. For example, if I tell you a bag is full of red and blue balls, and ask you to pick one out of the bag, and guess the colour without looking, what do you say? If I hold a gun to your head, you will pick red or blue "at random" because you have no more reason to think it is red than blue. At that moment, you have no priors apart from the information, which you may or may not trust, that the bag contains red and blue balls. Suppose the ball turns out to be blue. Then the man with the gun tells you to put the ball back and pick another. Now you have more information. You know there is at least one blue ball in the bag. So if the guy asks you to pick another, you might guess "blue" as being slightly more "probable" than red. At least giving you a more probable chance of survival! And let's say it is blue again. And so you keep going. Eventually you are pretty confident that most of the balls in the bag are blue. So for each pick you answer "blue" because there is a high probability that it will be blue. Now the guy opens the bag and shows you the balls. There are 99 blue balls and 1 red one. So the frequency of red balls is 1%, and you might be tempted to say that the "true probability" of a red ball was 1%. Even though the frequency of red in your sample was zero, which gave a probability of 0%! And even though you started with the assumption of 50%! So probability is really just a way of saying how confident you are in a given result. It doesn't tell you how confident you should be - frequency does that, but for that you need data. So you are back where you started. Probability won't tell you anything you don't already know :) So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it "highly probable" that event A occurred; however new data could cause you to revise that probability. It's not that your probability calculation was initially wrong and is now better - it's that you now have more data.
Elizabeth Liddle
July 28, 2011 at 06:30 AM PDT
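A minimal sketch of the bag-of-balls updating Elizabeth Liddle describes above, assuming a uniform Beta(1, 1) prior over the unknown proportion of blue balls and draws with replacement; the prior choice is an assumption, not something specified in the comment:

# Posterior P(next ball is blue) after repeated blue draws, Beta-Binomial updating.
alpha, beta = 1, 1                            # uniform prior: no reason to favour blue over red
for n_blue_seen in range(1, 11):              # ten blue balls drawn in a row
    alpha += 1                                # each blue draw adds one to the Beta "blue" count
    print(n_blue_seen, alpha / (alpha + beta))    # Laplace's rule of succession: (k+1)/(k+2)
# After 10 blues the posterior is ~0.92 -- confident but still revisable,
# matching the 99-blue/1-red reveal in the comment.

The estimate changes with every draw, which is the Bayesian sense of probability the comment contrasts with a frequency read off a fully known population.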
How accurate does a calculation of probability need to be to be useful for making predictions or explaining historical events? I can state that flipping a U.S. quarter and having it land on its edge 100 consecutive times will never happen, assuming no deliberate interference. That statement is based on probability, it's a good prediction, and yet it's based on the vaguest estimate. (The odds of a single event could be one in 10, 15, 57, 840, etc.) It's also a coherent concept of the probability of the event independent of all prior conditions.
ScottAndrews
July 28, 2011 at 05:55 AM PDT
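To illustrate the point in ScottAndrews' comment above that the verdict survives even a very vague per-flip estimate, here is a short sketch using the candidate odds he lists:

# The per-flip estimate barely matters once you demand 100 consecutive edge landings.
for p_edge in (1/10, 1/15, 1/57, 1/840):
    print(f"p per flip = {p_edge:.5f}   p for 100 in a row = {p_edge ** 100:.3e}")
# 1/10 -> 1.000e-100 ... 1/840 -> ~3.7e-293: astronomically small under every estimate.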
#35 Eric All we’re doing with all of these detailed contingencies is fleshing out in detail what is actually needed for the ultimate condition to hold. You are assuming there is some coherent concept of the probability of an event independent of all prior conditions. But actually this makes no sense. All probabilities are conditional probabilities - it is just that sometimes the conditions are so obvious we don't specify them. Is the probability of a coin coming down heads 0.5? That assumes a fair coin and a fair toss (not just placed)- but even "a fair coin" needs to be specified in some detail to avoid being circular - something like manufactured so both sides are of equal weight - but more complicated than that.markf
July 27, 2011 at 11:15 PM PDT
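A small sketch of markf's point above that even "the probability of heads" is conditioned on background assumptions; the coin population below is entirely hypothetical:

# P(heads) under two different sets of background conditions.
p_heads_given_fair   = 0.5
p_heads_given_biased = 0.9
p_fair = 0.99                    # hypothetical: 99% of coins in circulation are fair

# An unconditional-looking answer that is really conditioned on this coin population:
p_heads = p_fair * p_heads_given_fair + (1 - p_fair) * p_heads_given_biased
print(p_heads)                   # 0.504, not 0.5 -- the "obvious" conditions were doing work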
Eric, Hey, I'm just trying to put forth what I believe the concept was. Didn't say I thought it was all that impressive. As I said, I think the Koonin piece is pretty telling.nullasalus
July 27, 2011 at 11:11 PM PDT
nullasalus @33: OK, but that is pure semantics. All you've done is break down the probability calculation into discrete parts, and then you've focused your calculation on dealing only with the last part. In other words, the odds that X is calling his mother right now, really depends on the odds that X exists, that X's mother is alive, that X knows how to use a phone, that X has a phone at his disposal, that X's mother has a phone, that X is on the phone right now, etc. All we're doing with all of these detailed contingencies is fleshing out in detail what is actually needed for the ultimate condition to hold. Similarly, we can't make life more inevitable by simply assuming all the contingencies and then asking, "OK, now life seems more likely." I don't know if Elizabeth was referring to these kinds of contingencies in her comment. She did indicate that she thought the CSI calculation was too conservative . . .Eric Anderson
July 27, 2011 at 10:52 PM PDT
I believe so, though in an infinite universe everything that is actual is possible.Eric Holloway
July 27, 2011 at 08:35 PM PDT
Scott, A rough example would be, "What are the odds that X is calling his mother right now?" Compare the odds if we know nothing about X, versus "X has a cell phone", versus "X is on the phone right now".nullasalus
July 27, 2011 at 08:14 PM PDT
Elizabeth That's because probabilities are conditional. The question is not (I would argue): “how likely is this pattern (say, a living cell) to have happened by chance at least once given the entire number of events in this universe?”, but rather: “given what has happened in this universe so far, how likely is this pattern (this living cell) to have arisen?” I'm admittedly a little slow, and I'm not saying that facetiously. But how does a dependency on past events affect probability? If I roll one die and it comes up as a six, the odds that I roll a second six, for a total of twelve, become one in six. How does that alter the fact that the odds of rolling a twelve with two dice are still one in thirty-six? Allow me to re-illustrate. We live in a universe that contains cell phones. That increases the odds that the universe will contain cell phones in one minute to virtually 100%. How does that reflect the odds that, from its beginning, the universe would one day contain cell phones?
ScottAndrews
July 27, 2011 at 07:54 PM PDT
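The dice arithmetic in ScottAndrews' comment above, enumerated directly (fair dice assumed):

from itertools import product

rolls = list(product(range(1, 7), repeat=2))                  # all 36 equally likely outcomes
p_twelve = sum(a + b == 12 for a, b in rolls) / len(rolls)    # unconditional: 1/36

first_is_six = [(a, b) for a, b in rolls if a == 6]
p_twelve_given_six = sum(a + b == 12 for a, b in first_is_six) / len(first_is_six)  # 1/6

print(p_twelve, p_twelve_given_six)   # 0.0277... and 0.1666...: both statements are true at once

Conditioning on what has already happened changes the probability of what remains without contradicting the unconditional figure, which is the distinction being drawn in the exchange.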
Eric, Why anyone would think that life is inevitable or highly probable based on our current chemistry and physics escapes me. Isn't this the take of Michael Denton?
nullasalus
July 27, 2011 at 06:12 PM PDT
Elizabeth Liddle @28: "But I think the CSI formula is far too conservative anyway." I apologize if you've already outlined this elsewhere, but I'm curious as to what you mean. Are you saying that Dembski's probability analysis is not stringent enough, and that CSI is in fact less likely (from a purely probabilistic standpoint) than he posits?Eric Anderson
July 27, 2011 at 05:14 PM PDT
Elizabeth Liddle @25: "But the probability of life given this physics and chemistry might be quite high." But isn't this where the rubber meets the road? Why anyone would think that life is inevitable or highly probable based on our current chemistry and physics escapes me. After decades and literally billions in research no-one has something even approaching a plausible explanation, and yet we're going to posit that it is likely or probable? Everything we know suggests that life is built upon contingencies, from the order of the nucleotides right up to the major organs. Where is the inevitability in that? When we talk about inevitability, we presumably mean some law-like function of chemistry and physics, so, pray tell, what property of chemistry and physics do folks have in mind when they propose life is inevitable? Some unknown, as-yet undiscovered law that mimics the kind of design we already know regularly comes from intelligent beings? "Dembski seems to be saying that confronted with something like a living cell, we can infer that it did not come by a series of independent random events." Not exactly. He understands and explicitly talks about the events having to come together in a contingent way. Of course the events are dependent on each other. What he doesn't buy, nor do I, is that there is some kind of law of physics and chemistry that is driving the process as an inevitable outcome of law. Dembski refers to this as "necessity" and certainly takes the concept into account. "And of the life-is-too-complex arguments, I think the one that says that the minimal proto-organism capable of Darwinian evolution required a Designer is stronger than the Darwinian-evolution-can't-generate-complex-information argument, because, frankly, it can." Interesting. So it sounds like you think the idea of the intelligent creation of initial life is worth pursuing? That is good. I have to disagree with you on the last part, however. Just what CSI do you think Darwinian evolution creates that can account for the complexity and diversity of life we see around us?
Eric Anderson
July 27, 2011 at 05:12 PM PDT
Eric Holloway:
Note: the incorrect conclusion from this argument is that the universe is finite because we don’t like the consequences. I’m not arguing that at all. I’m merely showing that there is much more than ID at stake when the universe has infinite probabilistic resources.
Yes, I was agreeing with you. But I think the CSI formula is far too conservative anyway.Elizabeth Liddle
July 27, 2011 at 02:46 PM PDT
I'm going to note again that in the link I gave, Eugene Koonin seems to be explicitly connecting infinite probabilistic resources with producing the 'irreducibly complex'. So I question any claim that "no one" is connecting the OoL with independent random events.
nullasalus
July 27, 2011 at 12:10 PM PDT
Sorry that last sentence should have come after the second paragraph. oops.Elizabeth Liddle
July 27, 2011 at 11:05 AM PDT
Yes, we do have to "go on the basis of the laws of chemistry and physics" but they are a given in this universe. The probability of life given all possible physicses and chemistries may be very low. But the probability of life given this physics and chemistry might be quite high. Dembski seems to be saying that confronted with something like a living cell, we can infer that it did not come by a series of independent random events. Which nobody claims. What people claim is that it came about through a series of highly dependent - contingent - events. Just as, given the constants of this universe, iron was virtually inevitable, life may also have been virtually inevitable. We don't know that, of course, but there is a lot of supporting data. Which is why I think the fine-tuning argument, though flawed, is a much stronger argument for a Designer than the life-is-too-complex argument. And of the life-is-too-complex arguments, I think the one that says that the minimal proto-organism capable of Darwinian evolution required a Designer is stronger than the Darwinian-evolution-can't-generate-complex-information argument, because, frankly, it can. And the probability of life on a rocky planet orbiting a midsize star might be higher still.
Elizabeth Liddle
July 27, 2011 at 10:43 AM PDT