One common critique of intelligent design is that, because it is based on probabilities, with enough probabilistic resources it is possible to make random events appear designed. For instance, suppose we live in a universe with infinite time, space and matter. Now suppose we’ve found an artifact that, to the best of our knowledge (assuming finite probabilistic resources), passes the explanatory filter and exhibits CSI. However, one of the terms in the CSI calculation is the probabilistic resources available. If the resources are indeed infinite, then the calculation will never give a positive result for design. Consequently, if the infinite-universe critique holds, it not only undermines ID; every huckster, conman, and scam artist will have a field day.

Say I had a bet with you that I’m flipping a coin, and whenever it came up heads I’d pay you $100 and whenever it came up tails you’d pay me $1. Seems like a safe bet, right? Now, say that I flipped 100 tails in a row and you now owe me $100. Would you be suspicious? I might say 100 tails is just as likely, probabilistically speaking, as 50 tails followed by 50 heads, or alternating tails and heads, or any other permutation of 100 flips, which would be mathematically correct. To counter me, you bring in the explanatory filter and say, “Yes, 100 tails is equally probable, but it also exhibits CSI because there is a pattern it conforms to.” In a finite universe, this counter would also be mathematically valid. I’d be forced to admit foul play. But if we lived in an infinite universe, then even events seeming to exhibit CSI will turn up, and I could claim there is no rational reason to suspect foul play. I could keep this up for 1,000 or 1,000,000 or 1,000,000,000,000 tails in a row, and you’d still have no rational reason to call foul play (though you may have rational reason to question my sanity).
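The arithmetic behind the bet can be sketched in a few lines (the stakes and flip count are taken from the example above; everything else is standard probability):

```python
from fractions import Fraction
from math import comb

n = 100

# Any one specific sequence of 100 fair flips (100 tails included)
# has the same probability: 1 / 2^100.
p_specific = Fraction(1, 2) ** n

# But the *class* of outcomes "exactly 50 heads, in any order" is far
# more probable than the single all-tails sequence.
p_fifty_fifty = comb(n, n // 2) * p_specific

# Expected value to the bettor per flip: win $100 on heads, lose $1 on tails.
ev_per_flip = Fraction(1, 2) * 100 - Fraction(1, 2) * 1

print(p_specific)            # 1/2^100, same for every fixed sequence
print(float(p_fifty_fifty))  # ~0.0796: the unordered class dwarfs any one sequence
print(ev_per_flip)           # 99/2, i.e. $49.50 per flip in your favour
```

So the bet really is heavily in your favour per flip, which is exactly why 100 straight tails should make you suspicious in a finite universe.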

Not only do many incredible events become reality, but we begin to lose our grip on reality itself. For instance, it is much more likely, on an a priori probability, that we are merely Boltzmann brains [2] instantiated with a momentary existence, only to disappear the next instant. Furthermore, it is much more likely that our view of reality itself is an illusion and the objective world is merely a random configuration that just happens to give us a coherent perception. As a result, in an infinite universe, our best guess is that we are hallucinating, instantaneous brains floating in space, or perhaps worse.

A more optimistic person might say, “Yes, but such a pessimistic situation only exists if we make assumptions about the a priori probability, such as it is a uniform or nearly uniform distribution. There are many other distributions that lead to a coherent universe where we are persistent beings that have a grasp on objective reality. Why make the pessimistic assumption instead of the optimistic assumption?”

Of course, this is good advice, whenever we have such a choice of alternatives. Unfortunately, it ignores the mathematical structure of the problem. The proportion of coherent distributions to incoherent distributions drops off exponentially, and as the resources approach infinity the drop-off becomes almost binary. This means that as probabilistic resources approach infinity, the proportion of coherent distributions approaches zero. Nor does the situation get any better if we talk about probability distributions over probability distributions: the problem remains unchanged, or even gets exponentially worse, with every additional layer.

The end result is that with an infinite number of probabilistic resources the case for ID may be discredited, but then so is every other scientific theory.

However, perhaps there is a rational way to preserve science even if there are infinite probabilistic resources. If so, what effect does this have on ID? Maybe ID even has a hand in saving science? More to follow…

[1] http://en.wikipedia.org/wiki/Law_of_large_numbers

[2] http://en.wikipedia.org/wiki/Boltzmann_brain

“I could keep this up for 1,000 or 1,000,000 or 1,000,000,000,000 tails in a row, and you’d still have no rational reason to call foul play.”

Cheaters in Vegas just got thrown a bone.

Pit boss: “That’s impossible, three royals in a row, you’re cheating!”

Grifter: “Welcome to the multi-verse baby, ship it.”

You seem to forget that even chance events constrain subsequent events.

The universe isn’t a series of hands from a constantly reshuffled deck.

It’s a Markov chain, not white noise.

Also:

No, they won’t 🙂

Remember that CSI includes a term representing probabilistic resources!

CSI is scale-invariant.

heh. that’s twice today I’ve had to correct someone on that.

And I’m not even an ID proponent 🙂

The issue in a multiverse becomes this sub-cosmos, the one capable of interaction with us within relevant horizons. Its local fine tuning, whereby slight perturbations kick it out of life-friendly zones, becomes material. Indeed, Hoyle’s resonance example is still relevant, as it is astrophysical, not cosmos-scale, save in the underlying laws: abundant water is a miracle.

And how did the universe get off that Markov chain and on to something else?

In other words, how did it fundamentally change from being a random process to being a non-random process?

What is it now, if not a Markov chain?

Eric Holloway, interesting thoughts.

At its foundation, the problem with the “infinite resources” argument is that we cannot take it seriously as a counter to CSI, even if infinite resources were true.

In other words, the CSI we are identifying is not a single, detached event, like a hypothetical coin flip in a vacuum in space, with no relation to the rest of reality. Rather, it is a whole series of events that constitute a larger whole. Specifically, the reason we would still cry foul on your 100 tails in a row is that we know that, *regardless of any infinite resources in the universe*, in our particular corner of the cosmos, in this place and this time, you shouldn’t be able to get that result by chance.

Related to life, *regardless of any infinite resources in the broader universe,* given our knowledge of the specific laws of chemistry and physics that operate in our neck of the woods, you can’t get the kinds of CSI that we see all around us in life just by chance.

Bottom line, the “infinite resources” of some hypothetical universe or multi-verse, are irrelevant to our analysis of whether the laws of chemistry and physics *as we know and understand them to function in our neck of the woods* are sufficient to create the CSI we see.

Clearly they are not. The infinite resources, multiverse, or whatever hypotheses, are an irrelevant distraction from the question of whether design can be detected in the diversity and complexity of life on the Earth.

Mung:

It’s still a Markov chain!

Not true. 100 tails does not contain any more specificity than any other permutation of coin flips. And though it isn’t any less probable either, the real counter to your argument is that 100 tails is far less probable than 50 heads and 50 tails in no particular order.

Also, if 100 tails contains CSI then what does the C in CSI stand for?

Also,

What could I be suspicious of with the coin flipping example? If I didn’t actually see you flipping the coin, I’d suspect you were lying.

If I saw you flipping the coin, and watched it turn up tails every time, then I’d suspect that both sides are tails.

If I was shown that there was indeed a heads and a tails, I’d suspect that the coin was weighted extremely heavily in one direction. In which case 100 tails is highly probable.

The question is–to bring this back to evolution–can natural processes make it highly probable that a pattern arises which is equivalent to a coin landing tails 100 times in a row?

2^100 is within the CSI (cosmos-scale) limit; but on a lab scale it would be unreasonable on chance.
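For reference, the figure checks out against the commonly cited universal probability bound of roughly 1 in 10^150 (the bound is the standard quoted value, not derived here):

```python
# 2^100 possible outcomes of 100 flips, versus the commonly quoted
# universal probability bound of ~10^150 events in the observable
# cosmos (a standard cited figure, assumed here rather than derived).
outcomes = 2 ** 100
bound = 10 ** 150

print(outcomes)          # 1267650600228229401496703205376 (~1.27e30)
print(outcomes < bound)  # True: well inside the cosmos-scale limit
```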

Elizabeth Liddle:

And you’re just one link in that chain?

@Elizabeth Liddle,

Thanks, good catch regarding CSI. I changed the phrase to “seeming to exhibit CSI.”

As for your Markov chain point, I don’t know how we could tell we’re any different from a random sample versus a Markov chain. The Markov chain idea is just an inference from observation, which falls apart in an infinite universe.

@Eric Anderson,

Right, it seems in our universe that even with infinite resources local interactions are still finitely constrained. But, as I mention to EL above, I’m skeptical we can infer such constraints from observations with infinite probabilistic resources.

Eric Holloway:

🙂

The other person I picked up on that today was dmullenix.

Well, of course it has to be a modified Markov chain, because a Markov chain is a “discrete time” random process, and with the universe we don’t even have simultaneity, let alone discrete time!

My point was that if you started with a randomly drawn sample of events from a population of all possible events, at the beginning of the universe, each of those events would then determine (plus a random quantum factor) the next events, and so on.

The universe plays the hand it’s dealt, in other words, the cards are not constantly reshuffled and re-dealt.

So while the initial state of the universe might have been random with respect to all possible states, subsequent states are not random with respect to the initial state, and indeed each future state is highly predicted by each present state.

And while Markov chains are “memoryless” in the sense that the next state depends entirely on the present state, plus random input, and not on any previous state, I’ve always thought that was a bad term: it’s not that Markov chains are “memoryless”, it’s that each state embodies information inherited from previous states, and no other information is required to determine the next state.
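The contrast with a “constantly reshuffled deck” can be illustrated with a toy two-state chain (the transition probabilities below are my own illustrative choices, not anything from the thread):

```python
import random

random.seed(0)  # reproducible illustration

# Transition probabilities (illustrative only):
# P[s] = [P(next = 0 | current = s), P(next = 1 | current = s)]
P = {0: [0.9, 0.1],
     1: [0.2, 0.8]}

def step(state):
    """The next state depends only on the present state plus random input."""
    return 0 if random.random() < P[state][0] else 1

chain = [0]
for _ in range(10_000):
    chain.append(step(chain[-1]))

# Consecutive states agree far more often than the 50% that freshly
# reshuffled coin flips would give: each state carries inherited information.
agree = sum(a == b for a, b in zip(chain, chain[1:])) / (len(chain) - 1)
print(agree > 0.8)  # True: the chain is strongly autocorrelated
```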

And that’s essentially the “Necessity” part of “Chance and Necessity”, although in practice, we often call things Chance when all we mean is that we do not know what caused them, or that they are non-generalisable causal factors.

my head hurts

There is no evidence that the universe (you know, all that merely lifeless matter bumping around) is non-deterministic. What we do have are many educated-beyond-their-abilities persons who confuse a general human lack of knowledge of the cause(s) of things for “randomness” causing these things. There is an identifiable correlation between randomness and confusion. It follows that confusion is caused by randomness.

@EL

“My point was that if you started with a randomly drawn sample of events drawn from a population of all possible events, at the beginning of the universe, each of those events would then determine (plus a random quantum factor) the next events, and so on.”

I don’t think it’s possible to know this to any degree with infinite resources. There is a possible sequence of events where each event is entirely independent of all previous events, or correlates with random previous/future events, and with infinite resources that possibility is an actuality. How do we know we aren’t in that event sequence?

For instance, I can easily make the qualification that in this possible sequence our rational faculties, moment to moment, tell us we are in a Markov chain, even though this is entirely false in actuality. Again, it is a possible sequence, so it is also an actual sequence.

Hi Eric:

I guess I don’t find the conjecture that there exist universes that don’t obey any causal laws terribly useful!

And, as you say, we don’t know, and can’t know, that this one won’t stop making any sense at all tomorrow. Nor do we know that the universe wasn’t created Last Thursday with the appearance of age. Nor that existence is not an illusion unique to me.

But again, I don’t find such speculations very informative, not least because there is no way of testing them. Science is about finding regularities in the universe; it is powerless to find anything else.

As you say: “The end result is that with an infinite number of probabilistic resources the case for ID may be discredited, but then so is every other scientific theory.”

My view is that ID is incorrect, but not because there might be “an infinite number of probabilistic resources”. I don’t think that biological entities came about by “chance”, where that means that one day the dice happened to fall on a biological entity.

In fact, I think that the “probability resources” term in CSI is way too conservative. I’d be persuaded by something much more generous if the rest of the argument made sense.

That’s because probabilities are conditional. The question is not (I would argue): “how likely is this pattern (say, a living cell) to have happened by chance at least once given the entire number of events in this universe?”, but rather: “given what has happened in this universe so far, how likely is this pattern (this living cell) to have arisen?” And to calculate that, we need to know something of the contingencies that led to that point.

Dembski’s CSI concept does not even attempt to estimate this. And this, I’d argue, is exactly what evolutionary biologists attempt to discover!

Eugene Koonin on the Multiverse and the Origin of Life:

The MWO version of the cosmological model of eternal inflation could suggest a way out of this conundrum because, in an infinite multiverse with a finite number of distinct macroscopic histories (each repeated an infinite number of times), emergence of even highly complex systems by chance is not just possible but inevitable.…

A final comment on “irreducible complexity” and “intelligent design”. By showing that highly complex systems, actually, can emerge by chance and, moreover, are inevitable, if extremely rare, in the universe, the present model sidesteps the issue of irreducibility and leaves no room whatsoever for any form of intelligent design.

@EL:

In an infinite universe all physically possible conjectures are actual, including the one where our scientific instruments show regularity where there isn’t any. With an infinite universe we’re basically stuck in the matrix with no red pill.

Note: the incorrect conclusion from this argument is that the universe is finite because we don’t like the consequences. I’m not arguing that at all. I’m merely showing that there is much more than ID at stake when the universe has infinite probabilistic resources.

Elizabeth Liddle:

“given what has happened in this universe so far, how likely is this pattern (this living cell) to have arisen?”

I trust you are not including in this question the formation of the living cell itself? If so, your question becomes circular and assumes as a given the very thing it is trying to explain. What do you mean, “given what has happened so far?”

I’d be curious to hear you flesh out your thoughts a bit, as I don’t see right now how your idea of precedent contingencies changes the calculation in any meaningful sense. We still have to go on the basis of the laws of chemistry and physics we see operating around us today if we are going to calculate any kind of probabilities. Isn’t that what Dembski has done? Indeed, the idea of taking all the probabilistic resources in the universe is a concession to show how unrealistic a materialistic origin scenario is. Otherwise, we could limit the resources to Earth or our galactic neighborhood, and the numbers would look even worse for the materialistic scenario.

Yes, we do have to “go on the basis of the laws of chemistry and physics” but they are a given in this universe.

The probability of life given all possible physicses and chemistries may be very low. But the probability of life given this physics and chemistry might be quite high.

Dembski seems to be saying that, confronted with something like a living cell, we can infer that it did not come about by a series of independent random events.

Which nobody claims.

What people claim is that it came about through a series of highly dependent (contingent) events. Just as, given the constants of this universe, iron was virtually inevitable, life may also have been virtually inevitable. We don’t know that, of course, but there is a lot of supporting data.

Which is why I think the fine-tuning argument, though flawed, is a much stronger argument for a Designer than the life-is-too-complex argument.

And of the life-is-too-complex arguments, I think the one that says that the minimal proto-organism capable of Darwinian evolution required a Designer is stronger than the Darwinian-evolution-can’t-generate-complex-information argument, because, frankly, it can.

And the probability of life on a rocky planet orbiting a midsize star might be higher still.

Sorry that last sentence should have come after the second paragraph.

oops.

I’m going to note again that in the link I gave, Eugene Koonin seems to be explicitly connecting infinite probabilistic resources with producing the ‘irreducibly complex’. So I question any claim that “no one” is connecting the OoL with independent random events.

Eric Holloway:

Yes, I was agreeing with you.

But I think the CSI formula is far too conservative anyway.

Elizabeth Liddle @25:

“But the probability of life given this physics and chemistry might be quite high.”

But isn’t this where the rubber meets the road? Why anyone would think that life is inevitable or highly probable based on our current chemistry and physics escapes me. After decades and literally billions in research, no one has anything even approaching a plausible explanation, and yet we’re going to posit that it is likely or probable? Everything we know suggests that life is built upon contingencies, from the order of the nucleotides right up to the major organs. Where is the inevitability in that? When we talk about inevitability, we presumably mean some law-like function of chemistry and physics, so, pray tell, what property of chemistry and physics do folks have in mind when they propose life is inevitable? Some unknown, as-yet undiscovered law that mimics the kind of design we already know regularly comes from intelligent beings?

“Dembski seems to be saying that confronted with something like a living cell, we can infer that it did not come by a series of independent random events.”

Not exactly. He understands and explicitly talks about the events having to come together in a contingent way. Of course the events are dependent on each other. What he doesn’t buy, nor do I, is that there is some kind of law of physics and chemistry that is driving the process as an inevitable outcome of law. Dembski refers to this as “necessity” and certainly takes the concept into account.

“And of the life-is-too-complex arguments, I think the one that says that the minimal proto-organism capable of Darwinian evolution required a Designer is stronger than the Darwinian-evolution-can’t-generate-complex-information argument, because, frankly, it can.”

Interesting. So it sounds like you think the idea of the intelligent creation of initial life is worth pursuing? That is good.

I have to disagree with you on the last part, however. Just what CSI do you think Darwinian evolution creates that can account for the complexity and diversity of life we see around us?

Elizabeth Liddle @28:

“But I think the CSI formula is far too conservative anyway.”

I apologize if you’ve already outlined this elsewhere, but I’m curious as to what you mean. Are you saying that Dembski’s probability analysis is not stringent enough, and that CSI is in fact less likely (from a purely probabilistic standpoint) than he posits?

Eric,

Why anyone would think that life is inevitable or highly probable based on our current chemistry and physics escapes me.

Isn’t this the take of Michael Denton?

Elizabeth

That’s because probabilities are conditional. The question is not (I would argue): “how likely is this pattern (say, a living cell) to have happened by chance at least once given the entire number of events in this universe?”, but rather: “given what has happened in this universe so far, how likely is this pattern (this living cell) to have arisen?”

I’m admittedly a little slow, and I’m not saying that facetiously.

But how does a dependency on past events affect probability?

If I roll one die and it comes up a six, the odds that a second roll gives another six, for a total of twelve, become one in six. How does that alter the fact that the odds of rolling a twelve with two dice is still one in thirty-six?

Allow me to re-illustrate. We live in universe that contains cell phones. That increases the odds that the universe will contain cell phones in one minute to virtually 100%. How does that reflect the odds that, from its beginning, the universe would one day contain cell phones?
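The dice question above can be checked by brute enumeration (nothing here beyond standard fair dice):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # all 36 outcomes of two dice

# Unconditional probability of a total of twelve
p_twelve = Fraction(sum(a + b == 12 for a, b in rolls), len(rolls))

# Conditional probability of twelve, given the first die already shows six
given_six = [(a, b) for a, b in rolls if a == 6]
p_twelve_given_six = Fraction(sum(a + b == 12 for a, b in given_six),
                              len(given_six))

print(p_twelve)            # 1/36
print(p_twelve_given_six)  # 1/6: conditioning changes the probability
```

Both numbers are correct; they simply answer different questions, which is the crux of the disagreement in the thread.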

Scott,

A rough example would be, “What are the odds that X is calling his mother right now?”

Compare the odds if we know nothing about X, versus “X has a cell phone”, versus “X is on the phone right now”.

I believe so, though in an infinite universe everything that is possible is actual.

nullasalus @33:

OK, but that is pure semantics. All you’ve done is break down the probability calculation into discrete parts, and then you’ve focused your calculation on dealing only with the last part. In other words, the odds that X is calling his mother right now, really depends on the odds that X exists, that X’s mother is alive, that X knows how to use a phone, that X has a phone at his disposal, that X’s mother has a phone, that X is on the phone right now, etc.

All we’re doing with all of these detailed contingencies is fleshing out in detail what is actually needed for the ultimate condition to hold.

Similarly, we can’t make life more inevitable by simply assuming all the contingencies and then asking, “OK, now life seems more likely.”

I don’t know if Elizabeth was referring to these kinds of contingencies in her comment. She did indicate that she thought the CSI calculation was too conservative . . .

Eric,

Hey, I’m just trying to put forth what I believe the concept was. Didn’t say I thought it was all that impressive.

As I said, I think the Koonin piece is pretty telling.

#35 Eric

All we’re doing with all of these detailed contingencies is fleshing out in detail what is actually needed for the ultimate condition to hold.

You are assuming there is some coherent concept of the probability of an event independent of all prior conditions. But actually this makes no sense. All probabilities are conditional probabilities; it is just that sometimes the conditions are so obvious we don’t specify them. Is the probability of a coin coming down heads 0.5? That assumes a fair coin and a fair toss (not just placed). But even “a fair coin” needs to be specified in some detail to avoid being circular: something like “manufactured so both sides are of equal weight”, but more complicated than that.

How accurate does a calculation of probability need to be to be useful for making predictions or explaining historical events?

I can state that flipping a U.S. quarter and having it land on its edge 100 consecutive times will never happen, assuming no deliberate interference. That statement is based on probability, it’s a good prediction, and yet it’s based on the vaguest estimate. (The odds of a single event could be one in 10, 15, 57, 840, etc.) It’s also a coherent concept of the probability of the event independent of all prior conditions.
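The point that the vagueness doesn’t matter can be made numerically: take any of the candidate single-flip edge odds guessed at above and raise it to the 100th power:

```python
# Candidate single-flip edge odds from the comment (1 in 10, 15, 57, 840);
# whichever is right, 100 edge landings in a row is negligible.
for odds in (10, 15, 57, 840):
    p_run = (1 / odds) ** 100
    print(odds, p_run < 1e-99)  # True in every case
```

Even at the most generous guess (1 in 10), the run probability is about 10^-100, so the prediction survives an odds estimate that is off by orders of magnitude.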

ScottAndrews: well, the answer to your question depends, I think, on a precise articulation of what you are calculating the probability of, which includes assumptions about what you are given. And most probability estimates are simply based on observed frequencies.

After all, given enough information about the trajectory of your coin flip, you could probably predict the chances of it landing on its edge with considerable accuracy, and even, for a particular flip, conclude that the probability of it landing on its edge was near 1.

In practice we don’t know all this (aren’t given it), so we might, if we were interested, flip a vast number of coins and see how often they landed on edge. Then divide that number by the total number of flips to get the probability.

But what would that actually tell you? It would tell you that the conditions under which a flipped coin lands on its edge are very rare. It doesn’t really tell you much else.

And even that information comes with a heavy caveat: would it apply to coins flipped by a crack team of coin flippers who had trained for years to get them to land on edge? Or British pound coins (which are quite thick and not very large)? And how much of this information do you know?

So to return to your question – probability isn’t a lot of use in explaining historical events, without a great deal of additional information.

Or, rather, the probability of a particular historical event depends on the probabilities of other events on which that event was contingent. So rather than being an informative calculation, it simply tells you how much you don’t know.

For example if I tell you a bag is full of red and blue balls, and ask you to pick one out of the bag, and guess the colour without looking, what do you say?

If I hold a gun to your head, you will pick red or blue “at random”, because you have no more reason to think it is red than blue. At that moment, you have no priors apart from the information, which you may or may not trust, that the bag contains red and blue balls.

Then the man with the gun tells you to put the ball back and pick another. Now you have more information. You know there is at least one blue ball in the bag. So if the guy asks you to pick another, you might guess “blue” as being slightly more “probable” than red, at least giving you a slightly better chance of survival!

And let’s say it is blue again. And so you keep going. Eventually you are pretty confident that most of the balls in the bag are blue. So for each pick you answer “blue”, because there is a high probability that it will be blue.

Now the guy opens the bag and shows you the balls. There are 99 blue balls and 1 red one.

So the frequency of red balls is 1%, and you might be tempted to say that the “true probability” of a red ball was 1%. Even though the frequency of red in your sample was zero, which gave a probability of 0%! And even though you started with the assumption of 50%!

So probability is really just a way of saying how confident you are in a given result. It doesn’t tell you how confident you should be; frequency does that, but for that you need data.

So you are back where you started. Probability won’t tell you anything you don’t already know 🙂 So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it “highly probable” that event A occurred; however, new data could cause you to revise that probability.

It’s not that your probability calculation was initially wrong and is now better – it’s that you now have more data.
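The ball-drawing story above is essentially Bayesian updating. A minimal sketch, assuming a uniform Beta(1, 1) prior (my assumption, not stated in the comment):

```python
from fractions import Fraction

# Beta-Bernoulli updating for the ball-drawing story; the uniform
# Beta(1, 1) prior is an assumption of this sketch.
alpha, beta_ = 1, 1  # pseudo-counts: one imaginary blue, one imaginary red

def p_blue(a, b):
    """Posterior predictive probability that the next ball is blue."""
    return Fraction(a, a + b)

print(p_blue(alpha, beta_))  # 1/2 before any draws: no reason to favour blue

for _ in range(10):          # ten blue balls drawn in a row
    alpha += 1               # each blue draw bumps the blue pseudo-count

print(p_blue(alpha, beta_))  # 11/12: "blue" is now the rational guess

# Yet the bag may still hold 99 blue and 1 red (true frequency 1% red):
# the probability measures confidence given the data, not the bag itself.
```

As the story says, more data moves the estimate, but the estimate is about your state of knowledge, not a property the bag carries around.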

Dr Liddle:

Quite often a vague estimate, a sign, or an order of magnitude are quite good enough to make a decision.

Sampling theory also tells us that most reasonable samples of a large enough population will be at least roughly typical.

In the case of in-principle infinite populations: the population of text in English is in principle infinite, or at least quasi-infinite.

But we can credibly and usefully infer from reasonable samples, as Printers knew long before modern codes were invented.

Info theorists have long made estimates of probabilities of symbols and used them in information metrics, from Shannon’s original paper onwards.

We have excellent reason to see that we are well warranted to infer to design on FSCI, analytically and inductively.

GEM of TKI

MF:

If all probabilities are conditional, that leads to infinite regress.

Some probabilities are not conditional probabilities, and are reasonable estimates on things like Laplace indifference, or frequency studies on reasonable samples etc. Some are epistemic as well, i.e. degrees of confidence in a knowledge claim.

GEM of TKI

I think I’ll go to Vegas and watch numerous spins of the roulette wheel, see if I can observe the frequencies and come up with a probability estimate.

Elizabeth,

I’m trying very hard to get my head around what you are saying, and I don’t mean that in a knee-jerk critical way. (After visiting other forums I appreciate the civil discussions here so much more.)

I suppose where I’m stuck is this:

So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it “highly probable” that event A occurred; however new data could cause you to revise that probability.

Must we then exclude probability when evaluating hypothetical events that have never been observed, since there is no data? That doesn’t seem right. For one thing, it leaves the door open to an infinite number of possible explanations, each of which gains credibility because it has never been observed.

Obviously we can’t dismiss any event that has not been observed, otherwise we could not exist. So that leaves the question: how do we judge between two or more historical possibilities without considering probability?

Perhaps a better approach is to infer the probability of unobserved events from the frequency of observed events. There’s no guarantee of 100% accuracy, but what would we do with 100% accuracy if we had it?

For example, say I’ve seen a Croatian kuna coin, but I’ve never flipped one. I could reason that the odds of it landing on one side or the other or its edge are unknowable given my lack of data, or I could take a pretty good stab based on my history of flipping US quarters.

Even if we allowed for gross inconsistencies, such as flipping dominoes instead of coins, we might be able to accurately assess whether it’s plausible for the coin to land on its edge 100 times.

markf @37:

“You are assuming there is some coherent concept of the probability of an event independent of all prior conditions. But actually this makes no sense. All probabilities are conditional probabilities . . .”

No, not independent of all prior conditions. There are plenty of prior conditions that exist and we have to work within the confines of those conditions. What I’m saying is that when looking at a scenario that requires a long string of events/contingencies to get to the final result we cannot just look at the last event in the chain and proclaim that the whole process was likely because, gee, the other prior events already occurred. We don’t just get to assume all the prior events when running a probability calculation for the entire scenario.

I can’t tell if this is what Elizabeth is referring to when she says that the CSI calculation is too conservative.

Dr. Liddle @39:

Thank you for the additional thoughts and examples. Definitely interesting to think about.

I’ll have to think about it a bit more, but I guess I don’t have a problem with the idea that a probability calculation is only valuable to the extent that we have some data to put in the calculation (hence, for example, the source of much of the criticism of the Drake Equation).

Based on what we do know, however, not what we don’t know, it seems some pretty good calculations can be done for the probability of OOL. Indeed, it seems we can do a much better job of calculating the odds today than 50 years ago. For example, several decades ago ideas were floated about DNA coming together as a natural result of chemical attractions. With that idea, the probability seemed pretty good. Now we have lots of data to show that isn’t the case, so we are much better able to ascertain the probabilities associated with a random association of nucleotides.

The calculation may not be perfect in an absolute sense, but based on our current state of knowledge we can say with some confidence that the probability is x. I do agree with you, however, that we have to define the parameters of the calculation carefully and with respect to what we do know.

Eric #45

The point is that the probability of an outcome depends on which prior conditions you include. CSI is calculated using a probability. Therefore the CSI of an outcome depends on the prior conditions that are included. So to talk of “the CSI” of an outcome is nonsense. The best you can talk of is the CSI relative to conditions X,Y and Z. But we never hear what the conditions are.

(The CSI is also relative to the specification – but that is a different problem)
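The point above can be made concrete with a toy sketch, where every number is hypothetical: the same outcome gets a very different probability, and hence a very different "improbability" measure, depending on which generating conditions are assumed.

```python
import math

# Toy illustration (all numbers hypothetical): the probability of the
# same outcome -- a specific 10-letter string over a 4-letter alphabet --
# under two different sets of prior conditions.
outcome_length = 10
alphabet_size = 4

# Conditions X: each letter drawn uniformly and independently.
p_uniform = (1 / alphabet_size) ** outcome_length   # ~9.54e-07

# Conditions Y: a biased process in which each observed letter had,
# say, probability 0.7 of being produced at its position.
p_biased = 0.7 ** outcome_length                    # ~0.0282

# Any CSI-style "improbability" built on these numbers differs by
# roughly 15 bits depending on which conditions are assumed.
print(math.log2(p_biased / p_uniform))
```

The numbers themselves are arbitrary; the point is only that the conditional-probability gap, and anything computed from it, moves with the choice of conditions.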

kairosfocus

Often they have to be, kairosfocus. Very often we have to make decisions on sparse information. In fact there is good neuroscience data to support the view that our brain architecture is designed (heh) to make Bayesian probability inferences, and it is fundamental to perception. Our perceptions, in other words, are strongly influenced by our priors, which are continuously adjusted in light of new data. However, at the point of the decision, we have to go with the most likely outcome, given the data currently available to us.
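That updating process can be sketched minimally with a conjugate Beta–Binomial model; the flip data here are made up purely for illustration.

```python
# Minimal sketch of Bayesian updating (made-up data): a Beta prior over
# a coin's bias, revised flip by flip. Beta(a, b) is conjugate to the
# Binomial, so each update is just a count.
a, b = 1.0, 1.0  # flat Beta(1, 1) prior: no initial opinion about the bias

flips = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails

for flip in flips:
    if flip:
        a += 1  # one more observed head
    else:
        b += 1  # one more observed tail

# At the point of decision we go with the most likely value given the
# data so far; here the posterior mean is the current best estimate.
posterior_mean = a / (a + b)
print(posterior_mean)  # 9/12 = 0.75
```

Each new flip shifts the estimate a little, which is exactly the "priors continuously adjusted in light of new data" picture.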

But the more data we have, the more likely our decision is to be right!

Random samples, yes.

Yes.

Infer what?

Yes.

But that doesn’t follow from what you’ve said!

What is the population from which you have randomly sampled to make what inference?

ScottAndrews:

🙂

I suppose where I’m stuck is this:

So the only way to establish historical causality is to get data. Those data, at one point, might lead you to consider it “highly probable” that event A occurred; however new data could cause you to revise that probability.

Yes. Indeed there was an interesting example in connection with the dinosaur-bird transition (although that’s pre-history, of course) this week. A new fossil has altered the priors regarding Archaeopteryx’s position in the lineage.

Well, a “hypothetical event” is, presumably, an event hypothesised to have taken place on the basis of some data. Otherwise it wouldn’t be a hypothesis, it would be fantasy 🙂 A hypothesis is something we propose to account for data. For example, the change in fossil biota 65 mya known as the “KT boundary” is hypothesised to have been the result of an unobserved event (a meteor strike). And, indeed, there is evidence to support such an event (globally iridium-enriched strata at that point in the geologic column) and, furthermore, evidence of a huge crater. So our priors at this point are high that these pieces of evidence reflect a massive meteor strike that left a crater, a global iridium deposit, and was instrumental in world-wide extinctions. But if, for example, we were to find evidence that the extinctions were much more gradual than we currently infer, that Chicxulub is actually a vast sinkhole, not a meteor crater, and that the iridium layer has some quite other, non-destructive cause, then that “hypothetical event” would be downgraded as a probability, which is to say, we would have less confidence that that event had occurred.

And this comes back to the point I’m afraid I keep banging on about, which is that scientific inferences are always provisional. They are generally the best current model given current data, but always subject to revision given new data. Rarely to radical revision, but constantly to elaboration and minor correction.

We do consider probability! I’m sorry if I inadvertently implied otherwise. It’s just that frequentist probability is rarely applicable, as it assumes a random sampling from a known population.

What we use intuitively, every day, and explicitly, in science, is conditional probability – probability contingent on what we know, and that probability will be subject to constant revision given new data, just as it is for the guy forced to guess the colour of the balls in the bag.

I guess my main point (my soap box point!) is that for making scientific inferences, probability is better, in general, conceived in a Bayesian framework, in terms of our confidence in our best current guess, than in frequentist terms i.e. what is the known frequency of this event (as in the frequency of a flipped coin landing on its side).

They are actually very different concepts. Which is why Bayesians and Frequentists have such horrible fights.
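The contrast can be sketched on one made-up data set: the frequentist answer is a point estimate from the observed frequency, while the Bayesian answer is a posterior distribution expressing confidence in each possible value of the parameter.

```python
# Toy contrast on one made-up data set: 7 heads in 10 flips of a coin
# with unknown bias p.
heads, flips = 7, 10

# Frequentist framing: p is a fixed unknown; the estimate is simply the
# observed frequency.
p_hat = heads / flips  # 0.7

# Bayesian framing: p gets a prior (here flat, Beta(1, 1)); the data
# yield a posterior Beta(1 + heads, 1 + tails), a whole distribution
# expressing our confidence in each possible value of p.
a, b = 1 + heads, 1 + (flips - heads)  # posterior is Beta(8, 4)
posterior_mean = a / (a + b)           # 8/12, about 0.667

print(p_hat, posterior_mean)
```

Same data, two different kinds of answer: a number versus a distribution that will keep shifting as flips accumulate.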

oops: from after the first smiley to “revise that probability” are Scott’s words.

markf @47:

“The best you can talk of is the CSI relative to conditions X,Y and Z. But we never hear what the conditions are.”

Agreed, that conditions need to be specified. We do hear of them. As an example, for OOL we can assume the most favorable conditions possible and then run a probability analysis. Indeed, this is typically what Dembski and other major ID proponents do. Assume you have all the amino acids or nucleotides you want, in the right proportions, in one place, at one time, with favorable reaction conditions, with no interfering cross-reactions, with stability and lack of breakdown once formed, in whatever favorable location you want (tide pools, deep sea vents, mud globules, take your pick), with just the right amount of energy to catalyze the reactions, but not so much as to destroy the nascent formations (volcanic heat, lightning, take your pick).

Now, given all those concessions and assuming all those favorable conditions are met, what are the odds of a single protein forming, or what are the odds of a string of DNA or RNA forming that could code for a single protein? It is precisely this probability calculation that has driven OOL researchers to acknowledge that it won’t happen without some as-yet-undiscovered element added to the mix.

If we step back and add back in all the conditions we assumed, the odds become many times worse. Even if we have uncertainty in any of these prior conditions, *by definition,* they cannot make the odds more favorable — only less favorable or the same. Any uncertainty about the prior conditions doesn’t mean our basic calculation of the probabilities of the chemical components coming together in a specific way is inaccurate. It can be very accurate and give us a good feel for what we are up against; however, because we’ve assumed all the prior conditions, we just need to keep in mind that our calculation is also too conservative.
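The arithmetic behind this is just the chain rule: the joint probability of the whole scenario is the product of each event's probability given the earlier ones, and since every factor is at most 1, adding assumed prior conditions back in can only leave the odds the same or make them worse. A toy sketch, with every number hypothetical:

```python
# Chain-rule sketch, every number hypothetical:
# P(A, B, C, D) = P(A) * P(B | A) * P(C | A, B) * P(D | A, B, C).
p_final_step = 1e-40  # stand-in for the polymerization odds alone

# Hypothetical probabilities for prior conditions previously assumed away
# (right monomers present, right concentrations, favorable energy, ...).
prior_conditions = [1e-3, 1e-2, 1e-1]

p_joint = p_final_step
for p in prior_conditions:
    p_joint *= p  # each factor is <= 1, so the product can only shrink

# The full scenario is never more probable than its final step alone.
print(p_joint <= p_final_step)  # True
```

Whatever the true values of those prior-condition probabilities, none can exceed 1, which is why the calculation that assumes them all is conservative.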

#51

Eric – I admit that what you describe could be seen as a set of conditions. But they are hardly the most favourable possible. The scenario assumes that life started right off with a protein or DNA/RNA string – the constituents were all mixed together and happened to form such a string. It omits any consideration of intermediary stages.

Hardly the most favorable? I’ve assumed everything you could possibly identify to get OOL off the ground. I’m all ears as to ideas for any intermediary stages on the way to a single protein or DNA/RNA string. What is proposed? Part of a protein? Part of a DNA/RNA string?

In reality, this is all way too generous for abiogenesis anyway. Even a protein or several proteins doesn’t give you life; nor does a complete DNA sequence. Focusing on the smallest, simplest, easiest known item is more than fair — it is overly generous. And the calculations are pretty sobering.

Part of our misconception about ID is rooted in the fact that it is described and treated by most of us as a theory that has to be tested or proven true. In reality, ID is not a theory, and it is not a religious concept either. It is a more or less unfortunate label we put on a fact of reality any one of us can observe: while not perfect, everything in the universe is put together amazingly well, the best way possible under the circumstances. You do not have to test the fact that the cat has four legs, or that the four figures of former presidents on Mount Rushmore are not a product of nature. “Intelligent design” is everything we see around us.

Of course, the question is what has caused these amazing assemblages of particles to exist and to function the way they do? Well, everything has a purpose, and it is up to us to find out what that purpose is. Religious people say there is a supernatural creator god that governs their business and was in charge of the making of the world. Their proof is ‘the Bible says so.’ In other words, a huge amount of nothing. In Buddhism, though, there is no creator god but a source from which everything there is comes. That makes sense, and quantum physics more than supports the idea. Everything is energy and information, and energy and information must be emitted by something, something we know nothing about for now.

So let’s not dismiss ID because we do not understand what caused our world to be what it is, the same way we don’t dismiss gravity even though we don’t yet know what makes gravity gravity. All we can do is observe and acknowledge the reality of what we see and make an effort to understand it. That said, neither religion nor the theory of evolution is of help with that.

http://www.atimeofchange.net/2.....ty-no.html