
Coordinated Complexity — the key to refuting postdiction and single target objections

[As I recall, Jason Rosenhouse objected that Bill Dembski’s notion of specification cannot be applied to biology. This essay is written to challenge some of the objections I think I’ve heard him raise informally over the years at my ID talks at his school and in our discussions at ID and creation conferences. He’s one of the brightest critics of ID that I know, and thus I think objections he might raise should be addressed.]

The opponents of ID argue something along these lines: “Take a deck of cards and randomly shuffle it; the probability of any given sequence occurring is 1 out of 52 factorial, or about 1 in 8×10^67. Improbable things happen all the time; it doesn’t imply intelligent design.”
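That arithmetic, at least, is easy to verify; here is a minimal Python sketch:

    import math

    # Number of distinct orderings of a standard 52-card deck
    orderings = math.factorial(52)
    print(f"52! = {orderings:.2e}")                          # ~8.07e+67
    print(f"P(one specific sequence) = {1 / orderings:.2e}")  # ~1.24e-68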

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if the cards dealt from one random shuffle are repeated by another shuffle? Would you suspect intelligent design? A case involving exactly this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order that they were previously dealt out (no easy shuffling feat!). They would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards as they came out. When the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters could play knowing what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance, because the first random shuffle provided a specification for detecting intelligent design in the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior one.

Here is a spectacular example of how a skilled dealer can control the sequence of cards through an intelligently designed shuffle:

But how does this relate to the problem of ID and defining specifications which signify the action of an intelligent agency?

The answer is that it illustrates how circumstances themselves can provide a specification for detecting design, even when we don’t have the specification in advance. The casinos observing the cheating team did not know in advance what the outcome of the shuffled decks would be, yet they were able to detect intelligent design despite having no explicit pattern of cards to look for before catching the crooks.

Opponents of ID have insinuated that we cannot legitimately compute probabilities of designed objects unless we have an explicit specification of the design before we observe the artifact. Not so. The FBI case is a case in point!

Now consider a randomly generated string: “yditboawrt”. Its existence might not be significant unless it were adopted as someone’s password. At that point, what was previously just a random string takes on significance. If one has a lock-and-key/login-password type system, one can legitimately estimate a threshold improbability by looking at the improbability of the password itself.

For example, a given password of 10 letters will have an improbability of 1 out of 26^10. If we found a random string of 10 letters (say, Scrabble pieces) lying in a box, it might be rather pointless to use probability to argue the pattern is designed merely because its improbability is 1 out of 26^10. However, if we found a computer system protected by a login-password consisting of 10 letters, the improbability of that system existing in the first place is at least 1 out of 26^10, and actually far more remote, since the system that implements the password protection is substantially more complex than the password itself (and this is an understatement).
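To make those numbers concrete, here is a minimal Python sketch of the 26^10 calculation (the system implementing the check is, as argued above, far more improbable still):

    ALPHABET = 26   # lowercase letters only
    LENGTH = 10

    search_space = ALPHABET ** LENGTH                    # 26^10
    print(f"search space   : {search_space:,}")          # 141,167,095,653,376
    print(f"P(random guess): {1 / search_space:.2e}")    # ~7.1e-15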

This illustrates how specified complexity can be detected in systems where we don’t have specifications in advance. One merely calculates the complexity of one of the parts that must be coordinated with the whole. I call this coordinated complexity. The FBI case is an example where individuals were able to confirm intelligent design without having explicit patterns to work with in advance.

There may be infinitely many ways to make lock-and-key or login-password systems. But the fact that there are infinitely many ways to build these things does not imply the systems are probable. Likewise, even though in principle we could construct life forms in an infinite number of ways, it does not mean they are probable outcomes of random events, any more than lock-and-key systems are probable outcomes of random events. Critics of ID claim that ID proponents assume life can take basically one form. That objection is largely irrelevant, because the improbability of a design is evident in the level of coordinated complexity the artifact exhibits! The calculations ID proponents use that focus on a specific target are legitimate once one considers that the target itself is specified by the entire system of which it is a part.

Critics of ID often argue something to the effect that “simple replicators can be built,” with the insinuation that since simple replicators exist, complex replicators are somehow probable. This is like saying that if a hacker is able to compromise a relatively short password by exhaustive search, he will somehow crack a far more complex one with the same techniques. Not so! Yet this is exactly what defenders of OOL research do. They give examples of replicators and suggest that it is not so hard to make a replicator. Agreed, it is relatively easy to make a replicator (like the Ghadiri peptide), but making replicators isn’t the problem. Nor is the problem merely that we have an improbable structure in life (after all, the sequence of a randomly shuffled deck of cards is astronomically improbable), but that the structure exhibits specified improbability. It is specified because of the level of coordination that defines the structure, just like the level of coordination needed to implement a password-protected system of a mere 10 letters, or the coordination of a false-shuffled deck of cards with a previously, randomly shuffled one. It is specified complexity because it is coordinated complexity.
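The short-versus-long password point is just the exponential growth of the search space, as a minimal sketch shows; the attacker’s guess rate below is an assumed figure purely for illustration:

    GUESSES_PER_SECOND = 1e9   # assumed attacker speed, for illustration only

    for length in (4, 6, 8, 10, 12):
        space = 26 ** length                        # lowercase-only search space
        days = space / GUESSES_PER_SECOND / 86400
        print(f"{length:2d} letters: {space:.1e} candidates, "
              f"~{days:.1e} days to exhaust")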

[Image: lock and key]

Just as the calculation of the improbability of a password understates the improbability of a password-protected computer system, the calculation for the arrival of a given protein may actually understate the improbability of the system in which that protein is deemed functional. For example, letting the protein be analogous to a key, consider how hard it would be, given a key, to build the corresponding lock!
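The key-versus-lock asymmetry can be put in the same terms. Here is a toy pin-tumbler model; the pin count and cut depths are assumptions chosen purely for illustration:

    PINS = 6      # assumed number of pins (illustrative)
    DEPTHS = 10   # assumed cut depths per pin (illustrative)

    keyspace = DEPTHS ** PINS   # distinct key bittings: 10^6
    print(f"distinct keys: {keyspace:,}")
    print(f"P(randomly configured lock fits a given key): {1 / keyspace:.1e}")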

The problem of solving the origin of life is akin to a login-password protected computer system arising spontaneously. For this reason, it would seem, the calculations of life’s improbability put forward by ID proponents are quite valid, and may actually understate the magnitude of the problem of life spontaneously arising.

Comments
One of the toughest jobs in this debate is taking the ubiquitous obviousness of design and breaking it down into discrete, communicable concepts and usable terms and phrases for those who shut their eyes to it. Well done, scordova! Coordinated complexity! Even if you have a machine that randomly varies not only the shape of nodes on a key, but also the number of nodes on the key from a handful to hundreds, and another machine that randomly varies the shape and number of nodes in a lock, what is the chance that any key will match any lock and also be produced in a time frame where both exist at the same time in proximity to each other? I have a question, though. If there is a mutation and a key changes and now fits a lock that was hanging around for no productive reason, is the error-correction code built in, or is that a third happy accident, where not only the key and the lock happen to exist at the same time, but a mutation in another key-lock system generates an error-correction mechanism - miraculously - for the new key-lock pair that the mechanisms just-so-happened to produce? William J Murray
Thanks Deuce. Awesome insight. Nice to see you here. scordova
The fundamental problem with Rosenhouse's and other critics' argument is that it's just as much an argument against Darwinism as it is against ID. When we observe living things, we see organisms and structures that exhibit patterns that demand explanation. We see matter arranged in forms that appear to have purpose or function (aka design). Biology is ultimately the practice of trying to understand and explain those forms and the function they exhibit. The whole point of Darwin's theory was to explain those forms, or patterns, in a way that doesn't require actual purpose or function, by supplying a designer-substitute. What this argument implies, on the other hand, is that those patterns are merely projected onto the world by us, and so don't really need to be explained after all. He's saying that they're essentially like any random poker shuffle. The odds of a random shuffle coming out the particular way that it does are extremely low, but as you say, that doesn't require a design-explanation. A person who tries to explain the pattern exhibited by that shuffle using design is deluding himself that there is an important pattern there that needs explaining in the first place. But here's the thing: a person who tried to explain that pattern using some sort of substitute-designer would be just as deluded. A person trying to "explain" the outcome of a random shuffle at all would be deluded, because there's nothing there to explain! To use that sort of argument against ID in biology is to commit yourself to a constructivist or post-modernist view of science (at least if you're consistent, which of course the people making such arguments never are). It's essentially to say that all those forms we observe and seek to explain - organisms, eyes, hearts, flagella, molecular machines, error-correcting mechanisms, etc - don't really exist objectively and independently of our minds, but are patterns being subjectively imposed on reality by us, and so require no explanation (except perhaps a psychological explanation), just like a man who convinces himself that there are deliberately placed faces in every cloud, or who thinks that some random shuffle of cards is important and dedicates his life to trying to explain the imaginary "mystery" of why it came out that way. It implies that all biological function is "socially constructed" by us rather than discovered. And again, worst of all for the people making such an argument, if the logic is followed consistently the argument cuts across Darwinian explanation (and all biological science for that matter!) as much as design explanation. If you are logically consistent and don't wish to adopt biological constructivism/post-modernism with all the absurdity that it entails, then you must grant the realism that science requires, and in so doing you must grant that there is specification in biology. Deuce
Barry @31: Absolutely agree with you. That is where I was heading in my #2, namely, if we are relying on infinite resources and infinite time, then we don't have an explanation. I think you've perhaps articulated this more succinctly. Sal @34: LOL! Eric Anderson
UprightBiPed@22 Have you read Werner Gitt's new book "Without Excuse"? He seems to develop the argument from information into a kind of cosmological argument for God and into a general argument against naturalism and materialism. kuartus
Something tells me that the casino security guards would be hard to convince... ;D jstanley01
If I recall correctly, I thought I saw Dr. Rosenhouse mention he accepted the multiverse and/or many-worlds idea. The false shuffle team mentioned earlier might consider using the multiverse as a defense for explaining the specified complexity in evidence in the cards. After all, if many scientists think it is a good explanation for specified complexity in life, it ought to be good enough to defend criminals who perform crimes that evidence far higher probabilities than those in life. scordova
scordova @ 16, got it. jstanley01
Eric Anderson @21, thanks for the answer and the insights. jstanley01
EA @ 21. You say there are at least three reasons the multiverse does not turn the materialist creation myth into a rational explanation. I would add a fourth. 4. Resort to the multiverse makes “explanation” itself pointless. If we allow the multiverse concept to lead us to say that any event with a non-zero probability of happening through sheer blind chance must in fact happen, then chance suddenly “explains” everything and therefore nothing. Barry Arrington
Upright Biped @19, Sal @28: Apologies for jumping into the middle, but would it be correct to say the following?
1- It is quite common for physical systems to arise that have some relationship to each other. This is why we are constantly faced with the question of correlation/causation: did the hot sidewalk actually cause my ice cream cone to melt, or is there a third cause (sun) that can explain both? In this category, we are typically trying to determine whether there is a physical causation mechanism that relates the two systems.
2- It is quite common for physical systems to arise that have some immaterial relationship to each other. Say, the last 4 digits of your best friend's phone number in high school happened to be the same as the last 4 digits of your SS#. Or in Sal's example, person x tosses 4 heads in a row and person y does the same thing. However, the physical systems we observe falling into this category are almost invariably short (i.e., not complex), and the immaterial relationship imposed is often, though not always, simply one of identity or similarity. That is to say, the systems demonstrate some identical or similar features, but typically the features don't mean anything.
3- It has not ever been observed that two physical systems arise independently and exhibit a relationship that is (i) immaterial, (ii) complex, and (iii) has meaning/function. Given the probabilities, we can say the probability of this last category is effectively zero, although one could argue that it is just exceedingly low. Eric Anderson
I should add to my comment #23, and this is critical: Natural selection is not a relevant answer to the awful probabilities that beset materialistic abiogenesis, because those calculations typically assume natural selection is operating in all its Darwinian aggrandized, anthropogenicized glory, perfectly and carefully selecting all that is good and rejecting all that is bad, per Darwin's literary description. In other words, natural selection cannot make the probabilities better, because natural selection is already assumed in making the calculations in the first place. ----- Consider a proteins-first scenario. We say to chance: "Go find a functional protein." If chance happens, against the terrible odds, to stumble upon a functional protein, we say to natural selection: "OK, you get to keep this functional protein, all carefully safe and preserved." Then we say to chance: "Go find the next functional protein." If chance finds it, we let natural selection hold onto it as well. Finally, once all the proteins have been carefully and safely protected by the benevolent hand of natural selection, we go back to chance and say: "OK, now put these proteins together in a functional system." And on and on. It doesn't matter whether we view this as a sequential operation or as an all-at-once operation. Chance has to find the proteins (or the whole functional system) and then we assume natural selection is doing a perfect job of preserving the functional element. The issue is the same, whether we are talking about proteins-first or DNA-first or RNA-first, or whatever-first scenario. Chance does the searching and then natural selection gets to keep anything functional that chance happens to find. ----- In reality of course, natural selection won't do anything even approaching a perfect job. Interfering reactions, natural breakdown of chemical components, subsequent mutations, mechanical stresses and forces in the environment, the vagaries and hazards of nature, will all likely obliterate the nascent system before anything really gets off the ground. Some authors have discussed these challenges in passing, but these challenges are usually unquantifiable enough and make things so bad that nearly all probability calculations for first life end up ignoring them and just assuming that the island of function located in the vast sea of search space, once found, will be automatically and perfectly preserved. Again: natural selection is irrelevant as an answer to the probability problem, because it is already assumed in arriving at the probabilities in the first place. Eric Anderson
Upright Biped, Apologies for the misunderstanding. But to clarify, you asked:
how does one calculate the probability that two isolated physical objects will arise which demonstrate immaterial relationships, and that those two objects will be coordinated one with the other?
You were asking HOW the probability is calculated, not what the probability is. You gave your answer to the probability: Answer: 0. The answer isn't zero, imho. It is possible that a set of 4 random coins in one corner of the world will be all heads and another 4 random coins in another part of the world will also be all heads. Thus they have a non-material relationship which emerged possibly by accident (they symbolically mirror each other). It is highly improbable that physical principles can create large-scale information processing. Immaterial software is special because it is decoupled from the material properties of hardware. For software to be software, its salient properties cannot be dependent on hardware properties. But it might not be accurate to say that hardware (as in glitches) can't possibly modify software by accident. Happens all the time. It would be fair to say, however, that hardware can't by accident consistently make large coordinated software like Windows 7, Unix, or the software found in living cells. It is fair to say that software transcends hardware, that one does not understand the key properties of software by understanding the chemical and physical properties of hardware. This is evidenced by the fact that the same piece of software frequently runs on radically different hardware architectures. But I don't think I would go so far as to say the probability is absolutely zero, only operationally zero with respect to OOL. That was the distinction computer scientist and chemist Don Johnson made in Programming of Life. scordova
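For concreteness, the 4-coin example in the comment above works out as follows (a minimal Python sketch):

    from fractions import Fraction

    # Two independent sets of 4 fair coins, as in the example above.
    p_all_heads = Fraction(1, 2) ** 4   # one set is all heads: 1/16

    print(p_all_heads ** 2)             # both sets all heads by accident: 1/256
    print(Fraction(1, 2) ** 4)          # the two sets merely matching each other
                                        # (any shared pattern): 1/16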
kf:
From the simplest independent and dependent living cells, we know that we are looking at about 1 million bits worth of genetic info for origin of life, codes for proteins, regulatory code, etc.
Do you happen to have handy a rough estimate/calculation or source for this number? Seems a little on the low side to me, but I haven't tried to run the calculations. I've seen some calculations for simple genomes, but haven't seen anything including cellular machinery, regulatory codes, epigenetic information, etc. Eric Anderson
Regardless, I have to take issue with this formulation, because it concedes way too much.
Yes, I've been known to be generous to my opponents on the other side as they try to defend their indefensible position. The OOL researchers (not the internet DarwinDefenders) are incredibly gallant. They are fighting impossible odds. There was once an old saying in American culture: "Don't hit a man when he's down. Help him up first." Though I disagree with OOL researchers, here is my tribute to their gallantry and determination: http://www.youtube.com/watch?v=YGzqbEeVWhs scordova
This is in response to the tripe posted on The Septic Zone: "Natural selection can put Functional Information into the genome." First, natural selection is a result and becomes no more than a statistical artifact. Second, biological fitness pertains to reproductive success, which is an after-the-fact assessment. Third, there is behaviour, something that can be changed much more quickly to aid survival and adds nothing to the genome. Joe F sez:
The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone.
Unfortunately Joe F never provides a reference for that bit of tripe. I have never read any IDist say anything like that. Methinks Joe F made it up
The relevant measure is fitness.
Umm, biological fitness is nonsense, Joe F; it is an after-the-fact assessment. But anyway, as I explained to Joe F, CSI pertains to origins. Unfortunately Joe F refused to grasp that fact. Also, natural selection has never been observed to do anything. So that would be another problem. Joe
Folks: The very constructive discussion continues. EA has put his finger on a very central challenge to the all-purpose appeal to the claimed or assumed wonderful powers of natural selection:
natural selection doesn’t do anything to sample the search space. The search space has to be sampled by something else (chance or some kind of guided direction). Only when the search has successfully stumbled upon a function can natural selection attempt to preserve it. But of course natural selection isn’t even relevant and doesn’t have anything to preserve the function against until we have at least two different replicators in close proximity competing against each other for scant resources. Does this even make sense in an abiogenesis scenario?
It is worth pointing out that already we can see how loosely the term "natural selection" is being used, not in the context of differential reproductive success, but in the sense of access to niches of success. This issue has further been set in the context of abiogenesis, but it also applies to the issue of the origin of novel body plans, once we factor in implications of the genetic code. From the simplest independent and dependent living cells, we know that we are looking at about 1 million bits worth of genetic info for origin of life, codes for proteins, regulatory code, etc. That is 1,000 times as many bits as would credibly exhaust the blind search capacity of our observed cosmos. But for novel body plans, we are looking at maybe 10 million to 100 million bits. Each. Dozens of times over. And, NS is really a culler-out of inferior varieties; it is not the engine of variation. Some form of chance process has to drive that search of a space of contingent possibilities, once we rule out intelligence as the evolutionary materialists do. The only hope is that such functional configurations must be commonplace, i.e. contiguous continents of function, not isolated islands. Indeed, that is the implication of the Darwin-style tree of life diagram, that by incremental variations we can connect microbes to man. What is the actual empirical evidence of such smoothly connected incremental variability? Nil. We know that codes are highly specific and tend to be breakable by injecting fairly small random variations. Similarly, we know that co-ordinated, functionally specific organised complexity tends to be exactingly specific, as anyone who has watched a key being duplicated can testify, or anyone who has had to match a car part. So, what is the empirically observed evidence that allows naturalistic evolutionary materialists to confidently posit that the world of life is an exception to this pattern? Again, nil -- apart from question-begging a priorism along the lines of Lewontin et al. There is something rotten in the state of origins science in our time. GEM of TKI kairosfocus
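For what it's worth, the "1 in 10^48" figure in the comment above can be reproduced as follows; the assumed sampling capacity of 10^102 states for the solar system is taken from the comment's framing, not derived independently:

    # Fraction of a 500-bit configuration space an assumed
    # solar-system-scale blind search could sample.
    config_space = 2 ** 500   # ~3.3e150 possibilities

    # Assumed sampling capacity (from the comment's framing: ~1e57 atoms
    # changing state very rapidly over cosmic time).
    samples = 10 ** 102

    print(f"config space    : {config_space:.1e}")
    print(f"fraction sampled: {samples / config_space:.1e}")  # ~3e-49,
                                                              # about 1 in 10^48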
Sal:
Now Darwinists keep arguing that the first life didn’t emerge all at once, but in pieces where selection worked. The problem with that: natural selection can work on things that aren’t replicating in the first place.
I presume you meant natural selection "can't" work . . . Regardless, I have to take issue with this formulation, because it concedes way too much. First, natural selection doesn't do anything to sample the search space. The search space has to be sampled by something else (chance or some kind of guided direction). Only when the search has successfully stumbled upon a function can natural selection attempt to preserve it. But of course natural selection isn't even relevant and doesn't have anything to preserve the function against until we have at least two different replicators in close proximity competing against each other for scant resources. Does this even make sense in an abiogenesis scenario? Second, there is no evidence that natural selection has the ability to do anything even remotely meaningful in terms of building complex specified information. As Behe and others have pointed out, what natural selection seems to be capable of is making a few-bit changes when there are huge populations under extreme selective pressure. That is the 'edge of evolution.' So it doesn't matter if replication and natural selection in all their fantasized glory existed right from the get-go. There is no evidence -- wishful speculation only -- that we would get molecular machines, digital codes, complex specified information, the organisms we see around us. Eric Anderson
Sal, you and I are talking past each other, and probably so much so that there is no need in trying to fix it. If you want to know where I am coming from, you can probably glean as much by going here cheers Upright BiPed
jstanley01: Sal has given you the standard materialist position behind the multiverse theory as a salvation for abiogenesis, but I want to pursue your comment for a moment, because you are absolutely right that the multiverse is useless as an argument to make a material origin of life probable. There are at least three reasons:
1. There is no evidence for it.
2. The probabilities of abiogenesis are so unfavorable that you would essentially need a preposterous number of multiverses to even begin to swing the odds in your favor, meaning that you still have a probability problem. And that is even assuming the multiverses contained conditions amenable to life. The laws of physics and chemistry which permit life are extremely fine-tuned, so the odds of getting a universe amenable to life are astronomically small, even assuming the multiverse idea. As a practical matter, what this effectively means is that you need an infinite number of universes to deal with the probabilities. Recourse to infinite resources and infinite time as an answer to a probability question is not an answer. It is just a materialist miracle story.
3. It doesn't make one bit of difference whether there are other universes. Even other universes just like ours. Even millions of universes just like ours. We are trying to answer the question: how did life arise and develop to its current state of diversity and complexity, given our universe (its age, structure, laws of chemistry and physics that we know, and so forth)? Here and now. In our universe. It has no impact on the probabilities in our universe and makes no difference what laws of physics and chemistry might exist in other hypothetical universes. In other words, the multiverse idea is simply irrelevant to the question on the table, which is: Given our universe, what is the most reasonable explanation for life? Eric Anderson
Upright, The way to do it is easy given the right circumstance. Take the example of coins. Granted they are designed, but can we detect another layer of organization in a configuration of coins? Let's say we have two rows of coins laid out on a table. Clearly the coins are designed, and so is the fact they are laid out in rows. If, however, we see one row have the apparently random pattern: H T T H H H T H T T H H H T T T H T ... we might not think much of it until we notice the other row has the exact same pattern: H T T H H H T H T T H H H T T T H T ... The objects are coordinated with one another. The relationship is not material between the two rows (since the concept of heads and tails is an immaterial concept to describe physical coins). We then have cells that look mostly identical through the process of common descent (even creationists accept some common descent). Obviously the identical coordinated patterns are improbable if we were dealing with a random soup of biological molecules, but highly probable because of the machinery in the cell. But the fact we see coordination gets our attention. We can then calculate the probabilities associated with creating copy machines like the cell. It would follow along the lines of the way I calculated probabilities for the password-protected system. In such a case we are estimating the probability of arriving at functional proteins for the system via random or assisted search (if selection is involved), much like the probability of a hacker compromising a password via random or assisted search. If the protein is critical to life, like say insulin, it is very reasonable to say that the search is effectively random and not assisted by selection, since without functioning insulin the population that needs it would be dead, and one cannot have natural selection on dead populations. We can make conservative estimates of the protein forming by taking the nearest related protein and estimating the number of mutations needed to create function for the system. Calculating the probabilities for the origin of life problem would be easier, since we estimate what it would take to get a protein from a primordial soup to make DNA. Pick a protein that would be considered primitive to life and calculate the probability that it would be found via random search (just like ID proponents have done all along). The essay above only points out why such a calculation is a valid measure of specified complexity, because it actually understates the improbability of such a coordinated machine emerging in one step. Now Darwinists keep arguing that the first life didn't emerge all at once, but in pieces where selection worked. The problem with that: natural selection can't work on things that aren't replicating in the first place. Genetic algorithms can't solve passwords, and neither will they solve the structure of proteins that are sufficiently complex (functioning doesn't take place till all the essential parts are in place). Passwords are strings of characters, and proteins are strings of characters too (albeit with a different alphabet, known as the amino acids of life). Searching for a protein string in a biotic soup is analogous to the search for a password at random. scordova
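The two-row coin illustration in the comment above is easy to quantify; a minimal Python sketch with the exact value and a Monte Carlo check (trial count chosen arbitrarily):

    import random

    N = 18  # coins per row, as in the pattern shown above

    # Exact probability that a second random row duplicates the first:
    print(f"P(match) = 1/2^{N} = {1 / 2**N:.2e}")   # ~3.8e-06

    trials = 1_000_000   # arbitrary trial count; expect only a few hits
    hits = sum(random.getrandbits(N) == random.getrandbits(N)
               for _ in range(trials))
    print(f"observed frequency: {hits / trials:.2e}")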
The question of course, is how does one calculate the probability that two isolated physical objects will arise which demonstrate immaterial relationships, and that those two objects will be coordinated one with the other? Answer: 0 Upright BiPed
Hi Sal, The probability of abiogenesis has to hurdle the establishment of a formal system which observationally demonstrates an immaterial relationship between codon and resulting effect, as physically set by the aaRS, which has no material interaction with either the codon or the effect. Upright BiPed
But why would x be constant? Wouldn’t the “search space” (if I’m using the term correctly) denominator be increasing at the same rate as the number of universes expressed by the numerator? IOW, in one universe the odds would be 1/x, in two (1+1)/(x+x), in three (1+1+1)/(x+x+x).
x is presumed constant to reflect the assumption that each universe has the same probability of life emerging. Consider rolling dice. The probability of each roll of two dice landing 12 is 1 out of 36, and it is the same on every roll. Several rolls will increase your chances of landing 12 at least once. In the case of multiple universes, each universe is like a roll of the dice. Just as rolling the dice enough times will show at least one "12", enough universes will allow for one universe where life arises. But to quote Einstein: "God doesn't play dice with nature". scordova
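The dice analogy follows the standard at-least-one-success formula, 1 - (1 - p)^n; a minimal Python sketch:

    P_TWELVE = 1 / 36   # probability two fair dice land "12" on one roll

    for n in (1, 10, 25, 100):
        p = 1 - (1 - P_TWELVE) ** n   # at least one "12" in n rolls
        print(f"{n:3d} rolls: P(at least one 12) = {p:.3f}")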
IOW the odds would be the same for the whole as for any one... :D jstanley01
As a layman daring to post on a UD thread like this, please forgive me, scordova, if this off-topic comment and question steers it in a direction where you would rather not go. But it seems to me as an observer sitting in "the peanut gallery," so to speak, that the last bastion which the materialists have constructed to flee to -- when confronted by these types of probability arguments from ID -- has increasingly become the Multiverse Theory. Now realize, my math skills extend no further than algebra. But based mostly on what, to me, passes for common sense, I don't see how the existence of multiple non-interacting universes helps their case. Which, expressed in fractions, I would understand this way: In one universe the odds of life arising are 1/x, but 2 ups the odds among both universes to (1+1)/x, and 3 ups them among the universes as a whole to (1+1+1)/x. So by extending the number of universes on out, you eventually come upon a universe in which the long odds pay and life arises. Like ours, for instance. But why would x be constant? Wouldn't the "search space" (if I'm using the term correctly) denominator be increasing at the same rate as the number of universes expressed by the numerator? IOW, in one universe the odds would be 1/x, in two (1+1)/(x+x), in three (1+1+1)/(x+x+x). In which case, it looks to me like you can add all the universes you'd like up to infinity, but the odds for producing life randomly in any one of them would not change. jstanley01
Sal, I like your house of cards example. That is very helpful imagery for explaining the issue to the lay person. Eric Anderson
Folks: Great, constructive discussion. I would stir into the pot that, in a large enough space of possible configs, the coordination and organisation needed to achieve specific, complex function will be a very unrepresentative fraction. Blind, chance-plus-necessity sampling is going to, by overwhelming likelihood, pick up the BULK of the distribution -- non-functional -- rather than what is unrepresentative. And such a sampling theory result does not require any exact estimation of probabilities. Not when the solar system's 10^57 atoms are looking at no more than 1 in 10^48 of the possibilities for 500 bits. Then, add the ingredient that the self-replication we must account for -- no bait and switcheroo games, objectors -- is CODE based. Code is the ultimate key-lock game, and comes with issues over language, symbol systems, encoders, transmitters, receivers, decoders and storage media. Pretty well once we are looking at digital code, the ONLY observed source is intelligence, and the notion that it could all fall out by happy chance would in a sane world have long since been laughed out of court. In short, once DNA and code-based replication were on the table, the game should have been over. That it is not, and that there have been attempts to pretend that DNA code is not a code, shows just how deeply embedded materialist bewitchment is. Yes, folks, people have -- for months here at UD -- tried to argue that we should not believe that DNA is a code using prescriptive information. And of course, don't tell them that we are looking at step-by-step execution of co-ordinated actions to make key nanomachines to carry out the work of the cell. Reductio ad absurdum. As in, if you swallow an absurdity, you will often stoutly resist the patent truth. Let's ask: on a common sense basis, what best explains a digital microcontroller based system that carries out complicated processes? Why should we think the living cell is any different, apart from ideologically imposed materialist a prioris? Bewitchment. But, sooner rather than later, it will be over, once people wake up from befuddlement. GEM of TKI kairosfocus
As Gil said at 1, I think the purpose is the specification. Purpose as specification. butifnot
Scordova, do you mean self-ordering instead of self-organization? Stephen Meyer says ordered systems are characterized by their low information content, whereas organized systems are high in specified complexity because of their indeterminate, aperiodic nature.
Thank you for the comment, and rather than answer your question directly, let me offer this thought. Organization is organizing things in ways that are resisted or not directly facilitated by self-ordering. OOL research tries to explain organization through self-ordering, which is like looking for square circles. My favorite example of this is a house of cards. The natural self-ordering tendency is for cards to rest mostly flat on a table, or maybe in a pile on top of each other. This is the most likely equilibrium configuration. The cards can be organized into a delicate house of cards. Such an organization of cards actually goes AGAINST the natural self-ordering tendency of cards (which is to lie flat in a pile, if that). A house of cards achieves a state of quasi-equilibrium, but it is a state that is not easily achieved via undirected forces. Physical principles make it possible for the house of cards to stand, but they also make it improbable for such a structure to emerge spontaneously from undirected forces (like wind or the table shaking). It is in this space of organized configurations, which physical law makes possible but simultaneously improbable, that designed organization can be recognized. OOL research either tries to explain organization through ordering (a contradiction) or essentially argues that the appearance of organization is an illusion. I recall Dr. Hazen showing many pictures of self-ordered phenomena like snowflakes to argue his case for OOL. But life is different. It is organized in a way whose assembly would tend to be resisted by physical principles, just like the formation of a house of cards via undirected processes is resisted by physical principles. I like the house of cards analogy a lot because you can visualize how self-ordering tendencies (like the tendency of cards to lie flat) will oppose the possibility of organization into a house. It is the fact that the molecules of life are organized in a way that goes against the most natural equilibrium configuration that makes life different from other collections of matter. Computer systems also have the features of a house of cards, but it is not quite so visually obvious. Living cells implement computers. scordova
Thanks, Sal. Both for your thoughts and kind words. I took a couple-year break from UD and other ID/evolution discussions, partly due to work and other commitments, partly due to feeling a bit tired of the same issues coming up over and over. A few months ago I got somewhat rejuvenated and decided to return here to check things out. There are some thoughtful posters and we've had several good discussions, although often in less detail than I'd like. I don't spend a lot of time at Talk Origins. The quality of some of the FAQ's over there is abysmal, and Musgrave's abiogenesis page, in particular, is a disaster. I suspect he knows it is a hatchet job, rather than a fair discussion. You are a more patient man than I to attempt a discussion of what he wrote there. Nevertheless, his page did point me to the two Ghadiri papers, so hopefully I'll be able to read them and find out more about Ghadiri's work. If history is any guide, what I'll find is that Ghadiri's papers do not fully support what they are claimed by abiogenesis proponents to support. This could turn out to be a first, however, so I'm looking forward to reading what his work actually demonstrates! :) I do have to gently disagree with you on one thing. The title to this post suggests that coordinated complexity can address the postdiction and single target objections. I fear this is too optimistic. The objections, as near as I can ascertain, do not rest on a reasoned evaluation of the facts, but rather on a philosophical commitment to chance and necessity. Coordinated complexity, while an extremely important point and a significant player in any probability calculation, will simply be viewed as yet another improbable event that has already happened (postdiction) or as yet another single target (albeit one in a long sequence of single targets). I have never seen any rational argument on the side of the materialists to address the awful probabilities that await a natural abiogenesis story. I've spent a fair amount of time thinking about what you are calling coordinated complexity. I haven't been able to decide whether it is a difference of degree or of kind. It sounds like you are arguing that it is a difference of kind, but that is a challenging argument to get people to accept, even if correct. Coordinated complexity may in fact be -- and will certainly be viewed by the materialist as -- a difference of degree only. Coordinated complexity, as important as it is, just adds more zeroes. But all the zeroes didn't faze the committed materialist in the first place, because he is interested in the storyline, not an objective assessment of the probabilities. Anyway, I hope that doesn't sound too negative. I know you aren't really expecting folks like Rosenhouse and Musgrave to suddenly 'see the light' once you've explained coordinated complexity. You've laid out some great points that perhaps will help those who are willing to listen solidify our thinking around these issues. Eric Anderson
Scordova, do you mean self-ordering instead of self-organization? Stephen Meyer says ordered systems are characterized by their low information content, whereas organized systems are high in specified complexity because of their indeterminate, aperiodic nature. kuartus
Since I mentioned Dr. Musgrave's article: on looking at it again, I saw him illustrating the exact same mistake that this essay addresses:
At the moment, since we have no idea how probable life is, it's virtually impossible to assign any meaningful probabilities to any of the steps to life except the first two (monomers to polymers p=1.0, formation of catalytic polymers p=1.0).
No sir. The problem is not the formation of catalytic polymers. It is that the polymers are structured into a lock-and-key system! The probabilities of lock-and-key systems can be estimated.
So I've shown that generating a given small enzyme is not as mind-bogglingly difficult as creationists (and Fred Hoyle) suggest.
No sir! As I showed, it is easy to generate a random string, but putting it in a context where it is functionally coordinated with a system is very improbable. Same with making a small enzyme. The issue isn't making the enzyme, but making an enzyme appropriate to a functioning Rube Goldberg machine. Like the OOL community, Musgrave redefines the problem being solved, leaving the reader with the impression that the ID argument has been refuted, whereas he refuted an argument which the ID proponent hasn't really made. To be fair to Dr. Musgrave, the problem being solved may not have been sufficiently articulated to prevent objections like his from being asserted. This essay is an attempt to clarify the issue. scordova
By the way, the Ghadiri peptide, like Fox's protocells, was made from pre-existing biological materials, since making homochiral polymers isn't easy from scratch. :-) Musgrave didn't point that out. :-) The reaction is best called self-catalysis. Further, it doesn't replicate the way a biological system replicates. OOL researchers are looking for self-organizing reactions, but that is a dead end. Life is made up of materials that notoriously don't self-organize (as is evident in decomposing dead bodies). That is why the design is special. The Ghadiri ligase is a self-organizing system. Self-organizing systems almost by definition can't be very good information-processing machines like computers or cells. I mentioned it to Dr. Hazen at my school. I pointed out papers to the effect. He was polite, but I don't think the OOL community will see their flaw: COMPUTER SYSTEMS HAVE BITS! Bits imply that the system has improbability (not self-organization) as a salient feature. Looking for computers that self-organize from disorganized chemical soups is like looking for square circles. The quest is doomed. Self-organization prevents the formation of an information processor, and thus prevents a computer from forming spontaneously. Improbability is needed to build computer memory systems, and improbability (measured in bits, like say in the computer's memory) guarantees the computer cannot be made by chance. The random shuffle of cards being replicated is astonishing because cards resist self-organized sequence replication. The materials that make up life would ordinarily not tend to self-replicate polymer sequences either; that is why the self-replication process is astonishing, whereas the replication of salt crystals is not at all astonishing by comparison. OOL researchers, faced with these problems, redefine the metric of success from "explain how software, and the computer system it resides on, can form spontaneously and replicate itself" to "explain how something can replicate". They are answering questions that aren't really being asked. The question really being asked is why the replicators of life are Rube Goldberg machines. Like the peacock's tail which made Darwin sick, life is rich with examples of rituals that would seem to detract from the necessity of pure survival. scordova
Eric, Delighted to see you. I posted a couple years back pointing out you were favorably mentioned in a peer-reviewed work :-) See: Michael Behe, Eric Anderson, David Chiu, Kirk Durston mentioned favorably in ID-sympathetic Peer-Reviewed Article. But regarding your question about the peptide, you can find it here in Ian Musgrave's essay: http://www.talkorigins.org/faqs/abioprob/abioprob.html I think this essay pretty much refutes Musgrave's attempts at dealing with the obvious statistical hurdles in OOL. Musgrave tries to redefine the problem as one of making replicators. That's not the problem. The problem is that the replicators do not take the simplest pathway to replication but are rather Rube Goldberg machines with lots of lock-and-key systems along the way! scordova
Sal, good to see you. Couple of quick thoughts. 1. "Improbable things happen all the time, it doesn't imply intelligent design." If Rosenhouse "is one of the brightest ID critics" and this is an argument he is making, then I'd have to object to your characterization of him as bright. He obviously doesn't understand the design argument or is purposely setting up a strawman. 2. ". . . simple replicators can be built . . ." Can anyone point me to one of these? I followed your link about the Ghadiri peptide, but it was a long discussion that didn't get to the real point. I also went to the Ghadiri website, but didn't see much specific detail. Would you happen to have a copy of Ghadiri's paper discussing this self-replicating molecule? I'm particularly interested because lots of folks, including the Harvard Origins Project, are spending a lot of time and energy trying to come up with a viable scenario for a self-replicating molecule. If Ghadiri has already demonstrated such a thing, that would be most interesting. Of course, as you mention, the real issue is specified complexity, and I appreciate your taking time to respond to the Rosenhouse strawman. Eric Anderson
Sal, As you know, I lived in the depressing, nihilistic, soul-destroying depths of atheistic materialistic philosophy for a dreadful 43 years, but was liberated. And ID theory was a major factor in my liberation. GilDodgen
Thanks Gil for your peer review. I corrected my dyslexic error. Thanks a million! scordova
Sal, How nice to see you here! Actually, it's 26^10 (26 possibilities in 10 locations) or approximately 1.4e+14, but your point still stands. Something completely mystifies me. The bacterial flagellar system is a machine, with an obvious purpose (propulsion). If this isn't a specification, nothing is. Darwinists resort to what I would describe as intellectual contortionism in order to deny the obvious. And the motivation for this grotesque mental plasticity is clear to me: If design really does exist in biological systems, the Darwinist's entire worldview collapses. As Paul points out in Romans, it is clearly seen that some things are made. GilDodgen
