Uncommon Descent Serving The Intelligent Design Community

Coordinated Complexity — the key to refuting postdiction and single target objections


[As I recall, Jason Rosenhouse objected that Bill Dembski's notion of specification cannot be applied to biology. This essay is written to challenge some of the objections I think I've heard him raise informally over the years at my ID talks at his school and in our discussions at ID and creation conferences. He's one of the brightest critics of ID that I know, and thus I think objections he might raise should be addressed.]

The opponents of ID argue along these lines: "take a deck of cards and randomly shuffle it; the probability of any given sequence occurring is 1 in 52 factorial, or about 1 in 8×10^67. Improbable things happen all the time; it doesn't imply intelligent design."

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if the cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design? A case involving exactly this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order that they were previously dealt out (no easy shuffling feat!). The cheaters would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. When the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team would be able to play knowing what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
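The intuition here can be made quantitative. As a rough sketch (not part of the original post; the variable names are ours), the probability that an honestly random reshuffle reproduces the previous deal's exact sequence is 1 in 52 factorial:

```python
import math
import random

# Chance that a fair reshuffle exactly reproduces the previous deal: 1/52!
p_repeat = 1 / math.factorial(52)
print(f"{math.factorial(52):.3e}")   # about 8.066e+67

# Simulate two independent shuffles and compare them
deck = list(range(52))
first = random.sample(deck, len(deck))
second = random.sample(deck, len(deck))
print(first == second)               # virtually always False
```

Seeing the same sequence twice is therefore essentially impossible by chance, which is why the repeated deal itself served as the specification.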

Here is a spectacular example of how a skilled dealer can control the sequence of cards through an intelligently designed shuffle: [embedded video]

But how does this relate to the problem of ID and defining specifications which signify the action of an intelligent agency?

The answer is that it illustrates how circumstances themselves can provide specification for detecting design even when we don’t have the specification in advance. The casinos observing the cheating team did not know in advance what the outcome of the shuffled decks would be but they were able to detect intelligent design despite lacking explicit patterns of cards to look for before catching the crooks.

Opponents of ID have insinuated that we cannot legitimately compute probabilities of designed objects if we don’t have explicit specification of the design before we make the observation of the artifact. Not so. The FBI case is a case in point!

Now consider a randomly generated string: "yditboawrt". Its existence might not be significant unless it were converted for use as someone's password. But once it becomes a password, what was previously just a random string takes on significance. If one has a lock-and-key or login-password type system, one can legitimately estimate a threshold improbability based on the improbability of the password itself.

For example, a given password of 10 letters has an improbability of 1 out of 26^10. If we found a random string of 10 letters (say, scrabble pieces) lying in a box, it would be rather pointless to use probability to argue the pattern is designed merely because its improbability is 1 out of 26^10. However, if we found a computer system protected by a login password that consists of 10 letters, the improbability of that system existing in the first place is at least 1 out of 26^10, and actually far more remote, since the system that implements the password protection is substantially more complex than the password itself (and this is an understatement).
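For concreteness, the 26^10 figure works out as follows (an illustrative sketch; the variable names are ours):

```python
import math

# Search space of a 10-letter, lowercase-only password
password_space = 26 ** 10
print(password_space)                # 141167095653376 (about 1.4e14)

# For scale: the orderings of a 52-card deck dwarf this
print(f"{math.factorial(52):.2e}")   # about 8.07e+67
```

So a 10-letter password is vastly less improbable than a particular deck ordering, yet it still suffices to anchor the argument, because the password-protected system is at least that improbable.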

This illustrates how specified complexity can be detected in systems where we don’t have specifications in advance. One merely calculates the complexity of one of the parts that must be coordinated with the whole. I call this coordinated complexity. The FBI case is an example where individuals were able to confirm intelligent design without having explicit patterns to work with in advance.

There may be infinitely many ways to make lock-and-key systems or login-password systems. But the fact that there are infinitely many ways to build these things does not imply the systems are probable. Likewise, even though in principle we could construct life forms in an infinite number of ways, it does not mean they are probable from random events any more than lock-and-key systems are probable from random events. Critics of ID claim that ID proponents assume life can take essentially only one form. That objection is largely irrelevant, because the improbability of a design is evident from the level of coordinated complexity in evidence in the artifact! The calculations ID proponents use that focus on a specific target are legitimate if one considers that the target itself is specified by the entire system of which it is a part.

Critics of ID often argue something to the effect that "simple replicators can be built," with the insinuation that since simple replicators exist, complex replicators are somehow probable. This is like saying that if a hacker is able to compromise a relatively short password by exhaustive search, he will somehow crack a far more complex one with the same techniques. Not so! Yet this is exactly what defenders of OOL research do. They give examples of replicators and suggest that it is not so hard to make a replicator. Agreed, it is relatively easy to make a replicator (like the Ghadiri peptide), but making replicators isn't the problem. Nor is the problem that we merely have an improbable structure in life (after all, the sequence of a randomly shuffled deck of cards is astronomically improbable), but that the structure is a specified improbability. It is specified because of the level of coordination that defines the structure, just like the level of coordination needed to implement a password-protected system of a mere 10 letters, or the coordination of a falsely shuffled deck of cards with a previously, randomly shuffled deck. It is specified complexity because it is coordinated complexity.
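The password analogy above can be sketched numerically: success against a short password says nothing about a longer one, because the expected work grows by a factor of 26 per added letter (illustrative code; the helper name is ours):

```python
# Expected guesses to brute-force a random lowercase password of
# length n: about half the search space, i.e. 26**n / 2.
def expected_guesses(n: int) -> int:
    return 26 ** n // 2

for n in (4, 6, 10):
    print(n, expected_guesses(n))
# 4 228488
# 6 154457888
# 10 70583547826688
```

A search that succeeds at length 4 in seconds would take on the order of 300 million times longer at length 10, which is the point of the short-replicator objection.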

(Image: lock and key)

Just as the calculation of the improbability of a password understates the improbability of a password-protected computer system, the calculation for the arrival of a given protein may actually understate the improbability of the system in which that protein is deemed functional. For example, letting the protein be analogous to a key, consider how hard it would be, given a key, to build a corresponding lock!

The problem of solving the origin of life is akin to login-password protected computer systems arising spontaneously. For this reason, it would seem, the calculations of life's improbability put forward by ID proponents are quite valid, and may actually understate the magnitude of the problem for life spontaneously arising.

Comments
One of the toughest jobs in this debate is taking the ubiquitous obviousness of design and breaking it down into discrete, communicable concepts and usable terms and phrases for those who shut their eyes to it. Well done, scordova! Coordinated complexity! Even if you have a machine that randomly varies not only the shape of nodes on a key, but also the number of nodes on the key from a handful to hundreds, and another machine that randomly varies the shape and number of nodes in a lock, what is the chance that any key will match any lock and also be produced in a time frame where both exist at the same time in proximity to each other? I have a question, though. If there is a mutation and a key changes and now fits a lock that was hanging around for no productive reason, is the error-correction code built in, or is that a third happy accident, where not only the key and the lock happen to exist at the same time, but a mutation in another key-lock system generates an error-correction mechanism - miraculously - for the new key-lock pair that the mechanisms just-so-happened to produce?
William J Murray
December 20, 2013, 06:00 PM PDT
Thanks Deuce. Awesome insight. Nice to see you here.
scordova
March 15, 2012, 07:08 PM PDT
The fundamental problem with Rosenhouse's and other critics' argument is that it's just as much an argument against Darwinism as it is against ID. When we observe living things, we see organisms and structures that exhibit patterns that demand explanation. We see matter arranged in forms that appear to have purpose or function (aka design). Biology is ultimately the practice of trying to understand and explain those forms and the function they exhibit. The whole point of Darwin's theory was to explain those forms, or patterns, in a way that doesn't require actual purpose or function, by supplying a designer-substitute. What this argument implies, on the other hand, is that those patterns are merely projected onto the world by us, and so don't really need to be explained after all. He's saying that they're essentially like any random poker shuffle. The odds of a random shuffle coming out the particular way that it does are extremely low, but as you say, that doesn't require a design-explanation. A person who tries to explain the pattern exhibited by that shuffle using design is deluding himself that there is an important pattern there that needs explaining in the first place. But here's the thing: a person who tried to explain that pattern using some sort of substitute-designer would be just as deluded. A person trying to "explain" the outcome of a random shuffle at all would be deluded, because there's nothing there to explain! To use that sort of argument against ID in biology is to commit yourself to a constructivist or post-modernist view of science (at least if you're consistent, which of course the people making such arguments never are). 
It's essentially to say that all those forms we observe and seek to explain - organisms, eyes, hearts, flagella, molecular machines, error-correcting mechanisms, etc - don't really exist objectively and independently of our minds, but are patterns being subjectively imposed on reality by us, and so require no explanation (except perhaps a psychological explanation), just like a man who convinces himself that there are deliberately placed faces in every cloud, or who thinks that some random shuffle of cards is important and dedicates his life to trying to explain the imaginary "mystery" of why it came out that way. It implies that all biological function is "socially constructed" by us rather than discovered. And again, worst of all for the people making such an argument, if the logic is followed consistently the argument cuts across Darwinian explanation (and all biological science for that matter!) as much as design explanation. If you are logically consistent and don't wish to adopt biological constructivism/post-modernism with all the absurdity that it entails, then you must grant the realism that science requires, and in so doing you must grant that there is specification in biology.
Deuce
March 15, 2012, 04:30 AM PDT
Barry @31: Absolutely agree with you. That is where I was heading in my #2, namely, if we are relying on infinite resources and infinite time, then we don't have an explanation. I think you've perhaps articulated this more succinctly. Sal @34: LOL!
Eric Anderson
March 12, 2012, 10:17 PM PDT
UprightBiPed@22 Have you read Werner Gitt's new book "Without Excuse"? He seems to develop the argument from information into a kind of cosmological argument for God and into a general argument against naturalism and materialism.
kuartus
March 12, 2012, 06:29 PM PDT
Something tells me that the casino security guards would be hard to convince... ;D
jstanley01
March 12, 2012, 04:05 PM PDT
If I recall correctly, I thought I saw Dr. Rosenhouse mention he accepted the multiverse and/or the many-worlds idea. The false shuffle team mentioned earlier might consider using the multiverse as a defense for explaining the specified complexity in evidence in the cards. After all, if many scientists think it is a good explanation for specified complexity in life, it ought to be good enough to defend criminals whose crimes evidence far higher probabilities than life does.
scordova
March 12, 2012, 03:38 PM PDT
scordova @ 16, got it.
jstanley01
March 12, 2012, 02:34 PM PDT
Eric Anderson @21, thanks for the answer and the insights.
jstanley01
March 12, 2012, 02:28 PM PDT
EA @ 21. You say there are at least three reasons the multiverse does not turn the materialist creation myth into a rational explanation. I would add a fourth. 4. Resort to the multiverse makes "explanation" itself pointless. If we allow the multiverse concept to lead us to say that any event with a non-zero probability of happening through sheer blind chance must in fact happen, then chance suddenly "explains" everything and therefore nothing.
Barry Arrington
March 12, 2012, 12:17 PM PDT
Upright Biped @19, Sal @28: Apologies for jumping into the middle, but would it be correct to say the following? 1- It is quite common for physical systems to arise that have some relationship to each other. This is why we are constantly faced with the question of correlation/causation: did the hot sidewalk actually cause my ice cream cone to melt, or is there a third cause (sun) that can explain both? In this category, we are typically trying to determine whether there is a physical causation mechanism that relates the two systems. 2- It is quite common for physical systems to arise that have some immaterial relationship to each other. Say, the last 4 digits of your best friend's phone number in high school happened to be the same as the last 4 digits of your SS#. Or in Sal's example, person x tosses 4 heads in a row and person y does the same thing. However the physical systems we observe falling into this category are almost invariably short (i.e., not complex) and the immaterial relationship imposed is often, though not always, simply one of identity or similarity. That is to say, the systems demonstrate some identical or similar features, but typically the features don't mean anything. 3- It has not ever been observed that two physical systems arise independently and exhibit a relationship that is (i) immaterial, (ii) complex, and (iii) has meaning/function. Given the probabilities, we can say the probability of this last category is effectively zero, although one could argue that it is just exceedingly low.
Eric Anderson
March 12, 2012, 12:14 PM PDT
I should add to my comment #23, and this is critical: Natural selection is not a relevant answer to the awful probabilities that beset materialistic abiogenesis, because those calculations typically assume natural selection is operating in all its Darwinian aggrandized, anthropogenicized glory, perfectly and carefully selecting all that is good and rejecting all that is bad, per Darwin's literary description. In other words, natural selection cannot make the probabilities better, because natural selection is already assumed in making the calculations in the first place. ----- Consider a proteins-first scenario. We say to chance: "Go find a functional protein." If chance happens, against the terrible odds, to stumble upon a functional protein, we say to natural selection: "OK, you get to keep this functional protein, all carefully safe and preserved." Then we say to chance: "Go find the next functional protein." If chance finds it, we let natural selection hold onto it as well. Finally, once all the proteins have been carefully and safely protected by the benevolent hand of natural selection, we go back to chance and say: "OK, now put these proteins together in a functional system." And on and on. It doesn't matter whether we view this as a sequential operation or as an all-at-once operation. Chance has to find the proteins (or the whole functional system) and then we assume natural selection is doing a perfect job of preserving the functional element. The issue is the same, whether we are talking about proteins-first or DNA-first or RNA-first, or whatever-first scenario. Chance does the searching and then natural selection gets to keep anything functional that chance happens to find. ----- In reality of course, natural selection won't do anything even approaching a perfect job.
Interfering reactions, natural breakdown of chemical components, subsequent mutations, mechanical stresses and forces in the environment, and the vagaries and hazards of nature will all likely obliterate the nascent system before anything really gets off the ground. Some authors have discussed these challenges in passing, but these challenges are hard enough to quantify, and make things so bad, that nearly all probability calculations for first life end up ignoring them and just assuming that the island of function located in the vast sea of search space, once found, will be automatically and perfectly preserved. Again: natural selection is irrelevant as an answer to the probability problem, because it is already assumed in arriving at the probabilities in the first place.
Eric Anderson
March 12, 2012, 11:52 AM PDT
Upright Biped, Apologies for the misunderstanding. But to clarify, you asked:
how does one calculate the probability that two isolated physical objects will arise which demonstrate immaterial relationships, and that those two objects will be coordinated one with the other?
You were asking HOW the probability is calculated, not what the probability is. You gave your answer to the probability: Answer: 0. The answer isn't zero, imho. It is possible that a set of 4 random coins in one corner of the world will be all heads and another 4 random coins in another part of the world will also be all heads. Thus they have a non-material relationship which emerged possibly by accident (they symbolically mirror each other). It is highly improbable that physical principles can create large scale information processing. Immaterial software is special because it is decoupled from the material properties of hardware. For software to be software, its salient properties cannot be dependent on hardware properties. But it might not be accurate to say that hardware (as in glitches) can't possibly modify software by accident. Happens all the time. It would be fair to say, however, that hardware can't by accident consistently make large coordinated software like Windows 7, Unix, or the software found in living cells. It is fair to say that software transcends hardware, that one does not understand the key properties of software by understanding the chemical and physical properties of hardware. This is evidenced by the fact that the same piece of software frequently runs on radically different hardware architectures. But I don't think I would go so far as to say the probability is absolutely zero, only operationally zero with respect to OOL. That was the distinction computer scientist and chemist Don Johnson made in Programming of Life.
scordova
March 12, 2012, 09:39 AM PDT
kf:
From the simplest independent and dependent living cells, we know that we are looking at about 1 million bits worth of genetic info for origin of life, codes for proteins, regulatory code, etc.
Do you happen to have handy a rough estimate/calculation or source for this number? Seems a little on the low side to me, but I haven't tried to run the calculations. I've seen some calculations for simple genomes, but haven't seen anything including cellular machinery, regulatory codes, epigenetic information, etc.
Eric Anderson
March 12, 2012, 08:49 AM PDT
Regardless, I have to take issue with this formulation, because it concedes way too much.
Yes, I've been known to be generous to my opponents on the other side as they try to defend their indefensible position. The OOL researchers (not the internet DarwinDefenders) are incredibly gallant. They are fighting impossible odds. There was once an old saying in American culture: "don't hit a man when he's down. Help him up first." Though I disagree with OOL researchers, here is my tribute to their gallantry and determination: http://www.youtube.com/watch?v=YGzqbEeVWhs
scordova
March 12, 2012, 08:24 AM PDT
This is in response to the tripe posted on The Septic Zone- Natural selection can put Functional Information into the genome- First, natural selection is a result and becomes no more than a statistical artifact. Secondly biological fitness pertains to reproductive success which is an after-the-fact assessment. Third there is behaviour, something that can be changed much quicker to aid survival and adds nothing to the genome. Joe F sez:
The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone.
Unfortunately Joe F never provides a reference for that bit of tripe. I have never read any IDist say anything like that. Methinks Joe F made it up.
The relevant measure is fitness.
Umm, biological fitness is nonsense, Joe F - it is an after-the-fact assessment. But anyway, as I explained to Joe F, CSI pertains to origins. Unfortunately Joe F refused to grasp that fact. Also, natural selection has never been observed to do anything. So that would be another problem.
Joe
March 12, 2012, 06:59 AM PDT
Folks: The very constructive discussion continues. EA has put his finger on a very central challenge to the all-purpose appeal to the claimed or assumed wonderful powers of natural selection:
natural selection doesn't do anything to sample the search space. The search space has to be sampled by something else (chance or some kind of guided direction). Only when the search has successfully stumbled upon a function can natural selection attempt to preserve it. But of course natural selection isn't even relevant and doesn't have anything to preserve the function against until we have at least two different replicators in close proximity competing against each other for scant resources. Does this even make sense in an abiogenesis scenario?
It is worth pointing out that already we can see how loosely the term "natural selection" is being used, not in the context of differential reproductive success, but in the sense of access to niches of success. This issue has further been set in the context of abiogenesis, but it also applies to the issue of the origin of novel body plans, once we factor in implications of the genetic code. From the simplest independent and dependent living cells, we know that we are looking at about 1 million bits worth of genetic info for origin of life, codes for proteins, regulatory code, etc. That is 1,000 times as many bits as would credibly exhaust the blind search capacity of our observed cosmos. But for novel body plans, we are looking at maybe 10 million to 100 million bits. Each. Dozens of times over. And NS is really a culler-out of inferior varieties; it is not the engine of variation. Some form of chance process has to drive that search of a space of contingent possibilities, once we rule out intelligence as the evolutionary materialists do. The only hope is that such functional configurations must be commonplace, i.e., contiguous continents of function, not isolated islands. Indeed, that is the implication of the Darwin-style tree of life diagram, that by incremental variations we can connect microbes to man. What is the actual empirical evidence of such smoothly connected incremental variability? Nil. We know that codes are highly specific and tend to be breakable by injecting fairly small random variations. Similarly, we know that co-ordinated, functionally specific organised complexity tends to be exactingly specific, as anyone who has watched a key being duplicated can testify, or anyone who has had to match a car part. So, what is the empirically observed evidence that allows naturalistic evolutionary materialists to confidently posit that the world of life is an exception to this pattern?
Again, nil -- apart from question-begging a priorism along the lines of Lewontin et al. There is something rotten in the state of origins science in our time. GEM of TKI
kairosfocus
March 12, 2012, 02:34 AM PDT
Sal:
Now Darwinists keep arguing that the first life didn’t emerge all at once, but in pieces where selection worked. The problem with that: natural selection can work on things that aren’t replicating in the first place.
I presume you meant natural selection "can't" work . . . Regardless, I have to take issue with this formulation, because it concedes way too much. First, natural selection doesn't do anything to sample the search space. The search space has to be sampled by something else (chance or some kind of guided direction). Only when the search has successfully stumbled upon a function can natural selection attempt to preserve it. But of course natural selection isn't even relevant and doesn't have anything to preserve the function against until we have at least two different replicators in close proximity competing against each other for scant resources. Does this even make sense in an abiogenesis scenario? Second, there is no evidence that natural selection has the ability to do anything even remotely meaningful in terms of building complex specified information. As Behe and others have pointed out, what natural selection seems to be capable of is a few-bit changes when there are huge populations under extreme selective pressure. That is the 'edge of evolution.' So it doesn't matter if replication and natural selection in all their fantasized glory existed right from the get-go. There is no evidence -- wishful speculation only -- that we would get molecular machines, digital codes, complex specified information, the organisms we see around us.
Eric Anderson
March 11, 2012, 11:17 PM PDT
Sal, you and I are talking past each other, and probably so much so that there is no need in trying to fix it. If you want to know where I am coming from, you can probably glean as much by going here. Cheers
Upright BiPed
March 11, 2012, 11:02 PM PDT
jstanley01: Sal has given you the standard materialist position behind the multiverse theory as a salvation for abiogenesis, but I want to pursue your comment for a moment, because you are absolutely right that the multiverse is useless as an argument to make a material origin of life probable. There are at least three reasons: 1. There is no evidence for it. 2. The probabilities of abiogenesis are so unfavorable that you would essentially need a preposterous number of multiverses to even begin to swing the odds in your favor, meaning that you still have a probability problem. And that is even assuming the multiverses contained conditions amenable to life. The laws of physics and chemistry which permit life are extremely fine tuned, so the odds of getting a universe amenable to life are astronomically small, even assuming the multiverse idea. As a practical matter, what this effectively means is that you need an infinite number of universes to deal with the probabilities. Recurring to infinite resources and infinite time as an answer to a probability question is not an answer. It is just a materialist miracle story. 3. It doesn't make one bit of difference whether there are other universes. Even other universes just like ours. Even millions of universes just like ours. We are trying to answer the question: how did life arise and develop to its current state of diversity and complexity, given our universe (its age, structure, laws of chemistry and physics that we know, and so forth)? Here and now. In our universe. It has no impact on the probabilities in our universe and makes no difference what laws of physics and chemistry might exist in other hypothetical universes. In other words, the multiverse idea is simply irrelevant to the question on the table, which is: Given our universe, what is the most reasonable explanation for life?
Eric Anderson
March 11, 2012, 11:01 PM PDT
Upright, The way to do it is easy given the right circumstance. Take the example of coins. Granted they are designed, but can we detect another layer of organization in a configuration of coins? Let's say we have two rows of coins laid out on a table. Clearly the coins are designed, and so is the fact they are laid out in rows. If however we see one row have the apparently random pattern: H T T H H H T H T T H H H T T T H T ... we might not think much of it until we notice the other row has the exact same pattern H T T H H H T H T T H H H T T T H T ... The objects are coordinated with one another. The relationship is not material between the two rows (since the concept of heads and tails is an immaterial concept to describe physical coins). We then have cells that look mostly identical through the process of common descent (even creationists accept some common descent). Obviously the identical coordinated patterns are improbable if we were dealing with a random soup of biological molecules, but highly probable because of the machinery in the cell. But the fact we see coordination gets our attention. We can then calculate the probabilities associated with creating copy machines like the cell. It would follow along the lines of the way I calculated probabilities for the password protected system. In such a case we are estimating the probability of arriving at functional proteins for the system via random or assisted search (if selection is involved), much like the probability of a hacker compromising a password via random or assisted search. If the protein is critical to life, like say insulin, it is very reasonable to say that the search is effectively random and not assisted by selection, since without functioning insulin the population that needs it would be dead, and one cannot have natural selection on dead populations.
We can make conservative estimates of the protein forming by taking the nearest related protein and estimating the number of mutations needed to create function for the system. Calculating the probabilities for the origin of life problem would be easier since we estimate what it would take to get a protein from a primordial soup to make a DNA. Pick a protein that would be considered primitive to life and calculate the probability that it would be found via random search (just like ID proponents have done all along). The essay above only points out why such a calculation is a valid measure of specified complexity, because it actually understates the improbability of such a coordinated machine emerging in one step. Now Darwinists keep arguing that the first life didn't emerge all at once, but in pieces where selection worked. The problem with that: natural selection can't work on things that aren't replicating in the first place. Genetic algorithms can't solve passwords, and neither will they solve the structure of proteins that are sufficiently complex (functioning doesn't take place till all the essential parts are in place). Passwords are strings of characters, and proteins are strings of characters too (albeit with a different alphabet known as the amino acids of life). Searching for a protein string in a biotic soup is analogous to the search for a password at random.
scordova
March 11, 2012 at 05:49 PM PDT
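The coin-row intuition above can be made concrete. A minimal sketch (an editorial illustration, not part of the original comment) of the chance that two independently flipped rows of fair coins match position for position:

```python
from fractions import Fraction

def match_probability(n_coins):
    """Probability that two independently flipped rows of n fair
    coins show the exact same heads/tails sequence."""
    # Each position matches independently with probability 1/2.
    return Fraction(1, 2) ** n_coins

# The rows shown in the comment have 18 flips each:
p = match_probability(18)
print(p, float(p))  # 1/262144, roughly 4 in a million
```

Even at 18 coins the match is already rare enough to demand an explanation beyond chance; the point of the comment is that the explanation here is the shared copying machinery, whose own origin is then the thing to be priced.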
The question, of course, is how does one calculate the probability that two isolated physical objects will arise which demonstrate immaterial relationships, and that those two objects will be coordinated with one another? Answer: 0

Upright BiPed
March 11, 2012 at 05:00 PM PDT
..has to hurdle...

Upright BiPed
March 11, 2012 at 03:35 PM PDT
Hi Sal, The probability of abiogenesis has to hurdle the establishment of a formal system which observationally demonstrates an immaterial relationship between codon and resulting effect, as physically set by the aaRS, which has no material interaction with either the codon or the effect.

Upright BiPed
March 11, 2012 at 03:30 PM PDT
But why would x be constant? Wouldn't the "search space" (if I'm using the term correctly) denominator increase at the same rate as the number of universes expressed by the numerator? IOW, in one universe the odds would be 1/x, in two 1+1/x+x, in three 1+1+1/x+x+x.

x is presumed constant to reflect the assumption that each universe has the same probability of life emerging. Consider rolling dice. The probability of a roll landing 12 is 1 out of 36, and it is the same for every roll. Several rolls will increase your chances of landing 12 at least once. In the case of multiple universes, each universe is like a roll of the dice. Just as enough rolls make it likely you will see at least one 12, enough universes will allow for one universe where life arises. But to quote Einstein: "God doesn't play dice with nature."

scordova
March 11, 2012 at 03:11 PM PDT
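The dice analogy in the reply above is the standard "at least one success" formula, 1 - (1 - p)^n: each trial's probability p stays fixed, while the cumulative odds grow with n. A small sketch (an editorial illustration; the 25-roll count is arbitrary):

```python
def at_least_once(p, n):
    """Probability of at least one success in n independent trials,
    each with fixed success probability p."""
    return 1 - (1 - p) ** n

# Two dice landing 12: p = 1/36 on every single roll,
# no matter how many rolls have come before.
print(at_least_once(1 / 36, 1))   # one roll: ~0.028
print(at_least_once(1 / 36, 25))  # 25 rolls: better than even odds
```

This is exactly the sense in which more universes "help" on the multiverse picture: the per-universe odds 1/x never change, only the chance that some universe somewhere succeeds.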
IOW the odds would be the same for the whole as for any one... :D

jstanley01
March 11, 2012 at 01:34 PM PDT
As a layman daring to post on a UD thread like this, please forgive me, scordova, if this off-topic comment and question steers it in a direction where you would rather not go. But it seems to me as an observer sitting in "the peanut gallery," so to speak, that the last bastion which the materialists have constructed to flee to -- when confronted by these types of probability arguments from ID -- has increasingly become the Multiverse Theory.

Now realize, my math skills extend no further than algebra. But based mostly on what, to me, passes for common sense, I don't see how the existence of multiple non-interacting universes helps their case. Which, expressed in fractions, I would understand this way: in one universe the odds of life arising are 1/x, but 2 ups the odds in both universes to 1+1/x, and 3 ups them among the universes as a whole to 1+1+1/x. So by extending the number of universes on out, you eventually come upon a universe in which the long odds pay off and life arises. Like ours, for instance.

But why would x be constant? Wouldn't the "search space" (if I'm using the term correctly) denominator increase at the same rate as the number of universes expressed by the numerator? IOW, in one universe the odds would be 1/x, in two 1+1/x+x, in three 1+1+1/x+x+x. In which case, it looks to me like you can add all the universes you'd like up to infinity, but the odds for producing life randomly in any one of them would not change.

jstanley01
March 11, 2012 at 01:32 PM PDT
Sal, I like your house of cards example. That is very helpful imagery for explaining the issue to the lay person.

Eric Anderson
March 11, 2012 at 08:39 AM PDT
Folks: Great, constructive discussion. I would stir into the pot that, in a large enough space of possible configs, the coordination and organisation needed to achieve specific, complex function will be a very unrepresentative fraction. Blind, chance-plus-necessity sampling is, by overwhelming likelihood, going to pick up the BULK of the distribution -- non-functional -- rather than what is unrepresentative. And such a sampling-theory result does not require any exact estimation of probabilities. Not when the solar system's 10^57 atoms are looking at no more than 1 in 10^48 of the possibilities for 500 bits.

Then, add the ingredient that the self-replication we must account for -- no bait-and-switcheroo games, objectors -- is CODE based. Code is the ultimate key-lock game, and comes with issues over language, symbol systems, encoders, transmitters, receivers, decoders and storage media. Pretty well, once we are looking at digital code, the ONLY observed source is intelligence, and the notion that it could all fall out by happy chance would, in a sane world, have long since been laughed out of court.

In short, once DNA and code-based replication were on the table, the game should have been over. That it is not, and that there have been attempts to pretend that DNA code is not a code, shows just how deeply embedded materialist bewitchment is. Yes, folks, people have -- for months here at UD -- tried to argue that we should not believe that DNA is a code using prescriptive information. And of course, don't tell them that we are looking at step-by-step execution of co-ordinated actions to make key nanomachines to carry out the work of the cell. Reductio ad absurdum. As in, if you swallow an absurdity, you will often stoutly resist the patent truth.

Let's ask: on a common-sense basis, what best explains a digital microcontroller-based system that carries out complicated processes? Why should we think the living cell is any different, apart from ideologically imposed materialist a prioris? Bewitchment. But, sooner rather than later, it will be over, once people wake up from befuddlement.

GEM of TKI

kairosfocus
March 11, 2012 at 04:22 AM PDT
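The "1 in 10^48 of the possibilities for 500 bits" figure quoted above can be sanity-checked with a rough calculation. A sketch (an editorial illustration; the 10^102 total-samples budget is an assumption chosen to be consistent with the quoted bound, not a number stated in the comment):

```python
from math import log10

space = 2 ** 500        # configurations in a 500-bit space, ~3.3e150
samples = 10 ** 102     # assumed total sampling budget (illustrative)

# Share of the configuration space that could ever be examined.
fraction = samples / space
print(f"space ~ 10^{log10(space):.1f}")
print(f"sampled fraction ~ 10^{log10(fraction):.1f}")  # below 1 in 10^48
```

The point being illustrated is only relative scale: any remotely physical sampling budget examines a vanishing fraction of a 500-bit space, so the conclusion is insensitive to the exact budget assumed.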
As Gil said at 1, I think the purpose is the specification. Purpose as specification.

butifnot
March 10, 2012 at 10:31 PM PDT