(Adapted from a discussion at Evolution and Design and from material in Trevors and Abel’s peer-reviewed paper, Chance and necessity do not explain the origin of life, featured in Cell Biology International, 2004.)
The Explanatory Filter in ID literature outlines a textbook method for detecting design. If one finds a physical artifact, the artifact is inferred to be designed if the features in question are not explainable by naturalistic explanations, namely:
1. natural law, or
2. chance
(I will explain later why I define “naturalistic explanations” this way.)
However, two objections often arise:
A. How can we be sure we won’t make some discovery in the future that will invalidate the design inference?
B. How can we be sure we’ve eliminated all possible naturalistic causes, particularly since we have so few details of what happened so long ago when no one was around?
Answer: We can be sure if we are dealing with the right kind of design, a perfect architecture to communicate design! The right kind of design will negate objections raised by questions A and B.
I must admit that at first, A and B seemed impossible for finite humans like us to answer. I mean, after all, would we not have to be All-Knowing to answer such questions? However, there is a mathematical tool known as Proof by Contradiction which allows finite humans to make accurate statements about issues that deal with an infinitely large number of objects.
It is rumored that the first recorded application of Proof by Contradiction was so heretical to the Greeks that they executed the mathematician who first applied it successfully (see The Square Root of 2). Let us then use this heretical tool to allow us to answer A and B without knowing everything.
What then is an example of a perfect architecture which resists natural-law and chance explanations? Answer: self-replicating computer systems (Turing machines) and/or the first living organism. A peer-reviewed article on this very topic by Trevors and Abel in the journal Cell Biology International is available here: Chance and necessity do not explain the origin of life.
Rather than quote the entire article, let me give their explanation for why any natural law we are aware of, or any natural law we might possibly discover in the future, would not explain living organisms (the same is true of self-replicating computer systems, which living cells also happen to be):
Natural mechanisms are all highly self-ordering. Reams of data can be reduced to very simple compression algorithms called the laws of physics and chemistry. No natural mechanism of nature reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to overturning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (−log2 p). When the probability of natural law events approaches 1.0, the Shannon uncertainty content becomes miniscule (−log2 p = −log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life. Prescriptive information (instruction) can only be explained by algorithmic programming. Such DNA programming requires extraordinary bit measurements often extending into megabytes and even gigabytes. That kind of uncertainty reflects freedom from law-like constraints.
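The quoted arithmetic can be checked directly. Here is a minimal sketch in Python (the function name is mine, for illustration, not from the paper):

```python
import math

def self_information_bits(p):
    """Shannon self-information, -log2(p), of an event with probability p."""
    return 0.0 if p == 1 else -math.log2(p)

# A deterministic, law-like outcome (probability approaching 1.0) carries zero bits:
print(self_information_bits(1.0))        # 0.0
# A highly improbable outcome carries many bits of uncertainty:
print(self_information_bits(1 / 2**20))  # 20.0
```

This is just the formula from the quote made executable: as p approaches 1.0, the Shannon uncertainty of the outcome collapses to zero.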
The above is an example of using Proof by Contradiction. It is in no way an “argument from ignorance” (to use a tired old phrase of the anti-IDists).
The rest of the paper gives an explanation of why chance cannot be a factor, as it relates to prebiotic chemistry and information science.
It is not reasonable to expect hundreds to thousands of random sequence polymers to all cooperatively self-organize into an amazingly efficient holistic metabolic network. The spontaneous generation of long sequences of DNA out of sequence space (Ω) does have the potential to include the same sequences as genetic information. But there is no reason to suspect that any instructive biopolymer would isolate itself out of Ω and present itself at the right place and time.
…
Even if all the right primary structures (digital messages) mysteriously emerged at the same time from Ω, “a cell is not a bag of enzymes”. And, as we have pointed out several times, there would be no operating system to read these messages. Without selection of functional base sequencing at the covalent level, no biopolymer would be expected to meet the needs of an organizing metabolic network. There is no prescriptive information in random sequence nucleic acid. Even if there were, unless a system for interpreting and translating those messages existed, the digital sequence would be unintelligible at the receiver and destination. The letters of any alphabet used in words have no prescriptive function unless the destination reading those words first knows the language convention.
The question then arises: how about some combination of chance and necessity, a mechanism like natural selection? Well, in addition to the fact that one may not have a viable reproducing organism to even begin to let natural selection do its work, the Displacement Theorem shows why such a mechanism is an even more remote explanation than chance alone. Thus, combinations of natural law and chance are also rejected as explanations.
We thus have, in the first life, something that by definition resists naturalistic origins. It is not out of ignorance that this conclusion is reached; it is a matter of a mathematical Proof by Contradiction. If one assumes naturalistic origins for life, one eventually runs into a logical impossibility, which demonstrates that the assumption of naturalistic origins was incorrect to begin with.
Lest I be accused of equivocation on the word “naturalistic”, let me point out that if by naturalistic one means no involvement by the supernatural, that results in either a meaningless definition (beautifully described by Mark Perakh on the supernatural and science) or a metaphysical definition (i.e., naturalistic = “anything except ID or God”). In either case, such a definition of “naturalistic” is scientifically meaningless.
In contrast, the definition for naturalistic that I gave above is consistent with the concept of naturalistic in ID literature, and further, such a definition is scientifically meaningful versus a metaphysical definition (naturalistic = “anything except ID or God”).
There is perhaps the hypothetical chance that we have a non-natural, but also non-ID, explanation for the first living organism. Such an explanation, given that it does not proceed from natural law or chance, would not be in principle testable; thus it too would fall outside materialist definitions of science. But this is an intolerable situation for materialist “science”, because in that case the explanation for life would still fall outside their self-contradictory definition of science, and thus life, at least in their conception, would of necessity have an unscientific cause!
One might argue the possibility of a non-natural, non-ID cause negates the ID inference as well. But in such case I appeal to other factors:
1. We have examples of agents, namely humans, who can make comparable artifacts; thus the inference is at least consistent with an intelligence that is willing to behave in a human-like manner.
2. If all else fails, we can point out the laws of physics strongly suggest the existence of an Ultimate Intelligence.
Thus really, a non-ID cause becomes less and less plausible.
I hope this essay has helped illustrate why life is a perfect architecture to communicate design!
Salvador
Salvador,
Thank you for the excellent post. There is one thing I am curious about though. You set the ID problem up by asking two questions:
A. How can we be sure we won’t make some discovery in the future that will invalidate the design inference?
B. How can we be sure we’ve eliminated all possible naturalistic causes, particularly since we have so few details of what happened so long ago when no one was around?
First, let me suggest that these two questions are really one question that could be phrased as: “How can we be sure there will not be a future discovery of a naturalistic cause of which we are presently unaware that accounts for the data?”
I am curious about why you use the word “sure.” Must we be able to assert that our scientific theory is “sure” in the absolute sense of the word? I think not. Indeed, I would suggest that Popper was correct when he said that all scientific conclusions are contingent. Popper wrote: “. . . there can be no ultimate statements in science: there can be no statements in science which cannot be tested, and therefore none which cannot in principle be refuted, by falsifying some of the conclusions which can be deduced from them.” Karl Popper, The Logic of Scientific Discovery (New York: Routledge Classics, 1959; reprint of first English edition, 2002), 25.
He also wrote: “Science does not rest on solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.” Karl Popper, The Logic of Scientific Discovery (New York: Routledge Classics, 1959; reprint of first English edition, 2002), 94.
I don’t always agree with Popper, but this seems right to me. In asserting a scientific conclusion there is no need to negate all possible objections, present or future. This is obviously impossible. Our goal is to posit the best explanation of the data, recognizing that just as Newton gave way to Einstein in many respects, future discoveries may overtake our conclusion.
So I would say that the answer to your question must always be: “We cannot be sure.” The design inference is currently the best explanation of the data given our present knowledge. But that conclusion, like all scientific conclusions, is contingent. It is not – indeed cannot be – impregnable to future discoveries.
I agree with Barry on this point. Surety is not something that science encompasses. In all fairness, however, Sal was talking about mathematical proof, and proofs are something that math does encompass. The problem is I don’t believe that ID can be proven like it can be proven that the angles in a triangle always add up to 180 degrees. The unprovable point lies in the probabilistic resources. One can never be certain that all probabilistic resources are known and accounted for. But fortunately in science, as in a courtroom, the metric in question is reasonable doubt. -ds
The paper has several holes in the “proof”:
Where do they prove this? It’s stated in the paper, but with no citation to other work, or any demonstration that it is the case.
Again, an assertion, but with nothing to back it up. Where is the theorem that shows how much “Shannon uncertainty” is necessary?
Bob
(My response is below, several comments down — Sal)
Thank you for your suggestions Barry, I will think upon them….
Regarding certainty, regarding being “sure”: I was not asserting the certainty of ID, but rather that the architecture of life prevents a naturalistic/materialistic (an essentially self-contradictory) framework of science from explaining it. It is not a matter of metaphysics; it is a matter of a naturalistic framing of science being self-contradictory when it attempts to say the origin of life has a naturalistic/materialistic explanation.
We can be “sure”, in other words, that attempts at naturalistic/materialistic explanations for life are logically incoherent, in much the same way as saying the square root of 2 has a rational description.
The architecture of life is thus a perfect candidate to resist a naturalistic/materialistic explanation. Whether ID is true, is a separate question, but I think it is the most reasonable explanation.
Salvador
Sal, could you PM me over at ARN or someplace with your email address? I’d like to get a copy of this from you if you have it.
The most interesting item, to me, from the abstract:
This is one reason why I like to concentrate on the origin of metabolism.
The only sure way to solidify the biological design inference is to find the designer. As long as ignorance of its identity persists, doubts will persist, and there will be no sure way to dispel them short of actual designER detection. Mere design detection will not be enough for William to prevail against his foes.
Best regards,
apollo230
Mung, I updated the above links.
Try this link:
html version
or
pdf version
Great to see you!
Sal
It would be worth pointing out what would happen if one takes non-ID explanations to the extreme.
For example, one might conclude that EVERY hallmark of intelligent behavior or life is driven by blind, purposeless forces, and that the only thing provable is one’s own consciousness. One can insist, therefore, that there is no proof that people are conscious beings, but that they are perhaps merely automatons passing the Turing test.
We call such a view solipsistic. That is, the only thing such a person can be sure of is his own consciousness and intelligence. One would have no means of formally proving that consciousness or intelligence exists outside of one’s own experience; it is merely an assumption. But such a philosophical view would be inconsistent with the way one conducts oneself in every other facet of one’s life.
So, yes, we may hypothetically suggest a non-ID explanation for life, but one has to be careful to realize such arguments come very close to solipsistic philosophy. That is to say, one will always have the capacity to deny intelligent agency, no matter how evident, if one so chooses.
Salvador
“I was not asserting the certainty of ID, but rather that the architecture of life prevents a naturalistic/materialistic (an essentially self-contradictory) framework of science from explaining it.”
Certainly I agree with you. My point is a very limited one. I am saying only that we raise the bar too high – far higher than any reasonable epistemology requires – when we allow our opponents to goad us into attempting to establish that any scientific conclusion is “sure.”
Agreed, and well said.
Salvador
I have a question about the design inference.
Does it really rule out chance and necessity, or does it only rule out chance and necessity to the exclusion of intelligence?
Why does ID rule out “natural” causes? Sal, I think you are conflating natural law and natural causes, and reasoning that if you have ruled out law (necessity) and chance then you have ruled out natural causes. I don’t think it follows. Have I misunderstood you?
That is a good question.
Answer: it rules out the possibility of chance and natural law being the ONLY explanations.
A good illustration: let us look at two boxes with 8 dice each (dice we already know are designed, and the same goes for the boxes). Each die is colored. Here are the patterns:
Box 1:
3 orange
2 blue
4 red
6 green
1 white
3 black
5 yellow
1 pink
Box 2:
3 orange
2 blue
4 red
6 green
1 white
3 black
5 yellow
1 pink
Does the pattern in the boxes suggest intelligence? Without going into detail, the answer is yes. Such a pattern may have arisen with the designer partially using chance. That is, Box 1’s pattern may have been arrived at with no premeditation, but through a shaking action designed to induce chance causation in Box 1’s pattern. He could then simply use his intelligence to cause Box 2 to match the pattern of Box 1, and thus design is evidenced with respect to the numbers on the faces of the dice.
Chance and natural law are not reasonably inferred to be the ONLY agencies involved in the pattern observed, even though chance, and obviously some natural law, was involved in the fabrication of the pattern in evidence.
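A back-of-the-envelope sketch of the odds involved (the model is illustrative, not from the essay: each die is treated as a fair, independent roll of 6 faces, and the colours, being fixed properties of the dice, are ignored):

```python
# Probability that Box 2 independently reproduces Box 1's 8-die pattern
# by blind shaking alone, with each die an independent fair roll.
faces = 6
dice = 8
p_match = (1 / faces) ** dice
print(p_match)  # ~5.95e-07, about 1 chance in 1.7 million
```

Even for this toy case the chance-alone hypothesis is already a long shot; the essay's point is that an intelligent copying action explains the match far more economically.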
Well, my choice of words may have left room for unclarity. :=)
First of all, from a scientific standpoint, it is not universally agreed what the meaning of natural cause is. I tried to use natural cause to mean natural law, chance, or some combination thereof. I tried to acknowledge that the definition I used in this essay may not be the same as other people’s definitions of natural cause.
If one means
“natural cause” = “anything but God or supernatural”
then one is faced with the problem Mark Perakh pointed out. It ends up being a meaningless definition, and really, in my mind simply a metaphysical statement. Such a conception may well have roots in theology. I’m not saying it’s bad, but like the word “evolution” one has to be careful of the usage.
I think your question is good, and I may try to revise my presentation in the future in light of your comments. However, I felt if I focused too much on the rigor of definitions, rigor in my essay would turn to rigor mortis.
That said, at least in terms of the Explanatory Filter, life is an ideal candidate to pass through its nodes for natural law and chance.
Whether one thinks this is a good description of eliminating naturalistic causes according to one’s definition of naturalistic is a good question, but at least with respect to the Explanatory Filter the nodes are passed according to the formalisms Dembski laid out.
When I looked at the bacterial flagellum, though I would personally view it as designed, it was hard to actually construct a formal argument that would pass the formalisms of the Explanatory Filter as well as Trevors and Abel were able to do for the first life.
Salvador
Thanks Salvador for an excellent perspective. Here is the citation and link you were trying to make at creationism.org.pl :
J.T. Trevors & D.L. Abel, Chance and necessity do not explain the origin of life, Cell Biology International 28 (2004), 729–739.
Trevors & Abel (2004) Chance and necessity do not explain the origin of life
This is a good question, but the answer is basic information science (something which seems lost on most origin-of-life (OOL) researchers).
Here are some fundamentals:
Information is made possible through uncertainty, the definition of information being “that which reduces uncertainty”.
Uncertainty is not possible through deterministic, law-like processes, by definition.
It is that simple.
There is no information without the capacity for uncertainty. It is a fundamental principle. I think your criticism is misplaced. For the reader’s benefit see Principia Cybernetica Web, Definition of Information.
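These fundamentals can be made concrete with the standard Shannon entropy formula, H = -Σ p·log2(p). A minimal sketch (the function name and the two distributions are illustrative, not from any cited source):

```python
import math

def shannon_entropy_bits(probs):
    """H = -sum(p * log2 p) in bits; outcomes with p == 0 contribute nothing."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

law_like = [1.0, 0.0, 0.0, 0.0]  # one outcome forced: no uncertainty to reduce
uniform  = [0.25] * 4            # maximal uncertainty over four outcomes

print(shannon_entropy_bits(law_like))  # 0.0 bits
print(shannon_entropy_bits(uniform))   # 2.0 bits
```

A source whose outcome is fixed in advance has zero entropy: there is no uncertainty for a message to reduce, hence no information to convey.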
I hope then it is somewhat apparent why IDers have such disdain for certain industries, particularly those who assert that naturalistic evolutionary scenarios occurred. Such scenarios go against fundamental principles that are readily apparent in the information sciences and engineering.
ID engineers protest what goes on in these industries, but then we get told, “you guys aren’t biologists.”
What is happening here is reminiscent of what happened when we realized the universe may have had a beginning. This realization was a basic consequence of a basic law, namely the 2nd law of thermodynamics. This law implied the universe had a beginning, since stars could not have been burning forever. But this simple thesis was resisted because of its implications, not because it was not readily apparent!
Same with what this paper is offering. OOL is an emperor with no clothes. The fact that this paper passed peer review (and Albert Voie’s paper shortly thereafter) is indicative that some have had enough.
I think your comment highlights the fact information engineers and biologists haven’t had a lot of communication with each other.
Salvador
Trevors and Abel are backed up by 150 years of empirical data since Pasteur disproved spontaneous generation. Abiogenesis goes against everything we know empirically and theoretically.
My favorite illustration:
Salvador, upon reviewing some of the other posts today, I find that DaveScott made much the same point that I did in an earlier thread. Since we developed our thoughts independently, call it “convergent Popperism.” 😉
This is only true if the inputs are known: if they’re not, then you have uncertainty in the output, and hence the probability can be much less than 1.
I guess my main complaint is the assertion that OOL was deterministic: I have no idea why this has to be so. We know about emergent behaviour (i.e. patterns being formed from stochasticity), so conceptually there is no reason why determinism is needed. Rather, both sides have to try to demonstrate that OOL is possible/is not possible through emergent behaviour.
Bob
Bob OH,
Thank you for your comment.
But it is misplaced to say there was a hole in the proof, because Trevors and Abel were looking at natural law in isolation in the paragraph I quoted. That view of natural law was prevalent in the 1960s with the idea of Kenyon’s Biochemical Predestination. Kenyon abandoned it and became an IDer, and it only became evident later that Biochemical Predestination was dead-on-arrival (DOA) for the reasons Trevors highlighted….
Thus natural law in isolation is removed as a cause.
Secondly chance in isolation was removed as a cause.
What the paper did not cover was the probability some combination of chance and natural law could have been the cause.
Dembski’s displacement theorem, which I linked to above, addresses that issue in a clever way. Rather than posing the probability directly, it simply demonstrates that combinations of natural law and chance are on average no more likely than chance explanations alone, and in fact in general get exponentially more difficult as explanations of law and chance are displaced to higher-level explanations of law and chance.
The displacement theorem captures formally our intuition of designed artifacts: “the manufacturing process is usually more complex than the artifact itself.” A computer factory and all the supportive factories are more complex than the computer itself, etc.
Emergence is the last naturalistic hope, imho. There have been gallant attempts at this by professors at my school (Morowitz and Hazen).
However, I think this too will fail, for the very reason that if an emergent phenomenon requires any amount of complexity and specificity, it begins to demand exactly the thing a naturalistic origins scenario tries to avoid demanding, namely complexity and specificity in the first place!
We will see.
Salvador
Hi everyone!
Listen, I know I was one of the first people to spout off about more science here, but um, would it be too much if we could do it in English?
I know you guys are all miles ahead of me, but my science teachers were never as interesting as my English teachers. (esp. Ms. O’Bannon, who also had really great hair!) In fact, let’s be nice and just say my science teachers were reeeeaaaalllllyyyy boring. Copying notes from the board written by a million-year-old bald guy just didn’t do it for me. I got A’s, but I don’t really know how.
Could someone summarize this whole thing in little itsy bitsy steps for me?
Thanks,
JanieBelle
ooops. Sorry about that. Forgot the magic word — Please?
🙂
JanieBelle
Hi JanieBelle,
I appreciate the feedback.
Okay, here is itsy bitsy step one 🙂
The main issue is this: design proponents have suggested life is designed, and further, some (like myself) assert that it is unlikely ANY future scientific discovery will find an answer in terms of a purely naturalistic/materialistic origin of life.
But how can we be so certain? How can we make such sweeping generalities, since we humans are not All-Knowing?
The essay was an attempt to demonstrate that it is possible to make that assertion without being All-Knowing, and that is made possible because of the way life is architected! Were life architected in another way, we would probably not be able to make that bold assertion.
Let me know if that helps, and then we can go to the next itsy bitsy step.
Sal
Thank you so much, Sal. I am really thankful that you would take the time to “hold my hand” (that’s what my old cheerleading coach used to say when she had to walk one of us through something step-by-step) on this.
Ok, itsy bitsy step one. This is what you’re going to demonstrate. Got it. (Is “architected” a real word? 🙂 )
JanieBelle
oh, I have to go out for a while, but I’ll be back in a little bit.
JanieBelle
If I may give the readers some perspective, Kepler once believed there were people on the moon. One of his reasons for doing so was the nicely circular craters on the moon suggested design to him….thus he made a design inference regarding moon craters.
It turned out, as we learned more about our universe, that Kepler’s original design inference was overturned and that a simple naturalistic explanation could account for the shape of moon craters (namely, meteorite impacts).
Therefore, of major concern to IDers is the possibility that our design inference could be overturned by some later discovery, and IDers would be haunted by the possibility that their inference might only be grounded in ignorance rather than knowledge. Trevors and Abel’s paper went a long way toward addressing that problem.
Salvador
Can you be more clear about this? What precisely do you mean? The first life was designed? All life is designed? All aspects of life are designed?
Also, why are you starting out where you want to get to? I.e., the conclusion that life is designed? Aren’t you worried about circular reasoning or affirming-the-consequent fallacies?
ID is supposed to be an “inference.” One does not make inferences by deductive reasoning. ID will never get anywhere if it starts out where it wants to end up.
Here’s how I would state it:
Is there a reason to believe that [X] exhibits characteristics which would lead us to infer that if the causal history of X were known it could be traced to intelligent agency?
There is reason to believe that X exhibits the following characteristics …
In all other cases exhibiting these characteristics and where the causal history is known it can be traced to intelligent agency.
We can reasonably infer that if the causal history of X were known, it would have in its history intelligent agency.
Can anyone improve on this, or should it be trashed, hehe?
In any event, the point is, we need to identify the characteristics and then say why they lead to a design inference. Then we can test the hypothesis by finding analogues and seeing if they trace to intelligent agency. That would be “doing science” I suppose.
Hypothesis, test, revise …
Ok, maybe I should point out exactly where I’m getting stuck, so we’re all on the same page.
I like analogies and hypotheticals, silly simple ones. They work for me.
So, here goes JanieBelle, as she tiptoes through the darkest jungle.
Suddenly, she comes to a giant animal called a giraffelantopotomus.
“Well,” says she. “Hello there Mrs. Giraffelantopotomus! I didn’t think I’d ever see you in my life. I see that’s a very nice big toe you have there. It’s very fancy. It seems rather complex, as well. How ever did you come by such a toe?”
Now, perhaps Mrs. Giraffelantopotomus evolved such a toe.
Perhaps it was the product of design.
How do we know?
If I understand what Sal is saying in this article (possibly not, big surprise), there is a way by which we can determine that there is no possible way that this big toe evolved. Even if we don’t know every possible way that nature works, we can still determine that there is no possible way for this fancy, complex, giraffelantopotomus big toe to evolve.
Is that it? If it is, then where I’m stuck is in understanding how we go about that.
If not, well I guess we’d better take another crack at itsy bitsy step 1.
Thank you again,
JanieBelle
We can never prove there is no other possible way. We can’t rule out things we don’t know about, and we can’t know everything. Fortunately for us, science doesn’t work by proving things. Science is about the best explanation. Explanations are always tentative and subject to revision or rejection upon newly discovered contrary data. Some explanations are just more tentative than others. If you read the sidebar “ID Defined” note where in the first paragraph it says “best explained”. So ID’s task is twofold. First, show where the current “best explanation” is flawed, and second, show why ID is a better explanation. But the heck of it is, even if ID isn’t a better explanation, the current explanation isn’t strong enough that it deserves to be taught as unquestionable fact in the absence of criticism or alternative explanations. That’s why the rallying cry is “teach the controversy”. We propose that certain aspects of neo-Darwinian evolution are controversial and people should be made aware of what the controversy is all about. The most controversial bit is the supposed ability of random mutation and natural selection to create novel cell types (particularly the first cell), tissue types, organs, and (to a lesser extent) body plans. None of these have ever been observed in nature or recreated in a lab. -ds
P.S. Snide comments about my example are grounds for a kick in the shin.
🙂
JanieBelle
You are close, and I’m glad you are asking these questions as it will help me in my future presentations of the subject matter.
There are claims in the essay that are “airtight” (mathematically speaking) and then there are claims that are not-quite airtight but would be classified as reasonable.
What are the “airtight claims” about the first life (not even necessarily having a toe)? They are:
I chose my words very carefully this time, and there are subtleties about them which we can discuss later. The main thing is that the above 3 considerations prevent a scientist who disbelieves ID from ever being able to scientifically disprove ID. A scientist will never be able to demonstrate natural law and chance as the source of the first life.
Let me now state the reasonable but not quite air-tight claim in my essay:
The claim is not formally air-tight as the 3 statements above, but only reasonable.
Salvador
Ok, so we’re talking about first life, not later pieces of it. I’m with you. And we’re going to use math to rule out natural law, chance, and any combination thereof. gotcha.
Now when we say “first life” are we talking about virus and bacteria “first life” or are we talking about giraffelantopotomus “first life”?
JanieBelle
This is a very good question!
The answer is “whatever form the first life took on”. But how do we describe the first life? Formally, it must have the following capabilities:
1. its cells must replicate through a computational process
2. its cells have a fully functional computer (Turing machine)
The first life can surely have had more features than merely #1 and #2, but those are the bare minimum characteristics to declare it alive. (In formal terms, #1 and #2 are necessary but not sufficient conditions for life.)
Bacteria and the giraffelantopotomus (assuming it’s a mammal) have those characteristics in their cells. A virus would not qualify.
Exactly what the first life (or lives) looked like is beyond the scope of Trevors and Abel’s paper, but it must have had the 2 characteristics I described.
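The notion of replication “through a computational process” can be loosely illustrated by a quine, a program whose output is an exact copy of its own source code. This is only a toy analogy for replication driven by a stored description, not a model of a cell:

```python
# A minimal self-replicating program (a "quine"): running it prints
# an exact copy of its own two-line source. The trick is that the data
# (the string) doubles as a description of the program that prints it.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The parallel being drawn in the essay is that a cell likewise carries a stored description (the genome) plus machinery that reads it, and both are needed at once.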
Salvador
Oh, good. You just answered my next question. 🙂
Ok, I’m with you so far.
JanieBelle
Moving on, we have things called natural laws. The most well-known example of a natural law is the law of gravity. There are also other natural laws, like the laws of magnetism and electricity. Those are perhaps the most familiar in everyday life. There are other natural laws in science, such as relativity and quantum mechanics, and a few others I probably left out….
We can explain how a ball thrown up into the air will come back down because of gravity. We can use the laws of electricity to explain why, when a light switch is off, there is no light, and when it’s on, there is light. Simple enough.
Can these laws then explain how life came about? After all, they explain so many other things in the world. The answer is “no”. What if scientists discover more laws; can those laws explain life? The answer is still “no”.
How can one actually prove this amazing claim? Well….that’s where it gets pretty hard, and that’s where all the discussion of Shannon uncertainty and proof by contradiction come in.
But to give my best approximation of how this amazing claim is justified: imagine that you have a piece of white paper and only white paint to paint with. Could you really paint any pictures? (Ideally, assume the person looking at your work can’t actually see the brush strokes, so no matter what you paint, it looks like a blank piece of paper.) How about a black piece of paper and only black paint? Or a blue piece of paper and only blue paint? (One can easily demonstrate this with a computer paint program.)
When one has the same ink color as the paper color, one has no Shannon uncertainty; the outcome will always be the same! But when one has black ink and white paper, or better yet many colors of ink on white paper, one has the capacity to draw meaningful pictures. That is because having many colors on white paper allows a high degree of Shannon uncertainty.
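To make the ink-and-paper analogy concrete, here is a minimal sketch (the color distributions are my own illustrative choices, not from the paper) of how Shannon uncertainty vanishes when ink and paper share a color:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# White ink on white paper: every "stroke" is the same color, so the
# outcome is certain and there is zero Shannon uncertainty.
print(shannon_entropy([1.0]))        # 0.0 bits

# Black ink on white paper: each spot is black or white with equal
# probability -- one full bit of uncertainty per spot.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit

# Eight equally likely colors on white paper: three bits per spot.
print(shannon_entropy([1/8] * 8))    # 3.0 bits
```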
Using natural laws alone to explain the origin of life is like trying to paint a picture using white ink on white paper. It simply cannot work.
One cannot completely explain the experience of seeing in terms of the experience of hearing. Attempting to do so makes no sense; it is a nonsensical quest. So is the search for square circles…
In like manner, trying to explain life in terms of natural law is also a nonsensical quest. However, actually demonstrating this mathematically is not so easy. That’s what Trevors and Abel did.
Salvador
Re: Mung @24 (“Hypothesis, test, revise…”):
(1) Random variation & natural selection exist in nature.
(2) RV+NS cannot generate CSI.
(3) It is impossible to show that no other known cause is competent to produce CSI: intelligence can.
(1) Intelligence exists in nature. 🙂
(2) Intelligence can produce complex specified information. 🙂
(3) There are no other known causes that are competent to produce CSI. 🙂
(Sorry to interrupt the tutoring.)
I disagree: I don’t see how you can prove this. Certainly not with Shannon information: there can be quite enough stochasticity in a system as large as the earth. And the Displacement Theorem isn’t relevant, unless it’s to prove that intelligent causes don’t work. The beauty of evolution by natural selection is that you don’t need to search for the fitness function: it’s part of the physical system.
Where?
Bob
Some words from Wm. Dembski:
That is why it is called a design inference. And it is also why further investigation is usually carried out- research that can either confirm or refute that inference.
Note to Bob: when discussing ID, Shannon’s information is useless because it does not care about meaning or content; usefulness and value are also irrelevant.
Also, talking about “natural” can be misleading, as both intelligence and design are natural, i.e. they exist in nature…
Trevors was not referring to stochasticity, but to deterministic regularities. Natural law in this case refers to deterministic laws. One may object that this was not sufficiently clear in my essay, and that can easily be amended: law, in this sense, is deterministic.
Right there in the paragraph I quoted from the paper. The proof is not that hard (compared to other proofs, like say Fermat’s theorem), it’s perhaps the difficulty of accepting it.
But elaborating what it actually says might be helpful.
Recall that a deterministic natural law is stated in terms of variables with unspecified boundary conditions, such as Newton’s second law, F = ma. There are other examples.
If one has 1 iron coin and lets it represent a bit, then with unspecified boundary conditions (that is, without a description of the magnetic and gravitational fields acting on it, or of the initial, final, and intervening conditions such as initial position, velocity, atmospheric properties, etc.), does one have sufficient information to describe the outcome? Answer: absolutely not!
Hence natural law (deterministic law) alone is not sufficient to give an account of even 1 bit of information in one iron coin, and by extension the same is true of any other object. Trevors and Abel demonstrate that appeals to any form of purely natural law in the absence of boundary conditions are a category error, much like looking for square circles.
Salvador
Indeed, I could not agree more, I am the same way. I like to start out flipping coins, lol. Simple binary for me.
But let’s keep in mind that ID is an inference to the best explanation and not an argument from analogy. DS brings up some good points that I should have incorporated into my own presentation. All I gave was a statement about a reasonable inference, when I should have compared it with other explanations so that we have an inference to the best explanation.
Design is a better explanation than regularity or law, because…
Design is a better explanation than chance, because…
Design is a better explanation than some interaction of chance and law, because…
Francis Bacon in his Novum Organum (one of the foundations of modern science; see: The New Organon) was the first to point out that science can’t “prove” anything (in the strict formal/logical sense), showing that inductive reasoning is fundamentally limited to erecting tentative (albeit useful) generalizations that are always subject to revision and/or replacement.
This situation hasn’t changed since Bacon’s time, as Karl Popper (in The Logic of Scientific Discovery and other works) exhaustively pointed out: positive evidence for a hypothesis in no way constitutes a formal/logical “proof” of that hypothesis. However, negative evidence against a hypothesis does indeed constitute formal/logical “proof” against it, requiring that it be modified or replaced. This is why science is always changing, unlike purely formal/logical disciplines like geometry (non-Euclidean geometry notwithstanding, as it is based on an alternative starting axiom concerning parallel lines, axioms being “unprovable” by definition).
That this should be the case is simply the result of a fundamental (and irreducible) characteristic of inductive reasoning: unlike deductive reasoning, it cannot possibly “prove” any hypothesis in an absolute sense.
I have recently posted much more extensively on this subject at The Evolution List (see “Identity, Analogy, and Logical Argument in Science”), and recommend anyone interested in this topic to read Bacon, Popper, Kuhn, Feyerabend, and Lakatos (just Google the names, along with “science”) on the logical impossibility of “proving” scientific hypotheses.
Indeed, it is precisely because scientific reasoning cannot “prove” anything that ID theory is NOT science. Both Michael Behe’s concept of “irreducible complexity” and William Dembski’s concept of “complex specified information” are based on the logic of elimination (aka “logical exclusion” a la the “explanatory filter”). That is, they depend on being certain that one has eliminated natural causes for the origin of complex biological objects and processes, thereby logically requiring an alternative hypothesis (i.e. that such objects and processes must have been “intelligently designed”).
But, if standard scientific inference using induction cannot possibly “prove” anything, then the logical elimination of natural causes is quite literally excluded as a logical operation. In other words, just because one cannot provide a naturalistic explanation for the origin of something today is literally no guarantee that such information cannot eventually be discovered and applied in a naturalistic explanation. Therefore, applying the ID concepts of IC and CSI should only be done as a last resort (once all possible naturalistic explanations have been tested and invalidated), as they depend fundamentally on the kind of comprehensive logical elimination that inductive reasoning absolutely prohibits.
Bob OH raised good questions, and that is why I am taking time to try to respond. The matter is one of clarifying the issues and terminology.
Information is defined as the reduction of uncertainty. For example, let a coin being heads or tails represent 1 bit.
Take any natural deterministic law (e.g. the approximation known as Newton’s second law, F = ma).
The variables in the statement of the law have UNSPECIFIED values. Can such a law in and of itself, or any combination of similar laws with UNSPECIFIED variables (boundary conditions), reduce the uncertainty of whether the coin is heads or tails at some point in time? Absolutely not. The laws are therefore shown to be incapable of reducing uncertainty in the desired dimension (such as heads or tails), and thus incapable of creating the very information we are trying to account for.
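The point can be illustrated with a toy sketch (my own construction, not from Trevors and Abel): treat a deterministic law as a fixed function, and note that without a specified boundary condition it leaves the coin’s one bit entirely open:

```python
# Toy illustration: a deterministic "law" is a fixed rule mapping an
# initial state (the boundary condition) to an outcome. The rule here
# is purely illustrative: heads iff the initial spin count is even.
def law(initial_spins):
    return "heads" if initial_spins % 2 == 0 else "tails"

# With a SPECIFIED boundary condition the outcome is fully determined:
print(law(4))   # heads

# With the boundary condition left UNSPECIFIED, the law admits every
# outcome -- by itself it reduces no uncertainty about heads vs tails.
possible_outcomes = {law(s) for s in range(1000)}
print(sorted(possible_outcomes))   # ['heads', 'tails']
```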
Salvador
Allen,
Thank you for offering your scholarship on these issues. I am not a philosopher of science, but it seems to me that statements which can be demonstrated to be logically incoherent or contradictory (such as saying there exists a square circle, or that the square root of 2 is rational) should not be incorporated into empirical or theoretical science.
Whether ID can be defined as science is a good issue to discuss, but it is largely a question I do not delve into. However, I think questions of logical coherency in existing scientific theories are fair game.
I would welcome your thoughts.
Salvador
Sergeant Springer–“We can never prove there is no other possible way. We can’t rule out things we don’t know about and we can’t know everything. Fortunately for us science doesn’t work by proving things. Science is about the best explanation. Explanations are always tentative and subject to revision or rejection upon newly discovered contrary data.”
So true. Only you haven’t gone far enough. I have lost track of how many times I have read something a scientist wrote that basically says: No amount of evidence, no matter how massive, can ever ‘prove’ a theory. But it only takes a single piece of evidence, a single counter example, no matter how small, to disprove one. Completely. Totally. Unequivocally. ID provides that evidence and certainly not in small quantities.
Yours,
D.Grey
“A computer factory and all the supportive factories are more complex than the computer itself, etc.”
This is only necessarily true if you take into account the software that runs the factory.
I suspect DNA appears deceptively simple to a lot of biologists because they are used to thinking about it in terms of strands of nucleotides. They may not say it, but they may intuit that since it’s just a long strand of nucleotides, unguided nature should “easily” be able to arrange and rearrange it to produce the biological complexity we observe, given “enough time.” However, engineers and information scientists know better. Software is king here, and that’s where the devil (or God) is.
This sort of thread is what I like to see the most on ID forums.
From Trevors and Abel:
From post 35:
In that case, p=1, so the probability does not approach 1, it is 1. The difference may be subtle, but it tells us that Trevors and Abel are referring to stochasticity, somewhere.
There’s no proof: it’s all just assertion. *ahem* Just like my last sentence.
OK, the main assertion is this:
And you write:
That is wrong, and why it’s wrong explains the problem with Trevors and Abel. If we do not know the boundary conditions, then we can give them a prior distribution, and from that calculate the probability of the coin being heads or tails. The uncertainty enters through the uncertainty in the boundary conditions (i.e. the inputs), rather than through the deterministic mechanism. If one believes that the universe is deterministic (as Trevors and Abel appear to), then I think you have to apply this reasoning to any statement about uncertainty: in other words, if you want to use Shannon information, then you have to assume that there is something non-deterministic in the system, and it’s either the process itself or the inputs.
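Bob’s argument here can be sketched in code (a toy model; the “physics” is a caricature of my own making): keep the mechanism strictly deterministic, put a prior only on the boundary condition, and the familiar 50/50 coin uncertainty reappears at the output:

```python
import random

# A strictly deterministic "physics": which face shows depends only on
# how many half-turns the coin completes (a caricature of real dynamics).
def deterministic_flip(launch_speed):
    half_turns = int(launch_speed * 100)
    return "heads" if half_turns % 2 == 0 else "tails"

# Uncertainty enters solely through a prior over the boundary condition
# (the launch speed), never through the mechanism itself.
random.seed(0)
trials = 100_000
heads = sum(deterministic_flip(random.uniform(1.0, 2.0)) == "heads"
            for _ in range(trials))
print(heads / trials)   # close to 0.5
```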
Bob
Oh BS Salvador, stop sucking up. If Allen had done his scholarship he would know he is misrepresenting ID. I prefer to think that he has not done his scholarship over the alternative, which is that he has and his misrepresentations are deliberate.
Whoa, BACK UP. “Y’all done just flew off to Tahiti and left me sittin’ at Port Columbus.” (My mom says that.)
I’ll try to see if I understand all this, feel free to set me straight if I misunderstand.
First, I almost missed Dave’s comment to my comment way back up yonder there. Dave’s point (extended by Dennis) is that science never proves anything 100%. Like how Newton had gravity right, but Einstein had it “righter”. I get that. Newton had a close approximation of how the planets move, but there was a small problem with Mercury’s position, if I remember that right. It had something to do with being really close to something really big. Once Einstein came along, he figured out some of what was wrong, and now we understand gravity better, and Einstein’s laws work for Mercury and all the places where Newton’s laws work, but Einstein’s laws break down in a black hole. Someday, someone will figure out a law that’s even “righter” than Einstein’s laws, but they’ll never be perfect. Is that right?
But if we came across one example where a planet didn’t obey Newton’s laws approximately, then that would disprove Newton’s laws, and we’d have to start all over. Right?
Ok, silly example time:
JanieBelle’s law: A flipped quarter will land flat. (Compare to Newton)
Mung’s law: A flipped quarter will land on heads about half the time, and tails about half the time. (Compare to Einstein)
If we flip the coin a zillion times, and it lands on heads or tails, we’re all good. But if the zillion and first time we flip the coin, and it disappears, we’re up a creek.
It’s not perfect, but that’s the drift.
Now I thought that what Salvador was saying was that the guys who wrote the paper up above there had figured out a way, not to prove design necessarily, but to disprove natural evolution by random chance. I’m not sure how to work that into my silly example, but there ya’ go.
Am I with you so far?
hehe, Mung… I’m going to go a step further and suggest that if people would do their scholarship and combine it with intellectual honesty, we’d have more ID proponents than we’d know what to do with.
It should be pointed out that, although science can’t prove stuff with 100% certainty, it can come to reasonable and convincing conclusions. We can assert with virtual certainty that no one will ever be able to make a perpetual-motion machine, because the law of conservation of energy won’t permit it. Now, it is possible that there is some undiscovered natural law that allows energy to be had for free, but there is no reason to believe this, so we should not feign agnosticism about making perpetual-motion machines while we wait to discover this fantastic new law of nature.
It seems to me that the same applies to the origin of life and biological information. Natural laws and chance, both empirically and analytically, have been shown to be inadequate to the task of spontaneously generating life and new, complex biological information. Based on the best, currently-available evidence and analysis, natural law and chance represent the wrong explanatory category altogether for the phenomena in question. We do know of one thing that is up to the task of generating information, however, and that is intelligence, so this should represent the most reasonable inference, at least for now, while we wait to discover some enigmatic and heretofore undetectable natural process that is up to the task.
“Indeed, it is precisely because scientific reasoning cannot “prove” anything that ID theory is NOT science.”
____________________________________
In that case, what IS science?
Allen MacNeill: I am going to assume that you are not willfully misrepresenting ID, and that you genuinely believe that the explanatory filter depends on being logically CERTAIN that no naturalistic explanation is, or ever will be, sufficient to account for all the complex phenomena in biology. I have personally met many people who seem genuinely to harbor these misconceptions. ID is probabilistic. It is an inference about the BEST available explanation for observed phenomena, based NOT on ignorance (like some law which could conceivably be out there in operation which we haven’t noticed yet) but instead on positive knowledge about the types of things which are designed by conscious intelligent agents. As biology advances and uncovers greater and greater levels of complexity, the antique idea that chance and necessity wrought all of this becomes more and more IMprobable, while the idea that some intelligent agency acted to bring about this complexity becomes more and more probable.
Thank you for the feedback. I’ve mentioned the Displacement theorem and thus a followup to describe the Displacement Theorem in English would be useful.
“No Free Lunch, The Displacement Theorem, and the Disproof of Square Circles”
Expecting that the origin-of-life problem or the problem of large-scale biological complexity will ever be solved through the exploration of purely naturalistic mechanisms is like saying there could exist square circles in Euclidean geometry, or that the square root of two is rational. The naturalistic origins of life and large-scale biological complexity frame the scientific exploration in a way that cannot possibly succeed. Trevors and Abel stated it well regarding natural deterministic laws:
Extending this to combinations of chance and law is the next step, and that involves the Displacement Theorem.
If I may add and clarify: I’m actually with BarryA and DaveScot. ID cannot formally be proven, but it can be shown to be reasonable.
However, what can be proven is that ID’s fiercest opponents have been searching for square circles all their lives.
Salvador
You’re close.
Statement 1 and statement 2 are not the same. Let me give you an illustration. You have a very large box with 500 coins in it. You can’t see the coins in the box. You shake the box vigorously at 4:20 pm Eastern Time on July 6, 2006 and set it down. Then at 4:30 pm you shake the box again and set it down.
If Richard Dawkins came along and said, “after you shook the box at 4:20 pm, all the coins were heads,” would you believe him? Well, given that you have shaken the box again at 4:30 pm, there is no way to possibly know, is there? That corresponds to statement 1. Richard Dawkins, in other words, is making things up which cannot possibly be proven by any scientific means in the given situation!
However, if the IDers came along and said, “JanieBelle, it’s very unlikely random chance made all those 500 coins heads when you shook the box at 4:20pm”, you would hopefully think they had a more reasonable (but not completely airtight) case. That corresponds to statement 2.
The paper focuses mainly on what corresponds to statement 1, but it gives hints of statement 2.
Salvador
Ok, Salvador. I think I’ve got it now.
So the next question seems pretty obvious…
“Exactly how unlikely is unlikely?” And this paper is going to tell us that.
Right?
JanieBelle
“In other words, just because one cannot provide a naturalistic explanation for the origin of something today is literally no guarantee that such information cannot eventually be discovered and applied in a naturalistic explanation.”
If there were a naturalistic explanation for OOL, then there would be positive progress toward it; however, what we see is either no progress or actually negative progress in OOL research. As other commenters emphasized above, finding a naturalistic explanation is not a matter of time when actually there is none.
If we find a cow on the moon’s surface and expect to find a naturalistic explanation of how a cow could jump from earth and land on the moon, giving the problem more and more time will not bring a resolution, because the whole idea is based on flawed logic. The other (and best) alternative is looking for an intelligent cause for the transportation, for example a spaceship.
NASA has sent probes to Mars, and there is a chance that a few bacteria might have travelled along with a probe and reached the Martian surface. Now assume those bacteria are preserved in a frozen/dormant phase, and that in the far future Mars somehow gains conditions convenient for life, and those bacteria become active and populate the Martian surface. In this case the OOL on Mars would not be the result of chemical evolution.
Wonderful! (By the way, I cleaned up some of my typos, in case you want to re-read what I wrote 10 minutes ago).
The paper only hints at how difficult it is, and gives a long list of references on where to get an answer. I think everyone here at UD can give you their idea of how improbable it is, but it’s pretty improbable.
But in general, given the complexity of even the most minimal life, the chances life arose in one random attempt are worse than 1 in 3.27 x 10^150 (10^150 means 1 followed by 150 zeros, which is more than all the subatomic particles in the universe and all the ways they could reasonably have interacted since the beginning of time). In fact, that estimate is too optimistic; the odds are probably far worse. Given such odds for one attempt, even many attempts would not be enough, even using all the resources of the universe since the beginning of time.
To give you an idea of how this is calculated, if you flip a coin, what’s the chance it could be heads? Answer: 1 / 2
If you flip two coins together, what’s the chance both coins will be heads? Answer: 1 / 4
If you flip three coins together, what’s the chance all three coins will be heads? Answer: 1 / 8
How do I figure this out? There are canned formulas for this, but to illustrate: if you have 20 coins, then to get the probability, take 2, raise it to the 20th power, and divide into 1. Thus, if you flip 20 coins together, what’s the chance all 20 will be heads? 1 / 2^20, or 1 / 1,048,576
If you would like to see for yourself, try flipping 20 coins randomly and you’ll see they don’t all show heads simultaneously very often (if at all)!
By way of extension we can do this with 500 coins, and the chance all are heads through a random process is : 1 / some monster number
the monster number is approximately 3.27 x 10^150
That is the probability of 500 coins all coming up heads in one flip. 500 coins (or parts) is not that many compared to the number of parts needed in the simplest life form. A minimal life form may have thousands of necessary atomic parts.
The calculations are not quite so simple in the case of life (since there may be more than one way to structure life), but the calculation can still be done, albeit a bit more carefully.
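The coin arithmetic above is easy to verify exactly (a minimal sketch using Python’s standard `fractions` module):

```python
from fractions import Fraction

def all_heads_probability(n_coins):
    """Probability that n fair coins all land heads in one flip: 1 / 2^n."""
    return Fraction(1, 2 ** n_coins)

print(all_heads_probability(1))    # 1/2
print(all_heads_probability(3))    # 1/8
print(all_heads_probability(20))   # 1/1048576

# The "monster number" for 500 coins:
print(f"{2.0 ** 500:.3g}")         # 3.27e+150
```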
I hope this gives you the general idea of how these calculations are done. The chance that life arose as part of some cosmic accident seems pretty remote. If you think this world (despite its problems) is not an accident, then you are in agreement with what the numbers are telling us.
If you would like to learn more, I suggest the videos which you can view online by following the links:
http://www.uncommondescent.com.....chives/882
Salvador
I do think we may find traces of life outside of Earth. But that would not invalidate the design hypothesis.
Salvador
Ok, I get your drift. But isn’t that a calculation of one particular hunk of material becoming alive? Wouldn’t you have to multiply that times all the hunks of material in the universe? The universe is pretty darned big, so by that reasoning, it almost seems like it would equal out to 1/1, which would mean that life would not only probably happen, but it would almost HAVE to happen “accidentally”. Not that that rules out design, just that it doesn’t seem like it rules out random chance, either.
Sorry if I’m missing something, I’m just trying to get an idea here. (I’m blonde, whatdaya want?)
Thanks again, by the way, for taking so much of your time to explain this to me. It’s really very kind.
JanieBelle
I’m not sure if I’m being clear here.
Let’s go back to silly and simple.
I understand that it’s WWWWWAAAAAYYYYY not gonna happen that you get 500 heads on the first flip of 500 coins. But if you flip them enough, won’t they EVENTUALLY all come out heads?
So if each flip is like one piece of carbon or whatever stuff you need for life, it seems like you’d have to do a lot of flips to account for all the stuff in the universe.
Or does my example not follow?
Yes.
Good question, and an answer. Let’s look at the number I gave:
1 / (3.27 * 10^150)
Let’s multiply it by the number of chunks in the universe. The smallest chunk is a sub-atomic particle; it is estimated that there are 10^80 such particles in the universe. So let’s do the multiplication you suggest:
10^80 / (3.27 * 10^150) = 1 / (3.27 * 10^70)
Now the fastest rate at which a measurable quantum interaction can happen is 10^45 interactions per second, based on Planck time (from physics). So let’s factor that in:
10^45 / (3.27 * 10^70) = 1/ (3.27 * 10^25)
Let’s then factor in the number of seconds since the beginning of time, roughly 10^16:
10^16 / (3.27 * 10^25) = 1/ (3.27 * 10^9)
Which is still one in billions.
So, even if one used the entire universe and all its resources, at the maximum possible speed (which is excessively generous!), one still has only a remote chance of creating life. As I said, the probabilities are probably far worse than the optimistic numbers I gave. Further, assuming that the universe is exploring all these possibilities at 10^45 interactions per second is unbelievably generous.
Those are good questions. You are welcome.
Feel free to ask more questions. I have some other posts I need to respond to, and I’ll be away this weekend, so please forgive me if I’m delayed in responding. Don’t hesitate however to call upon my other very able comrades here at UD.
I recommend if you want to learn more, you can watch the videos I linked to.
http://www.ideacenter.org has a list of IDEA chapters around the nation, and there is John Calvert’s Intelligent Design Network: http://www.intelligentdesignnetwork.org It may take some work, but if you’re really determined, you might be able to meet some of the individuals in IDEA or the ID-network.
(Last but not least are those unsavory theologically heavy-handed creationist organizations out there, but well, let’s save those as a last resort. hehehe.)
Salvador
Sal,
I think your example was being very generous, because you are assuming that all the particles in the universe have an equal chance of combining to form life. The reality is, as shown in The Privileged Planet, that few places would have the proper conditions, and even those places only have the conditions for a certain period of time.
Thank you so much, Salvador. Take care of your other customers, and I’ll get back with you after the weekend. Enjoy!
I’m getting the math much better than the science itself. So while you’re enjoying your weekend, I’m gonna try and see if I can understand where your initial numbers come from. Once the numbers are plugged in, I see how it works, I’m just not sure about where the numbers came from.
Like…
1 / (3.27 * 10^150)
10^80
10^45
10^16
I’m sure the guys over at AtBC have some interesting ideas about where you got them from, but I’d rather find out for myself, thank you. (In case you hadn’t noticed, some of them are watching this thread rather closely. You should be flattered, I guess. I know I feel special being quoted. 🙂 )
I was going to say something else here, but I’ve thought better of it. It was pretty funny, though, so just laugh anyway. 🙂
Thanks again, Salvadore.
ajl,
That’s a good point, but hang on. You seem to be assuming life as we know it. In order to rule out chance, don’t we have to rule out the chance of any possible kind of life? Do we know for an absolute fact that silicon or bzywhateverium can’t make life?
My old science teacher (the million year old bald guy) used to say “The universe is not only stranger than we imagine, but stranger than we can imagine”.
Aren’t they finding some pretty weird stuff down in caves and at the bottom of the ocean? Stuff that eats rocks and all? I’m not sure we can rule anything out just yet, can we?
JanieBelle
Genetic engineers are a fact. We know that intelligent agents can manipulate genomes for fun and profit, and if you care to disagree I’ve got a bag full of genetically engineered rotten fruit to throw at you. Presumably, according to the Darwinian chance worshippers, these intelligent agents with white lab coats and gene splicing machines arose through natural processes without any intelligent help. Point #1: intelligent agents capable of genetic engineering are a natural part of the universe. Next, consider that DNA and ribosomes, the protein factories that exist in every living thing we’ve examined, form a digital, program-controlled machine. Instructions for manufacturing different proteins are *coded* onto the spine of the DNA molecule, and the ribosome reads those coded instructions just like a computer reads instructions from a program. The machine then assembles a protein according to those instructions, just like computer-controlled machines assemble complex pieces of automobiles. DNA and ribosomes are digitally programmed robotic protein assemblers, or *machines* in every sense of the word. Point #2: all living cells so far observed contain complex machinery. Next point: in every case where we observe a machine in nature and we *know* where the machine came from, we know it came from intelligent agency. Point #3: all machines whose origin can be determined come from intelligent agents.
Point #1: intelligent agents capable of genetic engineering are a natural part of the universe.
Point #2: all living cells so far observed contain complex machinery.
Point #3: all machines where the origin can be determined come from intelligent agents.
Now tell me why it’s unreasonable to consider it a strong possibility that the living machinery of life is the result of intelligent agency. If anyone can describe to me a plausible way for complex, program-code-driven machinery on the level of DNA and ribosomes to assemble via chance interactions of chemicals with no forethought, then I’ll re-evaluate whether ID is the best explanation for where these machines came from. Until then, the best explanation is rather obvious, unless you’ve got some kind of mental block that makes you refuse to believe it possible that intelligence existed in the universe before humans came along. -ds
For the benefit of the readers, various numbers have been floated around for the minimum number of parts of a self-replicating von-neumann automata (which is a necessary condition for life).
Something on the order of 1 out of 10^40,000 for probability was my inference from von Neumann’s writings. This is independent of the physical substrate of the automata, whether it is made of semi-conductor materials, DNA, amino acids, whatever…..
Because the architectures of the automata are recognized as independent (since we see partial constructs of it in engineering), and the specification was present before we knew much of the cell, it does not matter whether one defines life another way because what matters is we have found an artifact (life) matching an information rich independent specification (Turing machines and von neumann automata). They are even more information rich than Paley’s watch. Thus a design inference is reasonable even if we were to define life another way. After all, a complex machine is still a complex machine!
The von-neumann automata was mentioned in Dr. Albert Voie’s paper. I’d like to announce that possibly in a week or so, Dr. Albert Voie may visit our weblog!
The number 10^150 which I gave above is thus one of the smaller numbers I had available. I invite my fellow UDers to give some of the numbers they are familiar with.
Salvador
Ok, now I’m lost again. I’m gonna do what I said before, and we’ll have to come back to all that stuff.
Enjoy your weekend, Salvadore.
JanieBelle
Easy, Dave, I’m not suggesting anything else.
I’m trying to understand how Salvadore and these guys prove mathematically that evolution isn’t true, or as Salvadore said earlier, that it’s mathematically nearly impossible for chance to explain life.
I’m just trying to see how we cover all the bases, so there are no holes for things to slip through.
I agree that it’s kind of dumb for them to say “we made this happen without intelligence”. If they made it happen, well, duh, there’s intelligence.
I’m just trying to understand this, it’s not really necessary for you to be rude and yell at me about mental blocks.
Sorry if I said something to P you O.
JanieBelle
I didn’t yell and it was the generic “you” not the personal “you”. If it’s personal I’ll add something about how your
momma girlfriend wears combat boots so there’s no mistake. 😉 -ds

Bob,
No, it is not just an assertion. I’ll take up this issue again because it is too important. A good illustration is a communication channel, as it illustrates a conduit where the output is deterministic with respect to the input. You have the sender and receiver. In such a system, the sender sends 20 bits of information by imposing boundary conditions on the input end of the channel. The receiver will receive 20 bits of information. In the absence of the boundary conditions being imposed by the sender, 0 bits appear at the output end (even if there is some sort of data decompression at the output end). Thus, this illustrates that deterministic laws do not spontaneously create information!
To give historical context: Murray Eden in the ’60s suggested there might be undiscovered laws of nature which might be the source of life. If such a law were deterministic, however, then apart from boundary conditions it could not spontaneously create information. Trevors simply stated what has since been realized: no deterministic law spontaneously creates information; if it did, it would not be deterministic!
If you can agree that deterministic laws, in the absence of boundary conditions do not spontaneously create information, then we can move to the issue of the probability of appropriate boundary conditions infusing information into lifeless matter.
You may not like the way Trevors partitioned the issue (by partitioning physical reality into deterministic laws, stochastic laws, and boundary conditions). But if you can accept that one can still describe system behavior along these lines, then the discussion can move forward. It would be helpful to state that Trevors’ partitioning is completely consistent with the way physical theories are described today.
I think your resistance is to his partitioning, not really to the veracity of his proof.
Salvador
PS
I can deal with the decompression issue at the output end, but let’s save that for later, OK? The point is that even a deterministic decompressor does not spontaneously create information.
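Salvador’s channel illustration can be sketched numerically. The snippet below is my own toy example (the values and function names are illustrative, not from Trevors and Abel); it shows the standard information-theoretic fact behind the claim: a deterministic map can preserve or destroy Shannon entropy, but never create it.

```python
from collections import Counter
from math import log2

def entropy(outcomes):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum(c / n * log2(c / n) for c in counts.values())

# 4-bit messages for illustration (20-bit messages behave the same way).
inputs = list(range(16))            # uniform source: H(X) = 4 bits

# A deterministic "channel": the output is a fixed function of the input.
channel = lambda x: (x * 7) % 16    # a bijection: no information lost
outputs = [channel(x) for x in inputs]

collapse = lambda x: x % 4          # many-to-one map: information destroyed
collapsed = [collapse(x) for x in inputs]

print(entropy(inputs))     # 4.0 bits in
print(entropy(outputs))    # 4.0 bits out: a bijection preserves entropy
print(entropy(collapsed))  # 2.0 bits: a deterministic map can only lose bits
```

With no sender imposing a distribution on the input end, there is nothing for the deterministic map to transform; the bits at the output come from the boundary conditions, not from the law itself.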
“Now tell me why it’s unreasonable to consider it a strong possibility that the living machinery of life is the result of intelligent agency.” –ds
ds, I agree completely that it isn’t unreasonable to seriously entertain this possibility. What I can’t for the life of me understand, though, is why the ID movement does not find it equally reasonable to consider that *that* intelligent agency must, by the very same reasoning, be very likely derived from another such agency. Which in turn would need to be derived from another, and so on. You get the picture. Ultimately you have to face the question of the source of intelligent agency. Darwinism proposes, thus far without much evidence, that complexity/information in biological systems was developed, from the bottom up, from primitive constituents. As best I can gather, ID withholds itself from making any claim about the history/creation of the intelligent agency inferred. Yet how can you assault Darwinism for not providing an adequate account of biological complexity (i.e. for not invoking intelligent agency) while at the same time claiming that the intelligent agency, as a complex intelligent entity of whatever form, does not need an adequate explanation aside from a de facto assertion of its existence? Am I missing something? I sincerely would like to know how you folks resolve for yourselves what seems to me a fundamental inconsistency with this whole intellectual endeavor. In other words, why is it okay to stop with the first intelligent agency inferred? At least Darwinists make a half-a**ed attempt to account for the formulation of high levels of complexity/information.
why the ID movement does not find it equally reasonable to consider that *that* intelligent agency must, by the very same reasoning, be very likely derived from another such agency
Huh? I find the possibility reasonable. The unfortunate fact is we don’t have any data about the next level. All the evidence of intelligence in the universe we have is the machinery that we created and the machinery in living things that predated our machinery. There’s some tantalizing evidence that the universe itself was the result of intelligent agency but again, we have no empirical evidence to lead us beyond the moment the observable universe was instantiated. Maybe SETI will find something that will provide further clues. Maybe God will return and tell us. Until then, we work with what we have. My feeling is that intelligent agency predates the instantiation of the observable universe. Currently there’s no way of investigating that so it really doesn’t belong in a discussion about math and science and empirical evidence. So why don’t we stick to things for which first hand evidence exists. -ds
Of course it does no such thing, unless one is omniscient. The counter-example is a pseudo-random number generator. If you don’t know that there’s a deterministic algorithm, then you’ll conclude it’s stochastic (assuming it’s a good RNG).
Gah! My argument is precisely that the boundary conditions can be stochastic!
There’s no proof! It’s just assertion, assertion, assertion. I don’t believe a lot of the assertions, such as that there is not enough stochasticity in the system. This assertion is just obviously wrong: there’s a large literature which treats law-like systems as stochastic: e.g. statistical physics. Trevors and Abel accept that there is some stochasticity in the system, as I’ve already pointed out. I’ve also shown how stochastic output can be gained from a deterministic system: in two ways now (stochastic inputs, and an unknown pseudo-random system).
Bob
I didn’t yell and it was the generic “you” not the personal “you”. If it’s personal I’ll add something about how your momma girlfriend wears combat boots so there’s no mistake. -ds
Oh, good. When I read all that gobbledygook, it “sounded” in my head like you were mad at me. I’m glad you’re not, because you’ve been sooo sweet to me, here, in your emails, and at my blog. I like having you around, and I’d hate to ban “the Banninnator” HAAHAHAHA….
Hey watch it with the combat boot jokes, or I might make YOU my girlfriend, buster. -jb 🙂
As Frank Burns on Mash said: “It’s nice to be nice to the nice.” That said I’m going to have to bow out of further commentary at your blog. It’s become a bit too risque for me, all things considered. Sorry about that. -ds
Response to comment #38
AM:
Both Michael Behe’s concept of “irreducible complexity” and William Dembski’s concept of “complex specified information” are based on the logic of elimination (aka “logical exclusion” a la the “explanatory filter”).
The EF, like ALL filters, is eliminative. That is true. The EF eliminates via consideration. But your premise is false:
Dr Behe:
and
AM:
That is, they depend on being certain that one has eliminated natural causes for the origin of complex biological objects and processes, thereby logically requiring an alternative hypothesis (i.e. that such objects and processes must have been “intelligently designed”).
Wm. Dembski pg 36 of The Design Inference:
What part about that don’t you understand?
I’m glad we can agree on something
I’m afraid not, because if one is Omniscient (All-Knowing), the deterministic law does not tell Him something He doesn’t already know, thus there is no surprisal value, thus to the Omniscient One there is no reduction of uncertainty for Him, thus no information is created as far as He is concerned. One could then argue that for a Being to be All-Knowing, such a Being must then also be the Author and Ultimate Source of all information…but I digress.
I’m afraid that is not correct either, because all one has to do is run the pseudo-random number generator again, and demonstrate 100% repeatability! In such case, because the system repeats, one realizes one is dealing with a deterministic system not a stochastic one. After all, a responsible scientist will try to rerun his experiment or otherwise try to create reproducibility wouldn’t he? 🙂
How do we discover deterministic (or approximately deterministic) properties in systems such as natural laws? Through reproducible experiments! If one runs the system and a different output is generated, one is not dealing with a purely deterministic system. Thus the counter-example you offered is invalid.
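The repeatability test described above is easy to demonstrate with any seeded pseudo-random number generator. A minimal Python sketch (my own illustration): note that the rerun reproduces the output only when the seed, i.e. the initial boundary condition, is also reproduced.

```python
import random

def run(seed, n=10):
    """Draw n digits from a generator with a fixed seed."""
    rng = random.Random(seed)  # seeded generator: fully deterministic
    return [rng.randint(0, 9) for _ in range(n)]

# Re-running with the same seed reproduces the sequence exactly,
# exposing the system as deterministic rather than stochastic.
assert run(42) == run(42)

# A different seed (a different boundary condition) gives a different run.
assert run(42) != run(43)
```

So a “good RNG” only looks stochastic for as long as the experimenter is unable to reproduce its initial conditions; reproducibility is exactly the experiment that distinguishes the two descriptions.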
This leads of course to the discussion of stochastic processes (versus purely deterministic process) where we don’t have absolute repeatability, but only repeatability of certain macroscopic properties (like temperature or pressure). The pattern of 500 coins in a box after being shaken is an example of something we view as a stochastic process. Stochastic processes will be the topic of subsequent posts in this thread.
And this shows the exact flaw in materialistic/naturalistic science that I’m trying to point out: using stochastic outputs (appeals to uncertainty) to try to account for the emergence of CERTAIN specific complex artifacts is hopeless. It’s like trying to find certainty through creating more uncertainty.
But can a stochastically described process (whose behavior is defined by a few bits of information, such as the type of distribution plus some parameters like mean and standard deviation) coupled with deterministic laws account for highly improbable and also highly specific patterns (like, say, 500 coins all heads)? No, it is the search for square circles as well. Combination systems are the topic of the Displacement Theorem (another thread).
As I pointed out in my example of 500 coins in a box to JanieBelle, assertions of the emergence of specified complex events (such as 500 coins all heads) are not scientifically defensible, because the inherent definition of stochastic processes prevents such processes from being able to scientifically account for high-specificity events. If a stochastic process accounts for highly improbable specific events, then by definition it is not stochastic; it is the mathematical equivalent of a square circle.
Appeals to purely deterministic laws do not solve the problem either, because deterministic laws by definition do not spontaneously create information apart from boundary conditions; thus for a system to create information it must have a degree of uncertainty, exactly as Trevors argues.
But in this thread, I discuss purely deterministic and purely stochastic processes. I’ll go to the more difficult topic of combinations of deterministic and stochastic processes in a yet-to-be-posted thread on Dembski’s displacement theorem.
Salvador
Yes.
Bob
Isn’t this a peer-reviewed ID paper? If it is, it predates the Meyer paper. Has it generated much ruckus?
Well, they were smart enough not to advocate for ID. :=)
Albert Voie’s paper however is a pro-ID paper that did pass peer review. I mentioned Voie’s paper here:
http://www.uncommondescent.com.....chives/722
Abel (Trevors’ co-author) encouraged Voie to write what eventually became a pro-ID paper.
I do not know what Trevors and Abel actually believe. However, Abel has posted a million dollar reward for finding a naturalistic answer to life; see:
http://www.us.net/life/
What do you think of that? 1,000,000 buckaroos for solving OOL! Perhaps I should offer a $1,000 prize for finding a square circle.
Salvador
PS
off for the weekend, see y’all next week
“Yet how can you assault Darwinism for not providing an adequate account of biological complexity (i.e. for not invoking intelligent agency) while at the same time claiming that the intelligent agency, as a complex intelligent entity of whatever form, does not need an adequate explanation aside from a de facto assertion of its existence. Am I missing something?”
I think it lies in the fact that Modern Evolutionary Theory (“NeoDarwinism”) is simply inadequate to explain with any kind of impressive precision (to me personally) much of what exists. Therefore the best explanation available at the present time, given what we know about similar systems fabricated by mankind, is that an intelligent agency with foresight fabricated the biosystem that exists. Particularly when I contemplate the existence of the original reproducing cell, of which MET has nothing to say at all, but from whence the whole shebang springs. I see process theory, information theory, and common sense killing MET. That is not to say that there is no evolution. On the contrary, once we understand the nature of the original cell (if this is possible) all questions may be answered on that score. In the meantime, we must grapple with the fact that *something* is responsible for the fantastic digital computing process that led to all of what we see. It screams design from every angle, and I think only fools deny it. But who am I.
“Appeals to purely deterministic laws do not solve the problem either, because deterministic laws by definition do not spontaneously create information apart from boundary conditions,”
Not only that, but pure determinism (which must exist to the materialist unless reason is to be discarded, for then cause and effect is violated) implies a front-loading of everything that exists, which naturally leads to the issue of how the universe became front-loaded in such a manner. Otherwise one must open the door to a *genuine* randomness, i.e., uncaused events. Which is just another way of saying, “we don’t know what the hell causes it.”
But of course, this is all just “philosophical gas.” *Real* biologists (the neoDarwinists) just stick to the empirical facts, right?
Perhaps as a theologian or philosopher, but isn’t science precluded from trying to find the answers to ultimate questions? And since ID wishes to remain within the realm of science, it does not address the question of the ultimate source of intelligent agency.
Which is as it should be, until such an entity can be brought within the purview of science.
Darwinism is not assaulted for not invoking intelligent agency. It is, however, assaulted for failing to provide an adequate causal account of biological complexity. It is necessary to do so in order to show why ID is a better explanation. Now, you want ID, once it has inferred design, to begin an exploration of the designer. And your beef with ID is that it does not do this. In fact, ID cannot even tell us if that designer exists. So, given that ID cannot even tell us that the designer exists, why should ID address itself to explaining the designer? Why can ID not take the designer as a given, like Darwinism takes OOL as a given?
What on earth do you mean?
To infer that some event had an intelligent cause is not to identify any specific intelligent agent. So what precisely is it that ID is stopping at? What is it that you would have ID explore, and using what method, once ID has inferred design of some feature?
“If a stochastic process accounts for highly improbable specific events, then by definition it is not stochastic, it is the mathematical equivalent of a square circle.”
This is the crux of the matter. If by sifting through many terabytes of data I run across the ASCII codes for the text of War and Peace, then by any coherent (and, I might add, meaningful and useful) definition of stochasticity and information, we have encountered information.
GIGO, baby, GIGO. All first year computer science majors learn that. When the input garbage is shaped into non-garbage output, something specific is at work in the “shaper.” If the input is stochastic and the output consists of even stochastic numbers, an algorithm is obviously at work.
If the entire universe is taken as a whole, any events leading to specific patterns must have been processed by preexisting conditions. (Obviously.) How did the universe come to the state it is presently?
Whenever I talk to an anti-ID person, I always try to get the metaphysical philosophy cleared up first, what they really believe about the nature of the universe. Does any sort of true randomness exist or not? Not just random or “stochastic” from our viewpoint, but truly random from any viewpoint: genuinely undetermined events, events where the result is undetermined by the cause? The answer tells me a lot about how a person thinks. Other than just *saying* the words “an undetermined event”, I find the concept to be utterly meaningless.
If a person believes in a strict determinism, then everything that exists was set in stone, so to speak, from the first unit of Planck time. Beyond that point, no rational inquiry can exist. A logical dead end. (Yes, I know about multiverses, or expanding and contracting universes, and so forth, but like the question of God’s existence, a multiverse or a repeating universe doesn’t explain the meta concept of its existence in the first place, or anything about how our universe got the specific ordering that it has.)
If a person accepts that undetermined events truly occur, and that THESE are the source of the biological complexity on Earth, then in effect they are simply taking the “I don’t know” position using different words. However, I believe it is fundamentally weaker than pure determinism, since pure determinism only requires a single logical dead end, and this view requires two logical dead ends. Either one may be correct, who is to say. But I should think one gap would be preferable over two gaps to any scientist. And who knows? When you let open the “back door” to reason by allowing undetermined “I don’t know” events to occur presently, why then, ANYTHING could happen. Induction may make us feel better, but it is hardly “true.” Moreover, the idea of undetermined events is simply nonsense. (Quantum Mechanics deals basically with statistical probabilities of events, and relegates certain subatomic events to the “I don’t know” category. However, QM strictly speaking is incomplete in how it deals with these undetermined events. There are schools of thought on what is going on “down there” (many worlds, Copenhagen, etc.) but these interpretations are not QM itself. Something unknown “shapes” the individual undetermined events into statistically accurate outcomes overall, so perhaps they are not so undetermined after all. String theory attempts to explain it but lacks empirical verification thus far, and ST is a purely deterministic system. Point is, nobody knows why undetermined events occur the way they do according to QM, or if they are genuinely undetermined. It’s an open question.)
At any rate, whether or not undetermined events actually exist or not is irrelevant. Either view poses serious questions to the anti-ID mindset. In the universe, every event is shaped by the state of the entire system that came before. There’s no getting around that. And the universe’s “shaping mechanism” led to intelligent beings with insight, foresight, and the ability to deeply probe itself.
How could it have been any other way?
Thanks for reading.
Mike 1962:
Thanks for the post (#77). And I agree, unless one is clear on the determined/undetermined question, everything else is pointless. Let me be clear right off the bat that I am firmly in the undetermined camp. This, of course, puts me at odds with many in the “pro-evolution/anti-ID” camp, who are what I sometimes refer to as “relentless determinists.” My good friend and mentor, Will Provine, often makes statements that seem to me to be “pan-determinist”, and with which I therefore disagree (and, surprise, we still remain the best of friends).
This problem is paralleled by what I call the problem of “pan-adaptationism.” As most clearly articulated by Lewontin and Gould in their “spandrels” paper, this metaphysical position (and it is IMO metaphysical, not empirical) assumes that virtually no characteristic of any living organism is “accidental” – that is, everything is an adaptation. There are historical reasons why many supporters of the “modern synthesis” (i.e. what the ID camp likes to call “neo-darwinists”…odd, many of us used to like that moniker) take this position, beginning with R. A. Fisher’s assertion that individual fitness is summed over the entire genotype, and can therefore be considered to focus on a single “crucial” character/allele that is the entire subject of selection. Fisher asserted this because he couldn’t model selection mathematically without doing so, but it set the stage for (or at least didn’t discourage) the “pan-adaptationism” that Lewontin and Gould decry.
Indeed, as I am currently working up in an essay (which will, in the fullness of time, become a book), a fully determined universe is a “closed” universe, and therefore everything in it will perforce be “closed” as well, including society and our own minds (i.e. neither “free will” nor anything like it can possibly exist). I’m on the side of Karl Popper, author of The Open Society and Its Enemies, and therefore rush in (where some others fear to tread) and assert that both the universe and everything in it are (at some level) undetermined (or, at least, not fully determined). IOW, as Democritus said, “all things are the fruit of chance and necessity”, not just necessity.
Does this mean that I might entertain the idea of ID? Of course it does, but as a fully committed empiricist, I won’t change my mind until IDers do the same things to verify their theory that we neo-evos do: propose some testable hypotheses, formulate some predictions that can be empirically (i.e. not just theoretically) tested, then do the tests (i.e. get your hands dirty), analyze the results (preferably using generally accepted statistical methods), and then publish the results in peer-reviewed journals. I know, there have been some peer-reviewed pro-ID articles (some are referenced above), but I haven’t seen any (yet) that use real organisms in natural environments to validate or falsify testable hypotheses. If this can be done, and the results look good, then it’s time to break out the bubbly. Doing so before you’ve jumped through the hoops is not only premature, it makes you look like a deluded, self-aggrandizing idiot to the people who have spent their lives getting their hands dirty. And nobody wants to look like a deluded, self-aggrandizing idiot…
…not me, anyway.
Why should we be held to a higher standard than you? There is no empirical evidence using real organisms in natural environments in support of the claim that random mutation plus natural selection can create novel cell types, tissue types, organs, and body plans. Physician, heal thyself. -ds
scordova @73: “I do not know what Trevors and Abel actually believe.”
Discovered this for Trevors:
From “Inquiring Minds Want to Know” by Andrew Vowles ( http://www.uoguelph.ca/atguelp.....file.shtml ).
j,
Thank you for the information! That entire article on Trevors is worth reading.
I respect Trevors’ open mind on the issue, and I’m glad the scientific community accepted publication of his thought process.
Salvador
Allen,
Thank you for your post. It is of value knowing where you think IDers need to focus their efforts.
However, I think ID will be accepted by next generation scientists, doctors, and engineers through outside literature, not peer-reviewed literature because of the institutional barriers. Dean Kenyon and Michael Behe and others were persuaded with popular literature outside mainstream peer-review. Curiously, they were actually well-qualified to be peer-reviewers of such popular literature (especially Kenyon when he read the works of A.E. Wilder Smith!).
There will be heretical ideas circulated outside the blessings of “the powers that be” even amongst practicing scientists. I expect these alternative, informal research networks will be where ideas are developed, even anonymously. This is possible now because of the internet.
Any literature for main-stream publication will have to diplomatically avoid references to the fact it is favorable or could be interpreted as favorable to intelligent design.
I expect very little empirical research to be devoted by IDers to moving naturalistic evolutionary theories or OOL theories forward since it is largely viewed as a hopeless quest in their eyes, especially OOL.
Given that, ID theory may not ever meet the criteria you would expect, and I respect that.
However, if you find value in the work of Jack Trevors (who is not an IDer) or Richard Sternberg or even John Sanford, I would consider that progress, as I think from a scholarly standpoint, their ideas make a valuable contribution to critical thinking with respect to prevailing theories.
Salvador
As this thread is quickly slipping out of the queue, let me wrap up a few points.
Regarding purely deterministic laws: even a deterministic pseudo-random number generator cannot infuse life with information apart from a coupling mechanism (a boundary condition). Thus deterministic laws by themselves are impotent to infuse biology with information, no matter how information rich the laws are (and the known ones are actually very information poor, according to the work of Gregory Chaitin). This is consistent with Trevors’ paper. I think Trevors’ claims hold and are mathematically irrefutable by definition.
Moving on to purely stochastic “laws” or chance processes. These processes can only be described by simple, general specifications. That is, like deterministic laws used in physics, they are usually poor in information content. When we say a concept is simple and elegant, mathematically speaking it is information poor. F=ma is simple and elegant, but information poor in terms of bits…
Stochastic laws are in a similar way information poor, but in ways less obvious. A stochastic law is usually described with a simple distribution and a few parameters. For example, the normal distribution can be described by a simple equation, a mean, and a standard deviation, and nothing else. It is information poor in its description.
Consider an illustration where 500 coins are subject to non-specific boundary conditions (like being shaken in a box). This process can be described by a stochastic process where a fair coin has a probability of being heads 50% of the time. A stochastic process is an appropriate model given the absence of specific boundary conditions (like precise specification of the initial conditions and forces acting on the coin).
For example, given that each coin has a probability of being heads about 50% of the time, it is highly unlikely all 500 coins will be heads apart from specific boundary conditions. That is a predicted, easily seen macroscopic property. We see such a stochastic process can’t reasonably be expected to make all coins heads, but rather most outcomes will be such that about 50% of the coins will be heads.
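That macroscopic prediction is easy to check by simulation. A quick sketch (the trial count and seed are my own choices, purely illustrative): shake 500 fair coins many times and watch the heads fraction cluster near 50%, with all-heads never appearing.

```python
import random

random.seed(0)
N, TRIALS = 500, 10_000

all_heads_seen = 0
total_heads = 0
for _ in range(TRIALS):
    # 500 fair coin flips modeled as 500 random bits
    heads = bin(random.getrandbits(N)).count("1")
    total_heads += heads
    if heads == N:
        all_heads_seen += 1

mean_fraction = total_heads / (N * TRIALS)
print(round(mean_fraction, 2))  # ≈ 0.5, the predicted macroscopic property
print(all_heads_seen)           # 0: P(all heads) = 2^-500 ≈ 10^-151 per trial
```

No feasible number of trials changes the second result: at 2^-500 per shake, the all-heads outcome is unreachable by this process on any realistic timescale.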
But there are exceptions. Letting H represent heads and T represent tails, the following pattern would be superficially consistent with the idea that 50% of the coins are heads, yet the pattern resists stochastic explanation:
Surprisingly one could progressively make the patterns more intricate until the pattern is Kolmogorov Complex and there would still be problems. It is this claim that is perhaps not so obvious. All coins heads being improbable is obvious, but a Kolmogorov complex pattern being improbable is not so obvious.
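One common way to make the Kolmogorov-complexity intuition concrete is to use compressed length as a crude upper bound on descriptive complexity (a standard proxy of my own choosing, not anything from the thread): simple patterns compress to almost nothing, while a typical random pattern does not.

```python
import random
import zlib

def compressed_len(s):
    """zlib output length: a crude upper bound on Kolmogorov complexity."""
    return len(zlib.compress(s.encode()))

random.seed(1)
all_heads   = "H" * 500                                   # maximally simple
alternating = "HT" * 250                                  # 50% heads, still simple
random_run  = "".join(random.choice("HT") for _ in range(500))  # typical shake

print(compressed_len(all_heads))    # tiny: highly compressible
print(compressed_len(alternating))  # tiny: superficially "50% heads", yet simple
print(compressed_len(random_run))   # much larger: nearly incompressible
```

The point of the surrounding paragraphs is that improbability as a *specific target* holds for both kinds of pattern: a prespecified incompressible string of 500 flips is exactly as improbable as all heads (2^-500), even though it looks unremarkable.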
The problem is that when we begin looking to a purely stochastic process to create a highly specific outcome (for a given trial), we get into trouble. As long as the outcomes are described in generalities (like 50% heads) versus specificities (like very exact patterns, even Kolmogorov Complex patterns) we avoid getting into trouble. Doing otherwise would be like asking a stochastic process to tell me what your passwords are. It simply is illogical.
That is why I said, if a stochastic process is called upon to give a highly specific outcome (for a given trial), it ceases to be a stochastic process by definition. Highly specific outcomes (or specified outcomes) for a given trial do not come about through stochastic processes. This obvious fact has been pounded on by Dembski, and it is so obvious, I’m at a loss that it is not clearly seen.
Purely deterministic and purely stochastic processes (in and of themselves) have been shown to be inadequate generators of specific biological information. But what of combinations of deterministic and stochastic processes? That is the subject of my upcoming thread on the Displacement Theorem.
I will also post on Dave Thomas’s evolutionary algorithms, but in brief, his disproof can be illustrated by this fictional scenario:
Salvador
It was written by Allen MacNeill…
But, if standard scientific inference using induction cannot possibly “prove” anything, then the logical elimination of natural causes is quite literally excluded as a logical operation. In other words, just because one cannot provide a naturalistic explanation for the origin of something today is literally no guarantee that such information cannot eventually be discovered and applied in a naturalistic explanation. Therefore, applying the ID concepts of IC and CSI should only be done as a last resort (once all possible naturalistic explanations have been tested and invalidated), as they depend fundamentally on the kind of comprehensive logical elimination that inductive reasoning absolutely prohibits.
eebrom, using “argument by conservation of right-handed parity” writes (;-))…
But, if standard scientific inference using induction cannot possibly “prove” anything, then the logical elimination of intelligent-design is quite literally excluded as a logical operation. In other words, just because one cannot provide intelligent-design explanation for the origin of something today is literally no guarantee that such information cannot eventually be discovered and applied in an intelligent-design explanation. Therefore, applying the naturalistic concepts to IC and CSI should only be done as a last resort (once all possible intelligent design explanations have been tested and invalidated), as they depend fundamentally on the kind of comprehensive logical elimination that inductive reasoning absolutely prohibits.
Salvador, regarding your comment #83, you’re very close to understanding something that will change your view of specified complexity. I’d like to discuss it with you on a neutral board. Thanks.
secondclass,
Comment #83 was eebrom’s.
visit teleological.org and post a SHORT comment there, and if I have time, I’ll try to respond.
Thanks for your participation.
Salvador