
Reductionist Predictions Always Fail


 Rod Dreher writes:

Time and time again, an experimental gadget gets introduced — it doesn’t matter if it’s a supercollider or a gene chip or an fMRI machine — and we’re told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn’t it? We soon realize that those pretty pictures are incomplete and that we can’t reduce our complex subject to a few colorful spots. So here’s a pitch: Scientists should learn to expect this cycle — to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.

…Karl Popper, the great philosopher of science, once divided the world into two categories: clocks and clouds. Clocks are neat, orderly systems that can be solved through reduction; clouds are an epistemic mess, “highly irregular, disorderly, and more or less unpredictable.” The mistake of modern science is to pretend that everything is a clock, which is why we get seduced again and again by the false promises of brain scanners and gene sequencers. We want to believe we will understand nature if we find the exact right tool to cut its joints. But that approach is doomed to failure. We live in a universe not of clocks but of clouds.

Comments
And exactly why would you believe a stable genome for over 30 million years would support evolution? bornagain77
“I have emailed Dr. Cano and linked him to your post #90, asking him whether his work supports an argument against evolution.”
No response so far. Petrushka
Freelurker: Thank you for the clarification. I think we substantially agree. Engineers can certainly be left in peace, and no "engineering" tag is especially needed by ID. gpuccio
@gpuccio Back to your comments:
To go back to my example of myoglobin, we have to know what myoglobin does, and how it does it. That is a task which has to be accomplished, and if you say that such a task is more specific of an engineer’s approach, that’s fine with me. But that task is a fundamental part of the ID discourse.
We are not far apart on this. But I say let's "know them by their fruits," i.e., by the products they produce. Finding out what something does and how it does it is not an engineer's approach, it's an engineer's desired end result (in engineering analysis). Yes, if an IDist is going to attribute intelligence to a pattern then they have to learn what the pattern is first. But the end result of ID is putting a tag on something, a tag that says "attributable to intelligent design," "attributable to regularity" or "attributable to chance." And, btw, learning about biological structures and functions is not a distinguishing feature of ID. It is also fundamental to what evolutionary biologists do.
Then there is the causal part. ID does not stop at “determining the design”. It says that the function, if present and complex, can be attributed to intelligent intervention. But what would an engineer say?
The engineer would say: "I can see that this attribution to intelligent intervention is very important to you; in fact, you appear to be totally consumed by it. But I'm an engineer. I enjoy inventing things and figuring out how things work. That's what they asked me to do and that's what I get paid for. I can't justify charging that kind of discussion on my timecard, especially since nobody knows what the intervention would have been. Let's get together on this again next Sunday."
So, I still don’t understand where the equivocation is.
Based on your last comment, you do not appear to be one of those who outright conflates reverse engineering with design detection. You are not equivocating on the word "design." Freelurker_
@gpuccio Thanks for your latest response. It was clear and it helped me understand where you, individually, are coming from.
Still I don’t understand the emphasis on engineering or not, but I will try just the same to answer, also to clarify further the terminology.
Yes, I should explain my emphasis on engineering. I'll do that in this comment, so not everything in this comment is directed at you. I'm an engineer who has a problem when IDists misrepresent my profession to further their social/religious movement. IDists often kid themselves and others that IDists take an engineering perspective. Dembski and Marks have even proclaimed that ID belongs to the engineering sciences. But the IDist perspective is that certain aspects of nature were engineered by an intelligence. No field of engineering assumes or concludes that at all. (This is not to say, however, that there is anything about engineering that is in opposition to that, as a general prospect.) Engineers, when they are doing engineering, take a materialistic and mechanistic view of nature. I came to this thread because it seemed to me that DATCG and johnnyb were conflating (1) the figuring out of how something worked or how it was put together (aka reverse engineering) with (2) attributing certain patterns to intelligence (aka design detection). scordova helpfully provided a clear example of someone doing just that. I had seen this conflation before, and I had figured that the root cause of it was just equivocation on the word "design." As I explained above, Dembski's and Behe's use of the term "design" is very different from the way it is used in engineering. But, it now seems to me that something is going on besides the equivocation. For some of you (johnnyb, maybe; scordova, definitely) the act of figuring out how something works is in and of itself "doing ID." If so, what do you ID guys stand for? At one point "doing ID" meant supporting the claim that certain aspects of nature were best explained by intelligence. Next it was just studying patterns that indicate intelligence. Now it's just figuring out how stuff works (?) Freelurker_
Mr. Hayden, I posted this video because in it Dr. Craig speaks of abstract numbers in comparison to "the first cause": The First Cause Must Be A Personal Being - William Lane Craig - video http://www.metacafe.com/w/4813914 bornagain77
Petrushka you state: "Quantum theory has been described as the most nearly prefect theory in science. Every experimental test has confirmed its predictions to the limits of instruments. And yet it is incomplete. It fails to account for gravity." Would you believe that a strong case can be made for Jesus "unifying" quantum field theory and General relativity? I find it extremely interesting that quantum mechanics tells us that instantaneous quantum wave collapse to its "uncertain" 3-D state is centered on each individual observer in the universe, whereas, 4-D space-time cosmology tells us each 3-D point in the universe is central to the expansion of the universe. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that I exist? Psalm 33:13-15 The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works. This is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics). A congruence they seem to be having a extremely difficult time "unifying" mathematically into a "theory of everything".(Einstein, Penrose). The Physics Of The Large And Small: What Is the Bridge Between Them? Roger Penrose Excerpt: This, (the unification of General Relativity and Quantum Field theory), would also have practical advantages in the application of quantum ideas to subjects like biology - in which one does not have the clean distinction between a quantum system and its classical measuring apparatus that our present formalism requires. In my opinion, moreover, this revolution is needed if we are ever to make significant headway towards a genuine scientific understanding of the mysterious but very fundamental phenomena of conscious mentality. http://www.pul.it/irafs/CD%20IRAFS%2702/texts/Penrose.pdf "There are serious problems with the traditional view that the world is a space-time continuum. Quantum field theory and general relativity contradict each other. The notion of space-time breaks down at very small distances, because extremely massive quantum fluctuations (virtual particle/antiparticle pairs) should provoke black holes and space-time should be torn apart, which doesn’t actually happen." - G J Chaitin http://www.umcs.maine.edu/~chaitin/bookgoedel_6.pdf Yet, this "unification", into a "theory of everything", between what is in essence the "infinite world of Quantum Mechanics" and the "finite world of the space-time of General Relativity" seems to be directly related to what Jesus apparently joined together with His resurrection, i.e. related to the unification of infinite God with finite man: The Center Of The Universe Is Life - General Relativity, Quantum Mechanics and The Shroud Of Turin - video http://www.metacafe.com/watch/3993426/ The End Of Christianity - Finding a Good God in an Evil World - Pg.31 - William Dembski Excerpt: "In mathematics there are two ways to go to infinity. One is to grow large without measure. The other is to form a fraction in which the denominator goes to zero. The Cross is a path of humility in which the infinite God becomes finite and then contracts to zero, only to resurrect and thereby unite a finite humanity within a newfound infinity." 
http://www.designinference.com/documents/2009.05.end_of_xty.pdf Philippians 2: 5-11 Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross. Wherefore God also hath highly exalted him, and given him a name which is above every name: That at the name of Jesus every knee should bow, of things in heaven, and things in earth, and things under the earth; And that every tongue should confess that Jesus Christ is Lord, to the glory of God the Father. "Miracles do not happen in contradiction to nature, but only in contradiction to that which is known to us of nature." St. Augustine Thus, much contrary to the mediocrity of earth, and of humans, brought about by the heliocentric discoveries of Galileo and Copernicus, the findings of modern science are very comforting to Theistic postulations in general, and even lends strong support of plausibility to the main tenet of Christianity which holds Jesus Christ is the only begotten Son of God. Matthew 28:18 And Jesus came up and spoke to them, saying, "All authority has been given to Me in heaven and upon earth." bornagain77
Petrushka you state: And yet when you kick a really big rock really hard it hurts, regardless of axiomatic reasoning. But what are you actually stubbing your toe on Petrushka? Can you stub your toe on information? Yes! Excerpt: the most solid, unchanging, indestructible “things” in the atoms of a rock are the unchanging, universal, transcendent, information constants, that are holding the rock together, exercising overriding dominion of all quantum events. Transcendent information constants that have not varied one iota from the universes creation. ------ Testing Creation Using the Proton to Electron Mass Ratio Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,, as well Petrushka photons reduce to "infinite transcendent information" Explaining Information Transfer in Quantum Teleportation: Armond Duwell †‡ University of Pittsburgh Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) — Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport. https://uncommondescent.com/intelligent-design/nuclear-power-a-new-movement-you-won%E2%80%99t-believe/#comment-355516 Thus Petruska you have the foundational "material" entity of this universe, photons, made out of "infinite" transcendent information,, and these photons, of which all mass is made, are constrained in their actions by universal transcendent information constants. The whole universe is reducible to "The Word"!!! bornagain77
I will return to this thread if and when I hear from Dr. Cano. I would expect something within a week, if he responds at all. Petrushka
I have no problem accepting the never-ending incompleteness of science. And yet when you kick a really big rock really hard it hurts, regardless of axiomatic reasoning. Some statements about the physical world are more useful than others. The accumulation of useful statements is the business of science. Mainstream biology leads to the prediction of and finding of fossils like Lucy and Tiktaalik. Whether you find this kind of knowledge useful or important is a matter of your personal psychology. But the methods of mainstream science are the methods that advance this kind of knowledge. Occasionally they lead to a new technology or a new medicine, or even a new beer. When you assert that a large chunk of history is outside the purview of mainstream science, you assert that there is no point in going forward. Whatever the cause of historical events, they are not the result of regular processes. No more regularities can be found. Historically, this has not been a useful approach. Petrushka
Petrushka, You are confused.
“Information” is an abstraction, and abstractions, by definition, are simplifications of reality. If there is an apparent discrepancy between what is observed in biochemistry and the abstraction of it, the observations win.
All observations of our material world are abstractions of reality. All information of our material world is an abstraction of reality. All information about our material world came about by observation. Information does not exist as material particles among the other particles of matter in the universe; it requires observation (perhaps more aptly stated as perception) in order to exist at all. Perception creates a semiotic abstraction of reality to become instantiated within a medium, which may then be transferred to other mediums. Upright BiPed
Petrushka,
Pure mathematics and formal logic are things unto themselves, but applied mathematics and mathematical descriptions of natural phenomena always seem to fall short.
Falling short is an abstraction.
Quantum theory has been described as the most nearly perfect theory in science. Every experimental test has confirmed its predictions to the limits of instruments.
Predictions are abstractions.
It is in that sense — incompleteness — that I assert that abstractions never completely describe reality.
"Incompleteness" and "never" are abstractions.
Reason proceeds from axioms, premises and assumptions, and there are no pure axiomatic truths about physical reality.
But there are about reason herself. If you deny this, you cannot go on reasoning at all.
Reasoning about approximations has certainly proved useful over the centuries, but it is not TRUTH.
All we can ever glean by descriptions of the natural world are only approximations. You're exactly right, we can never get to truth by studying the natural world. Clive Hayden
Abstractions are reality to me, more so than material...
Pure mathematics and formal logic are things unto themselves, but applied mathematics and mathematical descriptions of natural phenomena always seem to fall short. Quantum theory has been described as the most nearly perfect theory in science. Every experimental test has confirmed its predictions to the limits of instruments. And yet it is incomplete. It fails to account for gravity. Mathematical descriptions of gravity are also incomplete. That's physics, our hardest and soundest science. It is in that sense -- incompleteness -- that I assert that abstractions never completely describe reality. Reason proceeds from axioms, premises and assumptions, and there are no pure axiomatic truths about physical reality. We obtain approximations through observation and research. Reasoning about approximations has certainly proved useful over the centuries, but it is not TRUTH. Petrushka
Very well said Mr. Hayden,, Petrushka you stated: “Information” is an abstraction, and abstractions, by definition, are simplifications of reality. No Petrushka, Information is reality! "It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum teleportation: http://www.metanexus.net/Magazine/ArticleDetail/tabid/68/id/8638/Default.aspx As well, "pure transcendent information" is now shown to be "conserved". (i.e. it is shown that all transcendent information which can possibly exist, for all possible physical events, past, present, and future, already must exist. This is since transcendent information exercises direct dominion of energy which cannot be created or destroyed by any "material" means. i.e. First Law of Thermodynamics) Conservation Of Transcendent Information - 2007 - video http://www.metacafe.com/watch/3995275 These following studies verified what I had suspected in the preceding video: How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. --- As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/teleportation1.htm Quantum Teleportation - IBM Research Page Excerpt: "it would destroy the original (photon) in the process,," http://www.research.ibm.com/quantuminfo/teleportation/ Unconditional Quantum Teleportation - abstract Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported,, http://www.sciencemag.org/cgi/content/abstract/282/5389/706 Of note: conclusive evidence for the violation of the First Law of Thermodynamics is firmly found in the preceding experiment when coupled with the complete displacement of the infinite transcendent information of "Photon c": http://docs.google.com/Doc?docid=0AYmaSrBPNEmGZGM4ejY3d3pfMzBmcjR0eG1neg In extension to the 2007 video, the following video and article shows quantum teleportation breakthroughs have actually shed a little light on exactly what, or more precisely on exactly Whom, has created this universe: Scientific Evidence For God (Logos) Creating The Universe - 2008 - video http://www.metacafe.com/watch/3995300 etc... etc.. etc.. bornagain77
Petrushka,
“Information” is an abstraction, and abstractions, by definition, are simplifications of reality.
That's quite an abstraction you have there then Petrushka, because that is certainly a simplification of reality. Abstractions, in reality, can be more complicated or less complicated than reality, it just depends on what you mean by simplifying and what you mean by reality. Abstractions are reality to me, more so than material, which, as veilsofmaya pointed out, isn't so solid anymore. But things like love and mercy, justice and dignity, mathematics and morality, are just as solid as they ever were, for we understand their makeup. We understand what makes them what they are, whereas we have no equivalent understanding of what makes up matter, or why two things connected physically should be connected philosophically. We know that the law of non-contradiction and 2+2=4 are necessities, true and solid, what the mental and philosophical necessity is behind why a bird that flies must also lay eggs, we cannot say. All we can say is that we've seen them together, but that is not to say that they must, by some hidden philosophical necessity, always fly and lay eggs together. We must not say that all apples should be golden, green or red, and that there is a mental necessity against their being blue. There is no such necessity that we can see. So the "reality" is the vision, the metaphysical understanding of things we can actually understand, which are metaphysical things. We can only describe the material world, and on the most basic levels cannot even describe it very well, we must come up with metaphysical metaphors like a dead or alive cat in a box to even attempt to describe it. But it must sink in that natural descriptions are not explanations. Natural descriptions do not add-up to reasonable proscriptions of nature, that is, proscription of nature understood by our reason. Clive Hayden
You seem to have quite the talent for completely failing to grasp the immensity of the universe wide chasm that separates purely material processes from functional information:
"Information" is an abstraction, and abstractions, by definition, are simplifications of reality. If there is an apparent discrepancy between what is observed in biochemistry and the abstraction of it, the observations win. Evolution is observed. The designer is neither ovserved nor described. ID has no description of the designer, no hypotheses concerning the nature of the designer, the times an places at which the designer may have acted, nor any description or hypothesis concerning the methods or motives of the designer. In short, we have a choice between evolution, which is an incomplete analysis, but which suggests lines of research, and ID which basically sits on the sidelines and points out gaps. ID rejects the mainstream history of life while proposing no alternative. Other than incredulity and irrelevant calculations of probability, ID has nothing to add to ongoing research. I suppose by harping on gaps, ID motivates some scientists to fill them, but they would be filled anyway. Much of science is driven by available technology. Some gaps are not amenable to research because they are beyond the reach of current technology. It's the weekend, so I expect no email response for a while, if ever. Petrushka
Petrushka, "I have emailed Dr. Cano and linked him to your post #90, asking him whether his work supports an argument against evolution." Seeing as Dr. Cano has navigated treacherous Darwinian waters for almost twenty years with this evidence for stability, it will be interesting to see his reply. I bet a large measured dose of diplomacy will be forthcoming. But my question to you Petrushka is why in the world does not all the other evidence that has been presented to you count against evolution? You seem to have quite the talent for completely failing to grasp the immensity of the universe wide chasm that separates purely material processes from functional information: Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail. http://www.fourmilab.ch/documents/reading_list/indices/book_726.html further notes: These following articles refute Lenski's supposed "evolution" of the citrate ability for the E-Coli bacteria after 20,000 generations of the E-Coli: Multiple Mutations Needed for E. Coli - Michael Behe Excerpt: As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. 
coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.),,, If Lenski’s results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell. http://www.amazon.com/gp/blog/post/PLNK3U696N278Z93O Lenski's e-coli - Analysis of Genetic Entropy Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less “fit” than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment. http://www.answersingenesis.org/articles/aid/v4/n1/beneficial-mutations-in-bacteria bornagain77
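For readers who want to see roughly where figures like the "500 bits" and the improbability of a 150-amino-acid protein in the review quoted above come from, here is a minimal back-of-the-envelope sketch in Python. The round constants (particle count, universe age, Planck time) are common textbook estimates rather than the reviewer's exact inputs, and specifying one exact sequence is an upper bound, not a count of functional sequences.

```python
import math

# Back-of-the-envelope arithmetic behind the "500 bits by random search" figure
# quoted above. All constants are rough, commonly cited estimates (assumptions).

particles = 1e80          # estimated elementary particles in the observable universe
age_seconds = 4.3e17      # ~13.7 billion years, in seconds
planck_time = 5.4e-44     # seconds

trials = particles * (age_seconds / planck_time)  # one "trial" per particle per Planck time
search_bits = math.log2(trials)                   # ~470 bits of search capacity

protein_length = 150
target_bits = protein_length * math.log2(20)      # ~648 bits to specify one exact sequence

print(f"blind-search capacity ~ {search_bits:.0f} bits")
print(f"one exact 150-aa sequence ~ {target_bits:.0f} bits")
```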
Phaedros,
I wonder if Clive has something against bornagain….
Not at all. Clive Hayden
I have emailed Dr. Cano and linked him to your post #90, asking him whether his work supports an argument against evolution. Petrushka
Petruska, to be as clear as possible on the yeast, the gain in ability to utilize a broader scope of sugars will be found to come at a cost of original "optimal" functionality of the yeast in the wild: The parallel test for bacteria is found in this test: Initially, it was difficult to demonstrate differences between wild-type and clinical strains in a rich media (Nutrient or Typticase-soy agar). There were no differences in growth rate or colony size. However, after switching to minimal media and observing hourly, the differences were readily observed. In order to confirm and extend the differences in growth rates between the sensitive BS303S strain (isolated from pond water) and the resistant WFR strain, a fitness/competition assay was performed. This assay sought to simulate famine conditions in the natural environment by utilizing minimal media and to evaluate the wild-type against ampicillin resistant, clinical strains exhibiting loss of prodigiosin production. Once subjected to conditions that were “harsh,” differences were seen in their performance (growth rate and robustness of colonies). http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore Petrushka, your yeast example is very much like Lenski's "cuddled" e-coli. bornagain77
Petrushka, your references are all dated to 2005 and earlier and are overturned by the 2009 study of 419 million year old extinct bacteria that confirmed Vreeland's methodology! You claim yeast has confirmed a gain in functional complexity by utilizing a broader scope of carbohydrates, but exactly what have you confirmed? Are the modern strains more fit in the wild than the ancient strain, as measured by a fitness test? Petruska you are imposing your interpretation onto the evidence: Since you can't believe Dr. Cano's word on there being 3 sources of independent verification, I suggest you e-mail him for the proof. He is personable and should respond if you ask nicely. notes: Is Antibiotic Resistance evidence for evolution? - "The Fitness Test" - video http://www.metacafe.com/watch/3995248 Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008 http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria: http://www.trueorigin.org/bacteria01.asp bornagain77
http://mbe.oxfordjournals.org/cgi/content/full/18/6/1143 http://geology.gsapubs.org/content/33/1/e93.1 http://mbe.oxfordjournals.org/cgi/reprint/19/9/1637 Petrushka
Petrushka, the bacteria are confirmed to be ancient by 3 outside sources, the DNA has been sequenced, and the small changes are confirmed to be due to Genetic Entropy not genetic drift...
Just give me a link to the three independent confirmations. That's all I ask. I've spent an hour googling and all I find are arguments that the "unchanged" DNA is contamination. As for genetic entropy, the scientist who is now in the beer business says the difference is that modern yeast can metabolize more kinds of carbohydrates. Is that consistent with entropy? Petrushka
Of note to the main topic of reductionism, I think this is very interesting: the complexity of computing the actions of even simple atoms quickly exceeds the capacity of our supercomputers of today: Delayed time zero in photoemission: New record in time measurement accuracy - June 2010 Excerpt: Although they could confirm the effect qualitatively using complicated computations, they came up with a time offset of only five attoseconds. The cause of this discrepancy may lie in the complexity of the neon atom, which consists, in addition to the nucleus, of ten electrons. "The computational effort required to model such a many-electron system exceeds the computational capacity of today's supercomputers," explains Yakovlev. http://www.physorg.com/news196606514.html bornagain77
Petrushka, the bacteria are confirmed to be ancient by 3 outside sources, the DNA has been sequenced, and the small changes are confirmed to be due to Genetic Entropy not genetic drift; for you to just restate your position against what was established is personal incredulity on your part and is not science. For you to overturn the fact that was established you appealed to yeast, but have you, or any other "expert", witnessed yeast "evolving" past trivial variation within kind that stays within the principle of genetic entropy? Have you cited anything other than your personal belief? Of course not, for no such evidence exists or can exist since it would violate known principles of science! bornagain77
MY expectations are clear. If the amber organisms are confirmed to be ancient, and if their DNA is sequenced, they will show a pattern of genetic drift consistent with our understanding of molecular clocks.
I think I mentioned this, but it's worth repeating: one of the claims made for beer made from the ancient organisms is that it is different because the yeast is different. Commercial product claims are not heavily regulated, so I wouldn't use this as scientific evidence. But it demonstrates that the scientist who found the ancient organism expects them to be genetically different from modern organisms. Petrushka
Petrushka, To further solidify my claim for Genetic Entropy explaining the small amount of change witnessed in the almost exact genetic sequences of the modern bacteria from the ancient bacteria: Raul Cano states in this article: "After the onslaught of publicity and worldwide attention (and scrutiny) after the publication of our discovery in Science, there have been, as expected, a considerable number of challenges to our claims, but in this case, the scientific method has smiled on us. There have been at least three independent verifications of the isolation of a living microorganism from amber. http://www.microbeworld.org/index.php?option=com_content&view=article&id=388:raul-cano-career-profile&catid=75:career-profiles&Itemid=219 Commentary from another article: "Raul J. Cano and Monica K. Borucki discovered the bacteria preserved within the abdomens of insects encased in pieces of amber. In the last 4 years, they have revived more than 1,000 types of bacteria and microorganisms -- some dating back as far as 135 million years ago, during the age of the dinosaurs.,,, In October 2000, another research group used many of the techniques developed by Cano’s lab to revive 250-million-year-old bacteria from spores trapped in salt crystals. With this additional evidence, it now seems that the "impossible" is true." http://www.physicsforums.com/showthread.php?t=281961 Thus with outside verification from 3 sources,, this test,,,,,,,, In reply to a personal e-mail from myself, Dr. Cano commented on the “Fitness Test” I had asked him about: Dr. Cano stated: “We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative “ancient” B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate.”: Fitness test which compared the ancient bacteria to its modern day descendants, RJ Cano and MK Borucki ,,,,,,, is further solidified. Petrushka what you need to overturn this "fact" for Genetic entropy is solid empirical evidence and not just personal incredulity on your part or any other "experts" part that you may cite. bornagain77
I don't see that you've walked me through anything. I ask for a clear and coherent statement of your point in your own words. My assertion was that genomes change over time. I made no reference to morphologies changing over time. The evidence you cited does not address the question of genomes changing over time. To the extent they discuss DNA, they are contradictory. One study shows no change, but hasn't been independently replicated. The other study shows either vast change, or degradation of the genome. All I am asking for is a clear statement of your expectations. Do genomes change over time, and what pattern would expect to find if the amber organism DNA is sequenced? MY expectations are clear. If the amber organisms are confirmed to be ancient, and if their DNA is sequenced, they will show a pattern of genetic drift consistent with our understanding of molecular clocks. If you are aware of any journal articles discussing the sequencing of amber organisms, I'd appreciate a link or a reference. Petrushka
Petrushka, I am not walking you through it again. I have made my point clearly and you have just stated nothing but blind faith save for the one article which I addressed but you did not heed. bornagain77
Petrushka, the morphologies of ancient bacteria are surprisingly stable:
I haven't been discussing morphologies. I've asked several times, as plainly as I can, whether you are claiming that genomes don't change over time. I asked if any genomes from the amber organisms have been sequenced, and if so, are they the same or different from modern organisms. You've mentioned two examples of gene sequences. One was identical to modern organism and almost certainly the result of contamination. I don't see that those findings have been independently replicated. The other example involved DNA snippets that are entirely unlike any current living organism. So instead of referring me to YouTube videos, which I am unlikely to watch, how about explaining in your own words, exactly what your point is. You seem to be arguing that in some lineages, genomic change hasn't occurred, but your evidence doesn't address this. If there are lineages where genetic drift hasn't occurred, what would that say about genetic entropy? Petrushka
Freelurker_: I will be happy to read your further comments. About Behe, I suppose you refer to DBB and the concept of irreducible complexity. Still, I don't understand the emphasis on engineering or not, but I will try just the same to answer, also to clarify further the terminology. IMO, what Behe is doing in DBB is the following:

a) Analyzing a couple of biological machines (the famous flagellum, and the clot cascade), and commenting in detail about what they do, how they work, and how their function is due to the fact that they are made of assembled parts, each of them complex, which contribute to the general function because they are assembled that way. This discussion is the same as what you call "determining the design", and is equivalent to what an engineer does, according to your definitions, when he analyzes how some software works and how its function is implemented. To be more precise, I think we should call this part: verifying and analyzing the functional specification. In that sense, the word "specification" is more correct, because it refers to an observable property of the object we are studying, and does not in itself imply the design inference. IOW, the object could still appear specified without having been designed, if its complexity is low.

b) That done, Behe discusses the causal model for those biological objects, and in particular the common model of RV + NS. And he argues that the specific property elucidated in the previous analysis, being made of complex structured parts which are all necessary to generate the general function of the object, a property which he calls irreducible complexity, is in itself a valid empirical argument against the usual causal model of RV + NS. That derives from the fact that the necessity part of the model (NS) can operate only when the function is present, and the modular nature of the function makes that explanation completely unlikely for those kinds of objects. That means applying a concept derived from the engineering analysis of the object (how it works, how it is structured, its modular function, the irreducibility of that function) to invalidate an existing (and vastly accepted) causal model.

The design inference, then, follows implicitly according to the general model outlined by Dembski in the explanatory filter: as the objects observed are specified (in this case, functional specification); as they are complex (made of many different proteins, each of them extremely complex); and as there is no known necessity model which could credibly generate the whole object (that's where the Behe analysis comes in, in disproving the generally accepted credibility of the NS necessity model); then design is the best explanation.

This second part (inferring design) is probably not what engineers usually do, because engineers usually know for certain that the software they are analyzing is designed. But, if an engineer were called to answer the question: is this string of bits designed software?, then he would act in the same way: first he would try to analyze if the string is software at all (analyze if it has function, and how that function is implemented). IOW, verify if the string is functionally specified. Then, he would probably infer design, but before doing that he should ask himself if there is any credible model where that string could have arisen by chance, necessity, or a mixture of the two. For that task, the concepts of complexity and, if present, of irreducible complexity, are used. gpuccio
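Since gpuccio walks through Dembski's explanatory filter step by step above, here is a minimal sketch of that decision structure in Python. The three predicates and the 500-bit threshold are placeholders standing in for judgments the thread leaves informal; this is an illustration of the logic as described, not an implementation anyone in the discussion provides.

```python
# Hypothetical sketch of the explanatory-filter logic described above.
# is_specified, complexity_bits, and has_credible_natural_model are
# placeholder callables; nothing in the thread defines how to compute them.

def explanatory_filter(obj, is_specified, complexity_bits,
                       has_credible_natural_model, threshold_bits=500):
    """Return the best explanation for obj under the filter as described."""
    if has_credible_natural_model(obj):
        return "necessity"        # a known law-like/necessity model accounts for it
    if not is_specified(obj):
        return "chance"           # no independent functional specification
    if complexity_bits(obj) < threshold_bits:
        return "chance"           # specified, but not complex enough to rule out chance
    return "design"               # specified and complex, with no credible natural model
```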
I wonder if Clive has something against bornagain.... Phaedros
Thanks Clive, sorry for misspelling your name Petrushka. bornagain77
bornagain77, It's Petrushka, not Petruska or Peruska. Clive Hayden
Petruska, the morphologies of ancient bacteria are surprisingly stable: AMBER: THE LOOKING GLASS INTO THE PAST: Excerpt: These (fossilized bacteria) cells are actually very similar to present day cyanobacteria. This is not only true for an isolated case but many living genera of cyanobacteria can be linked to fossil cyanobacteria. The detail noted in the fossils of this group gives indication of extreme conservation of morphology, more extreme than in other organisms. http://bcb705.blogspot.com/2007/03/amber-looking-glass-into-past_23.html Thus as far back in time as we can collect fossilized bacteria they look exactly the same as their modern day counterparts (save that some may be a little larger). As with metazoans, there is never a "transition" to be documented, save of course for the transitions found in the imaginations of neo-Darwinists such as yourself. That neo-Darwinists would expect a large amount of "genetic drift" all the while allowing for the morphology to remain exactly the same is an affront to reason. Thus that Vreeland would find the DNA sequences to be almost exactly the same as modern is actually to be expected if one were judging solely from morphological considerations of the ancient and modern bacteria. As for your other "evidence" of which you cited none, it is typical of neo-Darwinists who come on this site to try to rationalize away the evidence. But alas, you can plead for more time, for more evidence, or for whatever, but the fact is that neo-Darwinists have always had, and will always have, nothing but the smoke and mirrors of deception to back their delusions up, whereas ID can rest its foundation on the sure foundations of the second law of thermodynamics and conservation of information. Myself, I can't see any reason why evolutionists are so enamored with a philosophy that promises them nothing but death and has been the root cause of so much needless suffering in the world. Shoot, the materialistic philosophy, which is falsified by "non-local" quantum mechanics by the way, can't even compare to the promises I find in Christ, who is very much alive by the way. Kutless: Promise of a Lifetime http://www.youtube.com/watch?v=2wgA93WQWKE "Awake and Alive" – Skillet http://www.youtube.com/watch?v=gw20o0gOorI further note: Odd Geometry of Bacteria May Provide New Way to Study Earth's Oldest Fossils - May 2010 Excerpt: Known as stromatolites, the layered rock formations are considered to be the oldest fossils on Earth.,,,That the spacing pattern corresponds to the mats' metabolic period -- and is also seen in ancient rocks -- shows that the same basic physical processes of diffusion and competition seen today were happening billions of years ago,,, http://www.sciencedaily.com/releases/2010/05/100517152520.htm Ancient Fossils That Evolutionists Don't Want You To See http://www.youtube.com/watch?v=jzFPhRzhMGs THE FOSSILS IN THE CREATION MUSEUM - 1000's of pictures of ancient "living" fossils that have not changed for millions of years: http://www.fossil-museum.com/fossils/?page=0&limit=30 Fossils Without Evolution - June 2010 Excerpt: New fossils continue to turn up around the world. Many of them have an amazing characteristic in common: they look almost exactly like their living counterparts, despite being millions of years old,,, http://www.creationsafaris.com/crev201006.htm#20100618a bornagain77
@gpuccio - I am going to read this whole thread again and put together a comprehensive response. I will specifically address your latest comment. It may take a day or two. Meanwhile, let me ask a couple of questions that may get to the heart of the matter: Is Michael Behe doing engineering? Why or why not? These questions are addressed to all IDist engineers. Freelurker_
Peruska, I ain’t going to waste my time walking you through it again.
I'm not asking you to repeat yourself. I'm simply asking why you are intrigued by DNA that appears to be old. I haven't seen any analysis of the ancient yeast genome, but everything I've been able to find indicates it's significantly different from modern yeast. In fact, that's a selling point for the beer being brewed from it. We would expect really ancient DNA to be different from anything modern. Either because the lineage has changed, or because the ancient DNA has degraded. I haven't found anything to contradict the assumption that the "unchanged" 250 million year old DNA represents contamination. Unfortunately, this is the one claim that hasn't been verified or replicated independently. I'm betting that as samples of ancient DNA are verified and the methodologies verified, we will find differences consistent with evolution. Petrushka
Peruska, I ain't going to waste my time walking you through it again. bornagain77
I don't see that you've addressed the problem. You argue that DNA is old because it's different from any current DNA. At the same time you argue that DNA that is not different is old. I don't see any journal articles describing the successful sequencing of DNA from ancient amber. Perhaps you have a link. From my searches it looks like the claims on both sides are unsettled. Petrushka
Petruska, the fact is that Vreeland is verified by two lines of solid evidence, the recent 450 million year old study and the Cano study of ancient Amber sealed bacteria. That you would cite the very subjective molecular clock test, a test which is not derived from empirical tests in the first place, as well as a very subjective "genetic drift" guesstimate (Poisson distribution), a test which is also not derived from an empirical basis but from imposed human interpretations of how the sequences "should look" if evolution is true, only strengthens the falsification of neo-Darwinism by this line of evidence! bornagain77
Petruska, that is old news. bornagain77
Studies of ancient DNA have attracted considerable attention in scientific journals and the popular press. Several of the more extreme claims for ancient DNA have been questioned on biochemical grounds (i.e., DNA surviving longer than expected) and evolutionary grounds (i.e., nucleotide substitution patterns not matching theoretical expectations for ancient DNA). A recent letter to Nature from Vreeland et al. (2000), however, tops all others with respect to age and condition of the specimen. These researchers extracted and cultured a bacterium from an inclusion body from what they claim is a 250 million-year (Myr)-old salt crystal. If substantiated, this observation could fundamentally alter views about bacterial physiology, ecology and evolution. Here we report on molecular evolutionary analyses of the 16S rDNA from this specimen. We find that 2-9-3 differs from a modern halophile, Salibacillus marismortui, by just 3 unambiguous bp in 16S rDNA, versus the approximately 59 bp that would be expected if these bacteria evolved at the same rate as other bacteria. We show, using a Poisson distribution, that unless it can be shown that S. marismortui evolves 5 to 10 times more slowly than other bacteria for which 16S rDNA substitution rates have been established, Vreeland et al.'s claim would be rejected at the 0.05 level. Also, a molecular clock test and a relative rates test fail to substantiate Vreeland et al.'s claim that strain 2-9-3 is a 250-Myr-old bacterium. The report of Vreeland et al. thus falls into a long series of suspect ancient DNA studies.
http://www.ncbi.nlm.nih.gov/pubmed/11734907 Petrushka
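To make the statistical claim in that abstract concrete, the test it describes can be sketched in a few lines of Python: 3 observed 16S rDNA differences against roughly 59 expected, with the 5- to 10-fold slowdown the authors mention. The 3 and 59 figures come from the abstract; the code itself is only an illustration of a Poisson tail test, not the authors' analysis.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

observed = 3    # unambiguous 16S rDNA differences reported for strain 2-9-3
expected = 59   # differences expected at typical bacterial substitution rates

print(poisson_cdf(observed, expected))            # vanishingly small (~1e-21)
for slowdown in (5, 10):
    # if S. marismortui evolved 5-10 times more slowly, the expected count drops
    print(slowdown, poisson_cdf(observed, expected / slowdown))
```

At a five-fold slowdown the observation is still rejected at the 0.05 level (p ~ 0.003), while at a ten-fold slowdown it is not (p ~ 0.16), which is why the abstract frames its conclusion in terms of that 5-to-10-fold range.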
Petruska, read it again,,,, the 450 million year old sequences, which confirmed Vreeland's methodology, no longer exist period! The 250 million year old sequences almost matched exactly. The small change that is noted is verified to be due to Genetic entropy by Cano! bornagain77
I find it interesting that your sources argue on one hand that ancient DNA has not changed at all, and on the other hand, argue that contamination was excluded because the DNA had changed. Petrushka
Petruska, It is funny I cite hard facts to back up my position and you state blind faith to back up your position. bornagain77
But the word "achieve" is nonsensical in this context. Populations either survive or they don't. they aren't trying to achieve anything. Petrushka
The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution.
Except for jumping from one species to another. And except for evolving within an individual victim faster than the victim can develop defences. Petrushka
So you expect bacteria to change in their genome sequences while they are searching for a new sequence Petruska??
Genomes are changing all the time, although change is slow in human perception. The search metaphor is not very useful. Evolution doesn't search. The record of established populations finding "solutions" to changing ecosystems is not good. Most large and rapid changes to ecosystems result in extinction rather than adaptation. Petrushka
...whether or not evolution is just about specific targets or just change is irrelevant when we want to discuss already existing structures and proposed evolutionary sequences.
It makes no sense to calculate the odds against something that's already happened. Now if you had a time machine and could demonstrate that the E.coli flagellum originated through some history that did not include the accumulation of small changes, perhaps you could talk about probabilities. But the evidence is that flagella and cilia have many variants and employ many variations and many subsets of the proteins used by E.coli. Not only that, but many of the proteins can have functions unrelated to locomotion. In short, there is strong evidence that the flagellum is not isolated by a sea of non-viability. There is nothing in nature that requires a flagellum to exist. Either there are many pathways leading to its evolution, or it's a lotto winner. Either way, ID would have to demonstrate that all paths involve dead zones before having a case. Petrushka
Petruska you state: "It is, however, possible for large populations having short reproductive cycles to explore many point mutations. Hence the interest in bacteria in research. It’s probably why bacteria don’t go extinct and are difficult to eradicate." So you expect bacteria to change in their genome sequences while they are searching for a new sequence Petruska?? Some bacterium spores, in salt crystals, dating back as far as 250 million years have been revived, had their DNA sequenced, and compared to their offspring of today (Vreeland RH, 2000 Nature). To the disbelieving shock of many scientists, both ancient and modern bacteria were found to have the almost same exact DNA sequence. The Paradox of the "Ancient" Bacterium Which Contains "Modern" Protein-Coding Genes: “Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.” Heather Maughan*, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig§ and Russell H. Vreeland ; http://mbe.oxfordjournals.org/cgi/content/full/19/9/1637 Evolutionists were so disbelieving at this stunning lack of change that they insisted the stunning similarity was due to modern contamination. Yet the following study laid that objection to rest by verifying Dr. Vreeland's methodology was not introducing contamination: World’s Oldest Known DNA Discovered (419 million years old) - Dec. 2009 Excerpt: But the DNA was so similar to that of modern microbes that many scientists believed the samples had been contaminated. Not so this time around. A team of researchers led by Jong Soo Park of Dalhousie University in Halifax, Canada, found six segments of identical DNA that have never been seen before by science. “We went back and collected DNA sequences from all known halophilic bacteria and compared them to what we had,” Russell Vreeland of West Chester University in Pennsylvania said. “These six pieces were unique,,, http://news.discovery.com/earth/oldest-dna-bacteria-discovered.html This following study also corroborated Vreeland's work:: Revival and identification of bacterial spores in 25- to 40-million-year-old Dominican amber Dr. Cano and his former graduate student Dr. Monica K. Borucki said that they had found slight but significant differences between the DNA of the ancient, 25-40 million year old amber-sealed Bacillus sphaericus and that of its modern counterpart,(thus ruling out that it is a modern contaminant, yet at the same time confounding materialists, since the change is not nearly as great as evolution's "genetic drift" theory requires.) http://www.sciencemag.org/cgi/content/abstract/268/5213/1060 30-Million-Year Sleep: Germ Is Declared Alive http://query.nytimes.com/gst/fullpage.html?res=990CEFD61439F93AA25756C0A963958260&sec=&spon=&pagewanted=2 In reply to a personal e-mail from myself, Dr. Cano commented on the "Fitness Test" I had asked him about: Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. 
Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate.": Fitness test which compared the 30 million year old ancient bacteria to its modern day descendants, RJ Cano and MK Borucki Thus, the most solid evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy; a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking an impressive loss of protein complexity, and thus loss of functional information, from the ancient amber sealed bacteria. Is Antibiotic Resistance evidence for evolution? - "Fitness Test" - video http://www.metacafe.com/watch/3995248 A review of The Edge of Evolution: The Search for the Limits of Darwinism The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-michael-behe-edge-of-evolution bornagain77
But Petrushka, that's exactly what Behe showed. Even organisms that reproduce at great rates take many years to explore those point mutations. Also, Petrushka, whether or not evolution is about specific targets or just change is irrelevant when we want to discuss already existing structures and proposed evolutionary sequences. Those do require specific changes; that's the point of irreducible complexity. You hinder science because instead of looking for the required changes you say, well, it just happened because look! It happened! Well, that's neither interesting nor adequate as a scientific explanation. Phaedros
You’ve already been corrected on this upthread in 65, 66, and elsewhere.
There's nothing relevant to my argument in 65 or 66. Just about every element of a genome changes over time. ID argues that specified changes are unlikely, but evolution doesn't go in specified directions. It doesn't wait for just the right mutation. It doesn't search for a target. Things just change. The argument that a specific sequence of change, or a specific collection of changes, is improbable is simply irrelevant. No serious biologist argues that a specified change will occur within a reasonable time. It is, however, possible for large populations having short reproductive cycles to explore many point mutations. Hence the interest in bacteria in research. It's probably why bacteria don't go extinct and are difficult to eradicate. Petrushka
Moreover, DNA sequences, and the protein machinery that replicates this DNA, are found to be vastly different in even the most ancient of different single-celled organisms:

Uprooting The Tree Of Life - W. Ford Doolittle Excerpt: as DNA sequences of complete genomes have become increasingly available, my group and others have noted patterns that are disturbingly at odds with the prevailing beliefs. http://people.ibest.uidaho.edu/~bree/courses/2_Doolittle_2000.pdf

Did DNA replication evolve twice independently? - Koonin Excerpt: However, several core components of the bacterial (DNA) replication machinery are unrelated or only distantly related to the functionally equivalent components of the archaeal/eukaryotic (DNA) replication apparatus. http://nar.oxfordjournals.org/cgi/content/full/27/17/3389

There simply is no smooth "gradual transition" to be found between these most ancient of life forms, bacteria and archaea, as even this following "evolution friendly" article clearly points out:

Was our oldest ancestor a proton-powered rock? Excerpt: In particular, the detailed mechanics of DNA replication would have been quite different. It looks as if DNA replication evolved independently in bacteria and archaea... Even more baffling, says Martin, neither the cell membranes nor the cell walls have any details in common (between the bacteria and the archaea). http://www.newscientist.com/article/mg20427306.200-was-our-oldest-ancestor-a-protonpowered-rock.html?page=1

Kangaroo genes close to humans Excerpt: Australia's kangaroos are genetically similar to humans... "There are a few differences, we have a few more of this, a few less of that, but they are the same genes and a lot of them are in the same order," ... "We thought they'd be completely scrambled, but they're not. There is great chunks of the human genome which is sitting right there in the kangaroo genome," http://www.reuters.com/article/science%20News/idUSTRE4AH1P020081118

"Why Darwin was wrong about the tree of life," New Scientist (January 21, 2009) Excerpt: Even among higher organisms, "the problem was that different genes told contradictory evolutionary stories"... "despite the amount of data and breadth of taxa analyzed, relationships among most [animal] phyla remained unresolved." ... Carl Woese, a pioneer of evolutionary molecular systematics, observed that these problems extend well beyond the base of the tree of life: "Phylogenetic incongruities [conflicts] can be seen everywhere in the universal tree, from its root to the major branchings within and among the various taxa to the makeup of the primary groupings themselves." ... "We've just annihilated the (Darwin's) tree of life." http://www.evolutionnews.org/2009/05/a_primer_on_the_tree_of_life_p_1.html#more

A Primer on the Tree of Life (Part 4) Excerpt: "In sharks, for example, the gut develops from cells in the roof of the embryonic cavity. In lampreys, the gut develops from cells on the floor of the cavity. And in frogs, the gut develops from cells from both the roof and the floor of the embryonic cavity. This discovery—that homologous structures can be produced by different developmental pathways—contradicts what we would expect to find if all vertebrates share a common ancestor." - Explore Evolution http://www.evolutionnews.org/2009/05/a_primer_on_the_tree_of_life_p_3.html#more bornagain77
Repeatable Evolution or Repeated Creation? Fazale Rana http://www.reasons.org/evolution/evolutionary-trees/repeatable-evolution-or-repeated-creation bornagain77
"Evolution is about change, not goal seeking...Calculations of probabilities for specific sequences of change are irrelevant" You've already been corrected on this upthread in 65, 66, and elsewhere. Never let a good misrepresentation go to waste, eh Petrushka? Upright BiPed
Petrushka, and why can't you look them up?
I searched on "Dr. Rana" and didn't find any published articles. It is a fact that convergent evolution does not involve repeating a sequence of mutations. Behe is correct in asserting that any specified long sequence of mutations is not likely to happen. This particular assertion has no implications for evolution, however. Evolution is about change, not goal seeking. Calculations of probabilities for specific sequences of change are irrelevant. Petrushka
Freelurker: Let's put Dembski aside for a moment, will you? Let's just speak simply about this point:

a) You say engineers figure out how something works (determine the design). That's fine. I just say that IDists too are very much interested in "determining the design". To claim that something is functional, you must certainly understand its function, and how that function is implemented. To go back to my example of myoglobin, we have to know what myoglobin does, and how it does it. That is a task which has to be accomplished, and if you say that such a task is more specific to an engineer's approach, that's fine with me. But that task is a fundamental part of the ID discourse.

b) Then there is the causal part. ID does not stop at "determining the design". It says that the function, if present and complex, can be attributed to intelligent intervention. But what would an engineer say? The same thing. If an engineer determines a functional design, say in a piece of software, will he doubt that a programmer wrote that software? No. Unless the function is so basically simple that it could be only an example of pseudo-design.

IDists and engineers are not two separate classes of people. The only important class of people is: people who can reason correctly about design and causation.

Going back to the "complement": I will certainly not speak for Dembski, but for me it is obvious that, if we are speaking of causal factors, regularity, randomness and design are three different causal explanations of things we observe. But the meaning of "design" is always the same: something is designed if an intelligent agent designed it. In that sense, it did not originate from random systems or from laws of regularity. The only important concept here is that the causal intervention of a conscious intelligent agent makes the difference, and that that difference can be marked by a kind of output (CSI, dFSCI or any other equivalent definition) which is never observed when an intelligent agent is not involved.

So, I still don't understand where the equivocation is. We IDists attribute some forms of information (like the sequence of a protein) to design when:

a) we can determine function in it: we understand that it works and how it works, engineer-like.

b) we recognize that the above function is complex enough that it cannot be attributed to regularity and/or chance.

In that case, we know from empirical observations that it can be safely attributed to the causal intervention of a conscious intelligent agent. To me, that's very clear, and there is no equivocation.

So, to sum up, it's perfectly correct to say that a piece of software has an object-oriented implementation of function, which means that it is functionally specified. And, if it is also complex enough, to attribute its causation to design, the complement of regularity and chance. gpuccio
I said:
The point is that, in ID, “design” does not mean an arrangement of parts. This is most clear in Dembski’s definition of design, which is “the complement of regularity and chance.”
gpuccio said:
No, that’s not correct. In ID, like in any other context, design means that something has been designed, IOW that it is the intentional and purposeful product of an intelligent conscious being. There is no coubt about that. That’s what design means, nothing else.
The above is indeed Dembski's definition of design. You can see it in his book The Design Inference. Link. It may not be your definition, but you cannot deny that it is Dembski's. As discussed earlier, what "design" means to Dembski and Behe is not what it means in an engineering context. In engineering, a design is an arrangement of parts. It would make no sense to say that a piece of software has an object-oriented complement of regularity and chance. The thing being reviewed at an engineering design review is an arrangement of parts, not a complement of regularity and chance. The distinction I'm talking about is important because it is the distinction that is lost by IDists when they equivocate between (1) figuring out how something works ("determining the design") and (2) attributing something to intelligence ("detecting design"). This equivocation shows up when IDists (perhaps not you) try to say either that engineers do what IDists do or that IDists do what engineers do. Freelurker_
Freelurker: In nature, IDists are trying to detect purposefulness without detecting purposes, i.e., they are trying to detect "free-floating purposefulness."

I appreciate this discussion, but I believe you still have not completely got my point. If you look at my definition of dFSCI, you can see that dFSCI can be defined and measured in proteins using the specific known function of that protein, or protein family, as the marker of specification. The function, in this case, is rather explicit, so much so that it can be found in any database of proteins on the internet. So, ID detects purpose in a specific protein, and a very explicit purpose too!

Let's make an example: if you look for "myoglobin, human" (just to stay with a very simple case) on the UNIPROT site, you can find, in the pertinent page for record P02144, the following data:

Function: Serves as a reserve supply of oxygen and facilitates the movement of oxygen within muscles.
Biological process: Oxygen transport, Transport
Ligand: Heme, Iron, Metal-binding

Therefore, we have very good information not only on the generic function (reserve supply of oxygen), but also on the specific way that function is biochemically implemented (through the heme ligand and iron). Where is your "free-floating purposefulness"? The protein is specified, because a very explicit function can be defined for it. That's specific purpose, specific function.

We only have to compute the final complexity to have a measure of the dFSCI (that can be done, for instance, by the Durston method for protein families). If the dFSCI is above some conventional threshold we agree upon, we can conclude that myoglobin sequences exhibit dFSCI. If no specific pathway based on necessity is known which can explain the emergence of those sequences, we can infer design as the best explanation. No "free-floating purposefulness". Just a simple method, explicit and clear. gpuccio
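[Editor's note: as a rough illustration of the kind of calculation the "Durston method" mentioned above refers to, here is a minimal sketch in Python, assuming a toy alignment of invented sequences (not real myoglobin data). Functional sequence complexity, in "fits", is estimated as the sum, over aligned sites, of the ground-state entropy log2(20) minus the observed entropy of the functional family at that site.]

import math
from collections import Counter

def site_entropy(column):
    # Shannon entropy (bits) of the amino acid distribution at one aligned site
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment):
    # Durston-style functional sequence complexity, in "fits":
    # sum over sites of (ground-state entropy log2(20) minus observed site entropy)
    ground = math.log2(20)  # null state: all 20 amino acids equiprobable
    n_sites = len(alignment[0])
    total = 0.0
    for i in range(n_sites):
        column = [seq[i] for seq in alignment]
        total += ground - site_entropy(column)
    return total

# Toy alignment of four invented sequences, 5 sites each (illustration only)
toy = ["MVLSG", "MVLTG", "MILSG", "MVLSA"]
print(round(functional_bits(toy), 2))  # fits for this invented mini-family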
Petrushka, and why can't you look them up? Something tells me that if you can't even expend the energy to look up the proper references, you would not be persuaded even if I presented them to you personally. I'll give you a clue where many references are, though: click on my handle. bornagain77
I don't watch videos. Give me a link to a textbook or journal paper. Petrushka
Petrushka, you state: "Biologists have never believed a specific sequence is likely to occur, or that any historical sequence would recur (or occur in reverse)."

But Petrushka, if you had watched the video I listed, that is exactly the point that Dr. Rana makes. Even though it is clearly not supposed to happen, it does. Thus either your presupposition is wrong, which it is not, or the neo-Darwinian framework is falsified by another line of evidence.

Speaking of falsification of the neo-Darwinian framework, I kind of like the falsification of the entire genetic reductionism scenario pointed out by Dr. Meyer in this video: Stephen Meyer - Complexity Of The Cell - Layered Information - video http://www.metacafe.com/watch/4798685

So please tell me, what is your mechanism for change now that genetic reductionism is falsified? bornagain77
As all 15 AAs are different in the beginning from the final target, what we have here is a random search in the search space of all possible combinations of those 15 AAs. That space is 20^15.
The calculation is completely irrelevant. Protein A is not changing into B. It's simply changing. Petrushka
Convergent evolution does not involve the repeating of a sequence of changes. Biologists have never believed a specific sequence is likely to occur, or that any historical sequence would recur (or occur in reverse). Petrushka
Petrushka, you state: "No sequence of mutations will repeat." Then you don't believe in "convergent evolution" but believe in historical contingency? Well, congratulations, that's the correct stance. But the bad news is that it refutes neo-Darwinism. See the 2:30 minute mark of the following video: Lenski's Citrate E-Coli - Disproof of "Convergent" Evolution - Fazale Rana - video http://www.metacafe.com/watch/4564682 bornagain77
So, Petrushka, do you at least adhere to Dollo's law? Well, I've got bad news for you on that front as well:

Dollo's law and the death and resurrection of genes ABSTRACT: Dollo's law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or "lost" developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints; http://www.pnas.org/content/91/25/12283.full.pdf+html

Dollo's law was further verified down to the molecular level here:

Dollo's law, the symmetry of time, and the edge of evolution - Michael Behe Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo's law: ... Dr. Behe comments on the finding of the study: "The old, organismal, time-asymmetric Dollo's law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo's law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future." http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html bornagain77
gpuccio -
Freelurker: You say: If one makes a bad-design argument against ID (which I don’t), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind. I don’t agree. Bad design arguments are bad arguments essentially for one reason: bad design is still design.
As I said, I don't make the bad-design argument. I brought it up only because it is in those kind of discussions that one can see IDists saying (legitimately) that nobody knows the purposes the purported designer had in mind. In nature, IDists are trying to detect purposefulness without detecting purposes, i.e., they are trying to detect "free-floating purposefulness." Freelurker_
Petrushka, you built a strawman argument. It was taken away. You then switched gears and built another. Again, it was removed. Now you wave your sword in the air and repeat them both as if the preceding never occurred. Clearly, you are not interested in evidence, and you've apparently given up on honesty as well. Upright BiPed
Petrushka: Are we speaking the same language? You say that you haven't retreated a bit from the "simultaneity" argument, and to prove your point you go on with a series of "arguments" which never mention the simultaneity issue and have nothing to do with it? Anyway, I am tired... Good night! gpuccio
I am anyway satisfied that you have apparently retreated from the vain “simultaneity” argument.
I haven't retreated a bit. The probability calculations presented by ID advocates are based on a number of bogus assumptions:

1. There is no incremental path from a state of not having a complex structure to a state where the structure exists.
2. Evolution has goals. Structures are specified.
3. Every step in the accumulation of change leading to a structure must involve an increment in fitness and progress toward the function.

None of these assumptions are part of biology. They are subsets of an overall assumption that what is was destined to be. The first assumption simply isn't science. Calculating probabilities after something has happened makes no sense, and you can't calculate the probabilities of a sequence unless you know the sequence. The second and third assumptions are also divorced from reality. No one in biology assumes that the evolution of a flagellum is inevitable, and certainly not by the route taken historically. No sequence of mutations will repeat. Dollo's Law. It is possible that there are many routes to a flagellum. No one knows. But we do know that there are dozens of partial flagella and many variations involving some, but not all, of the proteins found in the E. coli flagellum. At any rate, calculating probabilities without knowing the history and the landscape is nonsense. Petrushka
Upright BiPed: Thanks anyway... :) gpuccio
Ah...GP beat me to it. (stands to reason) Upright BiPed
Petrushka, You stated very plainly:
"Biologist do not assume that structures came together in a single event, so the mathematics of improbability is irrelevant."
GP then has gone out of his way to explain that a single simultaneous event has nothing whatsoever to do with it. Having been relieved of this strawman complaint, you now switch gears without ever acknowledging your mistaken argument. You now have moved your complaint-making apparatus to "intentionality" and a "goal".
"The probability calculations are irrelevant because they assume that changes are “leading up” to something, or anticipating being part of a larger structure... ...no pre-specified series of changes is likely to occur... ...No serious biologist thinks that a series of changes leading to a new function was inevitable... ...Not if you assume that there is a goal being searched for, but no one in biology thinks that... ...No serious person thinks that (b)ecause a function exists, it was destined. "
YET, absolutely nowhere in GP's argument does he mention a prespecified goal or intentionality. He simply follows the logic that one protein must have accumulated some changes in order to become another protein. It's so crazy it might be logical. - - - - - - Your attempted argument is so transparent it's astounding that you take it so seriously. Really. Upright BiPed
Petrushka: As usual, you change arguments when you don't know what to say. First you bring up the problem of simultaneity; then, after I have shown that it is a false problem, instead of admitting that, you shift to the usual "The probability calculations are irrelevant", or "there is no target", or just try to affirm "the likelihood that there are nearly infinite combinations that are viable". I have already dealt with all that elsewhere, and I will not do it again now. I am anyway satisfied that you have apparently retreated from the vain "simultaneity" argument. But I am sure I will read it again from you in another thread... gpuccio
I think you are completely ignoring the likelihood that there are nearly infinite combinations that are viable, but which never get explored. We know that there are vast oceans of possibilities, for the simple reason that most of the seven billion humans are genetically unique. Change is not necessarily death, nor is it necessarily dramatic. Petrushka
The problem is: can those 15 (in our example) coordinated mutations be found by the probabilistic resources in time t?
Not if you assume that there is a goal being searched for, but no one in biology thinks that. No serious person thinks that because a function exists, it was destined. I suppose there could hypothetically be some instances where biochemistry dictates a sequence of change, but that would be lawful behavior, not an improbable event. Petrushka
Simultaneity need not be assumed. If you are not convinced of that, please specify why.
For the simple reason that we know that alleles can have useful functions unrelated to their function as part of a larger structure or function. We also know that alleles can persist in a population when their effect is neutral. The probability calculations are irrelevant because they assume that changes are "leading up" to something, or anticipating being part of a larger structure. This isn't what biologists assume or observe. The current understanding of Dollo's law is that no pre-specified series of changes is likely to occur. That's pretty much a restatement of Behe's claim. No serious biologist thinks that a series of changes leading to a new function was inevitable or destined. And in cases where similar structures have evolved through different routes, this pretty much demonstrates that functionality is not a matter of islands separated by unbridgeable seas. Petrushka
Petrushka: No, you don't understand. I'll try to explain.

Let's say, just to have a model, that in the course of evolution a new protein B comes from an existing protein A in a time t. To make things simpler, and to stay within a common evolutionary scenario, let's say that B comes from an inactive duplicate of A, let's say A', so that we can ignore the problem of the loss of function of A because of mutations (I think that's the best darwinist scenario we can imagine).

Now, in our scenario, B differs from A' by at least 15 AAs: IOW, at least 15 AAs must change, and be present at the same time in B in the new form, so that the function of B appears. We also assume that, as soon as the new function appears, it undergoes NS: IOW, the single clone where the mutations have taken place is expanded and fixed. But not before that.

So, again for simplicity, we assume that A' changes by single random independent mutations, stepwise. As all 15 AAs are different in the beginning from the final target, what we have here is a random search in the search space of all possible combinations of those 15 AAs. That space is 20^15. Each time a mutation happens, one of those 20^15 possibilities is explored. Obviously, as the mutations are independent, each new mutation can also change a previous "favourable" mutation. Anyway, the fact remains that each new mutation is a new "trial".

So, the probability of getting to B must be evaluated taking into account:

a) the search space (20^15)

b) the probabilistic resources (the number of possible trials in the time t)

Obviously, if the target space is bigger than 1 (if more than one sequence will have the B function, which is usually the case), then we have to take that into account (calculate the ratio between target space and search space, and then refer it to the probabilistic resources).

As you can see, nowhere in this model (which is a correct ID model to compute the probabilistic credibility of any protein transition) is it necessary to think that the 15 mutations have to happen simultaneously. It is obviously more sensible to assume that they happen stepwise. The problem is: can those 15 (in our example) coordinated mutations be found by the probabilistic resources in time t? IOW, is a random search credible? Does it have the power to produce this particular transition in the historical time and in the biological context we are assuming? Or is the result completely out of range for a random search?

And please note, even if we in the end infer design, the mutations can just the same have happened in a stepwise, guided modality. Design need not be implemented "simultaneously". Guided mutation or intelligent selection in a stepwise modality remain, IMO, the best scenario for biological design. So, simultaneity is a false problem. Is that clear? gpuccio
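[Editor's note: for what it's worth, the comparison described above (search space vs. target space vs. probabilistic resources) can be made concrete with a short Python sketch. The target size and the number of trials below are placeholders chosen only for illustration; they are not estimates of real biological values.]

import math

def probability_of_success(target_size, search_space, trials):
    # Chance that at least one of `trials` independent random draws from the
    # search space lands inside the target (functional) region.
    p_single = target_size / search_space
    # log1p/expm1 keep the arithmetic stable when p_single is extremely small
    return -math.expm1(trials * math.log1p(-p_single))

search_space = 20 ** 15      # all combinations of the 15 varying positions
target_size = 10 ** 6        # assumed size of the target space (placeholder)
trials = 10 ** 12            # assumed probabilistic resources in time t (placeholder)

print(f"{probability_of_success(target_size, search_space, trials):.2e}")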
and all of them must be present at the same time to give the new function, then the probability computation is the same.
I'm not even sure what that means. Of course they must be "present" at the same time, but they do not need to occur at the same time, nor at the time they occur do they have to be in anticipation of some future combination. The simple logic of evolution is that mutations and selection are observed, even two-step accumulations of mutations. No law of physics or chemistry is violated. No simultaneous two or three step mutation has ever been observed, whether it be the result of designer intervention or anticipation of need. Nor has the instantaneous creation of any organism been observed. So what you have is ongoing research under the assumption that small changes accumulate, or the assumption that larger changes (which have never been observed) occur under the influence of an unnamed and undescribed agent, at unspecified times, using unspecified methods for unspecified reasons. In the absence of an actual observed history you are merely asserting that the accumulative history didn't happen, without providing an alternative. Petrushka
Petrushka: I have already pointed out to you that there is no need for the mutations to happen simultaneously. This is a strange idea that you seem to stick to. And a completely wrong one. If the mutations are independent, not individually selectable (the intermediates have no special increase in function), and all of them must be present at the same time to give the new function, then the probability computation is the same. It doesn't matter if they happen stepwise, or all at the same time. Simultaneity need not be assumed. If you are not convinced of that, please specify why. gpuccio
He is telling you how to detect that the parts came together purposefully rather than as a result of regularity or of chance.
Actually, he's merely asserting that several favorable mutations happening at once is unlikely -- a subset of the argument that a complex structure is unlikely to assemble in one step. Behe says nothing at all about the actual probability of a flagellum evolving stepwise, because neither he nor anyone else knows the exact history of the flagellum. If you had a time machine and could prove that three or six simultaneous mutations occurred, you would have positive evidence for ID. Petrushka
Freelurker: You say: If one makes a bad-design argument against ID (which I don't), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind. I don't agree. Bad design arguments are bad arguments essentially for one reason: bad design is still design. Design need not be perfect to be design. Design need not be optimal. The bad design arguments made by some darwinists are in reality of the kind: "but if you believe that God, who in your opinion is perfect, designed living beings, how can you explain bad design?" That argument is not only bad, it is a religious, philosophical argument. It has no scientific value. And, even as a religious-philosophical argument, it is very bad anyway, because first of all even a perfect God can operate in a context, and adjust to the existing context. Second, as you say, we are not sure we can understand the whole scenario of God's intentions (or, for that matter, of any designer's intentions). But these, again, are philosophical aspects of a bad philosophical argument. Design detection is a scientific issue. So, first let's affirm design where it is recognizable, and then, and only then, we can try to understand if the observed design is optimal, suboptimal, or simply gross, if and when the data allow that kind of analysis. gpuccio
Freelurker: Thank you for your clarifications about your thought. You say: "Function is not, technically, the same thing as purpose. A function is actually a regularity; it's a mapping between system inputs and outputs. A function may fulfill a purpose (fulfill a requirement.)"

I am afraid that here you are thinking as a mathematician. Let's think as engineers, instead. Let's say that a function is a mapping which fulfills a purpose. That's the correct definition for ID. My personal definition of dFSCI, on which I base all my ID discourse, is based exactly on an explicit definition of function in that sense, and indeed needs a conscious observer to recognize and define function in each case. You can find my explicit definition of dFSCI here: https://uncommondescent.com/intelligent-design/signature-of-controversy-new-book-responds-to-stephen-meyers-critics/#comment-355968

You say: "The point is that, in ID, 'design' does not mean an arrangement of parts. This is most clear in Dembski's definition of design, which is 'the complement of regularity and chance.'"

No, that's not correct. In ID, like in any other context, design means that something has been designed, IOW that it is the intentional and purposeful product of an intelligent conscious being. There is no doubt about that. That's what design means, nothing else. But ID is about recognizing design, when possible, from the properties of the designed thing (and not, as would be obvious, from a direct observation of the design process). So, as the distinctive trait of designed things is specification (which is the direct result of the conscious, purposeful process), the first thing we have to observe, to hypothesize design, is some form of specification in the designed object.

Now, here is where darwinists get confused (or, sometimes, willfully equivocate). Dembski discusses various kinds of specification, and gives different definitions of it in different works. That's very fine with me, but not necessary for my discourse about biological ID. As I have said many times, the only restricted kind of specification we need in biological analysis is functional specification, and please check my link for a specific definition of it.

The issue of "the complement of regularity and chance", instead, is a separate discourse. Once specification, of any valid kind, has been established, then we have to be sure that we are not observing what I call a "pseudo-specification": IOW, something which appears to be specified, but is not the product of design. That is not impossible, and not unlikely. One thing many people seem not to understand is that design can be simple. One can purposefully design a very simple thing, which has some simple function. That is designed, and if I can observe the process of design, I will know for certain that it is designed. But, if I can only observe the product, and not the process, I will not be able to say that it is designed. The product, being simple, could be the result of random processes. So, that would be a false negative in ID detection: the thing is designed, but we cannot be sure of that.

That's where complexity is necessary. Only complex specified things can be recognized with certainty as designed, because the complexity empirically rules out a random origin. Origin from necessity has to be ruled out separately (IOW, the observed complexity must not be compressible; we have to refer to the true Kolmogorov complexity).
That's what Dembski means when he says that design is "the complement of regularity and chance." IOW, after having ruled out both regularity and chance as causal models of the specified information we observe (in biology, of the functionally specified information we observe), then design is the best inference (indeed, the only inference left).

That's exactly the application to biology. In biology, we observe functionally specified digital information: the simplest case is the information for the primary sequence of a functional protein in a protein-coding gene. Neo-darwinism affirms that such information can be explained as originating from previously existing information (for instance, another, different protein) through the process of darwinian evolution: a process which has two causal moments, one purely random (RV), and the other purely necessary (NS). In the light of ID, that model can only work if the transitions between the times when NS can operate (selectable new information) can always be explained in terms of RV. IOW, it cannot work. No detailed darwinist model exists, even if only theoretical, of how the different protein domains known to us could have originated that way. So, we are left with a lot of functionally specified information (all existing different protein domains) well beyond the reach of random variation, and with no model based on necessity which can explain it. So, design absolutely is the best inference. gpuccio
@gpuccio The point is that, in ID, "design" does not mean an arrangement of parts. This is most clear in Dembski's definition of design, which is "the complement of regularity and chance." Behe's definition is, not surprisingly, not very different from Dembski's. To see this, notice that when Behe tells you how to detect "design" in the bacterial flagellum he is not telling you how to detect the arrangement of the flagellum's parts. He is telling you how to detect that the parts came together purposefully rather than as a result of regularity or of chance.
The purpose of the flagellum (its function) is obviously to allow movement in space. Weren't you aware of that? And Behe discusses in detail the purpose of each of its parts (stator, rotor, junction, filament, etc.), exactly as we would do for a man-made machine or for man-made software. I really can't understand what your problem is.
If one makes a bad-design argument against ID (which I don't), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind. This is what I'm emphasizing when I say that, in ID, "design" means free-floating purposefulness. Function is not, technically, the same thing as purpose. A function is actually a regularity; it's a mapping between system inputs and outputs. A function may fulfill a purpose (fulfill a requirement.)
While your concept of “free-floating purposefulness” is certainly funny and bizarre, it means nothing.
But it's not my concept; it's what "design" means in ID. If you find it funny and bizarre then it means you are thinking like an engineer. Freelurker_
uoflcard: I have many problems too with the imaginative post by Allen MacNeill, but really I did not feel like commenting on it. As you have opened the debate, I will just say here that I don't understand the basis for the following statement: "The schematic diagrams of bacterial flagella, drawn like engineering designs, illustrate the 'average' arrangement of such structures. In any given bacterium, the actual structures only approximate this ideal structure. However, given large numbers of 'approximations' of the 'ideal' structures, biological processes proceed with fairly high efficiency." Frankly, I don't understand in what sense the actual structures should "approximate" the ideal structure. Is that only a philosophical metaphor? Or is there any biological basis for that affirmation? Is the primary, or secondary, or tertiary, or quaternary structure of proteins different in individual flagella? Or is it only a vague boutade, like saying that each molecule of water is just an approximation of the ideal structure of water, but given the high number of molecules, we can drink water with good efficiency? And all this philosophical nonsense just because a transmission electron microscope image looks grainy? gpuccio
Allen, I didn't see anyone respond to most of your comments at #6 and #13, but I have some serious issues with some of the things you said.
The point here is that the relative inefficiency of biological “machines” such as rubisco is almost always compensated for by “massive redundancy”. The schematic diagrams of bacterial flagella, drawn like engineering designs, illustrate the “average” arrangement of such structures. In any given bacterium, the actual structures only approximate this ideal structure. However, given large numbers of “approximations” of the “ideal” structures, biological processes proceed with fairly high efficiency.
I don't think you're making quite the momentous statement that you seem to believe you're making. So there is an ideal structure, which biological structures approximate. Well if most aren't close enough to that ideal to be considered efficient individually, then the system/process as a cumulative whole will not be efficient. I'm not a molecular biologist, so I will not argue over the efficiency of particular machines, like rubisco. But I will say that if you have an efficient system of individual components, those individual components are also efficient. Multiplying inefficient parts will never produce an efficient whole. You can't have a system of engines that operate at 65% efficiency that add up to a whole system that operates at 90%. If you can, you have successfully turned thermodynamics on its head.
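[Editor's note: one way to make the arithmetic concrete, as a minimal sketch, assuming the components operate in series so that each stage's output feeds the next (the 65% figure is taken from the comment above; parallel or redundant arrangements behave differently):]

# Minimal illustration, assuming components in series:
# the overall efficiency is the product of the stage efficiencies,
# so it can never exceed the efficiency of the weakest stage.
stage_efficiencies = [0.65, 0.65, 0.65]   # hypothetical stages, as in the comment above

overall = 1.0
for eta in stage_efficiencies:
    overall *= eta

print(overall)   # ~0.27, well below 0.90 and below every individual stage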
This model of biological efficiency — that irreducible randomicity is compensated for by massive redundancy — is, of course, the underlying organizing concept in evolution by natural selection. An irreducibly stochastic generator of phenotypic variation (the so-called "engines of variation"; see http://evolutionlist.blogspot......awman.html for a list) is coupled with a probabilistic "filter" that preserves and reproduces only those phenotypic variations that on the average result in continued function.
Now are we talking about efficiency or just brute function (i.e. survival)? You seem to treat them as one and the same while they are quite different. Generating function out of a system of components that vary in efficiency and function is one thing, but generating high efficiency by summing a population of components that on average have lower efficiencies is impossible. The only possible conclusion when viewing an efficient system is that its components must on average be at least that efficient.
The roads we drive on, like the biological systems of which we are composed, are only approximations of what could be called "ideal designs". The dispute between evolutionary biologists (EBers) and ID supporters (IDers) is between EBers who see biological systems as being constructed and operated "from the bottom up", with irreducible random/stochastic variation woven in at all levels, and IDers who see biological systems as being designed "from the top down", with no genuine random/stochastic variation at all.
This seems to be a very elementary view of the ID argument. Arguing for ID does not require arguing for static, indistinctive biological machines. This feels like you're trying to claim facts as sole property of EBers (that there is variation at all levels of biological systems). Is every Rolex the exact same down to a molecule? Of course not. At the molecular level, the precise arrangement of atoms will vary wildly from one watch to the next. Does it follow, then, that the watches were not intelligently designed? From a functional perspective, some watches tell time slightly fast, some slightly slow; again, there is stochastic variation in everything we can observe in this universe, it seems, other than the laws of physics. So how stochastic variation is off limits to IDers or how it transforms inefficient parts into efficient wholes is beyond me. From #13
As to the distinction between clocks and clouds, of course biological organisms are clouds. This, however, means that a great deal of the fine structure of biological organisms (like the fine structure of clouds) is the result of stochastic processes. That is, a cloud viewed as a single entity exhibits regular structure and function. This is why we can classify them as cirrus, cumulus, stratus, etc. However, this overall regularity is the result of the mass action of a very large number of very small particles, which viewed individually act as purely stochastic "Newtonian" particles. That is, although the cloud as a whole exhibits teleomatic changes over time (to use Ernst Mayr's word for purely physical processes with predictable cause-and-effect relationships), each individual particle moves and collides with others in essentially random patterns.
This seems to be an incredible stretch. Of course at the molecular level there is tremendous variation, like I just explained with Rolexes. But it is quite remarkable to extrapolate from the molecular variation of clouds and biological structures to their having equivalent ultimate forms. It is also completely irrelevant to the ID debate, as no one is arguing about the specific order of each molecule of biological structures. There is a tremendous, obvious distinction to be made between a cloud and a clock, namely functional, specified information. In biology, that information does not reside at the molecular level of the machines themselves but rather at a higher order of operation (just like the CSI of a Rolex does not reside in the particular arrangement of atoms in each gear and spring).
So, once again we find that biological processes exhibit predictable patterns of “behavior” (i.e. change over time), but these are grounded in stochastic processes that have irreducible random components.
Just as cells are "blobs of protoplasm". No, they are grounded at a higher order than the particular atomic arrangement of the particles. Highly complex, functional behavior is simply never grounded in randomness.
Ergo, the “complexity” of clouds (as compared with clocks) is due to their massively greater stochasticity, rather than greater organization at the level of fine structure. Therefore, it seems to me that asserting that biological systems, if they are more like clouds than clocks, are much closer to the evolutionary model of reality than the ID model. Clouds evolve (i.e. change over time) as the result of purely “natural” processes which do not require any “intelligence” or “design” at all, whereas clocks (at least the kind that are manufactured by humans) are designed for an intended purpose by intelligent agents.
Again, you attempt to claim the randomness of individual atoms of biological structures as sole property of EBers, on what grounds I am unsure. I am also lost as to the grounds for equating the atomic arrangement of biological machines to the evolution of the functional, specified, complex information of the genome, which is the heart of the ultimate debate here. uoflcard
Freelurker (#50): I don't understand what you are saying:

"In engineering, a design is an arrangement (a pattern) of parts. Performing an act of design is coming up with an arrangement of parts... As an example, when software engineers speak of an object-oriented design they are speaking of a type of arrangement of a software item's parts."

OK, that's fine. And the same is true for design in biological entities.

"The above is simply not what IDists mean by design, according to their own definitions. Michael Behe defines design as 'the purposeful arrangement of parts.'"

But what do you mean? Isn't that the same concept? Or are you suggesting that in object-oriented design the arrangement of parts is not "purposeful"? Parts are arranged to perform some specific function, both in human design and in biological design. What is the difference that you see?

"He says that he has detected such design in, for example, the bacterial flagellum."

And so?

"But notice that he does not claim to be one of the people who discovered the arrangement of parts in the flagellum. He learned about that from scientists working in labs."

And so? Behe is arguing that there is a functional arrangement of parts in the flagellum. What does it matter who discovered the facts which allow us to draw that conclusion? In case you don't know, in science facts aren't anybody's property.

"Also note that he does not claim to have discovered the purpose of the flagellum, or the purposes of its parts."

Are you kidding? The purpose of the flagellum (its function) is obviously to allow movement in space. Weren't you aware of that? And Behe discusses in detail the purpose of each of its parts (stator, rotor, junction, filament, etc.), exactly as we would do for a man-made machine or for man-made software. I really can't understand what your problem is.

"So what is this 'design' that Behe has detected? It's free-floating purposefulness."

Absolutely not! It's the arrangement of functional parts to perform a function. While your concept of "free-floating purposefulness" is certainly funny and bizarre, it means nothing.

"To be sure, IDists claim to be able to infer purposefulness (i.e., design in the ID sense) from certain arrangements (patterns) of parts (i.e., design in the engineering sense), but you do not help your case when you confuse these two different concepts."

What different concepts? I see no different concepts here. I believe you are only equivocating on the term "purposefulness", which you use instead of "function". Now, let's be clear: in biological design, the specification is given by function. That's why I (like many others here) always refer to the subset of CSI which is called FSCI (functionally specified complex information). That point has been discussed in great detail many times.

So, let's see:

1) In software, we can infer design because the arrangement of parts is functional (and, especially in object-oriented software, the parts are themselves functional arrangements of parts). And, obviously, the functionality in software is absolutely purposeful (unless you believe that software comes about by RV and NS). Design is purposeful by definition. The designer arranges parts to implement a function. Implementing that function is his purpose. Is that clear?

2) In biological information, it's exactly the same thing. In the flagellum, we can observe the function of the whole machine (movement), and the contributing function of each part, and the functional arrangement of parts in the whole machine.
Moreover, each part is made of objects (proteins), each of which is made by the functional arrangement of parts (amino acids) which allows the function of the whole protein. All of that is obviously purposeful in the same sense that software is purposeful. There is no difference. The concepts of CSI and of irreducible complexity are necessary for the design inference, both in software and in biological information, just to make the inference certain, and to avoid false positives (random structures which could seem functional, but in reality have never been purposeful, because they originated by non-intelligent mechanisms). The concept is exactly the same for design detection in any structure, be it a man-made machine, a piece of man-made software, or biological information. gpuccio
Freelurker - It is true that this is already done. That's part of the ID argument - we're merely pointing out that the *methodology* that produces good results in biology is the one that *assumes* that things in the cell were designed purposefully, and then engages to figure out that purpose. This is why evolutionists tie themselves up in knots - they know this to be true but try to explain it in some way which doesn't obviously show the truth of ID. However, in addition to what is already done implicitly in biology, I contend that doing it *explicitly* will lead to even more fruitfulness. I give an example of this with an extended conception of Irreducible Complexity here, and give other examples here. I'll leave you with a quote from Michael Ruse (Darwin and Design):
We treat organisms—the parts at least—as if they were manufactured, as if they were designed, and then try to work out their functions. End-directed thinking—teleological thinking—is appropriate in biology because, and only because, organisms seem as if they were manufactured, as if they had been created by an intelligence and put to work
So, as Michael Ruse points out, ID - even if he wouldn't call it that - is an important principle for biology. The problem is that materialists simply refuse to see that this actually puts the burden of proof on them to show how something was not designed, despite the fact that our primary modes of investigation assume its design. Natural selection is invoked as a magic dust to remove traces of real design from apparent design without discussion, and by assuming that the case is already closed. johnnyb
scordova -
Interpretation of software is recognizing the design of the software. Johnnyb was highlighting the fact that one does not need to know the designer in order to recognize designs.
As I said earlier, IDist engineers are prone to equivocating between the way the term "design" is used in ID and the way it is used in engineering. I thank you for providing such a vivid example.

In engineering, a design is an arrangement (a pattern) of parts. Performing an act of design is coming up with an arrangement of parts. When we read a design specification or attend a design review for a system, we expect to learn about what its parts will be, how they will be arranged, and how they will interact to fulfill the system's purposes (its requirements). As an example, when software engineers speak of an object-oriented design they are speaking of a type of arrangement of a software item's parts.

The above is simply not what IDists mean by design, according to their own definitions. Michael Behe defines design as "the purposeful arrangement of parts." He says that he has detected such design in, for example, the bacterial flagellum. But notice that he does not claim to be one of the people who discovered the arrangement of parts in the flagellum. He learned about that from scientists working in labs. Also note that he does not claim to have discovered the purpose of the flagellum, or the purposes of its parts. So what is this "design" that Behe has detected? It's free-floating purposefulness. (Dembski calls it the "complement of regularity and chance.")

To be sure, IDists claim to be able to infer purposefulness (i.e., design in the ID sense) from certain arrangements (patterns) of parts (i.e., design in the engineering sense), but you do not help your case when you confuse these two different concepts. Saying that "Interpretation of software is recognizing the design of the software" would be fine if by "recognizing the design" you meant what a software engineer would mean by it, that is, identifying how the parts are arranged and what they do. You wish to associate IDists with that activity, but that activity is not something that IDists do; IDists can't even begin to determine if an arrangement of parts is attributable to intelligence until after the arrangement of parts has been determined. Determining whether or not something is, per se, purposeful, that is, determining if it's the product of "the complement of regularity and chance", is actually foreign to the practice of engineering. Freelurker_
What part of ID theory says we have to establish when an artifact was made in order to make an inference?
That is the conundrum of ID. The basic claim of ID is that existing biological objects cannot have arisen through a series of small modifications. The claim is vacuous without proposing an alternative history. We know that populations change over time, and we know that the observed rate of change is consistent with the differences in genomes when they are assumed to be related by descent. We also have theories of how and why changes accumulate. Biologists do not assume that structures came together in a single event, so the mathematics of improbability is irrelevant. Petrushka
Allen MacNeill: "To be as specific as possible, according to current ID theory, how and when and where was the bacterial flagellum designed, under what conditions, in response to what ecological requirements, and for what purpose(s)?"

Freelurker: "But it's a misrepresentation for IDists to associate themselves with this. Figuring out how things work is not what ID is about, no more than it is about providing histories. ID is about attributing patterns to intelligence."

First of all, I don't understand all this fuss about the "how, when and why". There is no reason why ID should not try to answer these questions. It is true that the first task of ID at present is to prove design in biological information (also because most biologists are still stubbornly trying to deny the evidence for it). But that priority is in no way a limit to what ID can do. An ID scenario can certainly help very much in answering these questions, while the darwinian scenario, being completely wrong, can only sidetrack present and future research and understanding.

1) "How" has two different aspects:

a) How did the designer input the necessary information into the emerging species? That's a very appropriate question, and it is certainly possible to formulate different hypotheses (e.g., guided mutations, selected targeted hypervariation, intelligent selection, a mixture of all of them, and probably others). All of these hypotheses are open to empirical verification (or falsification).

b) How does the intelligent information work? How is it coordinated and integrated? These are easier issues, perfectly open to an analysis of the software structure in biological information.

2) "When" is a very interesting question, but it has the same validity for both ID and darwinism. The correct question is: "when does new CSI appear massively in evolution?". And we already have a few answers. Certainly, about 4 billion years ago, at OOL. Then, at some time much later, with the emergence of eukaryotes. Then, about 580 million years ago, with the Ediacara explosion, and again, 540 million years ago, with the Cambrian explosion. And later, with the flowering plants explosion. And, in minor degrees, each time a new species emerges, or just each time a new protein superfamily emerges. All these sudden emergences of information will be better detailed as research goes on, especially if some sidetracking false assumptions of darwinism are put aside.

3) "Why?" Another pertinent ID question. While the big "why" (why does the universe, or life, exist at all?) could well be beyond the reach of pure science (it's more likely it will always have important philosophical connotations), many lesser "whys" can certainly be approached by science, especially design-centered science. For instance, if we get rid of the darwinian myth that increased fitness and response to imagined fitness landscapes are the only engine driving evolution, it seems perfectly possible that the increasing complexity in living beings can have explicit purposes, and be targeted at exploring and expressing new ideas and new functions. So, the very simple answer to the why of the flagellum would be: because it allows the cell to move, to explore space, to interact with different parts of reality. That's exactly why designers write more complex software: to implement new functionalities. According to a plan, according to a purpose. According to a desire. gpuccio
Ena, Are an X-ray of an arm and an MRI of an arm inaccurate when compared to each other and/or to a real arm? All three images look different, yet all three are very much accurate for the function they were designed to serve. The image at the top was designed to be a replica of a real structure. I, a layperson, know this. As for whether the flagellum is a clock or a cloud, I don't think either category is accurate in itself. The comparison is really whether something is fluid or rigid. Both can be organized or disorganized. I would dare say that the cloud is very clock-like in that all the necessary factors must be present in order for it to appear. What the categories should be is a clock and a clock that doesn't keep time. Using these two categories, the flagellum is a clock. You cannot define function without first defining a purpose for that function. And without knowing either function or purpose, one cannot measure its effectiveness or its "optimization". wagenweg
To make your analogy even close to the ID situation, imagine you have a fragment of code carved in a tablet and are claiming that this was created in 3000 BCE.
What part of ID theory says we have to establish when an artifact was made in order to make an inference? Can you point to specific pages in the ID literature? That's a rhetorical question on my part. What you assert does not represent the discipline of ID exploration. This is the definition of ID:
ID is the study of patterns that signify intelligence. (Bill Dembski)
Freelurker responded:
johnnyb wrote: "For instance, when analyzing a computer program, I can interpret it just fine without knowing who programmed it, where they were sitting, or what device they used to input it."
But it’s a misrepresentation for IDists to associate themselves with this. Figuring out how things work is not what ID is about, no more than it is about providing histories. ID is about attributing patterns to intelligence
No, it is not a misrepresentation. Interpretation of software is recognizing the design of the software. Johnnyb was highlighting the fact that one does not need to know the designer in order to recognize designs. We've been able to figure out important features of the DNA genetic code without identifying the Intelligent Designer. Thus we can recognize and interpret artifacts without direct or indirect interaction with the designer. Further, the cell is a computer: it runs computer languages on a computer architecture. The computer analogy is spot on. Having access to the original intelligent designer to help us interpret the design is a sufficient, but not a necessary, condition for inferring design. Johnnyb is pointing out that knowing the designer is not a necessary condition for recognizing design (even though knowing the designer might be a sufficient condition for recognizing design). scordova
johnnyb -
For instance, when analyzing a computer program, I can interpret it just fine without knowing who programmed it, where they were sitting, or what device they used to input it.
But it's a misrepresentation for IDists to associate themselves with this. Figuring out how things work is not what ID is about, no more than it is about providing histories. ID is about attributing patterns to intelligence. Freelurker_
johnnyb:
For things which are designed, the material aspects of their design (how, when, and where) are much less relevant. For instance, when analyzing a computer program, I can interpret it just fine without knowing who programmed it, where they were sitting, or what device they used to input it.
However, you already know the designer (human) and can frame fairly accurate questions around "how" (e.g. Dvorak or QWERTY), so claiming (in this case) that the questions of "how, where and when" are less relevant is only possible because you already know most of the answers. To make your analogy even close to the ID situation, imagine you have a fragment of code carved in a tablet and are claiming that this was created in 3000 BCE. The questions of "how, where and when" suddenly become crucial and directly impact whether your 'evidence' is even valid.
These are much better ID-oriented questions. Right now, I would say that at present the state of ID is unable to answer these questions definitively. However, I think that a more ID-way of asking the question would be this – “how does the design of the organism interact with its purposes and ecological requirements?”
In this statement (and subsequent paragraphs) you seem to take a limited view of "purpose" - in fact, a use of the word more appropriate to an evolutionary view of ecological niches, etc. However, you claim that we're dealing with an intelligence. In this context, "purpose" could be anything. You might claim that a virus was designed to fit a particular ecological niche. I might claim that it was invented deliberately to torture other living creatures. Since ID has no information in this area, my claim is as good as yours. mikev6
DATCG - "Reductionism as reverse engineering is a great way to discover design." If you are using the terms "reverse engineering" and "design" the way they are used in engineering then this is an uncontroversial statement. Reverse engineering analyzes an item and then describes that item as an arrangement of parts. But in the context of this blog, it looks as if, when you refer to discovering design, you are referring to the design detection that IDists talk about. This would be equivocating between the way the term "design" is used in ID and the way it is used in engineering. (IDist engineers are prone to this.) Design detection starts from a known arrangement of parts and then tries to determine if the pattern indicates the involvement of intelligence. Freelurker_
DATCG, I really liked your link @ 36 Bacterial Flagellum: Visualizing the Complete Machine In Situ Excerpt: Electron tomography of frozen-hydrated bacteria, combined with single particle averaging, has produced stunning images of the intact bacterial flagellum, revealing features of the rotor, stator and export apparatus. http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VRT-4M8WTCF-K&_user=10&_coverDate=11%2F07%2F2006&_rdoc=1&_fmt=full&_orig=search&_cdi=6243&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=8d7e0ad266148c9d917cf0c2a9d12e82&artImgPref=F You might like this following link. In the video they get into some of the intricate assembly process for the flagellum: Bacterial Flagellum - A Sheer Wonder Of Intelligent Design - video http://www.metacafe.com/watch/3994630 bornagain77
The artist rendering is not used by any ID proponent in a scientific paper.
What scientific paper might that (not) be? Adel DiBagno
Adel, "you might say that the flagellum is Haeckel's embryo of ID" No, not at all. For one, ID proponents are not faking comparisons of multiple divergent specimens. And two, this is a website blog; the artist's rendering is not used by any ID proponent in a scientific paper. The flagellum looks even more like a machine in electron-scattering images or in the 3D rendering produced by scientists who are not ID proponents. Frankly, they could put the image that Upright linked to at the top. It is even more convincing of design, in my opinion, as is the 3D image I linked. Haeckel intentionally made drawings of embryos from different stages and different higher taxa appear exactly the same, then declared a theory of recapitulation. ID proponents, by contrast, use existing images and structural descriptions relied on by non-supporters as well (e.g., Nick Matzke's rebuttal regarding the flagellum). The fake chimp-to-human stages are more akin to Haeckel's embryo drawings. DATCG
gpuccio, Ena Sharples is no longer with us. Clive Hayden
Ena Sharples: Just to make my point clearer: a) Here is a link to an ultrasonographic image of the human abdomen: http://cir.ncc.go.jp/wwwdata/img/0118/0118010200J.jpg That's more or less the equivalent of your transmission electron microscope image, on which you based all your reasoning. b) Here is a link to one of the famous drawings by Netter of the abdominal cavity: http://biomedcentral.inist.fr/images/1477-7800-4-28-2.jpg c) Here is a 3D rendering of the abdominal cavity: http://www.voxel-man.de/vm/images/io_abdomen_quer.jpg d) And here is the real stuff: http://www.esg.montana.edu/esg/kla/ta/abdomen.jpg Look how idealized Netter's drawing and the 3D rendering are! Certainly, the ultrasonographic picture is the cloudy truth... gpuccio
Reductionism as reverse engineering is a great way to discover design. Reductionism ad nauseam for the purposes of materialist ideology loses sight of the big picture. DATCG
@Ena, Upright: The link below shows an excellent 3D image of the flagellum, with work by David DeRosier and company in Current Biology. Let readers decide for themselves. Ena, do you have the same problem with fake chimp-to-human pictures in biology books and posters? Fake chimp to human poster What about Lucy? She is depicted as having human feet. Maybe you should protest? DATCG
Matteo: Very funny indeed! We do need a good laugh here once in a while... I am really surprised by the futile attempts of darwinists to call "idealization" what is only "reasonable approximation". I suppose that, in that perspective, all physical laws are only propaganda idealizations, attempting to give the layman the false impression that regularities exist in nature, while any good picture of a bubble chamber experiment would clearly show what a cloudlike mess underlies everything! Ah, the tricks of those bloody teleologists... gpuccio
Ena Sharples (#2): I have to disagree with you substantially. The flagellum is certainly a clock. And a very good one. And the image at the top of this blog is probably a quite accurate 3D rendering of its structure. Like all 3D renderings, it has some limits and does not show all the details perfectly (atomic and molecular movements, and so on). But the same is true of any rendering in modern video games (and some of them are definitely good!). The problem here is a different one. What you link is a transmission electron microscope image, and everybody who is familiar with those images knows that they are very "dirty". But, if you are accustomed to reading such images, you can easily see that your image shows astounding regularities, confirming that the flagellum is a clock. gpuccio
LOL@Matteo DATCG
Allen - "To be as specific as possible, according to current ID theory, how and when and where was the bacterial flagellum designed," While those are certainly interesting questions, it is only the materialist position that makes them the primary questions, and presupposes that they have a definitive, findable answer. For things which are designed, the material aspects of their design (how, when, and where) are much less relevant. For instance, when analyzing a computer program, I can interpret it just fine without knowing who programmed it, where they were sitting, or what device they used to input it. I can't tell by looking at a program whether the typist used a Dvorak or a QWERTY keyboard. The main thing is that, with design, the logical relationships between the components are primary considerations, and the historical factors that led to those logical relationships are of secondary importance, and perhaps irrelevant. "under what conditions, in response to what ecological requirements, and for what purpose(s)?" These are much better ID-oriented questions. Right now, I would say that at present the state of ID is unable to answer these questions definitively. However, I think that a more ID-way of asking the question would be this - "how does the design of the organism interact with its purposes and ecological requirements?" My personal belief is that organisms are separated out into basic types, with each type having a purpose or range of purposes. In addition, organisms have an innate sense of what the overall design *should* be, and therefore adapt to fill predefined ecological niches where there is a need. This is why in different areas of the world you have the same basic ecological roles, but played out by species that are not related by descent. They are able to detect the niche that is not filled, and then adapt so as to fill it. As for the flagellum, while I am not an expert in its operation, I would assume that its role is to make sure that bacteria infiltrate the ecology sufficiently. johnnyb
Don't you all get it? The flagellum drawing at the top of the page is obviously idealized and therefore off-limits, but the "METHINKS IT IS LIKE A WEASEL" program beautifully establishes the true power of natural selection, proving that all true scientists shun, shun misleading oversimplification. Matteo
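For readers who have not seen the program being alluded to, here is a minimal sketch of that kind of cumulative-selection demo (Dawkins' "weasel" program). The mutation rate and offspring count below are illustrative choices, not the original's parameters:

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "
    MUTATION_RATE = 0.05   # chance each character mutates per generation (illustrative)
    OFFSPRING = 100        # mutated copies produced per generation (illustrative)

    def score(candidate):
        # count of characters already matching the target
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(parent):
        # copy the parent, randomizing each character with small probability
        return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                       for c in parent)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # cumulative selection: keep only the best of this generation's copies
        parent = max((mutate(parent) for _ in range(OFFSPRING)), key=score)

    print("reached target in", generation, "generations")

The point of contention, of course, is that the program selects toward a pre-specified target, which is exactly the kind of simplification the surrounding comments are arguing over.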
Barry Arrington quoted Rod Dreher, who quoted Andrew Sullivan, who quoted Jonah Lehrer, who wrote …
Time and time again, an experimental gadget gets introduced -- it doesn't matter if it's a supercollider or a gene chip or an fMRI machine -- and we're told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn't it? We soon realize that those pretty pictures are incomplete and that we can't reduce our complex subject to a few colorful spots. So here's a pitch: Scientists should learn to expect this cycle -- to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.
First, is this supposed to be controversial? Of course it's incomplete. These "pretty pictures" are only an abstraction of reality whose limits we're quite aware of. However, this doesn't mean that we should abandon a reductionist approach. Just because a tool has limits doesn't mean we should throw it away, or that it always fails, as the title of this article suggests. What do you suggest we replace it with? Second, the original author (Lehrer) is a science blogger, who appears to be commenting more on how the press and the public interpret science than on the actual views of the scientists themselves. For example, Stephen Hawking on finding the Higgs boson…
“I think it will be much more exciting if we don’t find the Higgs. That will show something is wrong, and we need to think again. I have a bet of $100 that we won’t find the Higgs.”
So, I'd ask, who's really going to be disappointed: Hawking, Lehrer, or the lay public Lehrer writes for? Lehrer then goes on to quote Karl Popper. However, Popper presented an emergentist cosmology, which was not anti-reductionist either. veilsofmaya
Near impossible? Seeing as evolutionists, using all their intelligence and lab equipment, have not found even a single biologically relevant novel functional protein... Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681 ...I would say finding a minimal system of functionally interacting proteins is "near impossible" multiplied exponentially. bornagain77
I wonder how evolution creates optimal systems over time when, on the other hand, the chances of even minimal function arising are nearly nil. Phaedros
MacNeill, you state: "Optimality, by itself, is no evidence for either evolution or ID. Indeed, both evolutionary biology and ID propose that most biological processes will be near optimal under most conditions (and this goes for rubisco, as well as for any other enzyme)." So please, Mr. MacNeill, explain to me why evolutionists proposed 180 vestigial organs and 95% junk DNA if evolution makes such a clear prediction of optimality. bornagain77
Ena asked: "(though he has no chance of even explaining the origination of what he perceives to be sub-optimal design) Out of interest, how do you explain it?" I'll answer that if you explain to me how a quantum wave collapses to its "uncertain" particle/wave state in quantum mechanics. bornagain77
Ena,
"It’s a very nice image."
You asked for one, and you got it.
"I don’t know. What are they using it for? Then I’ll be able to tell you if it’s misleading."
In the paper in which it was published, the authors placed it under the heading of "Structure".
"It would be truthier. It would have more truthiness."
Faced with taking an illogical position regarding the image at the top of this page, you are reduced to insinuating dishonesty on the part of the UD website. The fact that the scientific literature is replete with such images provided for the precise purpose of understanding the structure and function of such entities is simply ignored. Have a nice day. Upright BiPed
Upright
It was interesting to see a scientist ignore detail. Apparently, there was too much of it.
Very interesting to see that, yes. Ena Sharples
Upright, http://www.skeptic.com/eskeptic/08-08-20images/figure03.jpg It's a very nice image.
Is this a misleading image? If so, then how so?
I don't know. What are they using it for? Then I'll be able to tell you if it's misleading.
If the banner at the top of this page was replaced with the attached image, what would be different?
It would be truthier. It would have more truthiness. But for one thing people would look at it and wonder what the heck it was. As it looks nothing like what's at the top of the page, that's for sure. Most people would probably not connect the two. Ena Sharples
I find it so interesting how Mr. MacNeill and other evolutionists claim that "Biological entities are imprecise, irregular, and multifarious in function." Yet, when we talk about enzymes or cellular structures or whole organs, we constantly talk about high efficiency, incredibly complex structure, and optimality. This seems very strange to me. Never mind the fact that the differences between organisms of the same species are negligible at best. The fact that some don't develop perfectly or have defects does not refute design in the slightest. I think Dr. Craig had some fun remarks about this in response to Dr. Ayala in their debate, "Is Intelligent Design Viable?" Phaedros
Ena, For crying out loud. It's not about the image, it's about the function and the structure. You've willfully ignored your initial problem of comparing "an actual picture that does not show the internal structure to an idealized picture that does". You, like Perakh, subscribe to the idea that viewing an artist's rendering of the internal structural details of a flagellum will impede the understanding of it. You just as quickly ignore the fact that these renderings (which appear throughout the scientific literature) are provided precisely for the purpose of understanding the structure and function. You then go on to insinuate that the laypeople who visit here might perhaps be too dense to understand that a drawing is not a photograph. So infused with ignorance, they might not be able to recall a time in their lives when an artistic rendering was used to provide them with an insight into a subject. Perhaps you think that scientists should have images and laypeople should not view them - lest they be betrayed by this ungainly ignorance. This is, of course, a common refrain. How can these laypeople be trusted to learn anything? I have no doubt that you ignore this faulty reasoning to advance additional faulty reasoning. Here is an image produced within the scientific literature. It is captioned with these words: "rotationally averaged reconstruction of images of hook-basal bodies seen in an electron microscope...This reconstruction is derived from rotationally averaged images of about 100 hook–basal body complexes of Salmonella polyhook strain SJW880 embedded in vitreous ice (29). The radial densities have been projected from front to back along the line of view, so this is what would be seen if one were able to look through the spinning structure." The scientists who developed this image did so for a reason. Is this a misleading image? If so, then how so? And since you brought it up, may I ask: if the banner at the top of this page were replaced with the attached image, what would be different? Upright BiPed
To be as specific as possible, according to current ID theory, how and when and where was the bacterial flagellum designed, under what conditions, in response to what ecological requirements, and for what purpose(s)? Allen_MacNeill
Re my comment #16: For example, evolutionary biologists have proposed an evolutionary process by which the bacterial flagellum has evolved as the result of exaptation of structures and functions that were originally adapted to other circumstances. By contrast, Michael Behe (when pressed) asserted that the bacterial flagellum was created "in a puff of smoke". Is there an empirically testable process (i.e. a single step or series of steps) by which the bacterial flagellum can be shown to have become adapted to its function, and if so where can a clear and testable description of such a process be found? Allen_MacNeill
Barry
clouds are an epistemic mess, “highly irregular, disorderly, and more or less unpredictable.”
Could clouds also be pressed into service as an analogy for fitness? I expect some real fitness landscapes to be very messy indeed. Ena Sharples
Optimality, by itself, is no evidence for either evolution or ID. Indeed, both evolutionary biology and ID propose that most biological processes will be near optimal under most conditions (and this goes for rubisco, as well as for any other enzyme). The real question is how biological systems become optimal. Evolutionary biologists assert that this happens via natural selection, which can be inferred from empirical studies of the genetic and phenotypic evolutionary transitions from non-optimal to sub-optimal to near-optimal function (and, of course, in the other direction as well). Until ID supporters provide empirically testable mechanisms for such transitions, ID won't be considered a science. Allen_MacNeill
ba77
though he has no chance of even explaining the origination of what he perceives to be sub-optimal design)
Out of interest, how do you explain it? Ena Sharples
MacNeill's use of the rubisco enzyme, and the underlying implications of sub-optimal design for rubisco (though he has no chance of even explaining the origination of what he perceives to be sub-optimal design), is in fact an excellent example of the reductionist fallacy that materialists are prone to fall prey to. For when we take into consideration the entire mosaic of the web of life here on earth, we find rubisco is indeed optimal for its purpose of sustaining the higher life forms above it, life forms that rubisco neither knows nor cares about:

Rubisco is not an example of unintelligent design - David Tyler Excerpt: The analysis of Tcherkez et al. (2006) was significant for showing that Rubisco does not bear the marks of Darwinian tinkering and that research to genetic modify the enzyme to gain agricultural benefits can be expected to deliver only "modest improvements" in its efficiency of operation. "Further, [our hypothesis] raises the possibility that, despite appearing sluggish and confused, most Rubiscos may be near-optimally adapted to their different gaseous and thermal environments. If so, genetic manipulation can be expected to achieve only modest improvements in the efficiency of Rubisco and plant growth. Such improvement would be limited to the magnitude of the scatter apparent in the correlations (Fig. 3), if the scatter represents incomplete optimization (see above). [. . .] Such adaptation in response to the changing atmosphere and temperature appears to have been instrumental in enabling the expansion of the biosphere to its current size." Design theorists have drawn attention to three additional considerations: 1. A single-factor analysis of Rubisco is inadequate. The parameters considered to conclude the enzyme is poorly designed and inefficient are very limited. We should note that our perceptions of intelligent design are typically subjective, and most claims for poor design do not stand up to the test of time - further research leads to a greater appreciation of design (a good example being mammalian eye design). Furthermore, unintelligent design of architectures we deem sub-optimal should not be regarded as the only possible hypothesis. Multiple factors are likely to be relevant as chemosynthetic carbon fixation also makes use of Rubisco. It is employed by organisms living at hydrothermal vents and cold hydrocarbon seeps. 2. Photorespiration, the consumption of oxygen to produce a sugar that ultimately forms carbon dioxide during a series of reactions, may not be a mark of inefficiency, but the process may be useful to the plant. The null hypothesis for Design theorists is that processes have functionality. This hypothesis is not without some support: the process of photosynthesis is not just to capture CO2 and release oxygen because nitrate assimilation in plant shoots depends on photorespiration, as Rachmilevitch et al (2004) have shown. 3. Ecological considerations should be included in the analysis. If design is relevant to understanding the way plants work, we should consider not only the benefits to the organism (which limits the horizon for those with a Darwinian perspective) but also the biosphere as a whole. Rubisco's ability to capture CO2 increases with increasing CO2 content in the atmosphere, so its efficiency rises in a CO2-rich atmosphere. However, increasing oxygen levels in the atmosphere will reduce Rubisco's ability to capture carbon. 
So a negative feedback mechanism exists to regulate the relative concentrations of oxygen and carbon dioxide in the atmosphere. This is another example of design affecting the Earth's ecology - for more on this, go here. http://www.arn.org/blogs/index.php/literature/2010/01/21/rubisco_is_not_an_example_of_unintellige My question for you, MacNeill, is: why did you use this example of rubisco without the caveat "thank goodness it is inefficient"? You knew this refutation was issued several months ago, so why do you refuse to correct your stance? Notes: "There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject." James Shapiro - Molecular Biologist As well, physicists find that many processes in a cell operate at the "near optimal" capacities allowed in any physical system: William Bialek - Professor Of Physics - Princeton University: Excerpt: "A central theme in my research is an appreciation for how well things “work” in biological systems. It is, after all, some notion of functional behavior that distinguishes life from inanimate matter, and it is a challenge to quantify this functionality in a language that parallels our characterization of other physical systems. Strikingly, when we do this (and there are not so many cases where it has been done!), the performance of biological systems often approaches some limits set by basic physical principles. While it is popular to view biological mechanisms as an historical record of evolutionary and developmental compromises, these observations on functional performance point toward a very different view of life as having selected a set of near optimal mechanisms for its most crucial tasks...The idea of performance near the physical limits crosses many levels of biological organization, from single molecules to cells to perception and learning in the brain..." http://www.princeton.edu/~wbialek/wbialek.html bornagain77
Upright
Why not just retract a calculated but failed insinuation?
What, this?
In your first post, you insinuated that simplifying can lead to misunderstanding.
But it can, can't it? You'd not argue with that?
To illustrate this you compare an actual picture that does not show the internal structure to an idealized picture that does.
You are more than welcome to provide such a picture of internal structure. When you find one that looks anything like the image at the top of the blog, please let me know! But you'd agree that there is a significant difference for the lay person between seeing the two types of image, right? And it seems that if it's anything, the image at the top is reductionist. Mechanical. So an image of the actual structure would be more apt for this blog, would it not? Ena Sharples
Also, I just spent about 20 minutes trying to find a TEM of the basal attachment of a bacterium, but all I could find were the kinds of schematic diagrams shown at the top of this blog. Either there are no actual TEMs of individual bacterial flagella (a distinct possibility, as their fine structure has been worked out mostly via electron scattering and crystallography, not transmission electron microscopy), or such TEMs are only to be found in the technical literature.

As to the distinction between clocks and clouds, of course biological organisms are clouds. This, however, means that a great deal of the fine structure of biological organisms (like the fine structure of clouds) is the result of stochastic processes. That is, a cloud viewed as a single entity exhibits regular structure and function. This is why we can classify clouds as cirrus, cumulus, stratus, etc. However, this overall regularity is the result of the mass action of a very large number of very small particles, which, viewed individually, act as purely stochastic "Newtonian" particles. That is, although the cloud as a whole exhibits teleomatic changes over time (to use Ernst Mayr's word for purely physical processes with predictable cause-and-effect relationships), each individual particle moves and collides with others in essentially random patterns.

So, once again we find that biological processes exhibit predictable patterns of "behavior" (i.e. change over time), but these are grounded in stochastic processes that have irreducible random components. Ergo, the "complexity" of clouds (as compared with clocks) is due to their massively greater stochasticity, rather than greater organization at the level of fine structure.

Therefore, it seems to me that if biological systems are more like clouds than clocks, they are much closer to the evolutionary model of reality than to the ID model. Clouds evolve (i.e. change over time) as the result of purely "natural" processes which do not require any "intelligence" or "design" at all, whereas clocks (at least the kind that are manufactured by humans) are designed for an intended purpose by intelligent agents. Thanks for clarifying this distinction, Barry! I couldn't have said it better myself... Allen_MacNeill
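To put a number on the "mass action" point, here is a minimal sketch (a toy random walk, not a model of any real cloud or cell): each individual particle behaves stochastically, yet the ensemble average is predictable, and its relative spread shrinks as the number of particles grows.

    import random
    import statistics

    def random_walk(steps, drift=0.1):
        # one particle: every step is random, with only a small average drift
        x = 0.0
        for _ in range(steps):
            x += drift + random.gauss(0.0, 1.0)
        return x

    # individual particles wander unpredictably...
    print([round(random_walk(100), 1) for _ in range(5)])

    # ...but the mean of a large ensemble sits close to the predictable value steps * drift = 10
    ensemble = [random_walk(100) for _ in range(10_000)]
    print(round(statistics.mean(ensemble), 2),
          round(statistics.stdev(ensemble) / len(ensemble) ** 0.5, 3))

A single 100-step walk has a standard deviation of about 10, while the standard error of the ensemble mean over 10,000 walks is about 0.1, which is one way of stating the "regularity from stochasticity" described above.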
Ena, "Perhaps I could ask a different question." Why not just retract a calculated but failed insinuation? Upright BiPed
Upright
Ena, the image is for teaching and understanding. If such images are applicable to understanding the functional structure of the flagellum, then they are applicable to understanding the functional structure of the flagellum. It's just that simple.
You'd agree then that it's important to also explain at the same time that the real structures look nothing like the illustrations? Ena Sharples
Mr. Arrington, I just found this recent gem of a Dr. Craig talk. He goes into a little detail on the reductionist fallacy around the 27:00 mark while defending the fine-tuning argument: William Lane Craig - Arguments of God's Existence and Response to the New Atheists - Gracepoint Berkeley (2010) http://www.vimeo.com/11170354 bornagain77
Upright
To illustrate this you compare an actual picture that does not show the internal structure
Perhaps I could ask a different question. If you were given an actual picture of internal structure and an idealized image such as can be found at the heading of this blog, would the values of CSI (or FCSI) be different if you had only the picture to calculate them with? Ena Sharples
BTW, the link in comment #2 ( http://en.wikipedia.org/wiki/File:Chlamydomonas_TEM_09.jpg ) isn't to a TEM of a bacterial flagellum. It's a TEM of the flagellum (technically, an "undulipodium") of a Chlamydomonas, a unicellular eukaryote (a green alga, to be precise). Allen_MacNeill
Ena, the image is for teaching and understanding. If such images are applicable to understanding the functional structure of the flagellum, then they are applicable to understanding the functional structure of the flagellum. It's just that simple. In your first post, you insinuated that simplifying can lead to misunderstanding. To illustrate this you compare an actual picture that does not show the internal structure to an idealized picture that does. I hope that helps. Upright BiPed
Clocks, like all machines, are precise, regular, and limited in function. Biological entities are imprecise, irregular, and multifarious in function. To be as clear as possible, biological structures and functions differ from human machines in a quality that could be called "stochastic mass action". That is, they work fairly regularly most of the time because many (but not all) of their parts are massively redundant. There is an irreducible random component in all biological systems which makes necessary this kind of "mass action" that accomplishes biological functions.

A very clear example of this is the "mechanism" by which photosynthetic organisms add carbon dioxide to ribulose 1,5-bisphosphate (abbreviated RuBP) in the Calvin cycle. The enzyme that accomplishes this is ribulose 1,5-bisphosphate carboxylase/oxygenase, usually referred to as "rubisco". Rubisco is an astonishingly inefficient enzyme. Most enzymes can catalyze several thousand to several hundred thousand reactions per second. Rubisco, by contrast, can only catalyze the addition of about three carbon dioxide molecules to RuBP per second. Photosynthetic organisms get around this extraordinarily inefficient mechanism by producing huge quantities of rubisco. Most plant biologists estimate that rubisco is the most abundant protein in the biosphere. This abundance compensates for the low efficiency of rubisco in carbon fixation.

The point here is that the relative inefficiency of biological "machines" such as rubisco is almost always compensated for by "massive redundancy". The schematic diagrams of bacterial flagella, drawn like engineering designs, illustrate the "average" arrangement of such structures. In any given bacterium, the actual structures only approximate this ideal structure. However, given large numbers of "approximations" of the "ideal" structures, biological processes proceed with fairly high efficiency.

This model of biological efficiency, in which irreducible randomicity is compensated for by massive redundancy, is, of course, the underlying organizing concept in evolution by natural selection. An irreducibly stochastic generator of phenotypic variation (the so-called "engines of variation"; see http://evolutionlist.blogspot.com/2007/10/rm-ns-creationist-and-id-strawman.html for a list) is coupled with a probabilistic "filter" that preserves and reproduces only those phenotypic variations that on the average result in continued function. Or, as the guys with whom I used to work on road construction used to say, "you ain't building a Swiss watch". The roads we drive on, like the biological systems of which we are composed, are only approximations of what could be called "ideal designs".

The dispute between evolutionary biologists (EBers) and ID supporters (IDers) is between EBers who see biological systems as being constructed and operated "from the bottom up", with irreducible random/stochastic variation woven in at all levels, and IDers who see biological systems as being designed "from the top down", with no genuine random/stochastic variation at all. This dispute between "incommensurate worldviews" is a very old one in western culture (see http://evolutionlist.blogspot.com/2006/02/incommensurate-worldviews.html for more). Personally, I don't really see much hope for a resolution of this dispute, given its long-standing nature and wildly divergent underlying assumptions. Allen_MacNeill
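To put rough numbers on the "low per-molecule rate compensated by copy number" argument, here is a minimal back-of-the-envelope sketch. The rates and copy numbers are illustrative assumptions loosely based on the figures quoted above (about three reactions per second for rubisco versus thousands for a typical enzyme), not measured values:

    import random

    RUBISCO_RATE = 3.0              # reactions per molecule per second (illustrative)
    TYPICAL_ENZYME_RATE = 10_000.0  # reactions per molecule per second (illustrative)

    def aggregate_flux(rate_per_molecule, copy_number):
        # expected total reactions per second for the whole pool of enzymes
        return rate_per_molecule * copy_number

    # a huge pool of slow enzymes matches a small pool of fast ones
    print(aggregate_flux(RUBISCO_RATE, 5_000_000))     # 15,000,000 reactions/s
    print(aggregate_flux(TYPICAL_ENZYME_RATE, 1_500))  # 15,000,000 reactions/s

    # "stochastic mass action": each molecule fires at random, but the pooled
    # output of many molecules fluctuates very little around its expected value
    def simulate_pool(rate, n_molecules, dt=0.001, steps=1000):
        p = rate * dt  # chance a given molecule reacts during one small time step
        return sum(sum(random.random() < p for _ in range(n_molecules))
                   for _ in range(steps))

    print(simulate_pool(RUBISCO_RATE, n_molecules=2000))  # close to 3 * 2000 * 1 s = 6000

The simulated pool comes out near 6,000 reactions over one simulated second even though any single molecule's behavior over that second is highly variable, which is the redundancy-smooths-out-randomness point being made above.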
Upright, Is a flagellum a clock or a cloud then? And I would note that the clue is in your own words.
the same idealized images that are in all the science books that biology students are taught from.
Students should learn about things in such a manner, in the first instance. If they choose to go on to study in more detail, they'll quickly learn that those images were idealized images not fully representing the item in question. And that things are somewhat messier than their first textbook makes out. We both know that the real thing looks nothing like the image at the top of this blog post.
He then links to the same image (showing none of the internal structure) as you have done
There are similar images of internal structure, and *none* of them look like the image at the top of this blog. If you can link to such an image, then please do so.
He obviously then hopes no one notices how desperate he sounds.
I think you've misunderstood my intent. I'm not making any claim one way or the other about the design or otherwise of the bac flag; I'm asking, in the context of the blog post we're both commenting on: is the bac flag a cloud or a clock? Does your answer in fact depend on what image you look at? Ena Sharples
Ena, Are you channeling Mark Perakh over here? Professor Perakh, unable to answer ID arguments, foamed at the mouth about the header image on this page. It was an eloquent argument to be sure. It went something like this: creationists like Behe and Dembski are liars, so they use the same idealized images that are in all the science books that biology students are taught from. He posits the warning that idealized images (like the ones which appear in all the science books in order to help students understand the structure) are likely to give someone the idea that there is a recognizable structure. He then links to the same image (showing none of the internal structure) as you have done, and then goes on to say that all flagella are unique (with their individual irregularities) just as all other organisms are individually irregular. He obviously then hopes no one notices how desperate he sounds. It was interesting to see a scientist ignore detail. Apparently, there was too much of it. Upright BiPed
You might say that the flagellum is the Haeckel's embryo of ID. Adel DiBagno
gingoro, Barry, I think trying to make things simple cuts both ways. For example, if you look at the top of this blog you see an image of a flagellum. Looks like a machine alright, I'll give you that. Must have been designed. Yet if you look at an actual picture rather than an idealized version it looks rather different, as can be seen here http://en.wikipedia.org/wiki/File:Chlamydomonas_TEM_09.jpg So is the flagellum a clock or a cloud? It seems to depend on what picture you look at. Ena Sharples
Good post, Barry. Yes, many subjects are complex and not well understood, and trying to make them simple is often an error, as it makes them too simple. Dave W gingoro
