Uncommon Descent Serving The Intelligent Design Community

On the impossibility of replicating the cell: A problem for naturalism


I have sometimes had the idea that the best way for Intelligent Design advocates to make their case would be to build a giant museum replicating the complexity of the cell on a large scale, so that people could see for themselves how the cell worked and draw their own conclusions. Recently I came across an old quote from biochemist Michael Denton’s Evolution: A Theory in Crisis (Adler and Adler, 1985) which put paid to that idea, but which raised an interesting philosophical puzzle for people who adhere to scientific naturalism – which I define here as the view that there is nothing outside the natural world, by which I mean the sum total of everything that behaves in accordance with scientific laws [or laws of Nature]. Here is the first part of the quote from Denton, which I had seen before (h/t Matt Chait):

To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometres in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the portholes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.

We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation. We would wonder even more as we watched the strangely purposeful activities of these weird molecular machines, particularly when we realized that, despite all our accumulated knowledge of physics and chemistry, the task of designing one such molecular machine – that is one single functional protein molecule – would be completely beyond our capacity at present and will probably not be achieved until at least the beginning of the next century. Yet the life of the cell depends on the integrated activities of thousands, certainly tens of thousands, and probably hundreds of thousands of different protein molecules.

We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late twentieth-century technology.

What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory which would have one capacity not equalled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours. To witness such an act at a magnification of one thousand million times would be an awe-inspiring spectacle. (pp. 328 ff.)

Reading this passage vindicated my belief that a museum of the cell would be a great way to promote ID. “If we build it, they will come,” I thought. But there was more to follow, which I hadn’t read before. It turns out that we can’t build a replica of the cell, down to the atomic level:

To gain a more objective grasp of the level of complexity the cell represents, consider the problem of constructing an atomic model. Altogether a typical cell contains about ten million million atoms. Suppose we choose to build an exact replica to a scale one thousand million times that of the cell so that each atom of the model would be the size of a tennis ball. Constructing such a model at the rate of one atom per minute, it would take fifty million years to finish, and the object we would end up with would be the giant factory, described above, some twenty kilometres in diameter, with a volume thousands of times that of the Great Pyramid.

Copying nature, we could speed up the construction of the model by using small molecules such as amino acids and nucleotides rather than individual atoms. Since individual amino acids and nucleotides are made up of between ten and twenty atoms each, this would enable us to finish the project in less than five million years. We could also speed up the project by mass producing those components in the cell which are present in many copies. Perhaps three-quarters of the cell’s mass can be accounted for by such components. But even if we could produce these very quickly we would still be faced with manufacturing a quarter of the cell’s mass which consists largely of components which only occur once or twice and which would have to be constructed, therefore, on an individual basis. The complexity of the cell, like that of any complex machine, cannot be reduced to any sort of simple pattern, nor can its manufacture be reduced to a simple set of algorithms or programmes. Working continually day and night it would still be difficult to finish the model in the space of one million years. (Emphases mine – VJT.)
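Denton's arithmetic here is easy to spot-check. The short sketch below is a rough check, not part of Denton's text: the atom count is his; the eight-hour workday is an assumption introduced here, since round-the-clock assembly at one atom per minute gives roughly twenty million years rather than fifty.

```python
# Back-of-the-envelope check of Denton's model-building arithmetic.
# The atom count comes from the passage above; the 8-hour workday is
# an assumption used to reconcile the nonstop rate with Denton's
# "fifty million years" figure.

ATOMS = 10**13  # "ten million million atoms" in a typical cell

def years_to_build(units, minutes_per_day=24 * 60):
    """Years to place `units` pieces at one piece per minute."""
    return units / minutes_per_day / 365

print(round(years_to_build(ATOMS) / 1e6))          # ~19 million years, nonstop
print(round(years_to_build(ATOMS, 8 * 60) / 1e6))  # ~57 million years, 8 h/day
print(round(years_to_build(ATOMS / 15) / 1e6, 1))  # ~1.3 million years using
                                                   # ~15-atom building blocks
```

The last line matches Denton's "less than five million years" for assembly from small molecules of ten to twenty atoms each.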

But there’s more, as Matt Chait points out (emphasis mine):

And let me add my two cents to this astounding picture. The model that you would complete a million years later would be just that, a lifeless static model. For the cell to do its work this entire twenty kilometer structure and each of its trillions of components must be charged in specific ways, and at the level of the protein molecule, it must have an entire series of positive and negative charges and hydrophobic and hydrophilic parts all precisely shaped (at a level of precision far, far beyond our highest technical abilities) and charged in a whole series of ways: charged in a way to find other molecular components and combine with them; charged in a way to fold into a shape and maintain that most important shape, and charged in a way to be guided by other systems of charges to the precise spot in the cell where that particle must go. The pattern of charges and the movement of energy through the cell is easily as complex as the pattern of the physical particles themselves.

Also, Denton, in his discussion, uses a tennis ball to stand in for an atom. But an atom is not a ball. It is not even a ‘tiny solar system’ of neutrons, protons and electrons, as we once thought. Rather, it has now been revealed to be an enormously complex lattice of forces connected by a bewildering array of utterly minuscule subatomic particles including hadrons, leptons, bosons, fermions, mesons, baryons, quarks and anti-quarks, up and down quarks, top and bottom quarks, charm quarks, strange quarks, virtual quarks, valence quarks, gluons and sea quarks…

And let me remind you again, that what we are talking about, a living cell, is a microscopic dot, and thousands of these entire factories, including all the complexity that we discussed above, could fit on the head of a pin. Or, going another way, let’s add to this twenty-kilometre model of breathtaking complexity another one hundred trillion equally complex factories, all working in perfect synchronous coordination with each other; which would be a model of the one-hundred-trillion-celled human body, your body, that thing that we lug around every day and complain about. That model would, spread laterally at the height of one cell at this magnification, blanket the entire surface of the earth four thousand times over, every part of which would contain pumps and coils and conduits and memory banks and processing centers; all working in perfect harmony with each other, all engineered to an unimaginable level of precision, and all there to deliver to us our ability to be conscious, to see, to hear, to smell, to taste, and to experience the world as we are so used to experiencing it that we have taken it, and the fantastic mechanisms that make it possible, for granted.

My question is, “Why don’t we know this?” What Michael Denton has written and I have added to is a perfectly accurate, easily intelligible, non-hyperbolic view of the cell. Why is this not taught in every introductory biology class in our schools?

Based on the foregoing, I think it’s fair to say that we’ll never be able to construct a computer model of the cell either, down to the atomic level: the computing resources required would just be too huge. And in that case, it will never be scientifically possible to model a natural process (or a set of processes) and demonstrate that it could have given rise to the cell – or even show that it had a greater than 50% probability of doing so.
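To put rough numbers behind the "too huge" claim, here is an illustrative estimate, not a rigorous proof. Only the atom count comes from the article; the femtosecond timestep, per-atom cost, and exascale machine speed are assumptions introduced here for scale.

```python
import math

# Illustrative cost estimate for an all-atom simulation of one cell
# replication cycle. Only the atom count is from the article; the
# timestep, per-atom cost, and machine speed are rough assumptions.

atoms = 10**13               # atoms in a typical cell (Denton's figure)
timestep_s = 1e-15           # ~1 femtosecond per molecular-dynamics step
sim_time_s = 2 * 3600        # "a few hours" for one replication cycle
flops_per_atom_step = 100    # crude force-evaluation cost per atom per step

steps = sim_time_s / timestep_s                   # ~7.2e18 timesteps
total_flops = steps * atoms * flops_per_atom_step

exaflops = 1e18              # one exaFLOP/s, roughly today's top machines
years = total_flops / exaflops / (3600 * 24 * 365)
print(f"~10^{round(math.log10(years))} years of exascale compute")
```

Even under these generous assumptions the run would take on the order of a hundred million machine-years, which is the sense in which an atomically exact model is out of reach.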

So here’s my question for the skeptics: if we have no hope of ever proving the idea that the cell could have arisen through unguided natural processes, or even showing this idea to be probably true, then how can we possibly be said to know for a fact that this actually happened? Knowledge, after all, isn’t merely a true belief; it has to be a justified true belief. What could justify the claim that abiogenesis actually occurred?

It gets worse. We cannot legitimately be said to know that scientific naturalism is true unless we know that life could have arisen via unguided processes. But if we don’t know the latter, then we cannot know the former. Ergo, scientific naturalism, even if it were true, can never be known to be true.

There’s more. Scientific naturalists are fond of claiming that there are only two valid sources of knowledge: a priori truths of logic and mathematics, which can be known through reason alone; and a posteriori empirical truths, which are known as a result of experience and/or scientific inquiry. The statement that abiogenesis occurred without intelligent guidance on the primordial Earth is neither a truth of logic and mathematics nor a truth which can be demonstrated (or even shown to be probable) via experience and/or scientific inquiry. And since we cannot know that scientific naturalism is true unless we know that abiogenesis occurred without intelligent guidance, it follows that the truth of scientific naturalism cannot be known through either of the two avenues of knowledge postulated by the skeptic. So there must be some third source of knowledge (intuition, perhaps?) that the skeptic has to fall back on. Yeah, right.

And please, don’t tell me, “Well, scientists have explained X, Y and Z, so it’s only a matter of time before they can explain life.” First, that’s illicit reasoning: induction over a set of observed things is problematic enough (black swans, anyone?), but induction over a set of scientific theories, concocted during a time-span of just 471 years – the Scientific Revolution is commonly held to have begun in 1543 – is absolutely ridiculous. And second, as I’ve argued above, there’s good reason to believe that our computing resources will never be up to the task of showing that the first living cell could have arisen via a natural, unguided process.

One last question: if we cannot know that scientific naturalism is true or even probably true, then why should we believe it?

Checkmate, naturalists? Over to you.

Querius said: "I haven’t seen any specific challenge to his math here..." You need to get out more. There's more to the world than this stifling blog. Pachyaena
No, Behe's math was correct both on the malaria resistance mutation example he used in the book and again more recently on chloroquine-resistant malaria. I haven't seen any specific challenge to his math here except from a guy who claimed he was a statistician, but failed a simple probability question, later claiming his wrong answer was a typo. If any of you want to demonstrate why the binomial theorem doesn't apply here, be my guest. -Q Querius
Querius said: "Instead of trying to find fault with me, why don’t you examine Behe’s prediction about malaria in The Edge of Evolution for yourself?" Paging Diogenes! Querius, Behe's claims regarding malaria have been crushed to a pulp. Pachyaena
Apart from what Zac pointed out, you should be aware that Behe has moved on from probability to landscape search and now to ‘dematerialized information’.
Are you confusing Behe with Dembski? keith s
Zachriel said: "As for chloroquine resistance, it actually requires multiple mutations, yet it still evolves." Joe said: "The question is did it evolve via blind watchmaker processes?" Joe, you should already be able to easily answer that question and demonstrate irrefutable evidence to support your answer. You have claimed many times that you can determine the difference between ID and what you call "nature operating freely". You have claimed many times that unguided/blind-watchmaker evolution is the cause of diseases and deformities. You have also claimed many times that all mutations are front-loaded/guided except the ones that cause disease, deformities, and/or death. You have claimed many times that you have the tools/methods/models/hypotheses/evidence to determine and demonstrate design vs. "nature operating freely" (by your claims and definition, the same thing as 'unguided/blind-watchmaker' evolution at least in the case of biology). You've claimed that models can only be made if something is thoroughly understood, so you apparently must thoroughly understand the mutations that result in chloroquine resistance and all other mutations of any type that have ever occurred in every biological entity that has ever existed, otherwise your claims are just lies. So, Mr. Know-it-all-IDer, let's see what you've got on the mutations that result in chloroquine resistance. How, when, where, why, and who. Show your work and don't skimp on the details. Pachyaena
Querius @ 323 Apart from what Zac pointed out, you should be aware that Behe has moved on from probability to landscape search and now to 'dematerialized information'. Let's see how Behe brings in information and consciousness together and manages to dematerialize design and weave it into biological processes. His next book should be interesting. Me_Think
Querius: Behe’s prediction was right on the money with later experimental results. Behe, Edge of Evolution, Free Press 2007. White, Antimalarial drug resistance, Journal of Clinical Investigation 2004: "This suggests that the per-parasite probability of developing resistance de novo is on the order of 1 in 10^20 parasite multiplications." Nor does Behe's simplistic model come close to matching the actual evolutionary pathway of chloroquine resistance. Drug cocktails to defeat quickly evolving diseases were around long before Behe. His contribution of pointing out that rare events are rare is negligible. Zachriel
LOL, Me_Think. Yes, I freely "admit" to the binomial theorem. However, there's no "greater than" in this case. Behe's prediction was right on the money with later experimental results. Instead of trying to find fault with me, why don't you examine Behe's prediction about malaria in The Edge of Evolution for yourself? It's a good read and deals with an area of Behe's own research about this terrible disease. -Q Querius
Querius @ 321
Me_Think @ 319: The odds of a single outcome is always small. How about the probability of the total > or = 7? It is 58.3% (0.583). Querius: Why should chance processes differentiate between a physical chance process such as a die roll and a biological chance process such as a mutation?
Thanks for the own goal. You just admitted that while the exact outcome is improbable, outcomes greater than or equal to the exact number are highly probable in biology too! Me_Think
Joe, This is simply another unsubstantiated assertion as we frequently see coming from darwinists. Why should chance processes differentiate between a physical chance process such as a die roll and a biological chance process such as a mutation? In the end they're both physics, and neither is magic. Behe's predictions were right on the money. His detractors now have to explain why he had such a lucky guess. It will become harder to refute Behe if he keeps making more "lucky" guesses. -Q Querius
the point being dice and coins are irrelevant in calculating biological process and structures’ odds.
How do you know that? And what else do you have seeing that you don't have any evidence? Joe
desperate darwinists, who on this forum have insisted that the binomial theorem ...For example, the odds for rolling a 7 on two fair, six-sided dice is 1/6, regardless of whether both dice are rolled at once or individually.
You can calculate using a Discrete Uniform distribution too. The odds of a single outcome is always small. How about the probability of the total > or = 7? It is 58.3% (0.583), the point being dice and coins are irrelevant in calculating biological process and structures' odds. Me_Think
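The dice figures traded in this exchange can be checked by direct enumeration; a minimal sketch:

```python
# Enumerate the 36 equally likely outcomes of two fair six-sided dice
# to check the probabilities quoted in this exchange.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))        # all 36 pairs
p_exactly_7 = sum(a + b == 7 for a, b in outcomes) / 36
p_at_least_7 = sum(a + b >= 7 for a, b in outcomes) / 36

print(p_exactly_7)    # 6/36  = 1/6
print(p_at_least_7)   # 21/36 ≈ 0.583
```

So the exact-sum figure of 1/6 and the at-least-7 figure of 21/36 (about 58.3%) are both correct; the two sides are simply computing different events.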
It doesn’t model even basic population genetics, such as how a beneficial mutation spreads in a population.
Population genetics doesn't seem to model the real world. That would make it suspect wrt your claims. Joe
As for chloroquine resistance, it actually requires multiple mutations, yet it still evolves.
The question is did it evolve via blind watchmaker processes? And if that adaptation is so difficult just think about all others that require multiple mutations- and that the mutations have to occur in some specified sequence. Then you are out of luck (pun intended). Joe
Querius: Although widely criticized, the confirmation of Behe’s back–of–the–envelope prediction is about as good as they come. Pointing out that rare events are rare is hardly a profound insight. As for chloroquine resistance, it actually requires multiple mutations, yet it still evolves. That's because some of the mutations are beneficial. See actual research Summers et al., Diverse mutational pathways converge on saturable chloroquine transport via the malaria parasite’s chloroquine resistance transporter, Proceedings of the National Academy of Sciences 2014. Zachriel
Bornagain77, Although widely criticized, the confirmation of Behe's back-of-the-envelope prediction is about as good as they come. Naturally, there's a misunderstanding of "simultaneous" by desperate darwinists, who on this forum have insisted that the binomial theorem is time-dependent, which it's not. For example, the odds for rolling a 7 on two fair, six-sided dice is 1/6, regardless of whether both dice are rolled at once or individually. Failing this, they've resorted to claiming that the experiment was not equivalent to Behe's prediction, and then introducing a variety of interesting details, all of which probably were factors in the final experimental outcome (and are worthy of further study), but irrelevant to Behe's vindication, which must indeed be a bitter pill for them to swallow. So much for their claims of being willing to follow the data! -Q Querius
of related note to 309:

Waiting Longer for Two Mutations - Michael J. Behe
Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years) (Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number 'is 5 million times larger than the calculation we have just given' using their model (which nonetheless "using their model" gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.,,, The difficulty with models such as Durrett and Schmidt's is that their biological relevance is often uncertain, and unknown factors that are quite important to cellular evolution may be unintentionally left out of the model. That is why experimental or observational data on the evolution of microbes such as P. falciparum are invaluable,,,
http://www.discovery.org/a/9461

Here Dr. Behe responds to Durrett and Schmidt's "attempted rebuttal" in a 5-part essay:
Waiting Longer for Two Mutations, Parts 1-5
http://behe.uncommondescent.com/2009/03/

summary at the end of part 5 is here:
Waiting Longer for Two Mutations, Part 5 - Michael J. Behe - March 2009
Excerpt: "as I show above, when simple mistakes in the application of their model to malaria are corrected, it agrees closely with empirical results reported from the field that I cited. This is very strong support that the central contention of The Edge of Evolution is correct: that it is an extremely difficult evolutionary task for multiple required mutations to occur through Darwinian means, especially if one of the mutations is deleterious. And, as I argue in the book, reasonable application of this point to the protein machinery of the cell makes it very unlikely that life developed through a Darwinian mechanism."
http://behe.uncommondescent.com/2009/03/waiting-longer-for-two-mutations-part-5/

Don't Mess With ID (Overview of Behe's 'Edge' and Durrett and Schmidt's paper at the 20:00 minute mark) - Paul Giem - video
http://www.youtube.com/watch?v=5JeYJ29-I7o

Thou Shalt Not Put Evolutionary Theory to a Test - Douglas Axe - July 18, 2012
Excerpt: "For example, McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time (i.e. the mutations cannot be 'neutral'). Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (greater than 100 million years). My challenge to McBride, and everyone else who believes the evolutionary story of human origins, is not to provide the list of mutations that did the trick, but rather a list of mutations that can do it. Otherwise they're in the position of insisting that something is a scientific fact without having the faintest idea how it even could be." Doug Axe PhD.
http://www.evolutionnews.org/2012/07/thou_shalt_not062351.html

Dr. Behe's number, (1 in 10^20), has now been confirmed in the lab:
An Open Letter to Kenneth Miller and PZ Myers - Michael Behe - July 21, 2014
http://www.evolutionnews.org/2014/07/show_me_the_num088041.html

"The Edge of Evolution" Strikes Again - 8-2-2014 - by Paul Giem - video
https://www.youtube.com/watch?v=HnO-xa3nBE4
bornagain77
Querius, they even attribute 'godlike' power to material particles in the Many Worlds interpretation, i.e. whenever we try to observe a particle in the double slit, instead of the wave function simply collapsing, the materialists, in order to avoid the Theistic implications of wave collapse, postulate that the material particle, with apparently all the power of God to create universes at will, creates a quasi-infinite number of parallel universes. If that is not the mother of all ad hoc hypotheses nothing is! :) bornagain77
Oh, and they're obviously not happy that undifferentiated protoplasm gave way to something that's turning out to be spectacularly complex. Too much to explain how this could develop in only a billion years or less (probably much less). Thanks again for posting the references to Radin's experiments. Naturally, I wonder whether this is unique to human consciousness or a quality of all consciousness. -Q Querius
Bornagain77, I don't think material determinists know what to do with the Radin experiments, just as they can't seem to comprehend Gödel's incompleteness theorems or Chaos theory, probably wishing that they all would just go away. Conversely, they seem to attribute magical, even godlike qualities to random interactions, all Victorian fantasies that are crumbling under the weight of scientific progress! -Q Querius
Zach, you state: "empirical evidence of how specific beneficial mutations spread through a population"

That was not my claim (although there are problems with even that claim of yours); my claim was 'coordinated beneficial mutations' building up functional complexity, i.e. negative epistasis:

Epistasis between Beneficial Mutations - July 2011
Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations.
https://uncommondescent.com/epigenetics/darwins-beneficial-mutations-do-not-benefit-each-other/

Mutations: when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
http://www2.cnrs.fr/en/1867.htm?theme1=7

A Serious Problem for Darwinists: Epistasis Decreases Chances of Beneficial Mutations - November 8, 2012
Excerpt: A recent paper in Nature finds that epistasis (interactions between genetic changes) is much more pervasive than previously assumed. This strongly limits the ability of beneficial mutations to confer fitness on organisms.,,, It takes an outsider to read this paper and see how disturbing it should be to the consensus neo-Darwinian theory. All that Darwin skeptics can do is continue to point to papers like this as severe challenges to the consensus view. Perhaps a few will listen and take it seriously.
http://www.evolutionnews.org/2012/11/epistasis_decr066061.html

Moreover, population genetics quickly breaks down as the level of complexity being dealt with increases, and thus population genetics certainly cannot provide testable 'predictions' as to how the unfathomed integrated complexity in life came about:

The next evolutionary synthesis: from Lamarck and Darwin to genomic variation and systems biology - Bard - 2011
Excerpt: If more than about three genes (nature unspecified) underpin a phenotype, the mathematics of population genetics, while qualitatively analyzable, requires too many unknown parameters to make quantitatively testable predictions [6]. The inadequacy of this approach is demonstrated by illustrations of the molecular pathways that generates traits [7]: the network underpinning something as simple as growth may have forty or fifty participating proteins whose production involves perhaps twice as many DNA sequences, if one includes enhancers, splice variants etc. Theoretical genetics simply cannot handle this level of complexity, let alone analyse the effects of mutation.
http://www.biosignaling.com/content/pdf/1478-811X-9-30.pdf
bornagain77
bornagain77: you have no observational evidence to substantiate your Darwinian claims for ‘coordinated beneficial mutations’ building up functional complexity in the first place We have direct empirical evidence of how specific beneficial mutations spread through a population, and this can be modeled with population genetics. Zachriel
This is the 'prediction', i.e. null hypothesis, for you, as a materialist, to try to falsify:

The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
Excerpt: "If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.",,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: "No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone."
http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html

Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013
Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information, whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here).,,,

,,, Here are the two seminal papers on conservation of information that I've written with Robert Marks:
"The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486
"Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061
per ENV

As to "you spewed a list of citations": actually, the many citations that I 'spewed', which you referred to so disparagingly, are to real world empirical evidence that undermines your belief in neo-Darwinian evolution. I will take real world evidence over computer models, (which are notorious for being inaccurate to the real world), any day!
bornagain77
Zach, "Please define information." Can you cite any OBSERVATIONAL evidence of unguided material processes creating any of the following types of functional information?

Can We Falsify Any Of The Following Null Hypotheses (For Information Generation)?
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal-Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
http://mdpi.com/1422-0067/10/1/247/ag

The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009
Excerpt of conclusion, pg. 42: "To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on 'wish-fulfillment,' we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: 'Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.' A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis."
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2662469/

To get a bit more technical, Dr. Don Johnson, who taught computer science for over 20 years, explains the difference between Shannon Information and Prescriptive Information, as well as 'the cybernetic cut', in the following podcast:
Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast
http://intelligentdesign.podomatic.com/entry/2010-01-27T12_37_53-08_00
Programming of Life - Information - Shannon, Functional & Prescriptive - video
https://www.youtube.com/watch?v=h3s1BXfZ-3w

Zach, you have no observational evidence to substantiate your Darwinian claims for 'coordinated beneficial mutations' building up functional complexity in the first place (negative epistasis, Lenski's LTEE after 50,000 generations), and yet you want an 'intelligently designed' computer model to prove that completely unguided material processes can build functional complexity/information??? Logic certainly does not seem to be your strong suit!!!

The Scientific Method - Richard Feynman - video
Quote: "If it disagrees with experiment, it's wrong. In that simple statement is the key to science. It doesn't make any difference how beautiful your guess is, it doesn't matter how smart you are who made the guess, or what his name is... If it disagrees with experiment, it's wrong. That's all there is to it."
https://www.youtube.com/watch?v=OL6-x0modwY

"The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it "The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain.
http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010
http://intelligentdesign.podomatic.com/player/web/2010-12-23T11_53_46-08_00
Michael Behe: Intelligent Design - interview on the radio program 'The Mind Renewed'
https://www.youtube.com/watch?v=H9SmPNQrQHE

Where's the substantiating evidence for neo-Darwinism?
https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit
bornagain77
bornagain77: why are you not concerned with the fact that material processes cannot create information?

Please define information.

bornagain77: As to computer simulations of population genetics, why not cite any real world evidence to counter Sanford's claim?

It doesn't model even basic population genetics, such as how a beneficial mutation spreads in a population. That's because the program has a significant flaw which virtually eliminates the effect of any beneficial mutation.

bornagain77: I cited plenty showing him to be correct!

No, you spewed a list of citations, most of which don't seem relevant. Please provide one citation that actually uses the software to make quantitative predictions, and we'll take a look.
Zachriel
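For readers wanting to see what "how a beneficial mutation spreads in a population" means concretely, a minimal deterministic sketch (a textbook haploid selection model, not the software under discussion) shows the classic logistic sweep:

```python
def next_freq(p, s):
    """One generation of selection on a beneficial allele at frequency p,
    with selection coefficient s (deterministic haploid model)."""
    return p * (1 + s) / (1 + p * s)

# A mutation conferring a 1% fitness advantage, starting at 0.1% frequency
p, s, generations = 0.001, 0.01, 0
while p < 0.99:
    p = next_freq(p, s)
    generations += 1
# The allele sweeps from 0.1% to 99% in roughly 1,150 generations;
# halving s roughly doubles the sweep time, which is why weakly
# beneficial mutations are easily lost to drift in finite populations.
```

The update rule increases the log-odds of the allele by exactly ln(1 + s) each generation, which is the standard result a simulation of this kind should reproduce.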
And a few pages of related information:
http://www.fractalforums.com/let's-collaborate-on-something!/a-(fractal)-theory-of-everything/
Gary S. Gaulin
And Albert Einstein, much like his 'greatest blunder' of introducing the cosmological constant into general relativity to reflect an eternal universe instead of a universe that had a beginning, has been shown to be completely wrong in his EPR postulation of hidden variables:

Quantum Entanglement - Bohr and Einstein - The Failure Of Local Realism - Materialism - Alain Aspect - video
https://vimeo.com/98206867

Einstein vs quantum mechanics, and why he'd be a convert today - June 13, 2014
Excerpt: In a nutshell, experimentalists John Clauser, Alain Aspect, Anton Zeilinger, Paul Kwiat and colleagues have performed the Bell proposal for a test of Einstein's hidden variable theories. All results so far support quantum mechanics. It seems that when two particles undergo entanglement, whatever happens to one of the particles can instantly affect the other, even if the particles are separated!
http://phys.org/news/2014-06-einstein-quantum-mechanics-hed-today.html

Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009
Excerpt: Scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
http://www.sciencedaily.com/releases/2009/07/090722142824.htm

Of related note:
The visible comes into existence from the invisible: Quantum Physics and Relativity 2 - Antoine Suarez PhD - video
https://www.youtube.com/watch?v=jxuOE2Bo_i0&list=UUVmgTa2vbopdjpMNAQBqXHw

Consciousness and the double-slit interference pattern: six experiments - Radin - 2012
Abstract: A double-slit optical system was used to test the possible role of consciousness in the collapse of the quantum wavefunction. The ratio of the interference pattern's double-slit spectral power to its single-slit spectral power was predicted to decrease when attention was focused toward the double slit as compared to away from it. Each test session consisted of 40 counterbalanced attention-toward and attention-away epochs, where each epoch lasted between 15 and 30 s (seconds). Data contributed by 137 people in six experiments, involving a total of 250 test sessions, indicate that on average the spectral ratio decreased as predicted (z = -4.36, p = 6×10^-6). Another 250 control sessions conducted without observers present tested hardware, software, and analytical procedures for potential artifacts; none were identified (z = 0.43, p = 0.67). Variables including temperature, vibration, and signal drift were also tested, and no spurious influences were identified. By contrast, factors associated with consciousness, such as meditation experience, electrocortical markers of focused attention, and psychological factors including openness and absorption, significantly correlated in predicted ways with perturbations in the double-slit interference pattern. The results appear to be consistent with a consciousness-related interpretation of the quantum measurement problem.
http://www.deanradin.com/papers/Physics%20Essays%20Radin%20final.pdf
bornagain77
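As a sanity check on the statistics quoted in that abstract, the reported p-values follow from the z scores under a one-tailed normal test (the predicted direction being a decrease). A minimal sketch using only the standard library; the function is the standard normal CDF, not anything from the paper itself:

```python
import math

def p_one_tailed(z):
    """One-tailed p-value for a predicted decrease: P(Z <= z)
    under the standard normal distribution, i.e. Phi(z)."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

p_experimental = p_one_tailed(-4.36)  # ~6.5e-6; abstract reports p = 6×10^-6
p_control = p_one_tailed(0.43)        # ~0.67; abstract reports p = 0.67
```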
To that I can sum up my well-formed opinion (having personally modeled QM) by saying Albert Einstein was right. QM is incomplete. Very. See:
Yves Couder Explains Wave/Particle Duality via Silicon Droplets [Through the Wormhole]
https://www.youtube.com/watch?v=W9yWv5dqSKk
Gary S. Gaulin
"I'm going to talk about the Bell inequality, and more importantly a new inequality that you might not have heard of called the Leggett inequality, that was recently measured. It was actually formulated almost 30 years ago by Professor Leggett, who is a Nobel Prize winner, but it wasn't tested until about a year and a half ago (in 2007), when an article appeared in Nature, that the measurement was made by this prominent quantum group in Vienna led by Anton Zeilinger, which they measured the Leggett inequality, which actually goes a step deeper than the Bell inequality and rules out any possible interpretation other than consciousness creates reality when the measurement is made." - Bernard Haisch, Ph.D., Calphysics Institute, astrophysicist and author of over 130 scientific publications.

Preceding quote taken from the following video:
Quantum Mechanics and Consciousness - A New Measurement - Bernard Haisch, Ph.D (shortened version of entire video, with notes in description of video)
http://vimeo.com/37517080

Nonlocal "realistic" Leggett models can be considered refuted by the before-before experiment - 2008 - Antoine Suarez, Center for Quantum Philosophy
Excerpt (page 3): The independence of quantum measurement from the presence of human consciousness has not been proved wrong by any experiment to date.,,, "nonlocal correlations happen from outside space-time, in the sense that there is no story in space-time that tells us how they happen."
http://www.quantumphil.org/SuarezFOOP201R2.pdf

A simple approach to test Leggett's model of nonlocal quantum correlations - 2009
Excerpt of Abstract: Bell's strong sentence "Correlations cry out for explanations" remains relevant,,, we go beyond Leggett's model, and show that one cannot ascribe even partially defined individual properties to the components of a maximally entangled pair.
http://www.mendeley.com/research/a-simple-approach-to-test-leggetts-model-of-nonlocal-quantum-correlations/

Quantum theory survives latest challenge - Dec 15, 2010
Excerpt: Even assuming that entangled photons could respond to one another instantly, the correlations between polarization states still violated Leggett's inequality. The conclusion being that instantaneous communication is not enough to explain entanglement and realism must also be abandoned. This conclusion is now backed up by Sonja Franke-Arnold and colleagues at the University of Glasgow and University of Strathclyde, who have performed another experiment showing that entangled photons exhibit stronger correlations than allowed for particles with individually defined properties - even if they would be allowed to communicate constantly.
http://physicsworld.com/cws/article/news/2010/dec/15/quantum-theory-survives-latest-challenge

In the following article, physics professor Richard Conn Henry is quite blunt as to what quantum mechanics, specifically Leggett's inequality, reveals to us about the 'primary cause' of our 3D reality:

Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - Johns Hopkins University
Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist).
(Dr. Henry's referenced experiment and papers: "An experimental test of non-local realism" by S. Gröblacher et al., Nature 446, 871, April 2007; "To be or not to be local" by Alain Aspect, Nature 446, 866, April 2007. Leggett's inequality: verified, as of 2011, to 120 standard deviations.)
http://henry.pha.jhu.edu/aspect.html
bornagain77
http://en.wikipedia.org/wiki/Leggett_inequality
"The Leggett inequalities, named for Anthony James Leggett, who derived them, are a related pair of mathematical expressions concerning the correlations of properties of entangled particles."
Even if it is someday demonstrated to be part of the process, "entangled particles" are a different phenomenon from "consciousness". Gary S. Gaulin
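For readers trying to follow the inequality talk above: the Leggett inequality takes real machinery to state, but the older Bell inequality (in its CHSH form) can be checked numerically in a few lines. A minimal sketch, assuming only the standard singlet-state correlation E(a, b) = -cos(a - b) from quantum mechanics:

```python
import math

def E(a, b):
    """Quantum correlation between spin measurements along angles a and b
    on a singlet pair: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
# Local hidden-variable theories require S <= 2; quantum mechanics
# predicts S = 2*sqrt(2) ≈ 2.83, which is what experiments observe.
```

This shows only the Bell/CHSH violation; Leggett's inequality constrains a different (nonlocal but "realistic") class of models, which is why the experiments cited above are presented as going "a step deeper".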