Uncommon Descent Serving The Intelligent Design Community

2000 views this week for Granville Sewell’s new vid on YouTube


Evolution is a natural process running backward. Here:

Illustration seems to have worked better than explanation, Sewell says.

Follow UD News at Twitter!

Comments
OT:
Baja California Timelapses - video (speaks a tension between time and timelessness that brings a holiness to mind and eye) http://vimeo.com/11892211
bornagain77
March 12, 2012 at 07:12 PM PDT
GD: Thank you for taking time to respond. Let me take up several points:

1] Refrigeration systems -- these can be over unity, but are constrained by the third law of thermodynamics, so we cannot create a zero K heat sink and get away from the second law that way. But more to the point, they are classic examples of irreducibly complex, organised, integrated counterflow systems manifesting design. Indeed, they are also examples of FSCO/I and of the known source thereof. Design.

2] But if heat flows out entropy goes down -- indeed, per Clausius' inequality, but the still warm pond etc. are cases where heat is flowing IN from the sun, and there is no credible spontaneous source for the organised complexity required for metabolic subsystems joined to a self-replicating facility. And, in the molecular scale environment of a chemical system in a still warm pond, there is no engine that is likely to overcome the forces that drive diffusion. This is not crystallisation or mere polymerisation, but organisation of nanotech machines made from specifically sequenced polymers, with all the complexity of a petroleum refinery and chemical plant, and more.

3] But the issue of getting to organisation in the teeth of the forces driving diffusion is irrelevant -- sorry, the forces driving diffusion are going to be acting in a still little pond, or a volcanic vent or the like. And far from equilibrium systems are irrelevant to this issue, especially where the required chemistry is vastly endothermic and the strong tendency will be to break down, not build up, complex functional polymers. (Cf. Chs 8 - 9, TMLO for their discussion, just as a start.)

4] Microstate issues are irrelevant -- on the contrary, your distraction on the case of CO being refrigerated towards absolute zero and running into, broadly speaking, the sort of quantum effects that earned two Nobel Prizes for liquid He studies is what is irrelevant.
Again, the issue is that you want to spontaneously move to a specific kind of functionally organised framework, with coded information storage, execution machines, metabolism and self-replication, through the forces that drive diffusion and the forces that drive ordinary chemical reactions. Nowhere is there the faintest trace. The side-track is a red flag that you have no answer on the main issue.

5] You do not understand information and entropy -- really now. Kindly observe again Wiki's admission against interest on the subject (especially the clip from a certain Gilbert N. Lewis), now that the debates over the informational view of statistical thermophysics are beginning to settle down:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . .
in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
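The bracketed aside above equates the base-two Gibbs entropy with the minimum number of yes/no questions needed to specify the microstate, given the macrostate. A minimal toy sketch of that equivalence (the microstate count is my illustrative assumption, not a figure from the thread):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (2019 SI value)

def entropy_from_microstates(W):
    """For a macrostate compatible with W equally likely microstates:
    thermodynamic entropy S = k ln W, and the same quantity expressed
    informationally as log2(W) yes/no questions needed to pin down
    the exact microstate."""
    return K_B * math.log(W), math.log2(W)

# Toy macrostate: 2**10 = 1024 equally likely microstates
S, questions = entropy_from_microstates(2**10)
print(questions)  # 10.0 -- ten yes/no answers identify the microstate
```

Note how tiny S is in J/K compared with the information measure, which is the "numerical smallness of Boltzmann's constant" point made in the quoted passage.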
. . . in short, entropy per the Boltzmann and Gibbs metrics and Shannon Information, is in effect a measure of missing information on the specific microstate [equivalent to freedom to take up any of a set of accessible microstates], once we have specified the macrostate. Using s = k ln w, w is the number of ways mass and energy may be distributed at micro level consistent with a macrostate. Have you read the summary deduction here in context, in my always linked, following Robertson as noted? (I do not want to reproduce it here, for length.) Let me clip this much from Harry Robertson:
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. 
Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
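Robertson's Eqn 6 above, H({pi}) = -C [SUM over i] pi*ln pi, can be checked numerically for the two limiting cases he describes: complete information (one pi equal to unity) and least information (a uniform distribution). A small sketch with C = 1:

```python
import math

def shannon_H(probs, C=1.0):
    """Robertson's Eqn 6: H({p_i}) = -C * SUM_i p_i ln p_i.
    Terms with p_i = 0 contribute nothing, since x ln x -> 0."""
    return -C * sum(p * math.log(p) for p in probs if p > 0)

# Complete information: one outcome certain, so H = 0
h_certain = shannon_H([0.0, 1.0, 0.0, 0.0])

# Least information: uniform over 4 outcomes, H = ln 4 (the maximum)
h_uniform = shannon_H([0.25] * 4)

print(h_certain == 0.0, abs(h_uniform - math.log(4)) < 1e-12)  # True True
```

Setting C = k (the Boltzmann constant) and taking the pi over energy microstates turns this H into the thermodynamic entropy, which is exactly the identification the quote describes.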
In short, entropy can legitimately be understood in informational terms. Those terms are then relevant to the unweaving of diffusion challenge that confronts those who hope to spin cells out of molecular noise in warm little ponds or the like. I should clip Shapiro's acid observation on the hopelessness of that game, in his recent Sci Am article:
The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
6] "You seem to be under the impression that this means that entropy behaves the way you'd expect information to behave, i.e. requiring intelligence to create, not just being a matter of mere heat and energy, etc." -- have you really read what I did say? If so, why did you put words into my mouth that do not belong there, to make me out to be an idiot? Did you not see that I am speaking and citing here of entropy in the thermodynamic sense, having to do with degrees of freedom and the missing information that would be required to get the microstate, knowing only the macrostate? Did you read the clip from Jaynes that says in effect just that? I therefore must complain that you have erected and knocked over a strawman, evading the real issue you need to address.

7] Rocks in deserts -- why are you ducking the actual thought exercise case I have discussed and pointed you to, in App 1 point 6: parts for a micro-jet in a vat needing to form a flyable jet spontaneously vs. by being clumped intelligently? Do you not see how the issue of addressing diffusion -- which for a rock occurs at negligible speed -- is absolutely central?

8] Remember, this thought exercise is actually close to the one carried out by pricking a cell and decanting its contents into a test tube. None of these humpty-dumpty exercises has ever led to un-weaved diffusion, for the same highly predictable reasons that the micro-jet thought exercise brings out.

Please, do a lot better than this, next time. GEM of TKI
kairosfocus
March 12, 2012 at 04:15 PM PDT
In fact, following the reasoning of Genetic Entropy and 'The First Rule', we find that the loss of morphological traits over time, for all organisms found in the fossil record, was/is so consistent that it was made into a 'scientific law':
Dollo's law and the death and resurrection of genes: Excerpt: "As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo's law, after its propounder, and is taken as a general statement that evolution is irreversible." http://www.pnas.org/content/91/25/12283.full.pdf+html
A general rule of thumb for the 'Deterioration/Genetic Entropy' of Dollo's Law as it applies to the fossil record is found here:
Dollo's law and the death and resurrection of genes ABSTRACT: Dollo's law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or "lost" developmental programs. Conversely, the reactivation of long (>10 million years)-unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints. http://www.pnas.org/content/91/25/12283.full.pdf+html

No Positive Selection, No Darwin: A New Non-Darwinian Mechanism for the Origin of Adaptive Phenotypes - November 2011 Excerpt: Hughes now proposes a model he refers to as the plasticity-relaxation-mutation (PRM) model. PRM suggests that adaptive phenotypes arise as follows: (1) there exists a phenotypically plastic trait (i.e., one that changes with the environment, such as sweating in the summer heat); (2) the environment becomes constant, such that the trait assumes only one of its states for a lengthened period of time; and (3) during that time, deleterious mutations accumulate in the unused state of the trait, such that its genetic basis is subsequently lost. ,,, But if most adaptations result from the loss of genetic specifications, how did the traits initially arise? One letter (Chevin & Beckerman 2011) of response to Hughes noted that the PRM "does not explain why the ancestral state should be phenotypically plastic, or why this plasticity should be adaptive in the first place." http://www.evolutionnews.org/2011/11/no_positive_selection_no_darwi052941.html

A. L. Hughes's New Non-Darwinian Mechanism of Adaption Was Discovered and Published in Detail by an ID Geneticist 25 Years Ago - Wolf-Ekkehard Lönnig - December 2011 Excerpt: The original species had a greater genetic potential to adapt to all possible environments. In the course of time this broad capacity for adaptation has been steadily reduced in the respective habitats by the accumulation of slightly deleterious alleles (as well as total losses of genetic functions redundant for a habitat), with the exception, of course, of that part which was necessary for coping with a species' particular environment....By mutative reduction of the genetic potential, modifications became "heritable". -- As strange as it may at first sound, however, this has nothing to do with the inheritance of acquired characteristics. For the characteristics were not acquired evolutionarily, but existed from the very beginning due to the greater adaptability. In many species only the genetic functions necessary for coping with the corresponding environment have been preserved from this adaptability potential. The "remainder" has been lost by mutations (accumulation of slightly disadvantageous alleles) -- in the formation of secondary species. http://www.evolutionnews.org/2011/12/a_l_hughess_new053881.html
In fact, since functional 'prescriptive' information is a far more stringent requirement for 'vertical' evolution than mere Shannon information, I feel very confident that your big 'if I can show evolution' will never, ever, be fulfilled:
The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/2009/v14/af/3426/3426.pdf
Further note:
General Relativity, Quantum Mechanics, Entropy, and The Shroud Of Turin - updated video http://vimeo.com/34084462
While I agree with a criticism, from a Christian, leveled against the preceding Shroud of Turin video (that God indeed needed no help from the universe in the resurrection event of Christ), I am none-the-less very happy to see that what is considered the number one problem of physicists and mathematicians today, the unification into a 'theory of everything' of what is in essence the finite materialistic world of General Relativity and the infinite Theistic world of Quantum Mechanics, does in fact seem to find a credible resolution within the resurrection event of Jesus Christ Himself. It seems almost overwhelmingly apparent to me, from the 'scientific evidence' we now have, that Christ literally ripped a hole in the finite entropic space-time of this universe to reunite infinite God with finite man. That modern science would even offer such an almost tangible glimpse into the mechanics of what happened in the tomb of Christ should be a source of great wonder and comfort for the Christian heart.
Psalms 16:10 because you will not abandon me to the grave, nor will you let your Holy One see decay. Acts 2:31 He seeing this before spake of the resurrection of Christ, that his soul was not left in hell, neither his flesh did see corruption.
Inspirational video:
Timescapes - video - from the 2010 astronomy photographer of the year http://www.timescapes.org/trailer.asp
bornagain77
March 12, 2012 at 03:31 PM PDT
Gordon, thank you for keeping it simple. Despite my deficiency in mathematics, I clearly followed what you had to say. You may appreciate this comment from Eugene S
"Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible.,,, The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else." Eugene S – UD Blogger
Gordon in your response to me, you rightly noted:
(Note: when people talk about adding energy causing an entropy decrease, they’re either just plain ignorant, or confusing entropy decrease with an increase in disequilibrium.)
And yet this has been the primary claim by many neo-Darwinists over the years on UD. Yet it is known, as you rightly pointed out, that adding energy to a system, for the vast majority of times, increases entropy, save for some very basic reactions like water desalination.
Evolution Vs. Thermodynamics - Thomas Kindell - video http://www.metacafe.com/watch/4143014
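The point above, that pumping heat into a system normally raises its entropy, can be illustrated with a back-of-envelope calculation: for a body with constant heat capacity, dS = C dT / T integrates to m c ln(T2/T1). The numbers below (1 kg of water, textbook specific heat) are my illustrative assumptions, not figures from the thread:

```python
import math

def delta_S_heating(mass_kg, c, T1, T2):
    """Entropy change of a body with constant specific heat c
    heated from T1 to T2: dS = integral of (m c / T) dT = m c ln(T2/T1)."""
    return mass_kg * c * math.log(T2 / T1)

# 1 kg of water (c ~ 4186 J/(kg*K), a textbook value) heated 300 K -> 350 K
dS = delta_S_heating(1.0, 4186.0, 300.0, 350.0)
print(dS > 0)  # True -- adding heat raised the entropy (about 645 J/K)
```

Since T2 > T1 makes the logarithm positive, adding heat always yields a positive dS here; an entropy decrease requires heat (or matter) to leave instead.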
Indeed harnessing energy intake for useful purposes, instead of destructive purposes, is a severe problem as far as thermodynamics and the maintenance of a life-form, far from thermodynamic equilibrium, is concerned:
Peer-Reviewed Articles in International Journal of Design & Nature - Casey Luskin - February, 2012 Excerpt: Truman further notes that "McIntosh has done us a major service by reminding us that energy processing in useful manners requires specialized machines." http://www.evolutionnews.org/2012/02/peer-reviewed_a056001.html The ATP Synthase Enzyme - exquisite motor necessary for first life - video http://www.youtube.com/watch?v=W3KxU63gcF4
Yet you go on to add;
For an entropy decrease, you need heat flowing out of the system, not into it. (Actually, there are many other things that’ll carry entropy out of the system — matter leaving the system is an obvious case.)
Yet does 'entropy decrease', from a thermodynamically uphill condition, as you have it envisioned, really help Darwinists with the functional information problem? The following experiment is very telling as to just how stringent the barrier of the second law is to the generation of molecules even able to store functional information:
Origin of Life: Claiming Something for Almost Nothing (RNA) Excerpt: Yarus admitted, "the tiny replicator has not been found, and that its existence will be decided by experiments not yet done, perhaps not yet imagined." But does this (laboratory) work support a naturalistic origin of life? A key question is whether a (self-replicating) molecule could form under plausible prebiotic conditions. Here's how the paper described their work in the lab to get this molecule: RNA was synthesized by Dharmacon. GUGGC = 5'-GUGGC-3' ; GCCU = 5'P-GCCU-3' ; 5'OH-GCCU = 5'-GCCU-3' ; GCCU2'dU = 5'-GCC-2'-dU; GCC = 5'-GCC-3' ; dGdCdCrU = 5'-dGdCdCU-3' . RNA GCC3'dU was prepared by first synthesizing 5'-O-(4,4'-Dimethoxytrityl)-3'-deoxyuridine as follows: 3'-deoxyuridine (MP Biomedicals; 991 mg, 0.434 mmol) was dissolved in 5 mL anhydrous pyridine and pyridine was then removed under vacuum while stirring. Solid was then redissolved in 2 mL pyridine. Dimethoxytrityl chloride (170 mg, 0.499 mmol) was dissolved in 12 mL pyridine and slowly added to 3'-deoxyuridine solution. Solution was stirred at room temperature for 4 h. All solutions were sequestered from exposure to air throughout. Reaction was then quenched by addition of 5 mL methanol, and solvent was removed by rotary evaporation. Remaining solvent evaporated overnight in a vacuum chamber. Product was then dissolved in 1 mL acetonitrile and purified through a silica column (acetonitrile elution). Final product fractions (confirmed through TLC, 1.1 hexane:acetonitrile) were pooled and rotary evaporated. Yield was 71%. Dimethoxytrityl-protected 3'dU was then sent to Dharmacon for immobilization of 3'-dU on glass and synthesis of 5'-GCC-3'-dU. PheAMP, PheUMP, and MetAMP were synthesized by the method of Berg (25) with modifications and purification as described in ref. 6. Yield was as follows: PheAMP 85%, PheUMP 67%, and MetAMP 36%.
Even more purification and isolation steps under controlled conditions, using multiple solvents at various temperatures, were needed to prevent cross-reactions. (and then in what I consider the understatement of the century) It is doubtful such complex lab procedures have analogues in nature. http://www.creationsafaris.com/crev201003.htm#20100302a
Yarus was also addressed here by Meyer and Nelson:
Can the Origin of the Genetic Code Be Explained by Direct RNA Templating? Stephen C. Meyer and Paul A. Nelson Excerpt: Although Yarus et al. claim that the DRT model undermines an intelligent design explanation for the origin of the genetic code, the model’s many shortcomings in fact illustrate the insufficiency of undirected chemistry to construct the semantic system represented by the code we see today. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2011.2/BIO-C.2011.2
David L Abel and Jack T Trevors weighed in here on the 'RNA world':
Three subsets of sequence complexity and their relevance to biopolymeric information - David L Abel and Jack T Trevors: Excerpt: Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction...No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization...It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute. http://www.biomedcentral.com/content/pdf/1742-4682-2-29.pdf
Gordon you go on to state:
Now consider: if I were to give you an example of constructive evolution (by whatever definition is appropriate)
All I can say, is that's a mighty big IF you are riding there Gordon, for even though, just as with the second law itself, some minor anomalies will be found here and there, I find the principle of Genetic Entropy to be the overriding principle governing all biological adaptations with never a violation:
Genetic Entropy - Dr. John Sanford - Evolution vs. Reality - video (notes in description) http://vimeo.com/35088933 Inside the Human Genome: A Case for Non-Intelligent Design - Pg. 57 By John C. Avise Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens."
Further notes:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
bornagain77
March 12, 2012 at 03:24 PM PDT
Thank you Gordon Davisson for that clear exposition. Worth reading twice. Cheers
CLAVDIVS
March 12, 2012 at 02:23 PM PDT
KF: My reply here may be a little disconnected, since I'm going to try to organize this by topic, which will mean taking things in a different order than you posted them. Hopefully this won't cause too much confusion.

Entropy decrease and open systems

I agree that adding energy to a system will not (except in really weird circumstances) decrease its entropy. But that doesn't mean the "entropy can decrease in open systems" argument is wrong. As I said above:
(Another note: the argument he cites from Isaac Asimov that entropy decreases on Earth can be compensated by entropy increasing in the Sun is also wrong — heat flows carry entropy, so the heat flow from the Sun to Earth actually carries entropy from the Sun to the Earth, not the other way around. The second law allows entropy to decrease on Earth because of the heat flow from Earth to deep space, which carries far more entropy away than arrives from the Sun. See my calculation here.)
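The entropy bookkeeping in the quoted note can be sketched with round numbers. The effective temperatures below (roughly 5778 K for sunlight's emission, 255 K for Earth's thermal radiation) and the heat throughput are my illustrative assumptions, not figures from the linked calculation:

```python
# Entropy carried by a heat flow Q at temperature T is Q/T.
T_SUN = 5778.0    # effective emission temperature of sunlight, K (assumed)
T_EARTH = 255.0   # effective radiating temperature of Earth, K (assumed)

Q = 1.0e17        # heat throughput, W (order-of-magnitude placeholder)

S_in = Q / T_SUN      # entropy flux arriving with sunlight, W/K
S_out = Q / T_EARTH   # entropy flux leaving as Earth's thermal radiation, W/K

# For the same Q, far more entropy leaves than arrives, so Earth
# exports entropy on net -- the point of the quoted correction.
print(S_out / S_in)   # about 22.7
```

Because the incoming heat is delivered at a much higher temperature than the outgoing heat, the same energy budget carries roughly twenty times more entropy out than in, which is what permits local entropy decreases on Earth.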
For an entropy decrease, you need heat flowing out of the system, not into it. (Actually, there are many other things that'll carry entropy out of the system -- matter leaving the system is an obvious case.) This is actually visible in the example you gave of heat flowing from subsystem A to subsystem B: A decreases in entropy while B increases (by at least as much, as required by the principle of compensation).

But a decrease in entropy is not the same as an increase in order (another of the problems I have with Sewell's paper). A rock, lying in the desert, cools off at night, and so its entropy decreases. Is it more ordered at night than in the day? Maybe in some senses, but not in any interesting way... Basically, by concentrating on entropy you wind up asking irrelevant questions, and hence getting answers that're irrelevant to what you're actually interested in (either that, or relevant answers that're wrong).

To some extent this is inevitable, since thermodynamics (and statistical mechanics) isn't about CSI, FSCI, FSCO/I, etc (although it is about Shannon information, as I'll discuss below). But that doesn't mean thermo is completely irrelevant; it just means you need to look at how far things are from equilibrium, which means looking at measures like free energy and negentropy rather than entropy. Disequilibrium is still not quite what you're actually interested in, but it's at least related: a system at equilibrium will never show any particularly interesting characteristics (CSI, etc); a system far from equilibrium might (or might not) show interesting characteristics. The second law places limits on how much disequilibrium you can get under various circumstances, and hence winds up limiting CSI, etc.

So what're the limits the second law places on disequilibrium? In an isolated system, all processes lead to overall equilibrium (although you may get local increases in disequilibrium at the expense of decreases elsewhere in the system).
In an open system with equilibrium boundary conditions, pretty much the same is true; for example, adding or removing heat at the same temperature the system's at won't increase disequilibrium. But an open system with nonequilibrium boundary conditions can import disequilibrium from the outside. The most obvious case of this is adding either nonthermal energy (called "work" in thermo jargon), or high-temperature heat. (Note: when people talk about adding energy causing an entropy decrease, they're either just plain ignorant, or confusing entropy decrease with an increase in disequilibrium.)

'Course, the Earth does have a rather large supply of high-temperature heat -- the Sun. Sunlight produces and maintains disequilibrium on Earth, providing the thermodynamically necessary prerequisite for various interesting things to happen. Does that mean that any particular interesting thing (life, CSI, etc) will appear on Earth? No, because disequilibrium is a necessary but not sufficient condition for all of these interesting things. Thermodynamics doesn't rule them out, but it doesn't rule them in either. I'll try to expand on this when I reply to Eric Anderson. You may find this rather unsatisfying... in which case, all I can say is I'm sorry, but that's as close as I can come to relating thermodynamic considerations to CSI, FSCI, etc.

Macro (thermodynamics) vs. micro (statistical mechanics) views

As I've said before, the two are just two different ways of describing the same thing. The micro view can explain why things work the way they do in the macro view, but doesn't fundamentally change the way things work or what the rules are. For an example of the close relation between the macro and micro views, consider the discovery of the residual entropy of carbon monoxide as it's cooled to absolute zero. As many substances are cooled toward absolute zero, their entropies also drop to zero.
In stat mech terms, this is because there's only (approximately) one lowest-energy state, and since entropy is proportional to the logarithm of the number of available states, the entropy must be (approx) zero. Note that this is a stat mech argument, but its conclusion applies to thermodynamic entropy as well (because thermodynamic entropy is really just a different definition for the same physical quantity as the Boltzmann-Gibbs entropy of stat mech).

So what's special about carbon monoxide? As it is cooled toward zero, its entropy doesn't vanish. This is because it forms a disordered crystal, with each carbon-oxygen molecule in a (mostly) random orientation. Since there are two possible orientations for each molecule, this gives a Boltzmann-Gibbs entropy of S = k*ln(2) per molecule (= R*ln(2) per mole). Interestingly, this was not discovered by studying the crystal structure of CO, but by measuring its heat capacity (and hence thermodynamic entropy) as it was cooled toward zero:
Early in [W.F. Giauque's] career he measured the heat capacities and heats of transition of the halogen acids from very low temperatures upward. With his careful measurements the excitations of degrees of freedom "frozen in" at very low temperatures (e.g., molecular rotation) were identified as sharp anomalies in the heat capacity. In other molecular systems, accurate heat capacity measurements allowed him to identify random molecular orientations that showed up as residual entropies, such as S = R*ln(2) for the carbon monoxide molecule, which could be oriented as C-O or O-C. The structure of ordinary ice was of special interest in this regard. Giauque expected a molecular rotation degree of freedom, while Linus Pauling proposed a tetrahedral structure for the oxygen atoms, connected by random hydrogen bonds, leading to a residual entropy S = R*ln(3/2). Giauque and Stout confirmed this value experimentally, supporting Pauling's model. Giauque used this example to convince his students of the need for careful measurement as well as the superiority of fact over speculation.
-- From a biographical memoir on Giauque by Kenneth S. Pitzer and David A. Shirley

My point here is that he was able to see microscopic disorder by measuring purely macroscopic thermal characteristics, because they're fundamentally the same thing.

The relationship between entropy and information

I agree that there is a connection, but I think you misunderstand what the connection is and what its implications are. You seem to be identifying information as the opposite of entropy, which suffers from many of the same problems as Sewell's identifying order as the opposite of information. Also, while this view fits well with Brillouin's research into Maxwell's daemon, Brillouin's work has been pretty much overturned by Landauer and Bennett's work on the subject. (BTW, I just saw this today: researchers finally did a precise test of Landauer's principle, and it came up aces. See http://www.nature.com/news/the-unavoidable-cost-of-computation-revealed-1.10186.)

I think the clearest way of describing the relationship is that thermodynamic entropy is proportional to the difference in information content of macroscopic vs. microscopic descriptions of a system's state. The section you quoted from G. N. Lewis starts badly, but I agree with the second part: "in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate."

Now, let me take a look at the implications of this. You seem to be under the impression that this means that entropy behaves the way you'd expect information to behave, i.e. requiring intelligence to create, not just being a matter of mere heat and energy, etc.
This is exactly backward: we know that entropy changes due to things like heat flows, and hence heat flows are sufficient to produce (or at least move around) information (and because of the conversion rate, even a small amount of heat corresponds to a huge amount of information). Worse, we know from analysis of Maxwell's daemon that intelligence cannot decrease entropy. Intelligence is completely irrelevant to this type of information!

Note that this is true with the statistical (microscopic) view as well as the thermodynamic (macroscopic) view. Take my rock lying in a desert. At night, it cools off, meaning that it has less thermal energy, and hence there are fewer ways that energy can be distributed among the rock's various degrees of freedom, and hence fewer microscopically distinct states that it might be in. At night, we know more about the rock's precise physical state, and we "learned" this not by any feat of intelligence or even measurement, but just by twiddling our thumbs while it lost heat to its surroundings.

BTW, in an earlier discussion about information theory (which I dropped out of partway through -- sorry about that), we were arguing about whether information (as it's defined in Shannon's statistical information theory) is necessarily meaningful, and of intelligent origin. I think the relation between Shannon information and statistical mechanics is a good illustration of my point: we can apply Shannon's theory to the information content of a system's microstate, even though that's mostly (or entirely) meaningless, random, noise. As Shannon said:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.
-- C. E. Shannon, "A Mathematical Theory of Communication"

A system's microstate isn't really a "message" and almost never has any semantic content, but is selected (at random) from a set of possibilities (the ensemble corresponding to the macrostate), and that's enough for Shannon's theory to apply.
Gordon Davisson
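The entropy-information conversion rate mentioned above can be made concrete. A minimal sketch using standard physical constants (the 300 K temperature and the 1 J of heat are illustrative assumptions, not figures from the comment):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
R = 8.314462618      # molar gas constant, J/(mol K)

# One bit of missing microstate information corresponds to k*ln(2) of entropy.
entropy_per_bit = K_B * math.log(2)          # ~9.57e-24 J/K

# CO's residual entropy: one random two-way orientation per molecule
# gives R*ln(2) per mole, the value Giauque measured.
co_residual_per_mole = R * math.log(2)       # ~5.76 J/(mol K)

# Conversely, 1 J of heat entering a reservoir at 300 K raises its entropy
# by (1/300) J/K, which corresponds to an enormous number of bits --
# the "conversion rate" point made above.
bits_per_joule_at_300K = (1.0 / 300.0) / entropy_per_bit   # ~3.5e20 bits

print(f"{entropy_per_bit:.3e} J/K per bit")
print(f"CO residual entropy: {co_residual_per_mole:.2f} J/(mol K)")
print(f"1 J of heat at 300 K ~ {bits_per_joule_at_300K:.2e} bits")
```

The huge bits-per-joule figure is why even tiny heat flows swamp any information-theoretic bookkeeping done at human scales.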
March 12, 2012
12:39 PM PDT
GD: Kindly see 8 above, which addresses the key underlying issue. KF
kairosfocus
March 12, 2012
12:31 PM PDT
Gordon:
But looking for something, not finding it, and using that as the basis for a claim that it would violate the second law is NOT valid reasoning, and is NOT done.
I agree this would be invalid. Is this what Sewell has done? I haven't looked at his work in detail, but I thought he was starting with the second law and arguing that the proposed mechanism for abiogenesis/evolution would violate the second law. This would be right along the lines you stated in your prior sentence:
. . . we show [by logic] that something would violate the second law, and from that we conclude that it doesn’t exist and that there’s no point in looking for it.
It would be strange if he is doing it backwards, and if so I'm surprised others haven't jumped on that before. Or perhaps he is taking the valid approach? Hmmm . . .
Eric Anderson
March 12, 2012
12:30 PM PDT
Gordon: "I don’t see anyone actually addressing my criticisms of Sewell’s reasoning."

Thanks, Gordon. Certainly understand on the time commitment, so no worries, but hopefully we can continue the discussion before the thread gets too old. The purpose of my questions was precisely to try and better understand your position so that I have an accurate grasp of your criticisms.
Eric Anderson
March 12, 2012
12:20 PM PDT
BA77:
Well Gordon, seeing as you seem to be claiming that thermodynamics presents no impenetrable barrier to neo-Darwinism generating the massive amounts of functional information we find bursting at the seams in life, indeed you called Dr. Sewell’s work in this area outlining that barrier ‘creationist-grade nonsense’, and seeing as Dr. Sewell directly referenced Dr. Behe’s work in support for his contention that such a barrier does exist in reality, then it would appear fully applicable that you must refute Dr. Behe’s empirical work in order to support your ‘mathematically abstract’ contention.
Um, no. Sewell didn't cite Behe as supporting his claims about thermodynamics, but just as someone [else] doubting the power of natural selection. Their arguments against evolution are logically separate from each other; either can be wrong without weakening the other. BTW, I'd expect Behe, as a biochemist, to be fairly familiar with thermodynamics. But I don't recall ever seeing him claim any thermodynamic problems with evolution. I'd be very interested to see his opinion on the subject (both evolution and thermo in general, and Sewell vs. my criticisms in particular).
Regardless of how impressed you are with your mathematics, it simply becomes a game of one-upmanship if you cannot reference the real world to support your contention. That’s just the way science works! Even Einstein had to submit General Relativity to empirical verification (infamous eclipse) before it would start to be accepted. Why in blue blazes should your contention be any different?
A claim that something violates the second law of thermodynamics is an inherently theoretical claim, and it really must stand or fall based on theory, not experiment. Essentially, saying "X would violate the second law" is not (directly) a claim that X doesn't happen, but a claim that IF X happened, it would prove that the second law is wrong (or, in different terms, that X cannot happen without disproving the second law).

Let me give you an example to illustrate this: a refrigerator pumping heat Q_cold from something at temperature T_cold to something at temperature T_hot must consume at least Q_cold * (T_hot-T_cold) / T_cold of work; a refrigerator that does the job with less is said to violate the second law. The reason this would (if it existed) violate the second law is that if you had such a refrigerator, you could take it, two heat reservoirs (one at T_hot and one at T_cold), and a zero-entropy power source, connect them all together appropriately, and isolate them from everything else. As the refrigerator runs, it would cause the entropy of that isolated system to decrease, thus showing that the second law is wrong.

(Actually, since we're pretty sure the second law is correct, the way it actually gets used is more-or-less the opposite: we show [by logic] that something would violate the second law, and from that we conclude that it doesn't exist and that there's no point in looking for it. But looking for something, not finding it, and using that as the basis for a claim that it would violate the second law is NOT valid reasoning, and is NOT done.)

Now consider: if I were to give you an example of constructive evolution (by whatever definition is appropriate), would your reaction be "Wow, I guess that doesn't violate the second law", or "Wow, I guess the second law is wrong"? If you actually had shown that it violates the second law, your reaction should be that the second law must be wrong.
If it's that it doesn't violate the second law, that pretty much means that you haven't actually shown a conflict.
Gordon Davisson
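The refrigerator limit stated above can be checked numerically. A sketch (the heat quantity and temperatures are illustrative assumptions):

```python
def min_work(q_cold, t_hot, t_cold):
    """Minimum work to pump heat q_cold from t_cold up to t_hot --
    the limit stated above: q_cold * (t_hot - t_cold) / t_cold."""
    return q_cold * (t_hot - t_cold) / t_cold

def violates_second_law(work_used, q_cold, t_hot, t_cold):
    """A refrigerator doing the job with less work than the limit
    would (if it existed) violate the second law."""
    return work_used < min_work(q_cold, t_hot, t_cold)

# Pumping 1000 J out of a 275 K compartment into a 300 K room:
w_min = min_work(1000.0, t_hot=300.0, t_cold=275.0)
print(f"minimum work: {w_min:.1f} J")                    # about 90.9 J
print(violates_second_law(80.0, 1000.0, 300.0, 275.0))   # True: impossible fridge
print(violates_second_law(120.0, 1000.0, 300.0, 275.0))  # False: allowed
```

Note the claim structure matches the comment: the function doesn't say an 80 J fridge exists or not; it says that such a fridge, if built, would disprove the second law.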
March 12, 2012
11:01 AM PDT
Hi, all. Sorry I haven't replied until now (responsibility to correspondents is not one of my strong points…) I'll try to get to everyone, but it may take a little while (I write slowly, and like to think for a while first -- I've actually already mostly finished replies to BA77 and KF; UP and Eric may have to wait a while).

Before I start actually replying to what everyone's said, I'd like to point out what nobody's said: I don't see anyone actually addressing my criticisms of Sewell's reasoning. Does anyone argue that the claims he makes in his paper are correct, despite the counterexamples I offered?
Gordon Davisson
March 12, 2012
10:59 AM PDT
EA: Interesting further thoughts. I see your suggestion that discussion of thermodynamics should:
. . . be laid out in terms of classical thermodynamics, as my sense is that is where the materialists are coming from. Discussing “informational thermodynamics” is largely in the realm of information, rather than underlying physics, and would be a great follow-on discussion, but I’m afraid it might cloud things if we don’t first have an initial understanding of the basic physical issue
This is actually the heart of the problem: strawman distortions. The classical thermodynamics picture is a macro-view (originally formulated before the atomic molecular picture was fully accepted; let us recall that Einstein's 1905 Brownian motion paper was taken as the first clear direct warrant for this view!). Statistical mechanics (with a side glance at kinetic theory) lays out the micro-level underpinnings that warrant the basically empirically derived classical laws of energy conservation and entropy increase. The third law, on not being able to get to absolute zero in a finite number of refrigeration cycles, is not a big player. The zeroth law, of course, is in reality a definition of temperature equivalence. Let me clip on Clausius:
2] But open systems can increase their order: This is the "standard" dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is: a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system -- one that allows neither energy nor matter to flow in or out -- is instructive, given the "closed" subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now: Isol System: | | (A, at Thot) --> d'Q, heat --> (B, at T cold) | | b] Now, we introduce entropy change dS >/= d'Q/T . . . "Eqn" A.1 c] So, dSa >/= -d'Q/Th, and dSb >/= +d'Q/Tc, where Th > Tc d] That is, for system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . "Eqn" A.2 e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY. f] The key point is that when raw energy enters a body, it tends to make its entropy rise [skip excursus on what "raw energy" means -- one of the points of pedantic objection raised -- by way of a marbles in a box model] . . . . g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but -- from the above -- negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B' below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod's "chance + necessity" [cf also Plato's remarks] only.)
So, if we are to understand the issue being raised on spontaneously getting to complex, functionally organised clusters of nanomachines based on C-chemistry in aqueous mediums, and implementing metabolic automata with code-based von Neumann replicators, we need to take up the atomic-molecular statistical view. Unfortunately, this view is significantly more challenging than even the complex, partial differential equation based analysis used in classically oriented thermo-d courses. So, it is just simply not as familiar [though solid state electronics is rooted in it, e.g. how a bipolar junction transistor works]. And worse, the Szilard-Brillouin-Jaynes approach is not the conventional approach to stat mech.

That is the context in which my always-linked note (through my handle), app 1, addresses the matter. I start with the Clausius derivation of the second law, and highlight what it implies for systems that import heat, then bridge to the molecular view by using a qualitative model of a gas. From this we can see the foundation for several thermodynamic phenomena, especially diffusion. I then use diffusion to show how the sort of spontaneous assembly of complex nanotech systems in view essentially is a demand for unweaving of diffusion. (Notice how this directly parallels Sewell's assessment that diffusion is a fundamental insight into what is going on with entropy.)

At this point, there is usually a silly -- but too often effective [i.e. successfully manipulative] -- talking point about the "Hoyle Fallacy." My quick and dirty reply to this is that when you are a live donkey kicking a dead lion, you would be well advised to think again. In more detailed comment, I point out that a functionally specific, complex organised entity can be reduced to a metric of information implied.
Once we are beyond 500 bits, the number of possible configs is so large that the atomic level Planck time quantum state resources of our solar system across its typically estimated lifespan to date could not sample more than about 1 in 10^48 of the configs, so a blind process on the scope of our solar system would be comparable to blindly drawing a 1-straw sized sample of a cubical haystack 3 1/2 light days across. Even if our solar system out to Pluto lurked therein, we would all but certainly come up with straw. This is of course just a basic bit of sampling theory, and does not rely on any specific probability estimates. (A distinction that usually is lost on the objectors, who somehow cannot seem to see why it is that inferential statistics often pivot on the premise that on typical gamuts of resources, a random sample is unlikely to come from a special zone of a distribution that fits with what a purposeful choice would plausibly do.)

That all brings us back to the real challenge: some energy conversion mechanisms are plausible candidates for spontaneous self-ordering, e.g. a hurricane. Others, that pivot on FSCO/I, simply are not. Nor is this exactly news; in the very first ID technical book, TMLO, c. 1984, Thaxton, Bradley and Olsen speak in just these terms as they close off Ch 7:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today. Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . .
They then proceed to a thermodynamic analysis, which is classical in general structure but points to some of the informational thermod issues using Brillouin. The result is that for a very generous prebiotic soup, the equilibrium concentration of the model protein they have in mind is much less -- by hundreds of orders of magnitude -- than one molecule for the observed cosmos. That analysis has been publicly accessible in print for over 25 years, and is not particularly hard to follow if you have had a first course in thermodynamics [which just eliminated 99% of people I suspect]. Since then they have elaborated this in terms of CSI [read on down in my note], and others -- Yockey et al -- have made similar analyses.

All of them boil down to the same thing that the unweaving of diffusion example I have given [the nanobots thought exercise from point 6 under the linked above] points to. It is essentially observationally impossible for there to be a spontaneous emergence of a complex code based self replicating metabolic C chemistry molecular nanotech system from any plausible prebiotic environment. The ONLY empirically warranted causal explanation for such is intelligent action, and the analyses on blind searches of config spaces -- essentially, a molecular card shuffling version of the monkeys at keyboards type thought exercises -- easily rationalise why that is.

So, the empirical tests, the observation, the world of common sense experience and the config space challenge analysis all point to the same conclusion: FSCO/I is a strongly warranted signature of design. That is, design theory has a serious point. But, if you are a priori committed to a system of thought that requires that the utterly implausible "must" have happened, you will find any and every artifice to make it seem plausible that such spontaneous generation is highly probable and plausible. The problem is NOT thermodynamics.
It is a priori materialist ideology, and manipulation that seeks to blind with science. That is why Lewontin's let-the-cat-out-of-the-bag quote is so revealing:
To Sagan, as to all but a few other scientists, it is self-evident [[--> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question , confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . ] that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [[--> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ] [If someone wants to play at "quote-mining accusation distraction and atmosphere poisoning games, kindly follow the link to see why such is ill-founded]
I hope that helps. GEM of TKI

PS: About a year ago, the second ID Foundations series post was on thermodynamics issues.
kairosfocus
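The 500-bit sampling arithmetic cited above can be reproduced directly. A sketch (the ~10^102 sample budget for the solar system's Planck-time quantum states is an assumption inferred from the 1-in-10^48 figure in the comment, not a value stated there):

```python
import math

BITS = 500
log10_configs = BITS * math.log10(2)     # log10 of 2^500, ~150.5

# Assumed sampling budget: ~10^102 states (Planck-time quantum states of
# the solar system's atoms over its estimated lifespan, per the argument).
log10_budget = 102

log10_fraction = log10_budget - log10_configs
print(f"2^500 ~ 10^{log10_configs:.1f} configurations")
print(f"fraction sampled ~ 10^{log10_fraction:.1f}")  # about 1 in 10^48
```

Under that assumed budget the sampled fraction works out to roughly 10^-48.5, matching the "about 1 in 10^48" claim; the conclusion is only as strong as the assumed budget and the blind-sampling model.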
March 10, 2012
04:15 AM PDT
Does anyone know of other sites that do a smaller number of posts focused more on the scientific issues relating to ID that would allow things to be fleshed out in more detail over a period of weeks or months?
Telic Thoughts
Joe
March 9, 2012
06:30 PM PDT
BTW, one of my “pet peeves” about UD is that there are so many news stories and new things posted that the really interesting threads inevitably get buried within a few days.
As mostly a reader of UD, I agree. Don't get me wrong, I like reading the News entries a lot -- short and sweet comin' fast and furious. But I'd like to see them in a sidebar column, where they'd no doubt be read and commented on as much or more than now.
jstanley01
March 9, 2012
05:27 PM PDT
kf, thanks. The materialist talking point about an 'open system' somehow solving thermodynamic issues is absolutely incorrect and is one of the, frankly, stupidest things anyone could possibly argue. On that point I am completely in agreement with Sewell and, of course, with what you are describing above. As a result, as it relates to the 'open system' business, I've long felt that Sewell was spending an awful lot of energy on something that is so blazingly obvious it needn't occupy a lot of attention. I admit I haven't studied his work closely enough to comment on any other aspects he may be trying to address. What I'm attempting to ascertain is whether there is something more to the materialist position than the 'open system' retort, which is why I asked Gordon to clarify his position. It seemed to me, from his comment #1 on this thread, that (i) he might be making a more nuanced argument that might merit attention, and (ii) he felt Sewell's approach was incorrect in a broader sense than just the open system question, which I'd also like to understand. Recently, by referring us to the Pross article, Matzke (likely unintentionally) implied that he acknowledges there is a live thermodynamic issue to be addressed, which is certainly Pross' position as well. I'm hoping Gordon will share his thoughts in response to my questions in #12. I'm genuinely interested in the basic underlying terminology and definitions so that I can have an intelligent conversation with someone who thinks, say, that the thermodynamic issue is bunk. What is the thermodynamic argument as they understand it? Do they believe it is an issue? If not, why not? Only once that is out on the table can we even have a rational discussion about whether the thermodynamic issue can be solved by purely natural and material processes. ----- Incidentally, do you have posting privileges on UD? 
I would be genuinely interested in a separate post that lays out briefly the nature of the thermodynamic issue (or issues) so that we can make sure we are all on the same page as to what arguments are even being made. (I'd also humbly suggest that it be laid out in terms of classical thermodynamics, as my sense is that is where the materialists are coming from. Discussing "informational thermodynamics" is largely in the realm of information, rather than underlying physics, and would be a great follow-on discussion, but I'm afraid it might cloud things if we don't first have an initial understanding of the basic physical issue.) Anyway, I'm not trying to create work, but I think this is a topic that merits more discussion and if you wanted to do a post on the topic I would eagerly participate.

-----

BTW, one of my "pet peeves" about UD is that there are so many news stories and new things posted that the really interesting threads inevitably get buried within a few days. I understand that there is a lot of news out there and that the moderators want to keep things fresh, but it is frustrating to barely be able to get into the meat of a discussion only to have it die because a dozen new posts have sprung up in the meantime. Does anyone know of other sites that do a smaller number of posts focused more on the scientific issues relating to ID that would allow things to be fleshed out in more detail over a period of weeks or months?
Eric Anderson
March 9, 2012
04:54 PM PDT
compensating ENTROPY
kairosfocus
March 9, 2012
03:28 PM PDT
EA: Pardon, but a lot of this is going in circles -- for decades now; combined with blinding with science. The standard TO etc talking point is that if a system is open to energy and mass flows, then thermodynamics issues are off the table so long as somewhere else, compensating energy is built up.

That's why I take it back to Clausius' first example, used to ground the second law. If we look, we will see that the energy-importing subsystem increases entropy. This is explained on opening up more ways for energy and mass to be arranged at micro level. And, lo and behold, this subsystem is obviously not isolated. This ties back to Sewell's observation that if an arrangement is unlikely to spontaneously happen in an isolated system, it does not become likely just because it has been opened up, save if that has been in A WAY THAT MAKES IT NOW NOT UNLIKELY.

When we are discussing nanomachine complexes that form a metabolic automaton with digital code driven von Neumann self replicator capacity, we are dealing with information-rich entities that are precisely vanishingly unlikely to spontaneously emerge. And, until you have viable code based self replication, you cannot properly appeal to natural selection, so called, on differential reproductive success. The underlying problem, in short terms, is that a priori materialism has predetermined the answers being given, based on what is perceived to be what "must" have happened. KF
kairosfocus
March 9, 2012
03:26 PM PDT
kairosfocus: I agree with you that CSI is the key issue and thought of mentioning it, but wanted to keep my question to Gordon focused. We know for a fact, for example, that CSI can counteract/overcome thermodynamic constraints, as our machines do it all the time. So in a sense thermodynamic constraints are a secondary issue, but then again so are the other laws of physics and chemistry. So if thermodynamic constraints do present a challenge to the formation of far-from-equilibrium systems (such as in the OOL context Pross is trying to address), then it would seem to be relevant and fair game for discussion, just as any other law of physics or chemistry needs to be taken into account.

It seems to me that there is a great deal of confusion regarding the term "thermodynamics" and that many times people are talking past each other. So I'm hoping to understand Gordon's viewpoint: does he think thermodynamic constraints are irrelevant or just that Sewell's approach doesn't make sense? I'd also like to understand the basis for the oft-repeated complaint that thermodynamics is just a "creationist talking point." I can't ascertain if the terms are well defined enough to know if that is a valid charge or if it is, in turn, just a "materialist talking point."
Eric Anderson
March 9, 2012
12:50 PM PDT
EA: You have raised a significant question, though I think the pivotal one is the origin of the functionally specific complex organisation and associated information involved in living systems, and associated with the clusters of nanotech machines in the cell. This takes on a thermodynamic colour on the premise that entropy is intertwined with informational issues. (As in, information at the relevant nanomolecular levels imposes constraints on arrangements of components. That this info may be tied to functional organisation of structures points to how FSCO/I is reflective of constraint on possibilities relative to the number of ways that say diffusion forces would potentially arrange the same atoms.) GEM of TKI
kairosfocus
March 9, 2012
04:42 AM PDT
Gordon: I haven't followed Sewell's arguments closely enough to have an opinion on his specific approach, but wanted to step back for a moment and ask a basic question. In another thread recently ("The First Gene") Nick Matzke pointed us to work by Addy Pross and his idea of kinetic states as being a partial solution to the origin of life problem. Much of the specific issue Pross is trying to address is how life could arise given thermodynamic considerations. Pross puts it this way in another paper I discussed on that thread:
. . . living systems are far-from-equilibrium systems that must constantly tap into some external source of energy in order to maintain that far-from-equilibrium state. Failure to obtain a continuing supply of energy necessarily leads the animate system toward equilibrium—to death. Inanimate systems on the other hand, though not necessarily in an equilibrium state, do at all times tend toward that lower Gibbs energy state. Clearly, the thermodynamic pattern of behavior expressed by animate as opposed to inanimate systems is quite different and raises the question as to how, from a thermodynamic point of view, the emergence of energy-consuming, far-from-equilibrium systems would arise in the first place.
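Pross's contrast between inanimate systems tending toward lower Gibbs energy and energy-consuming, far-from-equilibrium living systems rests on the standard relation dG = dH - T*dS. A minimal sketch, with purely illustrative numbers (not drawn from the paper):

```python
def gibbs_delta(delta_h_kj: float, temp_k: float, delta_s_kj_per_k: float) -> float:
    """Gibbs free-energy change, dG = dH - T*dS, in kJ/mol."""
    return delta_h_kj - temp_k * delta_s_kj_per_k

# Purely illustrative values: an endothermic step (dH > 0) that also
# reduces entropy (dS < 0) has dG > 0, i.e. it is non-spontaneous and
# requires a continuing external energy supply -- Pross's point about
# what distinguishes animate from inanimate systems.
dG = gibbs_delta(delta_h_kj=50.0, temp_k=298.0, delta_s_kj_per_k=-0.1)
print(f"dG = {dG:.1f} kJ/mol ({'spontaneous' if dG < 0 else 'non-spontaneous'})")
```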
My question, in regard to thermodynamics, is whether you believe the question of how life came to have a "far-from-equilibrium" state is an open and important one in science. Do you agree it is an important issue and just find Sewell's approach lacking, or do you not believe it is a real issue to begin with? Thanks,
Eric Anderson
March 8, 2012, 11:56 PM PDT
F/N: And, yet another day without any serious response. KF
kairosfocus
March 8, 2012, 11:22 PM PDT
F/N: I find it further interesting that another day has passed without any serious attempt to back up the attempted dismissal of the relevance of the [statistical form of the] second law of thermodynamics to the spontaneous FSCO/I generation challenge. KF
kairosfocus
March 7, 2012, 11:22 PM PDT
F/N: I find the silence over the past day on this topic interesting. Let us see what the objectors will have to say. KF
kairosfocus
March 6, 2012, 11:38 PM PDT
GD: I think the basic issue here is that the macro-level summaries of classical thermodynamics have, ever since Gibbs, Boltzmann et al, been traced to the statistics of micro-particles. In that context, on the issue of the statistical weights of macro-identifiable clusters of micro-states, we have abundant reason to see that for systems of relevant scope, spontaneous access to functionally specific, complex organised states with associated information is predictably unobservable on the gamut of the solar system or observed cosmos, on analysis similar to that which shows why perpetual motion machines will fail with all but certainty. Failing a serious addressing of that microstate view, the complaints in 1 above are little more than tangents leading away to pummelled strawmen. Diffusion, BTW, is a core thermodynamic process, and its analysis at the micro level is illustrative of what is going on. Let's just summarise/illustrate (I use a nanobots thought exercise here): the suggested spontaneous assembly of living self-replicating forms from some prebiotic soup or other is effectively a demand that diffusion be unwound. The movie-running-backwards illustration is dead on. Why not address my observations on the matter here on and here on, noting this key admission from Wikipedia:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . 
in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
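The Landauer point quoted above has a simple quantitative form: erasing one bit of information must dissipate at least kT ln 2 of heat. A minimal sketch using the standard constants (the room-temperature figure is a textbook illustration, not taken from the quote):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit_joules(temp_k: float, bits: float) -> float:
    """Minimum heat dissipated to erase `bits` of information
    at temperature `temp_k`, per Landauer: E >= bits * k * T * ln 2."""
    return bits * K_B * temp_k * math.log(2)

# Erasing a single bit at room temperature (~298 K):
e_bit = landauer_limit_joules(298.0, 1.0)
print(f"minimum cost to erase one bit: {e_bit:.2e} J")  # ~2.85e-21 J
```

The tiny size of this number, set by the smallness of kB, is exactly why the quote notes that thermodynamic entropy changes dwarf anything seen in data compression or signal processing.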
(I seem to recall posing this challenge recently, e.g. here in recent reply to ES from my bookmarks. Unless the cluster of issues in the linked can be soundly addressed, the dismissive objections fail. And, the first two links are in the note that is linked from my handle; i.e. it is background for every comment I have made at UD.) Unless I see a serious response to these issues, what I will be forced to conclude is that the remarks I am seeing on this are little more than recycling of ill-founded Talk Origins etc. talking points. GEM of TKI
kairosfocus
March 6, 2012, 02:37 AM PDT
'I don’t claim that peer review is perfect, or even anywhere close to it, just that some sort of nonsense filter is needed.)' Er... that's what peer review is supposed to be, at the very barest minimum. Maybe not perfect, but a nonsense filter. You don't seem too comfortable with the conventional canons of empirical science. Is there any way these good people can help you? Perhaps there is something about the scientific method that puzzles you, which they could elucidate?
Axel
March 5, 2012, 04:39 PM PDT
Gordon, In a post last October, I was having a conversation with Larry Moran about information and you made a comment in which you stated:
I haven’t seen a definition [of information] which can be shown to be present in DNA and also cannot be produced without intelligence.
After I discussed the issue and gave several examples, you did not re-enter the conversation. Would you like to make a comment now? A link to the issues raised with Dr. Moran is here.
Upright BiPed
March 5, 2012, 03:21 PM PDT
Well Gordon, seeing as you seem to be claiming that thermodynamics presents no impenetrable barrier to neo-Darwinism generating the massive amounts of functional information we find bursting at the seams in life (indeed, you called Dr. Sewell's work in this area, outlining that barrier, 'creationist-grade nonsense'), and seeing as Dr. Sewell directly referenced Dr. Behe's work in support of his contention that such a barrier does exist in reality, then it would appear that you must refute Dr. Behe's empirical work in order to support your 'mathematically abstract' contention: that is, produce work showing, contrary to Dr. Sewell, that neo-Darwinism can do what you claim. Regardless of how impressed you are with your mathematics, it simply becomes a game of one-upmanship if you cannot reference the real world to support your contention. That's just the way science works! Even Einstein had to submit General Relativity to empirical verification (the famous eclipse observations) before it would start to be accepted. Why in blue blazes should your contention be any different?
bornagain77
March 5, 2012, 03:06 PM PDT
Hi, BA77. I'll decline your invitation to debate Abel and Behe's work; you can call me an effete high-level math critic if you like, but the fact is that I'm not particularly familiar with either's work, and I prefer not to debate subjects I don't know well. I will, however, point out that they appear to be entirely irrelevant to the points I was making:
- If Behe and/or Abel are right and evolution is incapable of producing information/function/IC/whatever, that doesn't mean it violates the second law of thermodynamics. It just means that constructive evolution joins the long list of things that are impossible for reasons other than thermodynamics (along with e.g. planets following triangular orbits, hydrogen gas at room temperature and pressure spontaneously fusing into helium, etc.).
- Even if it turns out that constructive evolution is forbidden by the second law, Dr. Sewell's explanation of why it's impossible is still wrong. Even if his conclusion is correct, he's using bogus premises and bad logic to reach it.
Gordon Davisson
March 5, 2012, 02:40 PM PDT
Well Gordon, since you are such a critic, declaring Dr. Sewell's paper nothing but 'creationist-grade nonsense', I'm sure you will have no problem providing actual empirical evidence that supports your claim that entropy presents no impenetrable barrier to neo-Darwinism. Actual empirical evidence that falsifies Abel's null hypothesis for functional information generation, as well as actual empirical evidence showing Dr. Behe's 'first rule', which Dr. Sewell referenced in the video, to be wrong. Or are you just a high-level math 'critic' who can't be bothered to dirty your hands with actual empirical evidence?
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net 'fitness gain' within a 'stressed' environment i.e. remove the stress from the environment and the parent strain is always more 'fit') http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010 http://intelligentdesign.podomatic.com/player/web/2010-12-23T11_53_46-08_00 Where's the substantiating evidence for neo-Darwinism? https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit
Null Hypothesis;
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. 
This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29 The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8 ) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work Is Life Unique? David L. Abel - January 2012 Concluding Statement: The scientific method itself cannot be reduced to mass and energy. Neither can language, translation, coding and decoding, mathematics, logic theory, programming, symbol systems, the integration of circuits, computation, categorizations, results tabulation, the drawing and discussion of conclusions. The prevailing Kuhnian paradigm rut of philosophic physicalism is obstructing scientific progress, biology in particular. There is more to life than chemistry. All known life is cybernetic. 
Control is choice-contingent and formal, not physicodynamic. http://www.mdpi.com/2075-1729/2/1/106/ "Nonphysical formalism not only describes, but preceded physicality and the Big Bang Formalism prescribed, organized and continues to govern physicodynamics." http://www.mdpi.com/2075-1729/2/1/106/ag The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The Law of Physicodynamic Incompleteness - David L. Abel - August 2011 Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility. http://www.scitopics.com/The_Law_of_Physicodynamic_Incompleteness.html
bornagain77
March 5, 2012, 01:33 PM PDT