Readers will recall that Sal Cordova doesn’t agree with Granville Sewell’s doubts about Darwin based on the Second Law of Thermodynamics. His argument is here.
Sewell has replied here.
Now, Rob Sheldon weighs in:
As a physicist, I have a few more problems with the 2nd law, which, as Sal points out, is a problem of definitions. I agree with Sal that few people understand either thermo or entropy, and as a consequence make a hash of both pro and con arguments. For the sake of expediency, I think Sal is suggesting that it is better to avoid this topic because it invariably ends up in the weeds.
On the other side, Granville thinks that the weeds are still an interesting place to be. Surely if we avoided all difficult topics, we’d never make progress in anything. We can make progress if we are not afraid to plough the untilled turf, and so Granville’s work is both original and interesting (to quote Eugene Wigner).
Here’s the definitional challenge of the two meanings of 2nd Law:
1) Thermodynamics: deals with macroscopic states of things like hot bricks and boiling water. This was all worked out in the early 1800s, and is essential for the operation of steam engines all the way up to nuclear power plants. Entropy was defined as a macroscopic state variable, in terms of temperature and heat.
No chance of this being wrong, because if it were ever possible to beat the system, you’d have a perpetual motion machine and, besides being filthy rich, you could take over the world.
This is where the phrase “2nd law” applies like an iron rule. Unfortunately, cells are not steam engines, and the origin of the first cell is not a problem related to steam engines, so Sal is pleading for prudence in using the 2nd law in this fashion.
2) Statistical Mechanics: Boltzmann and Gibbs redefined macroscopic states of matter into microscopic states of matter, where the entropy of a system is now based on counting microscopic states. As Sal has pointed out, the entropy is now defined in terms of how many different ways there are to arrange, say, nitrogen and carbon dioxide molecules. No “heat” is involved, merely statistical combinatorics. This ostensibly has little to do with heat and dynamics, so it is called “statistical mechanics”.
3) The Equivalence: Boltzmann’s famous equation (engraved on his tombstone), S = k ln W, is merely an exchange-rate conversion. If W is lira and S is dollars, then k ln() is the conversion of the one to the other, which is empirically determined. Boltzmann’s constant “k” is a semi-empirical conversion number that made Gibbs’s “stat mech” definition work with the earlier “thermo” definition of Lord Kelvin and co.
Despite this being something as simple as a conversion factor, you must realize how important it was to connect these two. When Einstein connected mass to energy with E = mc^2, we could suddenly talk about mass-energy conservation, atom bombs and baby universes, whereas before Einstein they were totally different quantities. Likewise, by connecting thermodynamics and statistical mechanics, the hard rules derived from thermo can now be applied to the statistics of counting permutations.
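For concreteness, here is a minimal sketch of the “exchange rate” idea in Python (the 100-particle system is a toy example, not anything from the thread):

```python
import math

# Boltzmann's constant: the "exchange rate" between a count of
# microstates and thermodynamic entropy in J/K.
k_B = 1.380649e-23

# Toy system: 100 two-state particles, so W = 2^100 equally likely microstates.
W = 2**100

S = k_B * math.log(W)   # statistical-mechanical entropy, in thermo units (J/K)
H = math.log2(W)        # the same count expressed in bits: 100 bits

print(f"S = {S:.2e} J/K")            # ~9.6e-22 J/K
print(f"S = k_B * ln(2) * {H:.0f}")  # k ln() is just the currency conversion
```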
This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of its atoms, and thus has a stat mech entropy that, via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information).
The one remaining problem is how to calculate it.
4) The Entropy problem
Boltzmann was working with ideal gases, and most of the entropy illustrations in physics books deal with either ideal gases or energy states of an atom. Nobody but nobody wants to tackle the entropic calculation of a cell.
The problem isn’t just that the number of arrangements of the 10^14 atoms in a cell is at least 10^14 factorial, that the arrangements show long-range ordering, and that they demonstrate constrained dynamics; frankly, we don’t know how to do the arithmetic.
The entropy merely has to show an increase, but we have so many orders of magnitude in this calculation that we can’t tell whether the entropy increased or decreased. In the field of numerics, if you subtract two large numbers, most of the significant digits vanish, and you can be left with noise. In this case the numbers are so big that subtracting the “after” entropy from the “before” entropy gives nothing but noise.
It is impossible to calculate the Boltzmann entropy change of a cell unless we could write the number down to a hundred trillion digits and keep track of the last 100 or so.
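Both points can be sketched numerically, using Stirling’s approximation ln N! ≈ N ln N − N (a toy illustration; the figures are mine, not Sheldon’s):

```python
import math

# Stirling's approximation: ln(N!) ~ N ln N - N, valid for huge N.
N = 1e14                        # rough atom count in a cell
lnW = N * math.log(N) - N       # ln(10^14!) ~ 3.1e15
print(f"ln(N!) ~ {lnW:.3e}")

# Catastrophic cancellation: a 64-bit float carries ~16 significant digits,
# so a biologically interesting change far below that precision vanishes.
before = lnW
after = before + 1e-3           # a tiny "entropy change" of interest
print(after - before)           # prints 0.0 -- the signal is pure noise
```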
My (Nobel nominated) college professor used to ask a rhetorical question in his thermo class, “what is the entropy change of a cow after the butcher put a .22 calibre bullet through the brain?” Yet this minuscule entropy change is supposed to tell us the difference between a “living” and a “dead” cow. From a physics viewpoint there is almost no change in disorder, yet from a biological viewpoint it is all the difference in the world. Physicists just don’t know how to measure this, and don’t even know if they ever can measure this quantity.
So that’s the weeds. We have a recipe, but we can’t use it. Despite having this great definition for Boltzmann entropy, we don’t know how to apply it to life, and therefore we can’t tell if entropy is up or down until the critter rots and starts to turn into gas.
That means some other definition of entropy is employed that “approximates” the Boltzmann definition. Shannon information is often employed, as well as some thought-experiments involving black holes. These work fine on the obvious examples, but once again, they fail to detect the difference between a live and dead cow. So when Granville shows a picture of a tornado, we all know intuitively that Boltzmann entropy is increasing, but there’s just no easy way to calculate it. It reminds me of the 30 years of lawsuits while tobacco companies said that no one had proven smoking causes cancer–we all knew it was true, but we didn’t have the proof.
Despite this intuitive use of the 2nd law not being mathematically robust, we can still learn a lot by using it. But if our opponent challenges us to prove it, we must be willing to go into the weeds.
Here are two weeds:
Response 1) The entropy of life balances out. Food in, waste out, entropy up.
Answer: Really? Can you show me your calculation? Your proof would be Nobel prize material!
Response 2) The earth is not a closed system.
Then how about the solar system? No, then the galaxy? No, then surely the universe is a closed system! Where’s the missing entropy? Show me where it went, and give me a rough idea of its magnitude–one or two orders of magnitude is sufficient.
News:
The calculation challenge RS rightly highlights does not dismiss the issue.
To see why, notice that we face similar issues in, say, economics. We are left to resort to proofs in principle, toy models to illustrate that these are relevant and useful, and often semi-empirical aggregate models. So, we can have confidence in a result, or even a qualitative assessment lightly dusted with some algebra or graphs, backed up by a survey of the involved logic. Frankly, one of my best pieces of economic argument was to use two sticks, one fixed almost straight up and the other at an angle, with the intersection passing through a bead. As the movable stick moves right or left a slight amount, the bead rises (or falls) sharply.
Voila, that is one reason energy markets are so volatile: tight supply, shifting demand. Slight jumps in demand will push prices hard, and small supply drops can push prices up too.
Crude, but good enough to see what is going on beyond the noise of talking heads and screaming politicians.
And, based on valid principles.
Let’s take up the cow shot by the butcher.
Some crude physics, from a design perspective, without a single equation.
We know, empirically, that the well-placed round reliably destroys life function.
Why?
Plainly, by disrupting vital system function that depends on specific configurations.
That is, we see an in-principle illustration of loss of organisation and associated functionally specific information.
In the right place, instant unconsciousness, immediate collapse and rapid death.
That is, apparently minor disruptions to obviously accessible configs are not compatible with life.
Where also, the cow came about by the infolding of regulatory programs and associated growth and development in accord with genetic and cellular information originally resident in a zygote that formed a system open to material and energy inflows and outflows.
Where too, the key molecular aggregates in the cells are known to be in large part informationally controlled. Indeed, the cell exhibits metabolism and an integrated von Neumann self replicator.
Where again, we know there is a reasonable bridge between configurational specificity and low entropy. Indeed, there is a whole informational school of thermo-D out there.
So, even where we cannot calculate the numerical values (actually MOST serious problems in physics and related fields cannot be worked out in detail; we use simplifications and models and aggregations or approximations all the time), we can trace the entropic pattern involved. A pattern that is consistent with what we can work out for our usual toy examples.
And the pattern throws out results that point to the utter implausibility of getting to main living forms by blind forces of chance and necessity. Indeed, a good toy comparison is the 500-bit string that is to form coherent English text. This is beyond the credible reach of the solar system’s atomic resources. If you see a 73 or so character string of coherent English, that is a strong sign that the only observed source of such has been at work. Intelligence.
We are here dealing with systems that are known to be far more complex than that.
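For concreteness, the arithmetic behind the 500-bit/73-character figure (a minimal sketch; the 7-bits-per-ASCII-character assumption is the usual one):

```python
from math import log10

# 73 ASCII characters at 7 bits each is ~511 bits of configuration space.
print(73 * 7)                              # 511 bits

# The number of distinct 500-bit strings:
print(f"2^500 ~ 10^{500 * log10(2):.1f}")  # ~10^150.5
```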
KF
I’m a very simple person, au fond. I play Chopin on the piano, write artificially intelligent computer programs as a hobby, and earn my living as a software engineer in aerospace R&D.
These qualifications might seem irrelevant concerning the discussion at hand, but they give me some authority concerning the theme of this thread. I have a propensity and a passion for figuring stuff out, and have developed a nose for smelling out BS concerning “scientific” issues.
Forget thermodynamics, entropy, and all the rest. Anyone with any reasonable intelligence and familiarity with engineered systems should be able to recognize that complex information-processing machinery of the kind found in living systems cannot possibly be engineered by Darwinian mechanisms.
Those who propose that Darwinian mechanisms account for this technology are clearly out of contact with reality, and live in some kind of bizarre La-La Land that has nothing to do with reason or evidence.
Attempting to reason with such people is an exercise in futility, because they are completely irrational.
Gil,
You are absolutely right that the whole issue is just common sense, but there is a “common sense law of physics” called the second law, which says that what has happened on Earth should not happen, at least not through unintelligent causes.
My first primitive attempt to make the second law argument was in the second part of my 2001 Mathematical Intelligencer article. I wrote:
—————————————
…to attribute the development of life on Earth to natural selection is to assign to it–and to it alone, of all known natural “forces”–the ability to violate the second law of thermodynamics and to cause order to arise from disorder. It is often argued that since the Earth is not a closed system–it receives energy from the Sun, for example–the second law is not applicable in this case. It is true that order can increase locally, if the local increase is compensated by a decrease elsewhere, i.e., an open system can be taken to a less probable state by importing order from outside. For example, we could transport a truckload of encyclopedias and computers to the moon, thereby increasing the order on the moon, without violating the second law. But the second law of thermodynamics–at least the underlying principle behind this law–simply says that natural forces do not cause extremely improbable things to happen, and it is absurd to argue that because the Earth receives energy from the Sun, this principle was not violated here when the original rearrangement of atoms into encyclopedias and computers occurred.
———————————————-
I immediately heard from people who said, natural causes do extremely improbable things all the time; every time we flip a billion coins we get an extremely improbable result. So I responded: I mean the second law says natural forces don’t do macroscopically describable things that are extremely improbable from the microscopic point of view.
Then a few months later I noticed that the equations for entropy change, which are commonly generalized to less quantifiable applications to make the “compensation” argument—that extremely improbable things can happen on Earth as long as they are compensated by entropy increases outside the Earth—do not, when looked at more carefully, support the absurd compensation argument; they actually support, when generalized, the common sense argument of my 2001 article. So for 11 years now I have been trying to make the second law argument more clearly and scientifically.
But by now I realize I have completely wasted 11 years of my life, and put up with unbelievable ridicule and abuse for nothing, because I now realize that anyone who can read my original, common sense argument above, and believe that 4 unintelligent forces alone can create “encyclopedias and computers,” is always going to find a way to avoid the obvious conclusion, no matter how clearly and accurately you state the second law argument. They will argue that the second law only applies to thermal entropy, or that what has happened on Earth is just too difficult to quantify, or use the most popular argument: “you’re just an idiot who doesn’t know anything about the second law.” Whatever it takes, they will find a way to distract attention from my main, obvious point. I have wasted 11 years of my life.
Yes, I agree with Granville Sewell’s last sentence in his comment above.
Sal Cordova is right about the Second Law of Thermodynamics not disproving evolution. Granville Sewell is completely confused.
2LOT says that if a closed system has a decrease in entropy deltaS (where I define deltaS as positive for an entropy decrease and negative for an entropy increase) and if it radiates heat deltaQ to its environment at temperature T, then:
deltaS <= deltaQ/T
For an open system, you add corrections for the intrinsic entropy of matter entering or leaving the system. That's all. Special cases: for an isolated system, deltaQ = 0 by definition. For an exothermic reaction, which all living things and ecosystems are, deltaQ is positive and huge.
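To make the sign conventions concrete, a toy calculation (illustrative numbers only, not from the thread):

```python
# An exothermic process radiating deltaQ = 1000 J into surroundings at T = 300 K.
deltaQ = 1000.0   # J; positive = heat radiated outward
T = 300.0         # K

# 2LOT bound: the system's entropy decrease deltaS may be at most deltaQ/T.
max_entropy_decrease = deltaQ / T
print(f"deltaS <= {max_entropy_decrease:.2f} J/K")   # ~3.33 J/K
```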
Granville Sewell is in effect setting deltaQ = 0 for the evolution of a population of organisms, plus the food they eat, plus the poop and dead bodies they produce. This is invalid, because populations of organisms radiate huge amounts of heat, so deltaQ is huge for populations. This is a huge, obvious error.
For the evolution of a population of organisms, e.g. Homo habilis to Homo erectus, deltaS is certainly very, very small compared to the huge amounts of heat radiated to the environment by a half-million years of evolution.
In fact deltaS could even be NEGATIVE for some kinds of evolution, e.g. Homo habilis to Homo erectus, because Homo erectus individuals are larger, and because entropy is an extensive property, so more matter means higher entropy. The "disorder" of a modern human brain could be twice that of an early Homo erectus brain, because it is more massive.
If you prefer to think of entropy as "disorder", and "order" as the opposite of entropy, well, that is a very bad metaphor and leads to bizarre contradictions. Consider the following.
Consider the empty space around planet Earth 3 billion years ago. Let's by convention say that its entropy then was zero. Now if you define "order" as minus entropy, then the space around planet Earth 3 billion years ago had zero "order."
But, while life was evolving on Earth, the ecosystem radiated heat into space. So delta Q over 3 billion years is HUGE. This means the entropy of empty space around Earth is a much, much higher positive number than it was 3 billion years ago.
But if you define "order" as minus entropy, then the empty space around Earth is, right now, a huge, huge NEGATIVE NUMBER. If this seems bizarre or counter-intuitive, then don't call "order" the opposite of entropy. Physicists don't and chemists don't, when they are doing real calculations.
This is exactly what Granville Sewell does–he calls "order" the opposite of entropy, and instead of talking about heat flowing OUT of a system, as a physicist or chemist would, Sewell instead speaks of "order" flowing INTO a system. This leads Granville Sewell to bizarre self-contradictions and counter-intuitive absurdities.
Dr. Sewell,
Your work is not wasted, it has been an inspiration to many including myself.
I think the fruitful avenue of exploration is statistics, not thermodynamics. If references to “thermo” were replaced with notions of “statistics”, it would convey much of what you want without causing arguments over definitions.
Ideas of entropy can be exorcised of their thermodynamic associations and moved into information theory. The relabeling would alleviate many of the arguments over definitions while preserving the heart of what you are working to demonstrate.
Instead of Boltzmann’s entropy in statistical mechanics:
S = k log W
It can be replaced with Shannon information or entropy:
H = log W
At least with Shannon you don’t need the “k”, so the formula is simpler.
Then the probability arguments will still hold, but without being conflated with notions in thermodynamics such as heat and temperature, nor even energy. Arguments over Clausius, Kelvin-Planck, etc. will vanish, leaving only statistical arguments (which are more important anyway).
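As a minimal illustration of the statistics-only approach, the empirical Shannon entropy of a symbol string involves no heat, temperature, or energy at all (the strings below are toy examples):

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy, H = -sum(p * log2 p), in bits per symbol."""
    n = len(s)
    return sum(-(c / n) * log2(c / n) for c in Counter(s).values())

print(shannon_entropy("AAAAAAAAAA"))   # 0.0  -- no uncertainty at all
print(shannon_entropy("ACGTACGTAC"))   # ~1.97 bits/symbol
```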
Gentlemen:
I have several times now pointed out that there is a whole informational school of thought on entropy, which provides conceptual tools — including a bridge from Shannon info theory and average info per symbol [the info-theory meaning of Shannon’s H] — to the microscopic view of statistical thermodynamics, thence a bridge to classical entropy. (You may wish to skim the discussions here and here in my always linked.)
I think the concession made by Wiki I excerpted here, is sufficient to outline the bridge that joins these concepts:
And, in that context, it is quite reasonable to compare entropy to disorder.
Pardon, the discussion is now happening across too many threads, I guess I will simply cross-link this point.
Specifically, because order — and, a fortiori, functionally specific organisation — restricts the range of possible configurations at the microscopic level. That is why, when a block of ice melts, it absorbs a certain latent heat of fusion and in so doing increases its entropy. Disordering the crystal structure took a certain d’Q/T. The resulting water is less specified at the molecular level, is less ordered, and is of greater entropy, even at the same temperature and nearly the same density, i.e. inter-molecular spacing.
Likewise, the transition to the vapour state of water that is boiling is an increase of entropy and of disorder, in this case usually with a dramatic shift in possible intermolecular spacing. Hence the drop in density by orders of magnitude. (A ten times increase in spacing is linked to a thousand times decrease in density. This is roughly what happens.)
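A quick check with textbook latent heats for water bears this out; the entropy jump at boiling dwarfs the one at melting (standard figures, rounded):

```python
# Entropy increase dS = Q/T for phase changes of 1 kg of water.
m   = 1.0        # kg
L_f = 334e3      # J/kg, latent heat of fusion at 273 K
L_v = 2257e3     # J/kg, latent heat of vaporization at 373 K

dS_melt = m * L_f / 273.0   # ~1.22e3 J/K
dS_boil = m * L_v / 373.0   # ~6.05e3 J/K, roughly five times larger

print(f"melting: +{dS_melt:.0f} J/K; boiling: +{dS_boil:.0f} J/K")
```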
Informationally, the amount of missing info on where molecules are and how they are moving, given the set of macro observable variables sufficient to describe bulk state, rose sharply both times.
Similarly, the increased energy available pulls up the high-energy skirt of the molecular energy distribution, and accelerates activation processes exponentially. Thus the proverbial doubling of rates of such processes — aging of components [and thus halving of system lifespan], conductivity of semiconductors, etc. — per eight degrees Celsius rise around room temperature.
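As a back-of-envelope check on that rule of thumb, the Arrhenius relation lets us ask what activation energy a doubling per eight degrees near room temperature implies (a sketch, assuming the rule as stated):

```python
import math

# Arrhenius: rate ~ exp(-Ea / (R*T)). Solve rate(T2)/rate(T1) = 2
# for the activation energy Ea, with T2 - T1 = 8 K near room temperature.
R = 8.314                 # J/(mol K)
T1, T2 = 298.0, 306.0     # K

Ea = R * math.log(2) / (1.0 / T1 - 1.0 / T2)
print(f"Ea ~ {Ea / 1e3:.0f} kJ/mol")   # ~66 kJ/mol, a typical activation energy
```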
So, no, it will not do to try to sever or dismiss the link from entropy to disorder.
Moreover, there is no good reason to insist that the logic involved depends on our requiring a microscope to inspect the particles or components involved. That is, once we can see aggregate behaviours underpinned by a micro-level, the same considerations apply. As in, the micro-macro distinction in economics. I suspect, this may even apply to the gap between molecular and neuronal level behaviour in the CNS and the unified behaviour of an individual.
So, I see no reason to artificially separate informationally-focussed analyses and statistical thermodynamic ones once the bridge between the two has been made. Thus, in outline from Harry S Robertson in his Statistical Thermo-physics (Prentice-Hall, 1993):
In short, what we have been thinking of as heat, energy, entropy etc from one view, and as information etc from another, are integrated once we see the conceptual bridges. Indeed, this also has relevance to economic analysis, which is informationally constrained on similar macro/micro and “atomic freedom” vs average behaviour issues.
And in that context, the Creationists and design thinkers have been right to highlight the pivotal observation that opening up a system to energy and mass flows does not answer the question of the origin of functionally specific complex organisation sufficient to implement a metabolic entity that is self-replicating, nor does it explain the body plans. The von Neumann observation, that the pivotal issue is the joining of a constructor to a self-replication facility driven by a control tape, is crucial. It also points to the often overlooked thought exercise in Paley’s Natural Theology Ch 2, in which he envisioned a self-replicating time-keeping watch. Namely, the additionality of having separate function and an informationally controlled replication system that reproduces the functional entity points to a higher order of intelligent design than even the direct functional article alone.
So, the correct answer is not to dismiss the links between thermodynamics and information (and between information and functional organisation), but to recognise and apply them.
Hence, my discussion of the significance of why a .22 round in a vital spot will kill a cow. Then, going back to the zygote that grew into that cow by virtue of taking in materials and energy under informational control, we see the link to FSCO/I. Onward, this connects to constraint vs freedom and linked macro-level observables and states.
There is an island of possible states consistent with the living cow. A kinetic disruption to those states can trigger catastrophic functional collapse, i.e. here, death. The same molecules and atoms, suddenly rearranged in what were always possible ways, and life function vanishes. Life is a macro-observable consistent with certain constrained clusters of underlying configurations forming a target zone. Bringing in the thermodynamic possibilities for the relevant atoms and molecules, we see that some serious work of clumping and configuration had to have gone into the development and growth of the living cow, which can be crudely and partially released by burning the dead cow.
But, it is obvious that the symbolic functional constraints in the cow’s DNA are the same: molecules are clumped, then specifically constrained for functional reasons. The organisation can be destroyed by raw energy injection: heat it up and destroy it. The increment of work and entropy reduction needed to configure it may be lost in the decimal places, but we can detect it by using other conceptual and measuring tools, as information in bits.
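To see just how deep in the decimal places that configurational term sits, convert bits to thermodynamic units (the 1000-base sequence is a hypothetical figure for illustration):

```python
import math

k_B = 1.380649e-23   # J/K

# A hypothetical 1000-base coding sequence at 2 bits per base:
bits = 1000 * 2
dS_config = bits * k_B * math.log(2)   # entropy-equivalent of the information

print(f"{dS_config:.1e} J/K")   # ~1.9e-20 J/K, versus ~1e3 J/K for
                                # ordinary heat flows like melting a kg of ice
```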
So, we are back to the significance of FSCO/I.
And, let us not forget, if you suffer a trauma, the EMTs using the Glasgow coma scale are seeking to infer intelligence from functional response in the context of possible disorder that damages or destroys proper function.
Intelligent life, itself is sustained by active, informationally controlled processes, in the teeth of the tendency of molecules to randomise and go to increasingly disordered states. Eventually, entropy will win, not least by corrupting the genetic and regulatory info in our cells.
And in the end, that genetic entropy will add such a burden to the human genome that our race will become non-viable.
Thus, we see the point that those who argue for writing genetic and associated regulatory info by cumulative, rewarded happy accidents, are missing the significance of the link between information and entropy.
So, no, I think the Creationists who first noticed these issues were on to something, decades ago.
And, I think the Design theorists who have looked in greater details on the informational issues, are on to something.
Where also, those who are ever so desperate to dismiss and deride this, are barking up the wrong tree.
Especially, when they resort to the lazy, shabby tactics of namecalling, guilt by association, and the genetic fallacy.
There is an issue to be addressed soberly and seriously on its merits, and let us be about it.
KF
Kairosfocus,
Thank you for that. The thing I find so insulting about Sal’s post and comment above is that, throughout all of it, there is the implication that I just don’t know enough about thermodynamics to realize that the second law of thermodynamics applies only to thermodynamics, when it is used much more generally by many, many people on both sides of the ID debate. It may be difficult (or impossible) to apply it in a quantitative way to things like tornados and evolution, but most everyone I’ve been arguing with these last 11 years acknowledges that what has happened on Earth would violate the second law if the Earth were an isolated system. I would have been much less insulted if he had at least acknowledged that my point of view on the application of the law is very widely held, and not just due to ignorance of thermodynamics. Actually, Sal, I probably would not have been insulted at all if it were just your post, but I have been told continually for 11 years that I don’t know what I’m talking about; that’s why I’m so hypersensitive. My AML article has received high praise from many good scientists I know, always in private of course. One engineering professor called it “a really highly significant piece of work” but told me never to quote him by name!
And Sal, you really should do some homework before attacking friends on UD. It is completely clear to me that you had not read anything I had written on the second law before you posted this (maybe you have now); you had just heard that I believed that evolution violated the second law and took off from there. No wonder you didn’t include any links to my work: you probably didn’t know of any. They are all over http://www.evolutionnews.org.
Then your friends and colleagues failed you on this matter. Maybe your true friends are the ones willing to disagree.
I’m sorry, but that is simply not true. I read and studied your work, and the more I learned the more it became apparent something had to be said.
I publicly disagreed with you in far more polite terms here at Uncommon Descent on April 2, 2007. Here is the link to one of our first exchanges FIVE YEARS AGO:
http://www.uncommondescent.com.....ent-109481
I was far more polite and petitioning then, and I’m sorry I had to be far more rude this time around.
The following statement by you yourself says it all:
We can quantify entropy change for chemical reactions and bricks to several significant figures but we can’t do the same for various evolutionary claims. The inability to quantify this amount makes the relevance of the 2nd law suspect at best.
There is non-thermal entropy change, but to quantify it, it is more appropriate to use statistics, statistical mechanics, and information theory, NOT the 2nd law.
We use thermodynamics to measure thermal entropy change. The fact that you yourself admit that it is difficult to use the second law to calculate non-thermal entropy change in things like evolution should be indicative that maybe it is not the most appropriate avenue for defending design concepts or criticizing evolutionary claims.
In fact, in one of your works you made the point that dynamite may provide energy to an open system, but that this did not mean useful work toward construction of a building could be accomplished. That phrase was used in Mystery of Life’s Origin by Thaxton, Bradley, and Olsen. I quoted Bradley in my critiques of your work.
In that book they calculated the NON-THERMAL entropy change needed to polymerize a functional protein and the corresponding amount of Gibbs free energy that needed to be involved. Did they use the 2nd law to calculate this non-thermal entropy change? NO! They used statistical mechanics. That is the more fruitful approach, imho.
Sal,
So if you watched a video of a tornado running backward, turning rubble into houses and cars, you would sit there and say, it is just too hard to quantify what is happening on this video, so I can’t decide if the video is running forward or backward? It’s too hard to quantify, so I can’t tell if entropy is increasing or decreasing? And if you watched a barren planet producing intelligent brains and computers and airplanes and the Internet, your reaction would be, this is just too hard to quantify, we can’t apply the second law?
Come on, some things are obvious even if they are difficult to quantify! As Kairosfocus says, the calculation challenge does not dismiss the issue. Science isn’t all about quantifying things.
As to the various metrics used to quantify information in a living cell, it is interesting to note just how much information is found to be in a ‘simple’ cell from the thermodynamic perspective:
Professor Harold Morowitz shows the Origin of Life ‘problem’ escalates dramatically over the 1 in 10^40,000 figure when working from a thermodynamic perspective:
Dr. Don Johnson lays out some of the probabilities for life in this following video:
Dr. Morowitz did another probability calculation, working from the thermodynamic perspective with an already existing cell, and came up with this number:
The information content that is derived to be in a cell when working from a purely thermodynamic perspective is simply astonishing to ponder:
For calculations for information, when working from the thermodynamic perspective, please see the following site:
Thus, regardless of whatever nitpicking gripes Sal may have as to the lack of mathematical precision, I find the argument for ID from thermodynamics to be very effective, especially for the origin of life, and I am certainly not going to stop using the argument just because Sal thinks we should!
As to ‘mathematical precision’ for measuring information in cells, there actually is a precise limit in place. I would consider the second law violated if purely neo-Darwinian processes generated enough functional information to account for JUST ONE novel functional protein:
Another very good test which is very easy to understand, which would show a violation of the second law but which, as far as I know, has never been violated is the “fitness test”:
The following study demonstrated that bacteria which had gained antibiotic resistance by mutation are less fit than wild type bacteria:
The tornado running backward is a bad analogy, because it fails to take into account: 1. the heat radiated by the system, and 2. the attractive forces between the structures being assembled.
As for 1., let’s assume that in a regular, forward tornado, heat is radiated by the system to the environment. Let the heat radiated to the environment be deltaQ, where deltaQ is positive if heat is radiated outward. Define deltaS as the entropy decrease of the forward tornado, where deltaS is positive for an entropy decrease, negative for an entropy increase. Then, at temperature T, the second law of thermodynamics says
deltaS <= deltaQ/T. If the process is exothermic, deltaQ > 0, which permits an entropy decrease.
In Granville Sewell’s analogy of the backwards tornado, the houses, cars etc. that assemble themselves are not attracted to each other by attractive forces. Thus, his analogy of self-assembling but non-attractive structures is a bad analogy for real chemical systems.
Again: In his self-assembling house analogy, the reaction is not exothermic because the parts do not have mutual attractive forces. In real systems that undergo spontaneous decreases in entropy (e.g. magnetization, crystallization, etc.) there are attractive forces which liberate energy as the parts approach each other, so the reaction is exothermic, delta Q is positive, and 2LOT specifically permits a local DECREASE in entropy. The problem is with Granville Sewell’s analogy, not with physics.
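The classic freezing example makes this point with numbers: local order increases, but because the process is exothermic the total entropy still rises (textbook figures; the surroundings are assumed to sit at 263 K):

```python
# 1 kg of water freezes at 273 K, dumping its latent heat of fusion
# into surroundings held at 263 K.
L_f = 334e3                    # J/kg
dS_system = -L_f / 273.0       # the water becomes more ordered: ~ -1223 J/K
dS_surround = +L_f / 263.0     # the surroundings heat up:       ~ +1270 J/K

print(f"net: {dS_system + dS_surround:+.0f} J/K")   # ~ +47 J/K >= 0, as 2LOT requires
```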
Sal:
I will not intervene in your back-and-forth with Dr Sewell, save to suggest a bit of cooldown. A cold coconut water — for preference under a coconut tree with turquoise waves lapping at your feet — always makes things look better.
However, I am pointing to a well-defined conceptual bridge with worked-through math. The information bridge is real, and once we have a macro-micro view going, thermodynamic concepts apply to the business of physically expressing degrees of info by constraining configs.
That means we can draw on statistical thermodynamics concepts in explaining relevant phenomena. For instance, I just did so to understand what is happening when a cow — an open system — grows from a zygote, then goes for a one-way visit to the butcher’s. We cannot calculate the specific entropy numbers involved (BTW, the numbers in a steam table etc. are relative), but we can trace the pattern. We can reasonably infer that the vitals in a cow’s brain and CNS are beyond 500 bits of complexity, and we can see that the scrambling of configs consequent on a .22 hit in the right place takes us out of the zone of functional configs. With predictable consequences.
We can then look from the zygote up, at protein codes, regulatory info, etc. We see that where FSCO/I is involved, the cow is dependent on a program, or actually a suite of them. We can reasonably see that this is well beyond 500 bits, and that the only reasonable explanation for the functional info to build a cow is, in the end, design.
The usual objection is that self replicating systems can evolve.
But that depends on a strawman caricature of Paley, per his remarks on having the additional capacity of self replication in Ch 2 of Nat Theol. That is, updating, the von Neumann self replicator tied to a cellular-level constructor points to design. And once design is on the table, it is there across the board.
So, thermodynamic and informational thinking are linked and cohere. They both contribute to the design view.
I trust this helps
KF
To muddy up the water just a bit, here are links to some people who think that the laws of physics are on the side of evolution. I don’t buy these ideas personally, because matter cannot spontaneously become intentional. But they at least show how confusing the whole matter can be.
The first set deals with R. Swenson’s Law of Maximum Entropy Production (LMEP), sometimes called the Maximum Entropy Production Principle. To a non-physicist such as myself, this idea sounds almost like a tautology, but he’s selling it as the Fourth Law of thermodynamics. It’s the idea that entropy/temperature/etc. gradients will relieve themselves in the manner that maximizes the rate at which entropy is produced. In the case of biological evolution, then, life exists and evolves because it enables an even faster production of entropy than a non-living system.
LMEP
http://www.entropylaw.com
Interview with Swenson
Next is an article tying evolution in with the Second Law and the Principle of Least Action.
Natural Selection for Least Action
Don’t anticipate that this latter article will describe how a steady stream of external energy manages to create and evolve life, because it doesn’t. (But it does take up space on the internet.)
EDTA:
Bare speculation:
There is, instead, merely a driving force for random variations in a context where the space of possibilities is dominated by non-functional, non-specific states. So, there is no reliable driving force to get us to islands of function, which is where mechanisms lie. These, to blind forces of chance and necessity, are deeply isolated, presenting the monkeys-at-keyboards challenge with nowhere near the resources to surmount it.
The rise of this sort of speculation in the teeth of evidence and analysis is a mark of desperation.
This, from an exchange of Orgel vs Shapiro on OOL, is apt:
Orgel’s rejoinder, of course, was that much the same holds for metabolism first scenarios.
Both are plainly right, and those who try to pretend that mere injection of raw energy allows an escape from the challenges of blind walks through vast config spaces, do us a disservice.
KF
F/N: This, from the LMEP link, is misleading:
No, there is no mysterious field of force and associated potential gradient that pushes to maximum entropy. There are only the probabilistic implications of random molecular interactions and large config spaces.
Cf my ice tray and beads exercise, here on.
F/N 2: This comes ever so close to spotting the key point:
In short, quanta of energy are subject to diffusion. Much as beads in ice trays that are shaken up.
As a result, the clusters of states with higher statistical weights strongly dominate.
This moves systems towards higher entropy, and disorder.
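A tiny multiplicity count in the spirit of that ice-tray exercise (20 beads, two trays; toy numbers):

```python
from math import comb

# Multiplicity W(n) of finding n of 20 shaken beads in the left tray: C(20, n).
# Near-even splits utterly dominate the count of microstates.
for n in (0, 5, 10, 15, 20):
    print(n, comb(20, n))   # 1, 15504, 184756, 15504, 1
```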
And obviously, once FSCO/I is involved, that is going to sit on isolated islands of function in vast seas of non-function. Thence the blind-walk challenge to find needles in haystacks. (And Mung, I had to update myself on that.)
F/N 3: This one, from the linked interview, staggers the imagination, on the misleading power of begging the pivotal question:
__________
>> Ordered flow, including life, was permissible as long as it produced enough entropy to compensate for its own internal entropy reduction. The central problem remained, however: If the spontaneous production of order was “infinitely improbable,” as Boltzmann had surmised,
then why were ordered systems such a fundamental and characteristic property of the visible world? LMEP provided the answer: Order production is inexorable because order produces entropy faster than disorder. >>
__________
Of course, a living system must export entropy.
But the mere export of entropy does not explain or justify the proposed spontaneous emergence — better, origin — of FSCO/I based forms that have integrated metabolic processes and von Neumann architecture self-replication.
The space of possibilities must be bridged to arrive at shorelines of function based on complex specific alignments of well matched parts, starting with OOL. Cf the remark on this here, earlier this morning.
And if you think you can easily twist this about and pretend that continents of function dominate the space of possibilities, first remember what a .22 in the head does to a cow. In short, the strongly evident reality in front of us is that FSCO/I comes in islands. Think about the misplaced comma that put a rocket off course. Think about what happens when you need a single hard-to-find car part. Think about why it is that monkeys-at-the-keyboard exercises have so far peaked at about 24 ASCII characters in coherent English, and how the algorithms to detect and use the sense that was stumbled upon were intelligently designed.
In short, think outside the a priori materialist box, and see what the evidence is screaming out to those who will but look and listen.
KF