
Granville Sewell’s important contribution to physics: Entropy-X


Abstract:   In Sewell’s discussion of entropy flow, he defines “Entropy-X”, a very useful concept that should clarify misconceptions of entropy promulgated by Wikipedia and by scientists unfamiliar with thermodynamics. Sewell’s important contribution is to argue that one can and should reduce the “atom-entropy” into subsets of mutually exclusive “entropy-X”. Mathematically, this is like factoring an N x M matrix into block-diagonal form by showing that the cross terms between blocks do not contribute to the total. Each of the blocks on the diagonal then corresponds to a separately computed entropy, or “Entropy-X”.  This contribution not only clarifies many of the misunderstandings of laypersons, it also provides a way for physicists to overcome their confusion regarding biology.

– o –

Introduction:     Entropy was initially discussed in terms of thermodynamics, as a quantity that came out of the energy, temperature, and work formulas.  Ludwig Boltzmann found a way to relate billiard-ball counting statistics to this thermodynamic quantity, with a formula he had inscribed on his tombstone:  S = k ln(Ω). The right-hand side of this equation contains a logarithm of the number of possible ways to arrange the atoms. The left-hand side is the usual thermodynamic quantity. Relating these two different worlds of counting and heat is the constant “k”, now called the “Boltzmann constant”.
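To make the counting side concrete, here is a minimal sketch of evaluating S = k ln(Ω); the particle count and the two-state assumption are toy numbers of my own choosing, not anything from Sewell's or Boltzmann's work.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (the experimentally determined conversion factor)

def boltzmann_entropy(omega: float) -> float:
    """Thermodynamic entropy S = k * ln(Omega), where Omega counts accessible arrangements."""
    return K_B * math.log(omega)

# Toy example: 100 independent "billiard balls" that can each sit in one of two bins,
# so Omega = 2**100 accessible arrangements.
print(boltzmann_entropy(2**100))  # ~9.6e-22 J/K
```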

Now for the shocking part. There is no theory that predicts the value of k. It is a conversion constant that is experimentally determined. The formula works best when the real physical system approximates billiard balls, such as the noble gases. The fit gets progressively worse, or needs more adjustments, if the gas is N2 (diatomic) or CO2 (triatomic). Going from gas to liquid introduces even more corrections, and by the time we get to solids we use a completely different formula.

For example, a 1991 Nobel prize was awarded for studying how a long oily molecule moves around in a liquid, because not every state of rearrangement is accessible for tangled strings.  So 100 years after Boltzmann, we are just now tackling liquids and gels and trying to understand their entropy.

Does that mean we don’t know what entropy is?

No, it means that we don’t have a neat little Boltzmann factor for relating thermodynamic S to counting statistics. We still believe that it is conserved; we just can’t compute the number very easily.  This is why Granville Sewell uses “X-entropy” to describe all the various forms of order in the system. We know they must be individually conserved, barring some conversion between the various types of entropy in the system, but we can’t compute them very well.

Nor is it simply that the computation gets too large. For example, in a 100-atom molecule, the entropy is NOT computed by looking at all the 100! permutations of atoms, simply because many of those arrangements are energetically impossible.  Remember, when Boltzmann wrote “ln(Ω)” he meant the number of possible states of the system. If a state is too energetic, it isn’t accessible without a huge amount of energy.  In particle physics, this limitation is known as “spontaneous symmetry breaking”, and is responsible for all the variation we see in our universe today.

So rather than counting “atom states”, we assemble atoms into molecules and form new entities that act as complete units, as “molecules”, and then the entropy consists of counting “states of the molecule”–a much smaller number than “states of the atoms of the molecules”.  Molecules form complexes, and then we compute “states of the molecular complexes”. And complexes form structures, such as membranes. Then we compute “states of the structures”. This process continues to build as we approach the size of a cell, and then we have to talk about organs and complete organisms, and then ecologies and Gaia. The point is that our “unit” of calculation gets larger and larger as the systems display larger and larger coherence.
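A toy calculation (my own illustration, not from Sewell) shows why the unit of counting matters: if atoms are locked into units that move as a whole, the number of independently counted things drops, and ln(Ω) drops with it.

```python
import math

def ln_omega(independent_units: int, states_per_unit: int) -> float:
    """ln(Omega) when each independent unit can occupy any of `states_per_unit` states."""
    return independent_units * math.log(states_per_unit)

# Hypothetical numbers chosen only for illustration.
atoms, states = 1000, 50
print(ln_omega(atoms, states))        # every atom counted separately: ln(Omega) ~ 3912
print(ln_omega(atoms // 10, states))  # the same atoms bound into 10-atom units: ~391
```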

Therefore it is completely wrong to talk about single-atom entropy and the entropy of sunlight when we are discussing structural entropy, for as Granville and previous commentators have said, the flow of energy in sunlight with a coherence length of one angstrom cannot explain the decameter coherence of a building.

So from consideration of the physics, it is possible to construct a hierarchical treatment of entropy which enables it to address the cell, but in 1991 we had barely made it to the oily-liquid stage of development. So on the one hand, contrary to what many commentators imagine, physicists don’t know how to compute an equivalent “Boltzmann equation” for the entropy of life; on the other hand, Granville is also right that we don’t need to compute the Boltzmann entropy to show that it must be conserved.

– o –

Mathematical Discussion:   Sewell’s contribution is to recognize that there must be a hierarchical arrangement of atoms that permits the intractable problem of calculating the entropy to be treated as a sum of mutually exclusive sub-entropies.

This contribution can be seen in the difference between “Entropy-chromium” and “Entropy-heat” that he introduces in his paper, where Entropy-chromium is the displacement of chromium atoms in a matrix of iron holding the velocity of the atoms constant, and Entropy-heat considers a variation in velocities while holding the positions constant.  These two types of entropy have a large energy barrier separating them, on the order of several eV per atom, that prevents them from interconverting. At sufficiently high temperature–say, the temperature at which the iron-chrome alloy was poured–the chromium atoms have sufficient mobility to overcome the energy barrier and move around. But at room temperature they are immobile.  So in the creation event of the chromium bar, the entropy was calculated for both position and velocity, but as it cooled, “spontaneous symmetry breaking” produced two smaller independent entropies from the single larger one.
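As a rough sketch of why the barrier freezes the chromium positions in place, compare the Boltzmann factors at the two temperatures; the 2 eV barrier and the near-melt temperature below are assumptions for illustration, not figures from Sewell's paper.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def hop_probability(barrier_ev: float, temperature_k: float) -> float:
    """Relative probability that an atom has enough thermal energy to cross the barrier."""
    return math.exp(-barrier_ev / (K_B_EV * temperature_k))

barrier = 2.0  # assumed barrier of a couple of eV
print(hop_probability(barrier, 1800.0))  # near the melt: ~2.5e-6, atoms still rearrange
print(hop_probability(barrier, 300.0))   # room temperature: ~2.6e-34, positions frozen in
```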

Now the beginning of this calculation is the full-up atom entropy where everything is accessible. This “big-bang” entropy gives at least 7 degrees of freedom for each atom. That is, the number of bins available for each atom is at least 7–one for the species, 3 that give the position in x, y, z, and 3 that give the velocity in Vx, Vy, Vz.  We could subdivide species into all the different quantum numbers that define atoms, but for the moment we’ll ignore that nuance.  In addition, the quantization of space and velocity into “Planck” sizes (about 10^-35 meters for length) means that our bins do not hold an arbitrary real number, but a quantized length or velocity specified by an integer number of Planck sizes.  But again, the real quantization is that atoms don’t overlap, so we can use a much coarser quantization of 10^-10 meters, or angstrom atomic lengths. The reason this is important is that we are reducing ln(Ω) by restricting the number of states of the system that we need to consider.
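The effect of the coarser binning can be illustrated with a one-atom toy calculation; the micron-sized box below is an arbitrary choice for the sketch.

```python
import math

def ln_positional_states(box_length_m: float, bin_size_m: float, dims: int = 3) -> float:
    """ln of the number of positional bins available to one atom in a cubic box."""
    return dims * math.log(box_length_m / bin_size_m)

box = 1e-6  # a 1-micron box, purely illustrative
print(ln_positional_states(box, 1.6e-35))  # Planck-scale bins: ln(Omega_pos) ~ 199
print(ln_positional_states(box, 1e-10))    # angstrom bins:     ln(Omega_pos) ~ 28
```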

But if S = k ln(Ω), then this means we are mathematically throwing away entropy! How is that fair?

There is a curious result in quantum mechanics which says that if we can’t distinguish two particles, then there is absolutely no difference if they are swapped. This is another way of saying that swapping them adds nothing to the position entropy. So if we have two states of a system separated by a Planck length, but can’t tell the difference, the difference doesn’t contribute to the entropy.  Now this isn’t to say that we can’t invent a system that can tell the difference, but since a Planck length corresponds to light of gamma-ray energies and beyond, we really have to go back to the Big Bang to find a time when this entropy mattered.   This reconfirms our assumption that as a system cools, it loses entropy and has fewer and fewer states available.

But even this angstrom coarse-graining in position, represented by “Entropy-Chromium”, is still too fine for the real world, because biology is not made out of noble gases bouncing in a chamber. Instead, life exists in a matrix of water, an H2O molecule a fraction of a nanometer across. Just as we cannot tell the difference if we swap the two hydrogen atoms in the water molecule, we can’t tell the difference if we swap two water molecules. So the quantization of the entropy gets even coarser, and the number of states of the system shrinks, simply because the atoms form molecules.
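The standard way this indistinguishability shows up in statistical mechanics is the Gibbs N! correction: swapping identical particles does not create a new state, so the raw count is divided by N!. A small sketch with arbitrary toy numbers:

```python
import math

def ln_omega_identical(n_particles: int, states_each: int) -> float:
    """ln(Omega) with the Gibbs N! correction for identical, interchangeable particles."""
    return n_particles * math.log(states_each) - math.lgamma(n_particles + 1)

n, m = 1000, 10_000
print(n * math.log(m))           # naive count, treating the particles as labeled: ~9210
print(ln_omega_identical(n, m))  # identical particles, swaps discounted:          ~3298
```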

A very similar argument holds for the velocities. A hydrogen atom can’t have every velocity possible because it is attached to an oxygen atom. That chemical bond means that the entire system has to move as a unit. But QM tells us that as the mass of a system goes up, the wavelength goes down, which is to say, the number of velocities we have to consider in our binning is reduced as we have a more massive system. Therefore the velocity entropy drops as the system becomes more chemically bound.
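The wavelength in question is the thermal de Broglie wavelength, λ = h / sqrt(2π m k_B T). A quick check with standard constants shows it shrinking as the mass of the moving unit grows; the room-temperature value is chosen just for illustration.

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def thermal_de_broglie(mass_kg: float, temperature_k: float) -> float:
    """Thermal de Broglie wavelength lambda = h / sqrt(2 * pi * m * k_B * T), in meters."""
    return H / math.sqrt(2 * math.pi * mass_kg * K_B * temperature_k)

print(thermal_de_broglie(1 * AMU, 300.0))   # a lone hydrogen atom: ~1.0e-10 m
print(thermal_de_broglie(18 * AMU, 300.0))  # the whole H2O unit:   ~2.4e-11 m
```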

And of course, life is mostly made out of polymers hundreds or thousands of nanometers in extent, which have even more constraints as they get tangled around each other and attach to or detach from water molecules. That was what the 1991 Nobel prize was about.

Mathematically, we can write the “Big Bang” entropy as an N x M matrix, where N is the number of particles and M the number of bins. As the system cools and becomes more organized, the entropy is reduced, and the matrix becomes “block-diagonal”, where blocks can correspond to molecules, polymer chains, cell structures, organelles, cells, etc.

Now here is the key point.

If we considered only the atom positions, and not these molecular and macro-molecular structures, the matrix would not be in block-diagonal form. Then when we computed the Boltzmann entropy, ln(Ω), we would have a huge number of states available. But in fact biology forms so much structure that ln(Ω) is greatly reduced.  All those cross terms in the matrix are empty, because they are energetically inaccessible or topologically inaccessible (see the 1991 Nobel prize). Sewell is correct: there is a tremendous amount of order (or reduced entropy) that is improperly calculated if life is treated as a ball of noble gas.
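Here is a toy version of that factoring (my own illustration): when the accessibility matrix is block-diagonal, each particle can only reach the states inside its own block, and ln(Ω) is summed block by block instead of over the whole matrix.

```python
import math

def ln_omega_from_accessibility(matrix: list[list[int]]) -> float:
    """ln(Omega) if each particle independently occupies any state marked accessible (1) in its row."""
    return sum(math.log(sum(row)) for row in matrix)

# Unconstrained "noble gas" case: every particle can reach every state.
free = [[1] * 6 for _ in range(6)]

# Block-diagonal case: the same particles locked into two separate 3-state units,
# with the cross terms energetically or topologically inaccessible (zeros).
block = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]

print(ln_omega_from_accessibility(free))   # ~10.75
print(ln_omega_from_accessibility(block))  # ~6.59: far fewer states once structure forms
```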

Let me say this one more time. Sewell is not only correct about the proper way to calculate entropy, he has made a huge contribution in arguing for the presence of (nearly) non-convertible forms of sub-entropy.

– o –

Thermodynamic Comment:   Some have argued that this “sub-entropy” of Sewell’s can be explained by some sort of spontaneous symmetry breaking due to the influx of energy. We have talked about “cooling” causing spontaneous symmetry breaking, which is consistent with the idea that higher temperatures have higher entropy, but the idea that “heating” can also drive symmetry breaking and lowered entropy is incoherent. This is simply because thermodynamically dS = dQ/T, which is to say, energy input brings entropy and temperature along with it.

Let’s look at this a bit closer and apply it to the Earth.  The Sun is at 5500K, and therefore its energy comes principally in the form of yellow photons.  The Earth’s global temperature averages out to about 300K, so it emits infrared photons.   In steady state, the energy entering has to balance the energy leaving (or the Earth would heat up!).  Since the entropy of the photons hitting the Earth is almost twenty times less than the entropy of the photons leaving the Earth, the Earth must be a net source of entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening.
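The “almost twenty times” figure can be checked with a back-of-the-envelope calculation: for black-body radiation the entropy flux is (4/3)(energy flux)/T, so with equal energy in and out the ratio of outgoing to incoming photon entropy reduces to T_sun/T_earth. A sketch using the temperatures quoted above:

```python
# Entropy flux of black-body radiation is (4/3) * (energy flux) / T; with the energy
# entering equal to the energy leaving, the 4/3 factors cancel, and the ratio of
# outgoing to incoming photon entropy is simply T_sun / T_earth.
T_SUN = 5500.0   # K, the solar temperature used in the text
T_EARTH = 300.0  # K, the rough global average used in the text

print(T_SUN / T_EARTH)  # ~18.3, i.e. "almost twenty times"
```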

Does this make sense? I mean everybody and their brother say that entropy can decrease if you have a heat engine in the system. Energy comes into your refrigerator as low entropy energy. Energy billows out of the coils in the back as high entropy heat. But inside the fridge is a low-entropy freezer.  Couldn’t this apply to Earth? (E.g., compensation argument.)

Well, if the Earth had a solar-powered refrigerator, and some insulation, and some fancy piping, yes. But of course, all that is an even lower-entropy system than the electricity, so we are invoking bigger and bigger systems to get the entropy in the small freezer to go down. The more realistic comparison is without all that special pleading. Imagine you leave your freezer door open accidentally and leave for the weekend. What will you find? A melted freezer and a very hot house, because your fridge would run continuously trying to cool something down that warmed even faster. In fact, this is how dehumidifiers function. So your house would be hot and dry, with a large puddle of coolish water in the kitchen. Some fridges dump that water into a pan under the heating coils, which would evaporate the water and we would then be back to our original state but at higher temperature. Would the entropy of the kitchen be greater or smaller than if you had unplugged the fridge for the weekend? Greater, because of all that heat.

And in fact, this is the basic argument for why objects that sit in space for long periods of time have surfaces that are highly randomized.  This is why Mars can’t keep methane in its atmosphere. This is why Titan has aerosols in its atmosphere. This is why the Moon has a sterile surface.

If the Earth is unique, there is more than simply energy flow from the Sun that is responsible.

Comments
oops, ignore the Valley Girl interrogative at the end of the first line!

Elizabeth B Liddle
July 8, 2013 at 11:41 AM PDT
Granville, as you are there, I have one more question?: Let's say that ID is correct, and the explanation for life is that a Designer caused the earliest life-forms to assemble, and possibly later caused favorable mutations to occur. For that to happen, the Designer must have done work in the physical sense, i.e. moved matter over a distance, even if it was only to nudge a molecule into place here and there. In your view, must the Designer have experienced a decrease in entropy as a result of that work done? Or was it "free", so to speak?

Elizabeth B Liddle
July 8, 2013 at 11:36 AM PDT
I'll address "X-entropy" later, when I have time, but first I must ask, what is it about ID supporters and grandiose claims? Robert Koons calls William Dembski "the Isaac Newton of Information Theory." George Gilder calls Darwin's Doubt "the best science book ever written." And now Robert Sheldon tells us that X-entropy is an "important contribution to physics."

keiths
July 8, 2013 at 11:34 AM PDT
Granville:

My points are MUCH simpler!

I agree, and would respectfully suggest that bringing in thermodynamics is a distraction from them. Your points simply do not require any reference to the 2nd Law, which only exposes them to criticism, because you appear to make claims that are manifestly false (for instance your implication that for natural selection to work, it must have the capacity to violate the 2LoT). If all you mean is that organised functional things require an explanation that shows they are not improbable, I entirely agree. But you don't need the 2nd Law to tell you that: very improbable things won't happen in a finite universe, and so if we observe something happen that we would otherwise have thought was very improbable, then something we haven't yet thought of must have happened to make it more probable. You probably think "Designer". I, and other "evolutionists", think "physics, chemistry, and feedback loops". The argument is over which of us is right, not over whether the 2nd Law was violated.

Elizabeth B Liddle
July 8, 2013 at 11:29 AM PDT
GS: Your pivotal point is just the observation that diffusion-like, random spreading mechanisms are acting and strongly tend to drive unconstrained systems to clusters of possible states where the formerly concentrated or more orderly items are now spread out and are utterly unlikely to return to the original state or something like it. There is a "time's arrow" at work leading to a system that "forgets" its initial condition and moves towards a predominant cluster of microstates that has an overwhelming statistical weight.

For instance (following a useful simple model of diffusion in Yavorsky and Pinski's nice elementary Physics), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but over 63,000 ways to be 5 B/W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and "forgets" the initial state. This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more. The basic reason for such does not go away if the system is opened up to energy and mass flows, and the point that rare, specifically and simply describable states will seldom be revisited or found, for enough complexity -- 500 - 1,000 bits -- soon becomes that such states are beyond the reach of the solar system's or the observed cosmos' search capacity.

RS' point that there are states that can be locked away from interaction, so that it is reasonable to partition entropy accounting, is also quite useful.

My own emphasis is that we need to see the difference between what diffusion-like factors/forces will strongly tend to do and what produces shaft work thence constructive work ending in FSCO/I. Let me try a second diagram using textual features:

Heat transfer in isolated system: || A (at T_a) –> d'Q –> B (at T_b) ||

dS >/= (-d'Q/T_a) + (+d'Q/T_b), Lex 2, Th in a common form

Heat engine, leaving off the isolation of the whole:

A –> d'Q_a –> B' =====> D (shaft work)

Where also, B' –> d'Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d'Q_a will equal the sum of the rise in internal energy of B, dU_b, the work done on D, dW, and the heat transferred onwards to C, d'Q_b.

The pivotal questions are: the performance of constructive work by dW, the possibility that this results in FSCO/I, the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work, and the suggested idea that d'Q's can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch. By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos. There is but one empirically, reliably observed source for FSCO/I: design. The analysis on statistical weights of microstates and random forces such as diffusion and the like shows why.

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don't build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets. This should help us understand the differences in view. KF

kairosfocus
July 8, 2013 at 11:27 AM PDT
Dr. Sheldon, very interesting article as far as I could follow it. If you get an opportunity, could you please expand on the following fact you elucidated, a bit more clearly, so as to show how you arrived at the conclusion. It looks like a very useful piece of information.

"Let’s look at this a bit closer and apply it to the Earth. The Sun is at 5500K, and therefore its energy comes principally in the form of yellow photons. The Earth global temperature averages out to about 300K, so it emits infrared photons. In thermodynamic equilibrium, the energy entering has to balance the energy leaving (or the Earth would heat up!). Since the entropy of the photons hitting the Earth have almost twenty times less (entropy) than the entropy of the photons leaving the Earth, the Earth must be a source for entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening." https://uncommondescent.com/intelligent-design/granville-sewalls-important-contribution-to-physics-entropy-x/

bornagain77
July 8, 2013 at 11:24 AM PDT
Robert, Granville's term is "X-entropy", not "Entropy-X".

keiths
July 8, 2013 at 11:02 AM PDT
How about correcting the mis-spelling of Granville Sewell's name in the OP?

Alan Fox
July 8, 2013 at 11:01 AM PDT
Well, I managed to mess up the link to the Bio-Complexity paper, it is here. (Sometimes I can edit my comments, sometimes not; Barry, if you can correct this link in my first comment, and eliminate this one, please do so...)

Granville Sewell
July 8, 2013 at 10:58 AM PDT
Robert,

First, let me provide links here to my Cornell contribution and my new Bio-Complexity paper, and second, let me reassure onlookers that it is not necessary to understand Robert's analysis to understand my papers; the main ideas in them are extremely simple, and only require a little common sense. In fact, I am having a little trouble understanding his analysis myself, because statistical thermodynamics is not my forte. My field is partial differential equations, so I was looking at things from the point of view of their PDEs (in an appendix of a book about the numerical solution of PDEs, in fact). I just pointed out that the PDE that describes heat conduction is essentially identical to that which describes diffusion of anything else (X), so there is really nothing special about thermal entropy: you can define another entropy associated with anything else that diffuses, in the same way, and through an identical analysis show that this "X-entropy" cannot decrease in an isolated system (assuming nothing is going on but diffusion) and cannot decrease faster than it is exported, even in an open system. (The latter was not unknown; I provided a link to a 1975 thermo book which reaches the same conclusion, but it seems to have still been noticed by very few people, for reasons I discuss in the Cornell paper.) While I may have coined the term "X-entropy", the fact that diffusion of anything is governed by the second law is certainly not original with me; this is widely recognized.

Now, Robert is looking at things from the statistical thermo point of view, while I am looking at things from the point of view of the macroscopic definitions of X-entropy. The only comment I had about the Boltzmann formula, in my Bio-Complexity paper, was that Styer was obviously misusing it. In fact, in my first response to Bob Lloyd's piece in the Mathematical Intelligencer, whose main point was that my X-entropies are not always independent of each other (I never said that they were), I compared Styer's application of the Boltzmann formula to evolution to applying it to poker. One can define a "poker entropy" as S = k*log(W), where W is the number of possible hands of a given type (e.g., full house), and have a nice formula which increases when the probability increases, and is additive (the entropy associated with two consecutive hands can be calculated by adding the entropies of the two hands).

So the problem with Styer's application is not the "log", it is 1) the constant k out front: this constant is chosen equal to k_B when discussing thermal entropy, simply so that the statistical definition of thermal entropy coincides with the usual macroscopic definition. For poker, there is no alternative definition of entropy I know of, so the k can be chosen completely arbitrarily, and there is no Earthly reason to choose it to have units of Joules/degree Kelvin; and 2) even if you arbitrarily choose k = k_B, it still makes absolutely no sense to add poker entropy and thermal entropy to see if the result is positive or not!

Robert's comments are from the point of view of statistical thermodynamics, and he discusses the fact that choosing k = k_B is not reasonable in the more general case; I am still trying to understand the details of his post myself. In any case, I want to emphasize that the main points in my papers do not really require any understanding of statistical thermodynamics, or even (in the case of the Bio-Complexity paper especially) PDEs or mathematics in general. My points are MUCH simpler!

Granville Sewell
July 8, 2013 at 10:44 AM PDT