# Granville Sewell’s important contribution to physics: Entropy-X

Abstract:   In Sewell’s discussion of entropy flow, he defines “Entropy-X”, which is a very useful concept that should clarify misconceptions of entropy promulgated by Wikipedia and scientists unfamiliar with thermodynamics. Sewell’s important contribution is to argue that one can and should reduce the “atom-entropy” into subsets of mutually exclusive “entropy-X”. Mathematically, this is like factoring an N x M matrix into block-diagonal form, by showing that the cross terms between blocks do not contribute to the total. Each of the blocks on the diagonal then corresponds to a separately computed entropy, or “Entropy-X”.  This contribution not only clarifies many of the misunderstandings of laypersons; it also provides a way for physicists to overcome their confusion regarding biology.

– 0 –

Introduction:     Entropy was initially discussed in terms of thermodynamics, as a quantity that came out of the energy, temperature, and work formulas.  Ludwig Boltzmann found a way to relate billiard-ball counting statistics to this thermodynamic quantity, with a formula he had inscribed on his tombstone:  S = k ln(Ω). The right-hand side of this equation contains a logarithm of the number of possible ways to arrange the atoms. The left-hand side is the usual thermodynamic quantity. Relating the two different worlds of counting and heat is the constant “k”, now called the “Boltzmann constant”.
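Boltzmann’s formula can be made concrete in a few lines; a minimal sketch (the microstate counts below are illustrative assumptions, not values from the post):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact since the 2019 SI redefinition)

def boltzmann_entropy(omega: int) -> float:
    """S = k ln(Omega): entropy of a system with omega accessible microstates."""
    return K_B * math.log(omega)

# Counting and heat are linked by k: microstate counts multiply for
# independent subsystems, so their entropies add.
s1 = boltzmann_entropy(10**6)
s2 = boltzmann_entropy(10**4)
s_joint = boltzmann_entropy(10**10)  # (10**6) * (10**4) joint microstates
print(math.isclose(s_joint, s1 + s2))  # True
```

The additivity shown at the end is the whole reason the logarithm appears: independent state counts multiply, while entropies add.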

Now for the shocking part: there is no theory that predicts its value. It is a conversion constant that is experimentally determined. It works best when the real physical system approximates billiard balls, such as the noble gases. The approximation gets progressively worse, or needs more adjustments, if the gas is diatomic (N2) or triatomic (CO2). Going from gas to liquid introduces even more corrections, and by the time we get to solids we use a completely different formula.

For example, the 1991 Nobel Prize in Physics was awarded for studying how long, oily molecules move around in a liquid, because not every rearrangement is accessible to tangled strings.  So 100 years after Boltzmann, we are just now tackling liquids and gels and trying to understand their entropy.

Does that mean we don’t know what entropy is?

No, it means that we don’t have a neat little Boltzmann factor for relating thermodynamic-S to counting statistics. We still believe that it is conserved; we just can’t compute the number very easily.  This is why Granville Sewell uses “X-entropy” to describe all the various forms of order in the system. We know they must be individually conserved, barring some conversion between the various types of entropy in the system, but we can’t compute them very well.

Nor is it simply that the computation gets too large. For example, in a 100-atom molecule, the entropy is NOT computed by looking at all 100! permutations of the atoms, simply because many of those arrangements are energetically impossible.  Remember, when Boltzmann wrote “ln(Ω)” he was counting the possible states of the system. If a state is too energetic, it isn’t accessible without a huge input of energy.  In particle physics, this limitation is known as “spontaneous symmetry breaking”, and it is responsible for all the variation we see in our universe today.

So rather than counting “atom states”, we assemble atoms into molecules that act as complete units, and then the entropy consists of counting “states of the molecule”–a much smaller number than “states of the atoms of the molecules”.  Molecules form complexes, and then we compute “states of the molecular complexes”. And complexes form structures, such as membranes. Then we compute “states of the structures”. This process continues to build as we approach the size of a cell, and then we have to talk about organs and complete organisms, and then ecologies and Gaia. The point is that our “unit” of calculation gets larger and larger as the systems display larger and larger coherence.

Therefore it is completely wrong to talk about single-atom entropy and the entropy of sunlight when we are discussing structural entropy, for as Granville and previous commentators have said, the flow of energy in sunlight, with a coherence length of one angstrom, cannot explain the decameter coherence of a building.

So from consideration of the physics, it is possible to construct a hierarchical treatment of entropy which enables entropy to address the cell, but in 1991 we had barely made it to the oily-liquid stage of development. So on the one hand, contrary to what many commentators imagine, physicists don’t know how to compute an equivalent “Boltzmann equation” for the entropy of life; but on the other hand, Granville is also right: we don’t need to compute the Boltzmann entropy to show that it must be conserved.

– o –

Mathematical Discussion:   Sewell’s contribution is to recognize that there must be a hierarchical arrangement of atoms that permits the intractable problem of calculating the entropy to be treated as a sum of mutually exclusive sub-entropies.

This contribution can be seen in the difference between “Entropy-chromium” and “Entropy-heat” that he introduces in his paper, where Entropy-chromium is the displacement of chromium atoms in a matrix of iron, holding the velocity of the atoms constant, and Entropy-heat considers a variation in velocities while holding the positions constant.  These two types of entropy are separated by a large energy barrier, on the order of several eV per atom, that prevents them from interconverting. At sufficiently high temperature–say, the temperature at which the iron-chromium alloy was poured–the chromium atoms have enough mobility to overcome the energy barrier and move around. But at present room temperature, they are immobile.  So at the creation of the chromium bar, the entropy was calculated for both position and velocity, but as it cooled, “spontaneous symmetry breaking” produced two smaller independent entropies from the single larger one.

Now the beginning of this calculation is the full atom-entropy, where everything is accessible. This “big-bang” entropy gives at least 7 degrees of freedom for each atom. That is, the number of bins available for each atom is at least 7–one for the species, 3 that give the position in x,y,z and 3 that give the velocity in Vx,Vy,Vz.  We could subdivide species into all the different quantum numbers that define atoms, but for the moment we’ll ignore that nuance.  In addition, the quantization of space and velocity into “Planck” sizes of about 10^-35 meters means that our bins do not hold arbitrary real numbers, but a quantized length or velocity specified by an integer number of Planck units.  But again, the practical quantization is that atoms don’t overlap, so we can use a much coarser quantization of 10^-10 meters, or angstrom atomic lengths. The reason this is important is that we are reducing ln(Ω) by restricting the number of states of the system that we need to consider.
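The gain from coarse graining can be quantified; a toy sketch comparing Planck-scale and angstrom-scale position bins (the 1 cm box size is an illustrative assumption):

```python
import math

length = 1e-2    # a 1 cm box edge, in metres (illustrative)
planck = 1e-35   # roughly the Planck length, in metres
angstrom = 1e-10 # atomic-scale bin width, in metres

bins_planck = length / planck      # 1e33 position bins per axis
bins_angstrom = length / angstrom  # 1e8 position bins per axis

# ln(Omega) per atom per axis under each quantization: coarse graining
# from Planck-scale to angstrom-scale bins discards most of the states.
print(round(math.log(bins_planck), 1))    # 76.0
print(round(math.log(bins_angstrom), 1))  # 18.4
```

Coarsening the bins by 25 orders of magnitude cuts ln(Ω) per coordinate by a factor of about four, before any chemistry is even considered.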

But if S = k ln(Ω), then this means we are mathematically throwing away entropy! How is that fair?

There is a curious result in quantum mechanics that says if we can’t distinguish two particles, then there is absolutely no difference if they are swapped. This is another way of saying that their position entropy is zero. So if we have two states of a system separated by a Planck length, but can’t tell the difference, it doesn’t contribute to the entropy.  Now this isn’t to say that we can’t invent a system that can tell the difference, but since resolving a Planck length requires photons of energy far beyond gamma rays, we really have to go back to the Big Bang to find a time when this entropy mattered.   This reconfirms our assumption that as a system cools, it loses entropy and has fewer and fewer states available.

But even the angstrom coarse graining in position represented by “Entropy-Chromium” is still too fine for the real world, because biology is not made of noble gases bouncing in a chamber. Instead, life exists in a matrix of water, an H2O molecule a few angstroms across. Just as we cannot tell the difference if we swap the two hydrogen atoms in a water molecule, we can’t tell the difference if we swap two water molecules. So the quantization of the entropy gets even coarser, and the number of states of the system shrinks, simply because the atoms form molecules.
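The effect of indistinguishability on the count can be sketched with toy numbers (N molecules and M bins below are illustrative assumptions, not values from the post):

```python
import math

# Indistinguishable particles: swapping two identical water molecules
# yields the *same* microstate, so the naive count M**N over-counts by N!.
N, M = 20, 10**6  # 20 identical molecules, one million position bins each (toy numbers)

ln_naive = N * math.log(M)                    # treat molecules as distinguishable
ln_corrected = ln_naive - math.lgamma(N + 1)  # divide Omega by N! (ln N! via lgamma)

print(round(ln_naive, 1))      # 276.3
print(round(ln_corrected, 1))  # 234.0
```

This is the standard Gibbs correction: identical particles reduce ln(Ω) by ln(N!), and the reduction grows rapidly with the number of interchangeable units.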

A very similar argument holds for the velocities. A hydrogen atom can’t have every possible velocity, because it is attached to an oxygen atom; the chemical bond means that the entire system has to move as a unit. But QM tells us that as the mass of a system goes up, its de Broglie wavelength goes down, which is to say, the number of velocities we have to consider in our binning is reduced for a more massive system. Therefore the velocity entropy drops as the system becomes more chemically bound.

And of course, life is mostly made of polymers, 100’s or 1000’s of nanometers in extent, which have even more constraints as they tangle around each other and attach or detach from water molecules. That is what the 1991 Nobel prize was about.

Mathematically, we can write the “Big Bang” entropy as an N x M matrix, where N is the number of particles and M the number of bins. As the system cools and becomes more organized, the entropy is reduced, and the matrix becomes “block-diagonal”, where the blocks correspond to molecules, polymer chains, cell structures, organelles, cells, etc.

Now here is the key point.

If we considered only the atom positions, and not these molecular and macro-molecular structures, the matrix would not be in block-diagonal form. Then when we computed the Boltzmann entropy, ln(Ω), we would have a huge number of states available. But in fact, biology forms so much structure that ln(Ω) is greatly reduced.  All those cross terms in the matrix are empty, because they are energetically inaccessible, or topologically inaccessible (see the 1991 Nobel prize). Sewell is correct: there is a tremendous amount of order (or reduced entropy) that is improperly calculated if life is treated as a ball of noble gas.
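The block-diagonal reduction can be illustrated with toy numbers (the atom, bin, and internal-mode counts below are assumptions for the sketch, not values from the post):

```python
import math

# Unconstrained "noble gas" counting: 12 atoms, each free to sit
# in any of 1000 bins, so Omega = 1000**12.
atoms, bins = 12, 1000
ln_omega_free = atoms * math.log(bins)

# Block-diagonal counting: the same atoms bound into 3 molecules of
# 4 atoms each; a molecule moves as a unit, so only its position
# (1000 bins) plus a few internal modes (say 10) remain accessible.
molecules, internal = 3, 10
ln_omega_structured = molecules * (math.log(bins) + math.log(internal))

print(round(ln_omega_free, 1))        # 82.9
print(round(ln_omega_structured, 1))  # 27.6
```

Zeroing the cross terms (atoms cannot roam independently of their molecule) collapses ln(Ω) to the sum over blocks, which is the mathematical content of the factorization described above.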

Let me say this one more time: Sewell is not only correct about the proper way to calculate entropy, he has made a huge contribution in arguing for the presence of (nearly) non-convertible forms of sub-entropy.

– o –

Thermodynamic Comment:   Some have argued that this “sub-entropy” of Sewell’s can be explained by some sort of spontaneous symmetry breaking due to the influx of energy. We have talked about “cooling” causing spontaneous symmetry breaking, which is consistent with the idea that higher temperatures have higher entropy, but the idea that “heating” can also drive symmetry breaking and lowered entropy is incoherent. This is simply because, thermodynamically, dS = δQ/T: a flow of energy brings entropy and temperature with it simultaneously.

Let’s look at this a bit more closely and apply it to the Earth.  The Sun is at 5500K, and therefore its energy comes principally in the form of yellow photons.  The Earth’s global temperature averages out to about 300K, so it emits infrared photons.   In thermodynamic equilibrium, the energy entering has to balance the energy leaving (or the Earth would heat up!).  Since the entropy of the photons hitting the Earth is almost twenty times less than the entropy of the photons leaving the Earth, the Earth must be a net source of entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening.
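The “almost twenty times” figure follows from the standard photon-gas result that thermal radiation of energy E at temperature T carries entropy S = (4/3)E/T (that relation is textbook blackbody physics, not something derived in this post); a minimal check with the temperatures above:

```python
# For equal energy fluxes in and out, S = (4/3) E / T means the
# outgoing/incoming entropy ratio reduces to T_sun / T_earth.
T_sun, T_earth = 5500.0, 300.0  # temperatures used in the post, in kelvin
ratio = T_sun / T_earth
print(round(ratio, 1))  # 18.3 -- "almost twenty times"
```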

Does this make sense? I mean, everybody and their brother says that entropy can decrease if you have a heat engine in the system. Energy comes into your refrigerator as low-entropy electrical energy. Energy billows out of the coils in the back as high-entropy heat. But inside the fridge is a low-entropy freezer.  Couldn’t this apply to Earth? (This is the “compensation” argument.)

Well, if the Earth had a solar-powered refrigerator, and some insulation, and some fancy piping, yes. But of course, all that machinery is an even lower-entropy system than the electricity, so we are invoking bigger and bigger systems to get the entropy in the small freezer to go down. The more realistic comparison is without all that special pleading. Imagine you leave your freezer door open accidentally and leave for the weekend. What will you find? A melted freezer and a very hot house, because your fridge would run continuously, trying to cool something down that warmed even faster. In fact, this is how dehumidifiers function. So your house would be hot and dry, with a large puddle of coolish water in the kitchen. Some fridges dump that water into a pan under the heating coils, which would evaporate the water, and we would then be back to our original state but at a higher temperature. Would the entropy of the kitchen be greater or smaller than if you had unplugged the fridge for the weekend? Greater, because of all that heat.

And in fact, this is the basic argument for why objects that sit in space for long periods of time have surfaces that are highly randomized.  This is why Mars can’t keep methane in its atmosphere. This is why Titan has aerosols in its atmosphere. This is why the Moon has a sterile surface.

If the Earth is unique, there is more than simply energy flow from the Sun that is responsible.

## 130 Replies to “Granville Sewell’s important contribution to physics: Entropy-X”

1. 1
Granville Sewell says:

Robert,

First, let me provide links here to my Cornell contribution and my new Bio-Complexity paper, and second, let me reassure onlookers that it is not necessary to understand Robert’s analysis to understand my papers; the main ideas in them are extremely simple, and only require a little common sense.

In fact, I am having a little trouble understanding his analysis myself, because statistical thermodynamics is not my forte. My field is partial differential equations, so I was looking at things from the point of view of their PDEs (in an appendix of a book about the numerical solution of PDEs, in fact). I just pointed out that the PDE that describes heat conduction is essentially identical to that which describes diffusion of anything else (X), so there is really nothing special about thermal entropy: you can define another entropy associated with anything else that diffuses, in the same way, and through an identical analysis show that this “X-entropy” cannot decrease in an isolated system (assuming nothing is going on but diffusion) and cannot decrease faster than it is exported, even in an open system. (The latter was not unknown; I provided a link to a 1975 thermo book which reaches the same conclusion, but it seems still to have been noticed by very few people, for reasons I discuss in the Cornell paper.) While I may have coined the term “X-entropy”, the fact that diffusion of anything is governed by the second law is certainly not original with me; this is widely recognized.

Now, Robert is looking at things from the statistical thermo point of view, while I am looking at things from the point of view of the macroscopic definitions of X-entropy. The only comment I had about the Boltzmann formula, in my Bio-Complexity paper, was that Styer was obviously misusing it. In fact, in my first response to Bob Lloyd’s piece in the Mathematical Intelligencer, whose main point was that my X-entropies are not always independent of each other (I never said that they were), I compared Styer’s application of the Boltzmann formula to evolution to applying it to poker. One can define a “poker entropy” as S = k*log(W), where W is the number of possible hands of a given type (e.g., full house), and have a nice formula which increases when the probability increases, and is additive (the entropy associated with two consecutive hands can be calculated by adding the entropies of the two hands). So the problem with Styer’s application is not the “log”, it is

1) the constant k out front: this constant is chosen equal to k_B when discussing thermal entropy, simply so that the statistical definition of thermal entropy coincides with the usual macroscopic definition. For poker, there is no alternative definition of entropy I know of, so the k can be chosen completely arbitrarily, and there is no Earthly reason to choose it to have units of Joules/degree Kelvin, and

2) even if you arbitrarily choose k=k_B, it still makes absolutely no sense to add poker entropy and thermal entropy to see if the result is positive or not!
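Granville’s poker-entropy illustration is easy to check numerically; a small sketch (the full-house count is standard 52-card combinatorics, and the arbitrary constant k defaults to 1 here):

```python
import math

def poker_entropy(w: int, k: float = 1.0) -> float:
    """S = k*log(W); for poker, k is an arbitrary constant with no physical units."""
    return k * math.log(w)

# W for a full house: pick the triple's rank (13) and its suits C(4,3),
# then the pair's rank (12) and its suits C(4,2).
full_houses = 13 * math.comb(4, 3) * 12 * math.comb(4, 2)
print(full_houses)  # 3744

# Additivity: two consecutive full houses have W*W joint outcomes,
# so the log turns the product into a sum.
s_two_hands = poker_entropy(full_houses ** 2)
print(math.isclose(s_two_hands, 2 * poker_entropy(full_houses)))  # True
```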

Robert’s comments are from the point of view of statistical thermodynamics, and he discusses the fact that choosing k=k_B is not reasonable in the more general case; I am still trying to understand the details of his post myself.

In any case, I want to emphasize that the main points in my papers do not really require any understanding of statistical thermodynamics, or even (in the case of the Biocomplexity paper especially) PDEs or mathematics in general. My points are MUCH simpler!

2. 2
Granville Sewell says:

Well, I managed to mess up the link to the Bio-Complexity paper; it is here

(Sometimes I can edit my comments, sometimes not; Barry, if you can correct this link in my first comment, and eliminate this one, please do so…)

3. 3
Alan Fox says:

How about correcting the mis-spelling of Granville Sewell’s name in the OP.

4. 4
keiths says:

Robert,

Granville’s term is “X-entropy”, not “Entropy-X”.

5. 5
bornagain77 says:

Dr. Sheldon, very interesting article, as far as I could follow it. If you get an opportunity, could you please expand a bit more clearly on the following fact you elucidated, so as to show how you arrived at the conclusion? It looks like a very useful piece of information.

Let’s look at this a bit closer and apply it to the Earth. The Sun is at 5500K, and therefore its energy comes principally in the form of yellow photons. The Earth global temperature averages out to about 300K, so it emits infrared photons. In thermodynamic equilibrium, the energy entering has to balance the energy leaving (or the Earth would heat up!). Since the entropy of the photons hitting the Earth have almost twenty times less (entropy) than the entropy of the photons leaving the Earth, the Earth must be a source for entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening.
http://www.uncommondescent.com.....entropy-x/

6. 6
kairosfocus says:

GS:

Your pivotal point is just the observation that diffusion-like, random spreading mechanisms are acting and strongly tend to drive unconstrained systems to clusters of possible states where the formerly concentrated or more orderly items are now spread out and are utterly unlikely to return to the original state.

There is a “time’s arrow” at work leading to a system that “forgets” its initial condition and moves towards a predominant cluster of microstates that has an overwhelming statistical weight.

For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but over 63,000 ways to have 5 B and 5 W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and “forgets” the initial state.
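Those counts can be verified directly; a short combinatorial sketch (two rows of 10, as in the diagram above):

```python
from math import comb

# 10 black and 10 white marbles in two rows of 10: count the
# arrangements with b black marbles in the top row (the bottom row
# is then forced to hold the remaining 10 - b).
def ways(b: int) -> int:
    return comb(10, b) * comb(10, 10 - b)

print(ways(10))  # 1      -- all black on top, as drawn
print(ways(5))   # 63504  -- the 5:5 split, "over 63,000"
print(ways(6))   # 44100  -- the 6:4 case, "44,000+"
```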

This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more.

The basic reason for such does not go away if the system is opened up to energy and mass flows, and the point that rare, specifically and simply describable states will seldom be revisited or found soon becomes, for enough complexity (500–1,000 bits), the point that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.

RS’ point, that there are states that can be locked away from interaction so that it is reasonable to partition the entropy accounting, is also quite useful.

My own emphasis is that we need to see the difference between what diffusion-like factors/forces will strongly tend to do and what produces shaft work, thence constructive work ending in FSCO/I.

Let me try a second diagram using textual features:

Heat transfer in Isolated system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2, Th in a common form
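That inequality can be checked with illustrative numbers (the heat quantity and temperatures below are assumptions for the example, not from the comment):

```python
# Clausius accounting for heat d'Q passing from body A (at T_a) to
# body B (at T_b) inside an isolated system:
#   dS = -d'Q/T_a + d'Q/T_b, which is >= 0 exactly when T_a >= T_b.
def entropy_change(dq: float, t_a: float, t_b: float) -> float:
    return -dq / t_a + dq / t_b

print(entropy_change(100.0, 400.0, 300.0) >= 0)  # True: hot -> cold is allowed
print(entropy_change(100.0, 300.0, 400.0) >= 0)  # False: spontaneous cold -> hot is not
```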

Heat engine, leaving off the isolation of the whole:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also, B’ –> d’Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d’Q_a will equal the sum of the rise in internal energy of B, dU_b, the work done on D, dW, and the heat transferred onwards to C, d’Q_b.

The pivotal questions are: the performance of constructive work by dW, the possibility that this results in FSCO/I, the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work, and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch.

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos.

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why.

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.

This should help us understand the differences in view.

KF

7. 7

Granville:

My points are MUCH simpler!

I agree, and would respectfully suggest that bringing in thermodynamics is a distraction from them.

Your points simply do not require any reference to the 2nd Law, which only exposes them to criticism, because you appear to make claims that are manifestly false (for instance your implication that for natural selection to work, it must have the capacity to violate the 2LoT).

If all you mean is that organised functional things require an explanation that shows they are not improbable, I entirely agree.

But you don’t need the 2nd Law to tell you that: very improbable things won’t happen in a finite universe, and so if we observe something happen that we would otherwise have thought very improbable, then something we haven’t thought of yet must have happened to make it more probable.

You probably think “Designer”.
I, and other “evolutionists” think “physics, chemistry, and feedback loops”.

The argument is over which of us is right, not over whether the 2nd Law was violated.

8. 8
keiths says:

I’ll address “X-entropy” later, when I have time, but first I must ask, what is it about ID supporters and grandiose claims?

Robert Koons calls William Dembski “the Isaac Newton of Information Theory.”

George Gilder calls Darwin’s Doubt “the best science book ever written.”

And now Robert Sheldon tells us that X-entropy is an “important contribution to physics.”

9. 9

Granville, as you are there, I have one more question?:

Let’s say that ID is correct, and the explanation for life is that a Designer caused the earliest life-forms to assemble, and possibly later caused favorable mutations to occur.

For that to happen, the Designer must have done work in the physical sense, i.e. moved matter over a distance, even if it was only to nudge a molecule into place here and there.

In your view, must the Designer have experienced a decrease in entropy as a result of that work done? Or was it “free”, so to speak?

10. 10

oops, ignore the Valley Girl interrogative at the end of the first line!

11. 11
Alan Fox says:

I have one more question?

Don’t tell me you are a Kylie fan, too!

12. 12
bornagain77 says:

Dr. Sewell, I suggest you ignore Elizabeth’s posts since, as far as I can tell, she has absolutely no intention of ever being honest with the evidence. Hundreds of thousands of words have been wasted on the ‘move the goalpost’ tactics she employs, and in my honest opinion your time would be well spent elsewhere than chasing her down whatever irrevelant rabbit holes she dreams up to obfuscate the matter.

13. 13
bornagain77 says:

pardon: “irrelevant rabbit holes”,,

14. 14
Alan Fox says:

Phil “Mr-pots-and-kettles” Cunningham has the bare-faced cheek to accuse someone else of “irrelevant rabbit holes she dreams up to obfuscate the matter.”

Irony meters must be smoking everywhere!

15. 15
bornagain77 says:

Oh but Mr. Fox, don’t be so jealous, you are a very close second in my book for people who could care less about truth, and do their damnedest to obfuscate it! But Elizabeth just has you beat on style. Maybe if you put more smiley faces on your deceptions!

16. 16
Collin says:

Keiths,

I also find it a little cringe-inducing when the accolades start being handed out. I don’t want hero-worship to be encouraged in ID. Only critical thinking and mutual respect.

17. 17
Collin says:

Bornagain,

I liked “irrevelant” better. Fits with the Alice in Wonderland idea. 🙂

But I do kind of think that Dr. Sewell should address her concerns if they are reasonable, because I find them interesting.

18. 18

bornagan77:

You of course disagree with me profoundly, and I with you (mostly), but I need to make it clear to you: I am not dishonest.

Fallible, sure, but we all are. Grumpy, occasionally acid, sure. But I never intend to deceive anyone, and I am as prepared as anyone to follow the evidence and argument where it leads.

I don’t expect to change your opinion of me, but “for record” as Kairosfocus would say, I state here plainly: I never deliberately say things that I do not believe to be true.

Apart from anything else – what would be the point?

19. 19
Granville Sewell says:

Elizabeth,

The approach many scientists take toward ID is to “define” science so that it excludes ID, and then declare “ID is not science” so they don’t have to deal with the issue of whether or not ID is true.

As CS3 pointed out on another thread, what you are trying to do here is very similar. In my Bio-Complexity article, I quoted three common statements of the second law, taken from a typical general physics text: the first is apparently the only one you accept as valid, and I acknowledged that evolution has little to do with this statement; it is the more general statements (2) and (3) that are relevant. You are determined to limit the second law so that, by definition, there is no conflict with evolution, and you want us to believe that only “creationists” apply it more generally (to things like tornados running backward), which is patently false. So there is not much left to discuss: IF you insist that (1) is the only legitimate statement of the second law, then you can declare victory, because I admit evolution doesn’t violate this statement.

But I believe laws of Nature are defined by Nature, not by man. If Isaac Newton had stated the law of gravity as “the Earth attracts apples”, you could say that, technically, oranges falling to the ground has nothing to do with the law of gravity. But I would be interested in knowing whether the law of gravity could really be generalized beyond apples.

20. 20

Granville: thank you for your response.

The approach many scientists take toward ID is to “define” science so that it excludes ID, and then declare “ID is not science” so they don’t have to deal with the issue of whether or not ID is true.

That is not my position. I do think that some (many) papers on ID are not valid science (do not draw justified conclusions from their data, or make flawed arguments), but I see no intrinsic reason why the theory that life was created by a designer is not a perfectly good topic for scientific investigation, even if the designer is postulated to have properties not possessed by other known entities.

As CS3 pointed out on another thread, what you are trying to do here is very similar. In my Bio-Complexity article, I quoted three common statements of the second law, taken from a typical general physics text: the first is apparently the only one you accept as valid, and I acknowledged that evolution has little to do with this statement, it is the more general statements (2) and (3) that are relevant. You are determined to limit the second law so that, by definition, there is no conflict with evolution

No, this is not a correct reading of my position. Firstly, those three statements of the 2nd Law of thermodynamics are deemed in textbooks to be equivalent, not statements of slightly different laws. If one statement is not consistent with the others, it is either too loosely worded or being interpreted too loosely.

There is a single 2nd Law of thermodynamics, and although verbal formulations may vary, and even allow for ambiguities, they all mean the same thing.

Moreover, I do not interpret things so that they agree with some position I wish to preserve. I am a working scientist, and would far rather prove myself wrong than kid myself that I was correct. I simply do not understand what would motivate anyone to do such a thing.

, and you want us to believe that only “creationists” apply it more generally (to things like tornados running backward), which is patently false.

I want you to believe no such thing.

So there is not much left to discuss, IF you insist that (1) is the only legitimate statement of the second law, then you can declare victory, because I admit evolution doesn’t violate this statement.

If evolution doesn’t violate the first statement, then it doesn’t violate any, because all the statements are equivalent.

The third one you quote is rather loose, because it refers to “probability” as though it is a property of an arrangement, rather than the property of an arrangement given a generative process. Obviously very improbable things won’t happen (that isn’t a conclusion of the 2nd Law, but the truism that makes it self-evident). But things that are very improbable under some conditions (still air suddenly forming a vortex) are highly probable under others (a convection current generated by a sun-warmed patch of earth), conditions that are specifically not disallowed under the 2nd Law of thermodynamics.

Taking the word “probability” out of any context that gives the circumstances under which a thing is probable, is to misinterpret that 3rd statement.

Which is, in fact, perfectly correct, as it says that

In an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability

This is absolutely true, because it talks about an isolated system. That system can contain subsystems exhibiting arrangements that would be improbable if the subsystems were isolated; but of course they are not. They are in contact with the rest of the super-system, which can do work on them, at the cost of increased entropy in the rest of the supersystem.

Your counter-argument to the “compensation” story is not correct. It is perfectly true that the fact that entropy is increasing in some remote unconnected system won’t make spaceships appear here, and it is also true that the fact that entropy is increasing in my cup of coffee won’t mean that my dishes will wash themselves and put themselves away.

But nobody is claiming that solely because there is an entropy increase somewhere, any old thing can happen somewhere nearby. What we claim is that if a system is not isolated, work can be done on that system that can result in greater order, although the system doing the work will necessarily experience increased entropy.

The sun does not explain life on earth. It merely provides the potential for work to be done (via temperature gradients for instance). The complicated part is explaining what that work was and why. But it does counter the claim that work could not have been done because of the 2nd Law.

But I believe laws of Nature are defined by Nature, not by man. If Isaac Newton had stated the law of gravity as “the Earth attracts apples”, you could say that technically, oranges falling to the ground has nothing to do with the law of gravity. But I would be interested in knowing if the law of gravity could really be generalized beyond apples.

I agree that laws are discovered, not invented, so at least we agree on something! I’d be very interested in your answer to my question at 9.

Cheers

Lizzie

21.
bornagain77 says:

As pointed out before, I liked this comment from Dr. Sheldon in the OP:

‘Let’s look at this a bit closer and apply it to the Earth. The Sun is at 5500K, and therefore its energy comes principally in the form of yellow photons. The Earth’s global temperature averages out to about 300K, so it emits infrared photons. In thermodynamic equilibrium, the energy entering has to balance the energy leaving (or the Earth would heat up!). Since the photons hitting the Earth have almost twenty times less entropy than the photons leaving the Earth, the Earth must be a source for entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening.’
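The “almost twenty times” figure in that excerpt can be sanity-checked in a few lines, assuming the standard estimate that photon entropy per unit energy scales roughly as 1/T; the numeric factor reduces to the ratio of the two temperatures:

```python
# Sanity check of the "almost twenty times" entropy figure, assuming
# photon entropy per unit energy scales roughly as 1/T (a standard
# blackbody estimate; the Planck-spectrum factor of 4/3 cancels in the ratio).
T_sun = 5500.0    # K: effective temperature of incoming sunlight photons
T_earth = 300.0   # K: temperature of the re-emitted infrared photons

# For the same energy flux E, S_in ~ E/T_sun and S_out ~ E/T_earth,
# so outgoing photons carry T_sun/T_earth times more entropy than incoming.
ratio = T_sun / T_earth
print(round(ratio, 1))  # -> 18.3, i.e. "almost twenty times"
```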

The reason I liked Dr. Sheldon’s statement on light coming from the sun is that it agrees with some of the empirical evidence I’ve been recently gathering from another angle,,,

I’ve always found the compensation (open system) argument from atheists to be a very disingenuous argument on their part since the second law was formulated right here on earth, an open system, in the first place! ,,, Moreover, most of the electromagnetic emissions coming from the sun are either harmful or useless for life. Yet that harmful and useless energy is precisely the portion that is most constrained from reaching the earth,,,

Fine Tuning Of Light to the Atmosphere, to Biological Life, and to Water – graphs

Fine Tuning Of Universal Constants, Particularly Light – Walter Bradley – video
http://www.metacafe.com/watch/4491552

In fact, the following video reveals, at the 5:00 minute mark, that these specific frequencies of light (that enable plants to manufacture food and astronomers to observe the cosmos) represent less than 1 trillionth of a trillionth (10^-24) of the universe’s entire range of electromagnetic emissions.

Privileged Planet – The Extreme Fine Tuning of Light for Life and Scientific Discovery – video
http://www.metacafe.com/w/7715887

a very interesting sidenote to all this is that DNA is optimized to prevent damage from light:

DNA Optimized for Photostability
Excerpt: These nucleobases maximally absorb UV-radiation at the same wavelengths that are most effectively shielded by ozone. Moreover, the chemical structures of the nucleobases of DNA allow the UV-radiation to be efficiently radiated away after it has been absorbed, restricting the opportunity for damage.
http://www.reasons.org/dna-soaks-suns-rays

i.e. if radiation from the sun were really driving the decrease in entropy of life on earth then why in blue blazes is there optimized photostability present for DNA to prevent the incoming energy from the sun from having any effect on DNA??? ,,, It is yet another sheer disconnect between what the empirical evidence actually says and what Darwinists claim for reality. A disconnect that they will never really honestly address.,,, But to move on past my disgust for Darwinists,,

,,, even though the energy coming from the sun is very constrained by the atmosphere (and by the magnetic field, etc..?) in such a way as to ‘just so happen’ to only allow the useful part of light through to the surface of the earth, in the following video Dr. Thomas Kindell points out that even that tiny sliver of 1 in 10^24 (trillionth of a trillionth) of raw energy coming from the sun, that is allowed to reach the earth, is still destructive in its effect on the earth and must be channeled into useful energy (ATP) by photosynthesis,,,

Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video
http://www.metacafe.com/watch/4143014

And indeed, very much contrary to evolutionary expectations, we now have evidence for complex photosynthetic life suddenly appearing on earth, as soon as water appeared on the earth, in the oldest sedimentary rocks ever found on earth.

The Sudden Appearance Of Photosynthetic Life On Earth – video
http://www.metacafe.com/watch/4262918

U-rich Archaean sea-floor sediments from Greenland – indications of +3700 Ma oxygenic photosynthesis (2003)

,,,yet photosynthesis is a very, very, complex process which is certainly not conducive to any easy materialistic explanation,,,

“There is no question about photosynthesis being Irreducibly Complex. But it’s worse than that from an evolutionary perspective. There are 17 enzymes alone involved in the synthesis of chlorophyll. Are we to believe that all intermediates had selective value? Not when some of them form triplet states that have the same effect as free radicals like O2. In addition if chlorophyll evolved before antenna proteins, whose function is to bind chlorophyll, then chlorophyll would be toxic to cells. Yet the binding function explains the selective value of antenna proteins. Why would such proteins evolve prior to chlorophyll? and if they did not, how would cells survive chlorophyll until they did?” Uncommon Descent Blogger

Evolutionary biology: Out of thin air John F. Allen & William Martin:
The measure of the problem is here: “Oxygenic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell.”
http://www.nature.com/nature/j.....5610a.html

The Miracle Of Photosynthesis – electron transport – video

22.
bornagain77 says:

Some of the ‘coincidences’ of photosynthesis reach all the way down to foundational physics and chemistry (i.e. the universe was ‘set up’ for photosynthesis) and are just plain ‘spooky’ to behold as Dr. Michael Denton briefly elaborates here:

Michael Denton: Remarkable Coincidences in Photosynthesis – podcast
http://www.idthefuture.com/201....._coin.html

In fact there is an irreducibly complex molecular machine at the heart of photosynthesis:

The ATP Synthase Enzyme – exquisite motor necessary for first life – video

ATP Synthase, an Energy-Generating Rotary Motor Engine – Jonathan M. May 15, 2013
Excerpt: ATP synthase has been described as “a splendid molecular machine,” and “one of the most beautiful” of “all enzymes” .,, “bona fide rotary dynamo machine”,,,
If such a unique and brilliantly engineered nanomachine bears such a strong resemblance to the engineering of manmade hydroelectric generators, and yet so impressively outperforms the best human technology in terms of speed and efficiency, one is led unsurprisingly to the conclusion that such a machine itself is best explained by intelligent design.
http://www.evolutionnews.org/2.....72101.html

Thermodynamic efficiency and mechanochemical coupling of F1-ATPase – 2011
Excerpt: F1-ATPase is a nanosized biological energy transducer working as part of FoF1-ATP synthase. Its rotary machinery transduces energy between chemical free energy and mechanical work and plays a central role in the cellular energy transduction by synthesizing most ATP in virtually all organisms.,,
Our results suggested a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase.

The 10 Step Glycolysis Pathway In ATP Production: An Overview – video

At the 6:00 minute mark of the following video, Chris Ashcraft, PhD – molecular biology, gives us an overview of the Citric Acid Cycle, which is, after the 10 step Glycolysis Pathway, also involved in ATP production:

Evolution vs ATP Synthase – Molecular Machine – video
http://www.metacafe.com/watch/4012706

Glycolysis and the Citric Acid Cycle: The Control of Proteins and Pathways – Cornelius Hunter – July 2011
http://darwins-god.blogspot.co.....cycle.html

Yet, photosynthesis presents a far, far, more difficult challenge to Darwinists than just explaining how all these extremely complex mechanisms for converting raw energy into useful ATP energy ‘just so happened’ to ‘randomly’ come about,, so as to enable higher life to be possible in the first place.,,,In what I find to be a very fascinating discovery, it is found that photosynthetic life, which is an absolutely vital link that all higher life on earth is dependent on for food, uses ‘non-local’, beyond space and time, quantum mechanical principles to accomplish photosynthesis,,

Quantum Mechanics at Work in Photosynthesis: Algae Familiar With These Processes for Nearly Two Billion Years – Feb. 2010
Excerpt: “We were astonished to find clear evidence of long-lived quantum mechanical states involved in moving the energy. Our result suggests that the energy of absorbed light resides in two places at once — a quantum superposition state, or coherence — and such a state lies at the heart of quantum mechanical theory.”,,, “It suggests that algae knew about quantum mechanics nearly two billion years before humans,” says Scholes.
http://www.sciencedaily.com/re.....131356.htm

23.
bornagain77 says:

At the 21:00 minute mark of the following video, Dr Suarez explains why photosynthesis needs a ‘non-local’, beyond space and time, cause to explain its effect:

Nonlocality of Photosynthesis – Antoine Suarez – video – 2012

Now as a Theist, I, of course, have a ‘non-local’ beyond space and time cause to appeal to to explain how ‘non-local’ coherence of photosynthesis is possible,,,,

Verse and Music:

1 John 1:5
This is the message we have heard from him and proclaim to you, that God is light, and in him is no darkness at all.

Toby Mac (In The Light) – music video

,,,Whereas the atheists have crickets chirping as to any coherent explanation for how ‘non-local’ photosynthesis is even possible in the first place,,

Cricket Chirping

24.

Fixed the misspelled name in the title and abstract. It was correct elsewhere.

The problem several people have with “entropy” is that they confuse it with a substance. For statistical physics, it is a shorthand for discussing the number of states of the system, while for thermodynamicists, it is related to both energy and temperature. The only place, and I stress it again, that statistics and thermal physics overlap, is when we are discussing atomic gases. Then we can use Boltzmann’s equation. Everywhere else, we can’t convert statistical entropies into thermal properties, and probably not even from one statistic to another.

For example, one could convert a computer code into 1’s and 0’s and measure its statistical entropy, but if I use that computer code to, say, sort all the books in the library into alphabetical order, does that produce a constant that relates the entropy of “computer code” into the entropy of “library catalogues”? I don’t think so. One needs a “converter”, and the efficiency of a converter depends on design, not physics.
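A minimal sketch of what “measuring the statistical entropy” of computer code amounts to (the sample string and function name are illustrative, not from any source):

```python
import math
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Shannon entropy per symbol, in bits, of a byte string."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# "Computer code" reduced to bare symbol statistics:
code = b"def sort_books(shelf): return sorted(shelf)"
print(round(shannon_entropy_bits(code), 2))

# The number measures only symbol frequencies; it says nothing about what
# the code does (e.g. alphabetizing a library), and no physical constant
# converts it into thermal entropy.
```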

Why is this important?

Well, the theorems for conservation of entropy all come out of thermodynamics. What Boltzmann did, was to show how this could be converted to statistical mechanics. Thus the peculiar “ordering” of atoms has all the same constant properties as the thermal physics of collections of atoms. This was the power of the equation.

We may not have a conversion constant for other forms of ordering, but the existence of these constants for noble gas atom ordering strongly suggests that other forms of ordering are also conserved.

This provided a beachhead into the sorts of ordering that Granville refers to, and we can fruitfully discuss the conservation of “Entropy-X”, even if we don’t know how to calculate it.

I tried to describe how far physicists had gone in calculating the entropy of complex systems, but in another sense, this is a red herring. That is, we almost never use the entropy in calculation, only the change in entropy. And in complicated systems, the change in entropy is a path-dependent function. Or to say it another way, it is dS/dx that is important, not S itself, or perhaps integral[(dS/dx) dx]. So for example, when a nitroglycerine molecule dissociates, the reordering of the chemical bonds releases energy, and the process is driven by the increase in entropy of the gas products over the molecular precursor. So it is the local dS/dx that drives the reaction so very quickly.

By analogy then, Granville doesn’t have to calculate the entropy of his jet airliner, simply the gradients in the entropy from junk-yard to airliner. Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.

There may be ways to get around this with long-range forces and correlations. But then, all of statistical mechanics presupposes that there are no long-range correlations, so more than thermodynamics is lost if we invoke long-range forces.

25.
Granville Sewell says:

Elizabeth,

If evolution doesn’t violate the first statement, then it doesn’t violate any, because all the statements are equivalent

Statement (1) is “in an isolated system, thermal entropy cannot decrease.” (3) is “in an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability”

How could these two possibly be equivalent??? (3) is obviously more general than (1), there are other statements that are equivalent to (1), but (2) and (3) are not. And the author of this textbook obviously understands that (2) and (3) have more general applications than to energy, as he applies them to things like autos colliding and dead rabbits decaying. And Asimov even applies them to order in a house, for example, and he was hardly a creationist. Nice try.

26.

Granville:

Statement (1) is “in an isolated system, thermal entropy cannot decrease.” (3) is “in an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability”

How could these two possibly be equivalent??? (3) is obviously more general than (1), there are other statements that are equivalent to (1), but (2) and (3) are not. And the author of this textbook obviously understands that (2) and (3) have more general applications than to energy, as he applies them to things like autos colliding and dead rabbits decaying. And Asimov even applies them to order in a house, for example, and he was hardly a creationist. Nice try.

Granville: there is ONE 2nd Law of thermodynamics. If several statements of the law appear to differ, then at least one of those statements is inadequate and does not belong in a textbook, or you are using an unintended definition of one of the words.

You are a mathematician, Granville: you know that equations can be stated in many different ways. But if they are the same equation, then those statements are equivalent.

You cannot say: oh, but here’s another version of the same equation, and by this version of it, X is possible but by that version of it, X is not.

1. In an isolated system, thermal entropy cannot decrease.
2. In an isolated system, the direction of spontaneous change is from order to disorder.
3. In an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability

Let’s do this semi-formally. Each statement begins with:

[In an isolated system]

Statements 2 and 3 then have the phrase:

[the direction of spontaneous change is from]

2 has: [order] to [disorder]
3 has: [an arrangement of lesser probability] to [an arrangement of greater probability]

Therefore, if the statements are equivalent:

order = an arrangement of lesser probability, and
disorder = an arrangement of greater probability.

Turning to statement 1, then
[thermal entropy cannot decrease]=[the direction of spontaneous change is from order to disorder] = [the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability]

Therefore [thermal entropy] = [probability of an arrangement] = [orderedness]

We cannot therefore interpret “probability of an arrangement” or “orderedness” to mean anything other than “thermal entropy”, or the statements would not be equivalent statements of the 2nd Law of thermodynamics.

There could well be (and is) another kind of entropy, e.g. Shannon entropy, which has a similar definition, but the 2nd Law of thermodynamics is not about Shannon entropy, and there is no law that says that Shannon entropy cannot increase in an isolated system, because that wouldn’t have any meaning. You might say that Shannon entropy cannot increase without intelligence, for example, and Dembski’s Law of Conservation of Information might boil down to that, I don’t know, but it isn’t the 2nd Law of thermodynamics.

Now, a tidy room could well have less thermal entropy than an untidy room, with things clustered on shelves, with the capacity to smash on the floor, and thus do work. But not necessarily. A nice neatly made bed, for instance, with all the sheets smoothed to their lowest potential energy level probably has greater entropy than a messed up bed with the capacity for the pillow to slide onto the floor. Similarly, a sugar molecule has more thermal entropy than the carbon dioxide and water molecules that existed before the plant converted them to sugar through photosynthesis. Indeed, when we do work (whether housework or even having a wild party), we might find we tend to reduce entropy in our surroundings by leaving them in a configuration less likely than that which would “spontaneously” occur in the absence of us doing work.

So your examples mostly are of changes in the arrangement of the energy states of the elements of an isolated (or non-isolated) system.

And nobody is suggesting (not even you, I don’t think) that the entropy changes implied by evolution are not changes in the energetic configuration of a system. Metabolism is fundamental to living things – without a metabolism living things are not alive – their metabolism is what makes them need energy resources (food), in order to survive and breed.

So of course evolution is all about thermal entropy – biology is all about how living creatures reduce their thermal entropy at the cost of increased entropy in their “exhaust” (to put it politely). But they are, of course, not isolated systems so nothing they do requires any violation of the 2nd Law, whether it is feed, breed-with-variance, or die, and if they feed, breed-with-variance and die, natural selection is going to occur.

And if you don’t believe me that all statements of the 2nd Law of thermodynamics must be equivalent to be statements of the 2nd law, try this.

😀

27.
Collin says:

I don’t think that this comment will get me any friends, but I think that the entropy that Dr. Sewell is talking about is an example of something true and self evident but maybe not yet quantifiable by science. It’s like the difference between red and green. There was a time that we could not measure the wavelength of light. We only knew that red and green were different because we observed it to be so.

We generally observe things to go from more ordered to less ordered, so evolution is a surprising hypothesis. There should be something special explaining why life can go from less ordered to more ordered and not violate this principle. Some argue that the high energy of the sun can explain the increase in order. But then, why doesn’t the sun create life on the Moon or Venus? Or create computers or some other complex thing? It seems that those who profess evolutionism must show, with convincing evidence, that the sun can accomplish this feat. The burden shouldn’t be on Granville to disprove it.

28.
Granville Sewell says:

Elizabeth,

Another quote from Ford’s text:

Heat flow is so central to most applications of thermodynamics that the second law is sometimes stated in this “restricted” form: (4) heat never flows spontaneously from a cooler to a hotter body. Notice that this is a statement about macroscopic behavior, whereas the more general and fundamental statements of the second law, which make use of the ideas of probability and disorder, refer to the submicroscopic structure of matter.

This statement (4) is basically equivalent to my (1) (thermal entropy cannot decrease in an isolated system) and to several other statements, but not to the two more general statements.

I am well aware that the idea that “entropy” is a single quantity which measures, in units of thermal entropy, all types of disorder is widespread, and promoted for the same reason that you promote it; refuting that idea is one of the main points of my BioComplexity article. But it is patently false. There are applications of the second law which are quantifiable, such as the diffusion of chromium given in my scenario (B), which obviously have nothing to do with heat or energy, just with probability, and many others which are not easily quantifiable, which also have nothing to do with heat or energy. When Asimov talked about the entropy decrease associated with the development of the human brain, he was obviously NOT talking about any change in thermal entropy, and many other examples of “entropy” increases are given in many texts which obviously have nothing to do with thermal entropy, such as those I mentioned earlier.

29.
bornagain77 says:

Despite what our Darwinian detractors would prefer for people to believe, there is good reason that Dr. Sewell calls SLOT ‘the common sense law of physics’

The common sense law of physics – Granville Sewell – July 2010
Excerpt: Yesterday I spoke with my wife about these questions. She immediately grasped that chaos results on the long term when she would stop caring for her home.
http://www.uncommondescent.com.....f-physics/

Everything that is around us tends towards disorder and decay. Everyone can see this happening. It is everywhere. If something is new, it will get old. If something is born, it will grow old and die. PERIOD. That is what makes it an extraordinary claim on the Darwinists’ part to claim that material processes acting without intelligent input can organize themselves into microbes that contain the equivalent of 10^12 bits, completely in contradiction to what we know from this ‘common sense law of physics’:

“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong

Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
http://www.astroscu.unam.mx/~a.....ecular.htm
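The excerpt’s conversion H = S/(k ln 2) can be reproduced in a few lines. One caveat: the figures only come out to ~4 x 10^12 bits if the per-cell entropy is 9.3 x 10^-12 cal/deg, so the excerpt’s “10^12” exponent is assumed here to have lost a minus sign in transcription:

```python
import math

# Reproducing the Setlow-Pollard conversion H = S / (k ln 2).
# Assumption: the per-cell entropy is 9.3e-12 cal/deg; the excerpt's
# "10^12" exponent appears to be a transcription slip, since only the
# negative exponent yields the quoted ~4 x 10^12 bits.
k = 1.380649e-23            # Boltzmann constant, J/K
S_cell = 9.3e-12 * 4.2      # cal/deg -> J/K, using the excerpt's 4.2 J/cal
H_bits = S_cell / (k * math.log(2))
print(f"{H_bits:.1e}")      # -> 4.1e+12, matching the quoted figure
```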

Well perhaps common sense is not enough and the atheists have something to back their claim up? I mean weird things are discovered in science all the time right? Well this is not the case here. Sidestepping the ludicrous compensation (open system) argument to look at the empirical evidence itself, we find deep concordance with what common sense tells us should be the case. OOL research is completely blocked in on every side by the SLOT’s relentless grip. And if we look at life itself, we find that the tendency of things to decay (SLOT) is in full force all the way down to the molecular level:

“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.
http://behe.uncommondescent.co.....evolution/

List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
Excerpt: Resistance to antibiotics and other antimicrobials is often claimed to be a clear demonstration of “evolution in a Petri dish.” ,,, all known examples of antibiotic resistance via mutation are inconsistent with the genetic requirements of evolution. These mutations result in the loss of pre-existing cellular systems/activities, such as porins and other transport systems, regulatory systems, enzyme activity, and protein binding.
http://www.trueorigin.org/bacteria01.asp

Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
http://www2.cnrs.fr/en/1867.htm?theme1=7

In fact, the detrimental nature of mutations in humans is overwhelming: scientists have already cited over 100,000 mutational disorders.

Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”

I went to the mutation database website cited by John Avise and found:

HGMD®: Now celebrating our 100,000 mutation milestone!
http://www.hgmd.org/

I really question their use of the word ‘celebrating’. (Of note, apparently someone with a sense of decency has now removed the word ‘celebrating’)

30.
bornagain77 says:

And despite such an overwhelming rate of detrimental mutations, I have yet to find even a single unambiguously beneficial mutation in humans that did not come at the cost of compromising some other molecular function.

Human Genome in Meltdown – January 11, 2013
Excerpt: According to a study published Jan. 10 in Nature by geneticists from 4 universities including Harvard, “Analysis of 6,515 exomes reveals the recent origin of most human protein-coding variants.”,,,:
“We estimate that approximately 73% of all protein-coding SNVs [single-nucleotide variants] and approximately 86% of SNVs predicted to be deleterious arose in the past 5,000 -10,000 years. The average age of deleterious SNVs varied significantly across molecular pathways, and disease genes contained a significantly higher proportion of recently arisen deleterious SNVs than other genes.”,,,
As for advantageous mutations, they provided NO examples,,,
http://crev.info/2013/01/human-genome-in-meltdown/

In fact, the loss of morphological traits over time, for all organisms found in the fossil record, was/is so consistent that it was made into a ‘scientific law’:

Dollo’s law and the death and resurrection of genes:
Excerpt: “As the history of animal life was traced in the fossil record during the 19th century, it was observed that once an anatomical feature was lost in the course of evolution it never staged a return. This observation became canonized as Dollo’s law, after its propounder, and is taken as a general statement that evolution is irreversible.”
http://www.pnas.org/content/91.....l.pdf+html

And this extends to the molecular level as well,,

Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe
Excerpt: We predict that future investigations, like ours, will support a molecular version of Dollo’s law:,,, Dr. Behe comments on the finding of the study, “The old, organismal, time-asymmetric Dollo’s law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo’s law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future.
http://www.evolutionnews.org/2.....f_tim.html

As well, Dr. Sanford has noted that the process of decay is to be expected down to the molecular level in his book ‘Genetic Entropy’:

Dr. John Sanford “Genetic Entropy and the Mystery of the Genome” 1/2 – video

Thus Darwinists can play word games all they want, hoping to deceive others, but the plain, ‘common sense’ fact is that what we see happening all around us at the macro-level, of things growing old and decaying (and dying), holds true at the molecular level as well. There is no empirical evidence that Darwinists can appeal to to show that the process of decay does not hold for life as well:

further notes on extensive repair mechanisms in DNA

Verse and music:

Psalm 102:25-27
Of old You laid the foundation of the earth, And the heavens are the work of Your hands. They will perish, but You will endure; Yes, they will all grow old like a garment; Like a cloak You will change them, And they will be changed. But You are the same, And Your years will have no end.

Johnny Cash – Ain’t No Grave [Official HD] – The Johnny Cash Project – song starts around 2:50 minute mark

31.
Joe says:

Obviously this Ford guy and his text are wrong. 🙄

32.
CS3 says:

Since some continue to misrepresent the literature to fit their own views:

But nobody is claiming that solely because there is an entropy increase somewhere, any old thing can happen somewhere nearby. What we claim is that if a system is not isolated, work can be done on that system that can result in greater order, although the system doing the work will necessarily experience increased entropy.

Yet again, you are imposing your view on Styer, Bunn, and others who have made the compensation argument.

If they were trying to argue, as you do, that energy makes the development of complex organisms not extremely improbable, they would not (to quote the Styer paper) estimate that “due to evolution, each individual organism is 1000 times “more improbable” than the corresponding individual was 100 years ago.” They would say something like, “Organisms may seem to be getting more improbable each century, but, in fact, they are actually becoming more probable, due to the actions of the four fundamental forces and the solar influx.”

They may well see energy as related to the processes forming organisms, in that if there were no energy, nothing would happen, but they are clearly not arguing that the energy makes these processes not improbable. If they did not think anything improbable was happening, then there would be no need for them to convert the probabilities of improbable events into an entropy and compare that to a different type of entropy to satisfy an inequality.

And, even if the energy were causing these events, it makes no sense for them to try to convert from the original improbability of what happened to how much energy is needed. It takes energy to flip coins, but it takes no more energy to flip all heads than to flip half heads and half tails.
Even if the processes increasing the improbability of the organisms are the exact same processes as those increasing the thermal entropy, this accounting is completely invalid.

If I think energy is simply making something, for example, a plant forming a flower, not improbable (and I would agree in this case), I say, as you do, that energy is making that something not improbable. Perhaps I provide some details of a mechanism by which that might be the case. If I want to know how much energy is required, I analyze the mechanism, or perhaps perform an experiment if possible. I do not compute the ratio of the number of microstates of “flower” to the number of microstates of “dirt” and plug it into the Boltzmann formula to see how much energy I need, not even as an upper or lower bound. I only do that if I am trying to compensate improbable events with events that, if reversed, would be more improbable, according to some global accounting scheme.

As I challenged keiths in another thread,

Can you explain how the methodology used by Styer and Bunn cannot be used to show that “anything, no matter how improbable, can happen in a system as long as the above criterion is met?” Just substitute the probability ratio of, say, a set of a thousand coins going from half heads and half tails to all heads in place of their estimate for the increase in improbability of organisms due to evolution. Plug that into the Boltzmann formula, and compare to the thermal entropy increase. If its magnitude is less, the Second Law is satisfied.
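For concreteness, the substitution described in that challenge can actually be run. A minimal sketch (Boltzmann's constant and the ice-melting latent heat are standard values; the comparison itself only illustrates the accounting being criticized, it is not from the Styer or Bunn papers):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Multiplicity of "half heads, half tails" vs. "all heads" for 1000 coins
n = 1000
omega_half = math.comb(n, n // 2)  # microstates with 500 heads
omega_all = 1                      # exactly one all-heads microstate

# Convert the improbability into an "entropy decrease" the same way
# Styer and Bunn convert the improbability of organisms
delta_S = k_B * math.log(omega_half / omega_all)
print(f"|dS| for 1000 coins going to all heads: {delta_S:.2e} J/K")  # ~9.5e-21

# Any everyday thermal entropy increase dwarfs it; e.g., melting one
# gram of ice at 273 K (latent heat ~334 J/g):
dS_ice = 334.0 / 273.15
print(f"dS for melting 1 g of ice: {dS_ice:.2f} J/K")  # ~1.2 J/K
```

The "entropy cost" of the all-heads miracle comes out some twenty orders of magnitude smaller than melting a single gram of ice, which is exactly why the criterion, taken at face value, proves far too much.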

Furthermore, I challenge you to explain how their methodology helps make the claim you are trying to make.

Again, to summarize their methodology, which no one has been able to defend:

They estimate how much more “improbable” some organism is than an ancestral organism, plug that into the Boltzmann formula, and then multiply by the number of organisms and divide by the time taken to evolve, to get a value, in Joules per degree Kelvin per second, for the rate of entropy decrease due to the evolution. Then, they compare this value to the value for the rate of increase in entropy in the cosmic microwave background. So long as the magnitude of the evolution entropy decrease is less than the magnitude of the cosmic microwave background increase, they conclude that “the second law of thermodynamics is safe.”
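To make that summarized accounting concrete, here is a sketch of it in numbers (the organism count, solar power, and temperatures are rough round figures assumed only to show the orders of magnitude involved; they are not taken verbatim from the papers):

```python
import math

k_B = 1.380649e-23                   # Boltzmann constant, J/K
CENTURY = 100 * 365.25 * 24 * 3600   # seconds in a century

# Steps 1-2: each organism 1000x "more improbable" per century, via the
# Boltzmann formula, times an assumed ~10^32 organisms on Earth
dS_per_organism = -k_B * math.log(1000)            # J/K per century
rate_evolution = dS_per_organism * 1e32 / CENTURY
print(f"evolution dS/dt ~ {rate_evolution:.1f} J/(K s)")

# Step 3: compare to Earth's entropy throughput (absorb ~1.2e17 W of
# sunlight at ~5800 K, re-radiate at ~255 K; rough textbook figures)
P = 1.2e17
rate_throughput = P / 255 - P / 5800
print(f"throughput dS/dt ~ {rate_throughput:.1e} J/(K s)")

# |rate_evolution| is vastly smaller than rate_throughput, so by this
# accounting "the second law of thermodynamics is safe."
```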

Hopefully you can forgive Sewell for writing a paper that responds to the arguments in the literature rather than to the personal views of UD posters.

——————————————————

If Sewell’s arguments have nothing to do with the Second Law, then why do all of these have to do with the Second Law?

1) Isaac Asimov publishes an article in the Smithsonian Institution Journal, entitled “In the game of energy and thermodynamics, you can’t even break even”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organisms is compensated by the increase in entropy of the Sun. To quote the article itself:

You can argue, of course, that the phenomenon of life may be an exception [to the second law]. Life on earth has steadily grown more complex, more versatile, more elaborate, more orderly, over the billions of years of the planet’s existence. From no life at all, living molecules were developed, then living cells, then living conglomerates of cells, worms, vertebrates, mammals, finally Man. And in Man is a three-pound brain which, as far as we know, is the most complex and orderly arrangement of matter in the universe. How could the human brain develop out of the primeval slime? How could that vast increase in order (and therefore that vast decrease in entropy) have taken place? Remove the sun, and the human brain would not have developed…. And in the billions of years that it took for the human brain to develop, the increase in entropy that took place in the sun was far greater; far, far greater than the decrease that is represented by the evolution required to develop the human brain.

2) Daniel Styer publishes an article in the American Journal of Physics, entitled “Entropy and Evolution”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organism is compensated by the increase in entropy of the cosmic microwave background. His paper is a quantitative version of the compensation argument frequently made in textbooks and by prominent Darwinists such as Isaac Asimov and Richard Dawkins. To quote the article itself:

Does the second law of thermodynamics prohibit biological evolution?…Suppose that, due to evolution, each individual organism is 1000 times “more improbable” than the corresponding individual was 100 years ago. In other words, if Ui is the number of microstates consistent with the specification of an organism 100 years ago, and Uf is the number of microstates consistent with the specification of today’s “improved and less probable” organism, then Uf = 10^-3 Ui.

Presumably the entropy of the Earth’s biosphere is indeed decreasing by a tiny amount due to evolution, and the entropy of the cosmic microwave background is increasing by an even greater amount to compensate for that decrease. But the decrease in entropy required for evolution is so small compared to the entropy throughput that would occur even if the Earth were a dead planet, or if life on Earth were not evolving, that no measurement would ever detect it.

3) Emory Bunn publishes an article in the American Journal of Physics, entitled “Evolution and the Second Law of Thermodynamics”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organisms is compensated by the increase in entropy of the cosmic microwave background. His estimate of the improbability of life due to evolution is “more generous” than Styer’s. To quote the article itself:

We now consider (dS/dt)life… far from being generous, a probability ratio of Ui/Uf = 10^-3 is probably much too low. One of the central ideas of statistical mechanics is that even tiny changes in a macroscopic object (say, one as large as a cell) result in exponentially large changes in the multiplicity (that is, the number of accessible microstates). I will illustrate this idea by some order of magnitude estimates. First, let us address the precise meaning of the phrase “due to evolution.” If a child grows up to be slightly larger than her mother due to improved nutrition, we do not describe this change as due to evolution, and thus we might not count the associated multiplicity reduction in the factor Ui/Uf. Instead we might count only changes such as the turning on of a new gene as being due to evolution. However, this narrow view would be incorrect. For this argument we should do our accounting in such a way that all biological changes are included. Even if a change like the increased size of an organism is not the direct result of evolution for this organism in this particular generation, it is still ultimately due to evolution in the broad sense that all life is due to evolution. All of the extra proteins, DNA molecules, and other complex structures that are present in the child are there because of evolution at some point in the past if not in the present, and they should be accounted for in our calculation… We conclude that the entropy reduction required for life on Earth is far less than |dS life| ≈ 10^44 k… the second law of thermodynamics is safe.

4) Bob Lloyd publishes a viewpoint in the Mathematical Intelligencer backing Styer and Bunn, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the organism complexity is compensated by the increase in entropy of the cosmic microwave background. To quote the article itself:

The qualitative point associated with the solar input to Earth, which was dismissed so casually in the abstract of the AML paper, and the quantitative formulations of this by Styer and Bunn, stand, and are unchallenged by Sewell’s work.

——————————————————

More quotations from textbooks and articles that apply the general form of the Second Law:

From University Physics by Young and Freedman, in the Chapter “The Second Law of Thermodynamics”:

There is a relationship between the direction of a process and the disorder or randomness of the resulting state. For example, imagine a tedious sorting job, such as alphabetizing a thousand book titles written on file cards. Throw the alphabetized stack of cards into the air. Do they come down in alphabetical order? No, their tendency is to come down in a random or disordered state. In the free expansion of a gas, the air is more disordered after it has expanded into the entire box than when it was confined in one side, just as your clothes are more disordered when scattered all over your floor than when confined to your closet.

From a different edition of University Physics, in a section about “building physical intuition” about the Second Law:

A new deck of playing cards is sorted out by suit (hearts, diamonds, clubs, spades) and by number. Shuffling a deck of cards increases its disorder into a random arrangement. Shuffling a deck of cards back into its original order is highly unlikely.

From Basic Physics by Kenneth Ford:

Imagine a motion picture of any scene of ordinary life run backward. You might watch…a pair of mangled automobiles undergoing instantaneous repair as they back apart. Or a dead rabbit rising to scamper backward into the woods as a crushed bullet re-forms and flies backward into a rifle while some gunpowder is miraculously manufactured out of hot gas. Or something as simple as a cup of coffee on a table gradually becoming warmer as it draws heat from its cooler surroundings. All of these backward-in-time views and a myriad more that you can quickly think of are ludicrous and impossible for one reason only – they violate the second law of thermodynamics. In the actual scene of events, entropy is increasing. In the time reversed view, entropy is decreasing.

From General Chemistry, 5th Edition, by Whitten, Davis, and Peck:

The Second Law of Thermodynamics is based on our experiences. Some examples illustrate this law in the macroscopic world. When a mirror is dropped, it can shatter…The reverse of any spontaneous change is nonspontaneous, because if it did occur, the universe would tend toward a state of greater order. This is contrary to our experience. We would be very surprised if we dropped some pieces of silvered glass on the floor and a mirror spontaneously assembled… The ideas of entropy, order, and disorder are related to probability.

From Isaac Asimov in “In the game of energy and thermodynamics, you can’t even break even”:

We have to work hard to straighten a room, but left to itself, it becomes a mess again very quickly and very easily…. How difficult to maintain houses, and machinery, and our own bodies in perfect working order; how easy to let them deteriorate. In fact, all we have to do is nothing, and everything deteriorates, collapses, breaks down, wears out — all by itself — and that is what the second law is all about.

33.
bornagain77 says:

Of note: Two papers investigate the thermodynamics of quantum systems – July 8, 2013
Excerpt: As one of the pillars of the natural sciences, thermodynamics plays an important role in all processes that involve heat, energy, and work. While the principles of thermodynamics can predict the amount of work done in classical systems, for quantum systems there is instead a distribution of many possible values of work. Two new papers published in Physical Review Letters have proposed theoretical schemes that would significantly ease the measurement of the statistics of work done by quantum systems.,,,
“Fundamentally, we could start exploring quantum thermodynamics, which puts together a genuine quantum approach and the rock-solid foundations of thermodynamics,” he said. “We (and a few other researchers) are trying to do it from an information theoretic viewpoint, hoping to get new insight into this fascinating area.,,,
http://phys.org/news/2013-07-p.....antum.html

related interest:

“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

Maxwell’s demon demonstration (knowledge of a particle’s position) turns information into energy – November 2010
Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
http://www.physorg.com/news/20.....nergy.html

Demonic device converts information to energy – 2010
Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
http://www.scientificamerican......rts-inform

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
http://www.sciencedaily.com/re.....134300.htm

Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH – Dr Andy C. McIntosh is Professor of Thermodynamics and Combustion Theory at the University of Leeds (“Professor” being the highest teaching/research rank in the U.K. university hierarchy).
Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
http://journals.witpress.com/paperinfo.asp?pid=420

34.
CS3 says:

One more, from Chemistry by Zumdahl and Zumdahl.

The natural progression of things is from order to disorder, from lower entropy to higher entropy. To illustrate the natural tendency toward disorder, you only have to think about the condition of your room. Your room naturally tends to get messy (disordered), because an ordered room requires everything to be in its place. There are simply many more ways for things to be out of place than for them to be in their places.

As another example, suppose you have a deck of playing cards ordered in some particular way. You throw these cards into the air and pick them all up at random. Looking at the new sequence of the cards, you would be very surprised to find that it matched the original order. Such an event would be possible, but very improbable. There are billions of ways for the deck to be disordered, but only one way to be ordered according to your definition. Thus the chances of picking the cards up out of order are much greater than the chance of picking them up in order. It is natural for disorder to increase.

Entropy is a thermodynamic function that describes the number of arrangements (positions and/or energy levels) that are available to a system existing in a given state. Entropy is closely associated with probability. The key concept is that the more ways a particular state can be achieved, the greater is the likelihood (probability) of finding that state. In other words, nature spontaneously proceeds toward the states that have the highest probabilities of existing. This conclusion is not surprising at all. The difficulty comes in connecting this concept to real-life processes. For example, what does the spontaneous rusting of steel have to do with probability? Understanding the connection between entropy and spontaneity will allow us to answer such questions. We will begin to explore this connection by considering a very simple process, the expansion of an ideal gas into a vacuum. Why is this process spontaneous? The driving force is probability. Because there are more ways of having the gas evenly spread throughout the container than there are ways for it to be in any other possible state, the gas spontaneously attains the uniform distribution.

Nature always moves toward the most probable state available to it.
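The “billions of ways” in the Zumdahl passage is, if anything, a vast understatement; the count is easy to check (a quick sketch):

```python
import math

# Number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)
print(f"52! = {orderings:.2e}")  # ~8.07e+67 distinct deck orders
print(f"P(tossed cards land back in the original order) = {1 / orderings:.1e}")
```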

35.
CS3 says:

More details from University Physics by Young and Freedman, in a section entitled “Microscopic Interpretation of Entropy” in the chapter “The Second Law of Thermodynamics”:

Entropy is a measure of the disorder of the system as a whole. To see how to calculate entropy microscopically, we first have to introduce the idea of macroscopic and microscopic states.

Suppose you toss N identical coins on the floor, and half of them show heads and half show tails. This is a description of the large-scale or macroscopic state of the system of N coins. A description of the microscopic state of the system includes information about each individual coin: Coin 1 was heads, coin 2 was tails, coin 3 was tails, and so on. There can be many microscopic states that correspond to the same macroscopic description. For instance, with N=4 coins there are six possible states in which half are heads and half are tails. The number of microscopic states grows rapidly with increasing N; for N=100 there are 2^100 = 1.27×10^30 microscopic states, of which 1.01×10^29 are half heads and half tails.

The least probable outcomes of the coin toss are the states that are either all heads or all tails. It is certainly possible that you could throw 100 heads in a row, but don’t bet on it: the possibility of doing this is only 1 in 1.27×10^30. The most probable outcome of tossing N coins is that half are heads and half are tails. The reason is that this macroscopic state has the greatest number of corresponding microscopic states.

To make the connection to the concept of entropy, note that N coins that are all heads constitutes a completely ordered macroscopic state: the description “all heads” completely specifies the state of each one of the N coins. The same is true if the coins are all tails. But the macroscopic description “half heads, half tails” by itself tells you very little about the state (heads or tails) of each individual coin. We say that the system is disordered because we know so little about its microscopic state. Compared to the state “all heads” or “all tails”, the state “half heads, half tails” has a much greater number of possible microstates, much greater disorder, and hence much greater entropy (which is a quantitative measure of disorder).

Now instead of N coins, consider a mole of an ideal gas containing Avogadro’s number of molecules. The macroscopic state of this gas is given by its pressure p, volume V, and temperature T; a description of the microscopic state involves stating the position and velocity for each molecule in the gas. At a given pressure, volume, and temperature the gas may be in any one of an astronomically large number of microscopic states, depending on the positions and velocities of its 6.02×10^23 molecules. If the gas undergoes a free expansion into a greater volume, the range of possible positions increases, as does the number of possible microscopic states. The system becomes more disordered, and the entropy increases.

We can draw the following general conclusion: For any system the most probable macroscopic state is the one with the greatest number of corresponding microscopic states, which is also the macroscopic state with the greatest disorder and the greatest entropy.

Sewell’s statement follows directly from this: in an isolated system, the reason natural forces (such as tornados) “may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy.”
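The microstate counts in the Young and Freedman excerpt can be verified directly (nothing assumed beyond the textbook's own numbers):

```python
import math

n = 100
total = 2 ** n                 # all microscopic states of 100 coins
half = math.comb(n, n // 2)    # states with exactly 50 heads

print(f"2^100     = {total:.3e}")   # 1.268e+30, the quoted 1.27x10^30
print(f"C(100,50) = {half:.3e}")    # 1.009e+29, the quoted 1.01x10^29
print(f"P(100 heads in a row) = 1 in {total:.2e}")
```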

36.
CS3 says:

And if you don’t believe me that all statements of the 2nd Law of thermodynamics must be equivalent to be statements of the 2nd law, try this.

They are equivalent in the context of thermal entropy. Some can also be applied more broadly.

37.

There may be all kinds of applications of the general principle that improbable things are more improbable than probable things; indeed it’s what underlies the principle of null hypothesis testing.

But that doesn’t make the 2nd Law of thermodynamics not about thermodynamics.

Or is it possible that Granville and CS3 are confusing heat, which is measured in joules, with temperature, which is measured in degrees?

If Granville’s point is not about energy, i.e. something measured in joules, then it is not about the 2nd Law of thermodynamics, and obviously in that case any counter-argument based on the assumption that it is, e.g. about compensation and external energy sources, will miss its mark.

But if this is the case, then Granville should stop invoking the 2nd Law, or, at the very least, make it clear that he is only using it metaphorically.

CS3: Can you give me an example of a broader practical application of the 2nd Law that is not about thermodynamic energy?

Where thermodynamic energy is defined as the wiki entry has it:

The internal energy of a system, also often called the thermodynamic energy, includes other forms of energy in a thermodynamic system in addition to thermal energy, namely forms of potential energy that do not influence temperature and do not absorb heat, such as the chemical energy stored in its molecular structure and electronic configuration, and the nuclear binding energy that binds the sub-atomic particles of matter.

38.

Granville:

I am well aware that the idea that “entropy” is a single quantity which measures, in units of thermal entropy, all types of disorder is widespread, and promoted for the same reason that you promote it, that is one of the main points my BioComplexity article refutes. But it is patently false. There are applications of the second law which are quantifiable, such as the diffusion of chromium given in my scenario (B), which obviously have nothing to do with heat or energy,

Of course your scenario B has “to do with heat or energy”! How do you think the chromium diffuses through the steel bar if not by work being done on the chromium atoms, and thus the thermal entropy decreasing!

just with probability

“Probability” is a meaningless concept without reference to the generative process under which we compute the probability distribution. In the case of your chromium, it is the thermal energy in the bar – the jiggling in a uniform distribution of directions – that makes uniformly diffused chromium the most probable final outcome.
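That point can be illustrated with a minimal simulation (all parameters here, such as lattice size, particle count, and step counts, are assumed purely for illustration): unbiased thermal "jiggling" alone carries particles that start concentrated at one end of a bar toward the uniform distribution, because uniform is where almost all the accessible microstates are.

```python
import random

random.seed(1)
L = 50                                                   # lattice sites in the "bar"
positions = [random.randrange(10) for _ in range(300)]   # all start at the left end

# Unbiased hops: a uniform distribution of step directions, with
# reflecting walls at the ends of the bar
for _ in range(3000):
    for i in range(len(positions)):
        positions[i] = min(L - 1, max(0, positions[i] + random.choice((-1, 1))))

left = sum(1 for x in positions if x < L // 2)
print(f"fraction still in left half: {left / len(positions):.2f}")  # ~0.5
```

No directional force pushes the particles rightward; the drift to uniformity is pure probability, which is exactly the sense in which uniformly diffused chromium is the most probable final outcome.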

When Asimov talked about the entropy decrease associated with the development of the human brain, he was obviously NOT talking about any change in thermal entropy,

You think a human brain can develop without any change in thermal entropy? It can’t even function without changes in thermal entropy! That’s why people like me can measure which parts of the brain are active at any time – by measuring proxies for brain metabolism!

and many other examples of “entropy” increases are given in many texts which obviously have nothing to do with thermal entropy, such as those I mentioned earlier.

It’s not obvious to me that your examples “have nothing to do with thermal entropy”. In fact it seems extremely clear that they have!

39.
kairosfocus says:

CS3:

Well said, and well cited.

The second law of thermodynamics has the peculiarity of being rooted in a broad principle and pattern of observation, leading to a plurality of statements of varying breadth.

In their original context, heat engines, they will naturally be very close, but some are narrower, others less so. For instance, one of the first formulations was that one could not build a heat engine whose only function would be to convert heat into work (i.e. random molecular motion cannot be wholly converted into orderly forced motion). That is, it banned a certain type of perpetual motion machine as impossible; in effect daring you to provide a successful counter-example.

The familiar formulation in terms of isolated systems and entropy net being at least preserved, was itself already a widening from that and the like.

The formulation in statistical terms (based on more or less accessible clusters of states that are discernible at micro level and consistent with a macro-level set of conditions: that systems tend to migrate to clusters of higher statistical weight, sometimes phrased as higher thermodynamic probability) is an analysis of the former in light of the understanding that emerged across C19 and into early C20, that matter was empirically established as atomic and molecular. (Boltzmann’s sad end was in part due to how the fierce objections he encountered to his atomism aggravated his condition. That gravestone marker has a sad point to it; it is not merely celebratory of an achievement.)

Next, there is a tendency to suggest that the law has applications only to isolated systems and that open ones are unconstrained by it. That early formulation in terms of forbidding a certain class of perpetual motion machine, gives the lie to that.

And in fact the formulation in terms of isolated systems, is meant to give an ideal context that then grounds what happens when we move to the cases of wider interest, first the heat engine — obviously an open system. And in studying thermodynamics, so soon as the law is put, it is combined with the first law to be used in analysis of such.

Going further, the isolated-system context examines subsystems open to heat flows [in the relevant terminology, “closed” systems, “open” ones being those open to mass and energy flows, etc.], and the changes that occur once a quantum of heat is passed due to a temperature difference. It shows that once a subsystem B receives such a quantum, its entropy, defined in terms of a quantity increasing as dS >/= (-d’Q/T_A) + (+d’Q/T_B), tends to rise in the absence of a compensating change elsewhere (which requires an energy conversion mechanism such as a heat engine, coupled to the intake of energy, and will normally also require exhausting energy as waste heat to a heat sink, C).

The loss of entropy of the donating subsystem A, is then algebraically compensated by the higher value of the rise in B.

Moving to the analytical level of the micro picture, the number of ways energy and mass can be arranged at micro level in B increases so much that it exceeds the number of ways lost by A.

GS’s key contribution is that he has aptly highlighted that there is another empirically confirmed analytical factor at work: diffusion-like processes, including heat spreading.

Such processes tend to undo concentrations across time, spreading out the concentrated item. This is driven by random walks through accessible interactions by atoms, molecules, etc. Random walks leading to spreading out are then tied to the point that clusters of states that are rare in the overall state/phase space (here we reckon with not only positions and masses but momenta, etc., which are tied to energy measures) remain rare relative to the overwhelming bulk clusters, whether or not a system is isolated or open to energy and/or mass inflows or outflows.

That may seem trivial, but it is pivotal, and it is obviously easily missed.

It means that there will be a spontaneous tendency of systems to gravitate in the state space to accessible configuration clusters that are dominant, rather than to those that are rare and isolated: we deal with cases where the numbers of possibilities are beyond astronomical, even for something as simple as 500 – 1,000 coins; that means that random walk based processes are not likely to be able to access such, as the atomic and temporal resources of our solar system or the observed cosmos are not adequate to search a significant part of the phase space.

That is, we are inherently unable to sample enough of the space of possibilities for vanishingly rare, special and specific clusters to be reasonably observable. (E.g. we have no reason to expect to see 500H on tossing coins, even if we were to expend major efforts on such across the gamut of the atomic and temporal resources of our solar system. [Significantly, the ideologically motivated objectors we are dealing with here at UD refuse to acknowledge this point. Not on grounds that it is poorly warranted, but evidently because they think that, since any one state is as improbable as any other, we should show no surprise to see any state appear. To do this, they refuse to acknowledge the significance of clusters of possibilities that dominate a space of states.])
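The sampling argument can be put in rough numbers. A sketch, with the resource figures (about 10^57 atoms in the solar system, one state sampled per 10^-14 s, about 10^17 s of time) being assumed round estimates rather than measurements:

```python
import math

# State space of 500 two-state elements (coins/bits)
log10_states = 500 * math.log10(2)   # ~150.5, i.e. ~3.3e150 states

# Generous sampling budget: every atom in the solar system samples one
# state every 10^-14 s for 10^17 s (assumed round figures)
log10_samples = 57 + 14 + 17         # 10^88 samples

print(f"log10(states)  ~ {log10_states:.1f}")
print(f"log10(samples) ~ {log10_samples}")
print(f"searchable fraction ~ 10^{log10_samples - log10_states:.0f}")
```

Even on these generous assumptions, the fraction of the space that could ever be visited is negligibly small, which is the sense in which rare, isolated clusters are effectively unreachable by random-walk search.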

Consequently, the observation of “counterflow” leading to such isolated clusters needs to be explained on mechanisms that make them not vanishingly improbable.

Let me reproduce the analysis that I presented at 6 above (and in a previous thread) to draw this out a bit more:

For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics [MIR, 1974]), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but over 63,000 ways to be 5B/5W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards the dominant cluster near the 5:5 peak and “forgets” the initial state.
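Those counts can be checked directly (a small Python sketch; I am assuming the 2 x 10 layout shown above, with marbles indistinguishable within a colour):

```python
from math import comb

# Ways to have k marbles of one colour in the top row: choose their k
# positions among the 10 on top, and the remaining 10-k among the 10 below.
ways = {k: comb(10, k) * comb(10, 10 - k) for k in range(11)}

print(ways[10])            # 1: the fully sorted start (one colour per row)
print(ways[5])             # 63504: the 5B/5W-per-row peak
print(ways[4], ways[6])    # 44100 each: the 4:6 and 6:4 cases
print(sum(ways.values()))  # 184756 = C(20,10), all arrangements
```

The sorted state is one configuration out of 184,756; the near-even-mix cluster holds the overwhelming majority.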

This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more.

The basic reason for such does not go away if the system is opened up to energy and mass flows; and the point that rare, specifically and simply describable states will seldom be revisited or found becomes, for enough complexity — 500 – 1,000 bits — the point that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.

RS’ point that there are states that can be locked away from interaction so that it is reasonable to partition entropy accounting, is also quite useful.

My own emphasis is that we need to see the difference between what diffusion-like factors/forces will strongly tend to do and what produces shaft work, thence constructive work ending in FSCO/I.

Let me try a second diagram using textual features:

Heat transfer in Isolated system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2 Th (the second law of thermodynamics) in a common form
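Plugging illustrative numbers into that inequality (my own assumed values, T_a = 400 K, T_b = 300 K, d’Q = 100 J, chosen only to show the sign):

```python
# Entropy bookkeeping for d'Q passing from hot A to cold B.
T_a, T_b = 400.0, 300.0   # kelvins, with T_a > T_b (assumed values)
dQ = 100.0                # joules transferred from A to B

dS_a = -dQ / T_a          # A loses entropy: -0.25 J/K
dS_b = +dQ / T_b          # B gains entropy: +0.333... J/K
dS_total = dS_a + dS_b

print(round(dS_total, 3)) # 0.083 J/K > 0, as the second law requires
```

Because T_a > T_b, the gain at B always outweighs the loss at A, so the total never decreases.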

Heat engine, leaving off the isolation of the whole:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also,

B’ –> d’Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d’Q_a will equal the sum of rise in internal energy of B dU_b, work done on D, dW, and heat transferred onwards to C, d’Q_b.
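The energy balance just stated, together with the entropy conditions, fixes how much heat must be exhausted. With assumed illustrative numbers of my own (Th = 500 K, Tc = 300 K, d’Q_a = 1000 J):

```python
# Heat-engine bookkeeping: A at Th feeds Qi into engine B', which does
# work W and exhausts Qo to the sink C at Tc.
Th, Tc = 500.0, 300.0   # kelvins (assumed values)
Qi = 1000.0             # joules drawn from the hot source A

# For dS_total = -Qi/Th + Qo/Tc >= 0, the exhaust must satisfy:
Qo_min = Qi * Tc / Th   # 600 J minimum waste heat to the sink
W_max = Qi - Qo_min     # 400 J: the Carnot work limit

print(Qo_min, W_max)           # 600.0 400.0
print(-Qi / Th + Qo_min / Tc)  # 0.0: the reversible, limiting case
```

So shaft work is available, but only by paying the entropy bill at the sink; the question in view is the origin of the converter that couples the flow into organised work.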

The pivotal questions are: the performance of constructive work by dW; the possibility that this results in FSCO/I; the possibility that B itself shows FSCO/I, enabling it to couple energy and do shaft work; and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch.

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos.

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why.

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.

Now, yes, the 10 W + 10 B balls example is a toy example, just like the 500 coins. Such examples are accessible, amenable to calculation, and help build a broader, deeper intuition by grounding a concept and providing a reference point. In this case, we see what diffusion is about, at a simple level — one that makes sense of the observation that, say, once we drop an ink drop in a beaker of water, over time it spreads out and eventually becomes dispersed, but never do we see it spontaneously re-forming.

Once similar forces and factors are at work, the same sort of dispersive pattern will obtain, for the same reason of dominant clusters of possibilities that are accessible.

GS is right to highlight that diffusion-like spreading-out processes are central to our understanding of the second law. RS properly highlights that when energy access barriers obtain, we can have partitioning of dispersive effects, leading to a proper subscripting of our diffusion and entropy accounting. X-entropy or similar terminology — though not particularly common — is reasonable.

And the next point is now quite plain.

We cannot reasonably expect diffusion-like disorganising forces to give us access to rare clusters, given the limits of our relevant search resources on earth, in the solar system, in the observed cosmos.

Functionally specific, complex organisation and associated information are not credibly accessible through diffusion-like disorganising, disordering forces. And we can properly see why, even without having to precisely calculate exact probabilities (yet another hyperskeptical objection), once we recognise the force of sampling theory for blind samples made with resources being inadequate to capture more than a small snapshot of the bulk of a distribution of accessible possibilities.

Where also, the very definition of FSCO/I itself underscores how rare it will be: multiple, well matched parts need to be correctly arranged and coupled together for function, thus sharply constraining the set of acceptable configurations. As an example, the letters in this post are sharply constrained by the rules of English text, in order to function effectively as a message. So, if the locations or selection of character states from the ASCII set were by contrast to be selected blindly, we would very soon have gibberish due to the equivalent of a diffusion like process: fieghqvkehju . . .

Likewise, constraining forces similar to those of crystallisation (e.g., KS’s attempt to use the freezing of water) will produce repetitive order, not information-rich functional organisation: FGFGFGFGFGF . . .

With these on the table, we can see how the design inference on signs such as FSCO/I is rooted in an underlying analysis of thermodynamics considerations.

Now, of course, I do not expect ideologues to accept or even recognise the above, but that is beside the point. Our interest is in reasonable discussion in light of evidence and analysis, leading to a better insight into empirical reality, not the sort of agenda pushing that has led to the rhetorical mess and worse we see all about.

KF

40.
keiths says:

KF,

Since you mentioned my freezing water example, let me remind you that you still haven’t rebutted my simple 4-step argument:

CS3,

I’ve mentioned this a couple of times already but people (including you) haven’t picked up on it, so let me try again.

When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.

It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.

Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.

All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.

Note:

1. The entropy of A decreases when the water freezes.

2. The second law tells us that the entropy of C cannot decrease.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.

The second law demands that compensation happen. If you deny compensation, you deny the second law.
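The arithmetic behind the freezing example is easy to check (a sketch; the latent heat and temperatures are standard textbook values I am supplying, not figures from this thread):

```python
# One gram of water freezing at 0 C, dumping its latent heat to
# surroundings held at -10 C.
L = 334.0              # J/g, latent heat of fusion of water (approx.)
T_freeze = 273.15      # K, temperature of the freezing water (system A)
T_env = 263.15         # K, temperature of the surroundings (system B)

dS_A = -L / T_freeze   # water's entropy drops as it freezes
dS_B = +L / T_env      # surroundings absorb the released heat

print(dS_A < 0)        # True: a local entropy decrease in A
print(dS_A + dS_B > 0) # True: B's gain more than offsets A's loss
```

Because the surroundings are colder than the freezing point, the heat they absorb raises their entropy by more than the water loses.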

Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!

It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper.

41.
kairosfocus says:

KS:

This is an example of a straight-out, willfully misleading propagandistic fabrication in the teeth of evident facts on your part. I did take time to address the above point by point, correcting it as a case of red herrings led away to strawmen. Much of that response is a clip from my always linked note, App A, which I believe dates in main part to 2008; i.e. a further answer to KS’ strawman tactics has been present through my handle in EVERY comment I have ever made at UD.

Let me clip 231 in the difference thread, posted July 7, 2013 at 6:03 am:

___________
>> When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.>>

1 –> The core of GS’s argument is that forces that on balance of probabilities lead to diffusion and the like, are maximally implausible as the source of constructive work. Citing his paper, A Second Look at the Second Law, again (as done at 190 above):

Note that (2) [a flow gradient expression] simply says that heat flows from hot to cold regions—because the laws of probability favor a more uniform distribution of heat energy . . . . From [an eqn that entails that in such a system, d’S >/= 0] (5) it follows that in an isolated, closed system, where there is no heat flux through the boundary, d’S >/= 0. Hence, in a closed system, entropy can never decrease. Since thermal entropy measures randomness (disorder) in the distribution of heat, its opposite (negative) can be referred to as ”thermal order”, and we can say that the thermal order can never increase in a closed system.

Furthermore, there is really nothing special about ”thermal” entropy. We can define another entropy, and another order, in exactly the same way, to measure randomness in the distribution of any other substance that diffuses; for example, we can let U(x,y,z,t) represent the concentration of carbon diffusing in a solid (Q is just U now), and through an identical analysis show that the ”carbon order” thus defined cannot increase in a closed system. It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable states . . .

2 –> At no point have objectors provided an example of FSCO/I arising spontaneously by such dispersive forces through their providing constructive work. This is also the implicit point in Hoyle’s example of a tornado passing through a junkyard and, lo and behold, a jumbo jet emerges, NOT. By contrast, the work involving a probably comparable amount of energy, or even less, by men, machines and equipment working to a constructive plan will build a jumbo jet. That is, we must recognise the difference between forces that blindly and freely move things around in accord with statistical patterns and those that move them according to a plan.

3 –> This issue lies as well at the heart of the recent challenge to explain how a box of 500 coins, all H, came to be. KS, EL, and others of their ilk have been adamant in refusing the best explanation [constructive work], and in refusing as well to recognise that, due to the differing statistical weights of clusters of microstates, such a 500H state arising by random tossing is practically unobservable on the gamut of the solar system.

4 –> Notice, also, GS has put the issue of forces of diffusion at the pivot of his case, and indeed that at once allows us to see that when he speaks of X-entropy, he is speaking of the sort of thing that makes C diffuse even in the solid state.

>>It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.>>

5 –> Here KS revisits Clausius’ first example, which appears in my always linked note and which is clipped in the FYI-FTR; he is about to refuse to look seriously at what is happening at micro level when d’Q of heat moves from A at a higher temperature to B at a lower one. In short, he leads away via a red herring and erects and burns a strawman. Let me lay out the summary that was there for literally years in App 1 of my note:

1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and
unmet challenge to neo-darwinian thought. (Cf also Klyce’s relatively serious and balanced assessment, from a panspermia advocate. Sewell’s remarks here are also worth reading. So is Sarfati’s discussion of Dawkins’ Mt Improbable.)

2] But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned follows:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it:

Isol System:

| |(A, at Thot) –> d’Q, heat –> (B, at T cold)| |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for the system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . “Eqn” A.2

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.

f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||===
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid, think of them as scattered more or less hap-hazardly as would happen in a cake).

2: Now, let the marbles all be at rest to begin with.

3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].

4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right

5: As the glancing angles on collision will vary at random, the struck marbles and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and into other marbles, causing the motion to continue indefinitely.

6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, following what is called the Maxwell-Boltzmann distribution of speeds, a characteristic skewed, bell-like curve.

7: And, this pattern would emerge independent of the specific initial arrangement or of how we impart motion, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue . . . .

For the injection of energy to instead predictably and consistently do something useful, it needs to be coupled to an energy conversion device.
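The relaxation toward the M-B pattern in steps 6 – 7 can be imitated with a deliberately crude kinetic-exchange toy of my own (not molecular dynamics: pairs of “marbles” repeatedly split their combined energy at random, a scheme known to relax toward a Boltzmann-like exponential energy distribution):

```python
import random

random.seed(1)
N = 10_000
energy = [1.0] * N  # start: every marble identical (a highly ordered state)

for _ in range(200_000):
    # Pick two distinct marbles and redistribute their combined energy
    # at a uniformly random split; total energy is conserved.
    i, j = random.sample(range(N), 2)
    pool = energy[i] + energy[j]
    split = random.random()
    energy[i], energy[j] = split * pool, (1 - split) * pool

# After relaxation, most marbles are slow and a few are fast:
below_mean = sum(e < 1.0 for e in energy) / N
print(round(below_mean, 2))  # typically near 1 - 1/e (about 0.63),
                             # as for an exponential distribution
```

Whatever the initial state, the agitation “forgets” it and settles into the same dominant pattern, which is the attractor point being made above.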
g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but — from the above — negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell
is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B’ below.
Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod’s “chance + necessity” [cf also Plato’s remarks] only.)

h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e FSCI; they are of course artifacts of
intelligent design and also exhibit step-by-step problem-solving processes (even including “do-always” looping!)]:

| | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –>

d’W [work done on say D] + d’Qo –> (C, sink at Tc) | |

i] A’s entropy: dSa >/= – d’Qi/Th

j] C’s entropy: dSc >/= + d’Qo/Tc

k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines. [–> Notice, I have addressed the compensation issue all along.]

l] However, B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy from having imported d’Qi. [The problem is to explain the origin of the heat engine — or more generally, energy converter — that does this, if it exhibits FSCI.] [–> Notice the pivotal question being ducked in the context of the origin of cell based life, through red herrings and strawmen.]

m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI].

n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI — on the direct import of the many cases where we do directly know the causal story of FSCI — becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in CH 7, to that more focussed discussion in ch’s 8 – 9:

While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal’s online issues will show, this is not necessarily a “friendly audience.”]

[[–> in short this question was actually addressed in the very first design theory work, TMLO, in 1984, so all along the arguments we are here addressing yet again are red herrings led away to strawmen soaked in ad hominems as we will see again below, and set alight to cloud, confuse, poison and polarise the atmosphere.]

>>Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.>>

6 –> KS is setting up his red herring and strawman version.

>>All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.>>

7 –> Having dodged the pivotal issue of dispersive forces like diffusion being asked to carry out constructive work resulting in organisation of something rich in FSCO/I, KS gives an irrelevant example of order emerging by mechanical necessity acting in the context of heat outflow, where the polar molecules of water will form ice crystals on being cooled enough. This very example is specifically addressed in TMLO, and I have already spoken to this and similar cases. [–> Let me add a link to my always linked note: here, on hurricanes, snowflakes and the like. In any case, Wicken and Orgel already gave an excellent answer [anticipation; these are from 1979 and 1973 . . . ] as cited below.]

8 –> By contrast, hear honest and serious remarks by Wicken and Orgel (which since 2010 have sat in the beginning of section D, IOSE intro-summary page, so KS either knows of or should know of this):

WICKEN, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. NB: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)] >>

ORGEL, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]>>

9 –> KS, of course, has presented to us a case of crystallisation, as though it is an answer to the matter at stake. At this point, given his obvious situation as a highly informed person, this is willful perpetuation of a misrepresentation, which has a short, sharp, blunt three-letter name that begins with L.

>>Note:
1. The entropy of A decreases when the water freezes.
2. The second law tells us that the entropy of C cannot decrease.
3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.
4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.>>

10 –> In these notes, KS ducks his intellectual responsibility to address just what happens with B so that the overall entropy is increased. Namely, that precisely because of the rise in accessible energy, the number of ways for energy and mass to be arranged in B at micro level increases so far as to exceed the loss in the number of ways in A.

11 –> And, the exact same diffusive and dissipative forces already described strongly push B towards the clusters of states with the highest statistical weights, and away from those clusters with very low statistical weights. So, by importing energy, B’s entropy increases, and by enough that the net result is, at minimum, a constant entropy for the system as a whole.

12 –> It is the statistical reasoning linked to this, and the onward link to the information involved, thence to the information involved in functionally specific complex organisation, thence the need for constructive work rather than expecting diffusion and the like to do this spontaneously for “free,” that are pivotal to the case that KS has here distracted from and misrepresented. (Cf my microjets-in-a-vat thought exercise case study here, which has been around since, what was it, 2008 or so? And even if KS was ignorant of that, he had the real import of Hoyle’s argument, a contrast between what chaotic forces do and what planned constructive work does, as well as access to the points made by Orgel and Wicken. Likewise we can compare what Shapiro and Orgel said in their exchange on OOL. Golf balls do not in our experience play themselves around golf courses by lucky clusters of natural forces. If-pigs-could-fly scenarios are nonsense. And the example of a rock avalanche spontaneously forming “Welcome to Wales” at the border of Wales has been around for a long time too. All of these highlight the difference in capability between blind chance and mechanical necessity on the one hand, and intelligently directed constructive work on the other.)

>>The second law demands that compensation must happen. If you deny compensation, you deny the second law.>>

13 –> A sad excuse to play at red herrings and strawmen.

>>Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!>>

14 –> Here comes the smoke of burning, ad hominem-soaked strawmen, now.

>>It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper. >>

15 –> Throwing more ad hominems on the fire, to make even more polarisation, clouding of issues and poisoning of the atmosphere.
____________

Since obviously KS will not read the above much less follow up a link, I will follow this with the linked case study on the creation of hurricanes and snowflakes by mechanical necessity and blind chance for the onlooker.

KF

42.
kairosfocus says:

FYI & FTR: KS has tried to substitute the strawman argument of the freezing of water for explaining constructive work, on the propagandistic pretence that the mechanical necessity that allows ice to form once sufficient latent heat has been extracted answers to the complex functional organisation shown by, say, a jumbo jet or a string of text or the nanomachines in a living cell or the coded information in DNA. He refuses therefore to acknowledge that mechanical necessity only leads to natural regularity, and cannot explain high contingency in itself. (Even in the case of “chaos,” what accounts for dramatic differences in runs is that there are small initial differences that are drastically amplified by the nonlinear dynamics.) High contingency is required to exhibit the information storage or configurability required for construction of functionally specific complex entities. High contingency is, on massive empirical observation, only caused by chance or choice; and the manifestation of FSCO/I, not credibly reachable by chance on the gamut of the observed cosmos or at any rate the solar system, is in our experience only caused by choice. The analysis on the clusters of microstates shows why that is: the deep isolation and rarity of relevant islands of function make them utterly dominated by the overwhelming bulk of the space of possibilities, gibberish irrelevant to the functions in question. Blind sampling, or blindly selected search mechanisms, on the gamut of the solar system will with all but certainty only pick up that bulk.

Clipping, partly in anticipation of other linked objections:

________________

>>A tropical cyclone is by and large shaped by convective and Coriolis forces acting on a planetary scale over a warm tropical ocean whose surface waters are at or above about 80 degrees F. That is, it is a matter of chance + necessity leading to order under appropriate boundary conditions, rather than to complex, functionally specified information.

Similarly, the hexagonal, crystalline symmetry of snowflakes is driven by the implications of the electrical polarisation in the H-O-H (water) molecule — which is linked to its kinked geometry, and resulting hexagonal close packing. [–> mechanical necessity leading to order under circumstances where not sufficient energy is present at micro level to disrupt it] Their many, varied shapes are controlled by the specific micro-conditions of the atmosphere along the path travelled by the crystal as it forms in a cloud. [–> complexity shaped by chance]

As the just linked summarises [in a 1980’s era, pre-design movement Creationist context] and illustrates by apt photographic examples [which is a big part of why it is linked]:

Hallet and Mason2. . . found that water molecules are preferentially incorporated into the lattice structure of ice crystals as a function of temperature. Molecules from the surrounding vapor that land on a growing crystal migrate over its surface and are fixed to either the axial [tending to lead to plate- or star-shaped crystals] or basal planes [tending to lead to columnar or needle-like crystals] depending upon four temperature conditions. For example, snow crystals will grow lengthwise to form long, thin needles and columns . . . when the temperature is between about -3°C and -8°C. When the temperature is between about -8°C and -25°C, plate-like crystals will form . . . Beautiful stellar and dendritic crystals form at about -15°C. In addition, the relative humidity of the air and the presence of supercooled liquid cloud droplets will cause secondary growth phenomena known as riming and dendritic growth. [NB: this is what leads to the most elaborate shapes.] The small, dark spheres attached to the edges of the plate[-type crystal] in Figure 5 are cloud droplets that were collected and attached to the snow crystal as rime as the crystal fell through these droplets on its way to the earth’s surface. The dendritic and feathery edges . . . are produced by the rapid growth of snow crystals in a high-humidity environment . . . . The modern explanation of the hexagonal symmetry of snow crystals is that a snow crystal is a macroscopic, outward manifestation of the internal arrangement of the molecules in ice. The molecules form an internal pattern of lowest free energy, one that possesses high structural symmetry. For the water molecule this is a type of symmetry called hexagonal close pack.

[“Microscopic Masterpieces: Discovering Design in Snow Crystals,” Larry Vardiman, ICR, 1986. (Note, too, from the context of the above excerpts, on how “design” and “creation” are rather hastily inferred to in this 1980’s era Creationist article; a jarringly different frame of thought from the far more cautious, empirical, step by step explanatory filter process and careful distinctions developed by TBO and other design theorists. Subsequently, many Creationists have moved towards the explanatory filter approach pioneered by the design thinkers. This article — from Answers in Genesis’ Technical Journal — on the peacock’s tail is an excellent example, and a telling complement to the debates on the bacterial flagellum. Notice, in particular, how it integrates the aesthetic impact issue that is ever so compelling intuitively with the underlying issue of organised complexity to get to the aesthetics.) Cf also an AMS article here. [–> for economy given the limited UD budget per comment some links will not be added, go to the linked point in my note]]

A snowflake may indeed be (a) complex in external shape [reflecting random conditions along its path of formation] and (b) orderly in underlying hexagonal symmetrical structure [reflecting the close-packing molecular forces at work], but it simply does not encode functionally specific information. Its form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces].

The tendency to wish to use the snowflake as a claimed counter-example alleged to undermine the coherence of the CSI concept thus plainly reflects a basic confusion between two associated but quite distinct features of this phenomenon:

(a) external shape — driven by random forces and yielding complexity [BTW, this is in theory possibly useful for encoding information, but it is probably impractical!]; and,

(b) underlying hexagonal crystalline structure — driven by mechanical forces and yielding simple, repetitive, predictable order. [This is not useful for encoding at all . . .] Of course, other kinds of naturally formed crystals reflect the same balance of forces and tend to have a simple basic structure with a potentially complex external shape, especially if we have an agglomeration of in effect “sub-crystals” in the overall observed structure.

In short, a snowflake is fundamentally a crystal, not an aperiodic and functionally specified information-bearing structure serving as an integral component of an organised, complex information-processing system, such as DNA or protein macromolecules manifestly are. >>

________________

So, from Wicken and Orgel, and from the difference between order and organisation, it is quite evident that what accounts for randomness, regularities and order is not what accounts for FSCO/I. The only thing that does so is design, on wide and exceptionless observation and experience.

Of course, KS’s other point is that bodies giving off energy lose entropy, so there. This simply fails to address the relevant body in the cases relevant to OOL and origin of FSCO/I generally, which are IMPORTING energy. In the case of OOL, there is good reason to see that importation of relevant raw energy and materials, such as abundantly present H2O molecules, should tend rather to break up molecules and also to form interfering cross-reactions.

Certainly, that is part of why Miller and Urey trapped out the formed chemicals in their apparatus, an apparatus that was critically dependent on a reducing atmosphere which has for decades not been credible as an early-earth atmosphere.

We have already repeatedly addressed the substitution of body A for body B in the thermodynamic analysis of what happens when a body IMPORTS heat etc., but it will not hurt to give it one more time, to rivet home that constructive work requires a heat engine, body B in the following:

For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics [MIR, 1974]), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but 63,504 ways [C(10,5)^2] to have 5 black and 5 white marbles in each row, and the 6:4 and 4:6 cases are nearly as numerous, at 44,100 each. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards the dominant cluster near the 5:5 peak and “forgets” the initial state.

This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more.
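The marble counts just cited can be verified directly; a minimal Python sketch of the two-row model described above:

```python
from math import comb

# Two rows of 10 cells: 10 black and 10 white marbles in all.
# A macrostate with b black marbles in the top row can be realised in
# C(10,b) ways for the top row times C(10,10-b) = C(10,b) ways for the
# bottom row, i.e. C(10,b)^2 microstates.
def weight(b: int) -> int:
    return comb(10, b) ** 2

print(weight(10))            # 1     -- the fully sorted initial state
print(weight(5))             # 63504 -- the "over 63,000" 5:5 peak
print(weight(6), weight(4))  # 44100 44100 -- the "44,000+" 6:4 and 4:6 cases
```

Random agitation thus samples a space dominated by the near-5:5 cluster, which is why the sorted initial state is rapidly “forgotten.”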

The basic reason for this does not go away if the system is opened up to energy and mass flows. And the point that rare, simply and specifically describable states will seldom be revisited or found becomes, for enough complexity [500 – 1,000 bits], the stronger point that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.
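The “500 – 1,000 bits” threshold can be given rough arithmetic teeth. The resource figures in this sketch are assumed round numbers of my own (a solar-system atom count, a generous per-atom event rate, and the ~10^17 s timescale mentioned above), not figures from the thread:

```python
# Rough sketch: how much of a 500-bit configuration space could the
# solar system sample? All resource numbers are illustrative assumptions.
config_space = 2 ** 500        # ~3.3e150 configurations
atoms        = 10 ** 57        # assumed atom count of the solar system
events_per_s = 10 ** 45        # generous upper bound on events per atom
seconds      = 10 ** 17        # rough age-of-cosmos timescale

max_samples = atoms * events_per_s * seconds  # 1e119 "searches"
fraction    = max_samples / config_space
print(f"fraction sampled: {fraction:.1e}")    # a vanishingly small slice
```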

RS’ point that there are states that can be locked away from interaction so that it is reasonable to partition entropy accounting, is also quite useful.

My own emphasis is that we need to see the difference between what diffusion like factors/ forces will strongly tend to do and what produces shaft work thence constructive work ending in FSCO/I.

Let me try a second diagram using textual features:

Heat transfer in Isolated system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2, Th in a common form

Heat engine, leaving off the isolation of the whole:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also,

B’ –> d’Q_b –> C, heat disposal to a heat sink

Where, of course, the heat inflow d’Q_a will equal the sum of the rise in internal energy of B, dU_b, the work done on D, dW, and the heat transferred onwards to C, d’Q_b.
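The bookkeeping just stated can be checked with illustrative numbers; in this sketch the temperatures and heat flows are made up, and B’ is taken to run in a steady state (dU_b = 0):

```python
# Heat-engine bookkeeping for A --> B' --> D (shaft work), B' --> C (sink).
# All numbers are illustrative assumptions, not data from the discussion.
T_a, T_c = 600.0, 300.0   # K: source A and sink C temperatures
Q_a = 1000.0              # J drawn from A
W   = 400.0               # J of shaft work done on D
Q_b = Q_a - W             # J rejected to C, since dU_b = 0 at steady state

# Total entropy change: A loses Q_a at T_a, sink C gains Q_b at T_c.
dS_total = -Q_a / T_a + Q_b / T_c
print(dS_total)    # about +1/3 J/K: positive, so the second law is respected

# The Carnot bound caps the extractable work for these temperatures:
eta_carnot = 1.0 - T_c / T_a
print(eta_carnot)  # 0.5, i.e. at most 500 J of work per 1000 J drawn from A
```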

The pivotal questions are: the performance of constructive work by dW; the possibility that this results in FSCO/I; the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work; and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch.

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos.

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why.

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.

Constructive, counterflow work resulting in FSCO/I has only one empirically warranted explanation, design.

KF

43.
kairosfocus says:

Oops, forgot to link to the section of the note.

44.
kairosfocus says:

F/N 2: It is appropriate to here clip the microjets and nanobots thought exercise from here in my note, as this shows why diffusion strongly tends to be one-way and how clumping and organisation by constructive work are best explained on design, with a spot of context:

___________

>>6] It is worth pausing to now introduce a thought (scenario) experiment that helps underscore the point, by scaling down to essentially molecular size the tornado-in-a-junkyard-forms-a-jet example raised by Hoyle and mentioned by Dawkins with respect in the just linked excerpt in Section A above. Then, based on (a) the known behaviour of molecules and quasi-molecules through Brownian-type motion (which, recall, was Einstein’s Archimedean point for empirically demonstrating the reality of atoms), and (b) the also known requirement of quite precise configurations to get to a flyable micro-jet, we may (c) find a deeper understanding of what is at stake in the origin of life question:

NANOBOTS & MICRO-JETS THOUGHT EXPT:

i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard, which it is logically and physically possible could do the same; but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet, i.e. we see here the logic of the 2nd Law of Thermodynamics, statistical thermodynamics form, at work. [Intuitively, since functional configurations are rather isolated in the space of possible configurations, we are maximally likely to exhaust available probabilistic resources long before arriving at such a functional configuration or “island” of such configurations (which would be required before hill-climbing through competitive functional selection, a la Darwinian natural selection, could take over . . . ), if we start from an arbitrary initial configuration and proceed by a random walk.])

ii] Now, let us shrink the Hoylean example, to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e. they are of about micron scale [for convenience] and act as “large molecules.” (Cf. “materialism-leaning ‘prof’ Wiki’s” blowing-up of Brownian motion to macro-scale by thought expt, here; indeed, this sort of scaling-up thought experiment was just what the late, great Sir Fred was doing in his original discussion of 747’s.) Let’s say there are about a million of them, some the same, some different, etc. In principle, possible: a key criterion for a successful thought experiment. Next, do the same for a car, a boat and a submarine, etc.

iii] In several vats of “a convenient fluid,” each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts; so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation.

iv] In the control vat, we simply leave nature to its course.

Q: Will a car, a boat, a sub or a jet, etc., or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.]

ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder.

Q: Why?

A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or, even more so, functionally configured flyable jet ones . . . [–> some dead links, I hate that . . . ]

v] Now, pour a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is, of course, work, and it replicates bonding at random. “Work” is done when forces move their points of application along their lines of action. Thus in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement depending on the cluster of forces that have done the work. This of course reflects the link between “work” in the physical sense and “work” in the economic sense; thence, also the energy intensity of an economy with a given state of technology: energy per unit GDP tends to cluster tightly while a given state of technology and general level of economic activity prevail. (Current estimate for Montserrat: 1.6 lbs CO2 emitted per EC$1 of GDP, reflecting an energy intensity of 6 MJ/EC$, and the observation that burning one US gallon of gasoline or diesel emits about 20 lbs of that gas.)]

Q: After a time, will we be likely to get a flyable nano jet?

A: Overwhelmingly, on probability, no. (For, the vat has ~ (10^6)^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across, say, 1 cm or so for an assembled jet etc., or even just a clumped-together cluster of micro-parts. [A 1 cm cube has in it (10^4)^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.] But also, since the configuration is constrained, i.e. the mass in the microjet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state. [–> this undoes a lot of the effect of diffusion] (That is, we have here used the nanobots to essentially undo diffusion of the micro-jet parts.) The resulting constraint on spatial distribution of the parts has reduced their entropy of configuration. For, where W is the number of ways that the components may be arranged consistent with an observable macrostate, and since by Boltzmann, entropy, S = k ln W, we see that W has fallen, so S too falls on moving from the scattered to the clumped state. [–> I add, if you want to factor in a case where, due to energy barriers and the like, states will not be accessible, or equally accessible, we can move to the Gibbs formalism, a weighted sum over the p_i ln p_i terms, as is used in the Robertson derivation used elsewhere in the note, e.g. here on. That of course directly shows the link to information per the metric of average info per symbol under similar circumstances.]
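The configurational entropy drop described in v] can be sketched numerically. The dilute approximation below (W ~ cells^N for N distinguishable parts) is my simplification of the cell-counting in the excerpt, and the part count is the “about a million” assumed there:

```python
import math

k = 1.380649e-23            # J/K, Boltzmann constant
N = 10 ** 6                 # "about a million" micro-parts

cells_scattered = 10 ** 18  # one-micron cells in the ~1 m^3 vat
cells_clumped   = 10 ** 12  # one-micron cells in a ~1 cm^3 clump

# Dilute approximation: W ~ cells^N for N distinguishable parts, so
# dS = k ln(W_clumped / W_scattered) = N k ln(cells_clumped / cells_scattered)
dS_clump = N * k * math.log(cells_clumped / cells_scattered)
print(f"dS_clump = {dS_clump:.2e} J/K")  # negative: confinement cuts W sharply
```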

vi] For this vat, next remove the random cluster nanobots, and send in the jet assembler nanobots. These recognise the clumped parts, and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found, even further. [–> this is also a species of the concept of undoing diffusion]) A flyable jet results — a macrostate with a much smaller statistical weight of microstates. We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus we see that the number of microstates accessible due to the change, [a] scattered –> clumped and now [b] onward –> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann’s result S = k ln W, we also have seen that the entropy has fallen in succession as we moved from one state to the next, involving a fall in S on clumping, and a further fall on configuring to a functional state; dS_tot = dS_clump + dS_config. [Of course, to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But, it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.]

vii] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from a random “macromolecule” to a configured functional one. That is:

dS_tot (direct) = dS_clump + dS_config.
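Since entropy here reduces to k ln W, the state-function additivity dS_tot (direct) = dS_clump + dS_config is just additivity of logarithms. A small sketch, with made-up statistical weights for the three macrostates:

```python
import math

k = 1.380649e-23    # J/K, Boltzmann constant
# Illustrative, made-up statistical weights (not computed from the vat model):
W_scattered = 1e18
W_clumped   = 1e12
W_flyable   = 1e3   # vastly fewer flyable than merely clumped configurations

dS_clump  = k * math.log(W_clumped / W_scattered)
dS_config = k * math.log(W_flyable / W_clumped)
dS_direct = k * math.log(W_flyable / W_scattered)

# Direct assembly equals clumping followed by configuring:
print(math.isclose(dS_direct, dS_clump + dS_config))  # True
```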

viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct microjet assembly nanobots, but in each case we let the control programs vary at random a few bits at a time; say, hit them with noise bits generated by a process tied to a Zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times.

Q: Given the complexity of the relevant software, will we be likely to for instance come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.)

My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]

ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/ origin of life issue?

x] The micro-jet is of course an energy converting device which exhibits FSCI, and we see from this thought expt why it is that it is utterly improbable on the same grounds as we base the statistical view of the 2nd law of thermodynamics, that it should originate spontaneously by chance and necessity only, without agency.

xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO’s term chemical work, fine [–> that triggered the debate in 2008 IIRC]], and configuring work can be identified and applied to the shift in entropy through the same S = k ln W equation. For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain — constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, through S = k ln W and Brillouin, TBO link entropy to information, viewed as “negentropy,” citing as well Yockey-Wicken’s work and noting their similar definition of information; i.e. this is a natural outcome of the OOL work in the early 1980’s, not a “suspect innovation” of the design thinkers in particular. BTW, the concept of complex, specified information is also similarly a product of the work in the OOL field at that time; it is not at all a “suspect innovation” devised by Mr Dembski et al, though of course he has provided a mathematical model for it. [I have also just above pointed to Robertson, on why this link from entropy to information makes sense — and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.]

7] We can therefore see the cogency of Mathematician, Granville Sewell’s observations, here. Excerpting:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology. Cf as well his other remarks here and here.]>>

___________

The bottom line is simple: it is not reasonable to expect diffusion and related patterns to substitute for constructive work resulting not in mere order but in FSCO/I.

The pretence that the freezing of a puddle full of water or the like answers to this, is a red herring led away to a strawman.

KF

45.
Gordon Davisson says:

KF:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it:

Isol System:

| |(A, at Th, hot) –> d’Q, heat –> (B, at Tc, cold)| |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for the system as a whole, dStot = dSa + dSb >/= 0, as Th > Tc . . . “Eqn” A.2

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.
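The clipped steps a] – e] can be checked with numbers; the temperature and heat-flow values in this sketch are assumed, illustrative figures:

```python
# Clausius' isolated-system example: A at T_hot transfers d'Q to B at T_cold.
# (Temperatures and heat flow are illustrative assumptions.)
T_hot, T_cold = 400.0, 300.0  # K
dQ = 120.0                    # J passed from A to B

dS_a = -dQ / T_hot    # A loses heat: its entropy falls by 0.3 J/K
dS_b = +dQ / T_cold   # B gains heat: its entropy rises by 0.4 J/K
dS_total = dS_a + dS_b
print(dS_total)  # about +0.1 J/K: B's rise exceeds A's fall, per "Eqn" A.2
```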

In the first place, you are making a basic error in logic here: giving an example of something not happening does not show that it cannot happen (and certainly doesn’t show that it would violate the second law of thermodynamics). If you want to argue that compensation is impossible, giving examples of it not happening is not adequate to support your claim. That’s like saying “it’s not raining today, therefore rain is impossible.” It’s just nonsense.

This particular fallacy is disturbingly common in these discussions. Your nanobots & micro-jet argument gives an example of organization not occurring without intelligent input, but does nothing to support the claim that organization requires intelligent input. Sewell’s paper gives examples of X-entropy not decreasing (unless certain boundary conditions are met), but does nothing to support the claim that a decrease requires that those boundary conditions are met.

In the second place, your example actually does show compensation taking place. The subsystem A undergoes an entropy decrease, compensated by the larger increase in B. This is what is meant by compensation. This may not be what you mean by compensation, but in that case you’re using a nonstandard definition. Compensation is not about probability or order or organization except as far as they’re related to entropy. If what you’re talking about is related to entropy, then compensation will be relevant to it (although to understand how it’s relevant, you have to understand the relation between entropy and whatever you’re interested in). If entropy isn’t related to what you’re interested in, then the second law isn’t relevant either.

(Sewell’s argument against compensation is based on trying to apply it to probability instead of entropy; probability is related to entropy, but Sewell doesn’t understand the connection, so the result is a hopeless muddle.)

46.
keiths says:

KF,

You can’t make up in volume for what your argument lacks in substance.

If you could refute my simple 4-step argument, you would. You would point out the exact statement you disagree with, and you would explain why you thought it was wrong.

You can’t do that, so instead you spam the thread with thousands of words, hoping that the onlookers will assume that there’s a refutation in there somewhere.

There isn’t, and the onlookers know it.

47.
kairosfocus says:

Re KS: Simply compare who has taken time to deal with the matter in context (starting with a point by point refutation backed up by supportive materials that KS is trying to pretend does not exist and/or to push off as “spam” — proof of his bad faith if any was needed) and who is trying to whistle by the graveyard in the dark. This duppy leans on the fence and says, BOOO! KF

48.
kairosfocus says:

GD:

Kindly stop putting words in my mouth that don’t belong there. It is KS who has tried to suggest (frankly, at this stage it is a willful strawman distortion) that GS and I are trying to or say something equivalent to trying to overthrow the second law.

What we have actually said is something else: to practical certainty, given the underlying statistical basis for the law, diffusion and similar processes cannot credibly perform constructive work issuing in FSCO/I.

Just as we do not see golf balls, by lucky collocations of forces, spontaneously played through 18 holes of golf, to give Shapiro’s example. If you think otherwise, in the teeth of the evidence and reasoning that establish the second law, just kindly give us an actually observed and recorded example: ____________ , and tell us when the Nobel Prize was awarded for the success: _________ .

That when a body of water is cooled, the cooling process manifests a rise of entropy elsewhere that exceeds the loss, does not deflect the force of the point: diffusive factors for heat etc. overwhelmingly tend to move systems to clusters of microstates with heavier statistical weight, and they are simply not credible as the cause of constructive work ending in creation of FSCO/I. For essentially the same reason, we have no good basis to expect that a solar system full of rock avalanches over the past 10^17 s would even once cause rocks to fall into a pattern spelling out this post. This is logically possible, but so lost in the space of possibilities compared to the dominant clusters of outcomes that it is practically unobservable. Likewise, it is logically possible that the post you are reading was produced by noise on the internet, but this too is simply unobservable in empirical terms, for the same reason.

Now, finally, the 2nd law is a case of an inductive generalisation that, on observations and related analysis, forbids certain outcomes. Like all similar laws it is provisional, but on evidence it is highly reliable. The force of that underlying analysis, as the case above illustrates, is that due to relative statistical weight, diffusion will predictably and reliably not, in our observation, assemble a jet or any comparable thing. But assembly robots, properly instructed, can do so. Or, scaling back up to Hoyle’s example, a tornado is maximally unlikely to build a jet, but intelligently directed constructive work routinely does so. The reason being relative statistical weights of clusters of relevant states in the space of possibilities.

If you can give an actually credibly observed case of chaotic forces of blind chance and mechanical necessity such as diffusion or tornadoes performing constructive work issuing in FSCO/I (as opposed to order) kindly give it: __________

KF

49.
Timaeus says:

I find it interesting that when someone with a Ph.D. in physics, who has worked for NASA, who has many peer-reviewed papers, etc., writes a column (the one above) on thermodynamics that mostly sides with Granville Sewell, those who have been criticizing Sewell continue to engage Sewell (whom they have complained is a non-physicist and does not understand thermodynamics, entropy, etc.), but remain silent in response to the arguments made by the physicist (whom one would suppose to be reasonably well-trained in these subjects). I wonder what the reason for this silence is.

But I guess I notice odd things.

50.
keiths says:

Timaeus,

Why do you find that odd? Sheldon is defending Sewell and his lamentable paper, so of course Sewell’s ideas are the focus.

51.
Timaeus says:

It’s odd because you and Elizabeth and other detractors are acting as if:

(1) The name signed to the column above is Granville Sewell rather than Robert Sheldon;

(2) Robert Sheldon’s extensive remarks (probably a couple of thousand words in the op-ed, and several hundred more in comment 24), which include both factual statements about thermodynamics and arguments regarding their application, *add nothing to the discussion* and therefore can be ignored.

52.
Gordon Davisson says:

KF:

GD:

Kindly stop putting words in my mouth that don’t belong there. It is KS who has tried to suggest (frankly, at this stage it is a willful strawman distortion) that GS and I are trying to or say something equivalent to trying to overthrow the second law.

I certainly don’t intend to misrepresent your position, and I don’t see where I’ve done anything like what you describe above. I did say that you gave an example of compensation occurring, but A) you clearly did, and B) this is not in any way “equivalent to trying to overthrow the second law”. Compensation is an integral part of the second law, not a challenge to it.

Honestly, I don’t entirely understand what your position is, so I haven’t tried to represent your position, let alone misrepresent it. While you post in great volume, you could really stand to work on your clarity. I’ll ask some questions below to try to get you to clarify some relevant parts of it.

I also don’t think you’re bothering to pay attention to my and/or Keith’s points. For instance, your massive “point-by-point refutation” didn’t actually engage Keith’s point at all — he’s making the same basic point I am, that the second law allows local entropy decreases when they’re coupled with compensating entropy increases elsewhere; your response was all about organization and related topics, not entropy. You’re not even talking about the same thing.

Speaking of which:

What we have actually said is something else: to practical certainty, given the underlying statistical basis for the law, diffusion and similar processes cannot credibly perform constructive work issuing in FSCO/I.

Just as we do not see golf balls, by lucky collocations of forces, spontaneously played through 18 holes of golf, to give Shapiro’s example. If you think otherwise in the teeth of the evidence and reasoning that establishes the second law, just kindly give us an actually observed and recorded example: ____________ , and tell us when the Nobel Prize was awarded for the success: _________ .

Speaking of putting words in someone else’s mouth… where did I say anything at all about FSCO/I or golf?

Ok, now that I’ve ranted about you not understanding our points, let me try to get you to clarify yours:

That when a body of water is cooled, the cooling process manifests a rise of entropy elsewhere that exceeds the loss, does not deflect the force of the point that diffusive-like factors for heat etc. overwhelmingly tend to move systems to clusters of microstates that have heavier statistical weight; they are simply not credible as the cause of constructive work ending in creation of FSCO/I.

If I’m reading this right, you’re agreeing that compensation happens (“…a rise of entropy elsewhere that exceeds the loss…”), but denying that this can lead to the production of FSCO/I.

A) Is that correct?
B) If it’s not, and you deny that compensation happens, how do you reconcile that with it being an example of an entropy decrease in one place compensated by an increase elsewhere?
C) If it’s not, and you aren’t denying that this can lead to the production of FSCO/I, then… I’m massively confused; please try to explain again.

Also, if you do claim this (whatever “this” is) cannot lead to the production of FSCO/I, do you claim that this is impossible because it would violate the second law of thermodynamics, or for other reasons?

While I’m at it:

h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e. FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including “do-always” looping!)]:

|| (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –> d’W [work done on say D] + d’Qo –> (C, sink at Tc) ||

i] A’s entropy: dSa >/= – d’Qi/Th

j] C’s entropy: dSc >/= + d’Qo/Tc

k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines. [–> Notice, I have addressed the compensation issue all along.]
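The entropy bookkeeping in steps i] and j], and the compensation claim in k], can be checked with a short numerical sketch (the temperatures and heat quantities below are illustrative assumptions, not values from the discussion):

```python
# Heat engine per the diagram: source A at Th supplies heat Qi, shaft work W
# is done on D, and waste heat Qo = Qi - W goes to sink C at Tc (first law).
# All numeric values are illustrative assumptions.
Th, Tc = 500.0, 300.0   # source and sink temperatures, kelvin (assumed)
Qi, W = 1000.0, 300.0   # heat drawn from A and shaft work, joules (assumed)
Qo = Qi - W             # waste heat rejected to the sink

dS_A = -Qi / Th         # i] entropy given up by the source A
dS_C = +Qo / Tc         # j] entropy taken up by the sink C

dS_total = dS_A + dS_C  # k] the sink's rise compensates the source's loss
assert dS_total >= 0    # consistent with the second law
print(dS_A, dS_C, dS_total)
```

Note that the assumed W is below the Carnot limit Qi·(1 − Tc/Th) = 400 J; pushing W above that limit would make dS_total negative, which is exactly what the second law forbids.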

Are you claiming that the conversion of heat to work (plus lower-temperature waste heat) can only occur via FSCI (or FSCO/I or whatever)? If so, do you claim that this is a consequence of the second law, or for other reasons?

53.
keiths says:

Timaeus,

You find it odd that Lizzie and I aren’t engaging Robert Sheldon’s OP, but you are reading too much into that, as you tend to do. Everyone else in the thread is also ignoring the OP, including Granville, kairosfocus, and CS3.

Granville jumped in with the first comment, said he was having “a little trouble” understanding the OP, and then proceeded to tell everyone that

Robert’s comments are from the point of view of statistical thermodynamics… I am still trying to understand the details of his post myself.

In any case, I want to emphasize that the main points in my papers do not really require any understanding of statistical thermodynamics, or even (in the case of the Biocomplexity paper especially) PDEs or mathematics in general. My points are MUCH simpler!

In other words, he was basically advising all of us not to spend any time pondering Robert’s post, since it wasn’t necessary! You can hardly blame us for taking Granville’s advice.

I did read through the OP, but didn’t find anything particularly comment-worthy, so I left it alone.

However, today you pointed me to Robert’s comment #24, so I took a look. Boy, was that an eye-opener! Thank you for pointing it out.

Robert actually makes this claim:

Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.

Holy crap! He’s saying that life itself, not just evolution, violates the second law!

The irony, Timaeus, is that you tried so hard in the other thread to persuade me that Granville’s paper wasn’t claiming that evolution violated the second law. Your argument was that even if Granville personally believed it, he wasn’t stating it in the paper, and so the paper deserved to be taken seriously.

So today you asked Lizzie and me to take a closer look at what Robert wrote, in both the OP and his comment. I did, and I found something even more outlandish than the evolution/second law claim.

Robert has out-Granvilled Granville!

Shall we inform the Nobel committee that life violates the second law of thermodynamics, according to Robert Sheldon?

54.
Timaeus says:

keiths:

Regarding the statement you’ve asked about, I presumed that Sheldon meant that the *spontaneous emergence of life from non-life* would involve such a violation, not that *the continued existence of life* does so.

Be that as it may, let’s bracket out that statement from Comment 24 for the moment. Let’s just concentrate on the op-ed. Sheldon argues that Sewell’s argument is, if not perfectly well formulated, at least substantially correct in most of its statements about thermodynamics and entropy. Sheldon gives some reasons for this opinion of his. I would like it if you and/or Elizabeth and/or anyone else would comment directly on the op-ed, saying what you agree with or disagree with in it.

Obviously, you disagree with Sheldon’s *conclusion* — that Sewell’s article is not crap science, but fairly good science — but I already know that. I want to hear where you think Sheldon is right, and where you think he is wrong. As Elizabeth has pointed out, it is not the conclusions that matter so much (in determining the value of a scientific writing) as the reasoning. Since Sheldon is a highly trained physicist, it seems likely that he would as a general rule employ sound reasoning in matters of physics. It is also likely that he would not make a gross error in his account of the use of the term “entropy” or his understanding of the Second Law. So I’m curious to know if you think he has made any errors of either definition or reasoning in his op-ed.

55.
kairosfocus says:

GD:

Please. I have pointed out (and given supportive details above) that:

1: GS and I have both argued that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also noting that Boltzmann analysis is different from Gibbs analysis, which explicitly reckons with the degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS’s way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view of entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicated about its microstate, given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) randomness’ typical outputs: fjiwhghjwuo . . . or (III) what mechanical necessity would force: sksksksksk . . . )

2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point.

3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that will cause a rise in entropy elsewhere (e.g. in B or in B”, C and D) that by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere that will equal or exceed the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING THEN HAS ERECTED A STRAWMAN THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.)

4: In particular, KS has held up a — corrected, but this has been ignored and further misrepresented to the point where it is plainly a willful tactic — case where, on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out that, ever since Wicken and Orgel, it has been recognised that order is not to be confused with organisation, which Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life.

5: I have further pointed out that in design thought the difference has been underscored since Thaxton et al in The Mystery of Life’s Origin, the very first design theory work. That is, since 1984 on public and easily accessible record. That is, we must address: chance based randomness, necessity based order, and functionally specific complex organisation and associated information that in our uniform, repeated experience of its creation is only sourced in choice contingency, aka design.

6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference on statistical weight of relevant functional and non-functional microstates, with a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work is operative.

If, however, you come along and assume that KS has grounds for what he says and we do not, then you will be likely indeed to be misled.

But then, that is a longstanding pattern faced by design theory: too many inclined to support Darwinism assume that ideologues who do not shun to stoop to willful, sustained misrepresentations are giving an accurate picture, e.g. “creationists in cheap tuxedos.”

Do I need to point out that willfully sustained misrepresentation in the face of easily accessible corrective facts is morally irresponsible, indeed a species of lying?

Now, I think several of your questions above have been answered. I will note that I have taken time to explicitly point out that from Thaxton et al on, design thinkers have been careful to make relevant distinctions. For instance, there are naturally occurring heat engines triggered by fluid dynamics, e.g. hurricanes and tornadoes — manifestations of order tracing to mechanical necessity, not organisation. The focal case is where constructive work leading to FSCO/I is the outcome, and the point is that on the relevant statistics, etc., this is not credibly produced by diffusion-like forces. Thus the use of scaling up and down in the jets examples, and the reference to Shapiro’s equivalent macro-level example, the golf ball that played itself through 18 holes with the aid of winds, earthquakes and the like.

In this context, it is not to put words in your mouth to point out that KS has erected and knocked over a distractive strawman by going off on a tangent about something no one disputes, and pretending that GS and I have argued things that say or imply that what he makes a song and dance about does not happen.

And in light of that misrepresentation, the correction that I have put up is actually responsive by exposing a strawman and emphasising what we actually HAVE said and argued, as opposed to what we have been willfully and irresponsibly, persistently misrepresented as.

I hope as well you will understand how such willful misrepresentation — an unfortunately demonstrably habitual pattern of argument by darwinist objectors to design for over a decade now — frustrates honest and civil discussion by clouding and poisoning the atmosphere.

Which is part of how such uncivil tactics work.

And when that is then compounded by the pretence that we are doing much the same when we protest, that simply makes matters worse.

If you are indeed an honest interlocutor, please change path.

KF

56.
kairosfocus says:

Re KS:

It is patent that KS will not be corrected nor will he cease from willful distortions, so I note as follows for record:

1–6: [Points 1–6 repeated verbatim from comment 55 above, for the record.]

I suggest that onlookers beware of KS’s tactics and their consequences.

KF

57.
keiths says:

kairosfocus:

I suggest that onlookers beware of KS’s tactics and their consequences.

KF,

The onlookers are wondering why you can’t locate a flaw in my 4-step argument.

58.
keiths says:

Timaeus:

Since Sheldon is a highly trained physicist, it seems likely that he would as a general rule employ sound reasoning in matters of physics.

Timaeus,

Your obsession with credentials is showing again.

Do you think Sheldon’s PhD outweighs his bizarre claim that life violates the second law?

If you asked 100 highly-trained physicists whether they think that life violates the second law, what do you think they would say?

Do their PhDs outweigh Sheldon’s? How does the credential calculus work?

59.
kairosfocus says:

T:

Thanks for a thoughtful intervention.

I draw your attention to the following summary, in which I highlight where I have taken up in brief some of RS’s themes above:

1: GS and I have both argued — cf. here on — that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also noting that Boltzmann analysis is different from Gibbs analysis, which explicitly reckons with the degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS’s way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view of entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicated about its microstate, given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) randomness’ typical outputs: fjiwhghjwuo . . . or (III) what mechanical necessity would force: sksksksksk . . . )

2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point.

3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that will cause a rise in entropy elsewhere (e.g. in B or in B”, C and D) that by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere that will equal or exceed the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING THEN HAS ERECTED A STRAWMAN THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.)

4: In particular, KS has held up a — corrected, but this has been ignored and further misrepresented to the point where it is plainly a willful tactic — case where, on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out that, ever since Wicken and Orgel — cf. point 8 in the just linked — it has been recognised that order is not to be confused with organisation, which Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life.

5: I have further pointed out that in design thought the difference between order and organisation, and the divergent empirically grounded causes of both, has been underscored since Thaxton et al in The Mystery of Life’s Origin, the very first design theory work. That is, since 1984 on public and easily accessible record. That is, we must address: chance based randomness, necessity based order, and functionally specific complex organisation and associated information that in our uniform, repeated experience of its creation is only sourced in choice contingency, aka design. (This is of course the context in which the design inference explanatory filter — yet another massively, willfully strawmannised point — is grounded.)

6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference on statistical weight of relevant functional and non-functional microstates, with a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work is operative.

Now, going beyond, I see that there is a point where I do need to bring out a disagreement with RS:

KS: today you pointed me to Robert’s comment #24, so I took a look. Boy, was that an eye-opener! Thank you for pointing it out.

Robert actually makes this claim:

[RS:] Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.

[KS, vulgarity deleted:] He’s saying that life itself, not just evolution, violates the second law!

The context is:

[RS:] Well, the theorems for conservation of entropy all come out of thermodynamics. What Boltzmann did, was to show how this could be converted to statistical mechanics. Thus the peculiar “ordering” of atoms has all the same constant properties as the thermal physics of collections of atoms. This was the power of the equation.

We may not have a conversion constant for other forms of ordering, but the existence of these constants for noble gas atom ordering strongly suggests that other forms of ordering are also conserved.

This provided a beachhead into the sorts of ordering that Granville refers to, and we can fruitfully discuss the conservation of “Entropy-X”, even if we don’t know how to calculate it.

I tried to describe how far physicists had gone in calculating the entropy of complex systems, but in another sense, this is a red herring. That is, we almost never use the entropy in calculation, only the change in entropy. And in complicated systems [–> note qualifier], the change in entropy is a path-dependent function. [–> In particular, if something is performing programmed constructive work in the situation, things are not so simple and direct anymore.] Or to say it another way, it is dS/dx that is important, not S itself, or perhaps integral[(dS/dx) dx]. So for example, when a nitroglycerine molecule dissociates, the reordering of the chemical bonds releases energy, and the process is driven by the increase in entropy of the gas products over the molecular precursor. So it is the local dS/dx that drives the reaction so very quickly. [–> Injecting first law considerations, and the like we end up with a Gibbs free energy equivalent here, so this is quite valid.]

By analogy then, Granville doesn’t have to calculate the entropy of his jet airliner, simply the gradients in the entropy from junk-yard to airliner. Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. [–> because of programmed constructive work, often fed by chains of ATP molecules that push things way up the energy and entropy hill, paying for this elsewhere and of course dependent on a whole system of FSCO/I rich machines organised to carry out genetic info and regulation] And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics. [–> this I disagree with, on grounds similar to the point that was just made.]

There may be ways to get around this with long-range forces and correlations. [–> The ways around have to do with the obvious presence of not mere correlations but programmed constructive work with actual energy batteries used to drive it uphill] But then, all of statistical mechanics presupposes that there are no long-range correlations, so more than thermodynamics is lost if we invoke long-range forces.

That is, first, RS is giving an important qualifier and wider context, that KS neatly clipped off in his quote.

Second, RS has at minimum been sloppy in his wording, and as the wording appears, I disagree for reasons as annotated. The living cell is an automaton that is chock full of programmed operations and machinery organised to carry out things that would not spontaneously occur without such programming and a steady flow of ATP energy batteries.

In effect we have something like Maxwell’s demon on steroids at work here [the system is obviously programmed to “know” what to expect and how to use that to perform work that is otherwise not reasonable in a system that does not have that degree of organisation], and while actually calculating the entropy numbers would be very hard indeed, there is no serious reason to believe that the informationally directed work of a system violates the overall degradation of energy in the world as it proceeds.

We have an FSCO/I rich autonomous system that is self-maintaining and self-replicating based on information and machinery. That is, there are FSCO/I rich energy conversion devices aplenty at work, capable of performing and actually carrying out programmed construction work. The FSCO/I being produced and self-replicated through constructive work begs for explanation in terms of the known, empirically warranted source of FSCO/I in its various forms. And there is just no serious reason to doubt that in the end, the inefficiencies associated with the system will dump entropy elsewhere — note how carefully our bodies work to keep from cooking ourselves in our own body heat, which is in effect what a fever of enough elevation would do. All of that sweating etc. that seeks to keep a regulated temperature going has a reason. So does shivering as a mechanism to try to keep us warm. And more.

But also, the case just discussed shows how the attempt to say that compensation by exporting entropy is, in effect, a good enough answer fails. Again and again, we see constructive work being carried out in accord with programs, e.g. protein synthesis in ribosomes as a classic case. Constructive work, with heat flows as an inevitable byproduct, will dump heat elsewhere, and thus entropy.

Let me again clip my core, summary analysis from was it 6 above:

Heat transfer in Isolated system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2 Th in a common form

[–> I add, this shows how an increment of heat flow will cause entropy in the receiving body B to tend to rise, and due to the sharp rise in the number of possible ways to distribute mass and energy at micro levels implied, relative to the relatively fewer numbers lost for A, the overall number of ways to distribute mass and energy at micro levels in the system rises, the same as saying entropy rises overall. Or, in a certain ideal case useful in calculations, quasi-static equilibrium, it can be constant.

–> I have taken pains to point out that the application of this case to a pool of water freezing by losing enough heat that the attractions of its polar molecules can impose crystal order, freezing to form ice, is a case of order, not organisation, and is irrelevant to the matter at focus: constructive work issuing in FSCO/I, and whether forces like those responsible for diffusion and the like are a credible source of such]
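The two-body inequality above can be sketched numerically (a minimal illustration; the heat and temperature values are arbitrary assumptions, and each body is idealised as a reservoir at fixed temperature):

```python
def dS_transfer(dQ, Ta, Tb):
    """Net entropy change when heat dQ leaves body A (at Ta) and enters
    body B (at Tb), treating both as reservoirs: dS = -dQ/Ta + dQ/Tb."""
    return -dQ / Ta + dQ / Tb

print(dS_transfer(100.0, 400.0, 300.0))  # hot to cold: net entropy rises
print(dS_transfer(100.0, 300.0, 300.0))  # equal temperatures: zero (quasi-static limit)
```

Reversing the direction (heat flowing from the cooler body to the hotter one with no other change) would give a negative total, which is the statistically overwhelming improbability the second law codifies.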

Heat engine, leaving off [in the diagram] the isolation of the whole:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also, B’ –> d’Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d’Q_a will equal the sum of rise in internal energy of B dU_b, work done on D, dW, and heat transferred onwards to C, d’Q_b.

[–> again, this shows that I am arguing in a context where lex 2 th holds, and holds by virtue that heat dumping allows it to do so; but this is not enough to answer the question implied by the “compensation” arguments: whether we can credibly expect diffusion or the like to perform constructive shaft work issuing in FSCO/I in relevant cases of interest, e.g. at OOL in that warm little pond of Darwin or a similar purely physical and chemical environment.]

The pivotal questions are: the performance of constructive work by dW, the possibility that this results in FSCO/I, the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work, and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch.

[–> Notice how I have focussed the real issues]

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos.

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why.

[–> cf. 6 above for a simple model of diffusion:

[KF to GS:] diffusion-like, random spreading mechanisms are acting and strongly tend to drive unconstrained systems to clusters of possible states where the formerly concentrated or more orderly items are now spread out and are utterly unlikely to return to the original state.

There is a “time’s arrow” at work leading to a system that “forgets” its initial condition and moves towards a predominant cluster of microstates that has an overwhelming statistical weight.

For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown [–> 10B on top of 10W], but over 63,000 ways to be 5B/5W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and “forgets” the initial state.
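The counts quoted here can be verified directly; a minimal Python sketch, using the two rows of ten marbles from the example above:

```python
from math import comb

def arrangements(black_in_top):
    """Ways to place 10 black and 10 white marbles in two rows of 10,
    given how many black marbles sit in the top row."""
    return comb(10, black_in_top) * comb(10, 10 - black_in_top)

print(arrangements(10))  # 1     (all black on top, all white below)
print(arrangements(5))   # 63504 (5B/5W in each row)
print(arrangements(6))   # 44100 (the 6:4 split)
```

The lone ordered state is swamped by the tens of thousands of near-5:5 states, which is the statistical point of the example.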

[–> Under circumstances where metastabilities break, systems then migrate to maximally weighted clusters of microstates . . . think of a frozen material with molecules of two types locked in a very special pattern, which then melts, so diffusion happens and the two types become mixed. If we observe at instants thereafter, it will be maximally implausible to see the simply describable case again, for systems of relevant complexity beyond 500 – 1,000 bits. Or, if such a system started in a more typical state, it will not be at all plausible on the gamut of the solar system or the observed cosmos to find deeply isolated clusters of special configs. The 500H coin example illustrates why.]

This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more.

The basic reason for such does not go away if the system is opened up to energy and mass flows; and the point that rare, specifically and simply describable states will seldom be revisited or found becomes, for enough complexity (500 – 1,000 bits), the point that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.

Now, T, it is normal in complex cases like this for there to be differences and errors. However, I am pretty sure that for years to come, people like KS will be snipping out of context, and playing all sorts of willfully continued misrepresentation games, to “prove” that we do not know what we are talking about. He is here to disrupt and harvest snippets to use here and elsewhere for rhetorical or even propagandistic purposes, not to seek the truth through reasonable discussion.

How do I confidently know this?

Because that is what happened to me above in this thread, and because it is what I have observed too many times elsewhere and over the course of years with KS’s ilk.

RS has given an honest opinion, and has in my view made an error which I have addressed.

This, predictably will be pounced on and snatched out of context to make up and pummel a strawman, never mind corrections or protests.

Indeed, my corrections above were brushed aside with the willful and irresponsible misrepresentations that I was spamming the thread and had not adequately addressed the strawman distortion I have yet again taken time to correct.

That is what we are up against.

It brings back all too many memories of how too many communists operated, but frankly, this seems to be worse, more deviously calculated a la Alinsky; most communists were true believers and there was something to be said for much of their analysis and they were often quite sincere. (Never mind how dangerous they were as a result.)

There is nothing positive to be said for a farrago of willfully sustained distractions, distortions and caricatures leading to ad hominems designed to polarise and confuse.

KF

60.
kairosfocus says:

Onlookers,

as predicted, KS is simply repeating his willful misrepresentations.

1: GS and I have both argued — cf. here on — that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), and has also noted that Boltzmann analysis is different from Gibbs analysis, which explicitly reckons with the degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS’s way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view of entropy: a metric of the average missing information needed to specify the microstate on knowing the macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicated about its microstate, given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) typical outputs of randomness: fjiwhghjwuo . . . or (III) what mechanical necessity would force: sksksksksk . . . )

2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point.

3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is no mechanism at work that, by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere (e.g. in B or in B’, C and D) equal to or exceeding the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING, THEN HAS ERECTED A STRAWMAN AND KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.)

4: In particular, KS has held up a case — corrected, but the correction has been ignored and further misrepresented to the point where it is plainly a willful tactic — where, on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out that, ever since Wicken and Orgel — cf. point 8 in the just linked — it has been recognised that order is not to be confused with organisation, which Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life.

5: I have further pointed out that in design thought the difference between order and organisation, and the divergent empirically grounded causes of both, has been underscored since Thaxton et al in The Mystery of Life’s Origin, the very first design theory work. That is, since 1984, on public and easily accessible record. That is, we must address chance-based randomness, necessity-based order, and complex, functionally specific organisation and associated information, which in our uniform, repeated experience of its creation is only sourced in choice contingency, aka design. (This is of course the context in which the design inference explanatory filter — yet another massively, willfully strawmannised point — is grounded.)

6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference on statistical weight of relevant functional and non-functional microstates, with a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work is operative.

Those who are genuinely interested to find out the balance on the merits can follow up, and will see who is being truthful and who is playing at strawman caricatures and now outright drumbeat repetition of big-lie tactics.

(I have presented a point by point analysis and correction that exposed the strawman tactic and backed it up with a substantial analysis above and elsewhere; KS is blandly and brazenly denying that this exists. He believes that drumbeat repetition of strawman distortions maintained in the teeth of correction will confuse, polarise and generally create the false impression he wants. Do not fall for his uncivil tactics.)

KF

61.
keiths says:

KF,

Which step of my simple argument is incorrect, and why?

62.
JWTruthInLove says:

@keiths:

In order to stop your broken record, I volunteer to speak for kf on this matter:

If kf is not happy with “his” answer, he can step in… In this case we will know that at least one step is incorrect.

63.
Timaeus says:

keiths:

I will not defend or even comment further on the remarks you are railing about, until Robert Sheldon has chipped in to clarify what he meant. It would not be fair to him to guess what he meant, and then try to defend him on that basis. That is why I asked you to bracket out that passage from Comment 24 and *concentrate on the op-ed*.

If you are willing to do that, then I will talk. If you aren’t, then this conversation is over.

Now, tell me the errors and incompetence that you detect in the physics of Robert Sheldon *as expressed in the op-ed*.

Or, if you detect no such errors or incompetence, then why do you not agree with his conclusions?

64.
Axel says:

‘The approach many scientists take toward ID is to “define” science so that it excludes ID, and then declare “ID is not science” so they don’t have to deal with the issue of whether or not ID is true.’

LOL VL, GS. That is it in a nutshell, an optimally neatly-designed nutshell. It really is so surreally true as to be side-splittingly funny.

You know, you have just identified the absolutely primordially fundamental problem, and expressed it perfectly. We are following their wretchedly perverse agenda with its risibly inverted assumption of the absence of intelligent design throughout the universe (qualifying ‘design’ with ‘intelligent’ is really superfluous, since it is, essentially, a tautology).

I’m afraid Planck was wrong, wrong, wrong. Science perforce (since so perverse) advances not one funeral at a time but, alas, hundreds of funerals at a time; in great blocks, wadges, and only now does there seem an end in sight. Those who are too dim to be sidling out of it all, or simply too combatively high-profile, are squealing like stuck pigs more and more loudly.

65.
Axel says:

KF,

‘You can’t make up in volume for what your argument lacks in substance.

If you could refute my simple 4-step argument, you would. You would point out the exact statement you disagree with, and you would explain why you thought it was wrong.

You can’t do that, so instead you spam the thread with thousands of words, hoping that the onlookers will assume that there’s a refutation in there somewhere.

There isn’t, and the onlookers know it.’

KF, why don’t you just own up? Put your hands up, and admit that, to his own mind and on his own terms(?) however mysterious, not to say farcical, a fool can outsmart the cleverest of the clever?

66.
Axel says:

Phillip suffered similar tosh from the jackanapes and dullards, who whinged that he was spamming(!), because he persisted in trying to get through to them!

67.

If you don’t exhaustively annotate and footnote, explain and define, they will take every opportunity to misrepresent, obfuscate, and attempt to re-examine that which has already been covered. If you do exhaustively explain and contextualize and define and attribute, then you’re attacked for being too wordy or spamming.

It’s a guerrilla tactic; they aren’t attempting to debate, they’re trying to win a war. You cannot explain the obvious to those that deny it.

68.

Like others here (including Granville!) I am finding it difficult to make sense of the OP. But let me try to understand Robert’s post at 24:

The problem several people have with “entropy” is that they confuse it with a substance. For statistical physics, it is a shorthand for discussing the number of states of the system, while for thermodynamicists, it is related to both energy and temperature. The only place, and I stress it again, that statistics and thermal physics overlap, is when we are discussing atomic gasses. Then we can use Boltzmann’s equation. Everywhere else, we can’t convert them into thermal properties, and probably not even from one statistic to another.

Robert seems to be saying that thermodynamics and “statistical physics” are different. “Statistics” is certainly categorically different to either, just as the integer “2” is categorically different to the 2 as in “2 apples”. Thermodynamic entropy is about physical things, i.e. about physics. Statistics is a tool we use to model it.

So to say that “the only place…that statistics and thermal physics overlap, is when we are discussing atomic gasses” seems utterly extraordinary to me (and rather at odds with what Granville is saying!). Certainly the statement that “Everywhere else, we can’t convert them into thermal properties, and probably not even from one statistic to another” seems very odd.

Let’s go back to the 1st Law of thermodynamics, which is the law of conservation of energy, and which says that the change in INTERNAL ENERGY of a system is equal to the HEAT added to it, a form of energy measured in joules, plus the WORK done on it, which is defined as Force x Distance, and is thus also a measure of energy that can be measured in joules.

So of course we can “convert” macroscopic things like houses and tornados and trees into thermal properties. And we can calculate how much internal energy such a system has, and thus how much work it is able to do; after which its internal energy will be reduced, and its entropy increased.

For example, one could convert a computer code into 1’s and 0’s and measure its statistical entropy, but if I use that computer code to, say, sort all the books in the library into alphabetical order, does that produce a constant that relates the entropy of “computer code” into the entropy of “library catalogues”? I don’t think so. One needs a “converter”, and the efficiency of a converter depends on design, not physics.

If you are talking about Shannon entropy (which you seem to be), we are indeed not talking about energy, but about something else entirely. The Shannon entropy of a piece of computer code simply tells you something about the efficiency of the code (the “channel capacity”), not how good a specific configuration will be at sorting book titles into alphabetical order. We do know that the vast majority of configurations of the 1’s and 0’s of your code will not do anything, let alone sort your books. But, unlike a flask of water into which you drop a drop of dye, there is no tendency for a piece of code that initially starts off, say, with all the 0’s at one end and all the 1’s at the other to end up jumbled when left to its own devices. Nor is there any reason to think that code that consists of only a few 1’s and millions of 0’s, which will have far less Shannon entropy than one in which the 1’s and 0’s are fairly equal in number, will be more likely to do a useful job. In fact, it’s less likely, as the channel capacity (i.e. the Shannon entropy) will be much less.

So you are correct that you can’t convert Shannon entropy to thermodynamic entropy. But if Granville’s claim was about Shannon entropy it would be false. A system with low thermodynamic entropy can do more work than a system with high thermodynamic entropy. But a piece of code with low Shannon entropy is much less capable of being rearranged into effective code than a piece of code with high Shannon entropy. So in terms of usefulness, they are the opposite: to get good code you want high Shannon entropy; to get lots of work, you want low thermodynamic entropy. So if Granville is talking about thermodynamic entropy (as his appeal to the 2nd Law of thermodynamics indicates) his claim is false. If he is talking about some other entropy, not to do with energy, then it is not at all clear what he is even claiming.
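The contrast drawn here between balanced and lopsided bit strings can be put in numbers; a minimal sketch, where the two example strings are assumptions chosen for the demonstration:

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Average information per symbol, H = -sum(p_i * log2(p_i)), in bits."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

balanced = "01" * 500             # 1's and 0's fairly equal in number
lopsided = "0" * 990 + "1" * 10   # a few 1's among many 0's

print(shannon_entropy(balanced))  # 1.0 bit/symbol, the binary maximum
print(shannon_entropy(lopsided))  # ~0.08 bits/symbol, far lower capacity
```

The lopsided string carries far less information per symbol, which is the sense in which high Shannon entropy is the desirable property for code.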

Why is this important?

Well, the theorems for conservation of entropy all come out of thermodynamics. What Boltzmann did, was to show how this could be converted to statistical mechanics. Thus the peculiar “ordering” of atoms has all the same constant properties as the thermal physics of collections of atoms. This was the power of the equation.

But entropy is not conserved. The entropy of an isolated system always increases. And statistical mechanics does indeed show why this is true – unless we add energy to a system (by doing work on it), any work done within the system will increase its entropy, i.e. reduce its capacity for work. This is as true whether we are talking about the gradual reduction of a heat gradient, or rocks gradually dropping off a cliff face. There is no conflict between statistical mechanics and thermodynamics – it’s just that statistical mechanics gives you a microscopic view of why the 2nd Law of thermodynamics must be true. But both approaches are about energy, specifically the relationship between work (in joules), heat (in joules), and the internal energy of a system (in joules), whether you are talking about the potential energy of a rock at the top of a cliff, its kinetic energy as it falls, and the heat released when it hits the ground; or the conduction of heat along a metal bar (or for that matter, the diffusion of chromium within a metal bar). Or indeed the internal energy changes, heat release, and work that take place during a tornado.

I tried to describe how far physicists had gone in calculating the entropy of complex systems, but in another sense, this is a red herring. That is, we almost never use the entropy in calculation, only the change in entropy. And in complicated systems, the change in entropy is a path-dependent function. Or to say it another way, it is dS/dx that is important, not S itself, or perhaps integral[(dS/dx) dx]. So for example, when a nitroglycerine molecule dissociates, the reordering of the chemical bonds releases energy, and the process is driven by the increase in entropy of the gas products over the molecular precursor. So it is the local dS/dx that drives the reaction so very quickly.

By analogy then, Granville doesn’t have to calculate the entropy of his jet airliner, simply the gradients in the entropy from junk-yard to airliner.

Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.

No! Energy gradients can increase spontaneously as well as reduce spontaneously – look at the tornado itself! What was uniform still air is now a massively powerful high energy system with extreme energy gradients! You are not, surely, saying that tornadoes violate the 2nd Law of thermodynamics? Yet all that has happened is that work was done on the air molecules by virtue of thermal gradients on the earth’s surface.

If you are talking about the 2nd Law of thermodynamics you are talking about energy, measured in joules. If you are talking about any other kind of entropy, for example the possible ways of arranging 1’s and 0’s in your computer code, then the 2nd Law does not apply. Indeed it will trip you up, because entropy in computer code is a helpful thing – it increases the efficiency of your code. But that’s because it has nothing to do with energy. If Granville wants to talk about the 2nd Law of thermodynamics, then he is talking about joules.

And there is absolutely no reason to suppose that any chemical or biochemical reaction by which living things grow from raw ingredients violates the 2nd Law of thermodynamics. All the energy is accounted for.

69.
Joe says:

Lizzie:

What was uniform still air, is now a massively powerful high energy system with extreme energy gradients!

Except tornadoes do NOT form out of “uniform still air”. I would think that would be a violation of some law.

70.

If you don’t exhaustively annotate and footnote, explain and define, they will take every opportunity to misrepresent, obfuscate, and attempt to re-examine that which has already been covered. If you do exhaustively explain and contextualize and define and attribute, then you’re attacked for being too wordy or spamming.

It’s a guerrilla tactic; they aren’t attempting to debate, they’re trying to win a war. You cannot explain the obvious to those that deny it.

William, science requires very precise and specific definitions. If you do not define your terms – including your units – you will find that your argument doesn’t work.

Many things are true in the world that are not obvious. Just because something is not obvious does not mean it is not true.

Saying that it is obvious that a Boeing 747 has less entropy than a tornado, or than the junkyard from which the tornado putatively constructed it, does not make it so.

The fact that it seems “obvious” to you that it does, doesn’t, as you say, make it possible to explain it to someone who denies that the 2nd law of thermodynamics is, um, not about thermodynamics.

It is clear that the Boeing has more something than the junkyard (greater capacity to fly, for instance), but it is far from clear that that something is negative entropy.

And the reason becomes clear when you actually define the terms consistently. The 2nd Law of thermodynamics is about entropy change, where entropy is measured in joules per kelvin. If the difference cannot be expressed in those units, then it is not a change in entropy as defined in the 2nd Law.

If there isn’t a change in entropy as defined in the 2nd Law, then the 2nd Law of thermodynamics does not apply.

The obfuscation, and misrepresentation and “drumbeat repetition in the teeth of repeated correction”, as KF would say, seems to me to be coming from Granville’s supporters, however inadvertently, not his critics.

Be that as it may: thermodynamics is about energy. If something isn’t about energy, it isn’t about thermodynamics, although the statistical concepts may come in handy elsewhere.

71.
Joe says:

Elizabeth:

William, science requires very precise and specific definitions. If you do not define your terms – including your units – you will find that your argument doesn’t work.

And that is exactly why evolutionism and materialism do not work and are not science.

72.

Joe

Except tornadoes do NOT form out of “uniform still air”. I would think that would be a violation of some law.

Well, they do, Joe. The air molecules going round and round in a tornado are the same ones that were sitting quietly doing their knitting a few moments before, just as the debris going round and round with them was sitting quietly being someone’s house a few moments earlier.

And have you never seen a dust devil form on a still hot day?

73.

In fact here’s some nice footage that makes my point really well. You can see how still the air is from the glassy surface of the puddles, and the fact that no dust is being lifted anywhere other than where the dust devil is.

Dust devils form when hot air near the surface rises quickly through a small pocket of cooler, low- pressure air above it. If conditions are just right, the air may begin to rotate. As the air rapidly rises, the column of hot air is stretched vertically, thereby moving mass closer to the axis of rotation, which causes intensification of the spinning effect by conservation of angular momentum. The secondary flow in the dust devil causes other hot air to speed horizontally inward to the bottom of the newly forming vortex. As more hot air rushes in toward the developing vortex to replace the air that is rising, the spinning effect becomes further intensified and self-sustaining. A dust devil, fully formed, is a funnel-like chimney through which hot air moves, both upwards and in a circle. As the hot air rises, it cools, loses its buoyancy and eventually ceases to rise. As it rises, it displaces air which descends outside the core of the vortex. This cool air returning acts as a balance against the spinning hot-air outer wall and keeps the system stable.[4]

The spinning effect, along with surface friction, usually will produce a forward momentum. The dust devil is able to sustain itself longer by moving over nearby sources of hot surface air.

As available extreme hot air near the surface is channelled up the dust devil, eventually surrounding cooler air will be sucked in. Once this occurs, the effect is dramatic, and the dust devil dissipates in seconds. Usually this occurs when the dust devil is not moving fast enough (depletion) or begins to enter a terrain where the surface temperatures are cooler, causing unbalance.[5]

Certain conditions increase the likelihood of dust devil formation.

Flat barren terrain, desert or tarmac: Flat conditions increase the likelihood of the hot-air “fuel” being a near constant. Dusty or sandy conditions will cause particles to become caught up in the vortex, making the dust devil easily visible.
Clear skies or lightly cloudy conditions: The surface needs to absorb significant amounts of solar energy to heat the air near the surface and create ideal dust devil conditions.
Light or no wind and cool atmospheric temperature: The underlying factor for sustainability of a dust devil is the extreme difference in temperature between the near-surface air and the atmosphere. Windy conditions will destabilize the spinning effect (like a Tornado) of a dust devil.

And in fact this directly falsifies Robert’s claim in the OP:

Does this make sense? I mean everybody and their brother say that entropy can decrease if you have a heat engine in the system. Energy comes into your refrigerator as low entropy energy. Energy billows out of the coils in the back as high entropy heat. But inside the fridge is a low-entropy freezer. Couldn’t this apply to Earth? (E.g., compensation argument.)

Yes, it could, and does, Robert. These dust devils are heat engines. They pump hot air from near the ground, cooling it. They are low entropy “freezers”. Hot air billows out of the top of the “chimney” as high energy “heat”.

And they form spontaneously, yet do not violate the 2nd Law of thermodynamics.

Or do you think they do?

74.
Joe says:

Elizabeth,

Tornadoes form during thunderstorms. The air is hardly still. There are updrafts and downdrafts. Look at tornado alley- cold winds coming down from the north collide with warm winds coming up from the Gulf of Mexico. Winds mean the air is moving, Liz.

And dust devils also require wind.

75.
kairosfocus says:

JWT: KS has played a strawman distortion game, which I have repeatedly corrected. At this point, I suggest you acquaint yourself with the substantial point, that he and others of his ilk are trying to find some subterfuge to get away with pretending or suggesting that diffusion and the like, overwhelmingly dispersive forces, can reasonably perform constructive work leading to FSCO/I. The focus on the point that, per the 2nd law, a body that loses heat will reduce its entropy, the heat having to go to something at a lower temperature which then increases its entropy beyond that lost, is a red herring, led away to the strawman that GS and I have denied or implied denial of the 2nd law. In fact the relevant entity, in the diagrams and discussions I have put up, is B or B’ . . . an energy IMPORTING entity, especially when B’ performs shaft work on importing energy, perhaps by heat, or otherwise. The statistical underpinnings of the 2nd law lead directly to the issue that diffusion etc. will overwhelmingly lead away from constructive work: systems gravitate to configuration clusters that have overwhelming statistical weight, and on the relevant gamut for OOL etc., the resources of the observed cosmos are not enough to sample enough of the phase space at random in the cosmic lifespan to credibly ever hit on the sort of special, isolated and rare clusters of configs implied by the FSCO/I of life. These, and more on related points — there is a fairly large body of related issues — have been outlined and explained for those who really do want to understand. (And on the whole, I have dodged the math, which gets hairy real fast. A more mathematical analysis that leads to the same essential point is in TMLO chs 7 & 8, here. It could be updated a bit, but is essentially on target as far as it goes. Appendix 1 of my always linked gives a bit more too.) KF.

76.

Joe

Elizabeth,

Tornadoes form during thunderstorms. The air is hardly still. There are updrafts and downdrafts. Look at tornado alley- cold winds coming down from the north collide with warm winds coming up from the Gulf of Mexico. Winds mean the air is moving, Liz.

And dust devils also require wind.

Dust devils (and tornadoes) are winds, Joe!

Winds are convection currents. In the case of dust devils (check the wiki) they actually only form in “light or no wind”.

This is because they are rising convection currents from a layer of hot air heated by hot ground which in turn is heated by solar radiation. On windy days they don’t form because that layer of hot air keeps getting whisked away.

Sometimes (because the ground is non-uniformly heat absorbent) a particular patch of ground will get hotter than the rest, and the hot air will start to rise (i.e. become a vertical wind) and turn. This forms a chimney which acts exactly like a fridge – pumps hot air from near the ground and out of the top of the chimney.

No law is violated when they do so.

There are some awesome pictures of dust devils on Mars too.

77.
Joe says:

keiths:

If you asked 100 highly-trained physicists whether they think that life violates the second law, what do you think they would say?

No one cares what they say. People care what they can demonstrate. And it is a given that they cannot demonstrate that blind and undirected chemical processes can produce a living organism from non-living matter.

78.
Joe says:

Elizabeth:

Dust devils (and tornadoes) are winds, Joe!

I know! However, the winds that form tornadoes are not themselves tornadoes. And you said the air was still – that is incorrect.

Winds are convection currents. In the case of dust devils (check the wiki) they actually only form in “light or no wind”.

They need the movement of air in order to form. Even wiki says that.

This is because they are rising convection currents from a layer of hot air heated by hot ground which in turn is heated by solar radiation.

Right, the air is NOT still. You said it was still.

And true, hot air flowing into cool air is not a violation of the law.

79.
kairosfocus says:

Dr Liddle:

Are you familiar with the perspective of Jaynes et al down to Robertson et al?

And, with the impact they have at length had on understanding what entropy is?

Can you specifically show where they are wrong when they in effect summarise — I have clipped on this in what is being dismissed as spamming by those who have no intention of addressing matters on the merits — more or less as follows:

the entropy of a body or system can be seen as the average missing information to specify the microstate of a system, given its macro state, suitably referred to energy by a conversion factor.

Let me again clip for your convenience the summary from Robertson:

. . . we may average the information per symbol in [a] communication system thusly (giving in terms of -H to make the additive relationships clearer):

– H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = – SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993) — excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.)

For, as he astutely observes on pp. vii – viii:

. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

And, in more detail (pp. 3 – 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Semura here and here):

. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

[deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati’s discussion of debates and the issue of open systems here . . . ]

H({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn 6]

[where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .

[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 – 6, 7, 36; replacing Robertson’s use of S for Informational Entropy with the more standard H.]
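The Robertson/Jaynes equations clipped above can be checked numerically. A minimal sketch, with made-up energy levels and beta (the specific values are illustrative assumptions, not from Robertson): with p_i = exp(-beta*y_i)/Z, the information entropy H = -SUM p_i ln p_i works out to ln Z + beta*&lt;E&gt;, tying H to the thermodynamic form when C = k = 1.

```python
import math

# Maximum-entropy probabilities over a few energy levels, per the
# clipped derivation: p_i = exp(-beta*e_i)/Z, Z the partition function.
# Energy levels and beta below are hypothetical, chosen for illustration.
energies = [0.0, 1.0, 2.0, 3.0]   # hypothetical energy levels e_i
beta = 0.7                        # hypothetical 1/kT

Z = sum(math.exp(-beta * e) for e in energies)      # partition function
p = [math.exp(-beta * e) / Z for e in energies]     # microstate probabilities

H = -sum(pi * math.log(pi) for pi in p)             # -SUM p_i ln p_i
mean_E = sum(pi * e for pi, e in zip(p, energies))  # average energy <E>

assert abs(sum(p) - 1.0) < 1e-12                    # probabilities normalise
# Substituting ln p_i = -beta*e_i - ln Z gives H = ln Z + beta*<E>:
assert abs(H - (math.log(Z) + beta * mean_E)) < 1e-12
```

The second assertion is just the algebra of the quoted derivation made concrete: the information-theoretic H and the statistical-thermodynamic expression agree term by term.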

As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale . . .

This outlines a good part of my reason for taking this view seriously, and as the Wiki clip acknowledges, it is increasingly seen as a reasonable perspective or school of thought. (Remember there are several schools of thought on quantum physics also.)

Taking this back to the matters in hand, diffusive forces and the like overwhelmingly move systems to microstate clusters where the bulk of possibilities for a system lie, as I showed briefly in the toy example for diffusion.

In short, let me clip from no 6 again, as it is obvious you have not attended to the point:

For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but over 63,000 ways to have 5 black and 5 white in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and “forgets” the initial state.

This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more.
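The counts in the marble example above can be verified directly. A short sketch using Python's `math.comb`: with b black marbles in the top row there are C(10,b) ways to place them there and C(10,10-b) = C(10,b) ways to place the rest below, so the "over 63,000" and "44,000+" figures come out as 63,504 and 44,100.

```python
from math import comb

# Two rows of 10 positions, 10 black and 10 white marbles.
# Arrangements with b black marbles in the top row: C(10,b)**2.
counts = {b: comb(10, b) ** 2 for b in range(11)}

assert counts[10] == 1                   # all black in one row: one way
assert counts[5] == 63504                # the 5:5 peak — "over 63,000"
assert counts[4] == counts[6] == 44100   # the 4:6 and 6:4 cases — "44,000+"
assert sum(counts.values()) == comb(20, 10)  # 184,756 arrangements in all
```

The dominance of the near-5:5 cluster over the single sorted arrangement is what the agitation argument above turns on.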

The basic reason for such does not go away if the system is opened up to energy and mass flows; and the point that rare, specifically and simply describable states will seldom be revisited or found soon becomes, for enough complexity — 500 – 1,000 bits — that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.

RS’ point that there are states that can be locked away from interaction so that it is reasonable to partition entropy accounting, is also quite useful.

My own emphasis is that we need to see the difference between what diffusion like factors/ forces will strongly tend to do and what produces shaft work thence constructive work ending in FSCO/I.

To assume or hope that such a type of effect will perform organised shaft work that constructs an entity manifesting FSCO/I is empirically futile. There is a logical possibility, but the overwhelming balance of statistical weights of microstate clusters will push the system towards states that are anything but what FSCO/I requires.

As has been repeatedly pointed out, but ignored and derided or twisted into a strawman and dismissed.

KF

80.
kairosfocus says:

Joe, the pretence or suggestion above that anyone — other than RS it seems, if his qualifications are not meant to be his main point — is arguing that the 2nd law fails, is a strawman caricature. GS is arguing that diffusion-like processes characterise systems, whether isolated or open, and so do I. My own analysis begins from Clausius’ example and examines the fate of energy importing bodies, which are parallel to what would have happened with earth. The bottom line is that functionally specific complex organisation and associated implied information are not empirically gotten for free, and certainly are not credibly gotten from diffusion or the like. The strawman tactic looks to be set up to distract attention from this problem, which is a serious one, as in effect a lot of very specific info that is even coded in some parts is in effect being pulled out of noise. KF

81.
Joe says:

KF, it’s all about the evidence and people have noticed that their position doesn’t have any. They can talk and talk but they cannot show that the rubber meets the road.

Tornadoes spontaneously arising from still air? Pulllease

82.

Thanks for the extract, KF. Yes indeed, Shannon entropy and thermodynamic entropy are mathematically almost identical, and not by coincidence.

And indeed, you could interpret the thermodynamic entropy (measured in joules per kelvin) as representing the Shannon information of the system (in bits) if the number of possible microstates represented the number of possible messages that could be transmitted by the system.

You could further directly relate the two by saying that the thermodynamic entropy (in joules per kelvin) is a measure of how many bits would be needed to specify the microstate.

Therefore the greater the thermodynamic entropy, the less information the description of the macrostate (how many joules of energy) tells you about the microstate (i.e. the more possible microstates fit that description).

Because of course, as you are well aware, the quantity of Shannon entropy in a message tells you nothing about the message it contains. It just means that there are more possible messages that it could contain.
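The joules-versus-bits comparison in the comments above can be made concrete. A minimal sketch (the 1 J/K figure is an illustrative assumption): since S = k_B ln(W) carries units of joules per kelvin while log2(W) counts bits, one J/K of thermodynamic entropy corresponds to 1/(k_B ln 2) bits.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def entropy_JK_to_bits(s_joules_per_kelvin):
    """Shannon information (bits) equivalent to a thermodynamic entropy:
    S = k_B * ln(W) and bits = log2(W), so bits = S / (k_B * ln 2)."""
    return s_joules_per_kelvin / (k_B * math.log(2))

# Even a modest 1 J/K of entropy is an astronomical number of bits.
bits = entropy_JK_to_bits(1.0)
assert 1.0e23 < bits < 1.1e23   # roughly 10^23 bits
```

This is the numerical sense in which the Wikipedia passage quoted earlier calls changes in S/kB "right off the scale compared to anything seen in data compression or signal processing."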

So how do we relate this to Granville’s argument?

It comes back to specification, as I’m sure you will agree. A Boeing 747 is a highly “specified” description of an arrangement of junkyard parts, while “a junkyard of parts” can describe a vast number of arrangements.

But in Shannon terms, both have identical Shannon entropy (unless some of the parts have gone missing in the tornado, or were not included in the Boeing, in which case one or the other may have less).

Just as “KAIROSFOCUS” has exactly the same Shannon entropy as “SCAOSFIUORK”, even though one has meaning and the other does not.
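The claim about the two strings above can be checked directly: they are anagrams, so their letter-frequency distributions, and hence their per-symbol Shannon entropies, are identical. A minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy_per_symbol(s):
    """H = -SUM p_i log2(p_i) over the letter frequencies of s (bits/symbol)."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Same letters, same frequencies -> same entropy, meaning notwithstanding.
h_name = shannon_entropy_per_symbol("KAIROSFOCUS")
h_scrambled = shannon_entropy_per_symbol("SCAOSFIUORK")
assert abs(h_name - h_scrambled) < 1e-12
```

This is exactly the sense in which Shannon entropy is blind to meaning: it measures the frequency distribution of symbols, not which message they spell.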

So the reason a tornado will not make a Boeing 747 in a junkyard is not that a Boeing has much less Shannon entropy (in bits) than a junkyard, nor that a Boeing has much less thermodynamic entropy (in joules per kelvin) than a junkyard (it may have more, it may have less).

So whatever the reason a tornado cannot build a Boeing 747 in a junkyard, it has nothing to do with the 2nd Law of thermodynamics, and nothing to do with entropy, whether thermal or Shannon.

Which is not to say that a tornado can build a 747. It can’t. It’s just that Granville’s argument that it can’t is incorrect.

However, when it comes to living things, it is perfectly true that living things have less entropy than the things they are composed of, just as a tornado (or a dust devil) has less entropy than still air.

So can entropy decrease spontaneously? Yes, it can. Convection currents would not be possible if it couldn’t, nor would saltpans. And seeds, whether designed or not, would not grow into trees and convert high entropy carbon dioxide and water into low entropy sugar and starch.

So yet again, Granville’s argument fails. Life does not violate the 2nd Law, because the 2nd Law does not prevent the spontaneous development of low entropy systems. It just says that this only happens if work is done by a system external to the low entropy system, which, as a result, will experience an increase of entropy at least equal to the local decrease.

83.

Joe

Tornadoes spontaneously arising from still air? Pulllease

Oh boy, back to Cantor’s infinities. Look at the videos, Joe.

84.

KF:

GS is arguing that diffusion-like processes characterise systems, whether isolated or open, and so do I.

They certainly characterise systems on which no work is being done. But clearly, diffusion is not the only spontaneous direction of rearrangement of matter. If it were, how would you explain weather?

85.
Joe says:

Look at the videos, Joe.

Oh boy, back to deception and misdirection.

Which video shows a TORNADO spontaneously arising from still air?

I saw videos of dust/ dirt devils and even wiki says they require the air to be in motion in order to form.

86.

Obviously the air has to be in motion for them to form, Joe, because they consist of air in motion.

But that air was still before it got into motion and became a dust devil (let’s ignore the tornado for now, as the dust devil energy accounting is simpler)

And the reason the air moves (upwards) is that it is heated by the hot ground.

And as a result, the column of still air that preceded the dust devil, and which had high entropy (was in a low energy state) now rises and turns and becomes a low entropy system called a dust devil.

This pumps hot air from the ground and cools it. Eventually, after the dust devil has vacuumed around and got rid of all the hot air, it goes back to being still air again.

It’s no big deal, but it’s the simplest example I know of a non living system that effectively behaves like a fridge, and forms spontaneously, i.e. no-one designs it, or plugs it in, or does anything at all. As I said, they form on Mars, and so far there is no sign of designers (or life) on Mars.

87.
kairosfocus says:

Some notes:

1 –> Wind systems are often manifestations of convective effects, ultimately at planetary scale. Tendencies to form vortices come about by various means, including Coriolis effects and the shear of differently directed winds creating a basis for rotation.

2 –> Wind systems are manifestations of fluid dynamics, with the aid of some thermodynamics. Of course the systems are extremely nonlinear and sensitive to initial conditions.

3 –> Once we see that entropy is also about MISSING info on the microstate given macrostate variables such as pressure, temp, etc., we can see the degrees of freedom present; so we see a lower relative quantum of info, which under relevant circumstances of energy to change states etc., can be converted into work, at least in part. (States include motion of massive objects, however tiny, and this is immediately an energy storage mechanism. Translation, rotation, vibration.)

4 –> Constructive work imparts forced ordered motion to components that in accord with some plan, makes them go to a specific functional state. The classic instance is forming a protein by chaining monomers and forcing peptide reactions in accord with a stored code.

5 –> Such a protein then folds based on its config of elements in its chain, sometimes on its own, often with aid of a Chaperone molecule (prions are yet lower energy state folds that are non functional and indeed are implicated in serious diseases.)

6 –> Again, both information and energy are involved, and functional specificity is a tightly constrained outcome, thus low entropy, paid for elsewhere. The natural tendency once the metastability is broken, is for proteins to denature, and break down. DNA is also notoriously metastable at best.

7 –> In short, information is involved in constructive work issuing in FSCO/I, and it is intimately bound up in the high energy states built up based on instructions and put to work thereafter. These are energetically very much uphill, endothermic molecules.

8 –> Knowing that something is in a functional state is a very useful thing, and can often be used to perform desired work in itself. It also confines you to a very narrow zone in a config space. Finding such a cluster by chance or blind processes, will be difficult indeed. Starting with OOL.

9 –> When it comes to tornadoes and 747’s, Hoyle’s argument was in effect that this is a macro analogue, crudely, of diffusion. GS is speaking about this sort of thing. Because diffusion processes and the like tend so strongly to move to dominant clusters of states — and away from FSCO/I — they are not feasible as means to do constructive work. So, there is no free lunch here to get constructive work for “free” from chance factors. Thus the issue of analysing how work gets done.

10 –> Constructive work requires organised forced ordered motion. This yields entities that are wired according to a functional plan, whether a Jumbo jet or a string of characters in a post like this, or a protein chain or a DNA chain. Such requires entities capable of the required “shaft work,” whether at micro or macro level. Thus we see men and machines expending a similar amount of energy and building the Jumbo Jet. In the cell, mRNA, tRNA, ribosomes, etc work together to build proteins.

11 –> FSCO/I is in common, and the only empirically warranted originating source of FSCO/I is design. Certainly not diffusion or the like, whether in isolated, closed or open systems.

KF

88.
kairosfocus says:

EL: Weather is a manifestation of order and chance, on planetary scale. Convection — an orderly pattern resulting from differential heating and a means by which heat is dispersed — leads to wind systems; water vapour content and variation with height, pressure etc lead to precipitation, clouds and more. Dust devils are vortices with entrained dust, vortices being a characteristic orderly pattern in fluids where rotation is injected. Remember, the issue is that we see necessity, chance and choice at work in our world. And the argument is not about order but organisation that is functionally specific, and complex enough to be beyond credible chance contingency (by diffusion etc), which is commonly and only seen as the product of choice. For reasons closely connected to why chance is not a good explanation for FSCO/I — sampling of the config space where one would have to credibly capture quite rare and isolated zones. Dust devils, tornadoes and hurricanes are simply not to be compared to proteins formed in ribosomes on coded instructions. To attempt such verges on a strawman. KF

89.

EL: Weather is a manifestation of order and chance, on planetary scale.

Yes it is, and not just on a planetary scale. On very small scales too.

Convection — an orderly pattern resulting from a differential heating and a means by which heat is dispersed — leads to wind systems, water vapour content and variation with height, pressure etc leads to precipitation, clouds and more. Dust devils are vortices with entrained dust. Vortices being a characteristic orderly pattern in fluids where rotation is injected. Remember, the issue is that we see necessity, chance and choice at work in our world.

Yes indeed. And all those vortices you mention represent local entropy reductions. None of them violate the 2nd Law of thermodynamics. They can raise things to a higher energy potential, and undiffuse what was diffused. They render the most probable states ones in which there are gradients, rather than ones in which there are not.

And the argument is not about order but organisation that is functionally specific, and complex enough to be beyond credible chance contingency (by diffusion etc) is commonly and only seen as the product of choice. For reasons closely connected to why chance is not a good explanation for FSCO/I — sampling of the config space where one would have to credibly capture quite rare and isolated zones.

Exactly. So the issue has nothing to do with order in the sense that negative entropy is order, but with organisation. Therefore it has nothing to do with the 2nd Law of thermodynamics.

Granville’s claim simply boils down to the same argument as Dembski’s and the 2nd Law is irrelevant to it.

Dust devils, tornadoes and hurricanes are simply not to be compared to proteins formed in ribosomes on coded instructions. to attempt such verges on a strawman. KF

Yes indeed, but the straw man is of Granville’s making. Local entropy reduction, on a very powerful scale, is perfectly possible on earth, so the idea that because life represents reduced entropy, therefore evolution can’t be right, is a straw man. Reduced entropy may be necessary for life, but it isn’t sufficient. Simple logic therefore tells us that saying that entropy reduction is forbidden by the 2nd Law, therefore no evolution, isn’t valid.

There may be reasons evolution can’t happen, but the idea that the required entropy reduction would violate the 2nd Law isn’t one of them. If it did, so would any vortex.

90.

This is a long comment thread, and I was chastised for having abandoned it too soon. I will try to answer some of the questions raised about my meanings and definitions.

#26, #37 EBL
No, there are many 2nd laws, as Granville says. There is one for thermo entropy S, and one for chromium entropy S1, one for socks entropy S2 …
Only for S = k ln(W), where W comes from noble-gas statistics, is it possible to convert thermo entropy to noble-gas-statistical entropy. This one success, however, suggests that S1 = ln(W1), S2 = ln(W2), … are all conserved, even if we don’t know “k”, the conversion constant to thermal entropy S.

The reason it is SLoT (emphasizing T), is because entropy was defined in the context of heat engines, Carnot cycles, and steam power, long before Ludwig Boltzmann converted it to statistics. Boltzmann’s discovery enabled it to be exactly computable from statistics of ordering. This was considered a “deep” result, having far more explanatory power than simply heat engines, since just about anything could be discussed in terms of ordering.

#32 CS3
However, the mistake of Styer, Bunn, and Lloyd is that they think entropy is convertible, so for example, chromium entropy can be “compensated” by thermal entropy. But this is the one thing we don’t know: how to convert S1, S2 into S and thereby add to or subtract from the sum. This is why Sewell conserves them separately.

#27 collin —> has the gist of the argument. We can have conservation laws of things without quantifying the things. E.g., most of the time we don’t know “S0”, the additive offset in thermodynamic entropy calculations.

#39 KF
Has supported me and GS, but I would point out that diffusion ASSUMES local forces only. Once again, the theory of diffusion that is used by GS to discuss SLoT assumes that heat acts randomly, like noble gases diffusing. When long-range forces dominate, as in long, oily molecules, proteins, DNA, or fully ionized plasmas, then diffusion doesn’t behave the way it is expected, the conversion constant “k” is unknown, and the system is said to be “open”.

#40, #46, #57, #61 KS
Premise 2 is wrong. S(C) =/= S(A) + S(B).
a) Tsallis discusses the possibility of “non-extensive” entropy that violates this assumption even for heat.
b) Every physics book has the trick problem of a gas expanding into a vacuum, with the trick question: what is the final temperature? It turns out that the sum is undefined because the entropy is “space-dependent”.
c) But even more importantly, only if A and B have EXACTLY the same kind of entropy, say, thermal entropy, is it even likely that they can be added. And even then, many other conditions have to be met. They have to have the same kind of “atom”; otherwise, like mixing water and alcohol, the mixture is less than the sum. All of which Granville explains clearly, and you confuse.

#45, 52 GD
No, Granville did compensation properly. You assumed that all forms of entropy are convertible. They aren’t, or at least, if they are then somebody needs to go collect their Nobel Prize as they did in 1991 for oily liquids. Therefore molecular entropy lost in biological growth CANNOT be converted to thermal entropy from the Sun, and thus cannot be “compensated”, as you mistakenly aver.

#49-51
Yes, I am defending Granville’s paper. It’s very clear and nicely written. One can refuse to engage the defenses, but of course, that opens one up to being out-flanked.

#53 KS (and #54, #63 T)
Don’t have a cow Keith, our eternal destiny doesn’t depend on keeping the 2nd law, fortunately.

As far as “thermodynamics” is concerned, the cell is a heat engine, consuming fuel in the form of glucose, and putting out waste in the form of CO2 and H2O, as well as intermediate sized acids. This does not violate SLoT, nor could it, or else we wouldn’t have to eat to live. But it also says nothing about where the heat engine came from. Clausius or Carnot or Boltzmann said nothing about where the steam engine originated, they only said they could describe its use of coal and water and its output of work.

The Origin of Life (OOL), the origin of chromium entropy, the origin of socks entropy VIOLATES the 2nd Law of Thermodynamics when interpreted by Boltzmann as the statistical order of these objects. This is easily explained, it is because all these things mentioned are in an “open” system, their origin is external to the system. And that was the whole point of Granville’s paper.

One more time: if we restrict ourselves to thermal entropy alone, then the cell strictly obeys the 2nd Law. If we allow ourselves the luxury of applying Boltzmann’s definition, S=k ln(W), then the cell no longer obeys this formulation, as best as we can estimate W.

#58 KS
You keep asking about what people think. Are you religious? If so, then what faith? Some religions put emphasis on epistemology (knowing the truth), others on metaphysics (being the truth), while others on ethics (obeying the truth). From your posts, I would assume you are a category 2 believer. Which may be why Granville’s paper bothers you.

#59 KF
I am notoriously sloppy in my wording, but I’m sure we could clarify our respective positions and find agreement.

#64–66 Axel
Seems to be channelling KS while under the influence. Read above rebuttals to KS.

#68 EBL
You are confusing “energy” with “entropy”. The 1st law was proven by a British spy in the Revolutionary War, who nonetheless got an elementary school named after him in Woburn, Massachusetts–Count Rumford. He showed that work could be converted to heat by a precise ratio. They were convertible. The same could not be said for entropy, the topic of the 2nd law.

Granville (and all physicists) have no trouble with converting energy and work, we just have trouble with people who convert thermal entropy into life, or Shannon entropy, or chromium entropy, or socks entropy…

Your comment about “entropy not being conserved,.. but decreasing” is mistaken. The SLoT prohibits thermal entropy of isolated systems from decreasing — it is either conserved and constant, or increasing.

Likewise your comment that SLoT “is about energy” is also mistaken. The appropriate equation is dS = dQ/T. Entropy is about the ratio of energy over temperature. Quite another sort of bird altogether.

Again, I wasn’t discussing energy gradients, but entropy gradients. So your comments aren’t relevant to my discussion. Nevertheless, atmospheric scientists refer to tornadoes as “entropy-driven” systems because of the way vorticity plays a part in their creation–NOT energy gradients!

#70 EBL
Thermodynamics is NOT about energy primarily; look at the word: “thermo” = heat, “dynamics” = motion. It’s about heat flow. Or more precisely dS = dQ/T, but I already said that.

#72, 73, 76, 86, 89 etc
Lizzie, neither dust devils nor tornadoes violate SLoT, but neither are they spontaneous. They are driven by gradients of entropy (not gradients of energy). Your Wikipedia reference to convection layers is missing a big piece of the physics, because if convection alone were enough, you should see tornadoes in every pot of water boiling on the stove.

But I’m glad you brought them up–they are a good example of how to generate order using entropy gradients.

#82 EBL
Shannon entropy has different units from thermal entropy. They are as far from each other as fish and bicycles.

91.
Joe says:

Elizabeth:

But that air was still before it got into motion and became a dust devil (let’s ignore the tornado for now, as the dust devil energy accounting is simpler)

That’s not a given- that the air was still.

And the reason the air moves (upwards) is that it is heated by the hot ground.

And the air above that hot ground is cooler.

It is a simple event.

92.
keiths says:

Robert:

#64–66 Axel
Seems to be channelling KS while under the influence. Read above rebuttals to KS.

Lol. Axel is on your side, Robert, though you probably wish he weren’t.

93.
kairosfocus says:

EL:

It seems you want me to try to prove over and over again that you and your ilk are setting up and knocking over strawmen, the better to dismiss such and repeat falsehoods over and over as though I am guilty by your assertions. (This is similar to how you hosted slander, then denied it for months, then when it was shown beyond dispute tried to defend it, in part on what J’cans call “a nuh nutten.”)

Has it as yet registered with you that:

(a) The Clausius situation I began my analysis c. 2006 – 2008 with, has local entropy variations that are positive and negative, but are connected in particular ways by heat transfer and work transfers leading to the 2nd law? (which BTW then sets up relations that apply this law to other cases that are not isolated.) Let me refresh your memory:

Heat transfer in Isolated system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2, Th in a common form [–> FYI, A here has a local entropy reduction, linked to the rise in B such that Lex 2 th emerges]
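As a numeric illustration of the balance just stated (a sketch of mine; the temperatures and heat quantity are chosen purely for illustration):

```python
# Two-body heat transfer inside an isolated system (the Clausius setup above):
# body A at T_a loses heat d'Q, body B at T_b receives it, and the net
# entropy change dS = -d'Q/T_a + d'Q/T_b is non-negative whenever T_a >= T_b.
def net_entropy_change(dQ, T_a, T_b):
    dS_A = -dQ / T_a   # local entropy reduction in the hotter body A
    dS_B = +dQ / T_b   # larger entropy rise in the cooler body B
    return dS_A + dS_B

dS = net_entropy_change(100.0, 500.0, 300.0)
# A's entropy falls by 0.2 J/K, B's rises by about 0.333 J/K: net positive
```

So the local reduction in A is real, but it is outweighed by the rise in B, which is exactly how Lex 2 Th emerges from the coupled flows.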

Heat engine, leaving off the isolation of the whole [–> case 2, extending to the heat engine and with contexts for energy converters in general]:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also, B’ –> d’Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d’Q_a will equal the sum of rise in internal energy of B dU_b, work done on D, dW, and heat transferred onwards to C, d’Q_b. [–> Shaft work is what in our observation performs constructive work, from turning the drive shaft of an engine to powering a robot’s limbs etc to do constructive work, even “bio-robots” i.e. our limbs.]

(b) Notice this, from GS’s Second Thoughts on the second Law article, from about a decade ago:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. [–> he highlights how heat transfers follow diffusion-like laws] The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door [–> or be imposed by bulk laws such as that heated bodies of fluids, e.g. air, expand and so lower their density, leading to upthrust and floatation in the wider atmosphere, which will tend to draw in air from elsewhere, leading to a convection loop, thence winds and related phenomena including vortices etc.] … If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . .

(c) Returning to what I argued, the challenge then goes to what diffusion-like processes and the like can credibly do (given available atomic and temporal resources on solar system or cosmos as a whole, constraining scope of sampling of large config spaces of beyond astronomical scale), vs what organised shaft work is easily observed to do, when constructive work issuing in FSCO/I occurs. More from 6 above:

The pivotal questions are, the performance of constructive work by dW, the possibility that this results in FSCO/I, the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work, and the suggested ideas that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch. [–> notice, what I set out to address in a context where the matter of local shifts in entropies of interacting bodies had been addressed already.]

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos. [–> Notice, what I am saying, why, and why this implies that the attempts to twist me into denying lex 2 th are strawman tactics]

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why. [–> the nanobots and microjets thought exercise, clipped at 44 above after several posts in which KS’s strawman tactics are exposed and corrected on longstanding record, draws out why]

Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.

(d) In short, I have at no point tried to suggest that when we have coupled changes in energy in local subsystems, there are no local entropy decreases or increases. Just the opposite, laid out in diagrams and algebra several times in this thread alone. So, to pretend or suggest otherwise by setting up and knocking over a strawman, is a false accusation by implication.

(e) Similarly, at no point that I am aware of has GS tried to imply that a local decrease of entropy of an interacting subsystem is impossible. He is pointing out that when the manner of interaction is by diffusion-like forces, then, for probability reasons linked to clusters of microstates and the utterly dominant scattered states due to sheer statistical weight, typical entities that manifest FSCO/I are not credible products.

(f) That is, what I am doing is bringing to bear an explicit Clausius law and shaft work organised to carry out constructive work analysis that draws out why this is so.

(g) When therefore you suggest vortices as cases of local order emerging by thermodynamic forces feeding into fluid dynamics, you are NOT answering to the relevant case but are setting up and knocking over a strawman. Just as KS’s attempt with freezing ponds did.

(h) Do I need to clip here my discussion of hurricanes and snowflakes again, from 42 above to highlight that I have in fact addressed both vortices and crystallisation as claimed counter examples, by highlighting just how these are not FSCO/I and wiring diagram organisation, but cases of order fed by heat flows? (Or, should I let it suffice to state that I specifically said that I was clipping this in anticipation of objections using such cases? If you doubt me just click the just linked.)

(i) In short, I have abundant reason to point out that that which KS sneeringly dismissed as spamming the thread, was on target and substantially relevant. Pardon some fairly direct words: had he and you simply been willing to recognise that someone on the design side may actually know a bit of what they were talking about, you would have saved us all a lot of going in circles of responding to strawman tactics.

KF

94.
CS3 says:

Robert Sheldon:

#32 CS3
However, the mistake of Styer, Bunn, and Lloyd is that they think entropy is convertible, so for example chromium entropy can be “compensated” by thermal entropy. But this is the one thing we don’t know: how to convert S1, S2 into S and thereby add to or subtract from the sum. This is why Sewell conserves them separately.

I completely agree. Just to be clear, my comment was definitely not directed at you (it was primarily for EL). While I have not yet fully digested your initial post, I certainly also appreciate Sewell’s paper. Thanks for your interesting post!

95.
keiths says:

Timaeus,

I want to thank you again for suggesting that I take a closer look at Robert Sheldon’s OP and comments.

The whole situation is just fantastic. You try to defend Granville’s wretched paper, saying that he doesn’t claim that evolution violates the second law. But you refer me to Robert, a PhD physicist who not only thinks that evolution violates the second law, he even thinks that a tree violates the second law every time it sprouts a new leaf! And who casually dismisses violations of the second law:

Don’t have a cow Keith, our eternal destiny doesn’t depend on keeping the 2nd law, fortunately.

Well, if Robert defends Granville’s paper, it must be good. ID “science” is in beautiful shape.

More, please! Do you have any other recommendations? Anyone else whose writings I should take a closer look at? 😀

96.

Robert:

#26, #37 EBL
No, there are many 2nd laws, as Granville says. There is one for thermo entropy S, and one for chromium entropy S1, one for socks entropy S2 …
Only in S=k ln(W) where W=noble gas statistics, is it possible to convert thermo entropy to noble-gas-statistical entropy. This one success, however, suggests that S1 = ln(W1), S2=ln(W2)…are all conserved, even if we don’t know “k”, the conversion constant to thermal entropy S.
The reason it is SLoT (emphasizing T), is because entropy was defined in the context of heat engines, Carnot cycles, and steam power, long before Ludwig Boltzmann converted it to statistics. Boltzmann’s discovery enabled it to be exactly computable from statistics of ordering. This was considered a “deep” result, having far more explanatory power than simply heat engines, since just about anything could be discussed in terms of ordering.

Well, he didn’t “convert it to statistics”. He expressed it in terms of the probability of microstates, but his formula has units of energy – hence the constant. You can use it purely statistically, as in Shannon’s entropy, but then the SLoT wouldn’t necessarily apply. That’s why I keep saying that it’s important to say what a p value is the probability of, and under what conditions. In Boltzmann’s formula, the p value is the probability of microstates, which are defined in terms of energy. The constant gives you the answer in joules/kelvin.
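The point about the units can be made concrete; a minimal sketch (the 100-bit register is my example, not from the thread):

```python
import math

# S = k_B * ln(W): the Boltzmann constant carries the thermodynamic units.
# The same logarithm without k_B is a dimensionless count, i.e. Shannon's
# entropy in bits when the log is taken base 2.
K_B = 1.380649e-23  # joules per kelvin (exact since the 2019 SI redefinition)

def boltzmann_entropy(W):
    """Thermodynamic entropy in J/K for W equally likely microstates."""
    return K_B * math.log(W)

def shannon_entropy_bits(W):
    """Informational entropy in bits for W equally likely outcomes."""
    return math.log2(W)

W = 2 ** 100                      # e.g. a 100-bit register
s_thermo = boltzmann_entropy(W)   # roughly 9.57e-22 J/K
s_bits = shannon_entropy_bits(W)  # exactly 100 bits
```

Same logarithm, two different quantities: only the constant turns a count of arrangements into joules per kelvin.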

#68 EBL
You are confusing “energy” with “entropy”. The 1st law was proven by a British spy in the Revolutionary War, who nonetheless got an elementary school named after him in Woburn, Massachusetts–Count Rumford. He showed that work could be converted to heat by a precise ratio. They were convertible. The same could not be said for entropy, the topic of the 2nd law.
Granville (and all physicists) have no trouble with converting energy and work, we just have trouble with people who convert thermal entropy into life, or Shannon entropy, or chromium entropy, or socks entropy…
Your comment about “entropy not being conserved,.. but decreasing” is mistaken. The SLoT prohibits thermal entropy of closed systems from decreasing–it is either conserved and constant, or increasing.
Likewise your comment that SLoT “is about energy” is also mistaken. The appropriate equation is dS = dQ/T. Entropy is about the ratio of energy over temperature. Quite another sort of bird altogether.

Point taken – actually I mistyped (meant “increasing”). I was thinking “order”. And yes I probably was conflating energy with entropy (joules vs joules per kelvin). However, that doesn’t actually mean that the 2LoT is not about energy: dS = dQ/T is an equation with an energy term. So how can it not be about energy? It seems to be you and Granville who think that “sock entropy” has something to do with the 2LoT!

Again, I wasn’t discussing energy gradients, but entropy gradients. So your comments aren’t relevant to my discussion. Nevertheless, atmospheric scientists refer to tornadoes as “entropy-driven” systems because of the way vorticity plays a part in their creation–NOT energy gradients!

Well, internal energy gradients, surely? But your point is taken. I shall take more care with my units!

#70 EBL
Thermodynamics is NOT primarily about energy; look at the word: “thermo” = heat, “dynamics” = motion. It’s about heat flow. Or more precisely dS = dQ/T, but I already said that.

But both heat and motion are about energy! That’s like saying that velocity is not about distance travelled!

#72, 73, 76, 86, 89 etc
Lizzie, neither dust devils nor tornadoes violate SLoT, but neither are they spontaneous. They are driven by gradients of entropy (not gradients of energy). Your Wikipedia reference to convection layers is missing a big piece of the physics, because if convection alone were enough, you should see tornadoes in every pot of water boiling on the stove.

Well, you sometimes do. And yes, of course they do not violate the SLoT, but they are perfectly “spontaneous” in the sense of not being designed. In other words, contra Granville’s claim, low entropy systems do emerge, regularly, on earth, without violating the 2LoT, and without being designed. A dust devil is precisely the kind of refrigerator you seemed to be implying required something not available on earth without some kind of intervention.

But I’m glad you brought them up–they are a good example of how to generate order using entropy gradients.

Yes, they are indeed.

Shannon entropy has different units from thermal entropy. They are as far from each other as fish and bicycles.

Yes, I know. That was my point. You can’t just take a law that applies to fish, and then express concern when it is violated by bicycles.

97.
kairosfocus says:

EL (attn RS):

Pardon a comment on your exchange with RS.

A fundamental point of phenomena in our world is that things have a temperature, which is a metric of average random energy per molecule (or similar entity) per degree of accessible freedom in light of relevant barriers of access.

This thermal energy is usually dispersed in a 3-d pattern, across translation [3 degrees corresponding to the three orthogonal axes], rotation, vibration modes etc. Consequently, heat capacity exists as a measure of how much added heat it takes to increase the temp of a given substance. “Surprise,” it varies with temperature itself in many key cases, i.e. there is a phenomenon known as freezing out of degrees of freedom.

A theoretical gateway into understanding this is from the gases, and our knowledge that bodies in thermal equilibrium A and B, and B and C, are such that A and C will also be in thermal equilibrium, i.e. temp, T, is an equivalence relationship. We can go to our marbles-in-a-piston model and work out that for Maxwell-Boltzmann statistics for an ideal gas (dilute noble gas in effect, though we can usually get away with using thin enough air etc.), the molar gas constant R (8.314 J/K mol), divided down by the number of molecules in a mole, N_A (6.02 * 10^23 or so), and with some suitable fractional parameter to adjust for number of modes, will give us a measure of the energy accessible per particle on simply being at a given temp. k_B = R/N_A is therefore a useful constant, and 1/2 * kT energy per degree of accessible freedom is a useful estimate that can be worked out from the above.
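The arithmetic just sketched is easy to check directly (my sketch; the constants are the standard exact SI values):

```python
# k_B = R / N_A, and the (1/2) k_B T rule-of-thumb energy per accessible
# degree of freedom, evaluated near room temperature.
R = 8.314462618       # molar gas constant, J/(K mol)
N_A = 6.02214076e23   # Avogadro's number, 1/mol

k_B = R / N_A                   # about 1.38e-23 J/K
E_half_kT = 0.5 * k_B * 298.0   # about 2.06e-21 J per degree of freedom at 298 K
```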

Basically, if your potential wells or accessible modes can move in increments of 1/2 * kT, then they are accessible at T. Going quantum brings to bear steps of energy and an irremovable zero point energy.

As of now we have accessible energy from the random energy per degree of freedom from having a temp. This is available locally to move things around in possibility space, leading to a natural tendency to explore the space of possibilities for energy and mass to be distributed at molecular or comparable level, in a blind, random walk.

This makes Boltzmann’s special case for entropy, where the W possible distributions are all accessible at flat random, a very important value indeed:

S = k * Log (W)

Gibbs worked out a different approach, which brings to bear that there may be potential barriers, as hinted at by RS in the initial post, so that different modes are more or less accessible on a case by case basis; essentially

S = -k_B * [SUM on i] p_i * ln(p_i)
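A quick check (my sketch) using the standard Gibbs form S = -k_B * SUM p_i * ln(p_i), showing it reduces to k_B * ln(W) when all W microstates are equally likely:

```python
import math

K_B = 1.380649e-23  # J/K

def gibbs_entropy(probs):
    """S = -k_B * sum(p_i * ln p_i) over the microstate probabilities."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0.0)

W = 1000
flat = [1.0 / W] * W
# For a flat distribution this equals k_B * ln(W), the Boltzmann special case;
# a certain outcome (p = 1) gives zero entropy.
```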

Complication. In the quantum world, often the relevant one at this stage, potential hills that have to be surmounted to move from one valley to another — the source of metastability of various configs of entities, including in chem rxns and the like — are porous, i.e. there is a finite probability of tunnelling that depends on accessible energy and strength of the barrier.

Then come the many special cases, even oil-water mixing and fluid-fluid mixing more generally. In these cases, Physicists and Chemists tend to combine laws and derive thermodynamic potentials that evaluate drivers, Gibbs — same fellow — free energy being maybe the most useful. Wiki has a nice description:

Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The Gibbs free energy is the maximum amount of non-expansion work that can be extracted from a closed system; this maximum can be attained only in a completely reversible process. When a system changes from a well-defined initial state to a well-defined final state, the Gibbs free energy delta-G equals the work exchanged by the system with its surroundings, minus the work of the pressure forces, during a reversible transformation of the system from the same initial state to the same final state.[2]

Gibbs energy (also referred to as [delta-]G) is also the chemical potential that is minimized when a system reaches equilibrium at constant pressure and temperature. Its derivative with respect to the reaction coordinate of the system vanishes at the equilibrium point. As such, it is a convenient criterion of spontaneity for processes with constant pressure and temperature . . . .

The Gibbs free energy is defined as:

G(p,T) = U + pV - TS

which is the same as:

G(p,T) = H - TS

[–> i.e. enthalpy, roughly heat content, H = U + PV, a sort of generalisation of internal energy plus pressure-volume energy]

where:

U is the internal energy (SI unit: joule)
p is pressure (SI unit: pascal)
V is volume (SI unit: m3)
T is the temperature (SI unit: kelvin)
S is the entropy (SI unit: joule per kelvin)
H is the enthalpy (SI unit: joule)

Essentially, if delta_G is negative, a spontaneous process is favoured, and if positive, it is not. At zero, we are at equilibrium and tend to stay there. This embeds entropy and involves especially changes at molecular levels. In effect, it mixes the 1st and 2nd laws to give the TdS equation, and brings the relevant factors to bear. That is, Gibbs free energy considerations are materially based on entropy considerations, and are effectively entropy enabled/disabled in key parts.
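The sign criterion can be illustrated with the melting of ice (a sketch of mine; the numbers are round illustrative values, not measured data):

```python
# delta_G = delta_H - T * delta_S; a negative value favours the process.
def delta_G(dH, T, dS):
    return dH - T * dS

dH_fus = 6010.0   # J/mol, approximate enthalpy of fusion of ice
dS_fus = 22.0     # J/(mol K), approximate entropy of fusion

dG_warm = delta_G(dH_fus, 300.0, dS_fus)  # negative: melting favoured at 300 K
dG_cold = delta_G(dH_fus, 260.0, dS_fus)  # positive: melting disfavoured at 260 K
```

The T * delta_S term is where entropy "enables or disables" the process: the same delta_H flips sign of delta_G as temperature crosses the melting point.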

As a result of these considerations, there are ways in which counting possible states is related to energy and system behaviour, via the existence of temperature.

Similarly, we have seen why energetically uphill cases tend to be disfavoured, we would have to inject energy to move uphill, energy not easily accessible in the system from temperature or easy rearrangements.

Catalysis works by lowering potential barriers through providing alternative paths, and enzymes and other biological nano-machine process units also use ATP as an energy enabler that injects the energy needed to make the leap.

Enzymes are of course proteins, ribosomes are assemblies of proteins and RNA, etc. Each of these is complex and built up from highly endothermic molecules that are highly functionally specific and complex. To set up such internal process units in a sort of nano scale chem engineering system, we have encapsulation with smart gating, organised metabolic pathways of extreme complexity and integration strongly reminiscent of a refinery network (recall those wall-sized cell biochem pathways charts?), and code based von Neumann self replication, all of which perform important and necessary functions.

Essentially, none of this is such that diffusion-like mechanisms would favour formation of such a complex organised entity.

RS has raised issues of long range interactions in response to such. This (if I understand him right . . . and note his remark on sloppy expression of what he really means) is essentially an appeal to self-organisation based on structures within a system, and perhaps to ordering based on boundary conditions.

This is of course back to the point that ordering is not the same as wiring diagram based functional organisation in the face of high contingency. Where, information holding capacity is directly linked to the requisite of high contingency.

However, obviously, if there are barriers too high for relevant accessible thermal energy to surmount (about 1/40 electron volt at “room temp” is a longstanding rule of thumb; I usually remember this from the rule-of-thumb energy of thermal neutrons in nuclear interactions), where red light photons hold about 2 eV each and blue light might be about 4 eV, accessible state counting and the assumption of random walks across the so-accessible states are going to break down. This is going to partition up access to entropy. As a useful example, a major part of why oil and water don’t mix is that the water would kill off access to many ways to interact among the oil molecules, so the statistics lock us up into separate phases. This is part of what is being exploited in the lipid bilayer membranes used by cells, but cell membranes are not just layers; they enfold sensors, sophisticated gating and more, integrated into the wider cell system.
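The 1/40 eV rule of thumb mentioned above is easy to verify (my sketch; constants are the standard SI values):

```python
K_B = 1.380649e-23     # Boltzmann constant, J/K
EV = 1.602176634e-19   # joules per electron volt

# Thermal energy scale k_B * T near room temperature, expressed in eV:
kT_eV = K_B * 300.0 / EV   # about 0.026 eV, i.e. roughly 1/40 eV

# A ~2 eV red photon carries dozens of times the thermal energy scale,
# which is why visible-light barriers are not surmounted thermally.
ratio = 2.0 / kT_eV
```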

This brings us to the significance of assembly by organised shaft work, i.e. constructive work, leading to things that exhibit functionally specific, complex organisation and associated information [FSCO/I].

It is underscored that diffusion and similar forces do not point that way, but away from it.

It is underscored that long range interactions and potential barriers etc. point to mechanically necessary ordering as opposed to highly contingent functionally specific organisation.

It is underscored that the empirically observed, reliably known source of FSCO/I is design.

Where also the living cell, including the simplest we see or can conceive, will be chock full of such FSCO/I. (Start with geometry based nanomachines dependent on homochiral molecules to work, embedding essentially 1 bit per monomer just from this.)

This brings us back home to the home base of design thinking in the world of life, OOL. Thence the observation that OOL is the necessary root of the Darwinist tree of life, so we see that no roots, no shoot, no tree.

So also, we see that the best empirically and analytically warranted account of the origin of FSCO/I — a major component of any reasonably conceivable first cell based life form — is design, serving to co-ordinate constructive work through shaft work producing energy conversion devices.

OOL is the pivot.

KF

98.
kairosfocus says:

OOPS: N_A = 6.02 * 10^23

99.
kairosfocus says:

KS: With all reasonable respect, it is high time to stop the ideological posturing, especially red herrings led away to strawmen and soaked in ad hominems then set alight to cloud, confuse, poison and polarise. Serious matters are on the table and they are being addressed seriously, hopefully this will not have to get a lot more complex beyond this point. KF

PS: With aid of Wiki etc, TMLO here [an HTML excerpt] 7 and then 8, from chs 7 – 9 especially, may be a very useful primer on the sort of issues that lurk here.

100.
kairosfocus says:

F/N: Thaxton et al use E for internal energy, not U. Also beware of the divergent terminologies on isolated, closed and open systems, also there are different sign conventions for direction of work flows relative to a system, work on system being positive or negative depending on author. And more.

101.
kairosfocus says:

F/N 2: On a re-look, I don’t know if I made it plain enough above that the availability of thermal energy in 1/2 * kT lumps allows us to bridge fish and bicycles. That is, to explore accessible sets of distributions of mass and energy on the micro level across various translation, rotation, vibration etc. modes. Thence, things like forming various chemical species and finding equilibria etc. Where also, of course, we then run into potential barriers and issues linked to that, including tunnelling, etc. And yes, that is a point where I think a lot of physicists and chemists would say that they differ with RS (KS, observe how to differ without resorting to silly personalities). I am also emphasising that ordering forces do not explain organisation to plan. Information needs to be explained, and in the end the best known source of complex functionally specific info is mind.

102.

KF and Robert (and Granville):

It seems to me this is really very simple:

The 2nd Law of thermodynamics is about thermodynamics – the flow of heat, as Robert points out.

The 2nd Law says that in a closed system the spontaneous direction of change is in the direction of increased entropy. This cannot refer to any kind of entropy other than thermodynamic entropy, measured in energy units/temperature units, because it isn’t true for other entropies.

For example, if I have a code consisting of 1’s and 0’s with an entropy of, say, 100 bits, there is no law that says that that code cannot spontaneously increase or decrease in entropy. It would be meaningless, because the 1s and 0s in a computer code don’t spontaneously do anything. Not only that, but the entropy of that code is exactly the same whether the code does something interesting like print ‘hello world’ or whether it just makes the computer freeze. And not only that, but the greater the entropy, the more useful work the code can do. Precisely the opposite of thermodynamic entropy.
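The point that informational entropy measures the symbol distribution, not what the code does, can be sketched like this (the two strings are my examples, not from the thread):

```python
import math
from collections import Counter

def shannon_bits_per_symbol(s):
    """Estimate Shannon entropy (bits per symbol) from symbol frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Two very different bit strings with the same 0/1 balance score identically:
useful = "01" * 50              # alternating pattern
trivial = "0" * 50 + "1" * 50   # all zeros then all ones
# both give 1.0 bit per symbol; the measure says nothing about function
```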

Or let’s take the “frequency entropy” of the distribution of frequencies emitted by a black body radiator. The entropy of the frequencies is higher for a hot body than for a cool body, because it’s broader band. So in that case, the spontaneous direction of entropy is to decrease, not increase.

Sure you can apply thermodynamic entropy to arrangements of macroscopic things, rather than to arrangements of atoms or molecules, but that still doesn’t alter the units, which remain energy/temperature. A sofa up a tree has less thermodynamic entropy than the same sofa before it was lifted into the tree by a tornado. As you will discover when eventually its entropy dramatically increases when the branch breaks and it falls on your head.

So you simply can’t rescue Granville’s argument by saying, well, he wasn’t actually talking about thermodynamic entropy.

If he wasn’t, the second law doesn’t apply. And if he was, the existence of tornadoes themselves, and the work they do, demonstrate that he is incorrect.

103.

Elizabeth B Liddle

“So you simply can’t rescue Granville’s argument by saying, well, he wasn’t actually talking about thermodynamic entropy. If he wasn’t, the second law doesn’t apply. And if he was, the existence of tornadoes themselves, and the work they do, demonstrate that he is incorrect. I think CSI is a circular concept.”

Prof. Sewell talks about statistical mechanics entropy and the SLoT in its statistical mechanics sense. Thermodynamics, in that sense, is not only about joules and heat. Please, don’t try to hide behind a finger…

The existence of tornadoes themselves, and the work they do, perfectly demonstrate that Sewell is correct. They are war-machines of destruction and well represent the destructive nature of entropy increase. As for the inverse, entropy decrease: you evolutionists, don’t continue to betray yourselves and others by attributing to it a miraculous constructive and organizational power that entropy decrease does not have.

About CSI as a circular concept. Your posts are CSI, what else. If CSI is circular, then all you say is circular too.

104.

Prof. Sewell talks about statistical mechanics entropy and the SLoT in its statistical mechanics sense. Thermodynamics, in that sense, is not only about joules and heat. Please, don’t try to hide behind a finger…

And in the statistical mechanics sense it is about energy.

If you do not state what the probabilities in the formula are the probabilities of you will simply end up with a number that could mean anything.

And the only thing that the 2nd Law of Thermodynamics applies to is the distribution of energy states.

You can get the “entropy” of other things of course, and we do. But the 2nd Law won’t necessarily hold true for those things.

It would be like saying that the law “hot air rises” could be applied to anything and still be true, including hot bricks.

The existence of tornadoes themselves, and the work they do, perfectly demonstrate that Sewell is correct. They are war-machines of destruction and well represent the destructive nature of entropy increase.

No they do not. A tornado often rearranges things so that the arrangement has less entropy than it did before, such as piling things up, or depositing things in high places.

As for the inverse, entropy decrease: you evolutionists, don’t continue to betray yourselves and others by attributing to it a miraculous constructive and organizational power that entropy decrease does not have.

Of course it doesn’t. Nobody is claiming that it does. The entire thing is a straw man.

105.
kairosfocus says:

EL:

I have taken time to show how and why thermodynamics is not simpliciter about flow of heat, but takes in a LOT of molecular and similar scale forces and phenomena, in the first instance. It extends to macro phenomena; as a classic case in point, diffusion is not about heat but about molecular randomness.

Through issues on conservation and degradation of energy it has a lot to say about macro-scale phenomena.

Through the connexion to information and to config spaces, it speaks much to relevant phenomena of information and organisation.

As for the latest attempt to dismiss complex specified info, the materially relevant part is functionally specific complex info, often implicit in wiring-diagram [nodes and arcs] organisation. Describing something that can be observed to work or not work in a relevant context [functional specificity], and is dependent on putting many matched parts [complexity] together in a narrow cluster of ways [specific organisation] from the set of possible ways to clump or scatter such across our planet or solar system or observed cosmos, is plainly objective. Indeed, by describing something as functional in a particular way, we have a quick and dirty way to specify it without having to list off its parts, arrangement and coupling in detail.

Your post and this are both ASCII coded posts in English in the context of a thread and exhibit FSCO/I.

It is time that you accepted reality instead of trying to rhetorically brush it off.

KF

106.
kairosfocus says:

PS: And BTW, the inference that we are dealing with shaft work carrying out constructive work resulting in FSCO/I, not diffusion-like forces, has nothing to do with the usual false dichotomy of objectors to the design inference, that it is about a contrast of natural vs supernatural. As has been stated, cited and explained and routinely willfully ignored in haste to get back to strawman talking points, from at least Plato on the record, the proper contrast is nature = chance + necessity acting spontaneously [in the Gibbs sense] vs art acting by constructive work. If you will not acknowledge by now that design theorists from Thaxton et al on openly and freely acknowledge that design inferences on the observed world of life do not allow us to infer from that to designers within or beyond the cosmos, it reflects sustained willful misrepresentation on your side’s part, not honest discussion or disagreement. Have at minimum the honesty to acknowledge that we have openly said that for nigh on 30 years.

107.

KF

I have taken time to show how and why thermodynamics is not simpliciter about flow of heat,

but it IS about the flow of energy.

but takes in a LOT of molecular and similar scale forces and phenomena, in the first instance. It extends to macro phenomena

Indeed. As in my example of a tornado that lifts a sofa into a tree.

and as a classic case in point, diffusion is not about heat but about molecular randomness.

And that randomness is about the kinetic energy of the molecules, and thus about heat.

108.
Elizabeth B Liddle says:

I do think that the major contributor to misunderstanding in these discussions is lack of clarity regarding what a probability is.

A probability is just a number between 0 and 1, and can be converted into “bits” by taking the negative log.
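[Editorial aside: the negative-log conversion mentioned here can be sketched in a couple of lines of Python; this illustration is not part of the thread.]

```python
import math

def bits(p):
    """Surprisal of an outcome with probability p, in bits: the negative log base 2."""
    return -math.log2(p)

print(bits(0.5))   # 1.0  (a fair coin flip carries one bit)
print(bits(0.25))  # 2.0  (a 1-in-4 outcome carries two bits)
```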

It tells us absolutely nothing on its own, and is not the property of a pattern, but may be the probability of a pattern (or of one of a class of patterns) under given conditions. Robert makes this clear, pointing out that 1 over the number of possible permutations of a pattern is not the same as the probability of each permutation, because there may be physical constraints that favour some more than others.

And that is absolutely key. If we see a rather special-looking pattern, we cannot simply say: this is improbable, it must have been designed. We must also ask: what physical constraints might have made this pattern probable?

And just as certain physical constraints make a dust devil probable on a cool sunny day in a dry desert – in other words, make a local entropy decrease probable – so we have no a priori reason to rule out physical constraints that made self-replication probable on earth 3 or 4 billion years ago.

And certainly, no reason to infer that there were not simply because the probability under random walk is prohibitively low.

And, Kairosfocus, this is not a “talking point”: it is simply a point which I would be very pleased to see you address.

109.
niwrad says:

Elizabeth B Liddle

No they do not [tornados do not destroy]. A tornado often rearranges things so that the arrangement has less entropy than it did before, such as piling things up, or depositing things in high places (a tornado lifts a sofa into a tree).

Elizabeth you are priceless. You (Evolutionists & Co.) should be here to show us how spontaneous organization (evolution) arises instead of following the general towards-disorder trend of the SLoT, and what do you offer? A tornado that lifts a sofa into a tree! 🙂

And in the statistical mechanics sense it is about energy. And the only thing that the 2nd Law of Thermodynamics applies to is the distribution of energy states. You can get the “entropy” of other things of course, and we do. But the 2nd Law won’t necessarily hold true for those things.

Disagree. The SLoT of thermodynamics/statistical mechanics deals with matter and energy and microstates and macrostates and entropy and order and information and organization and CSI…

110.
CS3 says:

keithS: How do you reconcile the First Law with the Big Bang? Could there then not have also been one (or perhaps more) creative events (whether due to intelligence or not) in the history of the origin and/or development of life which also “violated” at least the normal understanding of a fundamental natural law? At least in theory, even if you do not believe the evidence currently indicates such?

On a related note, is there any reason that multiverses and the anthropic principle can be used to explain why we are in a universe with laws finely-tuned for the development of life, but could not be used, at least in theory, to explain one or more extremely improbable events in the origin and/or development of life (other than that allowing such a possibility might give some credibility to those who have claimed all along that materialist origin of life and evolutionary theories are insufficient)?

111.
kairosfocus says:

EL:

If you will take time to notice you will see that, first, Robertson is using the Gibbs approach, which directly reckons with varied probabilities of accessing modes. The Gibbs formulation reduces to Boltzmann on a flat random distribution but is not locked down to it. However the Boltzmann approach helps us see what is going on by using an instructive simple case.

Second, the info measure of entropy is similarly based on that same approach, i.e. the flat random probability is only a special case and is not a main part of the analysis.
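[Editorial aside: the claim that the Gibbs form reduces to the Boltzmann count on a flat distribution is easy to check numerically. A minimal Python sketch, with k set to 1:]

```python
import math

def gibbs_entropy(probs, k=1.0):
    """S = -k * sum(p_i * ln p_i), the Gibbs/Shannon form of the entropy."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

omega = 8
flat = [1 / omega] * omega
# For a flat (equiprobable) distribution the Gibbs form collapses to k * ln(Omega),
# i.e. the Boltzmann count.
assert abs(gibbs_entropy(flat) - math.log(omega)) < 1e-12

# Any bias among the microstates pulls the entropy below that flat-distribution maximum.
biased = [0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03]
assert gibbs_entropy(biased) < math.log(omega)
```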

In the case of Durston et al, the shift from null state to ground state to functional state on empirical observation reckons with that too.

Finally, in the FSCO/I threshold the metric is NOT — repeat, NOT — built around probabilities but something far more blunt but effective for a threshold: sampling.

500 bits worth of complexity swallows up the search resources of our solar system in the proportion of a blind sample of 1 straw to a cubical haystack 1,000 light years thick. That is, if you were to superpose the haystack on our galactic neighbourhood and pull a straw-sized sample, then regardless of the many thousands of star systems in it, with all but absolute certainty you will only pull a straw, because the bulk so utterly dominates. This is the same point in thermodynamics.
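[Editorial aside: the arithmetic behind the 500-bit threshold can be sketched in Python. The resource figures below are the rough round numbers typically used in such arguments, not measured quantities:]

```python
# Illustrative arithmetic only; the inputs are assumed orders of magnitude.
config_space = 2 ** 500              # number of distinct 500-bit configurations
atoms = 10 ** 57                     # often-quoted atom count for the solar system
ops_per_atom_per_sec = 10 ** 45      # a generous, roughly Planck-rate sampling figure
seconds = 10 ** 17                   # order of the age of the cosmos in seconds

max_samples = atoms * ops_per_atom_per_sec * seconds
fraction = max_samples / config_space
print(f"{fraction:.0e}")  # about 3e-32 of the space: a vanishingly small blind sample
```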

So as has been said to you umpteen times and studiously ignored in haste to find any dodge, the matter is settled simply on the relative rarity of FSCO/I, such that we have no reason to expect so small a relative sample, if blind, to capture it. Where the req’t of multi-part matching, arrangement and coupling to yield function guarantees that we are dealing with rather special and unusual arrangements.

But then by now I need to recognise that you will probably never acknowledge this point, so this is a note for onlookers, in the absence of reasonable responsiveness on your part.

KF

PS: As has been explained, thermodynamics, especially the statistical form, is concerned with a lot more than heat flows. Energy flows is not a good term, as work, for just one instance, is not a flow. There are a great many linked phenomena that are not really about heat and heat engines, e.g. viscosity, diffusion, etc. All of this is part of why I gave that pistons-and-marbles thought exercise to help see this. But instead of accepting that this has some significance in helping the onlooker, this was derided uncivilly as “spamming.” Translation: we hold you in contempt and have not the slightest intent to listen to or even put up with what you have to say — much less acknowledge that you just may have some knowledge here. After all, design thinkers and the like are all obviously ignorant, stupid, insane or wicked.

112.
keiths says:

Lizzie:

And, Kairosfocus, this is not a “talking point”: it is simply a point which I would be very pleased to see you address.

Lizzie,

You don’t understand. Every point made by an Alinskyite evo mat propagandist such as you is a drumbeat strawman talking point, soaked in oil of ad hominem and set alight to cloud, poison, and polarise the atmosphere, and also to homosexualise the sacred institution of marriage, as Plato warned. Why do you persist in the teeth of correction?

Whereas KF’s points are straight from the mouths of angels and always linked to boot.

113.
kairosfocus says:

F/N 2: It is always a sign that the objections are breaking down when there is the attempt to resort to multiverses and some version or other of the anthropic principle. The problem is that this is ad hoc, first, and has utterly zero actual empirical warrant. Second, even if there were a multiverse, the problem is that the observed cosmos sits at a LOCALLY fine tuned operating point suited for life. That is, in John Leslie’s example, it is like a long wall with a local zone in which there is just one fly. Swat, it is hit by a bullet. Even if there are other parts of the wall elsewhere that positively are carpeted with flies, so that a bullet hitting anywhere would smash a fly, the reasonable person on seeing this case would infer to a good marksman wielding a tack-driving rifle, which is itself a serious challenge. Onlookers will want to look here and onwards linked for more.

114.
keiths says:

CS3,

How do you reconcile the First Law with the Big Bang?

It’s pretty easy, because the universe’s positive energy is offset by the negative energy of gravitation. Most cosmologists therefore think that the net energy of the universe is zero.

And sure, what we refer to as laws aren’t necessarily sacrosanct and inviolable. The first law can be violated if you “repay” the debt fast enough — this is what vacuum fluctuations are about — and the second law is actually expected to be violated if enough time elapses or if the system involved is small enough, since it is a statistical law, not an absolute one.
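[Editorial aside: the “small enough system” point can be illustrated with a standard textbook toy model, not keiths’ own example: the probability that all N molecules of a gas momentarily occupy one chosen half of a box falls off as 2^-N.]

```python
def p_all_one_side(n):
    """Probability that n independent molecules all sit in one chosen half of a box."""
    return 0.5 ** n

print(p_all_one_side(4))    # 0.0625: quite observable for a handful of particles
print(p_all_one_side(100))  # ~7.9e-31: effectively never, even for a tiny droplet
```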

However, to suggest, as Robert does, that the second law is continually being violated all over the world every time a plant grows is rather extreme even by crank standards.

As is suggesting that Granville has made “an important contribution to physics” with his “X-entropy.”

115.
kairosfocus says:

KS: I think you need to read here and then go look yourself in the eye in a mirror. Your tactics are straight out of the rulebook. Alinsky’s rulebook. KF

116.
kairosfocus says:

F/N:

It’s pretty easy, because the universe’s positive energy is offset by the negative energy of gravitation. Most cosmologists therefore think that the net energy of the universe is zero.

This is a very poorly worded, popular way of speculating that we are in effect a fluctuation of a wider universe as a whole. For which the empirical evidence is nil. This is speculative metaphysics, not science, and it is often very badly done — e.g. something from nothing (non-being). Because those who do it are not particularly qualified in metaphysics.

KF

PS: Onlookers, it has long since been pointed out that there are good reasons to partition entropy, KS is just playing the usual irresponsible talking point strawman tactic games he seems to have played for years. The aim is to swarm down by drumbeat repetition, wearing down. (Americans expect knockout quick wins. This is a case where ideologues will need to know they are only succeeding in showing how unreasonable, irresponsible and outright uncivil they are, all warning flags.)

117.
keiths says:

KF:

For which the empirical evidence is nil.

KF, you crack me up.

Recent WMAP measurements show that the universe is flat to within 0.4%. Flat means zero energy.

You are terrible at bluffing.

118.
kairosfocus says:

EL:

just as certain physical constraints make a dust devil probable on a cool sunny day in a dry desert – in other words, make a local entropy decrease probable – so we have no a priori reason to rule out physical constraints that made self-replication probable on earth 3 or 4 billion years ago.

1 –> Whirlwinds are observed; speculated self-replicating molecules simply are not and have not been. Why are you trying to suggest the second is as empirical as the first? Or that we must answer to air-castle molecules with speculated properties as though they were realities? Show them real first, then we can do science, as in based on observations. Absent that you are talking materialist fantasies.

2 –> The only self replicating life observed uses gated encapsulation, metabolism using nanotech molecular machines and a code based replication facility following von Neumann’s kinematic self replicator architecture. Such is chock-full of FSCO/I, manifests oodles of constructive work using FSCO/I rich machines, and we know but one empirically substantiated, analytically credible source for FSCO/I.

3 –> Worse, dust devils are a manifestation of mechanical necessity riding on convection processes. Like hurricanes, they reflect order, not organisation on a Wicken wiring diagram, i.e. there is a category confusion linked to your refusal to acknowledge the logic of the design explanatory filter’s first step, namely that necessity leading to regularity is distinct from high contingency tracing to chance or choice where also complex functional organisation has but one empirically warranted explanation, design.

4 –> Notice Wiki:

Dust devils form when hot air near the surface rises quickly through a small pocket of cooler, low-pressure air above it. If conditions are just right, the air may begin to rotate. As the air rapidly rises, the column of hot air is stretched vertically, thereby moving mass closer to the axis of rotation, which causes intensification of the spinning effect by conservation of angular momentum. The secondary flow in the dust devil causes other hot air to speed horizontally inward to the bottom of the newly forming vortex. As more hot air rushes in toward the developing vortex to replace the air that is rising, the spinning effect becomes further intensified and self-sustaining. A dust devil, fully formed, is a funnel-like chimney through which hot air moves, both upwards and in a circle. As the hot air rises, it cools, loses its buoyancy and eventually ceases to rise. As it rises, it displaces air which descends outside the core of the vortex. This cool air returning acts as a balance against the spinning hot-air outer wall and keeps the system stable.[4]

The spinning effect, along with surface friction, usually will produce a forward momentum. The dust devil is able to sustain itself longer by moving over nearby sources of hot surface air.

As available extreme hot air near the surface is channelled up the dust devil, eventually surrounding cooler air will be sucked in. Once this occurs, the effect is dramatic, and the dust devil dissipates in seconds. Usually this occurs when the dust devil is not moving fast enough (depletion) or begins to enter a terrain where the surface temperatures are cooler, causing unbalance.

5 –> Order, shaped by initial conditions, boundary conditions and mechanical laws.

6 –> Contrast Wicken and Orgel, as has repeatedly been brought to your attention and just as repeatedly ignored the better to repeat long since cogently answered assertions — yes, talking points repeated in an ad nauseam drumbeat regardless of cogent response or correction — to the point of willfulness. Let me clip from point 8 at 41, dismissed by KS as spam, reflecting his own refusal to attend to cogent material:

WICKEN, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)] >>

ORGEL, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]

7 –> Here is a key part of what I noted on vortices and crystals in my clipped remarks on hurricanes and snowflakes, in 42 above:

A tropical cyclone is by and large shaped by convective and Coriolis forces acting on a planetary scale over a warm tropical ocean whose surface waters are at or above about 80 degrees F. That is, it is a matter of chance + necessity leading to order under appropriate boundary conditions, rather than to complex, functionally specified information.

Similarly, the hexagonal, crystalline symmetry of snowflakes is driven by the implications of the electrical polarisation in the H-O-H (water) molecule — which is linked to its kinked geometry, and resulting hexagonal close packing.

8 –> Now you tried to make much of how forming an ordered vortex implies a local reduction in entropy, as if voila, reductions in entropy resulting in constructive work creating FSCO/I rich systems can be bought so cheaply.

9 –> There is a basis for the structure of a crystal or vortex that reflects order. The properties of molecules, atoms or ions, the specifics of convectional circumstances. None of these has anything to do with aperiodic systems built up in accordance with wiring diagrams or coded complex specifications such as we see with say proteins.

10 –> In short, FSCO/I patently cannot be had on the cheap. Let me cite Thaxton et al on the matter, TMLO ch 8:

Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system—enzymes and nucleic acids—are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of “letters” in the book is highly ordered, but the book contains little information since the information presented—the single word or sentence—is highly redundant.

It should be noted that aperiodic polypeptides or polynucleotides do not necessarily represent meaningful information or biologically useful functions. A random arrangement of letters in a book is aperiodic but contains little if any useful information since it is devoid of meaning.

[NOTE: H.P. Yockey, personal communication, 9/29/82. Meaning is extraneous to the sequence, arbitrary, and depends on some symbol convention. For example, the word “gift,” which in English means a present and in German poison, in French is meaningless].

Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being aperiodic and in a specified sequence . . . .

Three sets of letter arrangements show nicely the difference between order and complexity in relation to information:

1. [Class 1:] An ordered (periodic) and therefore specified arrangement:

THE END THE END THE END THE END

Example: Nylon, or a crystal . . . .

2. [Class 2:] A complex (aperiodic) unspecified arrangement:

AGDCBFE GBCAFED ACEDFBG

Example: Random polymers (polypeptides).

3. [Class 3:] A complex (aperiodic) specified arrangement:

THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

Example: DNA, protein.

Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

________
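[Editorial aside: the order/complexity contrast among the three quoted classes can be crudely probed with a general-purpose compressor: periodic Class 1 strings compress heavily, while aperiodic Class 2 and Class 3 strings do not. A rough Python sketch; compression ratio is only a loose stand-in for the “order” under discussion, and notably it cannot tell Class 2 from Class 3:]

```python
import zlib

def compression_ratio(s):
    """Compressed length over original length; low values indicate redundancy/order."""
    data = s.encode()
    return len(zlib.compress(data, 9)) / len(data)

periodic   = "THE END " * 16                                 # Class 1: ordered
random_ish = "AGDCBFE GBCAFED ACEDFBG"                       # Class 2: aperiodic
specified  = "THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!"  # Class 3: aperiodic

# The periodic string compresses far better than either aperiodic one.
assert compression_ratio(periodic) < compression_ratio(random_ish)
assert compression_ratio(periodic) < compression_ratio(specified)
```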

The category confusion is duly corrected. At least for the benefit of onlookers, for on long track record you simply will not listen or respond appropriately. (I wish that at length you will show me wrong, but every evidence points to my being right.)

KF

119.
kairosfocus says:

Onlookers, yet another KS strawman, duly laced with ad hominems and set alight to cloud the issue, poison and polarise the atmosphere. There is no evidence observed of a multiverse or another wider universe that threw this one up as a fluctuation; all is admitted speculation. KS cites evidence pointing to some aspects of fine tuning, if anything, and then pretends to find his underlying sub-cosmos to bubble us up as if it were empirically observed rather than a highly speculative model. KF

120.
keiths says:

KF,

I didn’t say anything about a multiverse or a fluctuation.

I explained to CS3 that the Big Bang is not an example of a first law violation, because the net energy of the universe appears to be zero:

It’s pretty easy, because the universe’s positive energy is offset by the negative energy of gravitation. Most cosmologists therefore think that the net energy of the universe is zero.

You disputed that, and you were wrong. Again.

121.
CS3 says:

It’s pretty easy, because the universe’s positive energy is offset by the negative energy of gravitation. Most cosmologists therefore think that the net energy of the universe is zero.

Even assuming this is true, would it not be fair to say that this represents, as I said, a “violation” of what was, at least at one point, “the normal understanding of a fundamental natural law”?

In fact, many scientists argued against the Big Bang because it violated their notion of the fundamentals of science:

“Some prominent scientists began to feel the same irritation over the expanding universe that Einstein had expressed earlier. Eddington wrote in 1931, ‘I have no ax to grind in this discussion, but the notion of a beginning is repugnant to me. The expanding universe is preposterous…incredible, it leaves me cold.’ The German chemist Walter Nernst wrote ‘To deny the infinite duration of time would be to betray the very foundation of science.'”

So, is it not possible that a conflict could be found between, say, the current materialist view of the origin and development of life and the current understanding of what the fundamental laws of nature are? Maybe there is something missing in one of these two current understandings. Maybe that something is “natural” (like “negative energy of gravitation”), or maybe it is not.

122.
bornagain77 says:

as to

It’s pretty easy (to explain the Big Bang), because the universe’s positive energy is offset by the negative energy of gravitation. Most cosmologists therefore think that the net energy of the universe is zero.

Seems somebody is channeling Peter Atkins

William Lane Craig vs Peter Atkins (HQ) 4a/11

Craig’s rejoinder to Atkins’ argument, at the 50:00 minute mark of this following video, is classic:

William Lane Craig vs Peter Atkins: “Does God Exist?”, University of Manchester, October 2011

At least Krauss’s argument from nothing is not quite as absurd as Atkins’ is/was:

Why Atheism Is Nonsense – Part 2 “Something is Nothing”

Please note that Krauss states at the 2:20 mark that the energy of empty space is not zero (i.e. does not balance to zero as Atkins holds) but is a ‘gazillion times the energy we see’. I believe what Krauss is referring to is this fact:

Vacuum energy
Excerpt: Vacuum energy is an underlying background energy that exists in space even when the space is devoid of matter (free space). (Vacuum energy has a postulated) value of 10^113 Joules per cubic meter.

and:

(10^113 joules) per (cubic meter) = 10^113 pascals (Pa)

and

4.6×10^113 Pa ≈ 6.7×10^109 psi;

Of note: The Planck pressure (4.63×10^108 bar) is/was not reached except shortly after the Big Bang or in a black hole.
http://en.wikipedia.org/wiki/O.....ressure%29
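[Editorial aside: the unit conversions in the note check out mechanically, since a joule per cubic metre is dimensionally a pascal and the standard factor is roughly 6894.76 Pa per psi. A quick Python check using only standard conversion factors; nothing here is from the thread itself:]

```python
# Standard conversion factors: 1 J/m^3 == 1 Pa, and 1 psi ≈ 6894.757 Pa.
PA_PER_PSI = 6894.757

vacuum_pressure_pa = 4.6e113   # the postulated vacuum energy density quoted above
vacuum_pressure_psi = vacuum_pressure_pa / PA_PER_PSI
print(f"{vacuum_pressure_psi:.1e}")  # ≈ 6.7e109 psi, matching the figure in the note
```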

Of course the problem for Krauss was that he tried to redefine nothing as this empty space that is boiling with virtual particles. The problem with all that, of course, is that space-time is also shown to have come into existence at the Big Bang; thus Krauss does not even have space-time filled with virtual particles to appeal to, since even space-time did not exist. But I guess all that is just a small detail that could be overlooked if your primary goal is to deny Genesis in the first place:

Notes:

Mathematics of Eternity Prove The Universe Must Have Had A Beginning – April 2012
Excerpt: Cosmologists use the mathematical properties of eternity to show that although universe may last forever, it must have had a beginning.,,, They go on to show that cyclical universes and universes of eternal inflation both expand in this way. So they cannot be eternal in the past and must therefore have had a beginning. “Although inflation may be eternal in the future, it cannot be extended indefinitely to the past,” they say.
They treat the emergent model of the universe differently, showing that although it may seem stable from a classical point of view, it is unstable from a quantum mechanical point of view. “A simple emergent universe model…cannot escape quantum collapse,” they say.
The conclusion is inescapable. “None of these scenarios can actually be past-eternal,” say Mithani and Vilenkin.
Since the observational evidence is that our universe is expanding, then it must also have been born in the past. A profound conclusion (albeit the same one that led to the idea of the big bang in the first place).
http://www.technologyreview.com/blog/arxiv/27793/

“Every solution to the equations of general relativity guarantees the existence of a singular boundary for space and time in the past.”
(Hawking, Penrose, Ellis) – 1970

Big Bang Theory – An Overview of the main evidence
Excerpt: Steven Hawking, George Ellis, and Roger Penrose turned their attention to the Theory of Relativity and its implications regarding our notions of time. In 1968 and 1970, they published papers in which they extended Einstein’s Theory of General Relativity to include measurements of time and space.1, 2 According to their calculations, time and space had a finite beginning that corresponded to the origin of matter and energy.”3
Steven W. Hawking, George F.R. Ellis, “The Cosmic Black-Body Radiation and the Existence of Singularities in our Universe,” Astrophysical Journal, 152, (1968) pp. 25-36.
Steven W. Hawking, Roger Penrose, “The Singularities of Gravitational Collapse and Cosmology,” Proceedings of the Royal Society of London, series A, 314 (1970) pp. 529-548.
http://www.big-bang-theory.com/

Not Understanding Nothing – A review of A Universe from Nothing – Edward Feser – June 2012
Excerpt: A critic might reasonably question the arguments for a divine first cause of the cosmos. But to ask “What caused God?” misses the whole reason classical philosophers thought his existence necessary in the first place. So when physicist Lawrence Krauss begins his new book by suggesting that to ask “Who created the creator?” suffices to dispatch traditional philosophical theology, we know it isn’t going to end well. ,,,
,,, But Krauss simply can’t see the “difference between arguing in favor of an eternally existing creator versus an eternally existing universe without one.” The difference, as the reader of Aristotle or Aquinas knows, is that the universe changes while the unmoved mover does not, or, as the Neoplatonist can tell you, that the universe is made up of parts while its source is absolutely one; or, as Leibniz could tell you, that the universe is contingent and God absolutely necessary. There is thus a principled reason for regarding God rather than the universe as the terminus of explanation.
http://www.firstthings.com/art.....ng-nothing

US lab reaches Big Bang heat of four trillion degrees Celsius – 17 Feb 2010
Excerpt: US physicists have created matter at around four trillion degrees Celsius, the hottest temperature ever reached in a laboratory, simulating the “quark soup” that scientists believe existed at the universe’s birth.
A spokesman for the Department of Energy lab where the record-breaking temperature was reached said the effect was achieved by slamming together gold ions travelling at nearly the speed of light inside the Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) – an “atom smasher” with a 2.4-mile (3.8-kilometer) circumference.
The ultra-high temperature is higher than that needed to melt protons and neutrons into a plasma of quarks and gluons, the substance that filled the universe a few microseconds after it came into existence 13.7 billion years ago, according to the spokesperson.
The plasma of four trillion degrees Celsius (7.2 trillion degrees Fahrenheit) – 250,000 times hotter than the center of the sun – existed for only a few microseconds after the birth of the universe.
http://www.telegraph.co.uk/exp.....lsius.html

123.
5for says:

Niwrad @109 – you are the priceless one. Lizzie did not say tornadoes do not destroy. She said no, they do not represent the destructive nature of entropy increase. Your attempt to lampoon what she said by misquoting her was noticed by this onlooker. For shame!

124.
bornagain77 says:

So 5for, you are cosigning tornadoes?? Someone seems to have no shame!

125.
kairosfocus says:

Onlookers: KS is too educated not to know that a pre-existing gravity field in which our observed cosmos forms at an instant and starts to expand is an underlying assumed pre-universe. He is also too intelligent and educated not to know that such a fluctuation would be contingent and thus either chance or choice. It being known that he is an evolutionary materialist or a fellow traveller, it is not hard to see which is his choice. Just, he wishes to get something for and from nothing, without reckoning what a real nothing is: non-being. Cf. the recent discussion of such poorly thought through metaphysics presented in a lab coat, here at UD, on pulling a cosmos out of a non-existent hat. KF

126.
kairosfocus says:

5for: Tornadoes do not perform constructive work in accord with a Wicken wiring diagram, yielding entities with FSCO/I such as Jumbo Jets; nor, going to diffusion and the like at micro scale, do forces that scatter concentrations in accordance with random exploration of accessible microstates credibly get us to the living cell out of Darwin’s warm little pond or the like. Again, the notion that design advocates are arguing that there are no subsystems that undergo entropy reduction is a strawman. (E.g. a hot body interacting with a cold one by passing d’Q of energy loses numbers of ways that energy and mass may be distributed at micro level, in such a way that the receiving subsystem then in aggregate increases its number of ways to such an extent that, net, there is at least conservation of entropy.) What is being pointed out is that the very cluster of forces and mechanisms, such as diffusion and the like, that lead to this are such that we are not at all credibly going to get from them the constructive shaft work, in accord with a Wicken wiring diagram, that issues in FSCO/I-rich subsystems. And where the heat engine or energy conversion device that performs the sort of constructive work we are discussing (such as assembling a protein step by step) itself exhibits FSCO/I, that too needs to be explained. The only empirically grounded, analytically credible cause of FSCO/I is design. Attempts to evade this by appealing to spontaneous order tracing to convection and boundary conditions leading to vortices, or to crystallisation on freezing by extracting the energy that prevents polarised molecules from forming an ordered structure, a crystal, reflect a fundamental confusion between randomness, order and specified complexity, especially functionally specified complexity.
The attempt to pull the rabbit of a living, metabolising, encapsulated, gated, von Neumann, code-based self-replicating cell out of such hoped-for lucky noise is a mark of the bankruptcy of evolutionary materialism right from the root of the tree of life. So, we may properly point out: no roots, no shoots, branches or twigs. Design sits at the table as of right, on well-warranted induction backed up by the statistical analysis that undergirds thermodynamics, right from the root of the tree of life. KF
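[Editor’s note: the parenthetical above describes the standard second-law bookkeeping for heat passed from a hot body to a cold one. A minimal sketch of that accounting, using illustrative values chosen for this example (d’Q = 100 J, T_hot = 400 K, T_cold = 300 K), is:]

```python
# Entropy bookkeeping for d'Q of heat passed from a hot body to a cold one.
# All numeric values here are hypothetical, chosen only to illustrate the point.
dQ = 100.0      # joules transferred from hot to cold
T_hot = 400.0   # temperature of the hot body, kelvin
T_cold = 300.0  # temperature of the cold body, kelvin

dS_hot = -dQ / T_hot    # hot body loses entropy (fewer accessible microstates)
dS_cold = dQ / T_cold   # cold body gains more entropy than the hot body lost
dS_net = dS_hot + dS_cold

print(dS_hot, dS_cold, dS_net)
assert dS_net >= 0  # net entropy does not decrease: at least conservation
```

Because T_cold < T_hot, the gain dQ/T_cold always exceeds the loss dQ/T_hot, so the local entropy reduction in the hot subsystem is more than compensated elsewhere, exactly as the comment grants.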

127.
keiths says:

KF,

You’re missing the point. If the universe has zero net energy, as the WMAP measurements suggest (to within 0.4%!), then the Big Bang cannot have violated the first law, regardless of whether it was the result of a fluctuation or some other mechanism.

x + 0 = x, for all x.
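[Editor’s note: keiths’ argument is that if positive mass-energy is exactly cancelled by negative gravitational energy, total energy is zero throughout, so no first-law violation is needed for the universe to appear. A toy sketch of that ledger, with entirely hypothetical magnitudes, is:]

```python
# Zero-net-energy ledger for the "universe from nothing" argument.
# The magnitudes below are hypothetical; only the cancellation matters.
E_matter = 1.0e70    # positive mass-energy content (joules), illustrative
E_gravity = -1.0e70  # negative gravitational potential energy (joules)

E_total = E_matter + E_gravity
assert E_total == 0.0  # total is zero before and after: x + 0 = x
```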

128.
keiths says:

CS3,

If I’m reading you correctly, you’re asking whether it’s possible in principle for us to discover violations of what we think are fundamental physical laws.

The answer is yes, of course. It’s possible, but they are considered laws for a reason, and so the evidence that they have been violated has to be very strong in order to convince us.

Recall the intense scrutiny that followed the announcement by the OPERA team that they had observed neutrinos apparently moving faster than the speed of light.

Yet here we have Granville casually asserting that there are thousands of different kinds of entropy, with a different second law for each. We have Robert backing him up, and even claiming that the second law is violated constantly by growing plants. Bizarre!

In short, we have crank science of a high order.

I do not understand why ID supporters embrace crank science so readily. I suppose it has something to do with the fact that they think that they’re right and that the vast majority of scientists are wrong. If scientists are deluded about evolution, the ID supporter thinks, then they may be deluded about thermodynamics, or climate change, or the age of the earth.

In the case of Granville and Robert, they seem to sincerely believe that Granville has stumbled upon something of great value to physics — something that everyone else is just too blind to see. It’s ludicrous, especially since Granville knows very little about thermodynamics and makes a raft of embarrassing errors in his paper.

If ID wants to be taken seriously by science, it needs to start by taking science seriously. Indiscriminately embracing cranks is not the way to do that.

129.
Lizzie says:

Btw, my tooth came out so easily, I didn’t have time to figure out the answer to cantor’s challenge. Although in the end I simply remembered the formula and reverse-engineered the logic (googled my brain, as it were). So I cheated.

Good to have done though, and thanks for the challenge, cantor!

(And recovery is going well – mild analgesics seem to be adequate, and porridge + bananas isn’t too bad a diet)

130.
keiths says:

That’s good news, Lizzie. I’m glad it went well.