# A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 2

September 5, 2012 | Posted by scordova under Biophysics, Comp. Sci. / Eng., Complex Specified Information, ID Foundations, Informatics, Physics, Self-Org. Theory |

For a biological system to gain biological complexity, it often requires a substantial increase in thermodynamic entropy, not a reduction of it, contrary to many intuitions among creationists and IDists. This essay is Part II of a series that began with Part 1.

The physicist Fred Hoyle famously said:

The chance that higher life forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junkyard might assemble a Boeing 747 from the materials therein.

I agree with that assertion, but that conclusion can’t be formally derived from the 2nd law of thermodynamics (at least not from those forms of the 2nd law stated in many physics and engineering textbooks and used in the majority of scientific and engineering journals). The 2nd law is generally expressed in two forms:

2nd Law of Thermodynamics (THE CLAUSIUS POSTULATE)

No cyclic process is possible whose sole outcome is transfer of heat from a cooler body to a hotter body

or equivalently

2nd Law of Thermodynamics (THE KELVIN PLANCK POSTULATE)

No cyclic process is possible whose sole outcome is extraction of heat from a single source maintained at constant temperature and its complete conversion into mechanical work

In Part 1, I explored the Shannon entropy of 500 coins. If the coins are made of copper or some other metal, the thermodynamic entropy can be calculated. But let’s have a little fun: how about the thermodynamic entropy of a 747? [Credit Mike Elzinga for the original idea, but I’m adding my own twist.]

The first step is to determine about how much matter we are dealing with. From the manufacturer’s website:

A 747-400 consists of 147,000 pounds (66,150 kg) of high-strength aluminum.

Next we find the standard molar entropy of aluminum (symbol Al). From Enthalpy Entropy and Gibbs we find that the standard entropy of aluminum at 25 Celsius and 1 atmosphere is 28.3 J/(K·mol).

Thus a 747’s thermodynamic entropy, based on the aluminum alone, is:

S = (66,150,000 g ÷ 26.98 g/mol) × 28.3 J/(K·mol) ≈ 6.94 × 10^7 J/K

Suppose now that a tornado runs into the 747 and tears off pieces of the wings, tail, and engines, such that the weight of aluminum in what’s left of the 747 is now only 50,000 kg. Using the same sort of calculation, the entropy of the broken and disordered 747 is:

S = (50,000,000 g ÷ 26.98 g/mol) × 28.3 J/(K·mol) ≈ 5.24 × 10^7 J/K

Hence the tornado lowers the entropy of the 747 by disordering and removing vital parts!

And even supposing we recovered all the missing parts such that we have the original weight of the 747, the entropy calculation has nothing to say about the functionality of the 747. Hence, the 2nd law, which inspired the notion of thermodynamic entropy, has little to say about the design and evolution of the aircraft, and by way of extension it has little to say about the emergence of life on planet earth.

Perhaps an even more pointed criticism in light of the above calculations is that increasing mass in general will increase entropy (all other things being equal). Thus as a system becomes more complex, on average it will have more thermodynamic entropy. For example, a simple empty soda can weighing 14 grams has (using a similar calculation) a thermodynamic entropy of 14.68 J/K, which implies a complex 747 has about 4.7 million times the thermodynamic entropy of a simple soda can. A complex biological organism like an albatross has more thermodynamic entropy than a handful of dirt. Worse, when the albatross dies, it loses body heat and mass, and hence its thermodynamic entropy goes down after it dies!
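All three figures above come from the same one-line formula, S = (m / M) × S°molar. Here is a minimal sketch, using the molar mass of aluminum (26.98 g/mol) and the standard molar entropy quoted above:

```python
# Standard-molar-entropy estimates from the post.
MOLAR_MASS_AL = 26.98  # g/mol, molar mass of aluminum
S_MOLAR_AL = 28.3      # J/(K*mol), standard entropy of Al at 25 C, 1 atm

def entropy_of_aluminum(mass_grams):
    """Thermodynamic entropy (J/K) of a lump of aluminum at standard conditions."""
    moles = mass_grams / MOLAR_MASS_AL
    return moles * S_MOLAR_AL

s_747 = entropy_of_aluminum(66_150_000)     # intact 747: 66,150 kg of Al
s_broken = entropy_of_aluminum(50_000_000)  # after the tornado: 50,000 kg
s_can = entropy_of_aluminum(14)             # empty soda can: 14 g

print(f"747:        {s_747:.3g} J/K")       # ~6.94e7 J/K
print(f"broken 747: {s_broken:.3g} J/K")    # ~5.24e7 J/K
print(f"soda can:   {s_can:.3g} J/K")       # ~14.7 J/K
print(f"ratio 747/can: {s_747 / s_can:.3g}")  # ~4.7 million
```

Note that the broken 747 comes out with *less* entropy than the intact one, which is the point of the tornado example.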

So the major point of Part II is that a designed object’s thermodynamic entropy often increases with the increasing complexity of the design, for the simple reason that it has more parts and hence more mass. And as was shown in Part 1, the Shannon entropy also tends to increase with the complexity of the design. Hence, at least two notions of entropy (Shannon and thermodynamic) can increase with increased complexity of a design (be it man-made design, evolution-made design, or ….)

This concludes the most important points I wanted to get across. Below is merely an exploration of some of the fundamentals of thermodynamics for readers interested in some of the technical details of thermodynamics and statistical mechanics. The next section can be skipped at the reader’s discretion since it is mostly an appendix to this essay.

========================================================================

THERMODYNAMICS AND STATISTICAL MECHANICS BASICS

Classical Thermodynamics can trace some of its roots to the work of Carnot in 1824 during his quest to improve the efficiency of steam engines. In 1865 we have a paper by Clausius that describes his conception of entropy. I will adapt his formula here:

ΔS = Q / T

Where S is entropy, Q is heat, and T is temperature. Perhaps to make the formula more accessible, let us suppose we have a 1000-watt heater running for 100 seconds that contributes to the boiling of water (already at 373.2 K). What is the entropy contribution due to this burst of energy from the heater? First I calculate the amount of heat energy input into the water:

Q = 1000 W × 100 s = 100,000 Joules

Using Clausius’ formula, and the fact that the process is isothermal, I then calculate the change of entropy in the water as:

ΔS = Q / T = 100,000 J ÷ 373.2 K ≈ 268 J/K
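The heater example is small enough to check in a few lines of code:

```python
# Clausius entropy change for an isothermal process: deltaS = Q / T.
power_watts = 1000.0   # 1000 W heater
time_seconds = 100.0   # running for 100 s
T_kelvin = 373.2       # boiling water

Q = power_watts * time_seconds  # heat delivered: 100,000 J
delta_S = Q / T_kelvin          # entropy added to the water
print(delta_S)                  # ~268 J/K
```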

So how does all this relate to Boltzmann and statistical mechanics? There was the intuition among scientists that thermodynamics could be related to classical (Newtonian) mechanics. They suspected that what we perceived as heat and temperature could be explained in terms of mechanical behaviors of large numbers of particles, specifically the statistical aspects of these behaviors, hence the name of the discipline is *statistical mechanics*.

A system of particles in physical space can be described in terms of position and momentum of the particles. The state of the entire system of particles can be expressed as a location in a conceptual Phase Space. We can slice up this conceptual phase space into a finite number of chunks because of the Liouville Theorem. These sliced-up chunks correspond to the microstates which the system can be found in, and furthermore the probability of the system being in a given microstate is the same for each microstate (equiprobable). Boltzmann made the daring claim that taking the logarithm of the number of microstates is related to the entropy Clausius defined for thermodynamics. The modern form of Boltzmann’s daring assertion is:

S = k_B ln Ω

where Ω is the number of microstates of the system, S is the entropy, and k_B is Boltzmann’s constant. Using Boltzmann’s formula we can then compute the change of entropy:

ΔS = k_B ln Ω_final − k_B ln Ω_initial = k_B ln(Ω_final / Ω_initial)

As I pointed out, Boltzmann’s equation looks hauntingly similar to Shannon’s entropy formula for the special case where the microstates of a Shannon information system are equiprobable.
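The kinship can be made concrete for the 500-coin system, whose microstates are equiprobable: Boltzmann’s S = k_B ln Ω and Shannon’s H = log₂ Ω differ only by the constant factor k_B ln 2. A quick check:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# 500 two-state coins give Omega = 2^500 equiprobable microstates.
Omega = 2**500

S_boltzmann = k_B * math.log(Omega)  # Boltzmann-style entropy, J/K
H_shannon = math.log2(Omega)         # Shannon entropy, bits

print(H_shannon)     # 500.0 bits
print(S_boltzmann)   # k_B * 500 * ln 2, a few times 1e-21 J/K
# The two differ only by the constant factor k_B * ln 2:
print(S_boltzmann / (k_B * math.log(2)))  # ~500.0
```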

Around 1877 Boltzmann published his paper connecting thermodynamics to statistical mechanics. This was the major breakthrough that finally bridged the heretofore disparate fields of thermodynamics and classical mechanics.

Under certain conditions we can relate Clausius’s notion of entropy to Boltzmann’s, and thus the formerly disparate fields of thermodynamics and classical mechanics are bridged. Here is how I describe symbolically the special case where Clausius’s notion of entropy agrees with Boltzmann’s:

ΔS = Q / T = k_B ln(Ω_final / Ω_initial)

[It should be noted, the above equality will not always hold.]

Mike Elzinga and I had some heated disagreement on the effect of spatial configuration on entropy. Perhaps to clarify: the colloquial notion of disordering things does not change the thermodynamic entropy (take a 747 and disorder its parts; as long as we have the same matter, it has the same thermodynamic entropy). But that’s not to say that changes in volume (which is a change in spatial configuration) won’t affect the entropy calculations. This can be seen in the formula for the entropy of an ideal monoatomic gas (the Sackur–Tetrode equation):

S = N k_B [ ln( (V/N) × (mE / (3πNℏ²))^(3/2) ) + 5/2 ]

where

S is the entropy

N is the number of atoms

k_{B} is Boltzmann’s constant

V is the volume

E is the internal energy

ℏ is the Dirac constant (reduced Planck’s constant)

m is the mass of a single atom

From this we can see that increasing the volume which the gas occupies, the energy of the gas, or the number of particles in the gas will increase the entropy. Of course this must happen within reasonable limits, since if the volume is too large there cannot be energy exchange among the particles, and notions of what defines equilibrium begin to get fuzzy, etc.
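As a sanity check on these claims, here is a sketch of the Sackur–Tetrode formula evaluated for one mole of helium at 298.15 K and 1 atm. It reproduces the tabulated standard molar entropy of helium (about 126 J/(K·mol)), and doubling the volume adds N k_B ln 2, exactly as the volume term predicts:

```python
import math

# Sackur-Tetrode entropy of an ideal monatomic gas:
#   S = N k_B [ ln( (V/N) * (4*pi*m*E / (3*N*h^2))**1.5 ) + 5/2 ]
k_B = 1.380649e-23   # J/K
h = 6.62607015e-34   # J*s (Planck's constant; h = 2*pi*hbar)
N_A = 6.02214076e23  # atoms per mole

def sackur_tetrode(N, V, E, m):
    """Entropy (J/K) of N atoms of mass m (kg), energy E (J), in volume V (m^3)."""
    return N * k_B * (math.log((V / N) * (4 * math.pi * m * E / (3 * N * h**2))**1.5) + 2.5)

T = 298.15                    # K
m_He = 4.0026e-3 / N_A        # mass of one helium atom, kg
V = N_A * k_B * T / 101325.0  # molar volume at 1 atm, m^3
E = 1.5 * N_A * k_B * T       # internal energy of a monatomic ideal gas

S = sackur_tetrode(N_A, V, E, m_He)
print(S)  # ~126 J/K, matching the tabulated standard entropy of He
# Doubling the volume raises the entropy by N_A * k_B * ln 2 = R ln 2:
print(sackur_tetrode(N_A, 2 * V, E, m_He) - S)  # ~5.76 J/K
```

This also shows that nothing resembling “order” appears anywhere in the calculation: only N, V, E, and m enter.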

Nowhere in this calculation are notions of “order” explicitly or implicitly identified, and hence such notions are inessential and possibly misleading to the understanding of entropy.

How the Sackur–Tetrode formula is derived is complicated, but if one wants to see how entropy can be calculated for simpler systems, Mike Elzinga provided a pedagogical concept test where the volume of the system is fixed and small enough that the particles are close enough to interact. The volume is not relevant in his examples, so the entropy calculations are simpler.

I went through a couple of iterations to solve the problems in his concept test. His test and my two iterations of answers (with help from Olegt on discrete math) are here:

Concept test attempt 1: Basic Statistical Mechanics

and

Concept test amendments: Purcell Pound

Acknowledgements

Mike Elzinga, Olegt, Elizabeth Liddle, Andy Jones, Rob Sheldon, Neil Rickert, the management, fellow authors and commenters at UD and Skeptical Zone.

[UPDATE 9/7/2012]

Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

### 64 Responses to *A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 2*


This will be interesting. Remember, our formalisms are not the real thing. The ‘real thing’ here is fundamental and will relate order, info, and energy, I believe.

Sal, I think you’re wanting to break new ground, but this is just basic stuff. A wide survey and a bold synthesis is needed; I believe many have advanced the concepts separately and their works are ripe for integration.

Butifnot,

See my comment here:

Comment in Part I

Thank you for offering your thoughts.

Sal

The rigorous treatment is pretty far in my past, but order and disorder are always brought into entropy, correct.

Here’s a thought – A ‘designed’ arrangement of particles (or anything at any scale) is brought to that configuration independently of the properties (energy etc etc) of the ‘particles’. No, no laws are broken in so doing, but does this not enter into a strict accounting?

SC:

I have already highlighted a comment this morning, here.

Observe especially on the issue that

in statistical thermodynamics contexts, entropy is a measure of the missing information on the specific microstate given the macrostate, i.e. a measure of an info gap, which I have abbreviated MmIG.* (Classical thermodynamics gives relative values that are useful in chemical and physical changes associated with thermodynamic variables.) In information systems, the informational entropy is assessed in a context of signal reception and the degree of surprise given by the state indicated by the signal as received.

KF

*PS: I repeat my comment that there is an informational school of thermodynamics, which has some things to say that are relevant. Let me clip from Robertson, on just one main point:

I trust this will help provide some balancing points.

The fundamental, first principle-level thing we are dealing with will emerge in many places – ‘order’ included.

Fascinating stuff KF!

KF,

In light of what you’ve written, is there something wrong with my calculation of the 747’s thermodynamic entropy relative to an empty soda can?

I’ve given several examples where I provided entropy numbers (in Joules/Kelvin). You’re welcome to provide your alternate set of numbers for the reader based on the sources you are quoting.

I’d appreciate if we could just deal with the numbers as engineers would, not as ID proponents or Darwinists or whatever, but just as Engineers reporting numbers.

Sal

Will, or should, the entropy associated with arranging the 747 enter into the calculation? What is the extent of the appropriate system for a relevant calculation?

Sal: classical E-#’s are relative, as noted. The link in view is on the micro-state picture. I again suggest a read of Robertson. KF

KF,

I’m having difficulties understanding what you said.

To help clarify, can you post your entropy numbers for:

747

broken 747

soda can

If my numbers are wrong, please indicate the correct figures. That would be helpful to everyone concerned.

Sal

Sal it immediately becomes apparent that correct application is all that matters, the calculation just comes out of it trivially.

Given a 747, there does, must exist information to arrange the 747. Now there are many (infinite?) paths to arrange matter to arrive at the 747, that don’t violate any physical laws. Whereas there are many likely ways to wreck a 747 and arrive at the same state.

Now there is the 747 AND (+) some information somewhere. By necessity? Compared to a pile of the same materials + nothing. What is the entropy?

Elizabeth Liddle redux. She who thought 100 pennies contains 100 bits of Shannon information.

When asked, what is the information about, well, it was all downhill from there.

SC:

Pardon, but I am highlighting a gap that has been underscored by the informational school of thought on thermodynamics. Hence my already linked remarks on the significance of MmIG.

Here is Shannon from his 1950/1 paper, Prediction and Entropy of Printed English:

This should serve to underscore the summary I made from Connor. H is a measure of average information per symbol, especially in a context where symbols are not equiprobable.

I again draw your attention to the observation that in the statistical thermodynamics context, we see a macrostate, and then have the challenge of an info gap — or, degrees of freedom [a dual way to look at the same issue] — to the specific microstate, as there is in general a quite large number of distinct microstates consistent with the macrostate.

Such is the missing information on the microstate that defines its entropy. That degree of freedom is of course a measure of the want of constraint on configuration (and momentum), which leads to the inference that a higher degree of entropy entails a higher degree of disorder.

In contexts relevant to the wider concerns on design theory, the point is that when you have molecules that have tightly constrained configurations, and these are arranged in tightly constrained ways, that are detectable from the simple fact of cell-based life, such have far less freedom to take up varying states than an equivalent mass of atoms or monomers etc. But of course to produce such organisation, work has to be done in the usual case — statistical miracles beyond the search capacity of the observed cosmos being not credible — and that will be associated with the export of waste heat elsewhere. (Cf. thought exercise here. And this is actually about a micro-jet assembled from its properly arranged parts.)

But that is going a bit far down the road just now.

Our more direct concern is the link from the informational theory metric and the thermodynamic one. I have pointed to Robertson for more details and have given excerpts. But this clip from Wiki may help clear the air until you can go visit a library (the Amazon price is stiffish, but then I gather textbooks are now at outrageous prices all around):

In short we are looking at the Macro-micro information gap, MmIG.

And BTW, what happens when a body of gas undergoes free expansion, is that its molecules have their freedom put in a much less constrained state, hence we can see why there are now more ways for energy and mass to be arranged and distributed at micro level in the freely expanded state.

I hope this helps

KF

F/N: The 500 pennies are a good example of what is at stake.

Put them in a long skinny black box with little slots like a covered ice cube tray, shake and toss. What do you know about their state given the macro-picture provided by the BB?

Ans, you just know they are likely to be in a peaked distribution centred on 50:50 HT in no particular order, and very very sharply peaked indeed. If they start in a far from 50:50 state and get shaken, they are very likely to move towards that 50:50 state, i.e. we see the time’s arrow theme emerging.
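The shaken-coins scenario above is easy to simulate. Starting far from 50:50 (all heads) and flipping randomly chosen coins drives the count toward the sharply peaked 50:50 region, illustrating the time’s-arrow point:

```python
import random

random.seed(1)         # fixed seed so the run is reproducible
coins = [1] * 500      # 1 = heads; a far-from-50:50 starting state

# "Shake" the box: flip one randomly chosen coin, many times over.
for _ in range(100_000):
    i = random.randrange(500)
    coins[i] ^= 1

heads = sum(coins)
print(heads)  # almost certainly within a few standard deviations of 250
```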

But if we left them on the table in the box and came back to see them neatly lined up giving the ASCII codes for the first 72 or so characters of this post, we would strongly suspect, indeed with high certainty we would infer that the coins had been deliberately arranged.

Such an arrangement in the teeth of the cluster of accessible states and the config space, would be FSCI, and it is a sign of intelligently directed ordering work [IDOW], that is of design. That is because the sampling resources are so dwarfed by the space of possible configs — with a search on the gamut of our solar system across its conventional lifespan, 1 straw to a cubical hay bale 1,000 LY on the side — that an unintelligent sample of the configs is maximally unlikely to pick up anything but the bulk of the distribution: near 50:50, in no particular order.

In short the special zone, T is too isolated to be credibly sampled by chance. But IDOW would easily explain it. Design.

KF

PS: Now, clump the coins in pairs and reduce the 4-state elements in the chain to molecular size — here we are looking at an informational equivalent to a D/RNA chain of 250 elements. Has that changed the issue significantly? Do you see here how the work of clumping vs configuring can then lead to two successive thermodynamic entropy reductions that correspond directly to the information fed into the chain by organising the elements into a meaningful and functionally specific message? (BTW, I would prefer 502 coins . . . )

My understanding of entropy may be lacking here, but take the above example of coins put into a black box in a certain patterned arrangement: Once shaken, the disorder has increased, the number of bits needed to describe the arrangement of the coins has increased (i.e., the Chaitin/Kolmogorov complexity), the amount of surprise per coin (Shannon entropy) has increased (because the initial predictable pattern has been ruined), while the ability to detect the initially designed arrangement has decreased.

So here, increasing entropy in its various forms appears to go with decreasing “designedness”. Is this just a matter of interpretation and/or terminology?

Also, how does this integrate with the work of Granville Sewell regarding entropy, open systems and design detection?

1. it disagrees with Granville’s work

2. it doesn’t have much to say about open or closed systems, but I showed that an open system can reduce the entropy of a 747 🙂

3. a design is recognized as being one state from a space of large possibilities. Dembski requires the following of designs:

A. Improbable

B. Specified

High improbability implies high Shannon entropy.

Not quite.

1. Shannon Entropy is the same whether the coins are ordered or not

2. Algorithmic entropy (Kolmogorov Complexity) rises if the coins go from ordered to disordered

3. Thermodynamic entropy goes up if the temperature goes up, and down if the temperature goes down

4. If the original design were 500 coins heads, then the disordering has erased the design.

5. If the original design is 500 coins heads, its Shannon entropy is still 500 bits. After a tornado hits it and disorders it, its Shannon entropy is still 500 bits.
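Points 1, 2, and 5 can be illustrated in a few lines. The Shannon entropy of 500 fair coins is 500 bits regardless of the arrangement, while compressed length (used here as a crude stand-in for algorithmic entropy; zlib is an assumption of convenience, not a true Kolmogorov measure) does distinguish the ordered all-heads string from a shaken one:

```python
import random
import zlib

n = 500
# Each fair coin contributes log2(2) = 1 bit, whatever the observed pattern:
shannon_bits_ordered = n * 1   # 500 heads
shannon_bits_random = n * 1    # 500 shaken coins
print(shannon_bits_ordered == shannon_bits_random)  # True: both 500 bits

random.seed(0)
ordered = "1" * n                                    # all heads
disordered = "".join(random.choice("01") for _ in range(n))  # shaken

# Compressed size as a rough proxy for algorithmic (Kolmogorov) entropy:
print(len(zlib.compress(ordered.encode())))     # small: highly compressible
print(len(zlib.compress(disordered.encode())))  # larger: nearly incompressible
```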

Compare this to Bill’s statement.

Shannon entropy doesn’t necessarily imply disorder, and neither does thermodynamic entropy. That was the point of these two essays: to correct misconceptions of what entropy is.

High Shannon entropy allows the possibility of disorder; it doesn’t make it inevitable, just highly probable in many cases, as is the case of 500 coins.

Not quite. That was the point of my essays. Some engineering, chemistry, and physics books use the word “disorder” to describe thermodynamic entropy — yet other books do not. The texts I learned from do not use the word “disorder” to describe entropy. My textbook was: Statistical Mechanics by Pathria and Beale

When it actually comes to calculating entropy, it amounts to taking the logarithm of the number of microstates. Mike Elzinga has protested the use of the notion “disorder” to describe entropy, and I agree with him. The examples I’ve provided are partly rooted in his work. The calculations I’ve provided were to illustrate the absence of “disorder” in calculating entropy (Shannon and thermodynamic). Notice the 747 example being hit by a tornado and having lower entropy, not higher, after being hit.

A highly ordered system can have extremely high Shannon and Thermodynamic entropy. Example:

NOTE: high or low thermodynamic entropy has little to do with making a design inference. I’ve argued the 2nd law is inappropriate for making design inferences, but other IDists and creationists disagree.

But the Shannon entropy of the system is still 500 bits (technically speaking, prior to observation). Ordering has nothing to do with the Shannon entropy. The fact that 500 coins provide 500 bits of Shannon entropy makes the design inference possible in the first place.

EDTA:

I think all that Sal says is quite correct (except maybe for the second law of thermodynamics, on which I will briefly comment later), but perhaps the form in which he says it can confound somebody here.

Just to try to help, I would go back to the famous 500 coins, and I will try to simplify as much as possible:

a) The problem is not about the coins themselves. What we must consider for any discussion about digital information is a system of 500 coins in definite linear order, each of which can have one of two states. Let’s call this “a material system that can be read as a binary string”.

b) There is no doubt that, in itself, such a system has no information, if we define “information” in a semiotic way, that is, as something that has a specific meaning for a conscious observer. I would definitely suggest that we use the word “information” only in a semiotic sense, that is, in the sense of “meaningful information”. At the same time, I would strongly suggest that we use information as an abstract concept, and not as something that is really in any material system.

c) At the same time, a material system such as the one we described has an intrinsic property that can be quantified, and that is very simply the answer to the question: how many bits can we “write” in this material system? Let’s call that the “potential complexity” of the material system, or, if we want, its “Shannon entropy” (although the two things are probably not exactly the same thing). In this sense, the potential complexity of a given material system is a property that does not change, whatever information or lack of information can be found in the present state of that system. So, the potential complexity of our system of 500 coins is, without doubt, 500 bits. It always remains 500 bits, whether the present state of the system derives from random tossing and can be read as a truly random string (semiotic information absent), or it derives from design and conveys the code for a very efficient algorithm for my computer (semiotic information present). The brute complexity of the physical system is always 500 bits.

d) Now, I think that the main point of Sal’s discourse is the following: if we have to write a designed string, if our designed string is less complex (shorter) we can use a less complex material system (with less Shannon entropy). If my program is 100 bit long, I can write it with only 100 coins. But if my program is 500 bit long, I need 500 coins. So, the Shannon entropy, or potential complexity, of the material system needs to be higher to allow a more complex design. It’s very simple, and I don’t understand why that apparently causes confusion here.

e) At the same time, the design becomes more improbable. If we assume, for simplicity, that our design needs one specific string to be functional, and does not allow any change, not even at the one-bit level, then a design of 100 bits will have a probability of arising in a random system by chance (for instance, by coin tossing) of 1 : 2^100. That can be expressed, à la Shannon, as a 100-bit improbability. On the other hand, a program 500 bits long will have a probability of 1 : 2^500, that is, a 500-bit improbability.
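Expressed as code, this improbability-in-bits convention is just a negative base-2 logarithm of the chance probability:

```python
import math

def improbability_bits(p):
    """Improbability of an event of probability p, expressed in bits."""
    return -math.log2(p)

# A 100-bit string hit by chance: probability 2^-100, i.e. 100 bits.
print(improbability_bits(2.0**-100))  # 100.0
# A 500-bit string hit by chance: probability 2^-500, i.e. 500 bits.
print(improbability_bits(2.0**-500))  # 500.0
```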

Well, more on that later.

Folks:

The first key thing to note from the above and the previous post by SC, is that there is a debate on the nature of several thermodynamics and information theory concepts. One that is not well known outside physics, and one that in part is not even well known inside physics. IIRC, Robertson in his preface comments on how there are circles in the schools of thought, and they often do not communicate one to the other.

The next thing to realise is that because these issues are closely connected to the debates on design, we are going to have partisan tactics coming in. One of these is the insistence on trying to decouple thermodynamics and information issues. In reply, I have noted that, going back to Gilbert N. Lewis — yes, THAT G. N. Lewis — it has been understood that the entropy of a system, from a statistical thermodynamics perspective, is a measure of missing info on the specific microstate if what one has is the info on the observable macrostate. This leads to an “absolute” definition of entropy that is connected to information. Hence the significance of the Boltzmann and Gibbs formulations as opposed to the classical change of system state measures that are used relative to an initial state.

Here is Lewis, again, as I have clipped a couple of times over the past day or so:

It also — as the 500 coins in a string black box example shows — highlights why there is a reason to connect a high entropy state to chaos or disorder. An initially arranged string of coins allowed to change state at random or spontaneously, through shaking or the equivalent, will — per the balance of the distribution of possibilities — strongly tend to a state near 50:50 H/T in no particular meaningful order.

Now, too, we know that a string structure, with certain rules of arrangement, can be used to store the description of any material whatsoever. For, the string case (with room for a long enough string and associated rules of interpretation) can be used to set up a description on nodes and arcs, and how they are arranged to form a whole. This is essentially what something like AutoCad does.

For onward discussion, it would be helpful to adjust the coins in a box case to a similar BB with a red button that on being pressed sends out a digital string of 502 bits:

Here, our access to the internal state is the emitted string, which we can observe on pressing the button. Initially, say it emits 1010 . . . , then after a time we find it decaying away from this and eventually consistently emitting bits that fit the usual binomial distribution with 50/50 probability of H/T. We may reasonably state that the BB has undergone a rise in entropy, consistent with moving from an internally ordered state to a random one.

Then, we come back again and find that this system emits the first 72 or so ASCII characters for the words of this post. We would find the explanation that this is a matter of simple chance incredible and would ascribe the behaviour to intelligence. That is, somehow there has evidently been intelligently directed organising work, not mere ordering similar to crystallisation. Our BB has emitted three classes of string:

Since it is a BB, we have no independent access to the internal state, but we have reason to see that there were three clusters of possible states, which would require distinct explanations of what could be going on in the BB. Order is associated with states like a crystal forming on mechanical necessity (and which, as we see, can be simply described, i.e. compressed); randomness with decay of order or with disorder (which will indeed be hard to compress, other than by citing the actual string); and there is a third possibility, organisation that is functional and specific as well as complex, which is resistant to compression but not quite as resistant as sheer disorder would be.

APPLYING TO THE 747 DESTROYED BY TORNADO EXAMPLE, we can see an interesting divergence in perspectives and system definition that becomes highly relevant. Let us clip SC from the OP:

There is a fundamental error in this, driven by failing to assess the significance of micro vs macro descriptions and the gap between classical and statistical formulations of thermodynamics concepts.

Now, in the case of the example of the 747 disarranged by the tornado, we are seeing that something has been left out of the reckoning, because in the calculation, the rearrangement of components away from a functional state was missed out in the discussion. This was done by allowing the exclusion of issues connected to arrangement into a functional whole, and by the error of allowing the calculation to set a system boundary that would miss the loss of parts.

In particular, we should note that the metric used — which gathers the component of entropy relative to a defined initial state used in relevant tables, a component connected to the average random thermal vibrations of the Al atoms and the like — was shifted not by what was happening to this component of the entropy account, but by simply failing to keep balanced books. If the Al was at the same temp throughout, it would have the same value throughout, or at any rate at the beginning and end of the story. For, the Al belonging to the 747 was the same quantity and at the same temp. But this is not the only relevant component. TO BUILD A 747 OUT OF ITS COMPONENT PARTS, A LOT OF IDOW HAD TO BE PUT IN, AND THERE IS NO PLAUSIBLE REASON THAT THIS END COULD HAVE BEEN EFFECTED BY SIMPLE INJECTION OF ENERGY. That is, Sir Fred’s example of the utter implausibility of a tornado assembling a 747 out of parts is obvious. Likewise, those who have to pay Boeing’s assembly plant power and fuel bills know that a lot of energy went into building the jumbo, energy that was intelligently directed according to a plan, and which shifted the configuration of the Al etc. atoms to a config E_747x that was recognisably functional in a highly specific way, in a special zone of function T from a space of possible configs of the same atoms W. Also, the designers know full well that, given inevitable tolerances etc., there is no one state for a functional 747; there is a zone of related configs T that will fulfill the function, and something out of that narrow zone will not fly.

But equally, raw energy — as opposed to IDOW — tends to rip apart and disorganise the complex, functionally specific system. That is what the imaginary tornado did. And assuming the same ambient conditions, the entropy metric for the Al will be the same throughout, per SC’s calc: 6.94*10^7 J/K. (BTW, the degree symbol for Kelvins is archaic usage.)
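SC’s 6.94*10^7 J/K figure is easy to reproduce from the numbers given in the OP (66,150 kg of Al, standard molar entropy 28.3 J/(K·mol) at 25 °C and 1 atm, and the standard molar mass of Al, about 26.98 g/mol):

```python
# Reproducing the OP's ballpark: standard-state entropy of the 747's aluminium.
mass_kg = 66150.0        # Al in a 747-400, per the OP
molar_mass_kg = 26.98e-3 # molar mass of Al
s_molar = 28.3           # J/(K*mol), standard molar entropy at 25 C, 1 atm

moles = mass_kg / molar_mass_kg
S = moles * s_molar
print(f"{S:.3g} J/K")    # -> 6.94e+07 J/K, matching the figure in the thread
```

Note that nothing in this calculation depends on whether the aluminium is an intact airframe, wreckage, or ingots, which is exactly the point both sides keep circling.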

That is also why my own thought exercise in my always linked note is relevant, on microjets in a vat with parts small enough to suffer diffusion.

[ . . . ]

Pardon a text chunk, but there is a very bad habit of failing to read linked materials:

So, it is evident that the micro level analytical view is capturing something that the macro view is not, something that is highly relevant if we are concerned to accurately understand what is going on.

In particular, configuration is important, and there is such a thing as organisation that is functional that requires to be accounted for, in the face of the overwhelming number of possible states of component parts. If these components are left to the blind forces of mechanical necessity and chance contingencies, the sheer scope of the space of possible configs is so large that such a blind sampling process — on the gamut of a vat, of a planet or a solar system or the observed cosmos — cannot be reasonably expected to turn up anything but the overwhelming bulk of the distributions of possible states. In particular, Al is normally found in the form of bright red-purple earth in my native land, Jamaica, that proverbially cannot grow grass, only airplanes. (There is a famous cable on the results of the analysis of the soil sample as to why certain land was so poor for growing grass to feed cattle.)

Left to natural forces of geology at planetary scale over eons, the predictable outcome would be more of same: unproductive soil, eventually washed into the sea and diffused or settling as sedimentary deposits therein. And perhaps reforming much of the same through tectonic forces.

To get the Jumbo Jet, IDOW on the massive scale had to be injected. Bauxite mines, railroads, Alumina refineries to get an intermediate product, more railroads and a Port Kaiser, shipping, onward Al refineries tuned to the particular ore from Jamaica, etc etc, then processing into Alloys and component parts, through an advanced economy. At every stage there is work going on, and that work is shifting the configurations of the Al atoms that will eventually be part of that Jumbo Jet. Work, dominated by intelligence towards complex function.

Please do not tell me that that work of configuration is not relevant to the overall thermodynamics account, in the teeth of the above case. And, the scaled down case of parts to make a micro-jet through diffusion vs nanobots helps us see why that is a reasonable insistence.

Simply because for particular purposes, we focus on specific aspects of the entropy account and set system boundaries for convenience, does not change that overall picture.

Yes, the analysis has been indicated only in outline, and there is no prospect at this stage of generating a numerical value for the total entropy change involved in assembling a Jumbo Jet, but it is nonetheless real and it is reasonable to sketch in a rough outline that allows us to see more clearly cases where the same issues are far more important to our main concerns. (The same BTW, routinely happens in Economics, and it is the failure of reckoning with the overall picture that so often leads to economic fallacies and costly policy blunders.)

I trust that this will help us clarify our thinking.

KF

F/N: I was just thinking about R/DNA strands, let’s go to 504 bit strings for BB as that is also 7 bits * 72 characters, and let me represent the push-button on the LH too:

))–> || BLACK BOX || –> 504 bit string

F/N 2: To make it clear what sort of thing we are talking about, the following from the above post is 72 characters or 504 ASCII bits, including spaces:

PS: 72 7-bit characters is 504 bits.

F/N: 504 bits is also the amount of info storable in a D/RNA chain of 252 bases, corresponding to 84 3-letter codons, for a string of AA’s after transcription processing, translation and assembly. If we include a stop codon and the fact that as standard the start puts up Methionine, we are back to 82 variable AAs in a chain of 83.
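The arithmetic in these footnotes checks out, and is quick to verify (2 bits per base for a 4-letter alphabet, 3 bases per codon):

```python
# Checking the F/N arithmetic: 72 seven-bit ASCII characters, and the
# equivalent D/RNA storage at 2 bits per base, 3 bases per codon.
chars, bits_per_char = 72, 7
bits = chars * bits_per_char      # 504 bits
bases = bits // 2                 # 4-letter alphabet -> 2 bits per base
codons = bases // 3
print(bits, bases, codons)        # -> 504 252 84

# One stop codon plus the fixed Met start leaves 82 freely variable
# AAs in a chain of 83, as the note says.
print(codons - 2, codons - 1)     # -> 82 83
```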

Um, you’re completely ignoring the fact that the system can include its own error checking and self-correction. 747s do a lot of that. Soda cans do not. It’s one of the reasons that complex systems are worth the trouble and expense over cheap, simple systems.

But I guess I’m missing the point of discussing a non-living system (e.g., pennies in a box) as a means of understanding living systems.

Biologic systems do a whole lot of “healing” as soon as they detect that “something’s gone out of skew on treadle”. What features of the penny-box system “heal” the pennies?

Sal, your approach appears incorrect, to me. It is reminiscent of “a royal flush is no more unlikely than any other hand”. There is something different about a lump of aluminum and a 747, surely the entropy, being the result of the work done and information imparted to it. The system of interest is the aluminum plus the source of the work and information which brought it to its current state. All systems of equal weight of aluminum could not possibly have the same entropy. The entropy has not been calculated correctly; terms are missing and not accounted for. Perhaps the boundary should be different. Also the relevance of Shannon information is questionable, in my opinion.

KF at 24, I think that is what I was trying to say.

For the sake of simplicity, let us assume the same material was used to configure completely different structures. Let us assume we are using a billion aluminum coins (perhaps with serial numbers to make them distinct).

We can configure the coins to correspond to the ascii representation of some passage in literature or we could configure the coins to a random sequence. Thermodynamic entropy would be the same in each case. Even though one configuration (that which corresponded to a literary passage) would be obviously designed.
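This claim can be sketched in code. The coin mass below is an assumption for illustration; the point is that the tabulated entropy function ignores its arrangement argument entirely, while an independent test (here, compressibility as a crude proxy for specification) still separates the two configurations:

```python
# Sketch of the coins argument: standard-state thermodynamic entropy
# depends only on the amount of Al and the (P, T) state, never on which
# faces are up. Coin mass of 5 g is an assumption for the example.
import random
import zlib

n_coins = 1_000   # stand-in for the billion coins; the logic is identical
coin_mass_kg, molar_mass_kg, s_molar = 5e-3, 26.98e-3, 28.3

def thermo_entropy(arrangement):
    # The arrangement is deliberately ignored: S is extensive in moles
    # of Al, blind to the heads/tails configuration.
    return len(arrangement) * coin_mass_kg / molar_mass_kg * s_molar

random.seed(0)
passage = "".join(f"{ord(c):07b}" for c in "it was the best of times...")
designed = (passage * (n_coins // len(passage) + 1))[:n_coins]
shuffled = "".join(random.choice("01") for _ in range(n_coins))

assert thermo_entropy(designed) == thermo_entropy(shuffled)  # same S
# Yet an independent lens distinguishes the configurations:
print(len(zlib.compress(designed.encode())),
      len(zlib.compress(shuffled.encode())))
```

Same thermodynamic entropy, different detectability of design: which is the essay’s thesis in miniature.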

All I’m saying is one should not use thermodynamics to try to infer design, it’s the wrong tool. One needs a different set of lenses (figuratively speaking) to see design, not thermodynamics.

Even Shannon entropy in and of itself is insufficient.

What one needs are independent specifications to discern design. We intuitively carry some of these specifications in our minds (like the specification of “all heads”), but some specifications are more subtle.

But the point of my essays was to help readers understand what entropy really is. The fact that it may or may not help us make design inference is actually secondary to the essay. If you come away from this essay and realize that the 2nd law of thermodynamics or that thermodynamic entropy won’t help us make design inferences, then I feel I’ve succeeded in communicating my point.

My point is we have to use other means to infer design than looking at thermodynamic entropy numbers or using the 2nd law of thermodynamics.

Feel free to post what you feel is important. Thank you for contributing.

But do you have a different set of thermodynamic entropy numbers than the ones I posted for:

1. 747

2. Broken 747

3. soda can

If you don’t think you can arrive at them using all the materials you’ve provided, please say so. I surely can’t seem to get a different set of numbers based on what you said.

Do you agree with my numbers for these three objects? A simple:

A. Yes

B. No

C. Don’t know

would suffice rather than large chunks of text.

SC: Please, pause and see what your entropy estimate is based on, so much Al at a given P, T etc. It is only telling one part of the story for a 747 or a broken 747 or an equal mass of soda cans, etc; indeed, while useful for chem eng, the values are not addressing a serious associated issue. Where IDOW is also a highly relevant consideration in how we get TO a 747. That is what I am highlighting. And I have taken time to discuss the information issues tied to the Gibbs entropy metric, which is the context of Shannon entropy. A half-story can be doubly misleading precisely because so far as it goes it tells a compelling tale. But, we need to hear the rest of the story, which is where Jaynes et al (including Brillouin and Szilard) come in. BTW, did Jaynes come up in your earlier discussions? If so, how; and if not, why not, given that the info-entropy bridge is at the pivot of the matter? In addition, I think there is a crucially distorting loose usage of “entropy” in how you are arguing. Kindly note the way Shannon used it, and how specifically that usage ties to the Gibbs metric. KF

So are my numbers correct given P, T, etc.?

I’m not asking about CSI, IC, FSCO/I, IDOW, SFOD-D, MmIG, WMDs, MIGs, BUFFs, AWACS, VLSI, DicNavAb, etc.

I’m asking about the standard state entropy of the Aluminum content of:

A. 747

B. Broken 747

C. Soda can

A simple yes or no would be helpful to everyone. You’ve been very verbose, and I’m not asking you to print more than 3 characters for a response of “yes”, 2 characters for a response of “no”, and 12 characters to say “I don’t know”.

You don’t have to print a dissertation that doesn’t answer the question I pose.

If you don’t want to answer the question, say so. “I don’t want to answer the question. I want to talk about something else.” (that would be 73 characters for a response). I’ll accept that but please offer me the courtesy of saying that you prefer to talk about something else rather than answering a question I’ve posed more than a few times in this discussion.

hi kf,

What is the Robertson text you’re referring to?

As for Jaynes, are you referring to the 1957 paper?

Information theory and statistical mechanics.

http://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes

http://en.wikipedia.org/wiki/M.....modynamics

Salvador:

So what *IS* entropy, really?

http://en.wikipedia.org/wiki/Arieh_Ben-Naim

http://www.amazon.com/dp/981437489X

http://www.ariehbennaim.com/books/entropyf.html

Harry S Robertson, Statistical Thermophysics, Prentice.

More later.

SC: The numbers are materially incomplete, and the issue of system boundary is material to addressing the real change in entropy of the 747 damaged by a tornado. The loss of Al because of how you did entropy accounting and set a system boundary is not the material shift. KF

So what are the correct entropy numbers? You’re invited to provide them:

A. 747

B. Broken 747

C. soda can

They don’t need to be that exact, but maybe some ballpark numbers in Joules/Kelvin, and you can provide justification as to how you arrived at your figures.

There are engineers reading this blog, they are entitled to an attempt at an answer. “I don’t have an answer” is an acceptable answer.

I don’t think this is too much to ask if one is trying to make an inference about the evolution of a system.

Entropy is much ‘harder’ to apprehend, let alone comprehend, than a simple computation. I believe your analysis is incorrect and incomplete, giving wrong entropy values.

Entropy is as simple to understand as “one if by land, two if by sea.”

SC:

I have just come from a fireworks of a public meeting on education issues, and have little time and less inclination to go over the problem again. You gave an entropy calc that gives a number for a mass of Al under given temp and pressure relative to a baseline, and which runs into incorrect system boundary issues in trying to address the difference between a flyable 747 and one torn up by a tornado. The calc you gave yields a relative number that is about Al being Al. It simply does not address the other aspects. And I have already said enough for those who need to see that there is more, for instance on the link between Shannon and Gibbs via Jaynes.

KF

I see where I misunderstood, and it may bring out one or two points that will help.

Gpuccio said,

So for the purposes here, Shannon entropy or potential complexity is the storage capacity of the medium in question. I was taking Shannon entropy to be the amount of information/surprise gained by revealing the arrangement of the coins. In the former case, the maximum potential entropy is high, since each coin can be in either state. In the latter case, if we know the arrangement, a uniform arrangement (all heads, e.g.) is uninformative, and so the entropy would be low. Context is everything.

But there’s more to the 500 coin example. In the texts I have read, Shannon entropy was defined as the average bits per symbol emitted by a “source”, calculated using Shannon’s formula. As such, the length of the output is not given, nor is it needed. The entropy is thus a property of the source. But is Shannon entropy even defined for a finite string obtained from a source? Oh sure, you can apply his formula using the measured probabilities from the finite string, and get some number. But those measured probabilities are only approximations to the source’s actual symbol probabilities because your sample is finite in length.

And it always depends on what you know about the source or system in advance. Perhaps Shannon entropy is best left out of situations where a finite size/length system is being discussed.

To bolster the case that Shannon entropy might not apply to finite observations, consider this: to be able to calculate entropy, you have to have probabilities. If the probabilities are those of a finite string of symbols (heads/tails, e.g.) that came from the source, then you have to know the whole string. (If you can’t see the whole finite string, then your calculation won’t be worth anything, as the remaining symbols might deviate from the initial symbols’ probabilities.) But if you know the whole string of symbols, then you already have full knowledge of the internals of the system in question. So there is no surprise remaining in the system.
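EDTA’s worry about measured probabilities can be made concrete: the plug-in (measured-frequency) entropy of a short string from a fair-coin source systematically undershoots the source’s true 1 bit/symbol, and only converges as the sample grows. A small sketch:

```python
# Plug-in entropy estimates from finite strings, averaged over many
# trials, approach the true source entropy (1 bit/symbol for a fair
# coin) only as the string length grows; short strings bias it low.
import math
import random

def plugin_entropy(s):
    h = 0.0
    for sym in set(s):
        p = s.count(sym) / len(s)
        h -= p * math.log2(p)
    return h

random.seed(42)

def source(n):
    return "".join(random.choice("HT") for _ in range(n))

ests = {}
for n in (10, 100, 10_000):
    ests[n] = sum(plugin_entropy(source(n)) for _ in range(200)) / 200
    print(n, round(ests[n], 3))   # creeps up toward 1.0 as n grows
```

This is the standard negative bias of the plug-in estimator, and it supports EDTA’s caution about applying Shannon’s formula to short finite strings.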

If you just intend to describe information storage capacity, perhaps that terminology would be more straightforward.

EDTA:

The concept of Shannon entropy is useful in evaluating the functional information in a protein. That has been used by Durston.

Indeed, there is a difference between the potential entropy of a string of a given length, that is simply the search space for that string, and the improbability of some specific functional state of that string. In that case, which is what really we are interested in in ID theory, we must calculate the probability of a functional state (for a specifically defined, measurable function), and that is made by dividing the number of functional states by the total number of states (the target space by the search space).

For proteins, the Durston method assigns a Shannon value to each aminoacid in a specific protein molecule, according to the variation of that AA in the known proteome. IOWs, an aminoacid site that is free to change in a practically random way will have the maximum uncertainty. A functional constraint at that aminoacid site will result in some conservation throughout the proteome, and therefore in a reduction of uncertainty.

Therefore, the functional constraint in a protein sequence corresponds to the total reduction of uncertainty determined by function, versus the maximum uncertainty of a totally random, non-functionally-constrained sequence. This is a simple, indirect way of approximating the functional information in a protein family.

So, to sum up:

a) The total search space (potential improbability) of a string state depends only on its length and alphabet. For a protein sequence, it is 20^[protein length in aminoacids].

b) The improbability of a given functional state is the ratio between the functional space and the search space. A simple way of approximating that for protein families is by the application of a concept derived from Shannon entropy to the known proteome.
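A toy version of the Durston-style calculation gpuccio describes can be run on an invented alignment (the sequences below are made up purely for illustration; the real method also weights by observed amino-acid ground frequencies):

```python
# Per-site Shannon uncertainty across a hypothetical alignment, and the
# total reduction from the ~4.32 bits of a fully unconstrained site.
import math

H_MAX = math.log2(20)      # 20 equiprobable amino acids

alignment = [              # invented 6-sequence, 5-site protein family
    "MKVLA",
    "MKVIA",
    "MKVLG",
    "MRVLA",
    "MKVLA",
    "MKVMA",
]

def site_entropy(col):
    h = 0.0
    for aa in set(col):
        p = col.count(aa) / len(col)
        h -= p * math.log2(p)
    return h

fits = 0.0                 # functional bits of the family
for i in range(len(alignment[0])):
    col = [seq[i] for seq in alignment]
    fits += H_MAX - site_entropy(col)  # conserved sites contribute most
print(round(fits, 2))
```

Fully conserved sites contribute the full 4.32 bits each; variable sites contribute less, exactly as described in the comment above.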

EDTA:

Shannon entropy is about the average information per symbol in a communication, where symbols may not be equiprobable (as is the usual case, e.g. ~ 1/8 of normal English text is E or e).

As I have summarised above and in the other thread, when Shannon deduced the metric, it was seen to have the same mathematical shape as the Gibbs expression for the statistical mechanics entropy of a body where for a given Macro-observable state, various micro-states y_i are possible with different probabilities p_i. Where microstates are different arrangements of mass and energy at ultra-microscopic level.

The obvious debate over the parallel occurred, and the upshot of it, per JAYNES et al, is that we do face a communication situation, where we seek to infer concerning what we do not directly see, the specific distribution of mass and energy at micro-level, from what we can see, the major lab-observable characteristics.

Thus, it can be shown that Gibbs Entropy is a scaled measure of the average missing information per such microstate given what we do observe, the macrostate. In effect the more random and large the number of such possibilities, the larger is the quantum of missing info.
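KF’s “average information per symbol” is easy to compute. Using his own ~1/8 figure for the letter e, here is a crude illustrative comparison of an equiprobable 27-symbol source against a biased one (the non-e letters are lumped into a flat remainder purely for illustration):

```python
# Shannon's H: a biased source carries less average surprise per symbol
# than an equiprobable one. The biased distribution is a deliberately
# crude illustration, not real English letter statistics.
import math

def H(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

fair = [1 / 27] * 27                 # 26 letters + space, equiprobable
print(round(H(fair), 2))             # log2(27) ~ 4.75 bits/symbol

biased = [0.125] + [0.875 / 26] * 26 # 'e' at ~1/8, rest spread flat
print(round(H(biased), 2))           # strictly less than the fair case
```

The cumulative information of an N-symbol string is then N*H, which is the usage KF insists on distinguishing from H itself.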

I suggest you may find it useful to refer to what I said in the other thread, especially at 2, 5, 7, and 56 (notice how Shannon himself uses the term “entropy” in an informational context), with 15 – 16 and 23 – 25 above in this thread.

I clip here from 56 in that other thread:

In 57, I went on to make an estimate:

Now, all of this is complex and in parts quite hard to follow, where the literature is complex and in my opinion usually not put together in the best way to introduce the puzzled student. But that is unfortunately typical in several fields of physics once things get highly mathematical.

That is why I normally do focus on the information origination challenge directly, but in the end due to the underlying physics and chemistry linked to the OOL issue, thermodynamics is a connected issue, and one that is hotly contested.

So, when it is raised directly as in the current set of threads here at UD and apparently an earlier set at TSZ, it has to be dealt with.

One of the issues is that in Physics, there are diverse schools of thought on statistical mechanics/ or statistical thermodynamics/ or statistical thermophysics, and these schools diverge in views significantly on the links between thermodynamics and information theory. However in recent years the momentum has been shifting towards the acceptance of some of the insights of Jaynes et al. (That, BTW, is why I have asked whether those who have been discussing this at more technical level have been calling that name, it is a test of whether the informational school of thought has been reckoned with.)

I should let Jaynes speak for himself, though a clip from Harry S Robertson in Statistical Thermophysics, Prentice:

Other than this, I think I find the summary here about as helpful as this field gets. After this, the Wiki article here — which lacks adequate illustrations and introductory level cases — may be also helpful. Leonard Nash’s Elements of Statistical Thermodynamics may be a classic introduction (I am ashamed to have to admit, by a Chemist; Physicists have simply not done a very good job here I find), one that is blessedly short and uses apt illustrations and cases.

After all the issues have been sorted out — including loose use of the Shannon average-info-per-symbol term “entropy”, which is explicitly an average per symbol, to mean the cumulative info communicated by a string of N elements, which is N*H — we come back to this: once we see a living cell, we know that at molecular level, it is in a tightly confined set of possibilities relative to the possible arrangements of the component molecules and atoms. That is, we have moved to a distinct macro-observable state that allows us to know a lot more about the molecules than we would if we were to simply take such a cell and prick it, then decant its components into a test tube, where they would be subject to diffusion etc. This sort of Humpty-Dumpty exercise has been done and the result is predictable: the cell never spontaneously reassembles.

The message there is that the best explanation we have for the sort of relevant functionally specific, complex organisation and information we deal with, is that we have intelligently designed organising work — design — as its only observed and (in light of the needle in the haystack challenge) analytically plausible cause. This is controversial on OOL etc., not because it is not true but because there is a dominant ideology that thinks that such IDOW was not possible at the origin of cell based life, namely evolutionary materialism. They even wave the flag of science as the flag of their party.

KF

PS: Ch 13 of the part of Motion Mountain here may also be helpful.

From KairosFocus himself

That calculation suggests:

THE MORE PARTICLES THERE ARE, THE HIGHER THE ENTROPY! Which corresponds to the Sackur-Tetrode equation I posted in the OP for monoatomic ideal gases!
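The shouted point is checkable. A sketch of the Sackur-Tetrode entropy for a monoatomic ideal gas (worked for argon at 25 °C and 1 atm; constants are rounded CODATA values) shows S growing with particle count at fixed T and P:

```python
# Sackur-Tetrode entropy S = N k [ln(V/(N lam^3)) + 5/2] for a
# monoatomic ideal gas; more particles at fixed T, P means more entropy.
import math

k = 1.380649e-23              # Boltzmann constant, J/K
h = 6.62607015e-34            # Planck constant, J*s
m = 39.948 * 1.66053907e-27   # mass of one argon atom, kg
T, P = 298.15, 101325.0       # 25 C, 1 atm

def sackur_tetrode(N):
    V = N * k * T / P                              # ideal-gas volume
    lam = h / math.sqrt(2 * math.pi * m * k * T)   # thermal wavelength
    return N * k * (math.log(V / (N * lam**3)) + 2.5)

N_A = 6.02214076e23
S_mole = sackur_tetrode(N_A)
print(round(S_mole, 1))       # close to argon's tabulated ~155 J/(K*mol)
print(sackur_tetrode(2 * N_A) > S_mole)  # more particles, more entropy
```

Landing within a fraction of a percent of argon’s measured standard entropy is one of the classic sanity checks of statistical mechanics.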

[UPDATE 9/7/2012]

Boltzmann

SC:

Pardon, but I have the distinct impression that you are beginning to indulge snip-snipe tactics. In a matter as involved and integrated as this, that is not good enough.

Of course, entropy is an extensive state function. That is what you exploited to do your calcs for the mass of Al in a 747, but those partial-story calcs — good enough for certain chem eng tasks — do not differentiate between Al in ingots, Al in sheets, etc. and Al in aircraft. They are in effect based on degrees of freedom of atoms, and temp-related energy distributions leading to numbers of states of the material; so, they simply are not looking at the whole entropy-relevant story.

That is why the underlying Gibbs — Shannon analysis and its link to the missing info on microstate given macrostate become relevant.

That is why I have called your attention to Jaynes repeatedly, and it is why I took time to highlight the correct usage of H, average info per symbol. Also to draw out the parallel to thermodynamics systems through the Gibbs relationship.

As in what I have abbreviated as MmIG, the macro-state micro-state info gap.

And where also I pointed out that a system that is in the sort of constrained state involved in a functionally specific entity, has a very different degree of freedom consistent with that state than just raw materials lying around. The sort of state where I have repeatedly pointed out, the observed way to get there — given the issue of tight zones in a space of possibilities otherwise and the needle in the haystack problem — is IDOW, intelligently directed organising work. Design.

I trust this will help you focus attention on the points of concern.

KF

And will Jaynes contradict this claim?

No. Thus your citation actually favors my claims in these two essays. I demonstrated this in a rather pointed way using your own example of an ideal monoatomic gas.

So why don’t you decode what that means? It means the more particles, the higher the entropy! Hence a 747 has substantially more entropy (on the order of 4.7 million times) than a soda can. Something you seem unwilling to admit directly except in the most hard-to-understand language.
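Since standard entropy is extensive, the “4.7 million times” ballpark plausibly reduces to a ratio of aluminium masses. The ~14 g can mass is an assumption on my part for the sketch:

```python
# Extensivity at work: at the same P and T, the entropy ratio of two
# lumps of the same material is just their mass ratio.
mass_747_kg = 66150.0   # Al in a 747-400, per the OP
mass_can_kg = 0.014     # assumed mass of a typical empty aluminium can

ratio = mass_747_kg / mass_can_kg
print(round(ratio / 1e6, 1))   # -> 4.7 (million)
```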

PS: Trying to rule the long-known link between high entropy states, degradation of energy resources available to do work, and disorder as in effect not to be mentioned is also not good enough. To give you an idea of some of the links, let me snip a discussion on a marbles and pistons model. And, I am pretty well satisfied that we don’t only need to be able to do number calcs, but we must also understand what those numbers are about (BTW, one of the reasons L K Nash is an excellent intro to Stat thermo-D).

I clip my always linked note, APP 1:

_________

>> Let us reflect on a few remarks on the link from thermodynamics to information:

1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce’s relatively serious and balanced assessment, from a panspermia advocate. Sewell’s remarks here are also worth reading. So is Sarfati’s discussion of Dawkins’ Mt Improbable.)

2]

But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now:

Isol System:

| | (A, at Thot) –> d’Q, heat –> (B, at T cold) | |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . “Eqn” A.2

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.

f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::::::::::::::||
=================================
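Steps a] to e] above can be made concrete with illustrative numbers (the temperatures and heat quantity are invented for the example): the hot body loses entropy, the cold body gains more, and the isolated total never decreases.

```python
# Clausius bookkeeping for heat d'Q flowing from hot A to cold B inside
# an isolated system. Numbers are illustrative only.
dQ = 100.0           # J transferred from A to B
Th, Tc = 500.0, 300.0

dS_a = -dQ / Th      # A exports entropy with the heat
dS_b = +dQ / Tc      # B imports more entropy, since Tc < Th
dS_tot = dS_a + dS_b

print(round(dS_a, 3), round(dS_b, 3), round(dS_tot, 3))
print(dS_tot > 0)    # "Eqn" A.2: the 2nd law holds for the isolated whole
```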

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid, think of them as scattered more or less hap-hazardly as would happen in a cake).

2: Now, let the marbles all be at rest to begin with.

3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].

4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.

5: As the glancing angles on collision will vary at random, the marbles hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.

6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.

7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing our the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue. (That is, the process is ergodic.)

8: A pressure would be exerted on the walls of the box by the average force per unit area from collisions of marbles bouncing off the walls, and this would be increased by pushing in the left or right walls (which would do work to push in against the pressure, naturally increasing the speed of the marbles just like a ball has its speed increased when it is hit by a bat going the other way, whether cricket or baseball). Pressure rises, if volume goes down due to compression. (Also, volume of a gas body is not fixed.)

9: Temperature emerges as a measure of the average random kinetic energy of the marbles in any given direction, left, right, to us or away from us. Compressing the model gas does work on it, so the internal energy rises, as the average random kinetic energy per degree of freedom rises. Compression will tend to raise temperature. (We could actually deduce the classical — empirical — P, V, T gas laws [and variants] from this sort of model.)

10: Thus, from the implications of classical, Newtonian physics, we soon see the hard little marbles moving at random, and how that randomness gives rise to gas-like behaviour. It also shows how there is a natural tendency for systems to move from more orderly to more disorderly states, i.e. we see the outlines of the second law of thermodynamics. [ –> This can be elaborated on the number of accessible microstates consistent with a given macrostate and how there is a sharply peaked distribution around the equilibrium value]

11: Is the motion really random? First, we define randomness in the relevant sense:

In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated.

12: This can be seen by the extension of the thought experiment of imagining a large collection of more or less identically set up boxes, each given the same push at the same time, as closely as we can make it.

At first, the marbles in the boxes will behave very much alike, but soon, they will begin to diverge as to path. The same overall pattern of M-B statistics will happen, but each box will soon be going its own way. That is, the distribution pattern is the same but the specific behaviour in each case will be dramatically different.

13: Q: Why?

14: A: This is because tiny, tiny differences between the boxes, and the differences in the vibrating atoms in the walls and pistons, as well as tiny irregularities too small to notice in the walls and pistons will make small differences in initial and intervening states — perfectly smooth boxes and pistons are an unattainable ideal. Since the system is extremely nonlinear, such small differences will be amplified, making the behaviour diverge as time unfolds. A chaotic system is not predictable in the long term. So, while we can deduce a probabilistic distribution, we cannot predict the behaviour in detail, across time. Laplace’s demon who hoped to predict the future of the universe from the covering laws and the initial conditions, is out of a job.

15: To see diffusion in action, imagine that at the beginning, the balls in the right half were red, and those in the left half were black. After a little while, as they bounce and move, the balls would naturally mix up, and it would be very unlikely indeed — though logically possible — for them to spontaneously un-mix, as the number of possible combinations of position, speed and direction where the balls are mixed up is vastly more than those where they are all red to the right, all black to the left or something similar. (This can be calculated, by breaking the box up into tiny little cells such that they would have at most one ball in them, and we can analyse each cell on occupancy, colour, location, speed and direction of motion. Thus, we have defined a phase or state space, going beyond a mere configuration space that just looks at locations.)

16: So, from the orderly arrangement of laws and patterns of initial motion, we see how randomness emerges through the sensitive dependence of the behaviour on initial and intervening conditions. There would be no specific, traceable deterministic pattern that one could follow or predict for the behaviour of the marbles, though we could work out an overall statistical distribution, and could identify overall parameters such as volume, pressure and temperature . . . >>

_________

That is just foundational discussion, so we can get a picture of what is going on. With suitable models and quantities plus some clever algebra and calculus, we can build up various specific thermodynamic models, but these mathematical models will not undo the basic points already seen.
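The red/black ball diffusion picture in the excerpt can be turned into a toy numerical model. This is a minimal sketch, not anything from the excerpt itself: coloured marbles on a line of cells, randomly swapping neighbours, with a count of how rarely the special un-mixed arrangement recurs. Only 1 of the C(20,10) = 184,756 colour patterns is fully un-mixed.

```python
import math
import random

# Toy model of the red/black marble diffusion thought experiment:
# 10 black and 10 red marbles on a line of 20 cells; each step swaps a
# random pair of neighbours, standing in for collisions. We then count
# how often the special un-mixed state (all black left, all red right)
# recurs once shaking has begun.
random.seed(42)

unmixed = ["B"] * 10 + ["R"] * 10   # the single special arrangement
cells = unmixed[:]

steps = 100_000
recurrences = 0
for _ in range(steps):
    i = random.randrange(len(cells) - 1)
    cells[i], cells[i + 1] = cells[i + 1], cells[i]
    if cells == unmixed:
        recurrences += 1

# Only 1 of C(20,10) colour patterns is un-mixed, so recurrences are rare.
print("colour patterns:", math.comb(20, 10))          # 184756
print(f"un-mixed recurrences in {steps} steps:", recurrences)
```

The asymmetry is purely combinatorial: nothing forbids un-mixing, it is just vastly outnumbered.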

In particular, we will see some illustrations of why a rise in entropy is seen as tied to increased disorder, as it is based on more ways that energy and mass at micro-levels can be distributed consistent with a given macro-state. More constrained and special states or clusters of states will be more orderly and less entropic. Regardless of who does not like this qualitative usage.

In the context of FSCO/I, we are dealing with organised states that are recognisable as highly specific — notice that! — based on function. Organised, not merely orderly as in, say, a crystal: we have an information-rich arrangement that is structured around a function, and once the function is there, a knowing observer can know a lot about just how tightly constrained the space of possible configs is.

When that organisation is deranged, function vanishes and the components are nowhere near so tightly constrained from the macro-picture.

It is reasonable to speak about disorganisation and to associate it with a rise in entropy, in the MmIG sense. Notice, we are here measuring an info gap, equivalent to measuring degrees of freedom and energetic implications. Just as, if we have a block of ice at melting point and feed in latent heat, that heat goes into disordering — notice the shift in terms — the molecules, and appears in the higher entropy of water at melting point than of ice at the same temp. We hardly need to underscore that the molecules of the melt water have a greater degree of freedom than those of the ice that has not yet melted.

(Indeed the marble model can be extended to a model of melting, with slight adjustments.)
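The latent-heat point can be put in rough numbers. A sketch using standard textbook values (latent heat of fusion of water about 334 kJ/kg, melting point 273.15 K), which are assumptions added here rather than figures from the comment:

```python
# Feeding latent heat into ice at its melting point raises entropy by
# dS = dQ/T (reversible heat transfer at constant temperature).
# Assumed textbook values: L_f ~ 334 kJ/kg, T_melt = 273.15 K.
L_f = 334_000.0      # latent heat of fusion of water, J/kg (approximate)
T_melt = 273.15      # melting point of ice, K
mass = 1.0           # kg of ice

delta_S = mass * L_f / T_melt
print(f"entropy gain on melting {mass} kg of ice: {delta_S:.0f} J/K")  # about 1223 J/K
```

The water gained entropy without becoming any less "watery" — the gain measures the extra freedom of the molecules, exactly the disordering described above.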

Okay, enough for the moment.

KF

SC:

Insofar as the more Al, the more entropy bound up with being Al at that temp — yes, the more Al, the more entropy. Which is not relevant save to certain chem rxns and associated enthalpy results. Yes, Al makes a nice intense fire [HMS Sheffield or the airplanes in the WTC, anyone . . . ] and even rocket fuel.
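For scale, the head post's figures can be combined in a few lines — a sketch using the 66,150 kg of aluminium and 28.3 J/K/mol standard molar entropy quoted there, plus aluminium's molar mass of roughly 26.98 g/mol, which is an added assumption:

```python
# "The more Al, the more entropy": scaling the head post's numbers.
# Standard molar entropy of Al at 298 K is ~28.3 J/(K*mol) per the head
# post; molar mass of Al is ~26.98 g/mol (assumed here from tables).
S_molar = 28.3          # J/(K*mol), standard molar entropy of Al
M = 26.98               # g/mol, molar mass of Al
mass_g = 66_150 * 1000  # the 747-400's 66,150 kg of Al, in grams

moles = mass_g / M
S_total = moles * S_molar
print(f"{moles:.3e} mol of Al -> S = {S_total:.3e} J/K")
```

The total comes out near 7 × 10^7 J/K — an absolute (third-law) entropy that scales strictly with the amount of metal, whatever shape it is in.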

That — as you have now had ample opportunity to notice — is utterly irrelevant to the issue: entropy in information systems has to do with average info per symbol, and through Jaynes et al it has been noticed that this can be linked to the macro-micro state info gap, giving a scaled metric of the average missing info required to specify the particular microstate, out of those consistent with a given macrostate.

That is, there IS a credible link from entropy to information, which makes considerations on functionally constrained configs relevant. Never mind distractions on what could happen given the heat content [enthalpy] of Al. And it is a relevant issue to reflect on the possible configs and the constraint on such to be in a specifically functional, organised state — even if the numbers will be lower than the thermal ones.

(Ever contrasted the energy to raise 1 kg of H2O 100 m with that to heat it from, say, 25 deg C to boiling? The two are both energy, but it is worth considering the effects under different heads of energy, in connexion with different energy states. And we have not touched on the related issues of nuclear binding energies, which are part of the energy story, too.)

So, my point is that similar reasoning is relevant to the cases and should be used.

KF

http://www.ariehbennaim.com/bo.....ndlaw.html

One thing I like about the Ben-Naim books is that he starts with very simple cases, like a game of 20 questions, or marbles, or dice, and builds on them.

So at first he might deal with two outcomes of equal probability, then perhaps more outcomes with the same sort of distribution, and then add further factors such as a non-equiprobable distribution.
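That progression is easy to mirror in code. A sketch — the specific distributions are illustrative choices, not Ben-Naim's own examples:

```python
import math

# Ben-Naim-style progression: Shannon entropy H = -sum(p * log2 p) for
# (a) two equiprobable outcomes, (b) six equiprobable outcomes (a fair
# die), and (c) a non-equiprobable distribution (a loaded die).
def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1 / 6] * 6
loaded_die = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]  # illustrative loading

print(f"fair coin:  {H(fair_coin):.4f} bits")   # 1.0000
print(f"fair die:   {H(fair_die):.4f} bits")    # log2(6), about 2.5850
print(f"loaded die: {H(loaded_die):.4f} bits")  # less than the fair die
```

Moving probability onto one face lowers H: the less uniform the distribution, the less average surprise per outcome.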

KF finally admits it: the more aluminum in the 747, the more thermodynamic entropy:

and KF writes:

Not relevant to what? Design. If so, thank you. Thermodynamic entropy is not relevant to design. Thanks for proving my point.

Thus, the Clausius postulate and the Kelvin-Planck postulate (the two primary statements of the 2nd law of thermodynamics) are irrelevant to design. Thank you very much. You could have been a little more succinct in saying so.

So my calculations stand and so does the inference: thermodynamic entropy increases with increasing amounts of matter. Therefore a fully functioning 747 has more thermodynamic entropy than a 747 that had its wings ripped off by a tornado. Hence a tornado reduces the thermodynamic entropy of a 747 in that case, it does not increase it. Thank you very much.

And if you want to insist on Shannon and symbols, the 747 with its diverse organization also has substantially more Shannon entropy than a soda can. So if you actually followed the consequences of Jaynes, you’d come to the same conclusion: designs (be they man-made or whatever) often require more entropy, not less!

Your insistence that I read Jaynes is tiresome since you don’t actually admit the consequences of Jaynes: the more parts required for a functioning system, the higher its entropy — which is exactly the claim of this two-part series. Nay, YOU need to read Jaynes and understand it, not me.

Also, for the love of PZ Myers, can’t you be a little more focused in your writing style? Your handle implies focus, not circumlocution. Perhaps the remedy for the dissonance between the writing style and the handle is to rename the handle as KairosCircumlocution.

SC:

I am sorry, it is time to move beyond snip and snipe rhetorical games. Entropy issues are too intricate and connected for that sort of rhetorical resort to be used, whether willfully or because we do not appreciate the relevance of context.

(To use an easily understood illustration, proof texting Bible verse hopscotch with disregard to immediate and broader contexts is not helpful in Bible study. The same error is even less effective with thermodynamics, especially where statistical thermodynamics is relevant.)

You will see my main summary response here.

Now, you will notice above that I did not bring out details on the issue of the internal energy in an atom of Al due to its having a binding energy per nucleon of about 8.3 MeV. That is strictly a part of its energy content, enthalpy and so entropy too.

But we do not talk of such in general, because this part of the strictly correct account is not generally relevant unless we are in a supermassive star that is running down to having an Iron core.

Where, enthalpy H = U + PV, and onward:

dH(S, p) = T dS + V dp

Or if dp = 0, i.e. under constant pressure (just to highlight):

dH(S) = T dS

Similarly, to raise 1 kg of H2O 100 m, we use

1 kg * 9.8 N/kg * 100 m = 980 J of energy

And to heat the same kg from 25 to 100 degrees C, but not to actually boil it, would take:

1 kg * 4181 J/kg·K * 75 K = 3.136 * 10^5 J

Now, in describing energy involved in mechanical matters, it would be a distraction to point to how much more energy is bound up in the heat content of the water or in heating it up or cooling it down. Even if the water’s temp (and perhaps even phase) is changing during the process of lifting it. That is a part of the wider energy account, but it is not relevant to the focal issue of gravitational potential energy involved in lifting the body of water.
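The two figures can be checked side by side; a minimal sketch using the same constants as the calculation above:

```python
# Gravitational potential energy to lift 1 kg of water 100 m, versus the
# heat to warm the same kg from 25 C to 100 C (specific heat ~4181
# J/(kg*K), no boiling) -- the contrast made in the comment above.
m = 1.0       # kg of water
g = 9.8       # N/kg, gravitational field strength
h = 100.0     # m, height lifted
c = 4181.0    # J/(kg*K), specific heat capacity of water
dT = 75.0     # K, temperature rise from 25 C to 100 C

E_lift = m * g * h    # 980 J
E_heat = m * c * dT   # 3.136e5 J
print(f"lift: {E_lift:.0f} J, heat: {E_heat:.3e} J, ratio ~{E_heat / E_lift:.0f}x")
```

Heating dwarfs lifting by a factor of about 320 — yet when the question is mechanical work, the thermal term is simply the wrong ledger entry, which is the point being made.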

You have, unfortunately, done the like of that error.

Right from the outset I pointed out (a) that the entropy component tied to being Al at a certain temp is irrelevant to the issue of the formation of that Al into a system such as a 747, and (b) that to go on to play system-boundary games, such that if a tornado twists and tears off sufficient mass, the part left inside the imposed boundary will have less mass, is (c) a further error of assignment of system boundary that distracts from — indeed strawmannises — the configurational issue.

Yes, we can find entropy numbers for Al at a given temp in tables.

These are useful for, say, the folks who want to use the enthalpy locked away in the Al to make it into rocket fuel. (I am fairly sure bin Laden did not do any precise calculation; all he needed to know is that jet fuel is much like kerosene and that Al immersed in a hot enough fire would burn like HMS Sheffield did when the fuel in the Exocet missile that hit it set its Al superstructure on fire off the Falkland Islands. And, that Al is a component of the rocket fuel formerly used in the Shuttle’s boosters.)

But, such is irrelevant to the focal questions as to:

Once that bridge is established, regardless of those who do not like it, configurational and informational matters are a part of the entropy story. Which has been done since the early efforts of Brillouin and with more elaboration since Jaynes et al.

I have outlined the way that becomes relevant to the concerns linked to design theory here, earlier this morning.

My underlying point is that (d) we should and normally do discuss the relevant aspect of a thermodynamic quantity, and should not allow ourselves to be distracted by something that is irrelevant to the matter under consideration, which is (e) that component of entropy which is tied to configuration and which is in turn linked to information. The characteristic concern of those looking at disorder vs organisation or even order is matters of configuration.

And so, it is those issues that must be focal.

KF

Pardon typo: 3.136 * 10^5 J

PS: It should be noted that the true focal issue is the bridge between information and entropy in light of the Gibbs analysis. That, I have focussed on, noting as necessary over the past several days that a partial entropy calc on the state of a mass of Al at a given temp is a side track from that.

A tornado that rips the wings off a 747 changes not one whit how much information is required to construct a 747.

And if your zone of interest is how to build a 747, the amount of missing information isn’t changed by a tornado either.

In fact, it’s most difficult to ascertain just what Sal’s point is throughout this entire exercise.

On the one hand he seems to be claiming that entropy and intelligent design are not connected.

But then he also states that “a designed object’s entropy must increase for its design complexity to increase.”

Now that’s just a bizarre statement. What does it even mean?

If we load a 100 kg block of aluminum onto a 747 we have increased its entropy, but doing so has not increased the design complexity of the 747; rather, it has only made it possible to increase the design complexity of the 747? How so?

If we heat the 747, does that increase its entropy? Does that then make it possible to increase the design complexity of the 747 in some way that not heating it would not have allowed?

What is Entropy?

Mung:

I think the root issue is to understand the difference between average info per symbol sent and the MmIG that Gibbs’ metric points to.

When a complex — many part, high info storing potential — system is constrained to be in a high info functional state, there is high known info, at macro level. This can be seen by using the black box (BB) model, where one figuratively pushes the button to see emission of a string of symbols that reveal its info state:

Such a system may or may not have high info per symbol or component involved, but in aggregate its info content is high and constrained by the overall function that is observable. (Where, there is a sloppy error of transferring the proper usage of Shannon’s H as avg info per symbol to the info content of a message with N elements, N * H.)
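The distinction can be made concrete in a few lines. A sketch assuming 128 equiprobable 7-bit ASCII symbols — an idealisation added here, since real text is not equiprobable:

```python
import math

# Distinguishing H (average info per symbol) from N*H (total info
# capacity of an N-symbol message). Assumes 128 equiprobable 7-bit
# ASCII symbols, an idealisation of real text.
H_per_symbol = math.log2(128)   # = 7 bits per symbol
N = 72                          # characters in the emitted string

total_capacity = N * H_per_symbol
print(f"H = {H_per_symbol:.0f} bits/symbol; N*H = {total_capacity:.0f} bits")
```

N*H comes out to 504 bits — presumably why the coin-tray examples in these comments use 504 coins (72 × 7): calling that total "entropy" is precisely the loose usage being flagged.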

On heating the 747, we have indeed increased its entropy; and today is the 11th anniversary of the horrific proof: we are moving it towards the point where its Al atoms have enough random energy to be free to engage in reactions with the O2 etc. and burn. Or we could simply melt the plane. In so doing we would trigger a phase change and in addition destroy the functional config.

The issue of confusion of entropy with complexity is the same one about the loose use of entropy to mean N*H.

And I keep on pointing out just how often entropy shows up in the sense of time’s arrow: the direction of spontaneous change to access clusters of microstates that are dominant numerically, once the constraints that keep the system in other states are relieved. The 504-coin tray that starts out in HTHT . . . HT, then is shaken up and gradually reverts to near-50:50 H & T in no particular order, is a good example in point. This system has moved from a high-order to a low-order config, one that is readily seen as being part of the predominant cluster of microstates.
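That coin-tray relaxation can be simulated directly. A minimal sketch, in which "shaking" is modelled by re-randomising one coin at a time — an assumption standing in for physical agitation:

```python
import random

# 504 coins start in the ordered HTHT...HT pattern; shaking re-randomises
# coins one at a time. The 50:50 heads ratio survives, but the special
# alternating order does not: the tray drifts into the dominant cluster
# of no-particular-order microstates.
random.seed(0)

ordered = [i % 2 for i in range(504)]   # the HTHT... pattern (1 = heads)
coins = ordered[:]

for _ in range(50_000):                 # shake: re-randomise a random coin
    coins[random.randrange(504)] = random.randrange(2)

heads = sum(coins)
still_matching = sum(c == t for c, t in zip(coins, ordered))
print(f"heads: {heads}/504 (near 50:50)")
print(f"positions still matching HTHT...: {still_matching}/504 (near chance)")
```

Both counts settle near 252, but for different reasons: the heads fraction is conserved statistically, while agreement with the original pattern decays to the chance level — order lost, macrostate statistics retained.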

We can imagine a BB that has a coin tray in it and a coin reader that on pushing the button, emits the 504 bit coin state.

Now, go away for a time and pick back up the same coin tray BB. Push the button again, and now we see the first 72 ASCII characters from this post. On many grounds, the best explanation for this organised state is IDOW, i.e. design by intelligently directed organising work. Such a system is in a tightly constrained functional state; though distinct from the original ordered form, it is in an organised, functional, info-rich state.

That is the essential form of the design inference on tested, reliable sign that objectors to ID are so loath to acknowledge.

It also extends to the point that just flying either the 747 or the microjet of the other thought exercise is telling us a lot about a constrained, specific and functional, complex internal state, one tied to configuration work, IDOW.

KF

It seems very obvious to me that there is a creative intelligence (or force or agency) which is the source of anything new. Otherwise, why would there be new species or individuals? There is no reason or law in science which says ‘new things must happen’. Natural philosophy safely assumes that there is a constant source of new things – the legendary cornucopia – but it can’t, and actually doesn’t need to, explain it.

The only glimmer of bad news is that Christians don’t necessarily have a patent on it.
