The common belief is that adding disorder to a designed object will destroy the design (like a tornado passing through a city, to paraphrase Hoyle). Now if increasing entropy implies increasing disorder, creationists will often reason that “increasing entropy of an object will tend to destroy its design”. This essay will argue mathematically that this popular notion among creationists is wrong.

The correct conception of these matters is far more nuanced and almost the opposite of (but not quite) what many creationists and IDists believe. Here is the more correct view of entropy’s relation to design (be it man-made or otherwise):

1. increasing entropy can increase the capacity for disorder, but it doesn’t necessitate disorder

2. increasing an object’s capacity for disorder doesn’t imply that the object will immediately become more disordered

3. increasing entropy in a physical object is a necessary (but not sufficient) condition for increasing the complexity of the design

4. contrary to popular belief, a complex design is a high entropy design, not a low entropy design. The complex organization of a complex design is made possible (and simultaneously improbable) by the high entropy the object contains.

5. without entropy there is no design

If there is one key point it is: Entropy makes design possible but simultaneously improbable. And that is the nuance that many on both sides of the ID/Creation/Evolution controversy seem to miss.

The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer. Some of the math will be in a substantially simplified form, so apologies in advance to the formalists out there.

Entropy may refer to:

1. Thermodynamic (Statistical Mechanics) entropy – measured in Joules/Kelvin, dimensionless units, degrees of freedom, or (if need be) bits

2. Shannon entropy – measured in bits or dimensionless units

3. Algorithmic entropy or Kolmogorov complexity – measured also in bits, but deals with the compactness of a representation. A file that can be compressed substantially has low algorithmic entropy, whereas a file that can’t be compressed evidences high algorithmic entropy (Kolmogorov complexity). Both Shannon entropy and algorithmic entropy fall within the realm of information theory, but by default, unless otherwise stated, most people mean Shannon entropy when they speak of the entropy in information theory.

4. Disorder in the popular sense – no real units assigned, often not precise enough to be of scientific or engineering use. I’ll argue the term “disorder” is a misleading way to conceptualize entropy. Unfortunately, the word “disorder” is used even in university science books. I will argue mathematically why this is so…

The reason the word *entropy* is used in the disciplines of Thermodynamics, Statistical Mechanics and Information Theory is that there are strong mathematical analogies. The evolution of the notion of entropy began with Clausius who also coined the term for thermodynamics, then Boltzmann and Gibbs related Clausius’s notions of entropy to Newtonian (Classical) Mechanics, then Shannon took Boltzmann’s math and adapted it to information theory, and then Landauer brought things back full circle by tying thermodynamics to information theory.

How entropy became equated with disorder, I do not know, but the purpose of these essays is to walk through actual calculations of entropy and allow the reader to decide for himself whether disorder can be equated with entropy. My personal view is that Shannon entropy and Thermodynamic entropy cannot be equated with disorder, even though the lesser-known algorithmic entropy can. So in general entropy should not be equated with disorder. Further, the problem of organization (which goes beyond simple notions of order and entropy) needs a little more exploration. Organization sort of stands out as a quality that seems difficult to assign numbers to.

The calculations that follow illustrate how I arrived at some of my conclusions.

First I begin with calculating Shannon entropy for simple cases. Thermodynamic entropy will be covered in Part II.

Bill Dembski actually alludes to Shannon entropy in his latest offering on Conservation of Information Made Simple

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).

– William Dembski, Conservation of Information Made Simple

To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).

After a coin flip, the probability of the coin emerging in microstate 1 (heads) is 1/2. Similarly the probability of the coin emerging in microstate 2 (tails) is 1/2. So let me tediously summarize the facts:

N = Ω(N) = Ω = Number of microstates of a 1-coin system = 2

x_{1} = microstate 1 = heads

x_{2} = microstate 2 = tails

P(x_{1}) = P(microstate 1) = P(heads) = probability of heads = 1/2

P(x_{2}) = P(microstate 2) = P(tails) = probability of tails = 1/2

Here is the process for calculating the Shannon entropy of a 1-coin information system, starting with Shannon’s famous formula:

I = -Σ_i P(x_i) log_2 P(x_i) = -[ P(x_1) log_2 P(x_1) + P(x_2) log_2 P(x_2) ] = -[ (1/2)(-1) + (1/2)(-1) ] = 1 bit

where I is the Shannon entropy (or measure of information).

This method seems a rather torturous way to calculate the Shannon entropy of a single coin. A slightly simpler method exists if we take advantage of the fact that each microstate of the coin (heads or tails) is equiprobable, and thus conforms to the fundamental postulate of statistical mechanics, and thus we can calculate the number of bits by simply taking the logarithm of the number of microstates as is done in statistical mechanics.
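To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original derivation) that computes the 1-coin entropy both ways: by Shannon’s sum over microstate probabilities, and by the log-of-microstates shortcut:

```python
import math

# Microstate probabilities for a fair coin: heads and tails.
probabilities = [0.5, 0.5]

# The "torturous" way: Shannon's formula I = -sum(P(x_i) * log2(P(x_i))).
shannon_entropy = -sum(p * math.log2(p) for p in probabilities)

# The shortcut for equiprobable microstates: log2 of the microstate count.
num_microstates = 2
shortcut_entropy = math.log2(num_microstates)

print(shannon_entropy)   # 1.0 bit
print(shortcut_entropy)  # 1.0 bit
```

Both routes agree, as they must whenever the microstates are equiprobable.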

Now compare this equation of the Shannon entropy in information theory,

I = log_2 Ω

to Boltzmann entropy from statistical mechanics and thermodynamics,

S = k_{b} ln Ω

and even more so using different units whereby k_{b} = 1:

S = ln Ω

The similarities are not an accident. Shannon’s ideas of information theory are a descendant of Boltzmann’s ideas from statistical mechanics and thermodynamics.

To explore Shannon entropy further, let us suppose we have a system of 3 distinct coins. The Shannon entropy relates the amount of information that will be gained by observing the collective state (microstate) of the 3 coins.

First we have to compute the number of microstates or ways the system of coins can be configured. I will lay them out specifically.

microstate 1 = H H H

microstate 2 = H H T

microstate 3 = H T H

microstate 4 = H T T

microstate 5 = T H H

microstate 6 = T H T

microstate 7 = T T H

microstate 8 = T T T

N = Ω(N) = Ω = Number of microstates of a 3-coin system = 8

So there are 8 microstates or outcomes the system can realize. The Shannon entropy can be calculated in the torturous way:

I = -Σ_i P(microstate i) log_2 P(microstate i) = -8 × (1/8) log_2 (1/8) = 3 bits

or simply by taking the logarithm of the number of microstates:

I = log_2 Ω = log_2 8 = 3 bits

It can be shown that the Shannon entropy of a system of N distinct coins is equal to N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, and so on.
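The 3-coin enumeration, and the claim that N coins carry N bits, can be checked with a short Python sketch (the function name coin_system_entropy is my own):

```python
import itertools
import math

def coin_system_entropy(n_coins):
    # Enumerate every microstate: each coin independently lands H or T.
    microstates = list(itertools.product("HT", repeat=n_coins))
    p = 1 / len(microstates)  # each microstate is equiprobable
    # Torturous way: sum -P log2 P over all microstates.
    torturous = -sum(p * math.log2(p) for _ in microstates)
    # Simple way: log2 of the microstate count.
    simple = math.log2(len(microstates))
    return len(microstates), torturous, simple

count, torturous, simple = coin_system_entropy(3)
print(count)      # 8 microstates
print(torturous)  # 3.0 bits
print(simple)     # 3.0 bits
```

Running the same function for 1, 2, 4, … coins confirms the N-coins-means-N-bits pattern.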

Notice, the more microstates there are, the more uncertainty exists about which microstate the system will be found in. Equivalently, the more microstates there are, the more improbable it is that the system will be found in any given microstate. Hence, entropy is sometimes described in terms of improbability or uncertainty or unpredictability. But we must be careful here: uncertainty is not the same thing as disorder. That is a subtle but important distinction.

So what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.
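As a quick sanity check (my sketch; Python’s integers make the 2^500 count exact):

```python
import math

# 500 distinct coins have 2**500 equiprobable microstates.
entropy_bits = math.log2(2**500)
print(entropy_bits)  # 500.0 bits

# Probability of any one specific microstate (e.g. all heads):
p_microstate = 2.0 ** -500
print(p_microstate)  # on the order of 1e-151, beneath the universal probability bound
```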

By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.

Suppose we found that a system of 500 coins were all heads, what is the Shannon entropy of this 500-coin system? Answer: 500 bits. No matter what configuration the system is in, whether ordered (like all heads) or disordered, the Shannon entropy remains the same.

Now suppose a small tornado went through the room where the 500 coins resided (with all heads before the tornado), what is the Shannon entropy after the tornado? Same as before, 500-bits! What may arguably change is the algorithmic entropy (Kolmogorov complexity). The algorithmic entropy may go up, which simply means we can’t represent the configuration of the coins in a compact sort of way like saying “all heads” or in the Kleene notation as H*.
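The claim that the tornado changes the algorithmic entropy but not the Shannon entropy can be illustrated with a compression experiment. Kolmogorov complexity is not computable exactly, so the sketch below uses zlib compressed size as a crude proxy (the seed and string length are my arbitrary choices):

```python
import random
import zlib

random.seed(0)  # reproducible "tornado"

all_heads = "H" * 500
tornado = "".join(random.choice("HT") for _ in range(500))

# Both configurations have the same Shannon entropy (500 bits of capacity),
# but compressed size -- a rough stand-in for algorithmic entropy -- differs sharply.
size_ordered = len(zlib.compress(all_heads.encode()))
size_disordered = len(zlib.compress(tornado.encode()))

print(size_ordered)     # small: "all heads" has a compact description, like H*
print(size_disordered)  # much larger: no compact description survives the tornado
```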

Amusingly, if in the aftermath of the tornado’s rampage, the room got cooler, the thermodynamic entropy of the coins would actually go down! Hence the order or disorder of the coins is independent not only of the Shannon entropy but also the thermodynamic entropy.

Let me summarize the before and after of the tornado going through the room with the 500 coins:

BEFORE : 500 coins all heads, Temperature 80 degrees

Shannon Entropy : 500 bits

Algorithmic Entropy (Kolmogorov complexity): low

Thermodynamic Entropy : some finite starting value

AFTER : 500 coins disordered

Shannon Entropy : 500 bits

Algorithmic Entropy (Kolmogorov complexity): high

Thermodynamic Entropy : lower if the temperature is lower, higher if the temperature is higher

Now to help disentangle concepts a little further, consider three computer files:

File_A : 1 gigabit of binary numbers randomly generated

File_B : 1 gigabit of all 1’s

File_C : 1 gigabit encrypted JPEG

Here are the characteristics of each file:

File_A : 1 gigabit of binary numbers randomly generated

Shannon Entropy: 1 gigabit

Algorithmic Entropy (Kolmogorov Complexity): high

Thermodynamic Entropy: N/A

Organizational characteristics: highly disorganized

inference : not designed

File_B : 1 gigabit of all 1’s

Shannon Entropy: 1 gigabit

Algorithmic Entropy (Kolmogorov Complexity): low

Thermodynamic Entropy: N/A

Organizational characteristics: highly organized

inference : designed (with qualification, see note below)

File_C : 1 gigabit encrypted JPEG

Shannon Entropy: 1 gigabit

Algorithmic Entropy (Kolmogorov complexity): high

Thermodynamic Entropy: N/A

Organizational characteristics: highly organized

inference : extremely designed
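A rough way to see the algorithmic-entropy column above is to compress small stand-ins for the three files. This is only a sketch: zlib compressed size is a proxy for Kolmogorov complexity, the file sizes are shrunk for the demo, and os.urandom stands in for encrypted JPEG data (well-encrypted output is statistically indistinguishable from random bits):

```python
import os
import random
import zlib

random.seed(42)
n_bytes = 100_000  # stand-in for "1 gigabit", kept small for the demo

file_a = bytes(random.getrandbits(8) for _ in range(n_bytes))  # random bits
file_b = b"\xff" * n_bytes                                     # all 1's
file_c = os.urandom(n_bytes)  # stand-in for an encrypted JPEG

ratios = {}
for name, data in [("File_A", file_a), ("File_B", file_b), ("File_C", file_c)]:
    # Compression ratio near 1.0 => incompressible => high algorithmic entropy.
    ratios[name] = len(zlib.compress(data)) / len(data)
    print(name, round(ratios[name], 3))
# File_A and File_C barely compress (high algorithmic entropy);
# File_B collapses to almost nothing (low algorithmic entropy).
```

Note that compression alone cannot distinguish File_A (noise) from File_C (encrypted design); that is exactly why the design inference requires an independent specification and not merely an entropy score.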

Notice, one cannot ascribe high levels of improbable design based on the Shannon entropy or algorithmic entropy without some qualification. Existence of improbable design depends on the existence of high Shannon entropy, but is somewhat independent of algorithmic entropy. Further, to my knowledge, there is not really a metric for organization that is separate from Kolmogorov complexity, but this definition needs a little more exploration and is beyond my knowledge base.

Only in rare cases will high Shannon entropy and low algorithmic entropy (Kolmogorov complexity) result in a design inference. One such example is 500 coins all heads. The general method to infer design (including man-made designs), is that the object:

1. has High Shannon Entropy (high improbability)

2. conforms to an independent (non-postdictive) specification

In contrast to the design of coins being all heads where the Shannon entropy is high but the algorithmic entropy is low, in cases like software or encrypted JPEG files, the design exists in an object that has both high Shannon entropy and high algorithmic entropy. Hence, the issues of entropy are surely nuanced, but on balance entropy is good for design, not always bad for it. In fact, if an object evidences low Shannon entropy, we will not be able to infer design reliably.

The reader might be disturbed at my final conclusion in as much as it grates against popular notions of entropy and creationist notions of entropy. But well, I’m no stranger to this controversy. I explored Shannon entropy in this thread because it is conceptually easier than its ancestor concept of thermodynamic entropy.

In the Part II (which will take a long time to write) I’ll explore thermodynamic entropy and its relationship (or lack thereof) to intelligent design. But in brief, a parallel situation often arises: the more complex a design, the higher its thermodynamic entropy. Why? The simple reason is that more complex designs involve more parts (molecules) and more molecules in general imply higher thermodynamic (as well as Shannon) entropy. So the question of Earth being an open system is a bit beside the point since entropy is essential for intelligent designs to exist in the first place.

[UPDATE: the sequel to this thread is in Part 2]

Acknowledgements (both supporters and critics):

1. Elizabeth Liddle for hosting my discussions on the 2nd Law at TheSkepticalZone

2. physicist Olegt who offered generous amounts of time in plugging the holes in my knowledge, particularly regarding the Liouville Theorem and Configurational Entropy

3. retired physicist Mike Elzinga for his pedagogical examples and historic anecdotes. HT: the relationship of more weight to more entropy

4. An un-named theoretical physicist who spent many hours teaching his students the principles of Statistical Mechanics and Thermodynamics

5. physicists Andy Jones and Rob Sheldon

6. Neil Rickert for helping me with Latex

7. Several others that have gone unnamed

NOTE:

[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn’t necessarily warranted. It’s possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer’s intention.]

[UPDATE 9/7/2012]

Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two of more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

One problem, as with neo-Darwinists: you don’t have any physical example of ‘not always bad’, i.e. you have not one molecular machine or one functional protein coming about by purely material processes. But the IDists and creationists have countless examples of purely material processes degrading designs.

SC:

Let’s start with Boltzmann’s relation

S = k ln W

where W is the number of ways that mass and/or energy at the ultra-microscopic level may be arranged, consistent with a given Macroscopic [lab-level observable] state.

That constraint is crucial and brings out a key subtlety in the challenge to create functionally specific organisation in complex [multi-part] systems through forces of blind chance and mechanical necessity.

FSCO/I is generally deeply isolated in the space of raw configurational possibilities, and is not normally created by nature working freely. Nature, working freely, on the gamut of our solar system or of the observed cosmos, will blindly sample the space from some plausible, typically arbitrary initial condition, and thereafter it will undergo a partly blind random walk, and there may be mechanical dynamics at work that will impress a certain orderly motion, or the like.

(Think about molecules in a large parcel of air participating in wind and weather systems. The temperature is a metric of avg random energy per degree of freedom of relevant particles, usually translational, rotational and vibrational. At the same time, the body of air as a whole is drifting along in the wind that may reflect planetary scale convection.)

Passing on to Shannon’s entropy in the information context (and noting Jaynes et al on the informational view of thermodynamics that I do not see adequately reflected in your remarks above — there are schools of thought here, cf. my note here on), what Shannon was capturing is average info per symbol transmitted in the case of non equiprobable symbols; the normal state of codes. This turns out to link to the Gibbs formulation of entropy you cite. And, I strongly suggest you look at Harry S Robertson’s Statistical Thermophysics Ch 1 (Prentice) to see what it seems from appearances that your interlocutors have not been telling you. That is, there is a vigorous second school of thought within physics on stat thermo-d, that bridges to Shannon’s info theory.

Wikipedia bears witness to the impact of this school of thought:

So, when we see the value of H in terms of uncommunicated micro-level information based on lab observable state, we see that entropy, traditionally understood per stat mech [degrees of micro-level freedom], is measuring the macro-micro info-gap [MmIG], NOT the info we have in hand per macro-observation.

The subtlety this leads to is that

when we see a living unicellular species of type x, providing we know the genome, through lab level observability, we know a lot about the specific molecular states from a lab level observation. The MmIG is a lot smaller, as there is a sharp constraint on possible molecular-level configs, once we have a living organism in hand. When it dies, the active informationally directed maintenance of such ceases, and spontaneous changes take over. The highly empirically reliable result is well known: decay and breakdown to simpler component molecules.

We also know that, in the period of historic observation and record (back to the days of early microscopy 350 years back), this is passed on from generation to generation by algorithmic processes. Such a system is in a programmed, highly constrained state governed by gated encapsulation, metabolic automata that manage an organised flow-through of energy and materials [much of this in the form of assembled smart polymers such as proteins], backed up by a von Neumann self-replicator [vNSR].

We can also infer on this pattern right back to the origins of cell-based life, on the relevant macro-traces of such life.

So, how do we transition from Darwin’s warm pond with salts [or the equivalent] state, to the living cell state?

The dominant OOL school, under the methodological naturalism imposition, poses a claimed chem evo process of spontaneous cumulative change. This runs right into the problem of accessing deeply isolated configs spontaneously.

For, sampling theory and common sense alike tell us that pond state — due to the overwhelming bulk of configs and some very adverse chemical reaction equilibria overcome in living systems by gating, encapsulation and internal functional organisation that uses coded data and a steady flow of ATP energy battery molecules to drive algorithmic processes — will be dominant over spontaneous emergence at organised cell states (or any reasonable intermediates).

There is but one empirically confirmed means of getting to FSCO/I, namely design.

In short,

on evidence, the info-gap between pond state and cell state, per the value of FSCO/I as sign, is best explained as being bridged by design that feeds in the missing info and, through intelligently directed organising work [IDOW], creates in this case a self-replicating micro-level molecular nanotech factory. That self-replication also uses an information- and organisation-rich vNSR, and allows a domination of the situation by a new order of entity, the living cell.

So,

it is vital for us to understand at the outset of discussion that the entropy in a thermodynamic system is a metric of missing information on the microstate, given the number of microstate possibilities consistent with the macro-observable state. That is, entropy measures the MmIG.

Where also,

the living cell is in a macro-observable state that initially, and from generation to generation [via vNSR in algorithmically controlled action on coded information], locks down the number of possible states drastically relative to pond state.

The debate on OOL, then, is about whether it is a credible argument on observed evidence in the here and now, for pond state, via nature operating freely and without IDOW, to go to cell-state. (We know that IDOW routinely creates FSCO/I, a dominant characteristic of living cells.)

A common argument is that raw injection of energy suffices to bridge the info-gap without IDOW, as the energy flow and materials flows allow escape from “entropy increases in isolated systems.” What advocates of this do not usually disclose is that raw injection of energy tends to go to heat, i.e. to a dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating. Where also, for instance, spontaneously ordered systems like hurricanes are not based on FSCO/I, but instead on the mechanical necessities of Coriolis forces acting on large masses of air moving under convection on a rotating spherical body.

(Cf my discussion here on, remember, I came to design theory by way of examination of thermodynamics-linked issues. We need to understand and visualise step by step what is going on behind the curtain of serried ranks of algebraic, symbolic expressions and forays into calculus and partial differential equations etc. Otherwise, we are liable to miss the forest for the trees. Or, the old Wizard of Oz can lead us astray.)

A good picture of the challenge was posed by Shapiro in Sci AM, in challenging the dominant genes first school of thought, in words that also apply to his own metabolism first thinking:

Orgel’s reply in a posthumous paper is equally revealing on the escape-from-IDOW problem:

So, we have to pull back the curtain and make sure we first understand that the sense in which entropy is linked to information in a thermodynamics context is that we are measuring missing info on the micro-state given the macro-state. So, we should not allow the similarity of mathematics to lead us to think that IDOW is irrelevant to OOL, once a system is opened up to energy and mass flows.

In fact, given the delicacy and unfavourable kinetics and equilibria involved — notice all those catalysing enzymes and ATP energy-battery molecules in life? — the challenge of IDOW is the elephant standing in the middle of the room that ever so many are desperate not to speak about.

KF

Sal:

Great post!

A few comments:

a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.

b) Regarding the example in:

File_B : 1 gigabit of all 1’s

Shannon Entropy: 1 gigabit

Algorithmic Entropy (Kolmogorov Complexity): low

Organizational characteristics: highly organized

inference : designed

I would say that the inference of design is not necessarily warranted. According to the explanatory filter, in the presence of this kind of compressible order we must first ascertain that no deterministic effect is the cause of the apparent order. IOWs, many simple deterministic causes could explain a series of 1s, however long. Obviously, such a scenario would imply that the system that generates the string is not random, or that the probabilities of 0 and 1 are extremely different. I agree that, if we have assurance that the system is really random and the probabilities are as described, then a long series of 1s allows the design inference.

c) A truly pseudo-random string, which has no formal evidence of order (no compressibility), like the jpeg file, but still conveys very specific information, is certainly the best scenario for design inference. Indeed, as far as I know, no deterministic system can explain the emergence of that kind of object.

d) Regarding the problem of specification, I paste here what I posted yesterday in another thread, as I believe it is pertinent to the discussion here:

“I suppose much confusion derives from Shannon’s theory, which is not, and never has been, a theory about information, but is often considered as such.

Contemporary thought, in the full splendor of its dogmatic reductionism, has done its best to ignore the obvious connection between information and meaning. Everybody talks about information, but meaning is quite a forbidden word. As if the two things could be separated!

I have discussed for days here with darwinists just trying to have them admit that such a thing as “function” does exist. Another forbidden word.

And even IDists often are afraid to admit that meaning and function cannot even be defined if we do not refer to a conscious being. I have challenged everybody I know to give a definition, any definition, of meaning, function and intent without referring to conscious experience. How strange: the same concepts on which all our life, and I would say also all our science and knowledge, are based, have become forbidden in modern thought. And consciousness itself, what we are, the final medium that cognizes everything, can scarcely be mentioned, if not to affirm that it is an unscientific concept, or even better a concept completely reducible to non-conscious aggregations of things (!!!).

The simple truth is: there is no cognition, no science, no knowledge, without the fundamental intuition of meaning. And that intuition is a conscious event, and nothing else.

There is no understanding of meaning in stones, rivers or computers. Only in conscious beings. And information is only a way to transfer meaning from one conscious being to another. Through material systems, that carry the meaning but have no understanding of it.

That’s what Shannon considered: what is necessary to transfer information through a material system. In that context, meaning is not relevant, because what we are measuring is only a law of transmission.

The same is true in part for ID. The measure of complexity is a Shannon measure, it has nothing to do with meaning. A random string can be as complex as a meaningful string.

But the concept of specification does relate to meaning, in one of its many aspects, for instance as function. The beautiful simplicity of ID theory is that it measures the complexity necessary to convey a specific meaning. That is simple and beautiful, because it connects the quantitative concept of Shannon complexity to the qualitative aspect of meaning and function.”

F/N: I have put the above comment up with a diagram here.

F/N 2: We should bear in mind that information arises when we move from an a priori state to an a posteriori one where with significant assurance we are in a state that is to some degree or other surprising. Let me clip my always linked note, here on:

A baseline for discussion.

KF

It is interesting to note that in the building of better random number generators for computer programs, a better source of entropy is required:

And Indeed we find:

And the maximum source of randomness in the universe is found to be,,,

,,, there is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:

Thus, though neo-Darwinian atheists may claim that evolution is as well established as Gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of Gravity, testifies very strongly against the entire concept of ‘random’ Darwinian evolution.

also of note, quantum mechanics, which is even stronger than general relativity in terms of predictive power, has a very different ‘source for randomness’ which sets it as diametrically opposed to materialistic notion of randomness:

Needless to say, finding ‘free will conscious observation’ to be ‘built into’ quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy which demands randomness as the driving force of creativity! Could these two different sources of randomness in quantum mechanics and General relativity be one of the primary reasons of their failure to be unified???

Further notes: Boltzmann, as this following video alludes to,,,

,,,being a materialist, thought of randomness, entropy, as ‘unconstrained’, as would be expected for someone of the materialistic mindset. Yet Planck, a Christian Theist, corrected that misconception of his:

Related notes:

F/N: Let’s do some boiling down, for summary discussion in light of the underlying matters above and in onward sources:

Let us see how this chain of reasoning is handled, here and elsewhere.

KF

Thank you!

In Bill Dembski’s literature, yes. Some others will use a different metric for complexity, like algorithmic complexity. Phil Johnson and Stephen Meyer actually refer to algorithmic complexity if you read what they say carefully. In my previously less enlightened writings on the net I used algorithmic complexity.

The point is, this confusion needs a little bit of remedy. Rather than use the word “complexity” it is easier to say what actual metric one is working from. CSI is really based on Shannon Entropy not algorithmic or thermodynamic entropy.

Yes, thank you. I’ll have to revisit this example. It’s possible a programmer had the equivalent of stuck keys. I’ll update the post accordingly. That’s why I post stuff like this at UD, to help clean up my own thoughts.

gpuccio,

In light of your very insightful criticism, I amended the OP as follows:

Complete and utter nonsense. I assume you have absolutely no experience with the specification and development of new systems.

A baseball’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for a baseball.

An airliner’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for an airliner.

But the airliner is much more complex than the baseball and didn’t get that way by accident.

I assume that you assume that an entropic design is launched by its designers like a Mars probe but expected to change/evolve after launch (by increasing its entropy). But as far as we know, most biologic systems are remarkably stable in their designs (um, the oldest known bat fossils are practically identical to modern bats). In “The Edge of Evolution”, Behe in fact bases his argument against evolution on the observation that there are measurably distinct levels of complexity in biologic systems, and that no known natural mechanism, most especially random degradation of the original design, will get you from a Level 2 system to a more complex Level 3 system.

Before becoming a financier I was an engineer. I have three undergraduate degrees, in electrical engineering, computer science and mathematics, and a graduate engineering degree in applied physics. Of late I try to minimize mentioning it because there are so many things I don’t understand which I ought to, given that level of academic exposure. I fumble through statistical mechanics and thermodynamics and even basic math. I have to solicit expertise on these matters, and I have to admit that I’m wrong many times, or don’t know something, or misunderstand something — and willingness to admit mistakes or lack of understanding is a quality which I find lacking among many of my creationist brethren, and even worse among evolutionary biologists.

I worked on aerospace systems, digital telephony, unmanned aerial vehicles, air traffic control systems, security systems. I’ve written engineering specifications and carried them out. Thus

is utterly wrong and a fabrication of your own imagination.

Besides, my experience is irrelevant to this discussion. At issue are the ideas and calculations.

Do you have any comment on my calculations of Shannon entropy or the other entropy scores for the objects listed?

kf:

Interesting thought and worth considering. I think it is a useful point to bring up when addressing the “open system” red herring put forth by some OOL advocates, but at the end of the day it is really a rounding error on the awful probabilities that already exist. Thus, it probably makes sense to mention it in passing (“Adding energy without direction can actually make things worse.”) if someone is pushing the “just add energy” line of thought, but then keep the attention focused squarely on the heart of the matter.

Also, kf, the rejoinder by the “just add energy” advocate will be that the energy typically increases the reaction rate. Therefore, even if there are more states possible, the prebiotic soup can move through the states more quickly.

It is very difficult to analyze and compare the probabilities (number of states and increased reaction time of various chemicals in the soup) and how they would be affected by adding energy. Perhaps impossible, without making all kinds of additional assumptions about the particular soup and amount/type of energy, which assumptions would themselves be subject to debate.

Anyway, I think you make an interesting point. The more I think about it, however, the more I think it could lead to getting bogged down in the ‘add energy’ part of the discussion. Seems it might be better to stick with a strategy that forcefully states that the ‘add energy’ argument is a complete red herring and not honor the argument by getting into a discussion of whether adding energy would decrease or increase the already terrible odds with specific chemicals in specific situations.

Anyway, just thinking out loud here . . .

Regarding the “add energy” argument: set off a source equal in energy and power to an atomic bomb — the results are predictable in terms of the designs (or lack thereof) that will emerge in the aftermath.

That is an example where Entropy increases, but so does disorder.

The problem, as illustrated with the 500 coins, is that Shannon entropy and thermodynamic entropy have some independence from the notions of disorder.

A designed system can have 500 bits of Shannon entropy, but so can an undesigned system. Having 500 bits of Shannon entropy says little (in and of itself) about whether something is designed. An independent specification is needed to identify a design; the entropy score is only a part.

We can have:

1. entropy rise and more disorder

2. entropy rise and more order

3. entropy rise and more disorganization

4. entropy rise and more organization

5. entropy rise and destroying design

6. entropy rise and creating design

We can’t make a general statement about what will happen to a design or a disordered system merely because the entropy rises. There are too many other variables to account for before we can say something useful.

EA:

When the equilibria are as unfavourable as they are, a faster reaction rate will favour breakdown, as is seen from how we refrigerate to preserve. In effect around room temp, activation processes double for every 8 K increase in temp.

And, the rate of state sampling used in the FSCI calc at 500 bits as revised is actually that for the fastest ionic reactions, not the slower rates appropriate to organic ones. For 1,000 bits, we are using Planck times which are faster than anything else physical. The limits are conservative.

KF

F/N: Please note how I speak of a sampling theory result on a config space, which is independent of precise probability calculations; we have only a reasonable expectation to pick up the bulk of the distribution. Remember we are sampling on the order of one straw to a cubical hay bale 1,000 light years on the side, i.e. comparable in thickness to our galaxy. KF

SC: Please note the Macro-micro info gap issue I have highlighted above. KF

OlegT helped you? Is this the same olegt that now quote-mines you for brownie points? olegt’s quote-mine earns him 10 points (out of 10) on the low-integrity scale.

The fact that Oleg and Mike went beyond their natural dislike of creationists and were generous to teach me things is something I’m very appreciative of. I’m willing to endure their harsh comments about me because they have scientific knowledge that is worth learning and passing on to everyone.

OT:

Sal, something’s missing, don’t you think? Does it not ‘feel’ that when we get to thermo and information and design, there is *more*, that will not be admitted from a basic rehash, which is where it looks like you’re at.

The bridge between thermo and ‘information’ is fascinating, but here is where it could become really interesting – [what if] actual information has material and non-material components! Our accounting may, and may have to, meet this reality.

The difference in entropy of a ‘live’ brain and the same brain dead with a small .22 hole in it is said to be very small, but is it? Perhaps something is missing.

Part two is now available:

Part II

Sal, the time is ripe for a bold new thermo-entropy synthesis! Practically the sum of human knowledge is available in an instant, for free. A continuing and wider survey, reaching far beyond the materialists, is needed before this endeavor can (or should) be carried to fruition.

Comments on Shannon

The distinction (good question) between data and information (and much else) must be addressed to get to thermo-design-info theory.

Hi butifnot,

I don’t believe that evolutionists have proven their case.

There are fruitful ways to criticize OOL and Darwinism, I just think that creationists will hurt themselves using the 2nd Law and Entropy arguments (for the reasons outlined in these posts). They need to move on to arguments that are more solid.

What is persuasive to me are the cases of evolutionists leaving the Darwin camp or OOL camp:

Michael Denton

Jerry Fodor

Massimo Piattelli-Palmarini

Jack Trevors

Hubert Yockey

Richard Sternberg

Dean Kenyon

James Shapiro

etc.

Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science; laboratories and libraries are better places for that. The internet is just for fun…

Sal

kf @16:

You make a good point about breakdown.

I’m just looking at the typical approach by abiogenesis proponents from a debating standpoint. I have rarely seen an abiogenesis proponent take careful stock of the many problems with their own preferred OOL scenario, including not only breakdown but also problems with interfering cross-reactions, construction of polymers only on side chains, etc. The typical abiogenesis proponent, when willing to debate the topic, is almost wholly engrossed with the raw probabilistic resources — amount of matter in the universe, reaction rates, etc. Rarely do they consider the additional probabilistic hurdles that come with things like breakdown.

Indeed, one of their favorite debating tactics is to assert that because we don’t know all the probabilistic hurdles that need to be overcome, we can’t therefore draw any conclusion about the unlikelihood of abiogenesis taking place. Despite the obvious logical failure of such an argument, it is a favorite rhetorical tactic of, for example, Elizabeth Liddle. It is absurd, to say the least, but it underscores the mindset.

As a result, when we talk about increased energy, the only thing the abiogenesis proponent will generally allow into their head is the hopeful glimmer of faster reaction rates. That is all they are interested in — more opportunities for chance to do its magic. The other considerations — including things like interfering cross reactions and breakdown of nascent molecules — are typically shuffled aside or altogether forgotten. The unfortunate upshot is that pointing out problems with additional energy (like faster breakdown), typically, will fall on deaf ears.

That, coupled with the fact that any definitive answer on the point requires a detailed analysis of precisely which OOL scenario is being discussed, how dilute the solution is, what kind of environment is present, the operative temperature, the type of energy infused, etc., means that it is nearly impossible to convince the recalcitrant abiogenesis proponent that additional energy can in fact be worse. Thus, from a practical standpoint, we seem better off just focusing on the real issue — information — and noting that energy does nothing to help with that key aspect.

Anyway, way more than you wanted to hear. I’m glad you shared your thoughts on additional energy. I think you have something there worth considering, including a potential hurdle for the occasional abiogenesis proponent who is actually willing to think about things like breakdown.

Sal:

Spoken like a true academic elitist! 🙂

Trevors and Abel point out the necessity of Shannon entropy (uncertainty) to store information for life to replicate. Hence, they recognize that a sufficient amount of Shannon entropy is needed for life:

Sorry, I have to bring it down a notch. Just something that has been on my mind a long time

EA:

Notice, I consistently speak of sampling a distribution of possibilities in a config space, where the atomic resources of the solar system or observed cosmos are such that only a very small fraction can be sampled. For 500 bits, we talk of a one-straw-sized sample from a cubical haystack 1,000 LY on the side, about as thick as the galaxy.

With all but certainty, a blind, chance and necessity sample will be dominated by the bulk of the distribution. In short, it is maximally implausible that special zones will be sampled.

KF

PS: Have I been sufficiently clear in underscoring that in stat thermo-d the relevant info metric associated with entropy is a measure of the missing info to specify micro state given macro state?

“if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy”

Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”

Surely not.

32 bits (or 64 bits) refers to the number of bits available to address memory, not the actual amount of memory Windows-7 requires.

32 bits can address 2^32 bytes of memory or 4 gigabytes directly.

From the Windows website describing Vista (and the comment applies to other Windows operating systems)

Windows x64 occupies about 16 gigabytes. A byte being 8 bits implies 16 gigabytes is 16 * 8 = 128 gigabits.

Thus the Shannon entropy required to represent Windows-7 x64 is on the order of 128 gigabits.

Shannon entropy is the amount of information that can be represented, not the number of bits required to locate an address in memory.
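The distinction above reduces to a couple of lines of arithmetic. Here is a minimal Python sketch, assuming the figures quoted in the comment (the 16 GB install footprint is taken from the quoted Windows page, not measured here):

```python
# Address width vs. storage size: two different quantities.
address_bits = 32
addressable_bytes = 2 ** address_bits       # bytes a 32-bit pointer can reach
addressable_gb = addressable_bytes / 2**30  # = 4 GiB, matching the comment

install_gb = 16                    # assumed footprint of Windows x64, per the quote
install_gigabits = install_gb * 8  # 8 bits per byte -> 128 gigabits

print(addressable_gb, install_gigabits)  # 4.0 128
```

The point the numbers make: the 32 in “32-bit” bounds how much memory can be addressed, while the Shannon-entropy capacity needed to hold the OS is set by its storage size in bits.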

I have to disagree with Bill. I have a coin in my pocket and it’s not in either the heads state or the tails state.

Entropy:

ok, so what is entropy?

ok, but first, what is “Shannon entropy”?

Telling me it’s measured in bits doesn’t tell me what “it” is.

So “Shannon entropy” is a measure of information?

So Shannon entropy is a measure of what we don’t know? More like a measure of non-information?

Contrast to Bill Dembski’s recent article:

I just did a comparable calculation more elaborately, and you missed it. Instead of tossing a single coin 3 times, I had 3 coins tossed 1 time.

I wrote out the analogous situation, except instead of making multiple tosses of a single coin, I did the formula for single tosses of multiple coins. The Shannon entropy is analogous.
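For what it’s worth, the two setups give the same number, as a quick sketch using the standard H = -Σ p log2 p formula shows (the distributions below are my own illustration of the fair-coin case):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) over outcomes with p > 0, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One fair coin: two equiprobable outcomes -> H = 1 bit.
h_one = shannon_entropy([0.5, 0.5])

# Three fair coins tossed once: 8 equiprobable microstates -> 3 bits,
# the same as one coin tossed three times (N * H = 3 * 1 bit).
h_three = shannon_entropy([1 / 8] * 8)

print(h_one, h_three)  # 1.0 3.0
```

Whether one coin is tossed three times or three coins are tossed once, the joint distribution is 8 equiprobable outcomes, hence the identical 3-bit figure.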

Sal posted this:

Please tell me that you are joking.

If you didn’t know in advance what the origin of File A and File C were, then you would have no useful evidence from the contents of the two files to decide that one was “highly disorganised” and the other was “highly organised”. Hint: the purpose of encryption is to make the contents of the file approach as closely as possible to a randomly generated string.

File B supports an inference of “highly organised”? How? Why? What if the ground state of the signal is just the continuous emission of something interpreted digitally as “ones” (or “zeroes”, for that matter)? Your argument appears to say that if a system transmits a constant signal, then it must be organised.

Correction, I posted this:

I meant to use the term from your post that the valid inference for File B was that the file contents were designed. Clearly a gigabit of “ones” is organised in the sense that it has an evident pattern.

The fact that I knew File C was a JPEG suggests that I had some advance knowledge of the file being designed. And even if I didn’t know that in advance, the fact that it could be parsed and processed as a JPEG indicates that it is organized. The fact that I specified in advance that File A was created by a random number generator ensures a high probability it will not be designed.

File B had to be restated with qualification as gpuccio pointed out.

The inference of design or lack thereof was based on advance prior knowledge, not some explanatory filter after the fact.

If you have a means of distinguishing between File X (which contains a genuinely random string) and File Y (which contains a pseudorandom string encoding a human-readable sentence), then fill your boots and publish the method.

The sound you can hear is that of computer security specialists the world over shifting uncomfortably in their seats. Or perhaps of computer security specialists laughing their faces off.

The point is this: if you want to infer “design” solely from the evidence (of the contents of the files, with no a priori knowledge of their provenance), then what is your method?

I would bet that both strings are the product of agency involvement as blind and undirected processes cannot construct a file.

Waiting for Sal’s response, I noticed that he posted this:

Exactly. You knew in advance that the file was JPEG-encoded. But even if you didn’t know in advance, the fact that a JPEG decoder could produce a meaningful image proves only that the message was encoded using the JPEG protocol. A magnificent feat of inference.

It might be interesting if you could prove that the message originated from a non-human source. Otherwise not.

But what if you only have the encoded string to work upon, and the JPEG codec generates an apparently random string as output? How do you tell whether the output signal is truly random or that it contains a human-readable message encoded using some other protocol?

If I understand your original post, you claim that design is detectable from the pattern of the encoded message, independent of its mode of encoding.

Joe posted this:

Forget the container and consider the thing contained (I mean, really, do I have to define every parameter of the discussion?). Scientists sensing signals from a pulsar store the results in a computer “file” via a series of truth-preserving transformations (light data to electronics to magnetic marks on a hard drive). Are you arguing that the stored data does not correlate reliably to the original sense data?

I’m saying that if you find a file on a computer then it is a given that some agency put it there.

And timothya- I am still waiting for evidence that natural selection is non-random….

Joe posted this:

Brilliant insight.

Users of computers generate artefacts that are stored in a form determined by the operating system of the computer that they are using (in turn determined by the human designers of the operating system involved). I would be a little surprised if it proved to be otherwise.

However, the reliable transformation of input data to stored data in computer storage doesn’t help Sal with his problem of how to assign “designedness” to an arbitrary string of input data.

He has to show that there is a reliable way to distinguish between a genuinely random string and a pseudorandom string that is hiding a human-readable message, when all he has to go on is the string itself, with no prior knowledge.

If he has such a method, I would be fascinated to know what it is.

Joe posted this:

As far as it matters, you have already had your answer in a different thread.

This thread seems to be focussed on the “how to identify designedness”, so perhaps we should stick to that subject.

timothya- there isn’t any evidence that natural selection is non-random- just so that we are clear.

Joe

I am clear that you think so. You are in disagreement with almost every practising biologist in the world of science. But that is your choice.

In the meantime, can we focus on Sal’s proposal?

No timothya- I don’t think so. It is obvious. And not one of those biologists can produce any evidence that demonstrates otherwise.

TA:

Why not look over in the next thread 23 – 24 (with 16 in context as background)?

Kindly explain the behaviour of the black box that emits ordered vs random vs meaningful text strings of 502 bits:

|| BLACK BOX || –> 502 bit string

As in, explain to us, how emitting the string of ASCII characters for the first 72 or so letters of this post is not an excellent reason to infer to design as the material cause of the organised string. As in, intelligently directed organising work, which I will label for convenience, IDOW.

Can you justify a claim that lucky noise plus mechanical necessity adequately explains such an intelligible string, in the teeth of what sampling theory tells us on the likely outcome of samples on the gamut of the 10^57 atoms of the solar system for 10^17 s, at about 10^14 samples/s — comparable to fast chemical ionic reaction rates — relative to the space of possible configs of 500 bits? (As in, one straw-sized sample to a cubical hay bale 1,000 LY on the side, about as thick as our galaxy.)
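The sampling fraction described here can be sanity-checked numerically. A small sketch, taking the figures quoted in the comment as given rather than as independent measurements:

```python
from math import log10

atoms = 10**57    # atoms in the solar system (figure used above)
seconds = 10**17  # time available, roughly the age of the cosmos in seconds
rate = 10**14     # states sampled per atom per second (fast ionic reaction rates)

samples = atoms * seconds * rate  # 10^88 total observations
configs = 2**500                  # configurations of a 500-bit string

fraction = samples / configs
print(f"log10 of config space: {500 * log10(2):.1f}")  # about 150.5
print(f"fraction of space sampled: {fraction:.2e}")    # about 3e-63
```

Even on these generous assumptions, the accessible sample covers roughly one part in 10^63 of the 500-bit configuration space, which is the quantitative content of the straw-to-haystack image.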

As in, we have reason to infer on FSCO/I as an empirically reliable sign of design, no great surprise, never mind your recirculation of long since cogently answered objections.

(NB: This is what often happens when a single topic gets split up by using rapid succession of threads with comments. That is why I posted a reference thread, with a link back and no comments.)

KF

And JPEG encoders are intelligently designed, so the files generated are still products of intelligent design.

Indeed.

Humans can make JPEGs, so no need to invoke non-human sources.

You can’t tell if a string is truly the product of mindless purposeless forces (random is your word), so you have to be agnostic about that. So one must accept that one can make a false inference to randomness (such as when someone wants to be extremely stealthy and encrypt the data).

If it parses with another codec that is available to you, you have good reason to accept that the file is designed.

Beyond that, one might have other techniques, such as those the Norton/Symantec team used to determine that Stuxnet was the product of an incredible level of intelligent design:

How Digital Detectives Deciphered Stuxnet

And that illustrates how a non-random string in a computer might be deduced as the product of some serious ID.

F/N: This from OP needs comment:

Actually, in basic info theory, H strictly is a measure of average info content per element in a system, or per symbol in a message. Hence its being estimated as a weighted average of information per relevant element. This I illustrated earlier from a Shannon 1950/1 paper, in comment 15 in the part 2 thread:

So, we see the context of usage here.

But what happens when you have a message of N elements?

In the case of a system of complexity N elements, then the cumulative, Shannon metric based information — notice how I am shifting terms to avoid ambiguity — is, logically, H + H + . . . H N times over, or N * H.

And, as was repeatedly highlighted, in the case of the entropy of systems that are in clusters of microstates consistent with a macrostate, the thermodynamic entropy is usefully measured by, and understood in terms of, the Macro-micro information gap (MmIG), not on a per-state or per-particle basis but on a cumulative basis: we know macro quantities, not the specific position and momentum of each particle from moment to moment, which given chaos theory we could not keep track of anyway.

A useful estimate, per the Gibbs weighted probability sum entropy metric — which is reputedly where Shannon got the term he used in the first place, on a suggestion from von Neumann — is:

Where, Wiki gives a useful summary:

Also, {- log p_i} is an information metric, I_i, i.e. the information we would learn on actually coming to know that the system is in microstate i. Thus, we are taking a scaled info metric on the probabilistically weighted summation of the info in each microstate. Let us adjust:

S_sys = k_B [SUM over i’s] p_i * I_i

This is the weighted average info per possible microstate, scaled by k_B (which of course is where the Joules per Kelvin come from). In effect the system is giving us a message, its macrostate, but that message is ambiguous over the specific microstate within it.

After a bit of mathematical huffing and puffing, we are seeing that the entropy is linked to the average info per possible microstate.
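That weighted-average reading of the Gibbs sum is easy to make concrete. A small sketch (my own illustration, not from the thread) which also recovers Boltzmann’s S = k_B ln W in the equiprobable case:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S_sys = k_B * SUM_i p_i * I_i, where I_i = -ln(p_i) is the info
    gained on learning that the system is in microstate i."""
    return K_B * sum(p * -math.log(p) for p in probs if p > 0)

# For W equiprobable microstates, each I_i = ln W, so the weighted
# average collapses to Boltzmann's S = k_B ln W.
W = 1000
s = gibbs_entropy([1 / W] * W)
print(math.isclose(s, K_B * math.log(W)))  # True
```

The general Gibbs sum handles non-uniform probabilities too; the uniform case is shown only because it makes the "average missing info per microstate" reading easiest to check against a familiar formula.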

Where this is going is of course that when a system is in a state with many possible microstates, it has enormous freedom of being in possible configs, but if the macro signals lock us down to specific states in small clusters, we need to account for how it could be in such clusters, when under reasonable conditions and circumstances, it could be easily in states that are far less specific.

In turn that raises issues over IDOW.

Which then points onward to FSCO/I being a sign of intelligent design.

KF

PS: As I head out, I think an estimate of what it would take to describe the state of 1 cc of monoatomic ideal gas at 760 mm Hg and 0 degrees C, i.e. 2.687 * 10^19 particles with 6 degrees of positional and momentum freedom, would help us. Let us devote 32 bits — 16 bits to get 4 hex significant figures, plus a sign bit and 15 bits for the binary exponent — to each of the (x, y, z) and (P_x, P_y, P_z) co-ordinates in the phase space. We are talking about:

That is, to describe the state of the system at a given instant, we would need 5.159 * 10^21 bits, or 644.9 * 10^18 bytes. That is how many yes/no questions, in the correct order, would have to be answered and processed every clock tick we update. And with 10^-14 s as a reasonable chemical reaction time, we are seeing a huge amount of required processing to keep track. As to how that would be done, that is anybody’s guess.

OOPS, 600 + Exa BYTES
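The estimate reduces to straightforward arithmetic; here is a sketch using the same assumed figures (particle count and 32-bit encoding are taken from the comment above, not derived here):

```python
particles = 2.687e19      # molecules in 1 cc of ideal gas at 0 C and 760 mm Hg
dof = 6                   # three position plus three momentum coordinates
bits_per_coordinate = 32  # assumed encoding: sign + 15-bit exponent + 16-bit mantissa

total_bits = particles * dof * bits_per_coordinate
total_bytes = total_bits / 8

print(f"{total_bits:.3e} bits")
print(f"{total_bytes:.3e} bytes")  # on the order of 600+ exabytes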

*SMI – Shannon’s Measure of Information

http://www.worldscientific.com......1142/7694

From the OP:

Arieh Ben-Naim writes:

“It should be noted that Boltzmann himself was perhaps the first to use the “disorder” metaphor in his writing:

You should note that Boltzmann uses the terms “order” and “disorder” as qualitative descriptions of what goes on in the system. When he

definesentropy, however, he uses either thenumberofstatesor probability.Indeed, there are many examples where the term “disorder” can be applied to describe entropy. For instance, mixing two gases is well described as a process leading to a higher degree of disorder. However, there are many examples for which the disorder metaphor fails.”

Boltzmann

There, I fixed it fer ya!

As a bonus you get the “directionality” of entropy.

Ordered and disordered gots nothing to do with it.

Does Lambert answer that question?

What is Entropy, really?

So, entropy is the answer to the age-old question, why me?

Mung:

One more time [cf. 56 above, which clips elsewhere . . . ], let me clip Shannon, 1950/1:

Going back to my longstanding, always linked note, which I have clipped several times over the past few days, here on is how we measure info and avg info per symbol:

What this last refers to is the Gibbs formulation of entropy for statistical mechanics, and its implications when the relationship between probability and information is brought to bear in light of the Macro-micro views of a body of matter. That is, when we have a body, we can characterise its state per lab-level thermodynamically significant variables, that are reflective of many possible ultramicroscopic states of constituent particles.

Thus, clipping again from my always linked discussion that uses Robertson’s Statistical Thermophysics, CH 1 [and do recall my strong recommendation that we all acquire and read L K Nash’s elements of Statistical Thermodynamics as introductory reading):

Now, of course, as Wiki summarises, the classic formulation of the Gibbs entropy is:

Looks to me that this is one time Wiki has it just about dead right. Let’s deduce a relationship that shows physical meaning in info terms, where (- log p_i) is an info metric, I-i, here for microstate i, and noting that a sum over i of p_i * log p_i is in effect a frequency/probability weighted average or the expected value of the log p_i expression, and also moving away from natural logs (ln) to generic logs:

Or, as Wiki also says elsewhere:

So, immediately, the use of “entropy” in the Shannon context, to denote not H but N*H, where N is the number of symbols (thus, step by step states emitting those N symbols involved), is an error of loose reference.

Similarly, by exploiting parallels in formulation and insights into the macro-micro distinction in thermodynamics, we can develop a reasonable and empirically supportable physical account of how Shannon information is a component of the Gibbs entropy narrative. Where also Gibbs subsumes the Boltzmann formulation and onward links to the lab-measurable quantity. (Nash has a useful, relatively lucid — none of this topic is straightforward — discussion on that.)

Going beyond, once the bridge is there between information and entropy, it is there. It is not going away, regardless of how inconvenient it may be to some schools of thought.

We can easily see that, for example, information is expressed in the configuration of a string, Z, of elements z1 -z2 . . . zN in accordance with a given protocol of assignment rules and interpretation & action rules etc.

Where also, such is WLOG as AutoCAD etc show us that using the nodes and arcs representation and a list of structured strings that record this, essentially any object can be described in terms of a suitably configured string or collection of strings.

So now, we can see that string Z (with each zi possibly taking b discrete states) may represent an island of function that expresses functionally specific complex organisation and associated information. Because of specificity to achieve and keep function, leading to a demand for matching, co-ordinated values of zi along the string, that string has relatively few of the N^b possibilities for N elements with b possible states being permissible. We are at isolated islands of specific function i.e cases E from a zone of function T in a space of possibilities W.

(BTW, once b^N exceeds 500 bits on the gamut of our solar system, or 1,000 bits on the gamut of our observable cosmos, that brings to bear all the needle in the haystack, monkeys at keyboards analysis that has been repeatedly brought forth to show why FSCO/I is a useful sign of IDOW — intelligently directed organising work — as empirically credible cause.)

We see then that we have a complex string to deal with, with sharp restrictions on possible configs, that are evident from observable function, relative to the general possibility of W = b^N possibilities. Z is in a highly informational, tightly constrained state that comes form a special zone specifiable on macro-level observable function (without actually observing Z directly). That constraint on degrees of freedom contingent on functional, complex organisation, is tantamount to saying that a highly informational state is a low entropy one, in the Gibbs sense.

Going back to the expression,

comparatively speaking there is not a lot of MISSING micro-level info to be specified, i.e.simply by knowing the fact of complex specified information-rich function, we know that we are in a highly restricted special Zone T in W. This immediately applies to R/DNA and proteins, which of course use string structures. It also applies tot he complex 3-D arrangement of components in the cell, which are organised in ways that foster function.And of course it applies to the 747 in a flyable condition.

Such easily explains why a tornado passing through a junkyard in Seattle will not credibly assemble a 747 from parts it hits, and it explains why the raw energy and forces of the tornado that hits another formerly flyable 747, and tearing it apart, would render its resulting condition much less specified per function, and in fact result in predictable loss of function.

We will also see that this analysis assumes the functional possibilities of a mass of Al, but is focussed on the issue of functional config and gives it specific thermodynamics and information theory context. (Where also, algebraic modelling is a valid mathematical analysis.)

I trust this proves helpful

KF

PS: The most probable or equilibrium cluster of microstates consistent with a given macrostate, is the cluster that has the least information about it, and the most freedom of variation of mass and energy distribution at micro level. This high entropy state-cluster is strongly correlated with high levels of disorder, for reasons connected to the functionality constraints just above. And in fact — never mind those who are objecting and pretending that this is not so — it is widely known in physics that entropy is a metric of disorder, some would say it quantifies it and gives it structured physical expression in light of energy and randomness or information gap considerations.