
Failure of the “compensation argument” and implausibility of evolution

Categories: Biophysics, Intelligent Design

Granville Sewell and Daniel Styer have one thing in common: both wrote an article with the same title, “Entropy and evolution”. But they reach opposite conclusions on a fundamental question: Styer says that the evolutionist “compensation argument” (henceforth “ECA”) is sound; Sewell says it isn’t. Here I briefly explain why I fully agree with Granville. The ECA is an argument that tries to resolve the problems the 2nd law of statistical mechanics (henceforth 2nd_law_SM) poses for unguided evolution. I adopt Styer’s article as the ECA archetype because he also offers calculations, which make its failure clearer.

The 2nd_law_SM as a problem for evolution.

The 2nd_law_SM says that an isolated system goes toward its most probable macrostates. In this diagram the arrow represents the 2nd_law_SM rightward trend/direction:

organization … improbable_states … systems ====>>> probable_states

Sewell says:

“The second law is all about using probability at the microscopic level to predict macroscopic change. […] This statement of the second law, or at least of the fundamental principle behind the second law, is the one that should be applied to evolution.”

The physical evolution of an isolated system passes spontaneously through macrostates of increasing probability until it arrives at equilibrium (the most probable macrostate). Since organization is highly improbable, a corollary of the 2nd_law_SM is that isolated systems don’t self-organize. That is the opposite of what biological evolution claims.
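For concreteness, here is a minimal sketch (mine, not Sewell’s or Styer’s) of a toy system of 100 two-state particles. The “organized” macrostate (all tails) corresponds to a single microstate, while the near-equilibrium macrostate (about half heads) corresponds to an astronomically larger count, which is why random dynamics drift toward the right of the diagram above:

```python
# A toy illustration of the 2nd_law_SM statement above: macrostate = number
# of heads among N coins; its microstate count is the binomial coefficient.
from math import comb

N = 100
print(f"W(all tails)  = {comb(N, 0)}")            # 1 microstate
print(f"W(half heads) = {comb(N, N // 2):.3e}")   # ~1.009e29 microstates
ratio = comb(N, N // 2) / comb(N, 0)
print(f"equilibrium is {ratio:.3e} times more probable than 'all tails'")
```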


Styer’s ECA.

Since the 2nd_law_SM applies to isolated systems, the ECA says: the Earth E is not an isolated system, so its entropy can decrease thanks to an entropy increase (compensation) in the surroundings S (with respect to the energy coming from the Sun). Unfortunately, considering the system open is useless because, as Sewell puts it:

“If an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.”

Here is how Styer applies the ECA to show that “evolution is consistent with the 2nd law”.
Suppose that, due to evolution, each individual organism is 1000 times more improbable than the corresponding individual was 100 years ago (Emory Bunn says 1000 times is incorrect, it should be 10^25 times, but this is a detail). If Wi is the number of microstates consistent with the specification of an initial organism I 100 years ago, and Wf is the number of microstates consistent with the specification of today’s improved and less probable organism F, then

Wf = Wi / 1000

At this point he uses Boltzmann’s formula:

S = k * ln (W)

where S = entropy, W = number of microstates, k = 1.38 x 10^-23 joules/kelvin (Boltzmann’s constant), ln = the natural logarithm.

Then he calculates the entropy change over 100 years, and finally the entropy decrease per second:

Sf – Si = -3.02 x 10^-30 joules/kelvin

By considering all individuals of all species he gets the change in entropy of the biosphere each second: -302 joules/kelvin. Since he knows that the Earth’s physical entropy throughput (due to energy from the Sun) each second is 420 x 10^12 joules/kelvin, he concludes: “at a minimum the Earth is bathed in about one trillion times the amount of entropy flux required to support the rate of evolution assumed here”; therefore evolution is largely consistent with the 2nd law.
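The arithmetic can be replayed in a few lines (a sketch of mine; the per-second biosphere rate and the solar throughput are taken from Styer as quoted above, not recomputed):

```python
# Replaying Styer's ECA arithmetic with the figures quoted in the post.
import math

k = 1.38e-23                          # Boltzmann constant, J/K
dS_century = k * math.log(1 / 1000)   # Wf = Wi/1000 -> ~ -9.5e-23 J/K
print(f"dS per organism per century: {dS_century:.2e} J/K")

dS_biosphere = -302.0                 # quoted: whole biosphere, J/(K*s)
earth_flux = 420e12                   # quoted: solar entropy throughput, J/(K*s)
print(f"flux / |dS_biosphere| = {earth_flux / abs(dS_biosphere):.2e}")
# ~1.4e12: Styer's "about one trillion times" margin
```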

The problem in Styer’s argument (and in general in the ECA).

Although it might seem an innocent matter of measurement units, the introduction of Boltzmann’s formula with k = 1.38 x 10^-23 joules/kelvin in this context is a conceptual error. With this formula the ECA transforms a difficult problem of probability (connected with the rise of ultra-complex organized systems) into a simple issue of energy (the joule is a unit of energy, work, or amount of heat). This assumes a priori that energy is able to organize organisms from sparse atoms. But such an assumption is totally gratuitous and unproven. That energy can do this is exactly what the ECA should prove in the first place. So Styer’s ECA begs the question.

Similarly Andy McIntosh (cited by Sewell) says:

Both Styer and Bunn calculate by slightly different routes a statistical upper bound on the total entropy reduction necessary to ‘achieve’ life on earth. This is then compared to the total entropy received by the Earth for a given period of time. However, all these authors are making the same assumption—viz. that all one needs is sufficient energy flow into a [non-isolated] system and this will be the means of increasing the probability of life developing in complexity and new machinery evolving. But as stated earlier this begs the question…

Boltzmann’s formula in the ECA, with its introduction of joules of energy, establishes a bridge between probabilities and the joules coming from the Sun. Unfortunately this link is unsubstantiated here, because no one has proved that joules cause biological organization. On the contrary, in my previous post “The illusion of organizing energy” I explained why no kind of energy per se can create organization in principle. A fortiori, thermal energy is unequal to the task: heat is the most degraded and disordered kind of energy, the one with maximum entropy. So the ECA even contains an internal contradiction: by importing entropy into E one is supposed to decrease the entropy of E!

The problem with Boltzmann’s formula, as used in the ECA, is then that it “buys” a probability bonus with energy “money”. Sewell expresses the same concept in different words:

The compensation argument is predicated on the idea […] that the universal currency for entropy is thermal entropy.

That conversion/compensation is not allowed unless one has first proved that energy plays a direct causal role in producing the effect, biological organization, which lies in the direction opposite to the 2nd_law_SM rightward arrow (at the extreme left of the diagram above). In a sense the ECA conflates two different planes. This wrongful conflation is like saying that a roulette wheel placed inside a refrigerated room can easily output one million “black” results in a row because its entropy is lower than that of the outside.

Note that evolution doesn’t imply a single small deviation from the trend; quite differently, it implies countless highly improbable processes happening continually in countless organisms over billions of years. Would those who claim that evolution doesn’t violate the 2nd_law_SM doubt a violation if countless tornadoes always turned rubble into houses, cars and computers for billions of years? Sewell asks (the backward-running tornado is the metaphor he uses most). In conclusion Roger Caillois is right: “Clausius and Darwin cannot both be right.”

Implausibility of evolution.

Styer’s paper is also an opportunity to see the problem of evolution from a probabilistic viewpoint. Note the huge difference in difficulty between the probabilistic scenario and the rosy thermal-entropy scenario above, with its trillion-fold entropy budget for evolution!
In Appendix #2 he proposes a problem for students: “How much improved and less probable would each organism be, relative to its (possibly single-celled) ancestor at the beginning of the Cambrian explosion? (Answer: 10 raised to the 1.8 x 10^22 times)”. Call this monster number “a”, and let Wi = the initial microstates, Wf = the final microstates, W = the total microstates. According to Styer’s answer (which is correct as a calculation) we have:

Wf = Wi / a

The probability of the initial macrostate is Wi / W. The probability of the final macrostate is Wf / W. Suppose Wf = 1; then Wi = a. W must be greater than or equal to a, otherwise Wi / W would be greater than 1 (impossible). Therefore the probability of the final macrostate occurring is:

Wf / W ≤ 1 / a

This is the probability of evolution of a single individual organism in the Cambrian:

1 in 10 raised to the 1.8 x 10^22

a number with more than 10^22 digits (10 trillion billion digits). This miraculous event would have had to occur 10^18 times, once for each of the other organisms.

Dembski’s “universal probability bound” is:

1 / 10^150

1 in a number with “only” 150 digits. Therefore evolution lies far beyond the plausibility threshold. In conclusion: the ECA fails to prove that “evolution is consistent with the 2nd law”, and we also have a probability-based proof of the implausibility of evolution.
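As a check on the arithmetic (a sketch of mine, not Styer’s or Dembski’s), the comparison can be done entirely in log10 space, since 10^(1.8 x 10^22) overflows any floating-point type:

```python
# Comparing the post's estimate with Dembski's universal probability bound,
# working in log10 space (10**(1.8e22) cannot be represented as a float).
log10_inv_p = 1.8e22   # post's estimate: p = 1 / 10**(1.8e22) per organism
log10_bound = 150      # Dembski's bound: 1 / 10**150

print(f"1/p has about {log10_inv_p:.1e} digits")
print(f"the bound's denominator has about {log10_bound} digits")
print(f"1/p exceeds the bound's denominator by a factor of 10**{log10_inv_p - log10_bound:.4g}")
```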

Some could object: “You cannot have it both ways: if the ECA is wrong then Appendix #2 is wrong too, because it uses the same method, so the evolution probability is not correct.”
Answer: the method is biased toward evolution both in the ECA and in Appendix #2. This means the real probability of evolution is even worse than calculated, and the implausibility of evolution holds a fortiori.

Comments
Z: You will note that, consistently I have spoken to the statistical, microstate underpinnings that have been inextricably connected to statements of 2LOT for 100+ years. It is that context that brings out what you would obfuscate, the need to adequately account for organisation beyond a threshold of specifically functional complexity that exceeds the blind chance and necessity search capacity of the sol system or the observed cosmos. 500 - 1,000 bits. Take it as referring to fluctuations from dominant equilibrium if you will. FSCO/I is real and relevant to cell scale life. Thus, to specifically OOL. The pretence is that irrelevant heat or energy and mass flows can account for FSCO/I, appealing to lucky noise. Not only has that not been observed but it is a consistent evasion of what is observable, information rich functionally specific complex organisation and what we see is causally adequate. This also calls up the informational perspective on thermodynamics, a school of thought you seem to wish to willfully ignore. To rhetorical convenience doubtless, but at a price of failure to properly address epistemic duties of care. Let me again clip my always linked note, Section A: _____________ >> . . . let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. 
Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. [--> note, on the table for literally years] Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. [--> again, on the table long since, but as we know, entropy can be partitioned and relevant components addressed, cf how Thaxton et al address clumping then configuring in TMLO. Entropy is state-linked, not path-linked.] But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. [--> again, on the table in easily accessible sources and pointed out long since] (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. [--> yes, repeatedly cited and pointed out but routinely ignored]>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more detail (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Semura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]>>
________________ Yes, thermodynamics, in light of statistical underpinnings and the bridge to information, is very relevant to the matters at hand. Sewell, in this light, has long been apt:
. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . . What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special. THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . .
I will say, in closing, that there is a notorious zero-concessions policy on any strong argument a design thinker may bring to the table, so I find the strident dismissals, caricatures and irrelevant but superficially plausible talking points about compensating heat flows to be business as usual, strawman tactics as usual. I no longer expect reasonableness on the part of many objectors to design thought, so their usual talking points on this only deserve to be exposed and corrected for the record. If you wish to be a case in point, well, so be it. The record stands for those willing to attend to it. KF
kairosfocus, March 30, 2015, 02:24 AM PST
This is the probability of evolution of a single individual organism in the Cambrian: 1 on 10 raised to the 1.8 x 10^22 a number with more than 10^22 digits (10 trillion billion digits). This miraculous event had to occur 10^18 times, for each of other organisms. Dembski’s “universal probability bound” is: 1 / 10^150 1 on a number with “only” 150 digits. Therefore evolution is far beyond the plausibility threshold. In conclusion: the ECA fails to prove that “evolution is consistent with the 2nd law”, and we have also a proof of the implausibility of evolution based on probability.
This is of course pure nonsense. Invoking Dembski's "universal probability bound" in this way is an obvious fallacy -- so obvious that you must be mathematically challenged to commit it. With this logic, no event could possibly happen if it belonged to a sample space defining more than 10^150 possible outcomes with a flat probability distribution. For example, it would be impossible to toss a fair coin 500 times, because there are more than 3x10^150 possible outcomes, all of them equally probable!

The fact that a particular individual is "extremely improbable" does not mean that its evolution was impossible. It only means that the organisms actually living represent a very, very small subset of the organisms that could in theory have evolved, but didn't. The probability calculus is maths, not physics. It uses abstract sample spaces not constrained by the size of the universe, the number of elementary particles in it, or the number of nanoseconds that have elapsed since the Big friggin' Bang!

Survival is limited and only an astronomically small number of living beings actually come into existence in comparison with those that "could be" if every organism in the past had survived and produced numerous descendants. In other words, we are dealing with population resampling over a large number of generations, under constraints placed by nature on population size (exponential growth is not sustainable). Chance (random drift) and differential fitness (natural selection) are important aspects of this process. The outcome is guaranteed to be "extremely improbable", just like a long series of coin flips, and it's equally possible despite all that "big exponent" hocus pocus.
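The coin-flip figures in this comment can be checked directly; a minimal sketch (added for concreteness, not part of the comment):

```python
# Checking the 500-coin-flip arithmetic: 2**500 outcomes exceed 10**150,
# yet every run of 500 fair flips produces one of these "impossible" outcomes.
import random

print(f"2**500 ~ {2**500:.2e}")      # ~3.27e150, above Dembski's 10**150
flips = "".join(random.choice("HT") for _ in range(500))
print(flips[:40], "...")             # this exact sequence had probability 2**-500
```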
Piotr, March 30, 2015, 01:58 AM PST
scordova
And finally, why this obsession with reducing entropy (both thermodynamic and design space entropy)?
Entropy is not MY obsession, it is the obsession of evolutionists, because they use it to obfuscate. My formulation of the 2nd_law_SM in this OP does NOT use that term. It uses only the concepts of probability and micro/macro states.
Isolated systems can self-organize if they are in thermodynamic NON-equilibrium AND if they have an intelligence within them and/or are front-loaded to do so.
Agreed, if you mean that intelligence is the cause and thermodynamic non-equilibrium an effect.
The 2nd law (Clausius formulation) deals with energy microstates, it has nothing or little to do with the microstates of interest to ID. [...] The organization (the improbable microstates of interest to Design proponents) of systems is not the type of organization which the 2nd law deals with.
The 2nd_law_SM implicitly deals with organization because it deals with states in general, and organization is a set of states. The 2nd_law_SM establishes a fundamental dissymmetry: while an increase of entropy destroys order AND organization, a decrease of entropy does NOT create organization. This AND and this NOT, whose devastating effects are evident to all, point indeed to the fact that the 2nd_law_SM has a lot to do with organization.
niwrad, March 30, 2015, 12:38 AM PST
Since organization is highly improbable, a corollary of the 2nd_law_SM is that isolated systems don’t self-organize.
Isolated systems can self-organize if they are in thermodynamic NON-equilibrium AND if they have an intelligence within them and/or are front-loaded to do so. Hypothetical example: an isolated system with a large cold reservoir to which heat can be added from a nuclear source. If there are people inside this system they can build and organize cities using the energy from the nuclear reactor. The same would hold for robots in the system.
corollary of the 2nd_law_SM is that isolated systems don’t self-organize.
The organization (the improbable microstates of interest to Design proponents) of systems is not the type of organization the 2nd law deals with. The 2nd law (Clausius formulation) deals with energy microstates; it has little or nothing to do with the microstates of interest to ID.

Example: we at UD confronted Nick Matzke about a system of 500 fair coins. The microstates in question were the heads/tails states of the system, with one of the microstates being 100% heads. 100% heads is one of the specified complex (specified improbable) microstates out of 2^500 possible heads/tails microstates. But heads/tails microstates have nothing to do with the thermodynamic microstates (or thermodynamic entropy) of 500 fair coins. If the coins are hypothetically pure copper pennies weighing 3.11 grams each at 298 kelvin, the number of thermodynamic microstates is 2^(8.636 x 10^25) and the thermodynamic entropy is 826.68 J/K, or 8.636 x 10^25 bits, based on standard molar entropy tables for copper. A student of chemistry, physics or engineering should be able to derive this sort of textbook result. The particular thermodynamic microstate the 500 coins are in is changing all the time since the molecules are vibrating; we actually don't know which thermodynamic microstate is in play at any given moment, but we do know which heads/tails microstate is in play.

Thermodynamic entropy can spontaneously go down in an isolated part of a system if the system is in non-equilibrium. Example: a system consisting of a hot brick and a cold brick. The hot brick spontaneously has a reduction of thermodynamic entropy even though the total entropy of the system increases. See: http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node41.html

The heads/tails microstates in a system of 500 fair coins can be said to be the design space microstates. The more complex and vast the design space (say 10,000 coins instead of 500), the HIGHER the design space entropy; hence design space entropy must INCREASE, not decrease, for specified complexity to increase. One can look at Bill Dembski's No Free Lunch, page 131, and see that entropy must INCREASE for specified complexity to increase:
…. As an aside, this information theoretic entropy measure is mathematically identical to the Maxwell-Boltzmann-Gibbs entropy from statistical mechanics No Free Lunch page 131
And finally, why this obsession with reducing entropy (both thermodynamic and design space entropy)? A warm living human being has far more entropy than a dead lifeless ice cube. Here are the textbook calculations to demonstrate this. Let S_human be the thermodynamic entropy of a human, and S_ice_cube the thermodynamic entropy of an ice cube. Order-of-magnitude entropy numbers, using just the liters of water in humans alone:

S_human > 30 liters * 55.6 mol/liter * 69.95 J/(mol*K) = 116,677 J/K

S_ice_cube ~= 0.012 liters * 55.6 mol/liter * 41 J/(mol*K) = 27 J/K approximately

(ice is a little less dense than liquid water, but this is inconsequential for the question at hand). The human, having a larger number of distinguishable molecules and arrangements, also has higher design space entropy. Thus a warm living human has substantially more thermodynamic entropy than a lifeless cube of ice. So why this obsession with reducing thermodynamic entropy? It's not about having much or little thermodynamic entropy; it's about having just the right fine-tuned amounts.
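The 500-coin figure quoted above can be reproduced with a short sketch (mine; the molar entropy value ~33.8 J/(mol*K) is back-inferred from the 826.68 J/K figure and should be treated as an assumption):

```python
# Reproducing the 500-copper-penny entropy figures from standard-table inputs.
import math

k = 1.381e-23                  # Boltzmann constant, J/K
N_COINS, GRAMS, MOLAR_MASS = 500, 3.11, 63.55
S_MOLAR_CU = 33.8              # J/(mol*K) at 298 K, inferred assumption

moles = N_COINS * GRAMS / MOLAR_MASS       # ~24.5 mol of copper
S = moles * S_MOLAR_CU                     # ~827 J/K thermodynamic entropy
bits = S / (k * math.log(2))               # ln W expressed in bits: W ~ 2**bits
print(f"S ~ {S:.2f} J/K, i.e. about 2**({bits:.3e}) microstates")
```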
scordova, March 29, 2015, 06:38 PM PST
Zachriel, read this very carefully: any argument featuring storms or snowflakes is not an argument relevant to organization.
Box, March 29, 2015, 06:12 PM PST
kairosfocus: order is not functionally specific complex organisation

That's immaterial to thermodynamic entropy.

Box: the 2nd law turns all organization into disorder

That's incorrect. While overall entropy within a system increases, local entropy can decrease. Read this carefully: storms are characterized by low pressure zones, so they have fewer available microstates than the surrounding air. Such a region is *vanishingly improbable* to occur due to chance arrangement of microstates.
Zachriel, March 29, 2015, 05:12 PM PST
Niwrad’s argument, as I understand it, is two-fold:
- the 2nd law turns all organization into disorder,
- and furthermore it prevents organization from forming by pure natural forces.
Zachriel points out that there are instances of order (low thermodynamic entropy) in nature, such as snowflakes and storms. He is right; however, there are no known cases of order progressing towards organization - e.g. no ice watches - so it doesn’t address the argument put forward by Niwrad.
Box, March 29, 2015, 03:40 PM PST
Z, order is not functionally specific complex organisation. KF
kairosfocus, March 29, 2015, 03:11 PM PST
niwrad: Your storms have high probabilities of occurring everywhere and every day.

Storms have *low* thermodynamic entropy. Among other effects, they are low pressure zones, so they have fewer available microstates than the surrounding air. Such a region is vanishingly improbable to occur due to chance arrangement of microstates, but is the result of thermodynamic work. Let's review your statement again.

niwrad: The physical evolution of an isolated system passes spontaneously through macrostates of increasing probability until it arrives at equilibrium (the most probable macrostate).

While entropy increases in a closed system, the Earth's weather system will never reach equilibrium as long as it continues to receive energy from the Sun. Furthermore, as energy moves through the system, the atmosphere can exhibit regions with both very low and very high thermodynamic entropy.

niwrad: Since organization is highly improbable, a corollary of the 2nd_law_SM is that isolated systems don’t self-organize.

If we consider the Sun and Earth as a single system, the overall entropy will increase as the vast majority of the Sun's energy dissipates into space. However, regions of the system can exhibit both very low and very high thermodynamic entropy. This includes everything from snowflakes to solar flares.
Zachriel, March 29, 2015, 12:50 PM PST
// Kauffman's confirmation of Niwrad's definition of the 2nd law of statistical mechanics and other info. //
The basics of statistical mechanics are fundamentally simple. The centerpiece of the theory with respect to the second law of classical thermodynamics, the one-way increase of entropy, and thus the explanation of the arrow of time, is that a system of particles will, by chance (formalized as something called the ergodic hypothesis, which we will encounter in a later chapter), evolve to more-probable “macrostates” from less-probable macrostates. Thus a droplet of ink on a petri plate will typically diffuse to a uniform distribution. It will not un-diffuse from a uniform distribution to reconstitute the ink drop. To be a bit more precise, think of the petri dish divided mathematically into many tiny volumes. Each possible distribution of ink molecules among these tiny volumes is a microstate. Clearly there are many more microstates corresponding to an approximate equilibrium distribution of the ink through the petri plate than there are microstates corresponding to the ink droplet in the center of the plate. Thus the uniform ink distribution is more probable than the ordered ink-drop distribution. The greater the number of microstates corresponding to the diffuse “macrostate,” the greater the disorder of that macrostate: the same macrostate, here the diffuse distribution, can result from very many different microstates. The entropy of a macrostate is the mathematical logarithm of the number of microstates in that macrostate. So the ink in the center of the petri plate has fewer microstates, hence lower entropy, and is less probable than the higher-entropy, diffuse distribution to which the ink drop evolves. In other words, by random collisions, the ink system flows from the less probable to more probable macrostate. This one-way flow is the statistical-mechanics version of the second law in classical thermodynamics, the one-way increase in the entropy of a closed system. In the generally accepted view, classical thermodynamics has been successfully reduced to the random collisions of many particles under Newton’s laws of motion. Thus statistical mechanics has reduced the arrow of time to the statistical increase in entropy. ["Reinventing the Sacred: A New View of Science, Reason, and Religion", p.26, Stuart A. Kauffman]
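Kauffman’s ink-drop picture is easy to simulate; a toy sketch (mine, not Kauffman’s):

```python
# A toy version of Kauffman's ink-drop example: particles random-walk among
# bins and the occupancy distribution flattens, i.e. the system drifts
# toward the macrostate with the most microstates.
import random
from collections import Counter

BINS, PARTICLES, STEPS = 21, 1000, 200
positions = [BINS // 2] * PARTICLES      # all "ink" starts in the middle bin

for _ in range(STEPS):                   # each particle takes a random step
    for i in range(PARTICLES):
        positions[i] = min(BINS - 1, max(0, positions[i] + random.choice((-1, 1))))

print(sorted(Counter(positions).items()))  # near-uniform: the diffuse macrostate
```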
Box, March 29, 2015, 12:43 PM PST
Zachriel, evidently you haven't read the OP to the end. Your storms have high probabilities of occurring everywhere and every day, while your supposed spontaneous biological evolution has probabilities that it would be very generous to call infinitesimal.
niwrad, March 29, 2015, 11:03 AM PST
kairosfocus: Z, energy flows need to be relevant to constructive work for organisation to be reasonable

Storm systems are areas of spontaneous organization with low entropy.
Zachriel, March 29, 2015, 06:20 AM PST
Z, energy flows need to be relevant to constructive work for organisation to be reasonable, beyond 500 - 1,000 bits of descriptive complexity. KF
kairosfocus, March 29, 2015, 06:05 AM PST
niwrad: The physical evolution of an isolated system passes spontaneously through macrostates of increasing probability until it arrives at equilibrium (the most probable macrostate).

However, during the evolution of the system, regions within the system may experience significant decreases in entropy and still be consistent with the 2nd law of thermodynamics.

niwrad: Since organization is highly improbable, a corollary of the 2nd_law_SM is that isolated systems don’t self-organize.

In fact, we observe spontaneous organization, typically seen when energy flows through a system.
Zachriel, March 29, 2015, 05:51 AM PST
Good point. Add even more energy to the earth, raise the surface temperature to 10,000 degrees, and that would make life even more probable!
Jim Smith, March 29, 2015, 05:15 AM PST
Niw, another bite at this I see. Budgie cheeping season continues; I can only note I suggest directly using joules per kelvin, J/K, not "degrees." KF

PS: Let me just note that to be genuinely compensatory, a relevant energy flow needs to be just that -- relevant: causally connected, as with flow through a conversion device giving shaft work or micro equivalents such as electric currents, then coupling of such to info-rich organisation with exhaust of adequate waste degraded energy. Typically, heat.

PPS: Let me try a nice typable diagram (you did a doozie on the 2LOT stat-mech form):

En Source --> En Conv --> Shaft work + waste en
Shaft work + control info + constructor --> organisation + waste en
NET: En s + control info --> En conv + constructor --> org work + tot waste en

Where lucky noise cannot credibly substitute for control info, energy converter and constructor.
kairosfocus, March 29, 2015, 03:49 AM PST