Uncommon Descent Serving The Intelligent Design Community

Physicist Rob Sheldon offers some thoughts on Sal Cordova vs. Granville Sewell on 2nd Law Thermo

Readers will recall that Sal Cordova doesn’t agree with Granville Sewell’s doubts about Darwin based on the Second Law of Thermodynamics. His argument is here.

Sewell has replied here.

Now, Rob Sheldon weighs in:

As a physicist, I have a few more problems with the 2nd law which, as Sal points out, come down to definitions. I agree with Sal that few people understand either thermodynamics or entropy and, as a consequence, make a hash of both the pro and the con arguments. For the sake of expediency, I think Sal is suggesting that it is better to avoid this topic because it invariably ends up in the weeds.

On the other side, Granville thinks that the weeds are still an interesting place to be. Surely if we avoided all difficult topics, we’d never make progress in anything. We can make progress if we are not afraid to plough the untilled turf, and so Granville’s work is both original and interesting (to quote Eugene Wigner).

Here’s the definitional challenge posed by the two meanings of the 2nd law:

1) Thermodynamics: deals with macroscopic states of things like hot bricks and boiling water. This was all worked out in the early 1800s, and is essential for the operation of everything from steam engines to nuclear power plants. Entropy was defined as a macroscopic state variable, expressed in terms of temperature and heat.

No chance of this being wrong, because if it were ever possible to beat the system, you’d have a perpetual motion machine and, besides being filthy rich, you could take over the world.

This is where the phrase “2nd law” applies like an iron rule. Unfortunately, cells are not steam engines, and the origin of the first cell is not a problem related to steam engines, so Sal is pleading for prudence in using the 2nd law in this fashion.

2) Statistical Mechanics: Boltzmann and Gibbs redefined the macroscopic states of matter in terms of microscopic states, where the entropy of a system is now based on counting those microscopic states. As Sal has pointed out, the entropy is now defined by how many different ways there are to arrange, say, nitrogen and carbon dioxide molecules. No “heat” is involved, merely statistical combinatorics. Since this ostensibly has little to do with heat and dynamics, it is called “statistical mechanics”.
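To make that counting concrete, here is a minimal Python sketch with made-up particle numbers (a toy lattice of 20 sites, not anything from Sheldon's text): the statistical "entropy" starts from W, the number of distinguishable arrangements consistent with the same bulk composition.

from math import comb

# Toy mixture: 12 nitrogen and 8 carbon-dioxide molecules on 20 sites.
# W is the number of distinguishable arrangements with the same bulk
# composition -- no heat anywhere, just combinatorics.
n_N2, n_CO2 = 12, 8
W = comb(n_N2 + n_CO2, n_N2)
print(W)   # 125970 distinct arrangements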

3) The Equivalence: Boltzmann’s famous equation, S = k ln W (engraved on his tombstone), is merely an exchange-rate conversion. If W is lira and S is dollars, then k ln( ) converts the one into the other, with the rate determined empirically. Boltzmann’s constant “k” is a semi-empirical conversion number that made Gibbs’s “stat mech” definition agree with the earlier “thermo” definition of Lord Kelvin and company.

Despite this being something as simple as a conversion factor, you must realize how important it was to connect the two. When Einstein connected mass to energy with E = mc^2, we could suddenly talk about mass-energy conservation, atom bombs and baby universes, whereas before Einstein mass and energy were totally different quantities. Likewise, by connecting thermodynamics and statistical mechanics, the hard rules derived from thermo can now be applied to the statistics of counting permutations.
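As an illustration of that "exchange rate" (a sketch under simplifying assumptions, using a lattice-mixing model that is not in Sheldon's text), the same combinatorial count, multiplied by k ln( ), lands on the familiar thermodynamic entropy of mixing in joules per kelvin:

from math import lgamma

k_B = 1.380649e-23   # Boltzmann's constant in J/K -- the "exchange rate"

def ln_W_mixing(n, m):
    # ln[(n+m)! / (n! m!)] via log-gamma, since the factorials themselves
    # are far too large to evaluate directly
    return lgamma(n + m + 1) - lgamma(n + 1) - lgamma(m + 1)

N_A = 6.022e23                        # roughly one mole of each gas
S = k_B * ln_W_mixing(N_A, N_A)       # stat-mech count converted to J/K
print(f"S = k ln W = {S:.2f} J/K")    # about 11.5 J/K, matching the classical
                                      # 2*R*ln(2) entropy of mixing two moles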

This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of its atoms, and thus has a stat-mech entropy that, via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information).

The one remaining problem is how to calculate it.

4) The Entropy problem

Boltzmann was working with ideal gases, and most of the entropy illustrations in physics books deal with either ideal gases or the energy states of an atom. Nobody but nobody wants to tackle the entropic calculation of a cell.

The problem isn’t just that the number of arrangements of the 10^14 atoms in a cell is at least 10^14!, or that the arrangements show long-range ordering and constrained dynamics; frankly, we don’t know how to do the arithmetic.
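Just to put a number on that, a back-of-the-envelope sketch (Stirling's approximation via log-gamma; not part of Sheldon's remarks) shows how many decimal digits 10^14! alone would run to:

from math import lgamma, log

# log10(N!) = ln(N!) / ln(10), with ln(N!) = lgamma(N + 1)
N = 1e14
digits = lgamma(N + 1) / log(10)
print(f"10^14! has about {digits:.2e} decimal digits")   # roughly 1.4e15 digits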

The entropy merely has to show an increase, but there are so many orders of magnitude in this calculation that we can’t tell whether the entropy increased or decreased. In numerics, if you subtract two nearly equal large numbers, most of the significant digits cancel and you can be left with nothing but noise. In this case the numbers are so big that subtracting the “after” entropy from the “before” entropy gives nothing but noise.

It is impossible to calculate the Boltzmann entropy change of a cell unless we could write the number down to a hundred trillion digits of precision and keep track of the last 100 or so of them.
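The loss-of-significance problem is easy to demonstrate with ordinary floating-point arithmetic. This toy sketch uses made-up magnitudes (vastly smaller than a cell's) just to show how a real change can vanish into roundoff:

# Double precision keeps roughly 16 significant digits, so any change smaller
# than about 1 part in 10^16 of the total is lost when we subtract.
S_before = 1.0e20          # "entropy" before, in arbitrary units (illustrative)
true_change = 1.0          # the physically interesting change we care about
S_after = S_before + true_change

print(S_after - S_before)  # 0.0 -- the change has vanished into roundoff
print(S_after == S_before) # True: at this scale the two totals are identical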

My (Nobel-nominated) college professor used to ask a rhetorical question in his thermo class: “What is the entropy change of a cow after the butcher put a .22-calibre bullet through its brain?” Yet this minuscule entropy change is supposed to tell us the difference between a “living” and a “dead” cow. From a physics viewpoint there is almost no change in disorder, yet from a biological viewpoint it is all the difference in the world. Physics just doesn’t know how to measure this, and doesn’t even know whether it ever can.

So that’s the weeds. We have a recipe, but we can’t use it. Despite having this great definition for Boltzmann entropy, we don’t know how to apply it to life, and therefore we can’t tell if entropy is up or down until the critter rots and starts to turn into gas.

That means some other definition of entropy gets employed that “approximates” the Boltzmann definition. Shannon information is often used, as are thought experiments involving black holes. These work fine on the obvious examples, but once again they fail to detect the difference between a live cow and a dead one. So when Granville shows a picture of a tornado, we all know intuitively that Boltzmann entropy is increasing, but there is just no easy way to calculate it. It reminds me of the thirty years of lawsuits during which the tobacco companies insisted that no one had proven smoking causes cancer: we all knew it was true, but we didn’t have the proof.
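One way to see the limitation of such stand-ins (a minimal sketch, not from Sheldon's text): first-order Shannon entropy counts only symbol frequencies, so a meaningful passage and a random shuffle of the very same characters score identically.

from collections import Counter
from math import log2
import random

def shannon_bits_per_symbol(seq):
    # First-order Shannon entropy: -sum p_i * log2(p_i) over symbol frequencies
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

text = "the quick brown fox jumps over the lazy dog " * 50     # "organised"
scrambled = list(text)
random.shuffle(scrambled)                                       # "scrambled"

print(shannon_bits_per_symbol(text))                # roughly 4 bits per character
print(shannon_bits_per_symbol("".join(scrambled)))  # exactly the same value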

Despite this intuitive use of the 2nd law not being mathematically robust, we can still learn a lot by using it. But if our opponent challenges us to prove it, we must be willing to go into the weeds.

Here are two weeds:

Response 1) The entropy of life balances out. Food in, waste out, entropy up.

Answer: Really? Can you show me your calculation? Your proof would be Nobel prize material!

Response 2) The earth is not a closed system.

Then how about the solar system? No, then the galaxy? No, then surely the universe is a closed system! Where’s the missing entropy? Show me where it went, and give me a rough idea of its magnitude–one or two orders of magnitude is sufficient.

Comments
F/N 2: This one, from the linked interview, staggers the imagination, on the misleading power of begging the pivotal question: __________ >> Ordered flow, including life, was permissible as long as it produced enough entropy to compensate for its own internal entropy reduction. The central problem remained, however: If the spontaneous production of order was “infinitely improbable,” as Boltzmann had surmised, then why were ordered systems such a fundamental and characteristic property of the visible world? LMEP provided the answer: Order production is inexorable because order produces entropy faster than disorder. >> __________ Of course, a living system must export entropy. But the mere export of entropy does not explain or justify the proposed spontaneous emergence -- better, origin -- of FSCO/I-based forms that have integrated metabolic processes and von Neumann architecture self-replication. The space of possibilities must be bridged to arrive at shorelines of function based on complex, specific alignments of well-matched parts, starting with OOL. Cf the remark on this here, earlier this morning. And if you think you can easily twist this about and pretend that continents of function dominate the space of possibilities, first remember what a .22 in the head does to a cow. In short, the strongly evident reality in front of us is that FSCO/I comes in islands. Think about the misplaced comma that put a rocket off course. Think about what happens when you need a single hard-to-find car part. Think about why it is that monkeys-at-the-keyboard exercises have so far peaked at about 24 ASCII characters in coherent English, and how the algors to detect and use the sense that was stumbled upon were intelligently designed. In short, think outside the a priori materialist box, and see what the evidence is screaming out to those who will but look and listen. KFkairosfocus
July 7, 2012, 04:28 AM PDT
F/N 2: This comes ever so close to spotting the key point:
The second law of thermodynamics given by equation (2.1) is the view by the system. When the surroundings are lower in energy density, the system undergoes dissipative jk-transitions from state k to state j by emitting quanta that are then no longer part of the system. Hence, the energy content of the system is decreasing. Likewise, when the surroundings are higher in energy density, the system undergoes jk-transitions from state j to state k by absorbing quanta that become an integral part of the system. Hence, the energy content of the system is increasing. Thus, the inequality dS/dt ≥ 0 in the second law, i.e. the principle of increasing entropy S, means that the open system is evolving towards a more probable partition.
In short, quanta of energy are subject to diffusion, much as beads in ice trays that are shaken up. As a result, the clusters of states with higher statistical weights strongly dominate. This moves systems towards higher entropy, and disorder. And obviously, once FSCO/I is involved, that is going to sit on isolated islands of function in vast seas of non-function. Thence the blind-walk challenge to find needles in haystacks. (And Mung, I had to update myself on that.)kairosfocus
July 7, 2012, 04:12 AM PDT
F/N: This, from the LMEP link, is misleading:
THE SECOND LAW ('the entropy principle') as understood classically by Clausius and Thomson captures the idea that the world is inherently active and whenever an energy distribution is out of equilibrium a gradient of a potential (or thermodynamic force) exists that the world acts to dissipate or minimize.
No, there is no mysterious field of force and associated potential gradient that pushes to maximum entropy. There are only the probabilistic implications of random molecular interactions and large config spaces. Cf my ice tray and beads exercise, here on.kairosfocus
July 7, 2012, 04:02 AM PDT
EDTA: Bare speculation:
When a steady stream of external energy is falling on an open system, there is a driving force to assemble mechanisms from the available ingredients and to improve on them in order to acquire more energy in the quest for a stationary state. The driving force makes no difference between abiotic and biotic mechanisms of energy transduction but favours all those that are dispersing energy more and more effectively.
There is instead a driving force driving random variations in a context where the space of possibilities is dominated by non-functional, non-specific states. So, there is no reliable driving force to get us to islands of function, which is where mechanisms lie. These, to blind forces of chance and necessity, are deeply isolated, presenting the monkeys-at-keyboards challenge with nowhere near the resources to surmount it. The rise of this sort of speculation in the teeth of evidence and analysis is a mark of desperation. This, from an exchange of Orgel vs Shapiro on OOL, is apt:
SHAPIRO: The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
Orgel's rejoinder, of course, was that much the same holds for metabolism first scenarios. Both are plainly right, and those who try to pretend that mere injection of raw energy allows an escape from the challenges of blind walks through vast config spaces, do us a disservice. KFkairosfocus
July 7, 2012, 03:52 AM PDT
To muddy up the water just a bit, here are links to some people who think that the laws of physics are on the side of evolution. I don't buy these ideas personally, because matter cannot spontaneously become intentional. But they at least show how confusing the whole matter can be. The first set deal with R. Swenson's Law of Maximum Entropy Production (LMEP), sometimes called the Maximum Entropy Production Principle. To a non-physicist such as myself, this idea sounds almost like a tautology, but he's selling it as the Fourth Law of thermodynamics. It's the idea that entropy/temperature/etc. gradients will relieve themselves in the manner that maximizes the rate at which entropy is produced. In the case of biological evolution then, life exists and evolves because it enables an even faster production of entropy than a non-living system. LMEP www.entropylaw.com Interview with Swenson Next is an article tying evolution in with the Second Law and the Principle of Least Action. Natural Selection for Least Action
When a steady stream of external energy is falling on an open system, there is a driving force to assemble mechanisms from the available ingredients and to improve on them in order to acquire more energy in the quest for a stationary state. The driving force makes no difference between abiotic and biotic mechanisms of energy transduction but favours all those that are dispersing energy more and more effectively.
Don't anticipate that this latter article will describe how a steady stream of external energy manages to create and evolve life, because it doesn't. (But it does take up space on the internet.)EDTA
July 6, 2012, 08:45 PM PDT
sal: I will not intervene in your back-and-forth with Dr Sewell, save to suggest a bit of cooldown. A cold coconut water -- for preference under a coconut tree with turquoise waves lapping at your feet -- always makes things look better. However, I am pointing to a well-defined conceptual bridge with worked-through math. The information bridge is real, and once we have a macro-micro view going, thermodynamic concepts apply to the business of physically expressing degrees of info by constraining configs. That means we can draw on statistical thermodynamics concepts in explaining relevant phenomena. For instance, I just did so to understand what is happening when a cow -- an open system -- grows from a zygote, then goes for a one-way visit to the butcher's. We cannot calculate the specific entropy numbers involved (BTW, the numbers in a steam table etc. are relative) but we can trace the pattern. We can reasonably infer that the vitals in a cow's brain and CNS are beyond 500 bits of complexity, and we can see that the scrambling of configs consequent on a .22 hit in the right place takes us out of a zone of functional configs. With predictable consequences. We can then look from the zygote up, at protein codes, regulatory info etc. We see that where FSCO/I is involved, the cow is dependent on a program, or actually a suite of them. We can reasonably see that this is well beyond 500 bits, and that the only reasonable explanation for the functional info to build a cow, in the end, is design. The usual objection is that self-replicating systems can evolve. But that depends on a strawman caricature of Paley, per his remarks on having the additional capacity of self-replication in Ch 2 of Nat Theol. That is, updating, the von Neumann self-replicator tied to a cellular-level constructor points to design. And once design is on the table it is there across the board. So, thermodynamic and informational thinking are linked and cohere. They both contribute to the design view. I trust this helps. KFkairosfocus
July 6, 2012, 08:30 AM PDT
The tornado running backward is a bad analogy, because it fails to take into account: 1. the heat radiated by the system, and 2. the attractive forces between the structures being assembled. As for 1., let's assume that in a regular, forward tornado, heat is radiated by the system to the environment. So the heat radiated to the environment is deltaQ, where deltaQ is positive if heat is radiated outward. Define deltaS as the entropy decrease of the forward tornado, where deltaS is positive for entropy decrease, negative for entropy increase. Then, at temperature T, the second law of thermodynamics says deltaS <= deltaQ/T (for an isolated system deltaQ = 0, so deltaS <= 0); for an exothermic process deltaQ > 0, which permits an entropy decrease. As for 2., in Granville Sewell's analogy of the backwards tornado, the houses, cars etc. that assemble themselves are not attracted to each other by attractive forces. Thus, his analogy of self-assembling but non-attractive structures is a bad analogy for real chemical systems. Again: In his self-assembling house analogy, the reaction is not exothermic because the parts do not have mutual attractive forces. In real systems that undergo spontaneous decreases in entropy (e.g. magnetization, crystallization, etc.) there are attractive forces which liberate energy as the parts approach each other, so the reaction is exothermic, deltaQ is positive, and 2LOT specifically permits a local DECREASE in entropy. The problem is with Granville Sewell's analogy, not with physics.Diogenes
July 6, 2012, 06:47 AM PDT
As to the various metrics used to quantify information in a living cell, it is interesting to note just how much information is found to be in a 'simple' cell from the thermodynamic perspective. Professor Harold Morowitz shows that the Origin of Life 'problem' escalates dramatically beyond the 1 in 10^40,000 figure when working from a thermodynamic perspective:
"The probability for the chance of formation of the smallest, simplest form of living organism known is 1 in 10^340,000,000. This number is 10 to the 340 millionth power! The size of this figure is truly staggering since there is only supposed to be approximately 10^80 (10 to the 80th power) electrons in the whole universe!" (Professor Harold Morowitz, Energy Flow In Biology pg. 99, Biophysicist of George Mason University)
Dr. Don Johnson lays out some of the probabilities for life in this following video:
Probabilities Of Life - Don Johnson PhD. - 38 minute mark of video
a typical functional protein - 1 part in 10^175
the required enzymes for life - 1 part in 10^40,000
a living self-replicating cell - 1 part in 10^340,000,000
Programming of Life - Probability of a Cell Evolving - video http://www.youtube.com/user/Programmingoflife#p/c/AFDF33F11E2FB840/9/nyTUSe99z6o
Dr. Morowitz did another probability calculation, working from the thermodynamic perspective with an already existing cell, and came up with this number:
DID LIFE START BY CHANCE? Excerpt: Molecular biophysicist, Horold Morowitz (Yale University), calculated the odds of life beginning under natural conditions (spontaneous generation). He calculated, if one were to take the simplest living cell and break every chemical bond within it, the odds that the cell would reassemble under ideal natural conditions (the best possible chemical environment) would be one chance in 10^100,000,000,000. You will have probably have trouble imagining a number so large, so Hugh Ross provides us with the following example. If all the matter in the Universe was converted into building blocks of life, and if assembly of these building blocks were attempted once a microsecond for the entire age of the universe. Then instead of the odds being 1 in 10^100,000,000,000, they would be 1 in 10^99,999,999,916 (also of note: 1 with 100 billion zeros following would fill approx. 20,000 encyclopedias) http://members.tripod.com/~Black_J/chance.html Punctured cell will never reassemble - Jonathan Wells - 2:40 mark of video http://www.youtube.com/watch?v=WKoiivfe_mo
The information content that is derived to be in a cell when working from a purely thermodynamic perspective is simply astonishing to ponder:
'The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica." Carl Sagan, "Life" in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894 “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
For calculations for information, when working from the thermodynamic perspective, please see the following site:
Molecular Biophysics – Information theory. Relation between information and entropy: - Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz' deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. http://www.astroscu.unam.mx/~angel/tsb/molecular.htm
Thus, regardless of whatever nitpicking gripes Sal may have as to the lack of mathematical precision, I find the argument for ID from thermodynamics to be very effective, especially for the origin of life, and I am certainly not going to stop using the argument just because Sal thinks we should! As to 'mathematical precision' for measuring information in cells, there actually is a precise limit in place. I would consider the second law violated if purely neo-Darwinian processes generated enough functional information to account for JUST ONE novel functional protein:
Functional information and the emergence of bio-complexity: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions. http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Hazen_etal_PNAS_2007.pdf Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video http://www.metacafe.com/watch/3995236 Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47
Another very good test which is very easy to understand, which would show a violation of the second law but which, as far as I know, has never been violated is the "fitness test":
Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248
This following study demonstrated that bacteria which had gained antibiotic resistance by mutation are less fit than wild-type bacteria:
Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008 Excerpt: Therefore, in order to simulate competition in the wild, bacteria must be grown on minimal media. Minimal media mimics better what bacteria experience in a natural environment over a period of time. This is the place where fitness can be accurately assessed. Given a rich media, they grow about the same. http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore
bornagain77
July 6, 2012, 06:41 AM PDT
Sal,
We can quantify entropy change for chemical reactions and bricks to several significant figures but we can’t do the same for various evolutionary claims. The inability to quantify this amount makes the relevance of the 2nd law suspect at best
So if you watched a video of a tornado running backward, turning rubble into houses and cars, you would sit there and say, it is just too hard to quantify what is happening on this video, so I can't decide if the video is running forward or backward? It's too hard to quantify, so I can't tell if entropy is increasing or decreasing? And if you watched a barren planet producing intelligent brains and computers and airplanes and the Internet, your reaction would be, this is just too hard to quantify, we can't apply the second law? Come on, some things are obvious even if they are difficult to quantify! As Kairosfocus says, the calculation challenge does not dismiss the issue. Science isn't all about quantifying things.Granville Sewell
July 6, 2012, 05:55 AM PDT
Though my AML article has received high praise from many good scientists I know, always in private of course. One engineering professor called it “a really highly significant piece of work” but told me never to quote him by name!
Then your friends and colleagues failed you on this matter. Maybe your true friends are the ones willing to disagree.
And Sal, you really should do some homework before attacking friends on UD. It is completely clear to me that you had not read anything I had written on the second law before you posted this
I'm sorry, but that is simply not true. I read and studied your work, and the more I learned the more it became apparent something had to be said. I publicly disagreed with you in far more polite terms here at Uncommon Descent on April 2, 2007. Here is the link to one of our first exchanges FIVE YEARS AGO: https://uncommondescent.com/intelligent-design/specified-complexity-and-the-second-law/#comment-109481
Granville, Welcome to Uncommon Descent. I am a big fan of your writings. I am, however, reluctant to appeal to the traditional 2nd law as an argument supportive of design inferences. I believe Dembski’s 4th law is more appropriate. There has been an ongoing discussion between myself and Professor Beling (a professor of Thermodynamics). See: Is 2nd Law a special case of 4th Law? My central conclusion regarding the 2nd law is taken from Bill’s No Free Lunch:
the second law is subject to the Law of Conservation of Information [4th Law] page 172-173, No Free Lunch
and
A magnetic diskette recording random bits versus one recording, say, the text of this book are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they are radically different. Two Scrabble boards with Scrabble pieces covering identical squares are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they can be radically different, one displaying a random arrangement of letters, and the other meaningful words and therefore CSI.
Thus, I’m a bit uncomfortable with 2nd law arguments. Founding father of ID Walter Bradley, in Thermodynamics and the Origin of Life:
the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.
I was far more polite and petitioning to you then, and I'm sorry I had to be far more rude this time around. The following statement by you yourself says it all:
It may be difficult (or impossible) to apply it in a quantitative way to things like tornados and evolution
We can quantify entropy change for chemical reactions and bricks to several significant figures but we can't do the same for various evolutionary claims. The inability to quantify this amount makes the relevance of the 2nd law suspect at best. There is non-thermal entropy change, but to quantify this it is more appropriate to use statistics and statistical mechanics and information theories NOT the 2nd law. We use thermodynamics to measure thermal entropy change. The fact that you yourself admit that it is difficult to use the second law to calculate non-thermal entropy change in things like evolution should be indicative that maybe it is not the most appropriate avenue for defending design concepts or criticizing evolutionary claims. In fact, in one of your works you made reference that dynamite may provide energy to an open system but that did not mean that useful work toward construction of a building could be accomplished. That phrase was used in Mystery of Life's Origin by Thaxton, Bradley, and Olsen. I quoted Bradley in my critiques of your work. In that book they calculated the NON-THERMAL entropy change needed to polymerize a functional protein and the corresponding amount of Gibbs free energy that needed to be involved. Did they use the 2nd law to calculate this non-thermal entropy change? NO! They used statistical mechanics. That is the more fruitful approach, imho.scordova
July 6, 2012, 05:26 AM PDT
Kairosfocus, Thank you for that. The thing I find so insulting about Sal's post and comment above is, throughout all of it is the implication that I just don't know enough about thermodynamics to realize that the second law of thermodynamics applies only to thermodynamics, when it is used much more generally by many, many people on both sides of the ID debate. It may be difficult (or impossible) to apply it in a quantitative way to things like tornados and evolution, but most everyone I've been arguing with these last 11 years acknowledges that what has happened on Earth would violate the second law if the Earth were an isolated system. I would have been much less insulted if he would have at least acknowledged that my point of view on the application of the law is very widely held, and not just due to ignorance of thermodynamics. Actually, Sal, I probably would not have been insulted at all if it were just your post, but I have been told continually for 11 years that I don't know what I'm talking about, that's why I'm so hypersensitive. Though my AML article has received high praise from many good scientists I know, always in private of course. One engineering professor called it "a really highly significant piece of work" but told me never to quote him by name! And Sal, you really should do some homework before attacking friends on UD. It is completely clear to me that you had not read anything I had written on the second law before you posted this (maybe you have now), you had just heard that I believed that evolution violated the second law and took off from there. No wonder you didn't include any links to my work, you probably didn't know of any. They are all over www.evolutionnews.org.Granville Sewell
July 6, 2012, 04:34 AM PDT
Gentlemen: I have several times now pointed out that there is a whole informational school of thought on entropy, which provides conceptual tools -- including a bridge from Shannon Info theory and average info per symbol [the info-theory meaning of Shannon's H] -- to the microscopic view of statistical thermodynamics, thence a bridge to classical entropy. (You may wish to skim the discussions here and here on in my always linked.) I think the concession made by Wiki that I excerpted here is sufficient to outline the bridge that joins these concepts:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
And, in that context, it is quite reasonable to compare entropy to disorder. Pardon, the discussion is now happening across too many threads, I guess I will simply cross-link this point. Specifically, because order -- and, a fortiori, functionally specific organisation -- restricts the range of possible organisation at microscopic level. That is why when a block of ice melts, it absorbs a certain latent heat of fusion and in so doing increases its entropy. Disordering the crystal structure took a certain d'Q/T. The resulting water is less specified at molecular level, is less ordered and is of greater entropy, even at the same temperature and nearly the same density, i.e. inter-molecular spacing. Likewise, the transition to the vapour state of water that is boiling is an increase of entropy and of disorder. In this case, usually with a dramatic shift in possible intermolecular spacing. Hence, the drop in density of orders of magnitude. (A ten times increase in spacing is linked to a thousand times decrease in density. This is roughly what happens.) Informationally, the amount of missing info on where molecules are and how they are moving, given the set of macro observable variables sufficient to describe bulk state, rose sharply both times. Similarly, the increased energy available pulls up the high energy skirt of molecular energy distribution, and accelerates activation processes exponentially. Thus the proverbial doubling of rates of such processes -- aging of components [and thus halving of system lifespan], conductivity of semiconductors etc -- per eight degrees Celsius rise around room temperature. So, no, it will not do to try to sever or dismiss the link from entropy to disorder. Moreover, there is no good reason to insist that the logic involved depends on our requiring a microscope to inspect the particles or components involved. That is, once we can see aggregate behaviours underpinned by a micro-level, the same considerations apply. As in, the micro-macro distinction in economics. I suspect, this may even apply to the gap between molecular and neuronal level behaviour in the CNS and the unified behaviour of an individual. So, I see no reason to artificially separate informationally-focussed analyses and statistical thermodynamic ones once the bridge between the two has been made. Thus, in outline from Harry S Robertson in his Statistical Thermo-physics (Prentice-Hall, 1993):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi [--> This is Shannon's H, avg info per symbol], [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
In short, what we have been thinking of as heat, energy, entropy etc from one view, and as information etc from another, are integrated once we see the conceptual bridges. Indeed, this also has relevance to economic analysis, which is informationally constrained on similar macro/micro and "atomic freedom" vs average behaviour issues. And in that context, the Creationists and design thinkers have been right to highlight the pivotal observation that opening up a system to energy and mass flows does not answer to the question of origin of functionally specific complex organisation sufficient to implement a metabolic entity that is self replicating, nor does it explain the body plans. The von Neumann observation, that the pivotal issue is the joining of a contructor to a self replication facility driven by a control tape, is crucial. It also points to the often overlooked thought exercise in Paley's Natural Theology Ch 2, in which he envisioned a self-replicating time-keeping watch. Namely, the additionality of having separate function and an informationally controlled replication system that reproduces the functional entity points to a higher order of intelligent design than even just the direct functional article. So, the correct answer is not to dismiss the links between thermodynamics and information (and between information and functional organisation), but to recognise and apply them. Hence, my discussion of the significance of why a .22 round in a vital spot will kill a cow. Then, going back to the zygote that grew into that cow by virtue of taking in materials and energy under informational control,we see the link to FSCO/I. Onward this connects to constraint vs freedom and linked macro-level observables and states. There is an island of possible states consistent with the living cow. A kinetic disruption to those states can trigger catastrophic functional collapse, i.e. here, death. The same molecules and atoms, suddenly re arranged in what were always possible ways, and life function vanishes. Life being a macro-observable consistent with certain constrained clusters of underlying configurations forming a target zone. Bringing in the thermodynamic possibilities for the relevant atoms and molecules, we see that some serious work of clumping and configuration had to have gone into the development and growth of the living cow, which can be crudely and partially released by burning the dead cow. But, it is obvious that the symbolic functional constraints in the cow's dna are the same: molecules are clumped then specifically constrained for functional reasons. The organisation can be destroyed by raw energy injection: heat it up and destroy it, and the increment of work and entropy reduction to configure may be lost in the decimal places, but we can detect it by using other conceptual and measuring tools, as information in bits. So, we are back to the significance of FSCO/I. And, let us not forget, if you suffer a trauma, the EMTs using the Glasgow coma scale are seeking to infer intelligence from functional response in the context of possible disorder that damages or destroys proper function. Intelligent life, itself is sustained by active, informationally controlled processes, in the teeth of the tendency of molecules to randomise and go to increasingly disordered states. Eventually, entropy will win, not least by corrupting the genetic and regulatory info in our cells. And in the end, that genetic entropy will add such a burden to the human genome that our race will become non-viable. 
Thus, we see the point that those who argue for writing genetic and associated regulatory info by cumulative, rewarded happy accidents, are missing the significance of the link between information and entropy. So, no, I think the Creationists who first noticed these issues were on to something, decades ago. And, I think the Design theorists who have looked in greater details on the informational issues, are on to something. Where also, those who are ever so desperate to dismiss and deride this, are barking up the wrong tree. Especially, when they resort to the lazy, shabby tactics of namecalling, guilt by association, and the genetic fallacy. There is an issue to be addressed soberly and seriously on its merits, and let us be about it. KFkairosfocus
July 6, 2012, 12:53 AM PDT
Dr. Sewell, Your work is not wasted; it has been an inspiration to many, including myself. I think the fruitful avenue of exploration is statistics, not thermodynamics. If references to "thermo" were replaced with notions of "statistics", it would convey much of what you want without causing arguments over definitions. Ideas of entropy can be exorcised of their thermodynamic associations and moved into information theory. The relabeling would alleviate many of the arguments over definitions while preserving the heart of what you are working to demonstrate. Instead of Boltzmann's entropy in statistical mechanics, S = k log W, it can be replaced with Shannon information or entropy: H = log W. At least with Shannon you don't need the "k", so the formula is simpler. Then the probability arguments will still hold, but without being conflated with notions in thermodynamics such as heat and temperature or even energy. Arguments over Clausius, Kelvin-Planck, etc. will vanish, leaving only statistical arguments (which are more important anyway).scordova
July 5, 2012, 10:58 PM PDT
Yes, I agree with Granville Sewell's last sentence in his comment above. Sal Cordova is right about the Second Law of Thermodynamics not disproving evolution. Granville Sewell is completely confused. 2LOT says that if a closed system has a decrease in entropy deltaS (where I define deltaS is positive for entropy decrease and negative for entropy increase) and if it radiates heat deltaQ to its environment at temperature T, then: deltaS <= delta Q /T For an open system, you add corrections for the intrinsic entropy of matter entering or leaving the system. That's all. Special cases: for an isolated system, deltaQ = 0 by definition. For an exothermic reaction, which all living things and ecosystems are, deltaQ is positive and huge. Granville Sewell is in effect setting deltaQ = 0 for the evolution of a population of organisms, plus the food they eat, plus the poop and dead bodies they produce. This is invalid, because populations of organisms radiate huge amounts of heat, so deltaQ is huge for populations. This is a huge, obvious error. For the evolution of a population of organisms, e.g. Homo habilis to Homo erectus, deltaS is certainly very, very small compared to the huge amounts of heat radiated to the environment by a half-million years of evolution. In fact DeltaS could even be NEGATIVE for some kinds of evolution, i.e. Homo habilis to Homo erectus, because Homo erectus individuals are larger, and because entropy is an intrinsic property, so more matter means higher entropy. The "disorder" of a modern human brain could be twice that of an early Homo erectus brain, because it is more massive. If you prefer to think of entropy as "disorder", and "order" as the opposite of entropy, well, that is a very bad metaphor and leads to bizarre contradictions. Consider the following. Consider the empty space around planet Earth 3 billion years ago. Let's by convention say that its entropy then was zero. Now if you define "order" as minus entropy, then the space around planet Earth 3 billion years ago had zero "order." But, while life was evolving on Earth, the ecosystem radiated heat into space. So delta Q over 3 billion years is HUGE. This means the entropy of empty space around Earth is a much, much higher positive number than it was 3 billion years ago. But if you define "order" as minus entropy, then the empty space around Earth is, right now, a huge, huge NEGATIVE NUMBER. If this seems bizarre or counter-intuitive, then don't call "order" the opposite of entropy. Physicists don't and chemists don't, when they are doing real calculations. This is exactly what Granville Sewell does–he calls "order" the opposite of entropy, and instead of talking about heat flowing OUT of a system, as a physicist or chemist would, Sewell instead speaks of "order" flowing INTO a system. This leads Granville Sewell to bizarre self-contradictions and counter-intuitive absurdities.Diogenes
July 5, 2012, 08:41 PM PDT
Gil, You are absolutely right that the whole issue is just common sense, but there is a "common sense law of physics" called the second law, which says that what has happened on Earth should not happen, at least due to unintelligent causes. My first primitive attempt to make the second law argument was in the second part of my 2001 Mathematical Intelligencer article. I wrote:

---------------------------------------
...to attribute the development of life on Earth to natural selection is to assign to it--and to it alone, of all known natural "forces"--the ability to violate the second law of thermodynamics and to cause order to arise from disorder. It is often argued that since the Earth is not a closed system--it receives energy from the Sun, for example--the second law is not applicable in this case. It is true that order can increase locally, if the local increase is compensated by a decrease elsewhere, ie, an open system can be taken to a less probable state by importing order from outside. For example, we could transport a truckload of encyclopedias and computers to the moon, thereby increasing the order on the moon, without violating the second law. But the second law of thermodynamics--at least the underlying principle behind this law--simply says that natural forces do not cause extremely improbable things to happen, and it is absurd to argue that because the Earth receives energy from the Sun, this principle was not violated here when the original rearrangement of atoms into encyclopedias and computers occurred.
----------------------------------------------

I immediately heard from people who said, natural causes do extremely improbable things all the time; every time we flip a billion coins we get an extremely improbable result. So I responded, I mean the second law says natural forces don't do macroscopically describable things that are extremely improbable from the microscopic point of view. Then a few months later I noticed that the equations for entropy change, which are commonly generalized to less quantifiable applications to make the "compensation" argument---that extremely improbable things can happen on Earth as long as they are compensated by entropy increases outside the Earth---when looked at more carefully, do not support the absurd compensation argument; they actually support, when generalized, the common sense argument of my 2001 article.

So for 11 years now I have been trying to make the second law argument more clearly and scientifically. But by now I realize I have completely wasted 11 years of my life, and put up with unbelievable ridicule and abuse for nothing, because I now realize that anyone who can read my original, common sense argument above, and believe that 4 unintelligent forces alone can create "encyclopedias and computers," is always going to find a way to avoid the obvious conclusion, no matter how clearly and accurately you state the second law argument. They will argue that the second law only applies to thermal entropy, or that what has happened on Earth is just too difficult to quantify, or use the most popular argument: "you're just an idiot who doesn't know anything about the second law"; whatever it takes, they will find a way to distract attention from my main, obvious, point. I have wasted 11 years of my life.Granville Sewell
July 5, 2012, 08:27 PM PDT
I'm a very simple person, au fond. I play Chopin on the piano, write artificially intelligent computer programs as a hobby, and earn my living as a software engineer in aerospace R&D. These qualifications might seem irrelevant concerning the discussion at hand, but they give me some authority concerning the theme of this thread. I have a propensity and a passion for figuring stuff out, and have developed a nose for smelling out BS concerning "scientific" issues. Forget thermodynamics, entropy, and all the rest. Anyone with any reasonable intelligence and familiarity with engineered systems should be able to recognize that complex information-processing machinery of the kind found in living systems cannot possibly be engineered by Darwinian mechanisms. Those who propose that Darwinian mechanisms account for this technology are clearly out of contact with reality, and live in some kind of bizarre La-La Land that has nothing to do with reason or evidence. Attempting to reason with such people is an exercise in futility, because they are completely irrational.GilDodgen
July 5, 2012, 05:54 PM PDT
small supply dropskairosfocus
July 5, 2012, 03:57 PM PDT
News: The calculation challenge RS rightly highlights does not dismiss the issue. To see why, notice that we face similar issues in say economics. We are left to resort to proofs in principle, toy models to illustrate that these are relevant, and useful, often semi-empirical aggregate models. So, we can have confidence in a result or even a qualitative assessment lightly dusted with some algebra or even graphs backed up by a survey of the involved logic. Frankly, one of my best pieces of economic argument was to use two sticks, one fixed almost straight up and the other at an angle, with the intersection passing through a bead. as the movable stick goes right or left a slight amount the bead rises (or, falls) sharply. Voila, that is one reason energy markets are so volatile: tight supply, shifting demand. Slight jumps in demand will push prices hard. and small drops can push prices up too. Crude, but good enough to see what is going on beyond the noise of talking heads and screaming politicians. And, based on valid principles. Let's take up the cow shot by the butcher. Some crude physics, from a design perspective, without a single equation. We know, empirically, that the well-placed round reliably destroys life function. Why? Plainly, by disrupting vital system function that depends on specific configurations. That is, we see an in-principle illustration of loss of organisation and associated functionally specific information. In the right place, instant unconsciousness, immediate collapse and rapid death. That is, apparently minor disruptions to obviously accessible configs, are not compatible with life. Where also, the cow came about by the infolding of regulatory programs and associated growth and development in accord with genetic and cellular information originally resident in a Zygote that formed a system open to material and energy inflows and outflows. Where too, the key molecular aggregates in the cells are known to be in large part informationally controlled. Indeed, the cell exhibits metabolism and an integrated von Neumann self replicator. Where again, we know there is a reasonable bridge between configurational specificity and low entropy. Indeed, there is a whole informational school of thermo-D out there. So, even where we cannot calculate the numerical values (actually MOST serious problems in physics and related fields cannot be worked out in detail, we use simplifications and models and aggregations or approximations all the time), we can trace the entropic pattern involved. A pattern that is consistent with what we can work out for our usual toy examples. And the pattern throws out results that point to the utter implausibility of getting to main living forms by blind forces of chance and necessity. Indeed, a good toy comparison is the 500-bit string that is to form coherent English text. This is beyond the credible reach of the solar system's atomic resources. If you see a 73 or so character string of coherent English, that is a strong sign that the only observed source of such has been at work. Intelligence. We are here dealing with systems that are known to be far more complex than that. KFkairosfocus
July 5, 2012, 03:55 PM PDT
