A little timeline on the second law argument, as applied to evolution (see my BioComplexity article for more detail):
1. Scientists observed that the temperature distribution in an object always tends toward more uniformity, as heat flows from hot to cold regions, and defined a quantity called “entropy” to measure this randomness, or uniformity. The first formulations of the second law of thermodynamics stated that thermal “entropy” must always increase, or at least remain constant, in an isolated system.
2. It was realized that the reason temperature tends to become more uniformly (more randomly) distributed was purely statistical: a uniform distribution is more probable than a highly non-uniform distribution. Exactly the same argument, and even the same equations, apply to the distribution of anything else, such as carbon, that diffuses. In fact, one can define a “carbon entropy” in the same way as thermal entropy, and show, using the same equations, that carbon entropy must always increase, or remain constant, in an isolated system.
3. Since the reason thermal and carbon (and chromium, etc) distributions become more uniform in an isolated system is that the laws of probability favor more random, more probable, states, some scientists generalized the second law with statements such as “In an isolated system, the direction of spontaneous change is from order to disorder.” For these more general statements, “entropy” was simply used as a synonym for “disorder” and many physics texts gave examples of irreversible “entropy” increases that had nothing to do with heat conduction or diffusion, such as tornados turning towns into rubble, explosions destroying buildings, or fires turning books into ashes.
4. Some people then asked: what could be a more spectacular increase in order, or decrease in “entropy”, than civilizations arising on a once-barren planet? They argued that the claim that entirely natural causes could turn dust into computers was contrary to these more general statements of the second law.
5. The counter-argument offered by evolutionists was always: but the second law only says order cannot increase in an isolated system, and the Earth receives energy from the sun, so computers arising from dust here does not violate the second law, as long as the increases in order here are “compensated” by decreases outside our open system.
6. In several publications, beginning in a 2001 Mathematical Intelligencer letter, I showed that while it is true that thermal entropy can decrease in an open system, it cannot decrease faster than it is exported through the boundary. Stated in terms of “thermal order” (= the negative of thermal entropy): in an open system, thermal order cannot increase faster than it is imported through the boundary, and likewise “carbon order” cannot increase faster than it is imported through the boundary, etc. (Though I was not the first to notice this, it seemed to be a little-known fact.) Then I argued that the more general statements of the second law could also be generalized to open systems, using the tautology that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” Thus the fact that order can increase in an open system does not mean that computers can appear on a barren planet as long as the planet receives solar energy; something must be entering which makes the appearance of computers not extremely improbable, for example: computers.
7. I’m sure that physics texts are still being written which apply the second law to tornados and explosions and fires, and still say evolution does not violate these more general statements of the second law because they only apply to isolated systems. But I have found that after reading my writings on the second law (for example, my withdrawn-at-the-last-minute Applied Mathematics Letters article) or watching my videos (see below), no one wants to talk about isolated and open systems; they ALL now say the second law of thermodynamics should only be applied to thermodynamics, that it is only about heat. “Entropy” never meant anything other than thermal entropy, and even when physics textbooks apply the second law to more general situations, they are really only talking about thermal entropy. Whether the second law still applies to carbon entropy, for example, where the equations are exactly the same, is not clear.
8. Of course you can still argue that the “second law of thermodynamics” should never have been generalized (by physics textbook writers; creationists were not the first to generalize it!) and so it has no relevance to evolution. But there is obviously SOME law of Nature that prevents tornados from turning rubble into houses and cars, and the same law prevents computers from arising on barren planets through unintelligent causes alone. And if it is not a generalization of the second law of thermodynamics, it is a law of Nature very closely related to the second law!
Note added later: as clearly stated in the BioComplexity article, the statements about “X-entropy”, where X = heat, carbon, chromium,…, in an isolated or open system, assume nothing is going on except diffusion, in which case they illustrate nicely the common sense conclusion (tautology, actually) that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” Thus merely showing that the statements about X-entropy are not always valid in more general situations does not negate the general, common sense, conclusion, and does not allow you to argue that just because the Earth is an open system, civilizations can arise from dust here without violating the second law (or at least the fundamental natural principle behind the second law). At some point you are going to have to argue that energy from the sun makes the spontaneous rearrangement of atoms into computers and spaceships and iPhones not astronomically improbable; all the popular easy ways to avoid the obvious conclusion are now gone. (See Why Evolution is Different, excerpted—and somewhat updated—from Chapter 5 of my Discovery Institute Press book.)
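For readers who like to see things numerically, here is a minimal sketch (written for this post, not taken from the BioComplexity article) of the pure-diffusion case: 1-D diffusion with no-flux boundaries, i.e. an isolated system, checking that an entropy-like functional of the concentration never decreases as the distribution becomes uniform. The grid sizes, diffusivity, and initial condition are arbitrary choices.

```python
import numpy as np

# Minimal sketch of "X-entropy" under pure diffusion in an isolated system.
nx, dx, dt, D = 100, 1.0, 0.1, 1.0
c = np.zeros(nx)
c[:10] = 1.0                 # all the "carbon" starts at one end
c /= c.sum() * dx            # normalize so the total amount is 1

def entropy(c):
    p = np.clip(c * dx, 1e-300, None)   # amount per cell; clip avoids log(0)
    return -np.sum(p * np.log(p))

prev = entropy(c)
for _ in range(20000):
    # Explicit finite differences; edge padding gives reflecting (no-flux)
    # boundaries, so nothing enters or leaves: the system is isolated.
    cpad = np.pad(c, 1, mode="edge")
    c = c + D * dt / dx**2 * (cpad[2:] - 2 * c + cpad[:-2])
    s = entropy(c)
    assert s >= prev - 1e-12            # "carbon entropy" never decreases
    prev = s

print(f"final entropy {prev:.4f}; uniform maximum is ln(100) = {np.log(nx):.4f}")
```

With no-flux boundaries the entropy climbs monotonically to its maximum; to make it decrease, you would have to open the boundary and export entropy through it, which is the point of the open-system statements above.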
[youtube 259r-iDckjQ]
And what does that explain about how “intelligent cause” works?
Let’s assume that your post had no intelligent cause.
Again? When are you going to admit this dog just won’t hunt?
As I’ve pointed out a number of times already, your 2001 letter (and subsequent rehashes) only showed this in a single very specific circumstance (diffusion through a uniform solid), and provided no basis at all for the claim that it’s true in general. And it’s not true in general. In fact, it fails in even slightly different situations:
– Diffusion through a solid *in the presence of gravity*. In this case, lighter components(*) will diffuse toward the top, and denser ones toward the bottom, which can certainly decrease their X-entropies (despite there being no X-entropy flux into or out of the system).
(* Actually, it’s a little more complicated than that, but it’s not worth worrying about here.)
– Similarly, if you had a carbon powder uniformly mixed with air and left it isolated, the powder would settle to the bottom, decreasing its carbon entropy (again, with no carbon-entropy flux).
– Start with a sealed container of propane gas, and cool it. When it gets cool enough, some of the propane will condense out as a liquid, leading to a less uniform distribution of carbon (and a decrease in the carbon entropy). Again, there’s no carbon-entropy flux into or out of the system, although there is a thermal entropy flux in this case.
– Mix a magnesium sulfate solution with a solution of sodium carbonate, and isolate it. The two will react to form sodium sulfate and magnesium carbonate. Magnesium carbonate is insoluble, so it precipitates out as a solid… producing a less uniform distribution of carbon, and thus again a decrease in carbon entropy. In an isolated system.
– Start with a uniform block of graphite (or, actually, any other form of carbon), and isolate it completely. If some of the carbon is the C14 isotope, it’ll spontaneously decay to nitrogen. Even though the distribution of carbon remains uniform, the carbon entropy will decrease due to the decreasing quantity of carbon.
– Start with a block of graphite (or any other form of carbon), and compress it slightly (say, by 0.01%). (You could increase the pressure, or cool it, or whatever; it doesn’t matter for this example.) The carbon entropy change in this case is, at least as far as I can see, undefined. Depending on what volume you integrate over, you either get an (undefined) constant of integration that fails to cancel out, or a carbon-entropy density that diverges to -infinity (in the now-empty-of-carbon volume).
This is just a small sample of the many situations where your derivation completely fails to correspond to reality. You really need to stop trying to pass it off as an actual valid law.
(BTW, even if your claim was correct, it wouldn’t support intelligent design; it’d imply that creating life required various ordered elements entering and leaving Earth, whether or not there was any intelligence involved.)
There are a bunch of other problems with your summary, but if you can’t even admit the limitations of your derivation, there’s no point in going into subtler issues.
Now, if anyone reading this is interested in how the second law of thermodynamics actually applies to various “entropies”, here’s a short summary:
You can call anything you want “entropy” (for example, Sewell’s X-entropies, the Shannon entropy of information theory, etc), but naming things “entropy” doesn’t magically make the second law apply to them.
The second law applies directly to only one entropy. Unfortunately, different people use different terms for this one entropy; I tend to call it the “thermodynamic” or “total” entropy; some people call it “thermal” entropy, or even other things.
The second law applies indirectly to some other entropies because they’re related to the thermodynamic(/total/whatever) entropy. But as a rule, if you don’t know how some other entropy is related to the thermodynamic entropy, you don’t know how (or even if) the second law applies to it.
Generally, the way other entropies are related to the thermodynamic entropy is that they’re part of it; that is, the thermodynamic entropy is sometimes the total of various partial entropies (which is why I call it the “total” entropy). For instance, the entropy of a classical ideal gas can be broken down into thermal and configurational components (relating to the movement and arrangement of the gas molecules, respectively). Again, the terminology is confusing: in this case, thermodynamic entropy = thermal entropy + configurational entropy.
For less idealized systems, you generally don’t get such a clean breakdown of the entropy into different components, but you sometimes do get an approximate breakdown. Real gases, for example, are often close enough to ideal that the thermal + configurational breakdown is a good approximation. Liquids, on the other hand, are messier than that.
Now, the important thing to realize is that the second law applies only to the thermodynamic (/total) entropy, and it doesn’t place any limit on conversion of entropy between types. For example, if an ideal (or near-ideal) gas is compressed, some of its configurational entropy will be converted into thermal entropy, and it will heat up. If it’s allowed to expand, the reverse happens: some of its thermal entropy is converted to configurational entropy as it cools down. If the compression/expansion happens slowly (and close to equilibrium), the conversion efficiency can be arbitrarily close to 100%.
That’s what’s happening in most of my examples above. Sewell’s X-entropies are sort of like the configurational entropies (though not close enough to be of any actual use), and in my examples the configurational entropy of the carbon atoms decreases, coupled to an equal-or-larger increase in some other partial entropy, so the total entropy increases (or stays constant) and the actual second law is fully satisfied.
Sewell has repeatedly denied that this sort of conversion between entropies makes sense. He doesn’t understand how it could work, or even why the different entropies should all have the same units. But reality is not limited to what he understands, and the clear fact is that this sort of conversion happens all the time, all over the place (and if you use the correct formulas instead of his X-entropies, the units do match up).
So, let’s apply this to the Earth and evolution: the sunlight received by Earth carries about 3.8e13 W/K of entropy, and the thermal (mostly infrared) radiation leaving Earth carries at least 3.7e14 W/K, for a net flux of at least 3.3e14 W/K leaving Earth (see here). There are some other entropy fluxes, but I’m pretty sure they’re too small to matter. So, as far as the second law is concerned, the total entropy of Earth could be decreasing at up to 3.3e14 J/K per second, and the second law places no restriction on what form that decrease might take.
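For reference, here’s the crudest version of that estimate: entropy flux approximated as power/temperature, with round-number assumptions and ignoring the 4/3 factor that applies to blackbody radiation entropy, so the results differ somewhat from the figures above but land in the same ballpark.

```python
# Back-of-envelope entropy budget for Earth. All inputs are assumed round
# numbers, not the more careful figures from the linked source.
P = 1.2e17       # W, solar power absorbed by Earth (after ~30% albedo)
T_sun = 5800.0   # K, effective temperature of incoming sunlight
T_ir = 255.0     # K, effective temperature of outgoing infrared

S_in = P / T_sun     # entropy flux arriving with sunlight
S_out = P / T_ir     # entropy flux leaving as thermal radiation
print(f"in:  {S_in:.1e} W/K")                 # ~2e13 W/K
print(f"out: {S_out:.1e} W/K")                # ~5e14 W/K
print(f"net export: {S_out - S_in:.1e} W/K")
```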
That’s plenty.
Mind you, just because something is allowed by the second law doesn’t mean it’s actually possible; it might well be impossible for some other reason. Sewell really really really wants there to be some law of nature that forbids any sort of spontaneous organization, but the second law certainly doesn’t do that and he has yet to make a coherent argument for any other law that does either.
Gordon Davisson,
In the BioComplexity article I stated several times that I was assuming nothing is going on but diffusion (or heat conduction, in the case of thermal entropy). In this simple case, the carbon order cannot increase faster than it is imported, so it illustrates nicely the tautology that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it NOT extremely improbable,” which was my main point. So it is not true that ANYTHING can happen in an open system without violating the second law (or at least the general principle behind this law), as long as the increases in order are “compensated” by decreases outside the open system. Anyway, I don’t think a tautology really needs illustrating; it stands even without an example.
Gordon,
I dealt with an objection similar to your second comment in the BioComplexity paper; quoting from it:
Bob Lloyd’s primary criticism [7] of my approach was that my “X-entropies” (e.g., “chromium entropy”) are not always independent of each other. He showed that in certain experiments in liquids, thermal entropy changes can cause changes in the other X-entropies. Therefore, he concluded, “the separation of total entropy into different entropies…is invalid.” He wrote that the idea that my X-entropies are always independent of each other was “central to all of the versions of his argument.” Actually, I never claimed that: in scenarios A and B, using the standard models for diffusion and heat conduction, and assuming nothing else is going on, the thermal and chromium entropies are independent, and then statement 1b nicely illustrates the general statement 2b (though I’m not sure a tautology needs illustrating).
Hi Granville!
After all, evolutionists are our best fun; we should thank them.
Think: for decades, countless physics textbooks stated that entropy is disorder and tends to increase. Since this counters evolution, today some evolutionists work hard to issue corrections to those darned books.
You patiently try to explain that systems always tend to go toward probable states, and what do they reply? “No, the 2nd law applies only to heat”. My God, but why does heat pass from a hot body to a cold body? Because the state of uniform thermal distribution is more probable. So heat is indeed just one example among many of systems tending toward probable states. But evolutionists are not even aware that they shoot themselves in the foot. Too funny.
Then niwrad what does that explain about how “intelligent cause” works? Where is your cognitive model for us to test?
GaryGaulin #8
I think maybe the ID argument from the 2nd law has more to do with how “unintelligent cause” works. It works in the direction of… un-work, so to speak, if by “work” we mean something organizational. Unintelligent forces are idle, laggard; they always prefer probable, easy tasks. They hate to organize from scratch; way too much effort.
niwrad:
Then instead of the advertised theory that is expected to explain how “intelligent cause” works:
The consumer gets another deceptive “bait and switch” that changes the subject to something else?
https://en.wikipedia.org/wiki/Bait-and-switch
GaryGaulin #10
No “bait and switch”. Intelligent beings work by means of knowledge. But actually the issue of this thread is whether the 2nd law has something to do with the ID/evo debate.
niwrad:
That is an extreme oversimplification with no testable operational definition for “Intelligent”, which in this case should be provided as a computer model showing all of the main features of any “Intelligent” system.
niwrad:
The question of whether the 2nd law has something to do with the ID/evo debate amounts to expecting consumers to somehow explain for themselves how “intelligent cause” works, by arguing against another theory entirely, where there are “evo” words galore to go in endless circles over. That explains why those who buy into, or get stuck in, the “debate” experience serious unexpected problems from what is being sold as a “theory”.
What I have for scientific models and theory (my name above hyperlinks to a page for them) was long ago rejected by BioComplexity. But that is expected from an organization where what is needed is only more switch, not the bait.
Gordon @ 4.
Hello Gordon, if you would indulge some questions?
In a thermodynamic system, what is the difference between configurational entropy and thermal entropy?
When entropy changes, what exactly is it that is changed?
Can’t thermodynamic entropy be stated in statistical terms [Boltzmann], and can’t that be stated in information terms [Shannon], and don’t many scientists accept that interpretation of entropy as being equally valid?
To put it another way, isn’t thermodynamic entropy just a special case of a broader principle?
niwrad @7:
The second law only requires that entropy increase in isolated systems; in open systems it’s entirely normal for entropy to decrease. The Earth has at least 3.3e14 W/K more entropy leaving than entering, so as far as the second law is concerned its entropy could be decreasing by up to 3.3e14 J/K per second (actually, somewhat more since that’s a lower bound).
Also, entropy can be thought of as a measure of disorder, but it’s quite different from what we normally think of as disorder. For instance: which is more disordered, a well-shuffled deck of cards, or a sorted deck that’s 1 degree warmer (and otherwise identical)? Most people would say the shuffled deck is more disordered, but the warmer deck has the higher entropy.
(In more detail: shuffling a deck of 52 cards increases its entropy by k_B*ln(52!) = 2.16e-21 J/K, while heating a 100g deck with a specific heat of 0.2 cal/(g·K) from 300K (= 26.85 Celsius = 80.33 Fahrenheit) to 301K (= 27.85 Celsius = 82.13 Fahrenheit) would increase its entropy by 0.28 J/K. In this example, the entropy change due to the temperature difference is over 100 quintillion times bigger than the difference due to shuffling.)
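If anyone wants to check that arithmetic, here’s a quick sketch (the cal/(g·K) unit for the specific heat is what makes the numbers come out as quoted):

```python
import math

k_B = 1.380649e-23                 # J/K
S_shuffle = k_B * math.lgamma(53)  # k_B * ln(52!); lgamma(n+1) = ln(n!)

C = 100 * 0.2 * 4.184              # heat capacity: 100 g * 0.2 cal/(g*K), in J/K
S_heat = C * math.log(301 / 300)   # integral of C*dT/T from 300 K to 301 K

print(f"shuffling:   {S_shuffle:.2e} J/K")       # ~2.16e-21 J/K
print(f"heating 1 K: {S_heat:.2f} J/K")          # ~0.28 J/K
print(f"ratio:       {S_heat / S_shuffle:.1e}")  # ~1e20
```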
The “increasing probability” versions of the second law only apply to systems with equilibrium boundary conditions. The Earth does not have equilibrium boundary conditions (it’s in thermal contact with both the sun’s photosphere at a temperature of 6000K, and the microwave background at a temp of 3K). Without equilibrium boundary conditions, you can’t even define the relevant probability distribution, much less say that the system will move to states of higher and higher probability under it.
(Again, more detail: consider a system that only interacts with its surroundings by exchanging heat at a constant temperature T. If it were fluctuating around equilibrium, the probability that it’ll be in a macrostate with energy E and entropy S is proportional to e^((S-E/T)/k_B) (this is a form of the Boltzmann distribution). If it starts in a nonequilibrium state, it’ll move through a sequence of states with monotonically nondecreasing probability under this distribution. But this probability distribution depends on the temperature T; without a single well-defined temperature, the distribution becomes undefined.)
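To make the temperature dependence concrete, here’s a toy comparison of two made-up macrostates under that distribution. All numbers are pure illustration, with entropies in units of k_B and energies in units of k_B times kelvin, so the exponents stay printable:

```python
def log_weight(S, E, T):
    # (S - E/T)/k_B, with S in k_B units and E in k_B*kelvin units
    return S - E / T

S_ord, E_ord = 10.0, 50.0    # "ordered" macrostate: low entropy, low energy
S_dis, E_dis = 60.0, 200.0   # "disordered" macrostate: high entropy, high energy

for T in (1.0, 5.0, 100.0):
    diff = log_weight(S_dis, E_dis, T) - log_weight(S_ord, E_ord, T)
    print(f"T = {T:5.1f}: log-odds {diff:+7.1f} -> "
          f"{'disordered' if diff > 0 else 'ordered'} favored")
```

At low T the low-energy “ordered” state wins (compare the propane condensing), at high T the high-entropy state wins; and with two boundary temperatures at once there’s no single T to plug in, which is exactly the problem with Earth.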
If you’re going to reason about the second law’s implications, you really need to use a form of the second law that applies to the situation you’re considering. But if you do that, you’ll find there’s no conflict between the second law and evolution.
BTW, as the deck-of-cards example shows, thermodynamic entropy isn’t only about heat, but it is mostly about heat.
From another forum a long time ago – as far as I can tell, the criticism is still valid, and it renders Sewell’s ramblings rather irrelevant:
Professor Sewell propagates several incorrect notions, but one in particular is egregious, and has the happy property that the mistake can be seen (and corrected) on one’s own kitchen countertop. Specifically, Sewell states: “It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable (more random) states. Not only will carbon and temperature distributions become more disordered (more uniform), but the performance of all electronic devices will deteriorate, not improve. Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it.”
Anyone reading this can put, in a modest cruet, some salad oil and some water. The cruet can be capped and the mixture shaken vigorously. Obviously, what results is a highly disorganized mixture, as the tiny globules of oil are dispersed in the water. Now, if one were to take Sewell seriously, one would expect that the disorganized mess in the capped cruet (an isolated system) would never, ever become anything other than an even more disorganized mess. But, again, anyone reading this knows that, if one were to set the cruet aside, and do absolutely nothing, the oil would spontaneously aggregate and separate from the water, and in fact a highly-ordered, perfectly-separated two-phase system would come about entirely on its own accord.
By now, many readers must be wondering “is it so easy to defy the Second Law of Thermodynamics that we can do so in our kitchens?” The answer, as a chemist would tell us, is NO. The remarkable ordering that occurs in our cruet is not a defiance of the Second Law, but rather an obedience of the Law. Without going into detail, the reality of this is that the relentless drive to increasing entropy plays out at the microscopic scale to cause oil and water to separate, in effect to produce dramatic and spontaneous macroscopic ordering. Similar processes are at work inside living cells, and are largely responsible for the degree of order and organization that we see in cells. Put another way, this spontaneous assumption of macroscopic order is not a defiance of the Second Law, but an inevitable consequence of the Law.
When it comes to evolution, similar principles (if based on more extensive chemistries) apply. There is no “thermodynamic failure”. A perspective (such as Sewell’s) that so completely ignores basic chemical principles that it predicts that oil and water will not spontaneously separate will miss this simple truth.
Arthur Hunt #15
Your extrapolation from oil-aggregation-and-separation-from-water-in-a-cruet to evolution is in perfect evolutionist style, and in fact it doesn’t work. You simply cannot equate a phenomenon due to standard chemical principles with the organization of a cell, a cybernetic factory where countless information-processing nano-machines work together on the advanced metabolic tasks necessary to life.
Yes, your spontaneous oil aggregation and separation doesn’t defy the 2nd law. But sparse molecules spontaneously self-organizing into a living cell would, as would a tornado spontaneously constructing a 747. This is what Granville claims, and I perfectly agree with him that it is eminently relevant to the impossibility of evolution.
Gordon Davisson #14
I appreciate that you admit that “thermodynamic entropy isn’t only about heat” and that “entropy can be thought of as a measure of disorder”, albeit with qualifications.
In the deck-of-cards example the comparison of the two entropy values (thermal and configurational) is misleading. I explained why here:
http://www.uncommondescent.com.....ng-energy/
What is sure is that, in our world, castles of cards don’t arise spontaneously. You can call this phenomenon entropic or not; the fact remains that the castle’s order is neither probable nor spontaneous. Biological organization in turn is far more improbable than card castles and implies hierarchies of formalisms far beyond patterns of cards. Ergo, if card castles need a constructor, all the more does bio-organization.
Gordon Davisson #14
On how equivocal the use of Boltzmann’s constant is in some contexts, see also:
http://www.uncommondescent.com.....evolution/
mung @13:
Sure, I’ll take a stab at them. I think the easiest way to explain the difference between configurational and thermal entropy would be with some examples. A warning, though: I’m going to give a simplified overview to get the basic idea across, without worrying about some of the complications (like quantum mechanics) that come up if you’re trying to do this properly.
Consider an ideal gas. This is an approximation of how real gasses work that ignores both the size of the gas molecules (i.e. they’re too tiny to matter) and the fact that molecules can interact without touching (i.e. if they’re spread out far enough apart, the forces between molecules are too weak to matter). That means that each molecule of the gas bounces around pretty independently of all the other molecules.
The Boltzmann formula for entropy (which is good enough for what we’re doing here) is S = k_B * ln(w), where k_B is Boltzmann’s constant and w is the number of possible microscopically distinct states the system might be in. For our ideal gas, that means how many possible ways could the gas molecules be scattered around (i.e. how many places each molecule might be) and how many possible ways might they be moving (i.e. how many different speeds & directions each molecule might have). In the ideal gas case, the molecules’ positions and motions are independent, so the total number of states is w = (# of possible sets of positions) * (# of possible sets of motions). Since the logarithm of a product is the sum of the logarithms, that means S = k_B * ln(# of possible sets of positions) + k_B * ln(# of possible sets of motions). We call the first part of that the configurational entropy and the second part the thermal entropy.
Now, let’s look at how these two entropy components behave. The configurational entropy depends on how large a volume the gas molecules are spread over, and how uniformly they’re spread across that volume. Suppose, for example, we let the gas expand to twice its original volume. After the expansion, each molecule could be in twice as many positions; if there are N molecules, that means the total number of sets of positions goes up by a factor of 2^N, which means the configurational entropy increases by k_B * ln(2^N) = N * k_B * ln(2). Similarly, compressing the gas would decrease its configurational entropy.
(Note: I’m ducking the question of how far apart two possible positions have to be to count as “different” — that takes quantum mechanics and gets messy.)
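Plugging numbers into that doubling example, for one mole of gas (just the arithmetic above, with standard constants):

```python
import math

k_B = 1.380649e-23    # J/K, Boltzmann's constant
N_A = 6.02214076e23   # molecules per mole

dS = N_A * k_B * math.log(2)   # N * k_B * ln(2) for a volume doubling
print(f"configurational entropy gain: {dS:.2f} J/K per mole")  # ~5.76 J/K
```

N_A * k_B is just the gas constant R, so this is the familiar R*ln(2).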
Note that the configurational entropy depends (for a given amount of gas) only on the volume of the gas and how uniformly the molecules are spread over that volume. Most importantly, it does not depend on the gas’s energy or temperature.
The thermal component of the entropy, on the other hand, turns out to depend only on the gas’s energy (which is directly related to its temperature). Since moving molecules have kinetic energy (& the faster they’re moving the more energy they have), a limited supply of energy means limited possibilities for motion. If the gas has no kinetic energy at all (i.e. at a temperature of absolute zero), there can be no motion, and so there’s only one possible state of motion (complete stasis!), and since ln(1) = 0 the thermal entropy will come out to zero. If we add energy, we increase the number of ways that energy can be distributed among the gas molecules and what range of speeds they might have, and hence the thermal entropy goes up accordingly.
(Note: again I’m ducking the question of how different two states of motion have to be to count as “different”.)
Does that help? If you’re with me so far, let me try to apply the same principle to a solid. In this case, we can similarly define the configurational entropy in terms of the number of ways that the atoms that make up the solid can be arranged. For a perfect crystal, each atom’s position will be fully determined by the crystal structure, so there’s only one possible arrangement and the configurational entropy comes out to zero. For an imperfect crystal, the deviations from ideal crystal structure allow more possible arrangements, and hence increase the configurational entropy. For amorphous solids, like glass, it’ll be even higher (though still not nearly as high as for a gas, since each atom is still mostly constrained by its neighbors).
The thermal entropy also behaves a little like it does in a gas. Again, at a temperature of absolute zero there’ll be no motion and no thermal entropy. As we add thermal energy, the atoms can’t move much, but they can vibrate in place. More thermal energy -> more vibration -> more possible ways they can be vibrating at any specific moment -> more thermal entropy.
But it’s actually messier than that. Since the atoms’ vibrations cause them to move away from what their positions would be in a completely frozen crystal, the vibrations are partly configurational as well as partly thermal. Similarly, since the atoms interact with their neighbors, different positions (=configurations) will have different energies, so the atomic arrangement is partly thermal as well as partly configurational. Thus, what I’m calling the thermal entropy is partly configurational, and what I’m calling the configurational entropy is actually sort of thermal. The dividing line between them is much blurrier than it was for the ideal gas (and actually if you look closer, it’s even blurrier than I’m describing here).
The division of entropy into configurational and thermal components (and sometimes others as well) is really just an approximation. Sometimes it’s a very good approximation, and can be very useful; other times it’s way off and will just confuse you. In order to know when it’ll work well and when it won’t you need to know a fair bit about the state structure of the system you’re talking about. If you don’t understand that very well… it’s probably safest to avoid it. Stick to thinking in terms of the total (thermodynamic) entropy.
Entropy isn’t really a thing itself; it’s a property of a system and the state that system is in. When the system’s state changes (due to heating, cooling, compression, chemical reaction, whatever) the system’s entropy will change as a result of that.
Put another way: a change in entropy is an effect, not a cause.
Pretty much, although I’ll make some minor corrections: technically, thermodynamics and statistical mechanics use very different definitions of entropy, but if you do it right you’ll get the same numbers either way. Basically, they’re both the same fundamental quantity, just approached from two different directions.
The stat mech definitions are also very closely related to those of information theory. Specifically, the Gibbs entropy formula of stat mech differs from Shannon’s formula for information entropy only in the choice of units. My take on this is that the stat mech entropy is a special case of the Shannon entropy: it’s proportional to the amount of information needed to specify the exact state (microstate) of the system given its macroscopic state (macrostate).
Note that this doesn’t mean that the second law applies to Shannon entropy. In information theory, it’s entirely normal for entropy to decrease spontaneously. The second law only applies in the specific case where you’re talking about the entropy of a physical system’s microstate conditioned on its macrostate; if you’re talking about the Shannon entropy of something else, the law does not apply.
There does appear to be another connection, though: when information is stored in the state of a physical system (e.g. in a memory chip or hard disk), the Shannon entropy of that information contributes to the thermodynamic entropy of that system. But the contribution (1 bit of Shannon entropy -> k_B * ln(2) = 9.57e-24 J/K of thermo entropy) is pretty much always too small to matter.
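To give a feel for just how small: here’s that conversion applied to a hypothetical 10 TB drive full of random data (the drive size is an assumed example):

```python
import math

k_B = 1.380649e-23                  # J/K
bits = 8 * 10e12                    # hypothetical 10 TB of maximally random data
S_info = bits * k_B * math.log(2)   # thermodynamic contribution of that information

print(f"{S_info:.1e} J/K")          # ~7.7e-10 J/K
# For comparison, heating 1 g of water by 1 K near room temperature adds
# about 4.18/300 ~ 0.014 J/K, tens of millions of times more.
```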
If you want more details on this, I’ll refer you to an essay I posted to talk.origins quite a while ago.
Granville and niwraD, how does a human egg grow into a baby? I would think that turning a single-celled egg into a multi-trillion-celled baby would constitute a dramatic reduction in entropy. Is the second law being violated here?
Ditto: how does a baby grow into an adult without violating the second law?
Niwrad@16,
You seem to be saying that hydrophobic interactions do not apply to the assembly of macromolecular complexes in the cell. You could not be more incorrect. The fact is, the same chemical principles that apply to the spontaneous macroscopic ordering I describe also apply to the assembly of large complexes in the cell.
MatSpirit@20, all growth and development is accompanied by increases, large increases, in entropy. I realize that Sewell may want to convey the opposite notion, but he is wrong. Quite completely wrong.
niwrad @9:
In this respect the ID argument from the second law is fundamentally flawed, because the second law does not distinguish intelligent vs. unintelligent forces. As far as the second law is concerned, intelligent agents are subject to exactly the same rule as everything else.
In the late 19th century, James Clerk Maxwell proposed a possible exception to this: an intelligent agent (which became known as “Maxwell’s demon”) which appeared to be able to decrease entropy by sorting individual molecules (e.g. sorting them into fast-moving/hot vs. slow-moving/cool). Since then, more detailed analysis has shown that in fact the demon cannot decrease overall entropy, because the connection between information and thermodynamic entropy (specifically Landauer’s principle) implies that the demon must produce at least as much entropy as it removes.
Therefore, if there were an actual thermodynamic problem with evolution, adding intelligence would not solve it.
MatSpirit #20
Obviously no violation. Embryo development is an exquisite work of intelligent design programming. In general the work of intelligence never violates the 2nd law. When you write a post, compose music, or program a computer, do you violate the 2nd law? Certainly not. The 2nd law expresses a natural tendency toward probable states. But per se it doesn’t forbid the presence of a power able to go toward improbable states. In a conductor the laws of physics say the current is zero. But if you close the circuit and introduce a current generator, the current flows, without violating any law.
Gordon Davisson #22
As I said above to MatSpirit, intelligence introduces into nature a factor able to organize. Per se, nature is not able to self-organize. Nature needs something higher that overarches it and its laws. This somehow transcendent factor is intelligence.
Nature tends to spontaneous disorganization (2nd law). Whatever you see organized in nature or elsewhere is the work of intelligent design (front-loaded or at run-time).
So it is wrong to say “if there were an actual thermodynamic problem with evolution, adding intelligence would not solve it” because intelligence is exactly what it takes to solve the problems of lack of organization in all fields.
Again, it is misleading to say “intelligent agents are subject to exactly the same rule as everything else”. In fact that presupposes materialism. Intelligent beings are spirit, soul, body. It is only as physical bodies that intelligent beings are subject to the physical laws. Pure intelligence, which is spirit, transcends matter. For this reason it is able to organize matter, which per se goes toward disorganization.
So we have finally arrived at the point. You are a materialist and as such you are also an evolutionist. I am a non-materialist and as such I am also a design supporter. All square.
Arthur Hunt #21
I don’t deny hydrophobic interactions in biology, mind you. I simply say that they cannot account for the cybernetics of cells and organisms. Simple chemistry is not sufficient to explain cells and organisms, because they embody all fields of engineering and technology (as we know them so far), plus countless other functional hierarchies of advanced formalisms that we are as yet unable even to imagine. If we were able to, our robotics would be far more advanced than it is.
Hello, Granville,
I have been a fan of yours for a long time.
You remarked
It seems to me, and has for a long time, that there exists a more general, fundamental law, and the 2nd law is merely an illustration of how that more fundamental law applies to thermodynamics.
Why don’t you use your expertise to precisely define that more general and fundamental law? “Sewell’s Law” has a nice ring to it!
niwrad, your argument seems to be “the best and brightest cannot design life, thus life was designed”. I will confess that this sort of logic escapes me.
The fact is, scientists have been looking into the inner workings of living things for more than a century, and at every turn, “simple chemistry*” is all one sees.
(* – with apologies to the host of students who, over the years, encountered organic chemistry in college and realized that, say, French literature may be a better career option than medical school.)
Arthur Hunt #27
Also, if you look into a computer, all you see is plastic, silicon, copper, tin, fiber-glass… But if you ask an informatician, he would say that on such stuff several layers of formalism are implemented (hardware circuitry, registers and logic, Boolean algebra, BIOS instructions, operating system, many communication protocol layers, application layer, etc.).
I bet that if an informatician studied the cell he would recognize many of the above paradigms implemented on top of the chemistry. The problem, yesterday and today, is that usually informaticians don’t study biology, and biologists don’t study engineering in general and informatics in particular. Although, with the rise of the ID movement and the consequent increase of interest in the ID/evo debate, fortunately the situation is slowly changing.
When you receive a deck of cards from the factory it is typically ordered by suit and rank. There’s only one of two things that can happen. It can stay the same. It can become “less ordered.”
If you have a perfectly shuffled deck, there’s only one of two things that can happen. It can stay the same. It can become “more ordered.”
Neither would entail a violation of the second law.
I will confess that this sort of logic escapes me.
Obviously not an isolated system.
The fact is, scientists have been looking into the inner workings of living things for more than a century, and at every turn, “simple chemistry*” is all one sees.
This is one of those things that is not even false.
Read The Eighth Day of Creation.
This is certainly true, but, if you believe Sewell claims that natural forces cannot produce anything other than uniform, random distributions of matter, you are misunderstanding his point.
He is clear that the claim “In an open system, thermal order (or X-order, where X is any diffusing component) cannot increase faster than it is imported through the boundary” is only true when “assuming nothing is going on but diffusion” (see footnote 4 of his Bio-complexity paper).
From Chemistry by Zumdahl and Zumdahl:
Physics can and does restrict the states available to systems. For example, if you flip 100 fair coins, it is very improbable that all will land on heads. From University Physics by Young and Freedman:
However, if the coins all had little magnets with one pole on the heads side and the other pole on the tails side, and a magnet were placed under the table, with the magnetic field entering the system, all heads would no longer be extremely improbable.
This is why Sewell says:
In both my example and yours, that is exactly what is happening: something is entering (or leaving) that makes what happens not extremely improbable (i.e., the magnetic field and the gravitational field, respectively). The reason these happen is that the physics restricts the set of states available, not because some inequality is satisfied between the “entropy” of the magnetic or gravitational field entering (however that would be calculated) and the “heads-tails” ordering of the coins. We wouldn’t say, if the entropy of the magnetic field is x, we can only expect configurations with 90% heads, but if it is y, we can expect 100% heads. Furthermore, while the entry of the magnetic field greatly increases the probability of the microstates of “all heads” or “mostly heads” for the coins with magnets, it does not increase the probability of other types of clearly unrelated microstates, such as “spells out a lengthy passage from Shakespeare in binary”.
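To put rough numbers on the 100-coin example (a quick sketch):

```python
from math import comb, log2

# With fair, non-magnetic coins, every individual sequence of 100 flips has
# probability 2^-100; what differs between macrostates is how many sequences
# ("microstates") each one contains.
n = 100
total = 2 ** n
p_all_heads = 1 / total
p_near_half = sum(comb(n, k) for k in range(45, 56)) / total

print(f"P(all heads)   = {p_all_heads:.1e}")   # ~7.9e-31
print(f"P(45-55 heads) = {p_near_half:.3f}")   # ~0.73
print(f"microstates: 2^{log2(comb(n, 50)):.1f} at 50 heads vs 2^0 at 100 heads")
```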
So, it is still possible to argue that, thanks to Darwinian evolution, etc., the physics actually does restrict the set of possible states into which the atoms on the originally barren Earth could rearrange themselves over several billion years, such that a collection of human brains, iPhones, jet planes, and encyclopedias is not extremely improbable. That is why Sewell says things like
Some people are happy to make that argument, and see no reason to resort to the “compensation” entropy computations as made by many textbooks and people like Asimov, Styer, and Bunn. In that case, their position is actually not in conflict with Sewell’s argument.
You have given a nice description of thermodynamic entropy in the context of energy, which is what most practical applications of the Second Law involve. Indeed, energy can be converted into different forms, and the Second Law only requires that the total thermodynamic (energy) entropy increases in the universe. Thus, a system can become more ordered when accompanied by a sufficient release of heat so as to increase the overall (energy) entropy of the universe. However, this increase in “order” refers to a very specific type of order, i.e., what is defined by the Third Law of Thermodynamics:
So, when water freezes and releases heat from the system, its molecules can become more ordered in that they more closely approach a pure, perfect crystalline substance. However, that doesn’t mean that, with sufficient heat release, the molecules can more closely approach a pure, perfect Apple iPhone.
Isaac Asimov wrote:
Now, if, when Asimov says that the human brain “as far as we know, is the most complex and orderly arrangement of matter in the universe”, he means that the human brain, as far as we know, most closely approximates a pure, perfect crystalline substance of all matter in the universe, then I agree your argument and entropy computations are completely valid.
However, it is pretty clear that is not what Asimov meant, and not what the debate is about. Clearly, there are other types of order, not directly related to energy, such as “heads-tails” coin entropy as illustrated in the University Physics quote, or the type of order in the human brain that so impressed Asimov, that can be defined. These types of order are not really true thermodynamic quantities, and just because there is a relationship between entropies of different forms of energy does not mean that thermodynamic (energy) entropies can also be interconverted to the probability of anything.
Now, if you want to say that, since these are not “true” thermodynamic quantities, then we are not really talking about the Second Law of Thermodynamics, then that is fine. However, it is clear that the same statistical principles that explain why the Second Law predicts against free compressions of gases are also applicable to understanding whether you are likely to flip all heads on 100 fair coins, or whether atoms on a barren planet are likely to arrange themselves into Apple iPhones. Again, there is always the caveat: unless the physics restrict the available states such that these configurations are not extremely improbable.
To be sure, for any process to take place (e.g., for the coins to flip at all), there has to be a net increase in the thermodynamic (energy) entropy of the universe. However, that is not the only consideration. All coin flips are equally likely from a purely energy perspective, but it is still extremely improbable to get all heads — again, unless something (like the magnetic field) enters that makes it not extremely improbable.
So are you saying that, as long as the Earth is receiving sufficient sunlight from the Sun, neither the Second Law nor the statistical principles behind it can say anything about which types of arrangements of playing cards are more likely?
And yet, Zumdahl and Zumdahl give exactly that example:
They don’t say, “well, unless sunspot activity is particularly favorable that day, in which case, you could totally get the original sequence”, because thermodynamic (energy) entropy and “card-order” entropy are not inter-convertible.
Welcome CS3.
Energy applied to matter without some sort of teleonomic mechanism to harness it causes disintegration, or an increase in entropy. Integration of matter that decreases entropy is always the result of energy applied to some sort of teleonomic mechanism:
How do such mechanisms come about mindlessly and accidentally?
Consider photosynthesis, which maintains atmospheric oxygen levels and supplies all of the organic compounds and most of the energy necessary for life on Earth.(2) How did the astonishingly complex mechanism referred to as photosynthesis itself get assembled?(3) It is easy to say, as Denis Alexander does in the comments section of his Big Questions Online article entitled “How are Christianity and Evolution Compatible?” that
But what drives matter to assemble itself into such an incredibly complex mechanism as photosynthesis in the first place? See The Mechanism of Photosynthesis(3). An excerpt:
(1) http://c.ymcdn.com/sites/netwo....._22078.pdf
(2) https://en.wikipedia.org/wiki/Photosynthesis
(3) http://www.harunyahya.com/en/B.....apter/4747
Harry:
How do such mechanisms come about by intelligent cause?
The floor is yours, please explain your scientific theory.
How do such mechanisms come about by intelligent cause?
You’ve never driven a car.
GaryGaulin @35,
That it can be determined that intelligent agency was a causal factor in a given phenomenon coming about is non-controversial when it comes to SETI, forensic pathology and archaeology. That it is controversial when it comes to the origin of life is due to the religious/philosophical bias of atheists who lack the relentless objectivity and religious/philosophical neutrality that genuine science requires. They have redefined science as “that which confirms atheism,” which has perverted modern science.
Objective science would admit that the most plausible explanation for the fine-tuning of the Universe and the ultra-sophisticated, digital information-based nanotechnology of life, the functional complexity of which is light years beyond anything modern science knows how to build from scratch, is intelligent agency. Relentlessly objective, religiously/philosophically neutral science does not have to explain the nature of that intelligent agent, which would be outside of the realm of its competence, but only must admit that currently the most plausible explanation for the Universe and life within it is intelligent agency.
Harry, explain the intelligent cause that created photosynthesis. A computer model would be preferable but explaining how to go about modeling the process would be a good start.
If you need help then click on my name above for links to my models and the theory of intelligent design that I represent.
niwrad #28 – An informatician will use analogy and metaphor to make sense of that which they do not understand. Basically, they would see what they want to see, what they can comprehend. These tools are limited, and absent some connection with chemistry, they fail in conveying true understanding of mechanism and origin.
For example, how many times would a computer programmer use the exact same command to execute a process OR to delete the command (and thereby destroy the code)? I submit that an informatician would be blind to such occurrences, even though living things are teeming with them. Any analogy that an informatician might use to describe or understand such processes would fall short.
GaryGaulin @ 38,
OK. But the explanation of the nature of that intelligent agent, as I said in my previous post, is outside the realm of science’s competence, so I can only provide a metaphysical explanation.
There is a being whose essence is “to be.” That being is the first cause, the prime mover, the primary and fundamental reality, and necessarily exists outside of time, space, matter and energy, as He brought them into being. For reasons understood completely only by that being, after He launched the Universe ex nihilo, He arranged some of it into what we refer to as photosynthesis.
harry:
And where were you taught that? It’s exactly what methodological naturalism teaches.
Arthur Hunt #39
When you, for example, write information into a read/write memory by means of an arbitrary code, and read back and decode that information to do some functional task, you are not doing an *analogy* of information processing, you are doing *real* information processing.
The cell, among other things, does exactly that. So when informaticians study the cell and recognize paradigms of informatics they do not see “what they want to see” — as you say — rather exactly what there is.
Evolutionists hate to admit that biology *is* chemistry-based cybernetics because that means organisms are designed, but the reality is so. You say it is all analogy, Dawkins says it is an illusion, but yours are only naive sleights of hand.
The only person who can read back what was written by code precisely would be the person who developed the code and wrote the program. If part of the code is temporal and probabilistic – as seen in biological systems – then even the person who wrote the code can’t decipher the outcome precisely.
Niwrad @ 23: “But per se it doesn’t forbid the presence of a power able to go toward improbable states. In a conductor the laws of physics say the current is zero. But if you close the circuit and introduce a current generator, the current flows, without violating any law.”
Just to see if we’re on the same page, do you agree that life doesn’t violate the Second Law because it’s powered by “food”, whether that food is sunlight, eating other organisms or “eating” chemicals like some micro organisms do?
If you do, you’re miles ahead of Dave Scott who claimed that he was violating the Second Law by typing a sentence.
Granville, do you also agree that life doesn’t violate the Second Law because it powers its actions by the food it eats?
GaryGaulin @ 41,
Methodological naturalism is just fine as an approach to science. For it to work well though, it has to include ALL known realities when it considers what might be the causal factors in a given phenomenon coming about. Intelligence is known to be a reality. Anybody who denies that is deficient in it. Sometimes the only plausible explanation for the emergence of a given phenomenon is intelligent agency, as in the emergence of technology, which, by definition, is the result of the application of knowledge for a purpose. This is why functionally complex technology never comes about mindlessly and accidentally. This fact is a huge clue to keep in mind when we consider the origin of the most functionally complex technology known to us, which is, of course, the digital information-based nanotechnology of life.
Mung: If you have a perfectly shuffled deck, there’s only one of two things that can happen. It can stay the same. It can become “more ordered.” Neither would entail a violation of the second law.
That’s right. In either case, overall entropy will increase. If a person sorts the cards, the person turns energy from food into the mechanism required to reorder the cards. This includes not just mechanical energy, but the energy required by the brain to make the necessary decisions.
CS3: They don’t say, “well, unless sunspot activity is particularly favorable that day
They do say that entropy is a measure of molecular randomness or disorder, so it’s clear that playing cards are an analogy.
harry: But what drives matter to assemble itself into such an incredibly complex mechanism as photosynthesis in the first place?
The mechanism was the result of a long period of evolution, with overall entropy increasing the entire time.
harry: That it can be determined that intelligent agency was a causal factor in a given phenomenon coming about is non-controversial when it comes to SETI, forensic pathology and archaeology.
SETI hasn’t claimed to have found “intelligent agency” from outer space. Forensic pathology and archaeology point to specific intelligent agents, not a nebulous and ill-defined agent.
niwrad: Evolutionists hate to admit that biology *is* chemistry-based cybernetics because that means organisms are designed, but the reality is so.
Well, no. If you define cybernetics in such as way as to encompass biological systems, that doesn’t mean they were designed. Definitions aren’t scientific arguments, nor do they determine the history of life.
Zachriel:
So you say yet cannot support.
Actually merely saying a human did it is a nebulous claim. But I digress. All three, SETI, forensics and archaeology look for signs of intentional agency involvement. Artifacts are not necessarily just from humans.
It is a good thing that Zachriel isn’t an investigator.
Mat Spirit:
It is the origin of life via purely stochastic processes that violates the second law.
While this may not have much to do with this discussion, my wife and I were watching a video of an “exploration” of a shopping mall abandoned in 2013. It was stunning to see how much deterioration had occurred in this facility in just 3 years. To me, it serves as further confirmation of the futility of thought that would permit one to believe in the “power” of evolution to take lifeless chemicals and turn them into a human being.
Couple of things:
1. Granville notes that many evolutionists have argued as follows: “but the second law only says order cannot increase in an isolated system, and the Earth receives energy from the sun, so computers arising from dust here does not violate the second law.”
Granville is correct that this is a common argument.
It is also an incredibly absurd, nonsensical and completely misinformed argument. It is an utter red herring. The determination of whether a system is closed or open is essentially arbitrary and is nothing more than a definitional semantic game in the case of the Earth. It is a terrible argument. No thoughtful supporter of evolution should ever make the argument. It belies a total lack of understanding of the issues relevant to the formation of living organisms.
2. Some people have claimed, with a snort and a tsk-tsk in their voice, that Granville has claimed that some things, like living systems, violate the Second Law. That is an extremely unfair and twisted reading of what he is saying. It is another red herring and such a debating tactic should not be countenanced by anyone of intellectual integrity.
Obviously Granville is not saying that anything violates the Second Law. That is his whole point. Rather, he is claiming that the normal trajectory of the Second Law drives against, not toward, the formation of something like living organisms — unless there is a countervailing factor, such as intelligence. This is a very simple point. It is observed by every one of us on a daily basis. There are billions upon billions of real-world examples. It is so obvious that it scarcely bears mentioning.
Except for the unfortunate fact that the point seems lost on materialists who imagine that they have discovered some other force that can operate as an intelligence substitute: either a specific claim about that favorite non-force, “natural selection”; or vague claims about things like the Earth being an “open system”.
3. It is true that Granville is pointing to a very clear “law” about how things work in nature. His specific examples, as well as his overall broader point seem well made. It is unfortunate that the broader point is lost on some.
The practical problem seems to be that Granville has chosen to frame his argument in the context of the Second Law of Thermodynamics. Sloppy readers take note: he is not arguing that the Second Law in its most classical, thermal-only, incarnation is the primary issue (though it could be relevant in some cases). Rather, he is arguing that either (a) the Second Law should be applied more broadly, and he has provided over the years some rational reasons for taking that approach, as well as citations from many authors that do so; or (b) something like the Second Law is operational in nature that applies to functionally-organized systems, similar to how the Second Law applies to thermal systems.
We can dispute whether Granville has made his case as to (a), and lots of ink has been spilled on that front. However, (b) seems to be a relatively well-supported position, even if it lacks an agreed-upon definitional statement or lacks broad support among biologists.
I have not personally taken enough time with Granville’s writings to have my mind made up as to whether (a) is a reasonable approach. My sense is that part of the reason his writings have been something of a lightning rod has to do with his insistence that (a) is a solid position. I think he has softened this a bit over the years (evident even in the OP), toward something closer to (b).
We can have lengthy debates about (a). These debates might even be interesting and in some cases even important. And one might reasonably take the position that Granville has bitten off more than he can chew with (a) or that he won’t win that particular point.
Be that as it may, I am personally glad that he has raised the issues. Whether in a formulation approximating (b) or otherwise, he has continued to raise an important larger point. One that has not been adequately addressed by proponents of traditional evolutionary theory.
Not even close.
MatSpirit, see my #2 @50.
No-one, certainly not Granville, is claiming that living organisms violate the Second Law. Nothing, as far as we know, violates the Second Law.
The issue needs to be framed more carefully than that.
—–
P.S. The question is not whether energy is coming in or going out, whether the system is open or closed, and so on. Those aspects have almost nothing to do with it.
I believe that one likely source of confusion is that some formulations of the Second Law apply to every type of order, while others apply only to energy entropy.
This formulation, from Chemistry by Zumdahl and Zumdahl, applies to every type of order, including the ordering of cards, the result of coin flips, and the type of order that impressed Asimov about the human brain:
However, this statement, from General Chemistry by Whitten, Davis, and Peck, applies only to energy entropy:
If you want, you could refer to Statement 1 as “the general principle behind the Second Law”, and Statement 2 as “the Second Law”. The difference is that the second statement depends not only on the first statement, but also on the First Law of Thermodynamics, i.e., that energy is neither created nor destroyed, but can change forms.
We know the Second Law predicts against the “free compression” of a gas. However, we also know that a gas can be compressed. Doing so requires work, and, since work is the transfer of energy, one can do the calculations to show that the overall (energy) entropy of the universe has increased during the forced compression of the gas. Thus, both statements of the Second Law are satisfied.
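To make that concrete, here is a minimal sketch in Python (my own illustrative numbers, not from the comment above: one mole of ideal gas, isothermally and reversibly compressed to half its volume at 300 K):

    import math

    R = 8.314        # gas constant, J/(mol*K)
    n = 1.0          # moles of gas (illustrative)
    T = 300.0        # kelvin (illustrative)
    ratio = 0.5      # V_final / V_initial: compression halves the volume

    # Entropy change of the gas: dS = n*R*ln(Vf/Vi), negative for compression
    dS_gas = n * R * math.log(ratio)

    # The work done on the gas leaves as heat Q = n*R*T*ln(Vi/Vf) > 0,
    # raising the surroundings' entropy by Q/T.
    dS_surroundings = n * R * math.log(1 / ratio)

    print(round(dS_gas, 2))           # -5.76 J/K: the gas is more ordered
    print(round(dS_surroundings, 2))  # +5.76 J/K: exported to the surroundings
    # Net change is zero in this ideal reversible limit; any real,
    # irreversible compression makes the total strictly positive.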
In my coins example, with regards to energy, both statements also are satisfied. Any flip of the coins will increase the energy entropy of the universe, regardless of the resulting heads-tails configuration. With regards to the heads-tails configuration order, the first statement also holds. When all states are available to the fair coins, the system will tend towards the most probable state, i.e., one with roughly equal numbers of heads and tails. When the magnet is applied, it will still tend towards the most probable state available to it, but the physics dictates that all heads is the only state available to it.
However, in the coins example, the second statement is not applicable with regards to the heads-tails configuration order, because that is not a form of energy. Just because my coins have become more “ordered” with regards to heads-tails configuration does not mean that some other coins in some other part of the universe have to become even more disordered with regards to heads-tails configuration. It also doesn’t mean that this decrease in heads-tails “entropy” has to be “compensated for” by some increase in some other type of entropy, such as thermal entropy. Only the first statement, that nature is tending towards the most probable state available to it, is applicable.
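A quick simulation shows the “most probable state” behavior of the fair coins (a sketch of my own, not part of the example above; no magnet here):

    import random
    from collections import Counter

    N_COINS, N_TRIALS = 100, 100_000

    tallies = Counter(
        sum(random.randint(0, 1) for _ in range(N_COINS))
        for _ in range(N_TRIALS)
    )

    # Outcomes cluster tightly around 50 heads, the most probable macrostate.
    # All-heads has probability 2**-100 (~8e-31) and will never show up.
    for heads in sorted(tallies):
        print(heads, tallies[heads])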
Therefore, it is nonsense to try to “compensate” for iPhones with thermal entropy. Doing so is trying to apply a formulation of the Second Law that is relevant only for energy to a type of order that is not a form of energy. If the physics restricts the states available to the atoms such that iPhones are not improbable, then we may well get iPhones. If not, then we are unlikely to get them, regardless of how much thermal entropy is entering or leaving the Earth.
Thanks!
Eric Anderson @ 50:
I think this is a huge part of the communications problem. When I talk about the second law of thermodynamics, I’m talking about the actual second law of thermodynamics, not something else that’s a bit like it. Pretty much everyone on the ID side of this seems to be talking about something they think is a law and is a bit like the second law of thermodynamics. For instance, you wrote:
If you’re talking about the actual second law of thermodynamics, you are absolutely dead flat wrong here. The actual second law does not forbid entropy decreases in open systems. Such decreases are completely normal and unremarkable. The open vs. closed distinction is about as far from “essentially arbitrary” and “a definitional semantic game” as you can get. Understanding boundary conditions and how they change the implications of the second law is incredibly basic to applying it correctly.
Other important differences include:
– The actual second law of thermodynamics is about heat, energy, entropy, and things related to them. The “things related to them” actually means it’s related to a great many things (including Shannon information), but not the sorts of things ID is concerned with (like organization).
– As I told niwrad, the actual second law doesn’t make any special allowances for intelligent agents or their designs. Intelligent agents have been trying to design perpetual motion machines for longer than the second law itself has been around, and they’ve uniformly failed.
For example, organisms need to take in free energy from their environment to survive. Free energy is what powers pretty much all of the interesting things that living organisms do — metabolism, growth, reproduction, and yes evolution — and without it they will run down and die. Plants mostly get free energy from sunlight, animals get it from stealing other organisms’ stores of free energy (by eating them), but all organisms need it in some form or other.
– The actual second law is backed up by over a century of testing, research, etc. Your second-law-wannabes, on the other hand, are not accepted outside of the ID community.
Put it this way: the second law says that when things are left to themselves, they go to pot. But the senses of “left to themselves” and “go to pot” that the second law uses are not the senses that are relevant to the ID argument.
So, please be clear on when you’re talking about the actual second law, or something else that just somewhat resembles it!
CS3 @32 and 52:
Excellent discussion. Well said.
CS3 @ 52:
Actually, the second statement (i.e. the actual second law of thermodynamics) does appear to be relevant to the heads-tails configuration.
In statistical mechanics, the entropy of a system is defined as S = k_B * ln(w), where k_B is Boltzmann’s constant and w is the number of states the system might be in. There are 2^100 possible “random” arrangements of 100 coins and only 1 all-heads configuration. Each of those configurations corresponds to a huge number of microscopically distinct states (exact arrangements of atoms, vibration states, etc), but if we assume that each heads-tails arrangement corresponds to the same number of microstates (call it w1), then the ratio of total states will be 2^100. The difference in total entropy between the random and all-heads configurations will then be ΔS = k_B * ln(w1 * 2^100) − k_B * ln(w1) = k_B * ln(2^100) ≈ 9.57e-22 Joules/Kelvin.
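That final figure is easy to verify (a one-line check, using the standard value of Boltzmann’s constant):

    import math

    k_B = 1.380649e-23                 # Boltzmann's constant, J/K
    print(k_B * math.log(2**100))      # ~9.57e-22 J/K, as stated above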
Some of these states (the ones that differ in arrangement of heads vs tails) might have the same arrangement of energy; this does not matter, since the entropy calculation includes all distinct states, not just those that differ in energy.
The second law — the actual second law — actually does imply that converting the random to all-heads arrangements must be compensated by an equal-or-larger increase of entropy in some other form, such as heat. This is the basis of Landauer’s principle. Charles H Bennett puts it this way:
You may not think of the arrangement of coins as bearing information, but the definition of information that’s relevant here is Claude Shannon’s, and as he put it:
The arrangement of heads and tails does not have meaning, but that’s irrelevant; it’s selected (randomly) from a set, and thus has information in the Shannon sense.
So the second law — the real second law — does apply just fine to the arrangement of coins. And it places restrictions on what’s needed to straighten them out. But the restriction has nothing to do with intelligence, and can be fully satisfied by converting a little free energy into heat.
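To put a number on “a little free energy”: Landauer’s bound sets the minimum heat dissipated when a bit is reset. A sketch assuming room temperature (300 K) and one bit per coin:

    import math

    k_B = 1.380649e-23     # Boltzmann's constant, J/K
    T = 300.0              # ambient temperature, K (assumed)
    bits = 100             # one bit per coin

    # Minimum dissipation to reset all 100 coins to heads: bits * k_B * T * ln 2
    print(bits * k_B * T * math.log(2))   # ~2.9e-19 J, a vanishingly small cost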
You may also think there’s a requirement related to intelligence, but if so that’s separate from the actual second-law limit.
Gordon Davisson @53:
Yes, yes. There are situations in which defining an open vs. closed system is important to our analysis of the particular system in question. Yet we would do well to consider how the system in question is defined. The definition of our system is not “arbitrary” in the colloquial sense of that word, meaning willy-nilly or just throwing a dart at the board. It is, however, very much arbitrary in a technical sense of that word, meaning where we choose to draw the boundary.
In the context of the current discussion, the statement “the Earth is an open system” is pointless. And, unfortunately, it is also a semantic game. Watch carefully. I can simply reply: “The Earth-Sun is the system in question. Now explain how life arose on the Earth.” Or I can say that we need to consider the entire Solar System, or the galaxy, or the universe. It is very much a semantic game.
Much more importantly, as I also explained, the whole issue of open-vs-closed in the context of the current discussion is irrelevant. It is a red herring. That the Earth receives energy from the Sun isn’t something that is even at issue. As CS3 explained in his last paragraph @52, the so-called “compensation argument” is missing the point entirely. If anyone thinks the issue on the table is whether the Second Law forbids entropy decreases in open systems, then they haven’t a clue what the issue on the table is in the context of origins.
Nobody, not a single person, questions whether the Earth receives energy from the Sun. Yet this recognition over the course of millennia has provided exactly zero answer to the fundamental questions of how life originated and developed to its current state.
Anyone who puts forward the juvenile and pedestrian observation that the Earth receives energy from the Sun and therefore “compensates” for a decrease in entropy at the Earth, as though this were some kind of explanation for the fundamental problems of origins, simply has no idea what they are talking about.
Again, if we want to stomp our feet and loudly protest that the Second Law only applies to thermal considerations, fine. I don’t have a particular problem with that. As I said, I think much of the pushback Granville has experienced is precisely because he has formulated his argument with reference to the Second Law.
At the same time:
It is also the case that Granville has been clear that he is not primarily interested in the thermal aspects (although, ironically, the Second Law, even in the classic thermal sense, may end up having much to say about things at the origin of life and biochemical levels, but that is a topic for another time).
Rather, Granville is saying that there is a broader principle at work. Many authors and researchers over the years have talked about the Second Law in a much broader sense. Granville can neither take credit nor blame for thinking this up; it is not some ID article of faith and your assertion that ID proponents as a rule take this approach is incorrect and inappropriate. We could go on a campaign to make everyone stop talking about the Second Law unless they are referring to thermal aspects in the most classical sense. I might even be tempted to join your crusade on that point.
But we must not forget that there is a broader principle at work. One that perhaps at some level might be seen to even encompass the Second Law. One that could perhaps be viewed as parallel to the Second Law, but addressing aspects other than thermal. One that is regularly observed and well supported.
Should we call that principle the Second Law? Many people have, but I’m certainly not going to fall on my sword over the issue. I don’t care what we call it. The point is it is real, it is there, we should recognize it.
Now the question for you: Do you recognize it?
Oops. Me @ 53:
I meant to say that such decreases are completely normal and unremarkable in open systems.
Eric Anderson @ 56:
Agreed.
The Earth-Sun system is not closed either; the sun dumps heat (well, thermal radiation) to deep space at a huge rate. That doesn’t mean you can’t apply the second law to it, or to the Earth alone, though — it just means you can’t apply the “entropy always increases” form, you have to pick a form that actually applies to the system. The simplest form I know that can be applied directly to Earth (or the Earth-Sun system if you prefer) is that the system’s entropy cannot decrease faster than it’s exported from the system. In the case of Earth, that export rate is at least 3.3e14 W/K, so as far as the second law is concerned the Earth’s entropy could be decreasing at up to that rate.
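For anyone who wants to sanity-check that rate, here is a rough version of the calculation (my own assumed inputs, not the commenter’s: solar constant 1361 W/m^2, Bond albedo 0.3, effective radiating temperature 255 K):

    import math

    SOLAR_CONSTANT = 1361.0   # W/m^2 (assumed)
    ALBEDO = 0.3              # fraction of sunlight reflected (assumed)
    R_EARTH = 6.371e6         # Earth's radius, m
    T_RADIATING = 255.0       # effective radiating temperature, K (assumed)

    # Earth absorbs sunlight over its cross-sectional disk...
    absorbed_power = SOLAR_CONSTANT * (1 - ALBEDO) * math.pi * R_EARTH**2

    # ...and in steady state radiates the same power to deep space,
    # exporting entropy at roughly power / radiating temperature.
    print(absorbed_power)                  # ~1.2e17 W
    print(absorbed_power / T_RADIATING)    # ~4.8e14 W/K, above the 3.3e14 W/K floor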
Yet many people ignore or deny the thermodynamic implications of this.
Nobody’s doing that. The energy from the sun (and just as important, heat flow from Earth to deep space) explains why the origin and evolution of life is not forbidden by the second law of thermodynamics, but that’s not at all the same as explaining how or why it does happen. The second law never actually says what will happen; it only says what won’t.
Let’s look at a less controversial example: cold fusion. At low temperatures and pressures, spontaneous fusion of light elements to heavier elements is allowed (in fact, strongly favored) by the second law, because the free energy of the fusion products is lower than that of the initial atoms. But it doesn’t actually happen. All the second law can really say is “it’s not forbidden for this particular reason”. But it might still be impossible for some other reason, or it might just turn out that there’s no way for it to happen. In the case of cold fusion, it doesn’t happen because the activation energy is just too high.
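A crude Boltzmann-factor estimate shows how prohibitive that barrier is (ignoring quantum tunneling, which softens the numbers but not the conclusion; the 100 keV barrier is my own round illustrative figure):

    import math

    k_B_eV = 8.617e-5       # Boltzmann's constant, eV/K
    T = 300.0               # room temperature, K
    barrier_eV = 1.0e5      # ~100 keV Coulomb barrier (illustrative)

    # Fraction of collisions energetic enough to fuse: ~exp(-E/kT)
    log10_fraction = -barrier_eV / (k_B_eV * T * math.log(10))
    print(log10_fraction)   # ~ -1.7e6: suppression by a factor of ~10^1,700,000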
The most you can really argue here is that evolution is in the same category as cold fusion — it’s thermodynamically allowed, but actually impossible for some other reason. But you cannot say that that “other reason” is the second law.
Now, there are some people who are attempting to actually explain life — not just why it’s not forbidden, but why it actually happens — on thermodynamic bases (Jeremy England is the latest example). But this is a much more difficult task, involves much deeper principles of thermodynamics, and so far I haven’t seen any arguments that I really found convincing.
As far as I can see, thermodynamics neither forbids nor requires life to originate and/or evolve.
I am going to stamp my feet and insist that a law of physics says what it actually says, and not some other things that someone thinks it should say. I am also going to insist that if people claim to be doing physics, that they actually do physics.
That does not mean I object to applying the second law to things other than heat. What I do insist is that if you’re going to apply it to something other than heat, you actually work out how it applies to that thing, rather than just making stuff up and attributing it to the second law.
That question has a simple answer: it is not the second law, so don’t call it that.
So far, all the attempts I’ve seen to define and justify such a broader principle have been wrong and/or useless. Take Sewell’s attempt at a broader principle:
Without a definition of what’s sufficient to make something “not extremely improbable”, this is useless. You and Sewell may think that sunlight and waste heat radiation aren’t sufficient to make evolution “not extremely improbable”, but I do think they’re sufficient; Sewell’s principle doesn’t do anything at all toward deciding which of us is right.
Gordon Davisson @ 58,
Without a mechanism to harness the energy, sunlight and waste heat radiation merely make matter warm and irradiated; they do not assemble it into functional complexity. The productive harnessing of energy achieved by photosynthesis is an extremely complex system itself. How did it evolve in the first place?
This is the same kind of problem as the one Karl Popper noticed when he pointed out that the assembly instructions to build the cellular machinery necessary to process the information encoded in DNA were themselves also encoded in the DNA. How did such a system evolve in the first place? Why would a digital information storage device like the DNA molecule evolve at all when it had no functionality whatsoever until it was populated with useful information and there was cellular machinery to utilize that information?
The only plausible explanation for such things is the involvement of an intelligent agent in the process.
harry @ 59:
There’s a lot we know about the history of evolution and how various features evolved, but there’s also a lot we don’t know much about — abiogenesis is the single biggest example, but there are lots of others.
But to infer that it must’ve been an intelligent agent? No, that’s the ID-of-the-gaps fallacy. The only thing we can say about the parts we don’t know about yet is “we don’t know about that yet.”
Gordon Davisson @58:
Sounds good. But again, it is not relevant to the key issues at hand. Where the energy comes from is a minor, bit-player issue in all of this. We could have energy from radioactive decay, from thermal pools, or from deep-sea volcanic vents. Getting energy into the system is not and never has been the primary issue. So when people point out that the Earth can get energy from the Sun they aren’t adding anything to the discussion.
The only implication of this is that energy is available. Again, not an issue. Not something anyone, including Granville, has ever disputed.
That might be a reasonable way to start thinking about it. However, we need to be careful not to throw out the baby with the bathwater. Two caveats are in order:
First, as has been pointed out many times, a number of scientists and authors have talked about the Second Law more broadly than just thermal aspects. Are they all wrong? Perhaps. Should we demand that they stop referring to the Second Law in these cases? Perhaps. But they are pointing to a principle that is real and exists. They are pointing to a Second Law-like principle. Maybe even pointing to the principle behind the Second Law, which is a useful contribution to our understanding of the world around us. So I hear you on the terminology and your desire to limit this term only to the most narrow, classic thermal sense. But I guess I’m willing to hear people out who argue that there is a broader principle at work or that we can start thinking of the Second Law in broader terms. Even in the classic sense, there are probably a dozen different attempts by well-known scientists to formulate the principle behind the Second Law into a particular description or statement in our language. Must we forever only think of the Second Law in the terms outlined by some guy a couple of centuries ago? Perhaps. But perhaps we should also be open to exploring and framing things more broadly.
Second, I would agree with you that the Second Law does not prohibit the existence of living organisms and the like. Obviously it does not. We are here. Yet the Second Law, even in the classical sense, is very much relevant to what we can expect of certain chemical and biochemical reactions, operating on their own. And the question on the table is not whether the Second Law prohibits the existence of certain systems, but whether the formation of such systems by purely natural and material processes is thermodynamically prohibited or at least highly unlikely. These are very different questions. It is difficult perhaps to apply the Second Law to some broad, vague term like “evolution,” but there are no doubt many specific, individual cases in which we can say that a certain reaction or a certain system is impossible or exceedingly unlikely, based in part on thermodynamic constraints and considerations.
Yeah, I agree with you that these proposals are unconvincing. Nick Matzke sent us down this rabbit hole before with another proposal of this kind that he had run across. As far as I’ve looked into these kinds of proposals, they typically rest on vague generalizations, questionable assumptions, and, too often, semantic games about what is thermodynamically favorable.
I agree with you in general, at least insofar as we understand “forbids” in the 100% absolute sense and insofar as we refer to “evolution” vaguely and generally. However, we need to be careful not to claim that the Second Law is irrelevant, based on the important caveats I outlined above, particularly the second one.
I don’t think the statement you’ve quoted is his broader principle in terms of applying the Second Law to anything. The statement you quoted is just an observation he is making (one that is definitionally true, if seemingly trivial) that simply throwing energy into a system doesn’t make the improbable probable. That statement is a direct response to the “compensation” arguments that tend to be thrown around as though they were some kind of explanation. It is true – trivially so – that unless the additional energy is doing something very specific to make the improbable probable, then simply adding more energy doesn’t help.
I am very curious, though, to understand why you think sunlight and waste heat radiation are sufficient to make evolution probable? What special property do you think energy brings to the table for something like, say, origin of life?
Presumably you aren’t simply making the general claim that evolution is more likely to occur when energy exists than when it doesn’t exist. That is no doubt true, but is also obviously not the issue at hand. Again, having enough energy on the Earth, whether from the Sun, radioactive decay, hydrothermal vents or otherwise, is not the issue.
Presumably you also aren’t simply making the slightly more focused observation that some chemical reactions require energy. Again, yes, everyone agrees. We have to have some sufficient level of energy for certain reactions to occur. So that makes some amount of energy a necessary condition; but that does not make it a sufficient condition.
What special property do you think energy brings to the table? And if it does bring some special property to the table, presumably adding more energy would bring more of that property, which would be interesting indeed.
So what special property do you think energy like sunlight or waste heat radiation brings to the table that is sufficient to make something like the origin of life probable?
Gordon Davisson @60:
C’mon, Gordon. You know better than that. You have some good thoughts and a generally careful approach, so please don’t stoop to the tired anti-ID rhetoric.
Two things:
1. There is no ID-of-the-gaps fallacy. ID does not argue that because we don’t know how something came about it must have been designed. You know better than that. ID makes a positive case for what can be accomplished through intelligent agents and also makes a comparative case against competing non-intelligent explanations. It is perfectly reasonable and appropriate. You can disagree with the conclusion; you might feel the design inference isn’t strong enough for you to side with it in a particular case. But there isn’t a fallacy in the approach.
Some people, unfortunately, like to caricature ID for their own philosophical or worldview purposes. They like to caricature it as arguing “we don’t know; therefore design.” So, yes, there is a caricature-of-ID-of-the-gaps fallacy. But there is not an ID-of-the-gaps fallacy.
2. Given that you acknowledge you “don’t know about that yet,” you must logically also be open to the possibility that such a system or part was designed. And if it were designed, how could we tell? That is precisely the question asked by ID. It is a perfectly legitimate, objective, science-based question.
—-
The upshot is this:
Someone could reasonably take the position that we don’t know enough about biological system x or y to say how it came about. Many systems are in that category. At that point the reasonable — the intellectually responsible — thing to do as it relates to ID is acknowledge (a) the possibility of design as a live option, and (b) the ways in which we might be able to draw a reasonable inference to design.
Again, one might feel that the inference in a particular case isn’t strong enough. One might hope in their heart-of-hearts to someday discover a purely naturalistic explanation. But to reject the design inference based on philosophical or worldview biases or to reject the design inference based on a false caricature, is neither helpful nor intellectually responsible.
Thanks; same to you!
As I said, “In my coins example, with regards to energy, both statements also are satisfied.” Landauer’s principle would seem to support that. From what I have read, the experiment that purportedly supported Landauer’s principle did the following (from http://www.nature.com/news/the.....ed-1.10186):
It seems the way information is stored is related to the configurational entropy, and thus, when the available states are reduced by resetting the “bit”, there is a dissipation of energy. I assume this is not too dissimilar to my forced compression of a gas example, in which the number of states available to the gas molecules is reduced, but is accompanied by a transfer of energy through the work to another form with increased entropy.
Yes, this is a valuable distinction to make, but the origin of order, or information, that is meaningful, or functional, is what the debate is about.
When the Zumdahl textbook says “Looking at the new sequence of the cards, you would be very surprised to find that it matched the original order”, what is surprising is that the resulting order has meaning. Whether we should be surprised or not if we get the original order has nothing to do with how much thermal entropy enters or leaves the system. It sounds like you don’t dispute that there is no relationship between the “meaningfulness” of the order or information and the thermal entropy that must be released.
That’s the point. If we flip 10,000 fair coins, and they exactly spell out a passage from Shakespeare in binary (i.e., result in a meaningful result, where meaningful states are a very small subset of the possible states), then our surprise isn’t mitigated by any thermal entropy measurements. If a DNA sequence arises that codes for a functional flagellum, our surprise again isn’t mitigated by any thermal entropy measurements. It could, theoretically, be mitigated if the physics dictates a restricted set of available states such that these results are not a very small subset of the possible states.
It seems, though, that talking about the Second Law with regards to the statistical principles behind it was never controversial – until a design advocate did it. Here are just a few more examples of books that express the idea that the Second Law has something to say about the surprise we should feel when we get an order that has meaning.
From University Physics by Young and Freedman, in the Chapter “The Second Law of Thermodynamics”:
From a different edition of University Physics, in a section about “building physical intuition” about the Second Law:
From Isaac Asimov in “In the game of energy and thermodynamics, you can’t even break even”:
Yes, it is useless by itself. Sewell’s contribution here is not an argument against those who believe there is nothing extremely improbable about the origin and development of life. That is the purview of others, such as Behe, Meyer, Denton, etc. Sewell’s contribution is to refute those, like Asimov, who seek to avoid that question altogether by arguing, yes, the origin and development of life IS extremely improbable, but it isn’t an issue because of the “increase in entropy that took place in the sun”.
Less rigorously, Sewell does sometimes point out the apparent double-standard we have in applying the (general principle behind the) Second Law.
From Basic Physics by Kenneth Ford:
From General Chemistry, 5th Edition, by Whitten, Davis, and Peck:
We have no qualms about concluding a dropped mirror can shatter but not self-assemble, or that automobiles may mangle in a crash but not re-assemble, or that tornados may destroy but not construct houses. Yet, when it comes to the question of whether a barren planet can turn into one filled with jet planes, computers, and human brains, not to mention automobiles, live rabbits, and mirrors, we say, sure, this can happen by natural processes.
Unlike the refutation of the compensation argument, this “intuition” obviously isn’t a rigorous proof, nor does it claim to be. Maybe it really is the case that the origin and development of life only appears to be extremely improbable, but really isn’t. There are, I’m sure, plenty of other threads that discuss that. Nevertheless, it is still worthwhile to point out how contrary such a reality would be to our experiences in other contexts.
Does adding energy increase or decrease the number of macrostates?
Does adding energy increase or decrease the number of microstates?
Statistical mechanics does not state that highly improbable states cannot be realized. Neither does the Second Law. Boltzmann specifically wrote about this, which I have quoted here in the past but don’t recall the source.
Physics textbooks are among the worst. =P
Gordon Davisson @ 60,
To just assume a phenomenon must have come about mindlessly and accidentally, when that assumption isn’t just counterintuitive, but is also without any evidentiary basis whatsoever, is the “naturalism of the gaps” fallacy.
Life aside, since it is the subject the origin of which is being debated, we have no evidence whatsoever that significant functional complexity ever comes about mindlessly and accidentally. Every instance of significant functional complexity known to us is known to have had intelligent agency as a causal factor in its coming about.
Not only that, life doesn’t merely exhibit significant functional complexity, it is digital information-based nanotechnology the functional complexity of which is light years beyond anything modern science knows how to build from scratch. Technology, by definition, is the application of knowledge for a purpose. That is why technology never comes about mindlessly and accidentally. It couldn’t be more obvious that life is technology that is astoundingly superior to our own, and is therefore the result of the application of knowledge for a purpose.
harry @ 66
It is a pity ID scientists take no steps to find the possessor of such wonderful, advanced technologies. We could profit from it.
[Sorry if this one’s a little incoherent; I keep rewriting bits to improve it, but it’s getting late, I’m getting tired, and I’m no longer sure my “improvements” are anything of the sort.]
Eric Anderson @ 61:
If you would go that extra step and say “the second law is not relevant to the key issues at hand”, I could completely agree with you.
If we’re talking about actual thermodynamics, the implications are actually huge (and the important aspect is that it’s free energy rather than near-equilibrium thermal energy).
They’re not all wrong.
– The case for a connection between thermodynamic entropy and information entropy is IMO quite solid at the theoretical level, and in the last decade has started to gain empirical support as well.
– Jacob Bekenstein proposed a “generalized second law” that applies to systems containing black holes. I’m far less familiar with this, but my understanding is that there’s a reasonably solid theoretical case for this.
– The fluctuation theorem extends the second law down to microscopic systems. (The second law only applies to macroscopic systems where you are averaging over a huge number of atoms/molecules/whatever.)
But while some generalizations are quite solid and well-supported, that doesn’t mean that all generalizations are valid. In each of the cases I listed above, the extension is based on a carefully-worked-out theoretical basis.
On the other hand, all the proposed generalizations I see coming from the ID side here seem to be based on reasoning that goes like this: “It’s obvious that X is impossible, so the second law must forbid X. Oh, the second law doesn’t forbid X? Then there must be some more general form of the second law that does forbid X.” I’ll happily dismiss these generalizations as utter nonsense.
You should refer to the second law only to the extent that you are actually using the second law. Claiming your case is based on the second law when it’s actually based on something else (even something vaguely like the second law) is dishonest.
I’m also willing to hear out people who argue for that, but so far I haven’t seen anyone make an actual solid case, just lots of unsupported claims and wild handwaving.
Let me draw a line in the sand here: if you don’t have a good understanding of thermodynamics to start from, you have no business trying to draw generalizations from it. I don’t see anyone here who both actually understands thermodynamics and thinks these generalizations have any value. Rob Sheldon is a possible exception, but so far I haven’t seen him say anything of substance on the issue.
I’ve never seen a coherent argument for such a case. If you have such a case, I’d be interested in seeing it, but be warned: I’ll only take it seriously if it’s based on an actual, well-recognized formulation of the second law (and you don’t botch the physics or logic). Be clear about what form(s) of the second law you’re applying, and what system(s) you’re applying it to. What are the boundary conditions of the system(s), and how does that influence what the second law implies? If you’re using a probability-based form, what probability distribution are you using and why?
Not just energy, free energy. Free energy is an important concept in thermodynamics, which acts as a sort of opposite of entropy. I can’t explain it fully without getting into technical details, but you can think of free energy as energy that hasn’t been thermalized (i.e. degraded to heat at the ambient temperature).
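For reference, the two standard textbook free energies both subtract off the thermalized part of the energy:

    F = U − T·S          (Helmholtz free energy, for constant volume)
    G = U + P·V − T·S    (Gibbs free energy, for constant pressure)

The −T·S term is why free energy behaves as a sort of opposite of entropy: at fixed energy, more entropy means less free energy.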
The significance of free energy in this case is that it drives a system (e.g. Earth) away from thermodynamic equilibrium.
Equilibrium states are, well, boring. Equilibrium probability distributions are dominated by maximum randomness (sometimes combined with other constraints, like energy minimization, depending on the boundary conditions). Take a look at a probability-based “thermodynamic” argument against evolution or abiogenesis; chances are it assumes an equilibrium probability distribution.
Adding free energy to a system drives its state away from the equilibrium, and as a result changes the actual probability distribution away from what it would be at equilibrium.
Take a simple example: a rock. At equilibrium, its thermal energy will be distributed randomly throughout the rock, and it’ll have an extremely high probability of being pretty uniform. Basically, the rock will be at a uniform temperature. But if you add heat (at above-ambient temperature so it has free energy) to one side of the rock, its temperature will become uneven (higher on the side you heated). At equilibrium this uneven temperature would be wildly improbable, but away from equilibrium it’s entirely normal.
Take a more complex example: salt dissolved in the Earth’s water supply. Seawater has an osmolarity of around 1000 mOsm/l, which means that it has about 1 mole of ions (=6e23 ions) per liter of water. At equilibrium, those ions will be randomly distributed through the water (ignoring gravitational gradients); in some places the concentration will be slightly lower and in places slightly higher, but it’s unlikely to vary significantly. For example, the probability that a particular liter of seawater will have only 1/35th the expected number of ions is (if I’ve done my math right) about 1 in 10^(10^23). Practically impossible.
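That figure can be sanity-checked with a Poisson large-deviation estimate (my own back-of-envelope; the exact double exponent depends on assumptions, but the “practically impossible” verdict does not):

    import math

    expected = 6.0e23    # ~1 mole of ions per liter, per the figures above
    f = 1 / 35           # observed fraction of the expected ion count

    # Poisson/Stirling large-deviation rate: ln P ~ -lambda*(f*ln f - f + 1)
    rate = f * math.log(f) - f + 1
    print(-expected * rate / math.log(10))   # ~ -2.3e23: the same
                                             # double-exponential ballpark as
                                             # 1 in 10^(10^23)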
Freshwater has less dissolved salt than that, and there’s a lot more than a liter of it on Earth. Clearly this cannot happen by chance under the equilibrium probability distribution, but free energy from the sun shifts the probabilities enough that a practical impossibility becomes entirely normal. Fresh water not only exists, but new fresh water is continually generated thanks to free energy from the sun.
So, free energy can shift probabilities away from their equilibrium values, even by huge amounts. But that doesn’t mean it makes everything more probable. Since the total probability is always 1, every increase in probability has to be balanced by a decrease in the probability of some other outcome. The second law can be used to place a limit on how far the probabilities will shift, but it doesn’t give any indication of what direction they’ll shift.
Close to equilibrium, there are some linearity principles that tell you a bit about what’ll happen as the system is pushed out of equilibrium. Essentially, if you “push” twice as hard, the system will depart twice as far from equilibrium, in the same direction. The rock getting warmer on one side is an example: add twice as much heat, get twice the temperature gradient.
But further from equilibrium, linearity breaks down and the results get much harder to predict (and much more interesting!), as in the case of the hydrologic cycle producing fresh water. The nonlinear regime is where self-organization can take place. I’m not very familiar with this branch of thermodynamics, but it’s fairly well established.
Can this sort of far-from-equilibrium self-organization explain abiogenesis? There’s clearly plenty of free energy available to drive it, the question (well, the thermodynamic part of the question) is how it got coupled to the process of abiogenesis. This is a question that any proposed mechanism of abiogenesis must answer, but there’s no reason to think it’s unanswerable.
I suspect that there’s a mechanism that can, under the right circumstances, couple free energy (probably from a geothermal source) to drive abiogenesis, thus making the origin of life “not extremely improbable”. I could be wrong, but that’s my suspicion.
The situation for evolution is much clearer. Basically, evolution is a side effect of reproduction, which is driven via well-understood mechanisms from solar free energy. There’s no (thermodynamic) problem here at all.
Gordon Davisson @ 68
You have successfully refuted your straw man.
Suppose there is a group of scientists who are wholeheartedly committed to the notion that humanity is the only instance of intelligent beings in the entire Universe. We’ll call this group the ESBU (Everybody’s Stupid But Us). Further suppose that an unmanned (or un-aliened in this case) extraterrestrial drone, which was at first mistaken for an asteroid hurtling towards the Earth, slows down at the last minute and gracefully lands in the middle of Central Park.
The ESBU, of course, insists that this object is actually only an extremely peculiar asteroid, not a drone created by alien intelligent agents. They make their case as follows:
If the ESBU arguments sound ridiculous, so should such arguments applied to the digital-information-based nanotechnology of life.
harry: suppose that an unmanned (or un-aliened in this case) extraterrestrial drone, which was at first mistaken for an asteroid hurtling towards the Earth, slows down at the last minute and gracefully lands in the middle of Central Park.
Seems very similar to how human-manufactured rockets land on other planets. Further investigation might reveal other similarities — or not.
harry @ 69:
From Sewell’s original posting:
Yes, I was paraphrasing freely. But as far as I can see, that’s the core of his argument.
Gordon Davisson @71,
Life aside, the generalization of the 2nd Law rings true. There ARE NO instances of significant functional complexity coming about mindlessly and accidentally. Matter DOES inexorably tend to disintegrate into a more likely state. That is why significant functional complexity, being matter’s least likely state, never comes about mindlessly and accidentally.
That is also why we would immediately know that an alien drone was just that, not an extremely peculiar asteroid. In the same way, open minds capable of objectivity realize that the digital information-based nanotechnology of life couldn’t have come about mindlessly and accidentally. This explains why many insist that “the ‘second law of thermodynamics’ should never have been generalized,” and why they turn any discussion of the generalized 2LOT making the mindless and accidental emergence of life an impossibility into a discussion of how the non-generalized 2LOT doesn’t really apply to the emergence of life. The strategy is to distract everyone from the embarrassing reality that the generalized 2LOT, with law-like certainty, damns the notion that life emerged mindlessly and accidentally.
It seems to me that Sewell is merely saying that you can call it something besides the generalization of the 2LOT, but that doesn’t change its law-like impact on nature, which renders the mindless and accidental emergence of significant functional complexity impossible.
Gordon Davisson @68:
Thanks for your additional thoughts.
Possibly. Just curious, would you say that the Second Law is wholly, completely irrelevant to chemical reactions or to the physical structures we would expect to see form in a given system?
Interestingly, some researchers in the origin of life context (including the gentleman you cited earlier in this thread) are explicitly trying to address the thermodynamic issues. I realize you don’t think they’ve provided a solution yet, nor do I. But it is worth noting that they talk about thermodynamic issues that need to be overcome. Are they wrong to do so? Worth at least pondering.
Couple of things. First, many people on the ID side would agree with you about the Second Law and think it is a poor argument. There is not a monolithic ID-related stance on this issue. Presumably you are talking about Granville and a few others who have argued from the Second Law.
Second, it bears repeating yet again, there are lots of people who have referred to the Second Law in a more general sense, as has been pointed out to you several times. If you want to express your righteous indignation about their use of the term, fine. But please stop presenting this as though it were some kind of issue brought up exclusively by ID people. It isn’t and wasn’t. Much more importantly, it is a real, observable fact in the real world, as has been acknowledged by many scientists.
What would you like to call the principle that applies to the class of phenomena they are discussing? Is there a name or a principle that would make you more comfortable?
This is more of an allegation against Granville than me, but let me just suggest that you are going a bridge too far. You are implying intent to deceive when that is not necessarily the case. Granville, for example, has tried (we can debate how successfully) to explain that he is talking about a broader principle than pure thermal aspects. Lots of other authors and scientists have done the same. Are they all being dishonest? Of course not. They are observing something in nature that is real and important. Then they are trying to apply a principle that can help us understand that observation, either by arguing that (a) we should think of the Second Law in a broader way than it was originally formulated, or (b) there is a parallel principle at work. We can debate with them and disagree with them all we want. But calling the approach “dishonest” is hardly becoming.
Great, now we have a far-from-equilibrium state. Where does that get us on the path to life? Essentially nowhere. Barely moves the needle.
Is energy important for certain chemical reactions? Sure. Is energy important for living systems? Sure. So having some energy, even free energy, is a necessary condition. But the only thing your “open” system does is provide more free energy. That isn’t even a concern on the table. Having enough energy for abiogenesis or evolution has never been in question.
Having the right kind of energy at the right time and in the right place is perhaps an open question. But even that is a bit player in the list of problems for abiogenesis. Wouldn’t even crack the top 10. And whatever benefit an open system provides in terms of supplying even more of that free energy is utterly unhelpful to the task at hand. The issue has never been whether there is enough free energy available.
Well, if you are talking about self-organization, then you are clearly not on the path to forming life or living systems. Self-organization is anathema to such systems. Complete category mistake.
Yep, agreed. Plenty of free energy available on the Earth. That has never been the issue. And, therefore, I am glad you have now come around to agree 🙂 that the “open system” business is a complete red herring.
Well, at least you are backing down from the idea that abiogenesis is “probable,” so that is good. You also seem to be implying, if I take a charitable reading, that pumping more free energy into the system, in and of itself, doesn’t really get us anywhere significant down the road to abiogenesis.
Rather, we must have some kind of system that can extract, coordinate, and use that free energy to perform work. That is indeed much closer to the real issues at hand.
And so far, based on everything we know about the world around us, there is every reason to acknowledge that no such system exists in the abiogenic context. There is also no rational reason to think that one would just arise on its own.
Those are the facts at hand. Those are the problems that have concerned origin of life researchers for decades, not just ID proponents. Any claim that abiogenesis will occur, regardless of copious amounts of free energy, is nothing but pure, unsupported, contrary-to-evidence speculation.
The idea that free energy + reproduction results in evolution is simplistic and naïve, but we can leave that discussion for another time. Yes, there is plenty of free energy. So again, that has never been the issue, and the ‘Earth-is-an-open-system’ compensation argument is a red herring, even in the post-abiogenesis realm.
Considered from a materialistic perspective, life violates the fundamental principle behind the second law — the principle that matter always tends to go toward probable states. Organisms violate this principle, that is, until the moment of death.
Energy (e.g. food, sunlight) fails as a sufficient countervailing force, since it cannot explain the order and organization we find in life.
The materialist is forced to adopt the incoherent position that the fundamental principle behind the second law does not apply to all matter. Put in other words: some conglomerations of matter (organisms) violate the fundamental principle behind the second law.
Gordon Davisson #68
Your final insult, “dishonest,” aimed at Granville and whoever supports the ID argument from the 2nd law, is very telling and typical of a card player with a “bad hand”.
Then the Discovery Institute is “dishonest” because it endorses Granville and publishes his books on the 2nd law. Then the BioComplexity journal is “dishonest” because it publishes Granville’s articles. Personally I know no one in the ID movement or among creationists who opposes Granville’s argument. Maybe some are agnostic, but nobody clearly refutes it. Then all these people are “dishonest” too.
Instead maybe you, who deny the evidence and defend a falsity, believe yourself to be honest. Well, I am pretty sure that the careful reader is able to work out who is right between Granville, who uses few words, and you, who write many words as a smoke screen to hide a truth that is lethal for evolution.
Origenes:
No.
Origenes:
And again, no.
No wonder people think we’re IDiots.
Mung,
Zero counter-arguments and an ad hominem attack fail to persuade.
Origenes,
First, you didn’t make any argument yourself, you just declared what you believe.
Second, my arguments (and those of others) are up-thread. I didn’t feel like repeating myself, again.
Origenes @74:
Welcome to the thread.
A charitable reading of your comment provides a pretty good idea of what you are driving at. However, if I might, there are a couple of things that could be tightened up to make sure the point is more clear and less subject to, for example, the cryptic and somewhat unhelpful critique by Mung.
I know you probably know all of this, but thought I would lay it out just in case it is helpful to any other readers as well.
—–
In my experience, there are three major hangups, or traps for the unwary, surrounding this issue of the Second Law. The first is a purely semantic issue. The second is historical. The third is evidentiary.
1. First, the semantic issue. It is generally helpful in these discussions to avoid saying that “x violates the Second Law.” Particularly if x is an observed phenomenon. Nothing violates the Second Law. Ever. I know you know that, and so does everyone else, including Granville. Further, anyone who understands the substantive issues at play and who is sincerely desirous to address the substantive issues will realize that no-one, not Granville nor anyone else, is claiming that the Second Law was ever in fact violated. However, when someone says “x violates the Second Law” it gives very easy ammunition to opponents to use the smear tactic and shout, “So-and-so is an idiot, because they think the Second Law was violated!”
One might occasionally get an audience that is thoughtful enough to understand the real argument without getting hung up on semantics, but it is rare. The semantics are just too easy and just too tempting, and if your opponent catches you on the semantics it lets them avoid addressing the substance altogether.
So best to be careful with the wording from the outset. For example, there is a difference between saying:
(a) “Living organisms violate the Second Law.”
and
(b) “The normal trajectory of the Second Law tends against, not toward, the formation of something like living organisms under purely natural conditions — unless there is a countervailing factor at work.”
Anyone familiar with the debate will understand that Granville, or anyone else, saying something like (a) really means something like (b). But once you say (a), you fall right into the semantic trap and the debate will get bogged down in semantics before it ever gets off the ground.
2. Second, the historical issue. There is a real, legitimate question about whether the Second Law should be understood more broadly than it was originally articulated. As has been pointed out by commenters above, and as we all know, many authors and scientists have talked about the Second Law in a more broad sense.
However, some people, including Gordon Davisson in our interaction above, prefer — nay, insist, as a matter of all that is right and holy — that the Second Law can only, ever, be understood in the very narrow, original sense in which it was articulated by some guy a couple of hundred years ago.
There is excellent reason to question this mental rigidity, but sometimes it is better to avoid it altogether. We note, for example, that Gordon Davisson, in multiple lengthy comments in this thread has completely failed to address the substantive issue that Granville is raising, offering nothing more than a couple of vague claims in passing that he thinks lots of sunshine and free energy make abiogenesis and evolution possible. Instead, his entire effort has been to attack the idea that the Second Law can ever be understood as anything other than the way it was originally put forth.
So we can battle and debate and fight over definitions, but at some level it is better to say, “Fine. Keep your original, narrow definition of the Second Law. Look at x in nature. It is a real, observable principle. We can call that principle whatever we want. Now, given that principle, how do we explain the origin of life or the formation of living organisms?”
3. Third, the evidentiary issue. This is really the only substantive point of the three. But in most debates we have to get past the first two before we can even begin to address this one. Thus, my careful and lengthy back-and-forth with Gordon Davisson to get to this point. Hopefully he will be willing to continue the discussion, now that we’ve largely dispensed with the semantic and historical distractions.
There is reason to think that the Second Law — even in its narrow, classical formulation — might have something to say about the expected trajectory of chemical reactions and the formation of functional integrated structures. But we have to talk about specifics if we are going to make headway.
A number of origin of life researchers are quite aware of these issues and specifically indicate that their research seeks to deal with the thermodynamic problems. These are solid, card-carrying, kool-aid-drinking members of the Darwinist club. This is not an issue that was dreamed up by intelligent design proponents. So there are good reasons to take the thermodynamic constraints into consideration, but it needs to be done by addressing specific cases, not just by saying “the origin of life violates the Second Law,” or something like that.
Finally, and this is a bit of a nuance, but an important one: living organisms clearly do not violate the Second Law. They exist, they perform work, they are all around us. None of them are violating the Second Law.
This harks back somewhat to our semantic issue, but with a twist. Specifically, the question is not so much about whether living organisms — in their process of living — go against our expectations of what to expect from natural processes. Rather, it is the source of the initial formation or origin of those organisms that is at issue.
Everyone agrees that living organisms have the ability to take in energy and perform work. Everyone agrees that numerous mechanisms within the cell and otherwise are required to keep the organism going — living, breathing, growing. It is not the living that is the problem for the materialist creation story. It is the origin.
So we are again back to the same question that we face with the origin of life, namely, are there principles, thermodynamic or otherwise, that militate against the natural formation of a living organism or against the natural formation of new biological features?
There is excellent reason to think that there are. But we need to frame the issue in that way and, to the extent possible, provide specific examples. This is no small task, and it can be incredibly frustrating — particularly when the rejoinder from the opposition is a simplistic, naive, vague, hand-waving, “Well, I think such things can form under natural conditions. So there.” But to the extent possible, we need to focus on specifics. It is there that the thermodynamic issues can be brought to bear.
—–
P.S.
I should add that in general this is a difficult area in which to make headway. Granville’s broader point is well taken and is a substantive issue that needs to be addressed. However, it is so difficult to get past the semantic games and the historical rigidity and it takes enough effort to come up with good, specific, substantive examples on the thermodynamic side that this becomes an exhausting area in which to debate. Not that it isn’t worth doing; not that it isn’t important at some level. But it can quickly drain one’s energy with little to show for it. Typically a lot more mileage is gained by homing in on the more obvious problems, like Behe’s focus on the formation of integrated functional molecular machines or Meyer’s focus on information content.
Mung,
Indeed, isolated freakish incidents happen. I agree. However, when one shuffles a deck of cards a thousand times and it stays the same each time… doesn’t that entail a violation of the laws of probability which are foundational to the second law?
Life is like that to the gazillionth power.
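Just to put rough numbers on that (a minimal back-of-the-envelope sketch in Python; the thousand-identical-shuffles scenario is the hypothetical above, not data): a fair shuffle makes each of the 52! orderings equally likely, so the same ordering a thousand times in a row has probability (1/52!)^1000.

import math

# log10 of the number of distinct orderings of a standard 52-card deck
log10_orderings = math.log10(math.factorial(52))        # about 67.9

# Probability of one specific ordering from a single fair shuffle: 1/52!
# Probability of the SAME ordering coming up 1000 shuffles in a row:
log10_p = -1000 * log10_orderings

print(f"one specific ordering:    ~10^-{log10_orderings:.1f}")
print(f"same ordering 1000 times: ~10^{log10_p:.0f}")   # about 10^-67907

Nothing forbids such an outcome as a matter of sheer logic; it is “merely” that improbable.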
Origenes: doesn’t that entail a violation of the laws of probability which are foundational to the second law?
While both playing cards and entropy entail distributional probabilities, they are not the same thing. Entropy concerns distributions of microstates. If you sort playing cards, the laws of thermodynamics just say it requires free energy to power the sorter. The stored energy of the playing cards is in the arrangement and bonds of the paper fibers.
harry @ 72:
I have to strongly disagree on all three counts.
– There are examples of significant functional complexity coming about mindlessly, such as river systems — complex interconnected networks of channels, all correctly positioned, sloped, etc to allow water to drain efficiently into the world ocean. They’re formed by erosion, powered by the hydrologic cycle, which in turn is powered by free energy from the sun.
You’ll probably object that this isn’t the same kind of functional complexity you’re talking about, but the fact that you’re going to have to adjust your definition to evade refutation weakens your case significantly.
– Matter only inexorably tends to disintegrate into a more likely state IN SYSTEMS WITH NO FREE ENERGY INFLUX.
– Functional complexity is not matter’s least likely state. In situations where probability is dominated by entropy (/randomness), ordered states are less likely than functionally complex states. For example, a random sequence of letters is more likely to happen to form a meaningful English sentence than it is to just be “aaaaaa…” — there are many possible meaningful sentences, but only one sequence of all “a”s. Similarly, a random sequence of DNA bases is more likely to code for a functional protein than it is to be “AAAAAAA” (or “ACGTACGTACGTACGT….” or…).
Mind you, both organization and order are highly unlikely when entropy (/randomness) dominates. But when that’s not the case, organization can become highly likely (as in the self-organizing systems that tend to form when a system gets far enough from equilibrium).
First, the semantic issue. It is generally helpful in these discussions to avoid saying that “x violates the Second Law.”
Going to the moon violates the law of gravity. Therefore, man never went to the moon, and birds can’t really fly. 🙂
Not sure this is correct. If so, I’d ask Gordon which guy. I can think of at least three different formulations of the second law. The fact is there is something that unifies all three.
Does the concept of entropy only have meaning when a system is at equilibrium?
There exists for every thermodynamic system in equilibrium an extensive scalar property called the entropy, S, such that …
http://web.mit.edu/16.unified/.....ode38.html
Gordon:
– Matter only inexorably tends to disintegrate into a more likely state IN SYSTEMS WITH NO FREE ENERGY INFLUX.
What does an influx of free energy do to change that?
I’m going to be out of touch until tomorrow; I’m partway through a response to Eric Anderson @ 79, and after that I’ll try to respond to other people.
So, basically, I’m being slow to respond. As usual. Sorry about that…
Eric Anderson, thank you for the kind welcome.
I agree with statement (b). There has to be a countervailing factor at work in order to explain the fact that, with regard to the formation of life, the normal trajectory of the Second Law is not operational.
Obviously, what I would like to argue next is that materialism cannot provide for a sufficient countervailing factor.
Earlier you wrote:
In other words, absent such a countervailing factor, the formation of living organisms does indeed violate the Second Law. And this is exactly what Granville points out when he writes that “spontaneous rearrangement of matter on a rocky, barren, planet into human brains and spaceships” represents a violation of the second law.
I fail to understand how natural selection, understood as a process involving nothing but matter, can be posited as something that is beyond the grasp of the Second Law, let alone a countervailing factor to the Second Law. The Second Law operates on matter, and therefore it is incoherent to posit a material process as a countervailing factor.
Returning to statements (a) and (b) which you formulated, if one holds that materialism cannot provide for a countervailing factor, then, for the materialist, saying (b) is the same as saying (a).
From a materialistic perspective I respectfully disagree. The parts do not explain the whole. In each cell countless processes are going on, and although we may expect each process to ‘go its own way’ according to the incalculable factors impinging on it from all directions, what we see is quite different. Instead of becoming increasingly disordered in their relations — as indeed happens after death, when coherence dissolves into disjoint fragments — the processes hold together in a larger unity.
However, this may be a discussion for another day.
The second reason I disagree, and more relevant to this thread, is that if evolution is regarded as a purely material process towards improbable states, from simple replicators towards humans, then (purely material) organisms and their purely material environment are responsible for a tendency conflicting with the normal trajectory of the Second Law. If evolution is real then bacteria are (or have been) involved in the trajectory towards even more improbable states (like bats and humans).
Mung @86:
Indeed. A most excellent question.
I’m waiting with bated breath to find out . . .
Origenes:
I hear you, and there is much that I agree with you on. Yet at the heart of the matter is still a question that is somewhat different from the Second Law issue. Namely, can purely blind, material processes produce something that can countervail the normal trajectory of the Second Law? Materialists argue (without decent evidence, to be sure) that purely blind, material processes are in fact up to the task.
So the rubber really hits the road with the question of whether such a situation can arise by chance. That is the crux of the matter and where most of the attention and energy (no pun intended) should be focused.
I hear you. But in debating, we need to focus the attention on this part: “that materialism cannot provide for a countervailing factor.” The materialist believes in his heart of hearts that it can. So any discussion about the Second Law, or any other law of nature for that matter, is, to the materialist, very much beside the point.
The materialist’s creation story essentially claims that through a long string of accidental particle collisions a system was formed that could do work — to wit, sustaining and maintaining and replicating itself. This is utter nonsense, of course, and completely laughable. But the point for the current discussion is that if such a thing were possible in practice, then the materialistic creation story would have legs.
In other words, logically there is no prohibition against the idea that particles could bump into each other over long periods of time and eventually, by happy coincidence, turn into something like a living organism. It isn’t possible from a practical standpoint. It isn’t going to happen within the resources of the known universe. As a theory for the origin of life it is a complete joke. But, as far as sheer logical possibility goes, such a thing could theoretically occur.
And so the materialist, reposing blind faith as he does in such a vanishingly small possibility, has already convinced himself that the formation of a living organism — or at least that never-before-seen, hypothetical, sacred cow of abiogenesis, the self-replicating molecule — is possible. And once that happens . . . well, then all bets are off. Like Gordon Davisson @68, the materialist thinks that once reproduction is on the table, no miracle is too hard for the magical process of evolution.
Driven of course by the incoming sunlight from the Sun . . . 🙂
You make a decent point. But it doesn’t really address the materialist creation story.
The materialist isn’t arguing that this or that evolutionary event is probable, or that the Second Law wouldn’t normally tend in a particular direction.* Rather, they are reposing their faith in a very simplistic claim: stuff happens; even improbable things occasionally occur; even things that have a vanishingly low likelihood of occurring under the Second Law (or any other law), might sometime, somewhere, in some circumstance, occur.
Is it vanishingly unlikely that something like a self-replicating molecule could arise on the early Earth? Sure. But — at least as a matter of sheer logical possibility — such a thing could theoretically occur, even given the constraints of the Second Law. So, good enough. That is all the materialist needs. He doesn’t need a good theory. He doesn’t need a rational story. He doesn’t need probabilities on his side. All he needs to help himself sleep well at night — and more importantly, to keep those pesky skeptics at bay — is a mental toehold: the sheer logical possibility that something unexpected could theoretically, somehow, somewhere occur.
So, unfortunately, the bottom line is that the Second Law is not something that even concerns the materialist as far as the materialist creation story goes. He isn’t interested in talking about what the Second Law drives toward. He isn’t interested in considering whether the probabilities make sense. He doesn’t countenance the question of whether the trajectory of expected natural processes leads toward evolution.* His is a theory that rests on the simple claim that, hey, strange stuff happens. Sometimes it works; sometimes it doesn’t.
At the end of the day when we strip away the fancy rhetoric and the philosophical gloss, it is really no more substantive than that:
Stuff Happens.
—–
* Note that some origin of life researchers explicitly attempt to address the thermodynamic problems by claiming — through a combination of circular definitions and faulty logic — that the Second Law actually drives toward the formation of living organisms. Laughable though it may be, we’ve seen that argument in these pages before.
Gordon Davisson @ 82,
That you have been reduced to arguing that pointing out how lame it is to use river systems as an example of significant functional complexity somehow weakens the argument of the one pointing it out demonstrates not just the weakness, but also the absurdity, of your position.
River systems are the inevitable result of the laws of physics applied to planet Earth. Stars, planets, asteroids and comets are also the inevitable result of the laws of physics applied to a given material environment. Life, like other technology, does not appear to be the inevitable result of the laws of physics applied to matter.
If an intellect with a capacity similar to that of humanity’s could have examined the pre-life Universe, it wouldn’t have been able to say, “Hey! It’s obvious that digital information-based nanotechnology is going to emerge eventually!” any more than it could have said, “It sure looks like television sets will eventually be produced by the millions.” It might have figured out how the laws of physics when applied to an appropriate material environment would inevitably produce a star, and when applied to another environment a river system, but it would have had no expectation at all of a laptop PC coming about — unless it decided to produce one itself.
What chance in combination with the laws of physics can accomplish appears to be very limited. See my post #6 here:
http://www.uncommondescent.com.....hers-make/
Significant functional complexity of any kind emerging mindlessly and accidentally isn’t just not inevitable, it is also virtually impossible given the limited probabilistic resources the Universe provides.
There has to be a countervailing factor at work in order to explain the fact that, with regard to the formation of life, the normal trajectory of the Second Law is not operational.
The tendency is towards the more probable state. If life is the more probable state that is what the tendency will be. There’s no violation of the 2nd law involved.
I fail to understand how natural selection, understood as a process involving nothing but matter, can be posited as something that is beyond the grasp of the Second Law, let alone a countervailing factor to the Second Law.
I don’t think anyone believes natural selection is beyond the grasp of the second law. Natural selection leads to more probable states being actualized. There’s no violation of the 2nd law involved.
Haha. But I did just come across this quote:
“Natural selection is a mechanism for generating an exceedingly high degree of improbability.”
https://en.wikipedia.org/wiki/Ronald_Fisher
The materialist’s creation story essentially claims that through a long string of accidental particle collisions a system was formed that could do work — to wit, sustaining and maintaining and replicating itself.
We should explore further the concept of work. I know Zachriel thinks a cyclone does work but I think he’s just equivocating over the term work.
What is work and what is required for a system to perform work?
Mung: What is work and what is required for a system to perform work
Work (thermodynamics)
https://en.wikipedia.org/wiki/Work_(thermodynamics)
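For what it’s worth, here is one standard textbook instance of thermodynamic work (a hedged sketch of my own, not anything Zachriel or Mung said above): pressure-volume work. For n moles of an ideal gas expanding isothermally and reversibly from V1 to V2, the work done by the gas is W = n*R*T*ln(V2/V1). The values below are purely illustrative.

import math

R = 8.314            # gas constant, J/(mol*K)
n = 1.0              # moles (illustrative)
T = 300.0            # temperature, K (illustrative)
V1, V2 = 1.0, 2.0    # initial and final volumes; only the ratio matters

W = n * R * T * math.log(V2 / V1)   # work done BY the gas on its surroundings
print(f"work done by the gas: {W:.0f} J")   # about 1729 J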
Mung,
Sure, if we accept arguendo your absurd proposition that life is “the more probable state” then there is no violation of the 2nd law. However, no one in his right mind would hold that life is “the more probable state”, so a violation of the 2nd law is on the table.
— Why don’t you tell origin of life researchers that life is “the more probable state”, and see how they will react?
So your claim is that the trajectory driven by natural selection from simple replicators toward “human brains and spaceships and jet airplanes and nuclear power plants and libraries full of science texts and novels, and super computers running partial differential equation solving software” is one which can be characterized as a process during which “more probable states are being actualized”?? You seem to be willing to accept anything in order to hold on to your mantra that “there’s no violation of the 2nd law involved.”
Sir, your argument is absurd.
That life is probable seems to me to be a very ID-friendly idea.
Life being a probable state, flowing naturally from accidental particle collisions, is precisely the argument being made by some origin of life researchers. Well, perhaps not “researchers” in the sense of doing real experiments; but it has been proposed by people we might perhaps call origin of life “pundits” or “speculators.”
Gordon Davisson referred to one proposal by Jeremy England @58. Nick Matzke sent us down this rabbit hole in the past as well.
This claim of living organisms somehow being a probable thermodynamic state — and therefore we should expect life to arise by natural processes — is indeed the source of my cryptic comment at the end of @90:
Eric, a tornado sweeping through a junkyard drives towards the assembly of a Boeing 747 🙂
Mung:
Makes sense to me.
I’m a Believer:
https://www.youtube.com/watch?v=XfuBREMXxts
Incidentally, the Jeremy England claim was mentioned here, a little over two years ago:
http://www.uncommondescent.com.....cial-case/
Sorry it’s taken me so long to get back to this. Since Mung and Eric both asked about the connection between adding free energy and decreasing probability, let me concentrate on that.
Mung @ 86: “What does an influx of free energy do to change that?”
Eric Anderson @ 89: “Indeed. A most excellent question.”
Short answer: for an open system with equilibrium boundary conditions fluctuating around its own equilibrium, the probability it’ll be in any particular macroscopic state is directly related to that state’s free energy. Specifically, each state’s probability is proportional to e^(-F/(k_B*T)), where k_B is the Boltzmann constant, T is the temperature, and F is whatever free energy measure is appropriate for the system’s boundary conditions (e.g. it might be the Helmholtz free energy, the Gibbs free energy, or something more obscure). Note the minus sign — that means that increases in free energy directly correspond to decreases in probability.
That’s rather glib and uninformative, though, so I’ll take a stab at explaining why this is and what it means. A warning, though: I’m going to be doing actual thermodynamics (and statistical mechanics), so it’s going to be a bit technical. I’ll try not to get too deep in the physics, but there will be at least a bit of math, and I’ll need to get some relevant concepts clear before getting to the real point.
Also, definitions. In particular, we need to be very careful what we mean by probable and improbable. Probabilities are defined by probability distributions. Whether something is probable or improbable depends entirely on what probability distribution governed its choice, so calling something probable or improbable is meaningless unless you specify what the relevant probability distribution is.
When we talk about the probability of states in stat mech, we’re (generally) talking about the probability that the state will arise as a result of a fluctuation around equilibrium. If it didn’t actually arise this way, these probabilities will not correspond to actual probabilities. Sewell gets this part mostly right: “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” He uses probabilities from an isolated system (rather than a system at equilibrium) as his reference, but he’s at least in the ballpark.
Here’s a simple example of the difference between these hypothetical equilibrium probabilities and actual probabilities. Take a rock that’s just been sitting in the sun (and as a result is warmer on top than at the bottom) and isolate it. What’s the most probable state for it to be in? If it were at equilibrium the most probable state would be that the thermal energy would be spread evenly throughout it, so its temperature was uniform. But just after you isolate it, the actual probabilities say it’s most likely to still be warmer at the top. As time goes on the temperature will gradually even out, and so the actual probabilities gradually shift until they match up with the equilibrium probabilities.
That is what thermodynamicists really mean when they talk about things moving to more and more probable states; they’re talking about the (hypothetical) equilibrium probabilities of the states, not their actual probabilities. It also doesn’t have anything to do with what seems intuitively probable and improbable, so don’t make the mistake of using your intuition to judge probabilities; use equilibrium probabilities and do the math.
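(To make the rock example concrete, here is a toy 1-D heat-diffusion sketch in Python; this is my own illustration with made-up parameters, not anything from the thermodynamics itself. The isolated rock starts warm on top, the hot-cold spread decays toward zero, and the total thermal energy stays fixed.)

import numpy as np

n_cells, a = 20, 0.1                      # grid cells; a = alpha*dt/dx^2 (stable if <= 0.5)
T = np.linspace(330.0, 290.0, n_cells)    # initial profile: warm top, cool bottom (K)

for _ in range(20000):                    # let the isolated rock relax
    lap = np.empty_like(T)
    lap[1:-1] = T[2:] - 2*T[1:-1] + T[:-2]
    lap[0]  = T[1] - T[0]                 # insulated ends: no heat leaves the rock
    lap[-1] = T[-2] - T[-1]
    T += a * lap

print(f"hot-cold spread:  {T.max() - T.min():.2e} K")  # essentially zero: uniform temp
print(f"mean temperature: {T.mean():.1f} K")           # conserved at 310.0 K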
Ok, now let’s apply this to a simple example: an isolated system that’s (like the rock) initially in a nonequilibrium state. For an isolated system at equilibrium, every microscopically distinct state (“microstate”) is equally probable at any given time. But most of these states look a lot alike at the macroscopic level — for instance, there is a huge number of ways the system’s energy can be scattered through all of its various atoms, molecules, etc, but in most of them the energy is pretty evenly distributed. We can group these microstates into groups that “look alike” at the macro level (called “macrostates”). Since each microstate is equally probable, each macrostate’s probability will be proportional to the number of microstates that correspond to it.
We’re not to the relevant part yet, but let’s get our feet wet with a little math. Start with the Boltzmann formula for entropy: S = k_B * ln(w) (where S is the entropy of a particular macrostate, k_B is Boltzmann’s constant, and w is the number of microstates corresponding to that macrostate). We can solve that for w, giving w = e^(S/k_B). The probability of that macrostate is proportional to w, so we can write that as P ~ w = e^(S/k_B). This means that higher-entropy states have higher probability (at equilibrium), and by a huge amount: k_B is only 1.38e-23 J/K, so a difference of only one J/K in entropy corresponds to a factor of 10^(10^22.5) in probability. And since entropy only increases (or stays constant) in an isolated system, so does the probability.
But we’re not really interested in isolated systems here. Let’s take a look at an open system that can exchange heat with its surroundings; but to keep things simple let’s assume that’s the only way it interacts with its surroundings and that it’s at a constant temperature T. In this case, the equilibrium distribution is a bit more complicated, because it turns out that lower-energy microstates are more probable than higher-energy microstates. Specifically, they’ll follow the Boltzmann probability distribution, where the probability of a particular microstate is proportional to e^(-E/(k_B*T)) (where E is the state’s energy). To get the probability of a particular macrostate, we multiply that by the number of microstates w = e^(S/k_B), giving P ~ e^(S/k_B) * e^(-E/(k_B*T)) = e^((S-E/T)/k_B). We can simplify this a bit using the Helmholtz free energy (abbreviated A for some reason), A = E – T*S. Rewriting the probability formula using that gives P ~ e^(-A/(k_B*T)). (Compare with the formula in my “short answer”.)
This means that in our open system, probability is directly related to Helmholtz free energy in much the same way it was related to entropy in the isolated system, but in the opposite direction (lower free energy = higher probability) because of the minus sign. Also, the relation depends on the temperature; at 1 Kelvin (very very cold), 1 Joule of free energy increase corresponds to a factor of 10^(10^22.5) decrease in probability; but at 300 Kelvin (about room temperature), 1 Joule of free energy “only” corresponds to a probability decrease of 10^(10^20.0).
That’s the basic punch line here: adding just 1 Joule of free energy to a system at 300 Kelvin pushes it into a state that would be 10^100000000000000000000 times less probable at equilibrium. But that’s still oversimplified and glib; let me go a little more into what that actually means.
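Since the numbers are easy to lose track of, here’s a small sanity check of the factors above (my own sketch; the 1 J/K and 1 J figures are the ones in the last few paragraphs):

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def log10_penalty(delta_F, T):
    # log10 of e^(delta_F/(k_B*T)): the equilibrium-probability penalty
    # for sitting delta_F above the minimum free energy at temperature T
    return delta_F / (k_B * T) / math.log(10)

# Isolated system: 1 J/K of entropy difference (from the Boltzmann formula)
print(f"1 J/K of entropy:       10^{1.0 / k_B / math.log(10):.3g}")   # ~10^(3.1e22), i.e. 10^(10^22.5)

# Open system at constant temperature: 1 J of Helmholtz free energy
print(f"1 J free energy, 1 K:   10^{log10_penalty(1.0, 1.0):.3g}")    # ~10^(10^22.5)
print(f"1 J free energy, 300 K: 10^{log10_penalty(1.0, 300.0):.3g}")  # ~10^(10^20.0)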
For one thing, what does “adding free energy” really mean? Let’s take a simple example: we add some heat to the system. Heat flows carry entropy inversely proportional to their temperature, so a flow of heat dQ at temperature T will carry entropy dQ/T with it. So if we add heat at the ambient temperature T, adding heat dQ to the system will also add entropy dQ/T to the system, so E goes up by dQ, T*S goes up by T*(dQ/T)=dQ, so the net change in free energy is dQ – dQ = 0. Well, that was pointless.
Ok, suppose we added heat at a higher temperature, T_hot. In that case, we’re adding energy dQ, and entropy dQ/T_hot, so the free energy goes up by dA = dQ – T*(dQ/T_hot) = dQ * (1 – T/T_hot). So if we add the heat at twice the ambient temperature, dA would be dQ/2 (“half free”); at three times the ambient temp it’ll be 2*dQ/3 (2/3 free), at four times it’d be 3*dQ/4 (3/4 free), etc.
(There are lots of other ways of adding free energy to a system, but let’s not get too complicated here.)
But that’s not the whole story either, because we’re only looking at the entropy change directly due to the heat being added; usually there’ll also be entropy generated inside the system. Call the internal entropy production dS_i. Including that gets us dA = dQ * (1 – T/T_hot) – T*dS_i. The second law doesn’t say anything about what the entropy production rate will be (other than it won’t be negative). All we can really say is that the system’s free energy might be increased by up to dQ * (1 – T/T_hot), and thus it might be pushed into a state that’s up to e^(dQ * (1 – T/T_hot) / (k_B*T)) = e^(dQ * (1/T – 1/T_hot) / k_B) times less probable.
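Putting the last two paragraphs into one small sketch (illustrative numbers; dS_i is whatever internal entropy production you care to assume):

def free_energy_gain(dQ, T, T_hot, dS_i=0.0):
    # dA = dQ*(1 - T/T_hot) - T*dS_i : free energy gained when heat dQ
    # arrives at temperature T_hot, the ambient temperature is T, and
    # the system internally produces entropy dS_i along the way
    return dQ * (1.0 - T / T_hot) - T * dS_i

T = 300.0   # ambient temperature, K
for m in (1, 2, 3, 4):
    dA = free_energy_gain(dQ=1.0, T=T, T_hot=m * T)   # no internal production
    print(f"heat arriving at {m}x ambient: up to {dA:.3f} J free per J of heat")
# 0.000, 0.500, 0.667, 0.750 -- the pointless, half-free, 2/3- and 3/4-free cases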
(It may bother you that the second law can’t actually tell us exactly what’s going to happen here; sorry, but that’s just the nature of the second law. When you test something against the second law, there are pretty much two possible verdicts: forbidden (i.e. not gonna happen) or not forbidden (but might not happen anyway). That’s just the way the law works.)
There are a bunch more complications I should touch on, but first let me give a quick sense of the scales we’re talking about here. The Earth receives about 1.7e17 Joules/second of energy (sunlight) from the sun. That’s near-blackbody radiation at a temperature of about 6000 Kelvin, which has an entropy of about S = (4/3)*E/T = 3.8e13 J/K per second. Its free energy (at Earth’s ~300 K temperature) is thus 1.7e17 J – (3.8e13 J/K * 300 K) per second = 1.6e17 J per second. This corresponds to a probability decrease of up to a factor of 10^(1.6e37) per second. Which is inconceivably huge. And that’s just in a single second.
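And here is the same arithmetic for the Earth figures (my own check; the 1.7e17 W, 6000 K, and 300 K inputs are the ones just quoted):

import math

k_B     = 1.380649e-23   # J/K
P_sun   = 1.7e17         # W: sunlight intercepted by the Earth
T_sun   = 6000.0         # K: approximate temperature of that radiation
T_earth = 300.0          # K: approximate ambient temperature here

S_rate = (4.0 / 3.0) * P_sun / T_sun    # entropy flux of blackbody radiation
F_rate = P_sun - T_earth * S_rate       # free energy delivered per second

print(f"entropy flux:       {S_rate:.2g} J/K per second")   # ~3.8e13
print(f"free energy flux:   {F_rate:.2g} J per second")     # ~1.6e17
print(f"probability factor: 10^({F_rate / (k_B * T_earth) / math.log(10):.2g}) per second")
# ~10^(1.7e37), the same ballpark as the 10^(1.6e37) figure above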
Ok, I’ll finish with a sketch of some of the ways things might be more complicated. If the system in question has other near-equilibrium interactions with its surroundings (like expanding/contracting at constant pressure, exchanging chemicals at constant chemical potential, etc), you can add terms to the free energy to take these into account. The most common example is the Gibbs free energy, G = E – T*S + pV, which handles the expansion/contraction case. That’s why I used “F” as free energy in my “short answer” — depending on the interactions between the system and its surroundings, the relevant free energy function might be A or G or something even more complicated.
The situations where the free energy approach really runs into trouble are where the temperature (or pressure or chemical potentials or…) is changing or nonuniform. If they’re close to constant, you can still use free energy as an approximation (e.g. I’m treating the Earth as having a uniform temp of 300 K, but that’s only approximately correct) and maybe add correction terms where needed. But if things are highly nonuniform and/or you’re trying to get more precise results, you’re going to need a more complicated analysis.
But these more complicated situations are where the “increasing probability” formulation of the second law breaks down as well. The “increasing probability” and “decreasing free energy” formulations of the second law are really just slightly different ways of describing the same thing; they both work for the same reasons, and they’ll both break down for the same reasons.
Gordon Davisson: But if things are highly nonuniform and/or you’re trying to get more precise results, you’re going to need a more complicated analysis.
Or a detailed simulation, such as with weather forecasting.
Dear Gordon:
Thank you for taking time to provide a detailed and thoughtful response. As interesting and important as this may be, I realize it is but a hobby for most of us, so no need to apologize for the delay, and I appreciate you taking the time. I’ve been rather tied up myself and haven’t been able to respond sooner. (If I get time, I may put some of this up as a new head post, but we’ll see what my schedule looks like the next few days.)
Also, before focusing on some potential areas of disagreement, let me take a stab at identifying perhaps a couple of areas of agreement with what you wrote.
1. Energy is needed for the processes we are interested in for present purposes, namely, processes giving rise to the origin of life and living organisms. For example, some chemical reactions might be expected to occur in an isolated system (however we define that), but it is true that other reactions will require an input of energy. It is also true that without an input of energy our system will eventually reach equilibrium, which for our purposes, would be the death-knell.
Thus, energy is critical, not only in a general sense, but it must be available to the system in sufficient quantities.
On that much I agree. The question remains how we define our system, how much energy is needed, and whether, for the origin of life, incoming solar radiation is necessary or whether more local energy sources would suffice (radioactive decay, volcanic vents, deep sea hydrothermal vents, etc.).
2. On a related note, you highlight the fact that incoming free energy creates the possibility for a system to be out of equilibrium. This is true enough. Indeed, if we isolate any system we would expect it to eventually reach equilibrium over some period of time, and we would be left with a dead, lifeless system that is both rather simple to describe (from a macrostate perspective) and, for most purposes, rather uninteresting.
As a result, it is true that an influx of free energy can move a system out of equilibrium and provide the possibility that certain macrostates will arise that would not have been possible with a system stuck at equilibrium. This is almost a truism, but I agree with you that it is worth pointing out. Stated in layman’s terms: given a system stuck at equilibrium, it is more likely that something interesting will happen if free energy enters the system than if nothing enters the system.
On that much I agree. The question remains as to what “interesting” thing we need to have happen and what the realistic probability of such an event is.
—–
Despite agreement on these basic facts, however, there are a couple of fundamental problems with the idea that free energy, in and of itself, has much to say about the topic at hand: namely, the origin of life and living organisms.
First, the key questions that have been raised by many origin of life critics, most of them not intelligent design proponents to be sure, have never been about the sheer quantity of energy. Receiving more energy has never been the concern. Furthermore, even the availability of energy, while it has been noted, is not of particularly deep concern. Yes, abiogenesis researchers recognize the need for energy generally, as discussed in (1) above. Yet there are plenty of sources of energy on the Earth and many possibilities have been proposed, including volcanic vents, radioactive decay, lightning, deep sea hydrothermal vents and the like.
Thus, the “Earth-is-an-open-system” argument, and the related “compensation” claim, are simply irrelevant to the issues at hand. Abiogenesis researchers have, to my knowledge, rarely been concerned that abiogenesis is implausible due to a lack of energy and that “if only we could figure out how to get more free energy into our system, then the issues would be solved.” And they certainly have not been missing the point that the Earth receives energy from the Sun. It does; but that fact just doesn’t help address the issues. Thus the question of having enough free energy or available free energy has rarely been on the table – certainly not as a major fixture of abiogenesis critiques.
Second, you have made a good argument that incoming free energy can help move a system formerly at equilibrium into a far from equilibrium state. You have even provided some nice calculations showing a huge difference between the probability of a far from equilibrium state occurring with incoming free energy contrasted with the absence of such incoming energy.
Yet this too does not address the issues at hand. While “far from equilibrium” is a characteristic of most living systems, it is not the most important characteristic and definitely not the only characteristic – certainly not the characteristic that determines, in and of itself, whether something is alive. Indeed, most natural systems that are far from equilibrium (at least on a temporary basis) are not living organisms at all.
You have shown that adding free energy to a system makes a far from equilibrium macrostate more probable. Agreed. And contrasting that with an equilibrium baseline (in which there is almost no practical chance for a far from equilibrium macrostate) does make the increased probability look impressive. Based on your calculations we can also conclude that adding more energy to the system would correspondingly increase the probability of a far from equilibrium macrostate. Again, true enough. And even more energy would further increase it.
Yet as we note this “more energy => higher probability of a far from equilibrium macrostate” formulation, we should start to feel a sense of unease that perhaps we have simplistically misunderstood or misstated the problem. Again, capable origin of life researchers have never taken the approach of thinking that the answer lies in pouring more energy into a system. Ironically, too much energy can even be a problem, which means our formulation is, by definition, heading down the wrong track.
You have been discussing probability in a way that relates to a system being far from equilibrium. But for the origin of life and living systems, being far from equilibrium is not the only or even the key issue. Much more relevant are our observations that life is characterized by integrated functional complexity and by information-rich, coded systems. What is the difference in the probability of an integrated, functional bacterial flagellum versus a random jumble of parts? What is the difference in probability between a meaningful nucleotide or amino acid sequence and a random string of molecules?
Calculations that focus on the strict thermodynamic “probability” of having a far from equilibrium macrostate do not address those key probabilities. Indeed, they cannot, as they are simply incapable of distinguishing between such macrostates.* We would be making a category error to think that they can. Nor can we claim that the only “probabilities” worth calculating are thermodynamic ones, when the key issues we are dealing with are not primarily thermal issues in the first place.
—–
In conclusion, when we carefully analyze what free energy brings to the table, we can conclude that while, yes, energy is important, having adequate quantity and availability of energy is really a bit player among the larger problems besetting the abiogenesis story. And it does not even address the more fundamental issues.
The bottom line is that talking about open systems and the availability of free energy does not bring anything meaningful to the table to help the abiogenesis story. The abiogenesis proponent is thus required to fall back on the general claim: that some incredibly lucky coincidence occurred that, against all expectation and probability, somehow in some unknown way resulted in first life.
—–
* Note, as I have mentioned previously, thermodynamic constraints may come into play when analyzing a particular chemical reaction or the likelihood of a specific molecular complex arising. But at the macrostate level, the thermodynamic calculations are simply blind to the key, fundamental difference between an integrated, functioning, information-rich system and a random assemblage of useless parts.
Further to my prior comment:
Back to the question of whether the Second Law can be applied to something like the origin of life, we can conclude that either the Second Law is:
(a) irrelevant to the origin of life, and therefore imposes no constraints; or
(b) relevant to the origin of life, and therefore imposes constraints that need to be considered.
Although (a) is likely incorrect, I can at least appreciate someone who argues that (a) is a reasonable position. However, if they argue (a), then we would expect them to have the intellectual integrity to also acknowledge that the claim that the “Earth is an open system” or any other form of the “compensation” argument is, by definition, irrelevant to the origin of life and can never serve as an explanation.
Alternatively, if one argues (b), as a number of origin of life researchers have, then we would expect them to have the intellectual integrity to also acknowledge that: (1) arguments based on the Second Law are potentially legitimate if framed carefully and cannot be blithely dismissed as “creationist” distractions; and (2) the “Earth-is-an-open-system” argument and any similar “compensation” argument, although technically relevant, are unhelpful, because having enough energy available on the Earth has never been an issue.
In either case (a) or (b), we would expect an objective observer to acknowledge the following corollaries: (x) simply adding free energy to a system does not meaningfully increase the probabilities of forming an information-rich, integrated, complex functioning system and, indeed, without control mechanisms may even hinder such formation in particular cases; and (y) although perhaps it should not be articulated in terms of the Second Law or should be articulated more carefully, the key substantive problem of explaining the origin of such systems through purely natural processes remains.
One might still take the strict view that the Second Law can only be discussed and understood purely in terms of thermal aspects. There seem to be decent arguments to the contrary, but I can appreciate such a restrictive view – as long as the individual also takes one of the approaches in (a) or (b) listed above and is honest about acknowledging the corollaries as well.