Uncommon Descent Serving The Intelligent Design Community

A Little Timeline on the Second Law Argument


A little timeline on the second law argument, as applied to evolution (see my BioComplexity article for more detail):

1. Scientists observed that the temperature distribution in an object always tends toward more uniformity, as heat flows from hot to cold regions, and defined a quantity called “entropy” to measure this randomness, or uniformity. The first formulations of the second law of thermodynamics stated that thermal “entropy” must always increase, or at least remain constant, in an isolated system.

2. It was realized that the reason temperature tends to become more uniformly (more randomly) distributed was purely statistical: a uniform distribution is more probable than a highly non-uniform distribution. Exactly the same argument, and even the same equations, apply to the distribution of anything else, such as carbon, that diffuses. In fact, one can define a “carbon entropy” in the same way as thermal entropy, and show, using the same equations, that carbon entropy must always increase, or remain constant, in an isolated system.
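The statistical picture in points 1 and 2 can be illustrated with a minimal simulation, assuming a 20-cell bar with insulated ends and a simple explicit finite-difference scheme: a substance ("carbon", say) initially concentrated in one cell spreads out by diffusion, and the entropy of its distribution rises monotonically toward the uniform maximum.

```python
# Minimal sketch: 1-D diffusion drives a concentration profile toward
# uniformity, and the entropy of the profile rises as it does so.
import math

def entropy(c):
    """Shannon-style entropy of the normalized concentration profile."""
    total = sum(c)
    return -sum((x / total) * math.log(x / total) for x in c if x > 0)

# Start with all the "carbon" piled into one cell of a 20-cell bar.
cells = [1.0] + [0.0] * 19
history = [entropy(cells)]

D = 0.25  # diffusion coefficient (explicit scheme is stable for D <= 0.5)
for _ in range(2000):
    new = cells[:]
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else cells[i]                # insulated
        right = cells[i + 1] if i < len(cells) - 1 else cells[i]  # ends
        new[i] = cells[i] + D * (left - 2 * cells[i] + right)
    cells = new
    history.append(entropy(cells))

# Entropy never decreases and approaches the uniform maximum ln(20).
assert all(b >= a - 1e-12 for a, b in zip(history, history[1:]))
print(round(history[-1], 4), round(math.log(20), 4))  # both ~2.9957
```

The update rule is a doubly stochastic averaging, which is why the entropy can only go up or stay constant here; the same arithmetic applies whether the diffusing quantity is heat or carbon.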

3. Since the reason thermal and carbon (and chromium, etc) distributions become more uniform in an isolated system is that the laws of probability favor more random, more probable, states, some scientists generalized the second law with statements such as “In an isolated system, the direction of spontaneous change is from order to disorder.” For these more general statements, “entropy” was simply used as a synonym for “disorder” and many physics texts gave examples of irreversible “entropy” increases that had nothing to do with heat conduction or diffusion, such as tornados turning towns into rubble, explosions destroying buildings, or fires turning books into ashes.

4. Some people then asked: what could be a more spectacular increase in order, or decrease in “entropy”, than civilizations arising on a once-barren planet? They argued that the claim that entirely natural causes could turn dust into computers was contrary to these more general statements of the second law.

5. The counter-argument offered by evolutionists was always: but the second law only says order cannot increase in an isolated system, and the Earth receives energy from the sun, so computers arising from dust here does not violate the second law, as long as the increases in order here are “compensated” by decreases outside our open system.

6. In several publications, beginning in a 2001 Mathematical Intelligencer letter, I showed that while it is true that thermal entropy can decrease in an open system, it cannot decrease faster than it is exported through the boundary. Stated in terms of “thermal order” (= the negative of thermal entropy): in an open system, thermal order cannot increase faster than it is imported through the boundary, and likewise “carbon order” cannot increase faster than it is imported through the boundary, etc. (Though I was not the first to notice this, it seemed to be a very little known fact.) Then I argued that the more general statements of the second law could also be generalized to open systems, using the tautology that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” Thus the fact that order can increase in an open system does not mean that computers can appear on a barren planet as long as the planet receives solar energy; something must be entering which makes the appearance of computers not extremely improbable, for example: computers.

7. I’m sure that physics texts are still being written which apply the second law to tornados and explosions and fires, and still say evolution does not violate these more general statements of the second law because they only apply to isolated systems. But I have found that after reading my writings on the second law (for example, my withdrawn-at-the-last-minute Applied Mathematics Letters article) or my videos (see below) no one wants to talk about isolated and open systems, they ALL now say, the second law of thermodynamics should only be applied to thermodynamics, it is only about heat. “Entropy” never meant anything other than thermal entropy, and even when physics textbooks apply the second law to more general situations, they are really only talking about thermal entropy. Whether the second law still applies to carbon entropy, for example, where the equations are exactly the same, is not clear.

8. Of course you can still argue that the “second law of thermodynamics” should never have been generalized (by physics textbook writers; creationists were not the first to generalize it!) and so it has no relevance to evolution. But there is obviously SOME law of Nature that prevents tornados from turning rubble into houses and cars, and the same law prevents computers from arising on barren planets through unintelligent causes alone. And if it is not a generalization of the second law of thermodynamics, it is a law of Nature very closely related to the second law!

Note added later: as clearly stated in the BioComplexity article, the statements about “X-entropy”, where X = heat, carbon, chromium,…, in an isolated or open system, assume nothing is going on except diffusion, in which case they illustrate nicely the common sense conclusion (tautology, actually) that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” Thus just showing that the statements about X-entropy are not always valid in more general situations does not negate the general, common sense conclusion, and does not allow you to argue that, just because the Earth is an open system, civilizations can arise from dust here without violating the second law (or at least the fundamental natural principle behind the second law). At some point you are going to have to argue that energy from the sun makes the spontaneous rearrangement of atoms into computers and spaceships and iPhones not astronomically improbable; all the popular easy ways to avoid the obvious conclusion are now gone. (See Why Evolution is Different, excerpted—and somewhat updated—from Chapter 5 of my Discovery Institute Press book.)

[youtube 259r-iDckjQ]

Comments
Mung: If you have a perfectly shuffled deck, there’s only one of two things that can happen. It can stay the same. It can become “more ordered.” Neither would entail a violation of the second law.

That’s right. In either case, overall entropy will increase. If a person sorts the cards, the person turns energy from food into the mechanism required to reorder the cards. This includes not just mechanical energy, but the energy required by the brain to make the necessary decisions.

CS3: They don’t say, “well, unless sunspot activity is particularly favorable that day”

They do say that entropy is a measure of molecular randomness or disorder, so it’s clear that playing cards are an analogy.

harry: But what drives matter to assemble itself into such an incredibly complex mechanism as photosynthesis in the first place?

The mechanism was the result of a long period of evolution, with overall entropy increasing the entire time.

harry: That it can be determined that intelligent agency was a causal factor in a given phenomenon coming about is non-controversial when it comes to SETI, forensic pathology and archaeology.

SETI hasn’t claimed to have found “intelligent agency” from outer space. Forensic pathology and archaeology point to specific intelligent agents, not a nebulous and ill-defined agent.

niwrad: Evolutionists hate to admit that biology *is* chemistry-based cybernetics because that means organisms are designed, but the reality is so.

Well, no. If you define cybernetics in such a way as to encompass biological systems, that doesn’t mean they were designed. Definitions aren’t scientific arguments, nor do they determine the history of life.

Zachriel, February 22, 2016, 06:52 AM PDT
GaryGaulin @ 41,

Methodological naturalism is just fine as an approach to science. For it to work well though, it has to include ALL known realities when it considers what might be the causal factors in a given phenomenon coming about. Intelligence is known to be a reality. Anybody who denies that is deficient in it. Sometimes the only plausible explanation for the emergence of a given phenomenon is intelligent agency, as in the emergence of technology, which, by definition, is the result of the application of knowledge for a purpose. This is why functionally complex technology never comes about mindlessly and accidentally. This fact is a huge clue to keep in mind when we consider the origin of the most functionally complex technology known to us, which is, of course, the digital information-based nanotechnology of life.

harry, February 22, 2016, 05:22 AM PDT
Niwrad @ 23: "But per se it doesn’t forbid at all the presence of power able to go toward improbable states. In a conductor the laws of physics say the current is zero. But if you close the circuit and introduce a current generator the current flows, without violating any law."

Just to see if we’re on the same page, do you agree that life doesn’t violate the Second Law because it’s powered by "food", whether that food is sunlight, eating other organisms, or "eating" chemicals as some microorganisms do? If you do, you’re miles ahead of Dave Scott, who claimed that he was violating the Second Law by typing a sentence. Granville, do you also agree that life doesn’t violate the Second Law because it powers its actions by the food it eats?

MatSpirit, February 22, 2016, 04:02 AM PDT
The only person who can read back what was written by code precisely would be the person who developed the code and wrote the program. If part of the code is temporal and probabilistic - as seen in biological systems - then even the person who wrote the code can't decipher the outcome precisely.

Me_Think, February 22, 2016, 01:15 AM PDT
Arthur Hunt #39

When you, for example, write information into a read/write memory by means of an arbitrary code, and read back and decode that information to do some functional task, you are not doing an *analogy* of information processing; you are doing *real* information processing. The cell, among other things, does exactly that. So when informaticians study the cell and recognize paradigms of informatics, they do not see "what they want to see" -- as you say -- but exactly what is there. Evolutionists hate to admit that biology *is* chemistry-based cybernetics, because that means organisms are designed, but the reality is so. You say it is all analogy, Dawkins says it is an illusion, but these are only naive escamotages [sleights of hand].

niwrad, February 22, 2016, 12:53 AM PDT
harry:
But the explanation of the nature of that intelligent agent, as I said in my previous post, is outside the realm of science’s competence,
And where were you taught that? It's exactly what methodological naturalism teaches.

GaryGaulin, February 21, 2016, 11:34 PM PDT
GaryGaulin @ 38,
Harry, explain the intelligent cause that created photosynthesis.
OK. But the explanation of the nature of that intelligent agent, as I said in my previous post, is outside the realm of science's competence, so I can only provide a metaphysical explanation. There is a being whose essence is "to be." That being is the first cause, the prime mover, the primary and fundamental reality, and necessarily exists outside of time, space, matter and energy, as He brought them into being. For reasons understood completely only by that being, after He launched the Universe ex nihilo, He arranged some of it into what we refer to as photosynthesis.

harry, February 21, 2016, 09:04 PM PDT
niwrad #28

An informatician will use analogy and metaphor to make sense of that which they do not understand. Basically, they would see what they want to see, what they can comprehend. These tools are limited, and absent some connection with chemistry, they fail in conveying true understanding of mechanism and origin. For example, how many times would a computer programmer use the exact same command to execute a process OR to delete the command (and thereby destroy the code)? I submit that an informatician would be blind to such occurrences, even though living things are teeming with them. Any analogy that an informatician might use to describe or understand such processes would fall short.

Arthur Hunt, February 21, 2016, 08:47 PM PDT
Harry, explain the intelligent cause that created photosynthesis. A computer model would be preferable but explaining how to go about modeling the process would be a good start. If you need help then click on my name above for links to my models and the theory of intelligent design that I represent.

GaryGaulin, February 21, 2016, 08:46 PM PDT
GaryGaulin @35,

That it can be determined that intelligent agency was a causal factor in a given phenomenon coming about is non-controversial when it comes to SETI, forensic pathology and archaeology. That it is controversial when it comes to the origin of life is due to the religious/philosophical bias of atheists who lack the relentless objectivity and religious/philosophical neutrality that genuine science requires. They have redefined science as "that which confirms atheism," which has perverted modern science. Objective science would admit that the most plausible explanation for the fine-tuning of the Universe and the ultra-sophisticated, digital information-based nanotechnology of life, the functional complexity of which is light years beyond anything modern science knows how to build from scratch, is intelligent agency. Relentlessly objective, religiously/philosophically neutral science does not have to explain the nature of that intelligent agent, which would be outside of the realm of its competence, but only must admit that currently the most plausible explanation for the Universe and life within it is intelligent agency.

harry, February 21, 2016, 08:26 PM PDT
How do such mechanisms come about by intelligent cause?

You've never driven a car.

Mung, February 21, 2016, 08:10 PM PDT
Harry:
How do such mechanisms come about mindlessly and accidentally?
How do such mechanisms come about by intelligent cause? The floor is yours; please explain your scientific theory.

GaryGaulin, February 21, 2016, 07:52 PM PDT
Energy applied to matter without some sort of teleonomic mechanism to harness it causes disintegration, or an increase in entropy. Integration of matter that decreases entropy is always the result of energy applied to some sort of teleonomic mechanism:
raw matter within an isolated system, plus a teleonomic machine, might yield auto-organisation derived from endogenous [that which comes from within] energy. Raw matter within a non isolated system, plus a teleonomic machine may yield auto-organisation derived from endogenous and/or exogenous [that which comes from without] energy. Within both isolated and non-isolated systems, however, a mechanism (machine, teleonomy, know-how) is essential if any auto-organisation is to result. -- A.C. Macintosh, Information and Entropy – Top-down or Bottom-up Development in Living Systems?, citing Wilder-Smith(1)
How do such mechanisms come about mindlessly and accidentally? Consider photosynthesis, which maintains atmospheric oxygen levels and supplies all of the organic compounds and most of the energy necessary for life on Earth.(2) How did the astonishingly complex mechanism referred to as photosynthesis itself get assembled?(3) It is easy to say, as Denis Alexander does in the comments section of his Big Questions Online article entitled "How are Christianity and Evolution Compatible?" that
The entropy argument is another red herring. Of course life in its complexity would run counter to the second law of thermodynamics if it were operating in a closed system. But it’s not, it’s in an open system in which the sun’s energy runs down as life’s complexity runs ‘up-hill’. Photosynthesis provides a classic example, in which energy from the sun is translated into cellular energy using the chemical chlorophyll inside plant cells.
But what drives matter to assemble itself into such an incredibly complex mechanism as photosynthesis in the first place? See The Mechanism of Photosynthesis(3). An excerpt:
The production of these final products [of photosynthesis] is carried out as a result of astonishing and exceedingly complex processes and mechanisms in the leaf. In order for the carbohydrate molecules we commonly call sugar to be formed from carbon dioxide and water, exceedingly complex and delicate measures and processes must be implemented. These processes involve very complex systems working at the atomic level, and even at the level of the electrons orbiting around them. In the process, there are a large number of elements, consisting of different pigments, various salts, minerals, trace elements (such as ferredoxin and adenosine triphosphate), sub-catalysts, and other substances and chemicals with various different responsibilities. Bearing in mind that plants need 30 different proteins just to produce a sugar molecule as simple as saccharose, you can see just how complex the entire system is.
(1) http://c.ymcdn.com/sites/network.asa3.org/resource/dynamic/forums/20130306_234609_22078.pdf
(2) https://en.wikipedia.org/wiki/Photosynthesis
(3) http://www.harunyahya.com/en/Books/4780/photosynthesis-the-green-miracle/chapter/4747

harry, February 21, 2016, 07:36 PM PDT
Welcome, CS3.

Mung, February 21, 2016, 04:38 PM PDT
Diffusion through a solid *in the presence of gravity*. In this case, lighter components(*) will diffuse toward the top, and denser ones toward the bottom, which can certainly decrease their X-entropies (despite there being no X-entropy flux into or out of the system).
This is certainly true, but, if you believe Sewell claims that natural forces cannot produce anything other than uniform, random distributions of matter, you are misunderstanding his point. He is clear that the claim “In an open system, thermal order (or X-order, where X is any diffusing component) cannot increase faster than it is imported through the boundary” is only true when “assuming nothing is going on but diffusion” (see footnote 4 of his Bio-complexity paper). From Chemistry by Zumdahl and Zumdahl:
Nature always moves toward the most probable state available to it. (emphasis mine)
Physics can and does restrict the states available to systems. For example, if you flip 100 fair coins, it is very improbable that all will land on heads. From University Physics by Young and Freedman:
Suppose you toss N identical coins on the floor, and half of them show heads and half show tails. This is a description of the large-scale or macroscopic state of the system of N coins. A description of the microscopic state of the system includes information about each individual coin: Coin 1 was heads, coin 2 was tails, coin 3 was tails, and so on. There can be many microscopic states that correspond to the same macroscopic description. For instance, with N=4 coins there are six possible states in which half are heads and half are tails. The number of microscopic states grows rapidly with increasing N; for N=100 there are 2^100 = 1.27x10^30 microscopic states, of which 1.01x10^29 are half heads and half tails. The least probable outcomes of the coin toss are the states that are either all heads or all tails. It is certainly possible that you could throw 100 heads in a row, but don't bet on it: the possibility of doing this is only 1 in 1.27x10^30. The most probable outcome of tossing N coins is that half are heads and half are tails. The reason is that this macroscopic state has the greatest number of corresponding microscopic states.
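The counts quoted above are easy to verify directly, for instance with a few lines of Python:

```python
# Verify the coin-toss microstate counts quoted from University Physics.
import math

N = 100
total = 2 ** N            # all microstates of 100 coins
half = math.comb(N, 50)   # microstates with exactly 50 heads

print(f"{total:.3g}")                        # ~1.27e+30, as quoted
print(f"{half:.3g}")                         # ~1.01e+29, as quoted
print(f"P(all heads) = 1 in {total:.3g}")
print(f"P(exactly half heads) = {half / total:.3f}")  # ~0.080
```

Even the "most probable" macrostate (exactly half heads) has probability only about 8%, but the macrostates near half-and-half collectively dominate, which is the point of the quoted passage.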
However, if the coins all had little magnets with one pole on the heads side and the other pole on the tails side, and a magnet were placed under the table, with the magnetic field entering the system, all heads would no longer be extremely improbable. This is why Sewell says:
If an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering (or leaving), which makes it not extremely improbable.
In both my example and yours, that is exactly what is happening: something is entering (or leaving) that makes what happens not extremely improbable (i.e., the magnetic field and the gravitational field, respectively). The reason these happen is because the physics restricts the set of states available, not because some inequality is satisfied between the “entropy” of the magnetic or gravitational field entering (however that would be calculated) and the “heads-tails” ordering of the coins. We wouldn’t say, if the entropy of the magnetic field is x, we can only expect configurations with 90% heads, but if it is y, we can expect 100% heads. Furthermore, while the entry of the magnetic field greatly increases the probability of the microstates of “all heads” or “mostly heads” for the coins with magnets, it does not increase the probability of other types of clearly unrelated microstates, such as “spells out a lengthy passage from Shakespeare in binary”. So, it is still possible to argue that, thanks to Darwinian evolution, etc., the physics actually do restrict the set of possible states into which the atoms on the originally barren Earth could rearrange themselves over several billion years, such that a collection of human brains, iPhones, jet planes, and encyclopedias is not extremely improbable. That is why Sewell says things like
Those wanting to claim that the basic principle behind the second law is not violated in scenario D [the origin and evolution of life on Earth] need to argue that, under the right conditions, macroscopically describable things such as the spontaneous rearrangement of atoms into machines capable of mathematical computations, or of long-distance air travel, or of receiving pictures and sounds transmitted from the other side of the planet, or of interplanetary space travel, are not really astronomically improbable from the microscopic point of view, thanks to the influx of solar energy and to natural selection or whatever theory they use to explain the evolution of life and of human intelligence.
Some people are happy to make that argument, and see no reason to resort to the “compensation” entropy computations as made by many textbooks and people like Asimov, Styer, and Bunn. In that case, their position is actually not in conflict with Sewell’s argument.
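The magnet-and-coins example above can be made quantitative with a toy calculation; the 0.99 per-coin bias is an assumed number chosen only for illustration. Once something entering the system biases each coin toward heads, "all heads" stops being astronomically improbable:

```python
# Toy version of the magnets-under-the-table example: a field that biases
# each coin toward heads changes which macrostates are probable.
p_fair = 0.5      # fair coin, no field
p_biased = 0.99   # assumed per-coin heads probability once the field enters
N = 100

prob_all_heads_fair = p_fair ** N
prob_all_heads_biased = p_biased ** N

print(f"fair:   {prob_all_heads_fair:.2e}")    # ~7.89e-31
print(f"biased: {prob_all_heads_biased:.2f}")  # ~0.37
```

Note what the field does and does not do: it makes "all heads" likely, but it does nothing for unrelated macrostates such as the coins spelling out Shakespeare in binary, which is exactly the distinction drawn above.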
Generally, the way other entropies are related to the thermodynamic entropy is that they’re part of it; that is, the thermodynamic entropy is sometimes the total of various partial entropies (which is why I call it the “total” entropy). For instance, the entropy of a classical ideal gas can be broken down into thermal and configurational components (relating to the movement and arrangement of the gas molecules, respectively). Again, the terminology is confusing: in this case, thermodynamic entropy = thermal entropy + configurational entropy.
You have given a nice description of thermodynamic entropy in the context of energy, which is what most practical applications of the Second Law involve. Indeed, energy can be converted into different forms, and the Second Law only requires that the total thermodynamic (energy) entropy increases in the universe. Thus, a system can become more ordered when accompanied by a sufficient release of heat so as to increase the overall (energy) entropy of the universe. However, this increase in “order” refers to a very specific type of order, i.e., what is defined by the Third Law of Thermodynamics:
The entropy of a pure, perfect crystalline substance (perfectly ordered) is zero at absolute zero (0 K).
So, when water freezes and releases heat from the system, its molecules can become more ordered in that they more closely approach a pure, perfect crystalline substance. However, that doesn’t mean that, with sufficient heat release, the molecules can more closely approach a pure, perfect Apple iPhone. Isaac Asimov wrote:
You can argue, of course, that the phenomenon of life may be an exception [to the second law]. Life on earth has steadily grown more complex, more versatile, more elaborate, more orderly, over the billions of years of the planet’s existence. From no life at all, living molecules were developed, then living cells, then living conglomerates of cells, worms, vertebrates, mammals, finally Man. And in Man is a three-pound brain which, as far as we know, is the most complex and orderly arrangement of matter in the universe. How could the human brain develop out of the primeval slime? How could that vast increase in order (and therefore that vast decrease in entropy) have taken place?
Now, if, when Asimov says that the human brain “as far as we know, is the most complex and orderly arrangement of matter in the universe”, he means that the human brain, as far as we know, most closely approximates a pure, perfect crystalline substance of all matter in the universe, then I agree your argument and entropy computations are completely valid. However, it is pretty clear that is not what Asimov meant, and not what the debate is about. Clearly, there are other types of order, not directly related to energy, such as “heads-tails” coin entropy as illustrated in the University Physics quote, or the type of order in the human brain that so impressed Asimov, that can be defined. These types of order are not really true thermodynamic quantities, and just because there is a relationship between entropies of different forms of energy does not mean that thermodynamic (energy) entropies can also be interconverted to the probability of anything. Now, if you want to say that, since these are not “true” thermodynamic quantities, then we are not really talking about the Second Law of Thermodynamics, then that is fine. However, it is clear that the same statistical principles that explain why the Second Law predicts against free compressions of gases are also applicable to understanding whether you are likely to flip all heads on 100 fair coins, or whether atoms on a barren planet are likely to arrange themselves into Apple iPhones. Again, there is always the caveat: unless the physics restrict the available states such that these configurations are not extremely improbable. To be sure, for any process to take place (e.g., for the coins to flip at all), there has to be a net increase in the thermodynamic (energy) entropy of the universe. However, that is not the only consideration. 
All coin flips are equally likely from a purely energy perspective, but it is still extremely improbable to get all heads -- again, unless something (like the magnetic field) enters that makes it not extremely improbable.
Also, entropy can be thought of as a measure of disorder, but it’s quite different from what we normally think of as disorder. For instance: which is more disordered, a well-shuffled deck of cards, or a sorted deck that’s 1 degree warmer (and otherwise identical)? Most people would say the shuffled deck is more disordered, but the warmer deck has the higher entropy. (In more detail: shuffling a deck of 52 cards increases its entropy by k_B*ln(52!) = 2.16e-21 J/K, while heating a 100g deck with a specific heat of 0.2 from 300K (= 26.85 Celsius = 80.33 Fahrenheit) to 301K (= 27.85 Celsius = 82.13 Fahrenheit) would increase its entropy by 0.28 J/K. In this example, the entropy change due to the temperature difference is over 100 quintillion times bigger than the difference due to shuffling.) .. BTW, as the deck-of-cards example shows, thermodynamic entropy isn’t only about heat, but it is mostly about heat.
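The deck-of-cards numbers in that passage can be reproduced exactly; in the sketch below the specific heat of 0.2 is taken to be in cal/(g·K), which is the reading that makes the quoted 0.28 J/K figure come out:

```python
# Reproduce the shuffled-deck vs. warmed-deck entropy comparison.
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K

# Entropy of shuffling: k_B * ln(52!); lgamma(53) = ln(52!)
S_shuffle = k_B * math.lgamma(53)

# Entropy of heating 100 g of cards from 300 K to 301 K: m*c*ln(T2/T1),
# with c = 0.2 cal/(g*K) = 0.8372 J/(g*K)
S_heat = 100 * 0.2 * 4.186 * math.log(301 / 300)

print(f"shuffle: {S_shuffle:.2e} J/K")       # ~2.16e-21 J/K
print(f"heat:    {S_heat:.2f} J/K")          # ~0.28 J/K
print(f"ratio:   {S_heat / S_shuffle:.1e}")  # ~1.3e+20
```

The ratio of roughly 10^20 is the "over 100 quintillion times bigger" figure in the comment.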
So, let’s apply this to the Earth and evolution: the sunlight received by Earth carries about 3.8e13 W/K of entropy, and the thermal (mostly infrared) radiation leaving Earth carries at least 3.7e14 W/K, for a net flux of at least 3.3e14 W/K leaving Earth (see here). There are some other entropy fluxes, but I’m pretty sure they’re too small to matter. So, as far as the second law is concerned, the total entropy of Earth could be decreasing at up to 3.3e14 J/K per second, and the second law places no restriction on what form that decrease might take.
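Those flux figures are consistent with a standard back-of-the-envelope radiation estimate. In the sketch below the absorbed power, albedo, and temperatures are assumed round values, so it recovers the same order of magnitude rather than the exact numbers quoted:

```python
# Order-of-magnitude estimate of Earth's radiative entropy fluxes.
# Blackbody radiation carrying power P at temperature T transports
# entropy at a rate of roughly (4/3) * P / T.
P = 1.22e17      # W absorbed by Earth (solar constant * cross-section * 0.7)
T_sun = 5772.0   # K, effective solar temperature (assumed)
T_earth = 255.0  # K, Earth's effective emission temperature (assumed)

S_in = (4 / 3) * P / T_sun     # entropy arriving with sunlight
S_out = (4 / 3) * P / T_earth  # entropy leaving as infrared

print(f"in:  {S_in:.1e} W/K")                       # ~2.8e+13 W/K
print(f"out: {S_out:.1e} W/K")                      # ~6.4e+14 W/K
print(f"net: {S_out - S_in:.1e} W/K leaving Earth")
```

The net outflow lands within a small factor of the 3.3e14 W/K figure quoted above: low-entropy sunlight in, high-entropy infrared out.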
So are you saying that, as long as the Earth is receiving sufficient sunlight from the Sun, neither the Second Law nor the statistical principles behind it can say anything about which types of arrangements of playing cards are more likely? And yet, Zumdahl and Zumdahl give exactly that example:
As another example, suppose you have a deck of playing cards ordered in some particular way. You throw these cards into the air and pick them all up at random. Looking at the new sequence of the cards, you would be very surprised to find that it matched the original order. Such an event would be possible, but very improbable. There are billions of ways for the deck to be disordered, but only one way to be ordered according to your definition. Thus the chances of picking the cards up out of order are much greater than the chance of picking them up in order. It is natural for disorder to increase.
They don’t say, “well, unless sunspot activity is particularly favorable that day, in which case, you could totally get the original sequence”, because thermodynamic (energy) entropy and “card-order” entropy are not inter-convertible.

CS3, February 21, 2016, 03:56 PM PDT
The fact is, scientists have been looking into the inner workings of living things for more than a century, and at every turn, “simple chemistry*” is all one sees.

This is one of those things that is not even false. Read The Eighth Day of Creation.

Mung, February 21, 2016, 09:54 AM PDT
I will confess that this sort of logic escapes me.

Obviously not an isolated system.

Mung, February 21, 2016, 09:51 AM PDT
When you receive a deck of cards from the factory it is typically ordered by suit and rank. There's only one of two things that can happen. It can stay the same. It can become "less ordered." If you have a perfectly shuffled deck, there's only one of two things that can happen. It can stay the same. It can become "more ordered." Neither would entail a violation of the second law.

Mung, February 21, 2016, 09:49 AM PDT
Arthur Hunt #27

Also, if you look into a computer, all you see is plastic, silicon, copper, tin, fiber-glass... But if you ask an informatician, he would say that on such stuff several layers of formalism are implemented (hardware circuitry, registers and logic, Boolean algebra, BIOS instructions, operating system, many communication protocol layers, application layer, etc.). I bet that if an informatician studied the cell he would recognize many of the above paradigms implemented on the chemistry. The problem, yesterday and today, is that usually informaticians don't study biology and biologists don't study engineering in general and informatics in particular. Although with the rise of the ID movement and the consequent increase of interest in the ID/evo debate, fortunately the situation is slowly changing.

niwrad, February 21, 2016, 08:48 AM PDT
niwrad, your argument seems to be "the best and brightest cannot design life, thus life was designed". I will confess that this sort of logic escapes me. The fact is, scientists have been looking into the inner workings of living things for more than a century, and at every turn, "simple chemistry*" is all one sees. (* - with apologies to the host of students who, over the years, encountered organic chemistry in college and realized that, say, French literature may be a better career option than medical school.)
Arthur Hunt
February 21, 2016 at 07:42 AM PDT
Hello, Granville, I have been a fan of yours for a long time. You remarked
But there is obviously SOME law of Nature that prevents tornados from turning rubble into houses and cars, and the same law prevents computers from arising on barren planets through unintelligent causes alone. And if it is not a generalization of the second law of thermodynamics, it is a law of Nature very closely related to the second law!
It seems to me, and has for a long time, that there exists a more general, fundamental law, of which the 2nd law is merely the application to thermodynamics. Why don't you use your expertise to precisely define that more general and fundamental law? "Sewell's Law" has a nice ring to it!
harry
February 21, 2016 at 06:09 AM PDT
Arthur Hunt #21
You seem to be saying that hydrophobic interactions do not apply to the assembly of macro molecular complexes in the cell. You could not be more incorrect. The fact is, the same chemical principles that apply to the spontaneous macroscopic ordering I describe also apply to the assembly of large complexes in the cell.
I don't deny hydrophobic interactions in biology, mind you. I simply say that they cannot account for the cybernetics of cells and organisms. Simple chemistry is not sufficient to explain cells and organisms, because they eminently imply all fields of engineering and technology (as we know them so far), plus countless other functional hierarchies of advanced formalisms that we are actually unable even to imagine. If we were able, our robotics would be far more advanced than it is.
niwrad
February 21, 2016 at 01:21 AM PDT
Gordon Davisson #22
As far as the second law is concerned, intelligent agents are subject to exactly the same rule as everything else. [...] Therefore, if there were an actual thermodynamic problem with evolution, adding intelligence would not solve it.
As said above to MatSpirit, intelligence introduces into nature a factor able to organize. Per se, nature is not able to self-organize. Nature needs something higher that overarches it and its laws. This somewhat transcendent factor is intelligence. Nature tends toward spontaneous disorganization (2nd law). Whatever you see organized, in nature or elsewhere, is the work of intelligent design (front-loaded or at run-time). So it is wrong to say "if there were an actual thermodynamic problem with evolution, adding intelligence would not solve it", because intelligence is exactly what it takes to solve the problem of lack of organization in all fields. Again, it is misleading to say "intelligent agents are subject to exactly the same rule as everything else". In fact, that presupposes materialism. Intelligent beings are spirit, soul, and body. It is only as physical bodies that intelligent beings are subject to the physical laws. Pure intelligence, which is spirit, transcends matter. For this reason it is able to organize matter, which per se goes toward disorganization. So we finally arrive at the point. You are a materialist, and as such you are also an evolutionist. I am a non-materialist, and as such I am also a design supporter. All square.
niwrad
February 21, 2016 at 01:00 AM PDT
MatSpirit #20
How does a human egg grow into a baby? I would think that turning a single celled egg into a multi trillion celled baby would constitute a dramatic reduction in entropy. Is the second law being violated here?
Obviously no violation. Embryo development is an exquisite work of intelligent design programming. In general, the work of intelligence never violates the 2nd law. When you write a post, compose music, program a computer, etc., do you violate the 2nd law? Certainly not. The 2nd law expresses a natural tendency toward probable states. But per se it doesn't at all forbid the presence of a power able to go toward improbable states. In a conductor, the laws of physics say the current is zero. But if you close the circuit and introduce a current generator, the current flows, without violating any law.
niwrad
February 21, 2016 at 12:25 AM PDT
nirwad @9:
I think maybe the ID argument from the 2nd law has more to do with how “unintelligent cause” works. It works in the direction of … un-work so to speak, if with “work” we mean something organizational. Unintelligent forces are idle, laggard, they always prefer probable, easy tasks. They hate to organize ex novo, way too efforts.
In this respect the ID argument from the second law is fundamentally flawed, because the second law does not distinguish intelligent vs. unintelligent forces. As far as the second law is concerned, intelligent agents are subject to exactly the same rule as everything else. In the late 19th century, James Clerk Maxwell proposed a possible exception to this: an intelligent agent (which became known as "Maxwell's demon") which appeared to be able to decrease entropy by sorting individual molecules (e.g. sorting them into fast-moving/hot vs. slow-moving/cool). Since then, more detailed analysis has shown that in fact the demon cannot decrease overall entropy, because the connection between information and thermodynamic entropy (specifically Landauer's principle) implies that the demon must produce at least as much entropy as it removes. Therefore, if there were an actual thermodynamic problem with evolution, adding intelligence would not solve it.
Gordon Davisson
February 20, 2016 at 08:49 PM PDT
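Landauer's principle, mentioned above, can be stated quantitatively. A minimal sketch (my own illustration, not from the comment; standard constants, room temperature of 300 K assumed):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K (exact 2019 SI value)
T = 300.0           # assumed room temperature, K

# Landauer's principle: erasing one bit of information must export at
# least k_B * ln(2) of entropy to the environment -- equivalently, it
# must dissipate at least k_B * T * ln(2) of energy as heat.
min_entropy_per_bit = k_B * math.log(2)     # J/K
min_energy_per_bit = k_B * T * math.log(2)  # J

print(f"Minimum entropy cost per bit: {min_entropy_per_bit:.3e} J/K")
print(f"Minimum heat dissipated per bit at {T:.0f} K: {min_energy_per_bit:.3e} J")
```

This is why the demon cannot win: the entropy it exports by erasing its measurement records is at least as large as the entropy it removes by sorting molecules.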
Niwrad@16, you seem to be saying that hydrophobic interactions do not apply to the assembly of macromolecular complexes in the cell. You could not be more incorrect. The fact is, the same chemical principles that apply to the spontaneous macroscopic ordering I describe also apply to the assembly of large complexes in the cell. MatSpirit@20, all growth and development is accompanied by increases, large increases, in entropy. I realize that Sewell may want to convey the opposite notion, but he is wrong. Quite completely wrong.
Arthur Hunt
February 20, 2016 at 07:06 PM PDT
Granville and niwraD, how does a human egg grow into a baby? I would think that turning a single-celled egg into a multi-trillion-celled baby would constitute a dramatic reduction in entropy. Is the second law being violated here? Ditto: how does a baby grow into an adult without violating the second law?
MatSpirit
February 20, 2016 at 06:51 PM PDT
mung @13:
Hello Gordon, if you would indulge some questions? In a thermodynamic system, what is the difference between configurational entropy and thermal entropy?
Sure, I'll take a stab at them. I think the easiest way to explain the difference between configurational and thermal entropy would be with some examples. A warning, though: I'm going to give a simplified overview to get the basic idea across, without worrying about some of the complications (like quantum mechanics) that come up if you're trying to do this properly. Consider an ideal gas. This is an approximation of how real gasses work that ignores both the size of the gas molecules (i.e. they're too tiny to matter) and the fact that molecules can interact without touching (i.e. if they're spread out far enough apart, the forces between molecules are too weak to matter). That means that each molecule of the gas bounces around pretty independently of all the other molecules. The Boltzmann formula for entropy (which is good enough for what we're doing here) is S = k_B * ln(w), where k_B is Boltzmann's constant and w is the number of possible microscopically distinct states the system might be in. For our ideal gas, that means how many possible ways could the gas molecules be scattered around (i.e. how many places each molecule might be) and how many possible ways might they be moving (i.e. how many different speeds & directions each molecule might have). In the ideal gas case, the molecules' positions and motions are independent, so the total number of states is w = (# of possible sets of positions) * (# of possible sets of motions). Since the logarithm of a product is the sum of the logarithms, that means S = k_B * ln(# of possible sets of positions) + k_B * ln(# of possible sets of motions). We call the first part of that the configurational entropy and the second part the thermal entropy. Now, let's look at how these two entropy components behave. The configurational entropy depends on how large a volume the gas molecules are spread over, and how uniformly they're spread across that volume. Suppose, for example, we let the gas expand to twice its original volume. 
After the expansion, each molecule could be in twice as many positions; if there are N molecules, that means the total number of sets of positions goes up by a factor of 2^N, which means the configurational entropy increases by k_B * ln(2^N) = N * k_B * ln(2). Similarly, compressing the gas would decrease its configurational entropy. (Note: I'm ducking the question of how far apart two possible positions have to be to count as "different" -- that takes quantum mechanics and gets messy.) Note that the configurational entropy depends (for a given amount of gas) only on the volume of the gas and how uniformly the molecules are spread over that volume. Most importantly, it does not depend on the gas's energy or temperature. The thermal component of the entropy, on the other hand, turns out to depend only on the gas's energy (which is directly related to its temperature). Since moving molecules have kinetic energy (& the faster they're moving the more energy they have), a limited supply of energy means limited possibilities for motion. If the gas has no kinetic energy at all (i.e. at a temperature of absolute zero), there can be no motion, and so there's only one possible state of motion (complete stasis!), and since ln(1) = 0 the thermal entropy will come out to zero. If we add energy, we increase the number of ways that energy can be distributed among the gas molecules and what range of speeds they might have, and hence the thermal entropy goes up accordingly. (Note: again I'm ducking the question of how different two states of motion have to be to count as "different".) Does that help? If you're with me so far, let me try to apply the same principle to a solid. In this case, we can similarly define the configurational entropy in terms of the number of ways that the atoms that make up the solid can be arranged.
For a perfect crystal, each atom's position will be fully determined by the crystal structure, so there's only one possible arrangement and the configurational entropy comes out to zero. For an imperfect crystal, the deviations from ideal crystal structure allow more possible arrangements, and hence increase the configurational entropy. For amorphous solids, like glass, it'll be even higher (though still not nearly as high as for a gas, since each atom is still mostly constrained by its neighbors). The thermal entropy also behaves a little like it does in a gas. Again, at a temperature of absolute zero there'll be no motion and no thermal entropy. As we add thermal energy, the atoms can't move much, but they can vibrate in place. More thermal energy -> more vibration -> more possible ways they can be vibrating at any specific moment -> more thermal entropy. But it's actually messier than that. Since the atoms' vibrations cause them to move away from what their positions would be in a completely frozen crystal, the vibrations are partly configurational as well as partly thermal. Similarly, since the atoms interact with their neighbors, different positions (=configurations) will have different energies, so the atomic arrangement is partly thermal as well as partly configurational. Thus, what I'm calling the thermal entropy is partly configurational, and what I'm calling the configurational entropy is actually sort of thermal. The dividing line between them is much blurrier than it was for the ideal gas (and actually if you look closer, it's even blurrier than I'm describing here). The division of entropy into configurational and thermal components (and sometimes others as well) is really just an approximation. Sometimes it's a very good approximation, and can be very useful; other times it's way off and will just confuse you.
In order to know when it'll work well and when it won't you need to know a fair bit about the state structure of the system you're talking about. If you don't understand that very well... it's probably safest to avoid it. Stick to thinking in terms of the total (thermodynamic) entropy.
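The gas-expansion figure above (ΔS = N · k_B · ln 2 when the volume doubles) can be checked numerically. A minimal sketch (my own illustration, not from the original comment; one mole of gas assumed):

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
N_A = 6.02214076e23  # Avogadro's number, molecules per mole

def delta_S_doubling(n_molecules: float) -> float:
    """Configurational entropy change when an ideal gas expands to twice
    its volume: each of the N molecules has twice as many accessible
    positions, so w grows by a factor of 2^N and
    Delta S = k_B * ln(2^N) = N * k_B * ln(2)."""
    return n_molecules * k_B * math.log(2)

# For one mole this equals R * ln(2), about 5.76 J/K:
print(f"Delta S for 1 mole doubling in volume: {delta_S_doubling(N_A):.3f} J/K")
```

Note that N_A · k_B is just the gas constant R, so the per-mole result is the familiar R ln 2 of free-expansion problems.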
When entropy changes, what exactly is it that is changed?
Entropy isn't really a thing itself; it's a property of a system and the state that system is in. When the system's state changes (due to heating, cooling, compression, chemical reaction, whatever) the system's entropy will change as a result of that. Put another way: a change in entropy is an effect, not a cause.
Can’t thermodynamic entropy be stated in statistical terms [Boltzmann], and can’t that be stated in information terms [Shannon], and don’t many scientists accept that interpretation of entropy as being equally valid? To put it another way, isn’t thermodynamic entropy just a special case of a broader principle?
Pretty much, although I'll make some minor corrections: technically, thermodynamics and statistical mechanics use very different definitions of entropy, but if you do it right you'll get the same numbers either way. Basically, they're both the same fundamental quantity, just approached from two different directions. The stat mech definitions are also very closely related to those of information theory. Specifically, the Gibbs entropy formula of stat mech differs from Shannon's formula for information entropy only in the choice of units. My take on this is that the stat mech entropy is a special case of the Shannon entropy: it's proportional to the amount of information needed to specify the exact state (microstate) of the system given its macroscopic state (macrostate). Note that this doesn't mean that the second law applies to Shannon entropy. In information theory, it's entirely normal for entropy to decrease spontaneously. The second law only applies in the specific case where you're talking about the entropy of a physical system's microstate conditioned on its macrostate; if you're talking about the Shannon entropy of something else, the law does not apply. There does appear to be another connection, though: when information is stored in the state of a physical system (e.g. in a memory chip or hard disk), the Shannon entropy of that information contributes to the thermodynamic entropy of that system. But the contribution (1 bit of Shannon entropy -> k_B * ln(2) = 9.57e-24 J/K of thermo entropy) is pretty much always too small to matter. If you want more details on this, I'll refer you to an essay I posted to talk.origins quite a while ago.
Gordon Davisson
February 20, 2016 at 04:45 PM PDT
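Gordon's point that the Gibbs and Shannon formulas differ only in units can be illustrated numerically. A minimal sketch (my own illustration, with an arbitrary toy distribution over four microstates):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# An arbitrary example probability distribution over four microstates:
p = [0.5, 0.25, 0.125, 0.125]

# Shannon entropy in bits: H = -sum(p_i * log2(p_i))
H_bits = -sum(pi * math.log2(pi) for pi in p)

# Gibbs entropy in J/K: S = -k_B * sum(p_i * ln(p_i))
S_gibbs = -k_B * sum(pi * math.log(pi) for pi in p)

print(f"H = {H_bits} bits")  # 1.75 bits for this distribution
print(f"S = {S_gibbs:.3e} J/K")
# The two differ only by the conversion factor k_B * ln(2) per bit:
print(f"S / H = {S_gibbs / H_bits:.3e} J/K per bit")
```

The ratio S/H comes out to k_B · ln 2 ≈ 9.57 × 10⁻²⁴ J/K per bit regardless of the distribution chosen, which is exactly the conversion factor quoted in the comment.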
Gordon Davisson #14 On how the use of Boltzmann's constant is equivocal in some contexts, see also: https://uncommondescent.com/intelligent-design/failure-of-the-compensation-argument-and-implausibility-of-evolution/
niwrad
February 20, 2016 at 02:05 PM PDT
Gordon Davisson #14 I appreciate that you admit that "thermodynamic entropy isn't only about heat" and "entropy can be thought of as a measure of disorder", although with distinguo. In the deck-of-cards example the comparison of the two entropy values (thermal and configurational) is misleading. I explained why here: https://uncommondescent.com/intelligent-design/the-illusion-of-organizing-energy/ What is sure is that, in our world, castles of cards don't arise spontaneously. You can call this phenomenon entropic or not; the fact remains that the castle's order is neither probable nor spontaneous. Biological organization, in turn, is far more improbable than card castles and implies hierarchies of formalisms far beyond patterns of cards. Ergo, if card castles need a constructor, a fortiori bio-organization does.
niwrad
February 20, 2016 at 01:34 PM PDT