A couple of days ago Dr. Granville Sewell posted a video (essentially a summary of his 2013 Biocomplexity paper). Unfortunately, he left comments off (as usual), which prevents any discussion, so I wanted to start a thread in case anyone wants to discuss this issue.
Let me say a couple of things and then throw it open for comments.
1. I typically do not argue for design (or against the blind, undirected materialist creation story) by referencing the Second Law. I think there is too much misunderstanding surrounding the Second Law, and most discussions about the Second Law tend to generate more heat (pun intended) than light. Dr. Sewell’s experience demonstrates, I think, that it is an uphill battle to argue from the Second Law.
2. However, I agree with Dr. Sewell that many advocates of materialistic evolution have tried to support their case by arguing that the Earth is an open system, so I think his efforts to debunk that nonsense are worthwhile, and I applaud him for the effort. Personally, I am astounded that he has had to spend so much time on the issue, as the idea of life arising and evolution proceeding due to Earth being an open system is so completely off the mark and preposterous as to not even be worthy of much discussion. Yet it raises its head from time to time. Indeed, just two days ago on a thread here at UD, AVS made essentially this same argument. Thus, despite having to wade into such preposterous territory, I appreciate Dr. Sewell valiantly pressing forward.
3. Further, whatever weaknesses the discussion of the Second Law may have, I believe Dr. Sewell makes a compelling case that the Second Law has been, and often is, understood in the field as relating to more than just thermal entropy. He cites a number of examples and textbook cases of the Second Law being applied to a broader category of phenomena than just thermal flow, categories that could be applicable to designed objects. This question about the range of applicability of the Second Law appears to be a large part of the battle.
Specifically, whenever someone suggests that evolution should be scrutinized in light of the Second Law, the discussion gets shut down because “Hey, the Second Law only applies to heat/energy, not information or construction of functional mechanical systems, etc.” Yet, ironically, some of those same objectors will then refer to the “Earth is an open system, receiving heat and energy from the Sun” as an answer to the conundrum – thereby essentially invoking the Second Law to refute something to which they said the Second Law did not apply.
—–
I’m interested in others’ thoughts.
Can the Second Law be appropriately applied to broader categories, to more than just thermal entropy? Can it be applied to information, to functional mechanical structures?
Is there an incoherence in saying the Second Law does not apply to OOL or evolution, but in the same breath invoking the “Earth is an open system” refrain?
What did others think of Dr. Sewell’s paper, and are there some avenues here that could be used productively to think about these issues?
Here are my views
1) The connection between information entropy and thermal entropy is an analogy. The formulas appear similar, but the underlying physics are different, somewhat like the connection between viscous flow through a porous medium and electrical conductivity.
2) This is borne out by the units of entropy. For thermal entropy, the units are Btus per degree of temperature. Btus and degrees don’t have much to do with information.
3) We don't have a good understanding of what information, or information entropy, is. The analogy only carries you so far. Come back in 100 years, and we can talk.
4) One area where the analogy breaks down is the pure state. In information entropy, this is information without any corruption. As one moves away from the pure state, the information degrades very rapidly.
By contrast, in thermal entropy, the entropy is the difference between the total energy and the available work. As one moves away from the pure state, the available work diminishes slowly.
I agree with Chris. What units of measure do you propose be used in your attempt to expand the 2nd law metric?
Eric, I fully agree with Sewell. See:
http://www.uncommondescent.com.....-the-slot/
So my answers to your two questions are two “yes”. The 2nd law applies to all systems with many elements. Biological systems have many elements so they cannot escape the consequences of this law, meant in its statistical sense.
Great points, Chris.
While there are analogies between information and physics, extrapolation based on analogy is pretty risky. Obviously information (and design) exists and is notable in absence, but it’s also entirely dependent on context, environment, and utility.
In cryptography, information can be disguised as white noise, or rather very close to it. Also, it can be argued that most non-mathematical information is an abstraction of reality. That’s why most true-false questions are false at some level.
I agree that Dr. Sewell made a good case against blindly (mis)applying the 2nd Law. When I visualized his argument regarding an open system, I had the funny thought of driving a car, when the engine suddenly freezes, the tires all pop and the doors fall off as an indication of a beneficial mutation on a planet a million light years distant. Ok, unlikely. So instead the 2nd Law would instantly raise the temperature of the entire galaxy by an infinitesimal amount, even a million light years away? Also unlikely.
-Q
Even though I am not a scientist I would like to put my two cents worth in anyway.
As a mathematician Dr. Sewell knows very well how things can be interchanged such as trigonometry being substituted in solving problems in integral calculus. To me it is the concepts of thermodynamics that form a framework in understanding processes in other areas just like statistics can be applied in a variety of useful ways. So, from what I can see he is well within common usage to use one thing to explain another.
I also like his interpretation of thermodynamics as a statistical problem involving probability, which basically eliminates the closed vs. open system issue, even though I am sure there is some truth in the idea that there is a balance in the cosmos. As you have said, "the idea of life arising and evolution proceeding due to Earth being an open system is so completely off the mark and preposterous as to not even be worthy of much discussion." It is so because such arguments ignore probability, which throws a giant monkey wrench into their way of thinking and into their claims about what blind, unguided, random mutations can do.
That the second law exists is evidence for design.
Just sayin’…
Querius/fossil:
The "compensation" argument, namely, that a decrease in entropy in one locale is "compensated" by an increase in entropy in another locale, is (i) largely a red herring and essentially nonsense, except in those limited cases in which there is a direct causal connection between the two locales; and (ii) meaningless, at least to the extent we are talking about things like functional systems and information.
I think Sewell does a decent job of addressing this “compensation” business in his article/video.
I should add one more important point, at least in terms of the debating tactic of the “open system.”
The “Earth is an open system and, therefore, evolution is possible/likely” idea is completely void of substance and is but a rhetorical game. If someone is having trouble mentally getting over the hump between a closed and open system, we can easily get past it by just redefining the system we are talking about.
Earth is an open system? Fine. Then let’s talk about the Earth-Sun system. Now it is a closed system. Please explain to me how life could arise in the confines of the closed Earth-Sun system. Or the Solar System is our closed system. Now please explain how life could arise in the confines of this closed Solar System.
The point being that we can draw a circle around any particular geographical area and call it our system — whether the whole Earth, the Earth-Sun, the whole solar system, the galaxy, or even the entire universe, or (moving the other direction in size) a continent, a particular spit of land, a specific thermal vent, etc.
The entire distinction between open/closed systems in the context of discussing the formation of life on the Earth is nothing but a rhetorical game and is completely without substance. Every particular locale can be considered part of an open system or closed system, depending on how we want to define it.
And, of course, more importantly, whether a system is defined as open or closed tells us precisely nothing about how something like the origin of life, for example, could take place.
I’m at a loss to understand how the Second Law wouldn’t affect information and mechanics. Certainly, computer and electrical engineers must account for “heat” buildup and dissipation when considering the design and construction of microprocessors. I suspect that mechanical engineers, too, must take into account the Second Law when designing and developing purely mechanical devices such as components of the internal combustion engine (e.g. crankshafts, pistons, rings, etc…).
I don't think you can escape having to "deal with" the Second Law. Even if you aren't specifically discussing thermal entropy, you must at the very least make allowances for the effect thermal entropy has upon your given area of study. A great many persons can safely "ignore" thermal entropy because an engineer has taken it upon themselves to design a computing or mechanical device that is "buffered" (for lack of better terminology) against the effects of thermal entropy. Most computers are designed to operate within certain temperature thresholds, as are most cars.
When I’m writing computer code, I don’t have to consider what effect thermal entropy will have upon my code. Generally speaking, my code doesn’t put the CPU and GPU of my computer outside of their designed parameters for reliable operation. The same is true when I drive my car.
However, I do recognize that others (namely physicists and engineers) who contributed to designing those computing and mechanical devices did take into account thermal entropy so that I did not have to.
As far as I'm concerned, the second law of thermodynamics is the perfect example of a Popperian scientific theory. It only tells us what can NOT happen, or gives an upper boundary for the possibility of the occurrence of an event. It does not give us a limit on the realizable (or actual) possibility.
From an engineering point of view (since I am a chemical engineer), for instance, if you want to separate two components from each other by transferring only a limited amount of energy and work to a separator, the second law only shows you whether this whole operation under the given conditions is NOT possible. The important point is that the reverse is not usually true: the fact that the 2nd law shows the entropy of the universe increasing during the operation does not guarantee that it is practically possible to achieve the aimed separation. A known engineering mechanism should exist for that separation to be possible under these conditions. That's one of the reasons why the efficiency of many processes is far from 100%: much more energy than the theoretical limit must be transferred to the system.
So back to the topic about evolution vs. ID, in my opinion second law (in the classical sense) can only say that life cannot start and evolve without an external energy source, such as the sun. However, a similar argument can be proposed also for skyscrapers or cars (or anything man made), that an external energy supply (such as sun!) is required. So, saying that life (and unguided evolution) does not violate 2nd law does not say much, since cars, planes, etc. do not violate the 2nd law either, due to the continuous energy supply of the sun and the winds, etc.
I read Dr. Sewell's paper (and I remember having read an older one quite some time ago), and have found the basic premise interesting (and intuitive), but my mathematics is not advanced enough to evaluate his study.
Two more thoughts
1) A complete statement of the second law is “Stable States exist”. But Dr Sewell’s statement of the second law is made with differentials, gradients, fluxes, stuff like that. Why does he need the mathematical hocus pocus?
2) Matter with the lowest entropy is a pure crystal, rather like a pure tone of sound. The information content is slight. Living things are anything but pure crystals, being more like music. There certainly appears to be a connection between information and living things, and there may be an entropy concept involved. But whatever it is, we don't yet understand it.
Dear Curious Cat
I disagree that the Second Law, properly stated, tells you that something can not happen. Take the statement: “The entropy of an isolated system cannot decrease.” That statement is untrue.
The second law is not about absolutes. It is about probabilities.
chris haynes #11
The 2nd law in statistical mechanics states that systems always go toward their more probable states (when intelligence is not involved). Organized states are highly improbable. Ergo systems go toward disorganization. Biological systems are highly organized, so cannot arise spontaneously by chance, as evolution claims. That’s simple.
Greetings.
Eric, you typed:
Perhaps you did not see what transpired when Upright Biped asked a question about how information systems could arise in living systems.
As to my thoughts, I’ll not add much to what ciphertext and CuriousCat typed. Here they are:
1) For functional mechanical structures, they will be affected by the 2nd Law as they are made of matter.
2) Given that information needs a material substrate, the information can be affected by the second law of thermodynamics as the material substrate follows the law.
3) I might be mistaken, (as I have not touched this concept for a while now), what is more interesting is how energy is utilized. In thermal entropy sense, a basic is that work needs to be done for heat transfer in the negative gradient. How is the work provided? This is the case for machinery. Another issue in machinery is how is the energy directed for it to perform its task?
I do not know much about biological systems, but from the little I remember, there has to be a way to keep temperature balance: Homeostasis. Without homeostasis, the material substrate which contains biological information will be destroyed quicker than usual.
With all that I have written, I can see why talking about open systems is a distraction, given that functional mechanical structures and information storage materials will definitely be affected by the second law. What is more important is how these systems arose in the first place, especially in biological systems, in which preservation through generations is necessary. The open system argument is problematic if it does not account for some form of homeostasis so as to preserve the material substrates necessary for life, and also if it does not account for the rise of the system Upright Biped has always been talking about.
Dear Niwrad
A correct statement of the Second Law does NOT state that systems “always go toward their more probable states”.
It merely states that some states are more probable than others. Thus a highly organized system CAN arise spontaneously, by chance, from a disorganized one.
If you disagree, please state the second law you're using, in terms a layman can understand.
A few more thoughts on the second law. I was very surprised to learn that entropy has a deep association with gravity:
In fact, it has been found that Black Holes are the largest contributors to the entropy of the universe:
I was also impressed to learn how destructive black holes are in their ‘generation’ of entropy:
And when I learned that entropy is ‘loosely associated with randomness/chaos’,,,
,,,and with learning, as mentioned previously, that Black Holes are the greatest contributors of randomness/chaos in the universe, and realizing that Darwinists think that randomness created life, I then, half in jest, offered a Darwinist a one way trip to a Black Hole so as to perform scientific experiments to prove, once and for all, that Randomness can create life.
His resolve to pursue the truth to death if need be was a bit less firm than Gilbert Newton Lewis’s resolve was:
But all kidding aside, this lesson I learned from black holes, about how deeply chaos/randomness and entropy are connected to death and destruction, is that the last place life would ever come from is from these entropic generators called black holes.
In fact, far from being a creator of life, entropy is the primary reason why our temporal bodies grow old and die,,
This following video brings the point personally home to each of us about the very destructive effects of entropy on our bodies:
Verse and music:
supplemental notes:
chris haynes #15
E.g. read Wikipedia:
http://en.wikipedia.org/wiki/S.....modynamics
The above definition is perfectly consistent with what I wrote in short. By contrast, it is inconsistent with your "Thus a highly organized system CAN arise spontaneously, by chance, from a disorganized one". In fact it says just the opposite: "random chance alone practically guarantees that the system will evolve towards such thermodynamic equilibrium"!
2nd Law Clausius formulation:
and the more modern but equivalent "Kelvin-Planck Postulate":
Entropy defined by Boltzmann as formulated by Planck
S = k log W
S is entropy
k is Boltzmann’s constant
W is number of energy microstates or position-momentum microstates (classical).
The Clausius formulation is indirect, giving the change in entropy:
dS = dQ/T
where
dS is differential change in entropy
dQ is inexact differential change in heat
T is temperature
The great discovery was linking Clausius formulation to Boltzmann and Planck’s formulation.
k log W = Integral(dS) with appropriate limits
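To make the Clausius formulation concrete, here is a minimal Python sketch (not from Sewell or anyone in this thread) that evaluates the integral of dQ/T for an idealized heating process, assuming a constant specific heat:

# Clausius formulation dS = dQ/T applied to heating 1 kg of liquid water
# from 300 K to 350 K, assuming constant specific heat c (an idealization).
# Then dQ = m*c*dT, so Delta S = integral(m*c/T dT) = m*c*ln(T2/T1).
import math

m = 1.0                 # kg of water (illustrative value)
c = 4186.0              # J/(kg*K), approximate specific heat of liquid water
T1, T2 = 300.0, 350.0   # kelvin

delta_S = m * c * math.log(T2 / T1)    # J/K, from integrating dQ/T
print(f"Delta S = {delta_S:.0f} J/K")  # roughly 645 J/K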
Dear Chris,
I think I can see what you mean when you say the second law is not about absolutes but probabilities, as illustrated by your following statements:
From a purely scientific point of view, I agree. As an engineer, I totally disagree. The second law says that we cannot transfer heat from a cold medium to a hot medium without changing the pressure and compositions of the media. On the other hand, in a universe with zillions of collisions of molecules happening all the time, there may be local infinitesimal regions in which energy transfers from a cold to a hot region, but the temperature in these local regions will not be stable; it will tend to equilibrate as an average heat transfer from hot to cold.
Moreover, I think the way you use the phrases "more probable" and "can" is the more problematic part of your totally probabilistic approach to defining the 2nd law. In simple terms, 7/8 is more probable compared to 1/8. If something is known to happen with a 1/100 probability and we see it happen, we may say that we are lucky (or unlucky, depending on whatever it is), since the probability of the event not happening is 99/100, which is more probable than its happening. Nevertheless, the event CAN take place. On the other hand, comparing this probability to the probability of a ball, while standing still, suddenly starting to bounce due to the air molecules inside the ball moving together in the same direction (I guess a similar example was in The Blind Watchmaker, though presented with a different aim), is not realistic. For this case, saying merely that random motion of the molecules is MORE PROBABLE compared to the organized motion of molecules, and that bouncing CAN happen spontaneously, is, I believe, not faithful to the spirit (for lack of a better term :)) of the 2nd law.
Chris, please realize that I'm not criticizing your view that organized systems (or life) may arise spontaneously (though I do not think life arose that way), but I just want to say that I partially disagree with your definition of the 2nd law.
The entropy of an isolated system CAN decrease.
The Clausius and the Kelvin-Planck Statements can be used to prove the statement "The entropy of an isolated system can not decrease".
Let me give an example that proves them wrong.
Take an isolated system, a tank of air
State 1. The molecules of air are whizzing around, some fast some slow, all mixed up, so the temperature is even. The entropy S1 can be calculated from standard formulas.
State 2. After a while, probably a real long time, the fast molecules will happen to be down at one end of the tank, the slow ones at the other. The fast end is hot, the slow end is cold. The entropy S2 can be calculated from formulas. S2 is LESS THAN S1.
So the entropy decreased.
And while the air is in State 2, the difference in temperature can run a heat engine, to raise a weight, thus turning heat energy completely into work.
Let me give you my 3 word statement of the Second Law: Stable States Exist.
The key to the statement is the definition of a Stable State: A system is in a Stable State if it is hopelessly improbable that it can do measurable work without a finite and permanent change in its environment.
I have a question and it is probably a stupid one but I am going to ask it anyway.
Can anyone show me a closed system in the strictest sense of the term?
I am asking this because I can’t conceive of any system that does not have at least some interaction with something outside of it.
okfanriffic:
Thanks, seventrees. I had missed that exchange. Sounds like okfanriffic needs to catch up on the basics. Perhaps he could start here:
http://www.uncommondescent.com.....formation/
My thoughts — note the onward. I add that there is a need to reckon with the informational perspective on thermodynamics. KF
Reply to fossil at 23:
If one believes "The cosmos is all that is," then the cosmos is an isolated system. Unless one wants to posit that the cosmos is not finite.
To Eric at 24: You’re welcome, Eric.
Chris Haynes, “Take an isolated system, a tank of air…”
I think you are in error in your analysis. The air tank is not a closed system. It is subject to gravity. I am sure you will quickly point out that gravity is only potential, not actual energy. However, the shifting of the hot air to the top causes the center of gravity of the tank to go down ever so slightly, therefore implementing that potential.
Your phenomenon would simply not happen in a zero gravity environment.
Fossil, “Can anyone show me a closed system in the strictest sense of the term?” This is the difference between “real world” and hypothetical. It is unbelievably hard to simulate many hypothetical scenarios in the real world. The hypothetical is surely the playground of the theorist. That said, we can get close. And we can factor in the remaining error component rather well.
franklin @2:
That is a good question. When we are talking about things like functional machines or information it is harder to quantify what it is that we are measuring. I think that is part of the challenge of formulating a concise argument and is, perhaps, part of the reason I haven’t personally felt comfortable pushing the Second Law argument.
That said, I think Dr. Sewell has made some good points and is discussing a principle that most people recognize — intuitively and by common sense and personal experience, if not with precise measurements — to be correct. He does acknowledge that it is harder to quantify these kinds of situations when we are not measuring simple thermal properties. At the end of his paper he refers to “this statement of the second law, or at least of the fundamental principle behind the second law . . .”
Lastly, I would add that it is not I, nor Dr. Sewell necessarily, who is seeking to expand the Second Law. Dr. Sewell cites several examples, even textbook definitions, that go beyond simple thermal properties. Again, I am not sure I am completely on board with his entire approach, but I think he makes a good case that scholars and practitioners seem perfectly happy applying the Second Law (or, as he says, “the fundamental principle behind the second law”) to functional systems. Ironically, however, when it comes to living systems, they seem only too quick to jettison that approach in order to protect the evolutionary storyline.
Chris@21 speculated
“Happen to be” is the key.
The entropy is increasing but the value for entropy is probabilistically a little bumpy.
Overall, the odds smooth out over large numbers of molecules. If your tank holds 22.4 liters, and it’s at STP, there would be 6.02 x 10^23 molecules bouncing around, so it’s extremely unlikely that there would be a significant number of them at different energy levels in one location.
Your thought experiment works better with fewer molecules, let’s say 4. 🙂
-Q
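For anyone who wants to see why the molecule count matters so much in Querius's point, here is a short Python sketch (standard library only; it assumes independent 50/50 placement, a deliberately crude model): the chance that every molecule happens to sit in, say, one half of the tank falls off as (1/2)^N.

# Probability that all N molecules are found in one half of the tank,
# assuming each molecule independently has a 50/50 chance of being there.
import math

for N in (4, 100, 6.02e23):
    log10_p = N * math.log10(0.5)
    print(f"N = {N:g}: P(all in one half) ~ 10^{log10_p:.3g}")
# N = 4 gives about 1 in 16; N ~ 6 x 10^23 gives ~10^(-1.8 x 10^23).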
seventrees @ 26
Accepted, but is anything within the cosmos isolated?
Fossil at 31.
No. Each entity within the cosmos interacts with the others. That's why it is said Earth is an open system. It is an open system in the cosmos.
CuriousCat @10:
Well said. I should add that some of the more egregious complaints against Dr. Sewell have accused him of saying that "evolution violates the Second Law" and then labeled him a loony for thinking the Second Law can be violated. That is obviously not what he is saying. Rather, he is saying that because X would violate the Second Law, X cannot be true. A subtle, but important difference.
—–
ciphertext @9:
and seventrees @14:
Good thoughts. This is one area where the Second Law — even in its basic thermal sense — is certainly applicable. Functional machines have to be engineered to deal with heat flow, energy, and temperature. Homeostasis is a very important principle in building a living organism, such as ourselves. And, by all accounts, we are a far-from-equilibrium system, from a Second Law standpoint.
Does this mean the Second Law is violated by our existence? Of course not. But what it means is that there has to be something that carefully monitors, controls, takes into account, counteracts the Second Law to maintain the system far from equilibrium. Such systems, as far as we know, never arise on their own by dint of the known forces of nature.
This is really, I think, the heart of Dr. Sewell’s argument.
Chris Haynes, niwrad, CuriousCat:
Apologies for jumping into the midst of your good discussion, but in my experience some of the disconnect in discussing the Second Law results from talking about “order.” There are many systems where “order” is the thermodynamically-preferred arrangement, such as crystals. Furthermore, depending on what we mean by “order”, one could certainly view a well-mixed gaseous mixture as being more “ordered” than one that was not well-mixed. But that is of course not what is meant. So the definitions or examples we sometimes hear of the Second Law that refer to “order” are, in my humble opinion, not very helpful.
It is perfectly reasonable for that kind of thermodynamically-preferable order to arise on its own, spontaneously if you will, by dint of law/necessity. Such “order” has nothing to do with what we might call the “organization” of a functional system or the existence of information-rich systems. Such “order” is in fact anathema to such systems. It destroys functional organization; it destroys information.
Design is of course interested in the latter, not in “order” per se.
BTW, the post niwrad referenced spends some time talking about this distinction between order and organization.
Chris Haynes at 21:
Don't gas molecules follow, to a certain degree, the Kinetic Molecular Theory? If so, what you just said assumes that there is no kinetic energy transfer between the gas molecules.
seventrees @ 32
Then, if there is no known closed system that we can observe or experience, there is no point at all in discussing the thermodynamics of closed systems. For us everything is an open system, but as Dr. Sewell has said, thermodynamics is essentially statistical.
To me it is very logical that in a certain sense the "compensation theory" is real. If the cosmos, which is yet undefined, is indeed a closed system, then the energy within it can be considered static, so that if one part decreases in entropy then somewhere else there must be an increase. However, the point that Dr. Sewell makes bypasses that theory as a valid explanation of evolution, because we are still dealing with a highly improbable event that we must account for, and, as he has demonstrated in other articles, that is something that simply can't happen by pure chance because the odds against it are far too remote.
I also suppose that instead of thinking of OOL as a single improbable event we actually should think of it as a series of parallel improbable events, which makes the whole thought of evolution that much more ridiculous.
Fossil at 36
It will be best if you read my comment at 14 to see why I find the compensation argument quite unnecessary. Raw energy might destroy systems. Read also the comments from ciphertext and CuriousCat at 9 and 10 respectively, unless you’ve already done so.
It’s funny, I actually made that comment after reading his paper. My favorite part is where he used these two examples of a piece of metal, and things made sense. Then the next thing you know, he’s extrapolating his ideas out to examples of tornadoes running through junkyards recreating previous organization, and matter spontaneously forming humans, computers, libraries and who knows what else.
If you guys honestly think that paper adds anything to the topic of evolution and the second law of thermodynamics, you are sadly mistaken.
My idea, I think, relates to his statement that "thermal order can increase, but no faster than it is imported." What I had come up with was that the sun was a huge import source, driving an increase in order on Earth.
No one really came up with an argument against it to my knowledge. The only response I got was that I needed to explain how my idea leads to the generation of a “metabolising, von Neumann self-replicating living cell” before I can be taken seriously. It was a simple thought experiment, but apparently none of you wanted to think. Why am I not surprised?
Chris Haynes,
I really didn’t want to weigh in because this discussion was very contentious, and you know I side with you.
But being someone who helps students with physics and chemistry, and being that some of my students and peers may read this weblog, I can’t remain silent.
Order and disorder were introduced as "definitions" of entropy by a passing and erroneous remark by Boltzmann; the error has been restated in chem and physics texts ever since, but it appears nowhere in the actual calculations.
There are about 4 versions of the 2nd law:
1. Clausius
2. Kelvin-Planck
3. Jaynes
4. Hawking Bekenstein
all have their applications with #3 and #4 being more general than #1 and #2.
If I asked a chem student: estimate which has greater entropy, a frozen dead mouse of 1/2 kilogram or a warm living human of 100 kilograms? Correct answer:
Warm living human has MORE entropy. Take the standard molar entropy of water and simply multiply it by the amount of water in each. Considering the mouse is frozen, it has lower entropy anyway, since colder things tend to have lower entropy than hotter things, all other things being equal.
How about a dead cell versus a living human? Answer living human has more entropy than a dead cell since the human has 100 trillion times more mass.
That’s the way a college chemistry or physics student would be expected to answer. Thus, for biological complexity to increase, to the extent more mass is needed, in general entropy must increase.
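A rough numerical version of that back-of-envelope comparison, in Python (using approximate standard molar entropies; the body-water fractions are assumptions for illustration only):

# Thermal entropy estimate from the water content alone.
S_water_liquid = 69.95   # J/(mol*K), standard molar entropy of liquid water near 298 K
S_water_ice    = 41.0    # J/(mol*K), approximate molar entropy of ice near 273 K
M_water        = 0.018   # kg/mol

human_water_kg = 100.0 * 0.6   # assume ~60% water by mass
mouse_water_kg = 0.5 * 0.7     # assume ~70% water by mass

S_human = (human_water_kg / M_water) * S_water_liquid   # J/K
S_mouse = (mouse_water_kg / M_water) * S_water_ice      # J/K

print(f"warm living human (water only): ~{S_human/1000:.0f} kJ/K")  # ~233 kJ/K
print(f"frozen dead mouse (water only): ~{S_mouse/1000:.1f} kJ/K")  # ~0.8 kJ/K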
Now, there are alternate definitions of the 2nd law and entropy. It is in those that one might find a defense of ID, but the textbook definitions provided above like:
S = k log W
will yield the counter-intuitive answer I gave.
Dr. Sewell provided a notion of X-entropies. That's what must be done, since the Boltzmann/Planck definition simply doesn't do any sort of accounting of biological order.
“S = k log W” can’t tell a dead dog from a living one, and that’s why I don’t argue from the 2nd law and that’s why I won’t teach it to college science students as a defense of ID (some of whom are chem, physics, and bio students).
Thank you all very much for your gracious responses.
I agree with what you said about order, organization, information and design.
My point is that they are not yet well understood. Sometimes they do have analogies with the second law, but they are separate from the second law. I also believe Dr Sewell is ignoring this poor understanding and fails when he tries to stretch analogy into equivalence.
I also note that highly ordered things like crystals do have low entropy, and high free energy. Yet they are common in nature. A calculation of the free energy of a plant would not give a higher free energy than of the same material in the form of a crystal. Therefore, Sewell’s claim that the second law precludes the formation or evolution of life seems silly to me.
Regarding my tank of air, the entropy reversal would happen whether or not there is a gravity field. It is a matter of simple Kinetic Theory that sooner or later a gas will self-separate into hot and cold regions. Gibbs himself said "The impossibility of an uncompensated decrease in entropy seems to be reduced to an improbability". Boltzmann's equation S=k*ln(W) says the same thing.
My thought experiment works for ANY finite number of molecules, not just 4. It is true that for a large number I would need to wait for eons. But if I was milking a cushy research grant from Uncle Sam, what's the problem?
My point that absolute statements of the second law are false is not especially germane here, for as you noted, the probability of a reversal of entropy is hopelessly low. I just hate to see a fundamental law of physics stated sloppily, and incorrectly.
I finally have an objection to the concept that life can be understood using the laws of chemistry, thermodynamics, physics and the like. It is merely an unfounded assumption that living things do not involve unknown forces, systems, and phenomena beyond our current knowledge. Consideration of common things like consciousness and free will, whatever they are and wherever they come from, suggests that the assumption is probably false.
Another very interesting topic, Eric. Thanks! I have long been trying to expose here the link between information theory and thermodynamics.
Without at this time attempting to expand further, my responses to the questions in the OP are as follows:
Q: Can the Second Law be appropriately applied to broader categories, to more than just thermal entropy?
A: This probably depends upon the particular formulation of the second law. As Sal points out there are different formulations.
Q: Can it be applied to information, to functional mechanical structures?
A: The second law can be reframed in informational terms; it's not clear that the reverse is true.
Q: Is there an incoherence in saying the Second Law does not apply to OOL or evolution, but in the same breath invoking the “Earth is an open system” refrain?
A: It depends. If someone says “the second law does not apply” they are either ignorant or being sloppy with language. The second law always applies, else it’s no law.
What they likely mean is that the second law does not prohibit the processes involved in OOL or evolution.
Q: What did others think of Dr. Sewell’s paper, and are there some avenues here that could be used productively to think about these issues?
A: I have the paper and can discuss it, but I’d rather focus on the relationship between the second law and information theory.
The one author that I know of who has done the most to try to make this subject comprehensible to the general public is Arieh Ben-Naim.
http://ariehbennaim.com/
franklin:
bits
How much information is required to resolve what is unknown about the state of the system.
I do have a comment about those 4 versions of the 2nd law:
1. Clausius
2. Kelvin-Planck
3. Jaynes
4. Hawking Bekenstein
Nos. 1, 2, and 4 are absolute statements rather than statements of probability. Thus they are false, as shown by the example of the tank of air.
The first two are over 100 years old, so we should cut them some slack. Dr Hawking is just following his SOP.
I believe that Jaynes's statement is also false, but it's such gobbledegook that I'm uncertain.
As an aside, 2 and 4 are tautologies. They use concepts (heat reservoirs and entropy) that can't be said to exist without presupposing the second law. Thus adequate statements of the second law are "Heat Reservoirs exist" and "Entropy exists".
chris haynes:
BINGO!
And so is information theory. So why hold that the two are merely analogous?
Are we not, in both cases, talking about the potential states of a system and what is known or unknown of the actual state? The reduction in our uncertainty (entropy) of the state of the system?
What better term than information?
Here is something written by a creationist physicist, Gange:
I mentioned his book at UD:
Forgotten Creationist/ID book endorsed by Nobel Prize winner in Physics
chris haynes:
Can you re-phrase your post in terms of uncertainty and what is known about the state of the system?
If you can, have you not thereby shown that information theory and thermodynamics/statistical mechanics are not merely analogous but may be spoken of in the same way with the same meaning?
Love your posts, thanks!
Salvador:
Sal, are you repudiating your earlier attempts to correlate entropy with disorder here at UD?
I’m happy you like my posts, indeed I’m flattered, but I really think they’re kind of obvious.
As far as information and thermal entropy being more than analogies, I don't see it. I just don't understand what the units of thermal entropy (Btus and degrees Fahrenheit) have to do with information. I remember Dr Sewell discussing this issue, but to me at least, what he said was so much hocus pocus. Perhaps you could do better.
One last nit pick
The classic definition of entropy dS = dQ/T is a tautology.
Temperature cannot be defined without first defining either heat or entropy.
To see this, consider systems with temperatures below absolute zero. An example is an electron spin system with slightly more spinning at high energy than at low. The temperature will be on the order of -1,000,000 degrees Kelvin.
How do you define temperature, without using either heat or entropy so that you get 1 million degrees below absolute zero?
Chris,
I once read how far a typical (ideal) gas molecule travels at STP before colliding with another one. Not very far. So now you require trillions of collisions, all of which are segregated between higher energy and lower energy molecules (or a single collision each, at angles such that no other collisions occur before your gas molecules are segregated).
Next take the probability of success of one such molecule to the power of 6.02 x 10^23 to get the probability of the event.
This probability is so small compared to the age of the universe times a trillion that it would be a miracle on the order of Moses parting the Red Sea or Jesus turning water into wine.
Are you sure you want to incorporate this into your ideas? 😉
-Q
chris haynes:
If you have an axe to grind you hide it well. 🙂
But no, they are not obvious, which is why I attempted to draw you out on some of the things you raised.
In particular this statement of yours:
That is NOT obvious. Even you seem to know it’s not obvious.
But my question to you is, if the second law is indeed about probabilities, as you assert, then what, precisely, differentiates it from information theory?
The units of measurement?
Yes, the probability is very low.
But as you noted, its not zero.
So any statement of the second law, such as those by Clausius, Planck, or Hawking, that says it's zero, is false.
Myself, whatever Dr Hawking may think, I don't think it's okay to make a false statement of a Scientific Law. It isn't hard to state the second law correctly. Even a Creationist like myself did. So why sell yourself short?
Creationists tend to associate the second law with the fall, which is just absurd.
It’s as if the second law did not exist prior to the fall but after the fall the second law was in control of everything.
I’m not a math-whiz, or an engineer, or particularly smart; most of this conversation is above me.
But something I know:
A pile of scrap metal in my backyard, exposed to an input of energy (whether sunshine, or wind, or rain, or gasoline and a lit match, etc), will never self-assemble into a lawn mower.
To get a lawn-mower out of that scrap, I need a minimum of three things:
1) a machine capable of turning that scrap into a mower
2) an input of energy (of the right type – gasoline might work, as long as it’s put into the tank of the mower-making machine and not poured onto the outside of the machine; solar might work; even coal might work; depends on the machine’s requirements)
3) a program to control that machine (sometimes the program is “built into” the machine, in the form of certain-sized gears, and timing mechanisms, and piston-rod lengths, etc)
As I understand entropy / the second law, simply having an open system is only one third of what’s needed to overcome it. The input of energy onto my pile of scrap will only increase the entropy of that system, and will never, by itself, decrease it.
If I understand seventrees in 14, he said the same thing, saying: “In thermal entropy sense, a basic is that work needs to be done for heat transfer in the negative gradient. How is the work provided? This is the case for machinery. Another issue in machinery is how is the energy directed for it to perform its task?”
In other words, you need 1) energy, 2) a machine to convert that energy into useful work, and 3) a program that controls that machine.
That’s just common sense to me.
Chris Haynes, “It is a matter of simple Kinetic Theory that sooner or later a gas will self separate into hot and cold regions.”
I don’t follow. It would appear to me that in the absence of gravity, the hot gas will share its heat with the cold, and all of the gas will become the same temperature. That certainly seems to be what happens when I drop a cup of hot water into a pail of cold.
DebianFanatic observed:
Nicely put.
Still, if you blow up the pile of scrap, one piece might land on the roof. Since the pieces do fit together, Darwinian evolutionists extrapolate this stunning result, pinning their hopes on having enough scrap piles and explosions, so that eventually the result will be a fully functional lawn mower!
Actually that’s not right. Eventually, you’ll get an edger for the lawn as an intermediate result. Additional explosions on subsequent versions will make innovations and improvements. Finally, you’ll get a tractor with a stereo and an Internet connection…all due to “the blind lawn equipment manufacturer”!
To see this method in action, you just have to be willing to wait a billion years, which of course, no one can do to verify their assertion. However, they did witness a gas explosion that sent a lawnmower up into a tree—a triumph of their theory! 😉
-Q
fossil @36:
The compensation argument may be of some relevance in a purely thermal sense when we are looking at systems that are close in space and time, are directly linked in some way, and are affected by each other. But even in that case it isn’t so much of an “explanation” as an observation. You don’t get a decrease in entropy in a system simply by the injection of thermal energy, unless we fall back to the barest concept of entropy being nothing more than a measure of the total quantity of thermal energy in a system.
The compensation argument in regards to OOL and evolution is nonsensical because (i) OOL and evolution are not primarily thermal problems, (ii) even to the extent that energy is needed for OOL and evolution, simply pouring energy into the system isn’t helpful; there needs to be a directing process to channel the energy in useful ways, and (iii) no-one doubts that there is plenty of energy available, whether it be lightning strikes, volcanic vents, the Sun, deep sea vents, or otherwise; energy (at least in terms of raw quantity) has never been the issue.
Furthermore, if we go beyond the purely thermal considerations and think of the Second Law a bit more broadly as a number of scholars (in addition to Sewell) have done, then the compensation argument is simply irrational. When considering functional machines, for instance, can we possibly think that if I build a machine in my garage then — as a direct consequence of my activity — somewhere, in some undefined location in the universe, entropy increases?
Sure, one could say that. But it is just nonsense. There is no possible mechanism for the entropy somewhere else in the universe to respond to what I am doing here in my garage. And on the other side of the coin, if I later accidentally back over my machine with the car and destroy it, or accidentally set the house afire, can we with a straight face argue that somewhere else in the universe another machine must have been built or another house must have arisen from the ashes to “compensate” for what happened here and now in this locale? Or if I have a sentence made out of Scrabble letters and then mix the letters up, has a counteracting decrease in informational entropy happened elsewhere in the universe to compensate for it?
The whole idea of compensation in these areas is wild fantasy and magic of the strangest sort.
Greetings.
Mung at 53: Maybe it’s good you say tend. From CMI:
http://creation.com/the-second.....to-critics
CH (et al):
One of the fallacies of probability reasoning is the confusion of a logical possibility with a credibly potentially observable probability on the gamut of possible observations.
This issue is often seen in the notion that someone “must” win a lottery. Actually, not, unless it is very carefully calibrated to be winnable. That is, there must be a sufficiently high likelihood of a winning ticket being bought, or else people will see that no-one is winning and it will collapse.
In the case of thermodynamic systems, it is, for instance, logically possible for, say, all the O2 molecules in the room in which you are sitting to go to one end, leaving you gasping. However, as there is no good reason to see a strong bias in favour of such a config, the utterly overwhelming bulk of more or less evenly mixed states means that even in a case where a room had no oxygen in it to begin with, if a tank of the gas were released in one corner, in a fairly short while it would spread throughout the room.
In fact, it is overwhelmingly unlikely that we would ever observe the reverse, a bottle spontaneously filling up with pure O2, even once on the gamut of the observed cosmos, of credibly 10^80 atoms and perhaps c 10^17 s duration, with chemical events taking maybe 10^-14 s.
What statistical thermodynamics informs us in no uncertain terms, is that thermodynamic systems of any appreciable size will dampen out the relative effect of fluctuations, leading to a strong tendency of microstates to migrate to the bulk clusters of macrostates.
It is in this context that we can see a connexion between entropy and information, as I linked on previously at 25 above. Namely, it can be seriously developed that entropy is an informational metric on the average missing info [in bits or a measure convertible into bits under relevant conditions] on the particular microstate of a body of particles facing a circumstance wherein the known state is given as a macrostate. This ties to the Gibbs and Boltzmann formulations of entropy at statistical levels.
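(A side note on units: the following is a minimal Python sketch, not from Sewell's paper, of the conversion this informational reading implies, namely that one bit of missing information corresponds to k_B ln 2 joules per kelvin of thermodynamic entropy.)

# Converting between bits of missing information and thermodynamic entropy (J/K).
import math

k_B = 1.380649e-23                    # J/K, Boltzmann's constant
J_PER_K_PER_BIT = k_B * math.log(2)   # ~9.57e-24 J/K per bit

def bits_to_entropy(bits: float) -> float:
    """Thermodynamic entropy (J/K) equivalent to a given number of bits."""
    return bits * J_PER_K_PER_BIT

def entropy_to_bits(S: float) -> float:
    """Missing information (bits) equivalent to an entropy S in J/K."""
    return S / J_PER_K_PER_BIT

N_A = 6.02214076e23                   # one mole of two-state 'unknowns'
print(bits_to_entropy(N_A))           # ~5.76 J/K, i.e., R*ln(2)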
This leads to the point that if, say, we were to decant the parts for a micro-scale UAV, say 10^6 particles small enough to undergo diffusion (let's just say micron size), into a vat with 1 cu m of fluid, we would face 10^6 particles in a field of 10^18 one-micron cells. It is easy to see that random molecular interactions (of the sort responsible for Brownian motion) would overwhelmingly tend to disperse and mix the decanted components.
The spontaneous clumping of these particles would be overwhelmingly unlikely, as it would require an unweaving of diffusion — just do the ink drop in a glass of water exercise to see why.
Beyond clumping, the spontaneous arrangement of the clumped particles into a flyable config would also be further maximally unlikely, as the number of non-functional clumped configs vastly outnumbers the functional ones.
Indeed, this can be seen to be a second unweaving of diffusive forces.
Where also, we now see the significance of Sewell’s deduction that entropy discussions are closely connected to diffusion.
We also can see that if we were to agitate the vat, it would not help our case. That is, Sewell’s point that if a config is overwhelmingly unlikely to be observed in an isolated or closed system, it is not going to be suddenly materially more likely if we were to simply open up the system to blind forces.
But, if we were to put into our vat a cooperative army of clumping nanobots and assembly bots working under a program, assembly of the UAV from components would be far more explicable. For work is forced ordered motion, so if we have organising instructions and implementing devices, we can search out, categorise, collect, then cluster and assemble the parts according to a plan.
Such work of complex, functionally specific organisation is a commonplace all around us and it normally rests on organised effort working according to a plan based on an information-rich design. With required energy flows achieving shaft work while exhausting waste energy that contributes to the overall rise in entropy of the observed cosmos.
All of this is of course very closely connected to the too often willfully obfuscated challenge to spontaneously get to origin of cell based life in Darwin’s warm little pond or the like. Clumping and assembling work have to be accounted for, and associated information, including coded algorithmic info used in DNA, RNA and proteins.
The compensation argument fallacies and dismissiveness to the likes of a Hoyle et al, do not even begin to scratch the surface of that challenge.
But, what is quite evident is confirmation bias of the most blatant sort, leading to clutching at straws and belief in materialistic magic that in imagination manufactures complex functionally specific organisation out of diffusive and known disorganising forces. Failing, spectacularly, the vera causa test that to explain unobserved OOL, we should first show that forces and factors used are capable of the effect, on empirical observation here and now.
Otherwise, all is materialist fantasy, driven by ideological a prioris.
KF
PS: Just to get an idea, we may look at the number of arrangements for 10^6 items, imagined as a string, 1,000 each of 1,000 types:
[10^6]!/[1,000!]^1000 possible arrangements & using accessible factorial values . . .
= [8.2639316883×10^5,565,708]/[4.0238726008×10^2,567]^1000
Take logs, go to 4 sig figs:
log[x] ~ [5,565,708 + 0.917] – [2,567,000 + 604.6]
~ 2,998,104.3
x ~ 2 * 10^2,998,104
If all 10^80 atoms were each furnished with a string of this type and were blindly searching a new config every 10^-14 s for 10^17 s, they would scan 10^111 possibilities; moving up to working at Planck time rates for 10^25 s, we see about 10^150.
There is just too much haystack and too little resource to carry out a thorough enough search to find the needle in it.
And that’s just for the string of elements.
10^18! is in "computer reduced to slag" territory.
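As a quick check on the arrangement count in the PS above, a short Python sketch (standard library only) using log-factorials:

# Count = N! / (1000!)^1000 for N = 10^6 items, 1000 each of 1000 types.
# Work in log10 via lgamma to avoid overflow.
import math

def log10_factorial(n: int) -> float:
    return math.lgamma(n + 1) / math.log(10)

N, kinds, per_kind = 10**6, 1000, 1000
log10_count = log10_factorial(N) - kinds * log10_factorial(per_kind)
print(f"~10^{log10_count:,.1f} arrangements")   # ~10^2,998,104, far beyond the ~10^150 search bound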
Again, I thank you for your responses. Please forgive my not answering all of them.
Mr Mung asked the key question of this whole thread: “What, precisely, differentiates it (the second law) from information theory? The units of measurement?”
My answer: I don't know. My gut feeling, reinforced by the units of measurement issue, is that they're connected only by a loose analogy, nothing more.
For those who disagree, I need to say I get nothing from your long essays, because I don't believe you understand the terms you throw around. I urge you to do three things that Dr Sewell hasn't done. 1) State the second law in terms that one can understand. 2) Define entropy precisely. 3) Explain what Btus and degrees of temperature have to do with information.
Based on my present understanding, here is what I would give:
1) Stable States Exist. A system is in a “Stable State” if it is hopelessly improbable that it can do measurable work without a finite and permanent change in its environment.
2) Entropy is a property of a system that is equal to the difference between i) the system’s energy, and ii) the maximum amount of work the system can produce in coming to equilibrium with an indefinitely large tank of ice water.
3) Nothing.
Finally, Dr Moose asked about the tank of air.
If you wait a bozillion years, the molecules in the tank will assume every possible state, one of which has all the fast ones at one end and the slow ones at the other end. This is much like a turning drum of US and Canadian pennies. Sooner or later the US ones will be at one end, and the Canadian ones at the other end. I'm sorry I can't explain it better.
Please let me correct an error in my definition of entropy
Entropy is a property of a system that is equal to the difference between i) the system’s energy, and ii) the maximum amount of work the system can produce in attaining a Stable State that is in equilibrium with an indefinitely large tank of ice water.
chris haynes:
Thanks for your thoughtful comments. I still need to go through everything you’ve written to make sure I understand it fully. Just one point of clarification:
I’m not sure this is a helpful statement of the Second Law. Do stable states exist? Sure they do. But so do unstable states. The question is for how long, and in which direction things move.
We don’t need a Second Law to know that stable states exist. We can understand that from simple observation. What a law should do is help us understand why we observe what we do, or how certain forces will interact, or what predictions we can make given initial conditions.
So the Second Law is not simply a restatement of the basic observation that stable states exist. Rather it needs to be explaining or predicting or understanding directional movement.
If we were to reword it slightly, then it would be more substantive. For example, we might say: “Without a counteracting force, systems tend toward a stable state.”
Wording along those lines would be more substantive and meaningful and would be, I believe, more in line with what the Second Law is really trying to communicate.
chris haynes @61 responding to Dr Moose:
I’m not sure this is any better than the multiverse idea. 🙂 What you are essentially saying is that if we have enough trials then eventually every possible state will have existed at some point, so we shouldn’t be surprised at this very unusual state of affairs.
This is problematic for a couple of reasons:
(i) Practically. We don’t have a bazillion years for anything. We’re interested in what can be done, here and now, within reasonable timeframes.
(ii) Logically. Your molecules example would seem to undercut the whole point of the Second Law. What you have essentially said is that the Second Law will force a system to a particular state . . . unless we wait long enough, in which case all states will eventually show up — presumably even those that are unstable. That seems directly contradictory to what the Second Law describes. It is essentially a reliance on the chance occurrence of “every possible state” showing up eventually.
scordova @39:
Sal, instead of “order” and “disorder”, which I agree is a terrible way to think about the Second Law (or ID or designed systems, for that matter), I’m wondering if it is possible that what he was trying to describe is really better understood as “uniformity” or “non-uniformity”?
You stated the Clausius view. The Boltzmann view is conceptually more accurate (not necessarily practical) since it merely counts the ways the particles can be configured in terms of position and momentum or energy. The genius step was connecting the Clausius view with the Boltzmann view.
Clausius:
S = Integral(dS) = Integral(dQ/T)
Boltzmann
S = k log W
Genius connection:
S = Integral (dS) = k log W
The reason
S = k log W
has significance because if we let k = 1, and use the symbol I instead of S, it looks like Shannon!
I = log W
where W is number of equiprobable microstates, or equivalently
I = -log (P)
where P = 1/W
which looks like Dembski’s claim
I = -log2(P)
Example, information of 3 fair coins. There are 2^3 = 8 microstates, W = 8, P = 1/W = 1/8
I = log2 W = log2 (8)= 3 bits
or
I = -log2 ( 1/W) = -log2 (1/8) = 3 bits
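For anyone who wants to check that arithmetic, here is a minimal sketch in Python (nothing assumed beyond the equiprobable-microstate counting just described):

import math

n_coins = 3
W = 2 ** n_coins            # 8 equiprobable microstates
P = 1.0 / W                 # probability of any one microstate

print(math.log2(W))         # 3.0 bits, i.e. I = log2(W)
print(-math.log2(P))        # 3.0 bits, i.e. I = -log2(P), the same number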
Bill said as much here:
More bits = more entropy
Since an increase in complexity requires an increase in bits, this also means entropy must INCREASE for complexity to increase. You have to INCREASE Shannon entropy, and you have to INCREASE thermal entropy, to increase complexity as a general rule.
Most creationists and IDists have the idea exactly backward.
A living human has 100 trillion times more Shannon and thermal entropy than a dead cell. The calculations bear this out.
I suppose one can define other kinds of entropy where the relationship is reversed relative to the traditional ones I've outlined, and I suppose one would have to expand the 2nd law to deal with those sorts of entropies. X-entropies were proposed by Dr. Sewell. But maybe it is helpful to understand the traditional entropies first. I provided them:
1. Clausius
2. Boltzmann
3. Shannon
4. Dembski
AVS @38:
It is good to see that you are backpedaling a bit and acknowledging that your assertion was “a simple thought experiment.” But you might do well to fully dismount.
Your idea is not new, and if you had actually read Sewell’s paper carefully you would know that he spent a significant amount of time dealing with it, so your assertion that no one really came up with an argument against it doesn’t wash.
Furthermore, more than one person, on this very thread, has shown why the idea of the Sun “driving an increase in order on Earth” is flawed, both in terms of actual thermodynamics, as well as in terms of your particular statement regarding the “inevitable” origin and evolution of life on Earth.
You need to be a bit more careful than saying things like “none of you wanted to think,” particularly when it is evident there are many people here who have thought about this particular issue a lot longer and much more in depth than you have.
I apologize for the somewhat harsh nature of my assessment above. We welcome you to UD and hope that you will participate with a sincere desire to learn about ID and to engage the issues. To the extent you are willing to do that, we are grateful for your participation and, I trust, all can learn something from your contribution as well.
AVS:
How is that coming along on Mecury?
Mercury…
So EA, in your opinion, which was the most damaging refutation of my thought experiment.
Joe, I can no longer take you seriously. You have lost all respect, and I have lost all hope for you ever exhibiting a basic level of scientific knowledge. Bye
AVS, no one takes you seriously and seeing that you are a proponent of evolutionism you don’t know anything about science
Does Mercury receive an influx of energy from the Sun? Yes.
AVS #71
Please, don’t offend my friend Joe, thank you. To accuse anyone of us of lacking “a basic level of scientific knowledge” in no reasonable way helps your evolutionist cause.
Aww, nimrod, that so nice of you. It’s reasonable when it’s the truth, sorry.
niwraD, Thank you but AVS cannot offend me because he is beneath me. 😉 He is just upset because I keep correcting him and exposing him as a poseur. His only recourse is to try to attack me personally but it ain’t working out so well.
1) I disagree with Dr Dembski
He wrote:
“information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).”
I say that Entropy is NOT a probability, or a logarithm of a probability. Probabilities don’t have units of Btus per degree. Entropy does.
2.) Your statement of the Second law, “Without a counteracting force, systems tend toward a stable state.” is a false statement, on two counts.
First, systems in metastable states will remain unchanged indefinitely. Thus they do not “tend toward a stable state”
Second, as shown by the tank of air, a system will eventually tend toward an unstable state, if you wait long enough.
3) On your objections to my tank of air
(i) Practically.
So what? A Scientific Law is a statement of an observed regularity in the physical Created world. A statement of a Scientific Law should always be true. The example shows that other statements of the second law are false.
(ii) Logically.
The fact is that those unstable states will eventually show up. If a fact is “directly contradictory to what the Second Law describes”, it is because the statement of the second law is false.
The very low probability of the tank of air example occurring in the life of the universe is well covered by my definition of a Stable State: "A system is in a 'Stable State' if it is HOPELESSLY IMPROBABLE that it can do a measurable amount of work without a finite and permanent change in its environment."
Eric,
To set the record straight historically where all the confusion began, I refer to the entropy Website that provides the quote of a passing remark by Boltzmann that has resulted in over a hundred years of misunderstanding:
From
http://entropysite.oxy.edu/boltzmann.html
To understand entropy, you practice calculating it, then the fog clears.
To restate. Entropy was first defined by Clausius using thermometers and measuring heat:
delta-S = (heat change)/ Temperature
or in terms of differentials
dS = dQ/T
Simple example:
Let us suppose we have a 1000 watt heater running for 100 seconds that contributes to the boiling of water (already at 373.2 K). What is the entropy contribution due to this burst of energy from the heater? First I calculate the amount of heat energy input into the water:
1000 watts * 100 sec = 100,000 Joules (since 1 watt = 1 Joule/sec)
entropy change = delta S = 100,000 Joules / 373.2 = 268 Joules/ Kelvin
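As a quick numeric check of those figures, here is a small sketch in Python (using the same 1000 W, 100 s, and 373.2 K values as above; nothing else is assumed):

T_boil = 373.2              # K, water already at the boiling point
power = 1000.0              # watts, i.e. Joules per second
duration = 100.0            # seconds

Q = power * duration        # heat added: 100,000 Joules
delta_S = Q / T_boil        # Clausius: delta S = Q/T for an isothermal process

print(Q)                    # 100000.0 J
print(delta_S)              # ~268 J/K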
what I did was essentially:
S = Integral(dS)
My notation is a bit sloppy, but hopefully that gets the point across. The scientific breakthrough was realizing
S = Integral (dS) = k log W
so
S = k log W
which evolved into Shannon’s by letting k = 1 and using “I” as a symbol instead of “S” (Shannon used “H” in his original paper)
I = log W
and this evolved into Dembski’s
I = log W or in the more familiar form
I = -log2(P)
Now I suppose if one is really determined, one can redo all the units of Joules and Kelvins and we could relate
S = Clausius = Boltzmann = Shannon = Dembski
S = Integral (dS) = k log W = log W = -log2(P)
by letting k = 1 and then redoing Joule/Kelvin to make k = 1
That is how thermodynamics connects to information theory, but it doesn’t have anything to say about order or uniformity without creating something like the notion of X-entropy.
The gargantuan insight was
Integral (ds) = k log W
It required some miserably difficult math to prove, but it was laid out by Josiah Gibbs in this classic book of physics:
http://en.wikipedia.org/wiki/E....._Mechanics
It resorted to this theorem:
http://en.wikipedia.org/wiki/L.....miltonian)
So that is a summary of how thermodynamics eventually connected to ID theory, but the formula actually says entropy must INCREASE for design to emerge, not decrease.
From Bill Dembski:
And as I showed above, Dembski's information measure connects to thermodynamic entropy:
S = Integral (dS) = k log W = log W = -log2(p)
provided we normalize the units so k = 1
The hard part is invoking the Liouville theorem and classical mechanics to connect all the physics together, but the end result is just simple counting of bits or just measuring improbability.
I don't understand
“1” is a pure number. No Units
"Unnormalized" k has a value of 1.97 Btus per mole per degree R
How do you “normalize” k, so that units of energy and temperature disappear?
Chris,
To understand, here is about the best explanation:
http://www.science20.com/hammo.....ropy-89730
So you ask:
Exactly! Entropy doesn’t need units, but we use units so we can make measurements in terms of heat calorimeters and thermometers since counting microstates is not practical!
Kelvin is a derived unit; it consists of energy per degree of freedom times some pure number constant.
A degree of freedom is like the number of bits. When you take the log of the number of microstates you end up with number of bits or number of degrees of freedom. 3 coins have 3 bits of information, or three degrees of freedom.
3 coins have 2^3 = 8 microstates, but only 3 degrees of freedom. 4 coins have 2^4 = 16 microstates, but only 4 degrees of freedom. In like manner a tank of gas has (gasp) the following number of microstates:
(Avogadro's Number * buzzillion positional and momentum states)!
The number is large because we take the number of particles in the gas tank and the number of possible positions and momenta, using some approximate small integer-like units (we drop the assumption of continuous coordinates and momentum and go with a quasi-quantization). You get a monstrously large number of microstates.
Take the log of that insanely large number and that’s the number of degrees of freedom. Since we don’t actually count microstates in the lab but rather use thermometers and calorimeters, we hence measure entropy in J/K which boils down to a count of degrees of freedom multiplied by some scaling coefficient.
So let me define a quantity: alpha = a pure number scaling coefficient that converts things into a bit measure
Degree Kelvin = alpha * Joules / (degrees of freedom)
Bit Entropy = (Joules/Kelvin) * alpha
= [Joules / (alpha * Joules / (degrees of freedom))] * alpha
= (degrees of freedom) = "bits" or some other logarithmic measure of microstates
So you can make k = 1: you just multiply Joules/Kelvin by some pure number coefficient like alpha.
When you feel warm, it means more energy (Joules) per degree of freedom.
Chris,
Profuse apologies. To normalize to k = 1,
divide the Clausius integral as follows by Kb
S = Integral (dS) / Kb
where Kb is Boltzmann’s constant:
Kb = 1.3806488 × 10^-23 J/K
So for my above example of delta S with boiling water:
delta S = 268 Joules/ Kelvin
to normalize it
S(normalized) = (268 J/K ) / Kb =
(268 J/K) / (1.3806488 × 10^-23 J/K) = 1.94 x 10^25
degrees of freedom
The number of degrees of freedom is the natural log of the number of microstates, so
W = Exp( 1.94 x 10^25 )
I erred in my derivation above with Dembski because Bill uses Log2 instead of Ln but that can be adjusted. 🙂
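To make that normalization concrete, here is a sketch of the same arithmetic in Python (Boltzmann's constant and the 268 J/K figure are taken from the comments above; the division by ln 2 is just the log2-versus-ln adjustment mentioned):

import math

k_B = 1.3806488e-23             # J/K, Boltzmann's constant
delta_S = 268.0                 # J/K, from the boiling-water example

S_normalized = delta_S / k_B            # ~1.94e25, dimensionless (natural-log units)
S_bits = S_normalized / math.log(2)     # ~2.8e25 bits, if one prefers log base 2

# W = exp(S_normalized) overflows an ordinary float, so report log10(W) instead
log10_W = S_normalized / math.log(10)

print(S_normalized)             # ~1.94e25 degrees of freedom
print(S_bits)                   # ~2.80e25 bits
print(log10_W)                  # ~8.43e24, i.e. W ~ 10^(8.43e24)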
AVS @70:
Well, your thought experiment didn’t contain any detail about how the Sun’s energy could possibly cause an inevitable OOL and evolution, so there wasn’t much to refute.
As a general matter, however, I would note that Dr. Sewell spent a fair amount of time addressing the “Earth is an open system” idea and showed why it doesn’t make a bit of difference to the question of OOL and evolution. Just pouring energy into a system doesn’t do anything meaningful to help the wildly improbable somehow now become probable.
In part of my comment #57 above, I also addressed this issue:
“The compensation argument in regards to OOL and evolution is nonsensical because (i) OOL and evolution are not primarily thermal problems, (ii) even to the extent that energy is needed for OOL and evolution, simply pouring energy into the system isn’t helpful; there needs to be a directing process to channel the energy in useful ways, and (iii) no-one doubts that there is plenty of energy available, whether it be lightning strikes, volcanic vents, the Sun, deep sea vents, or otherwise; energy (at least in terms of raw quantity) has never been the issue.”
No-one doubts that the Earth receives energy from the Sun. No-one doubts that there are also other energy sources potentially available for OOL. Getting available energy is a minor and trivial part of the problem. As mentioned on the other thread, these are the kinds of issues that need to be dealt with:
http://www.uncommondescent.com.....ent-494466
Finally, my comment #8 shows why the “closed system” vs “open system” debate is a red herring in terms of OOL and evolution.
chris:
Well, I’m not wedded to that. I simply offered it as a potential clarification to the “stable states exist” definition, which doesn’t seem to say anything.
Hmmm. “Remain unchanged indefinitely.” Sounds pretty stable. 🙂
Seriously, though, it sounds like a definitional game at this point. The only thing metastable states demonstrate is that something can be stable under a certain set of conditions, but when exposed to a different set of conditions it can transition to a more stable state. That is true of everything, presumably — it will tend toward the most stable state available, given the available conditions. That is a confirmation, not a refutation, of what I wrote.
I hear you keep asserting this, but we haven't seen any reason to think it is true, other than the general assertion that if we "wait long enough" then "every possible state" will eventually show up. With due respect, it doesn't seem like the kind of thing we could base any kind of physical law on. If we were actually trying to apply this in practice, then at any given moment when we think (based on all our understanding of physics and chemistry) that the system should be in state X, it could be in state Y instead. Pretty strange "law" that.
I wonder if you would have the same view with a solid object, say something like the temperature profile of a solid steel rod. Are you saying that if we wait long enough, eventually one end will heat up and the other end will cool down, because all possible states will eventually be manifest? If so, then I'm afraid I can't agree. On the other hand, if you acknowledge it won't happen with a solid object, then perhaps your tank of molecules is not so much a question of thermodynamics. We see the molecules moving and say, "Well, gee, if they move around randomly for long enough, then eventually any state can arise." Whether or not that is true, it hardly seems the basis for forming an understanding of the processes at work.
—–
Again, I think you’ve put together some great thoughts and you probably have more experience with this issue than I. I’m just trying to pin down what is being proposed to see if I can understand it better.
Dear Dr. Scordova
I am profoundly grateful for what you have written.
That thermal entropy should have no units is an astonishing insight for me. I don't yet fully understand (and thus accept) what you wrote, but I will understand it and hope to accept it. I wanted to thank you for your help now.
Based on the insight you have given me, I now see how information entropy can indeed be the same as thermal entropy. I want to use that to better understand what you call the Clausius view, and how it fits in.
I'm unsure what, if anything, this has to do with evolution or the origin of life. As regards the former, I don't care. 5 years ago Dr EV Koonin stated that "the modern (evolutionary) synthesis has crumbled beyond repair. Glimpses of a new synthesis may be discernible in blah blah blah" So I'm not going to waste time on Evolution until its proponents can tell me what Evolution is.
As regards the latter, Naturalistic Abiogenesis is so obviously "hopelessly improbable" that a sane person doesn't need Dr Dembski's formulas to quantify just how hopelessly hopeless it is.
Finally, this insight confirms my claim that most statements of the 2nd law, such as those of Kelvin, Planck, and Hawking, are false, as they make absolute claims while the second law is about probabilities. I confess to being unimpressed by Dr Hawking, except for his talents on the Talk Show circuit, but I'm still surprised that 100 years after Boltzmann, he would make such a silly error.
Very truly yours
Chris Haynes
If there wasn’t much to my idea, then it shouldn’t be hard to refute, no?
I understand where you are coming from though, “yes there’s a huge energy input, but how does it get used?”
That is where things get complicated, which is why I didn't bother to continue my thoughts hehe. But anyways I will try to do it quickly now. Basically, the first step would be the light energy driving endergonic chemical reactions, making slightly more complex molecules than what was present at the start. As molecules become more complex, they gain properties that allow them to have certain functions. Early small polypeptide-like strands, simple amphipathic molecules, simple sugars and lipids. These are the molecules of life, and while it is not a simple feat, they eventually give rise to early cells, with a simple membrane, protein-like molecules, and eventually some early nucleotides. While it may not resemble life as we know it today, it is nonetheless a living organism.
Joe:
Entropy!
chris haynes yesterday:
chris haynes today:
Oh my:
The Protein Folding Problem and its Solutions
S = logW
A Farewell to Entropy. Statistical Thermodynamics Based on Information
This book develops Statistical Thermodynamics from Shannon’s Theory of Information.
Chris Haynes,
I cleaned up the calcs above and used slightly different notation so as to please the formalists out there.
Entropy examples connecting Clausius, Boltzmann, Dembski
Now, as to what you said about the 2nd law being false: it has been well known since Boltzmann's time that it is about probability, NOT absolutism, especially for systems of 2 or 3 or a few molecules.
What this means is that for nano-scale systems, some of the deviations from the average can be taken advantage of as you have suspected. Example from the Physical Review Letters:
Brownian Refrigerators.
Thank you, but I’m not Dr. Cordova, merely Mr. Cordova, MS Applied Physics, BS EE, BS MATH, BS CS. 🙂
Never said “entropy is correlated with disorder”. That’s just your usual clueless and baseless trolling.
And how do you propose a replicating cell becomes organized without translated information? What organizational principles are at work and how do they physically manifest themselves to provide constraint and replication, without specification? When does a system arise to resolve the necessary discontinuity between whatever source of constraint you are proposing and the cellular objects being constrained?
Good questions upright. As far as organization goes, these cells would not have much. The simple membrane they have allows the cell to separate the internal environment from the external, providing a suitable environment for basic metabolic reactions to occur as some molecules can cross the membrane while others cannot. This builds up energy in the form of concentration and electrical gradients. Replication in the earliest of organisms would simply be driven by the increasing instability of the membrane as it increases in size and also by environmental movement, as bicelles have been shown to form spontaneously in water and can be forced to replicate simply by vibrating the water environment. These cells would be very different from today’s cells, but their lack of much order and constraint allows them to undergo molecular evolution at a rapid pace with a lesser possibility for “detrimental” changes.
The most appropriate term in technical literature is “algorithmic entropy”. More algorithmic entropy (algorithmic complexity) implies more disorder.
Here is a comparison contrast:
Comment on Shannon Entropy, Thermodynamic Entropy, Algorithmic Entropy
Dr. Sewell’s X-entropies are probably akin to algorithmic entropy. But as I try to caution, I recommend understanding traditional entropies first as is taught in chem, physics, and engineering courses. Algorithmic entropy is a very different animal.
An obscure definition of “configuration entropy” was used by Bradley, Thaxton, Olsen but some have complained it was idiosyncratic and some physicists don’t like the way material scientists have used the notion. I try not to go there.
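Kolmogorov complexity itself is uncomputable, but compressed length gives a crude, hedged proxy for the intuition that more algorithmic entropy goes with more disorder. A minimal sketch in Python, using zlib only as a convenient off-the-shelf compressor (it is an illustration, not anything specific to Dr. Sewell's X-entropies):

import os
import zlib

ordered = b"AB" * 5000                    # highly ordered 10,000-byte string
random_bytes = os.urandom(10000)          # 10,000 random bytes, maximally disordered

# Compressed length is a rough upper-bound proxy for algorithmic complexity.
print(len(zlib.compress(ordered)))        # small: the repeating pattern compresses well
print(len(zlib.compress(random_bytes)))   # ~10,000 or more: random data barely compresses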
Well then, start with whatever you propose, and tell us how specification rises across the discontinuity; how that specification is achieved; and how the discontinuity is maintained. Since we already know from living systems that this is what we must arrive at, give a plausible first step.
Agitation is not an answer.
Upright, I’m not a word smith. Can you please spell out exactly what you are asking me? Thanks.
Certainly you realize that specification is required for the nascent genome to constrain (organize) and replicate the living cell. I am asking how that specification rises across the discontinuity between whatever (you suggest) is doing the constraining and those objects within the cell that must be constrained. I am asking how that rises, and how it maintains the necessary discontinuity. Again, we know this is where we are going because that is the way we find it. Do you have any proposal that actually addresses any of these physical requirements, and if not, then what is the explanatory value of your proposal with regard to the problem.
chris haynes @85:
Well said.
At this stage the cells, I would think, don't even have what we would call a genome. They are merely made up of simple protein- and nucleotide-like molecules with some simple sugars and lipids. Replication, as I have said, occurs simply because the membrane is destabilized by both an increase in size and energy input. While the components of the cell may not split evenly between the two daughter cells, they don't need to because, as I have said, the cell at this stage is relatively insensitive to what we would call "detrimental" changes.
AVS @86:
Thank you for taking time to lay it out. What you have written, however, is not an explanation, but rather just a restatement of the hypothetical abiogenesis story: start with a few molecules, they react, over time they become more complex, eventually they turn into simple life.
Everybody on this site who is critiquing abiogenesis is familiar with the basic outline of the idea. The devil, as they say, is in the details. There are numerous insurmountable problems with the materialistic abiogenesis story. Getting energy into the situation is probably the very simplest and easiest of all the issues (although having the right amount of energy and the right kind is, in fact, more difficult than one might initially assume).
There are many reasons to conclude that a naturalistic abiogenesis scenario is essentially impossible within the resources of the known universe. But probably the most significant problems include getting information and control mechanisms in place. That is the fundamental and foundational problem. And it makes no difference whether we are talking about an “open system” or a “closed system.” This is why the “compensation” argument, or the “Earth is an open system” idea doesn’t cut it. Not so much because it isn’t true (though even that point could be argued); but because it doesn’t address the central issues; it is simply irrelevant to the key questions at hand.
Again, I’m not completely on board with Dr. Sewell’s approach of using Second Law terminology to drive home the point, though I am intrigued and am willing to give it a fair reading. More importantly, whatever drawbacks his use of terminology may have, his overall point is sound, namely that these informational and control mechanisms do not (by all experience and under certain broader formulations of the Second Law) arise by purely natural processes.
I do agree, EA, that the generation of what we would call a “genome” through molecular evolution is no simple task, but I would not say it is impossible. Anywho, I have outlined what I think is a plausible mechanism for the generation of the first living organisms, feel free to tell me what is wrong with it. I’m not sure why a certain amount or type of energy would be required for anything I have said.
#100
So… you have no proposal that actually addresses any of the identifiable necessary material conditions of an autonomous self-replicating cell.
Okay.
I’m not sure why I should. My conversation with EA was about the earliest of living cells. These cells would still be far from self-replicating autonomy. But they are self-replicating in the most simple sense.
Apparently you are not sure why a proposal ostensibly meant to shed light on the origin of life (via the organization of the cell) should address the necessary material conditions of organizing the cell?
Not surprisingly, your proposal begins with a hypothetical state, and advances it only to yet another vague hypothetical state.
Perhaps you will follow on with a proposal that advances it to the state that we actually find it in nature?
It would seem that this would be the only one of any real explanatory power.
Your association of life and organization is your problem. I outlined the steps to get to the first living cell. These cells were no doubt highly unorganized, but they were living. Saying that this is not enough, and that I must explain the evolution of these first cells to the autonomous, self-replicating cells we see today is just you moving the goalposts.
Your association of life and organization is your problem.
I outlined the steps to get to the first living cell.
These cells were no doubt highly unorganized, but they were living.
Saying that this is not enough
that I must explain the evolution of these first cells
to the autonomous, self-replicating cells we see today
is just you moving the goalposts
😉
AVS at 102:
Not all systems require the same source of energy. But say your hypothetical example needs just the sun as energy source. The problem is that too much sun can lead to destruction. Homeostasis is a concern. Also, the issue of homochirality and its relationship to energy needs to be considered.
AVS couldn’t outline how to get the first living cell via blind processes if its life depended on it.
My questions to AVS were not intended to elicit any particularly meaningful answers from him.
We all recognize that those answers don’t exist in the materialist framework.
However… after watching him willfully misrepresent ID proponents as a means to attack them, then follow on by being a wholly belligerent ass in virtually every comment he types out, I just wanted to demonstrate that not only does he not have any answers, he doesn't even recognize the questions.
I also wanted to demonstrate that the absolute personal certainty that underlies his unnecessary and bigoted belligerence is completely without empirical foundation. It ain't about the science.
Of course, none of this will mediate AVS in the least. He is a complete thumper, and a clever one. Nothing will knock him off his course, particularly the science. Or honesty. Or logic. Or mystery. Or reality. Expect nothing else.
AVS @102:
No you haven’t. You haven’t outlined any plausible mechanism. You have just restated a vague, detail-free, unspecified, hypothetical just-so story. It isn’t remotely plausible.
Let’s say I go around telling people that I have a detailed explanation for how we can travel to other star systems in just a few days. When they ask for particulars I say: First we build a test ship to travel to Mars, then we discover new and stronger alloys for the ship’s hull, then we invent a small and massively-powerful fusion power source, then we discover how to bend space-time so that we can travel faster than light, then we build a ship incorporating all that technology and use it to travel to other star systems.
If people, quite reasonably, point out that I haven’t provided any meaningful detail, that the end goal faces serious technological and basic conceptual hurdles, that all I’ve done is describe the basic science-fiction hypotheticals that have been around for a hundred years, it won’t do to say: “Now you’re moving the goalposts. I’ve given you a ‘plausible mechanism’ for getting to other star systems.” I would be laughed off the stage.
Until you actually delve into the details of abiogenesis, until you actually start looking at what it takes to get from A to B to C, your fantasy story is just that — pure fantasy. It has no basis in actual chemistry or physics; it has no basis in reality; it is nothing more than a materialistic creation miracle story.
I invite you to take some time to actually look into the details of what would be required for abiogenesis. It is a fascinating intellectual journey and one well worth embarking on.
cue the sales pitch
EA, your Mars example is a poor comparison. It is built on discovering and building new things. My outline, while I know it is not detailed, is built on things we have already discovered. Experiments have been done that produce more complex molecules from simpler ones, including amino acids, protein lattices with catalytic activity, sugars, and simple amphipathic lipid molecules.
Yes upright, we all know you were just asking a question that you know is extremely complex and that no one currently has an answer to. It’s a typical UD tactic. You just word your questions differently. I very much realize that the rise of order and complex systems in the cell is an important question, but I never intended to answer that question when I originally posted. Even if I did try to think out a mechanism that answers your question, as EA has just demonstrated, you guys would just sneer and say “oh it’s a just-so fairytale.”
Have a good one, guys.
I’m sure you’re right AVS. With a little sunlight and agitation, functional organization sets in, and living things arise that can replicate themselves. That whole information thingy – where scripts of prescriptive information get distributed from memory storage out to the cell and start doing stuff – all that gets tacked on later somehow.
😉
Next time, lose the unnecessary condescending BS and try addressing the actual issues (as they actually exist in nature), and then we will see.
AVS:
Yet nothing that demonstrates a living organism can arise from just matter, energy and their interactions.
So thanks for demonstrating your position is pseudoscience BS.
Thank you for misrepresenting my argument, Upright. External energy inputs from the sun and the environment on Earth drive only a very low level of organization; after more complex molecules start to form, their interactions are what slowly allow slightly more and more organized systems to arise. After the generation of the first living cells, the external energy is not as important as what is going on internally in the cell. And don't worry, there won't be a next time.
Misrepresentation?
Dear AVS
Book your flight to Stockholm, Sweden. You’ll get the Nobel Prize
You say “I have outlined what I think is a plausible mechanism for the generation of the first living organisms.”
The next step is a piece of cake. Just demonstrate it. All you need would be in a two bit chemistry lab.
Go show these pompous clowns, from Oparin, to Miller and Urey, to Szostak how to do what they’ve bungled for 90 years.
As a Creationist, I acknowledge complete defeat
Good work!!!!
The Clausius view dominates in practice because it is practical. You mostly need thermometers and some basic lab equipment.
The Boltzmann view is important when we start dealing with things at the molecular level.
The Clausius view emerged without any need to even assume atoms existed, and might well have been compatible with Caloric heat theory where heat was viewed as some sort of physical fluid that got transferred.
The Clausius view is important in building heat pumps, refrigerators, steam engines.
I'm not quite sure, but I think the Boltzmann entropy was possibly conceived without fully realizing it was connected to Clausius entropy. It took a lot of miserably difficult math to figure out that two different calculations could arrive at the same quantity:
Integral (dS) = k log W
where dS = dq/T
Had that miserable math not been done by scientists like Josiah Gibbs, we probably could not connect thermodynamics to information theory and probability at all.
It was made possible by the Liouville theorem, but ironically, that’s sort of flawed because using the Liouville theorem assumes atoms behave classically, which isn’t quite the case since they behave quantum mechanically! Gibbs got the right answer without exactly the right assumptions! LOL!
His famous work:
Elementary Principles in Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics, was sheer genius combined with a little luck that atoms behave just classically enough for his theory to work. By classically, I mean he modeled molecules like infinitesimal billiard balls and glossed over the contradiction that, by modeling them as infinitesimal points rather than finite-size spheres, they wouldn't ever collide with each other!
Nobody has figured out abiogenesis. Let's start with that. But it is also unscientific to immediately turn to deus ex machina to explain it. It is still a work in progress. The issue, as I see it, is not that certain molecules can spontaneously combine to form proteins, or RNA, but how did they "evolve" to actually correspond to information exchange? Which came first, the RNA or the proteins? And how did a code in the RNA come to correspond to a specific protein? And how the heck did all the other proteins evolve that are needed to translate the code from RNA (or later DNA) into proteins without there being an evolutionary advantage in any of the intervening steps? Damn difficult questions, but that doesn't drive me to design yet. It's just a challenge to exhaust all the known forces to explain it before I go hunting for an other-worldly one.
Thanks, billmaz. You are asking the right kinds of questions and you highlight some of the key challenges.
Which known forces would you be referring to? The known ones have been pretty well exhausted. Is there some previously-unknown force hoped for? Or perhaps some previously-unknown interaction that will hopefully bridge the gap?
And the inference is not: “Abiogenesis is hard, so deus ex machina.”
The inference is: (i) naturalistic abiogenesis fails on multiple counts, based on the current state of knowledge, (ii) there are good scientific reasons to conclude it isn't possible given the resources of the known universe, and furthermore (iii) we do know of a cause that can produce the kinds of effects at issue (the kinds of things you note in your #121). Even then, we can't conclude that "God dunnit"; but, yes, we can draw a reasonable inference that some intelligent cause was responsible.
Thank you, Eric, for your response. Your reference to the “current state of knowledge” is the key. Science is a process, as you well know. The “current state of knowledge” in the past was negligible, so man thought the sun was a god. Why do you believe that our current state of knowledge is the ultimate, final knowledge of mankind? Who knows what wonderful things we will discover in the future? Why do we have to give up and say “That’s it, we can’t explain it with our current state of knowledge, therefore ‘God dunnit’, or ‘some intelligent force’. In terms of knowledge, we are children. We just started to learn about our universe. Imagine our civilization a hundred thousand years from now, if we survive that long. As to your third reference that ‘we do know of a cause that can produce the kind of effects’ I speak about, that’s not a scientific answer. Yes, the easy answer is that an intelligent designer created it all. The tough question is to ask ‘could it have happened without an intelligent designer, and how? Let’s try to find out.’ Let us see what we can discover in that realm first, since history tells us that many of our previously unanswered questions were eventually answered by logic and science. It is too early to say that science has failed. Even if science does succeed in answering that question, there are many other questions which will arise. I do believe that there is an ultimate Mind that exists, but I don’t want to get bogged down with details that will prove to be well described by science in the future, which will then be another arrow in the quiver of those who want to debunk God or an eternal Mind. The ultimate questions are larger: who and what are we, is there something beyond our materialistic world, do we exist as something other than our bodies, is there meaning in our existence? These little details about whether and how evolution works are just that, details, which can be easily explained if there is a God.
Imagine, for instance, billions of galaxies, with billions and billions of planets, with billions of intelligent life forms. Are they any less God’s creatures? Each adapted to their own environment. They are all ‘created in His image.’ There is no contradiction here. The method of creation is irrelevant. The environment is irrelevant. Their form is irrelevant. God is either the God of all, or He is no god. So, let’s not quibble over details. Let’s let ourselves be filled with the wonder of creation, whatever the mechanism He has devised.
Bill, I’m glad you post here.
Referencing your thoughtful comments at 121, particularly about fully exhausting the capacities of natural forces. It really should be considered that the real-world capacity of a physical system to organize matter into a living thing is wholly dependent on that system incorporating a local independence from physical determinism. This is very clearly accomplished by the way in which the system is organized, and the system simply could not function without this remarkable feature – which just so happens to be found nowhere in the physical world except during the translation of language, mathematics and in the genome.
In the end, this is the central identifying feature of the system – the capacity to create physical effects that are not determined by physical law.
Thus far, the alternative to adopting a coherent model of the system, is to stand there (metaphorically, as science currently does) staring at the system producing those effects, and simply ignoring what is taking place before our very eyes. Instead, (almost as if a deep distraction is suddenly called for) we scratch our collective heads and say “We just don’t know yet how this happened”.
It's a horrible waste of discovery.
I'm with billmaz on this. Science gave up way too soon on Stonehenge. Heck, it's only rocks, and mother nature makes rocks in abundance. So there isn't any reason why mother nature, given billions of years, couldn't have produced many Stonehenge-type formations.
For that matter we can never say there was a murder because we know that mother nature also can be involved with death. And she can produce fires, so forget about arson.
Heck we can now rid ourselves of many investigative venues because we have to wait for the future to unveil the truth. We are just rushing to judgment with our meager “knowledge”. Obviously the we of today don’t know anything but the we of tomorrow will figure it all out.
The science of today is meaningless and should just stay out of the way of the science of tomorrow. So let all of those alleged criminals out of prisons and let time sort it all out.
Thank you and good day. (removes tongue from cheek)
F/N (attn Sal C et al): It is time, again, to highlight the significance of the informational approach to entropy, which makes sense of the order/disorder/organisation conundrum, and draws out the link between entropy and information.
(And yes, Virginia, there IS a legitimate link between entropy and info, from Jaynes et al; I will highlight Harry S. Robertson in his Statistical Thermophysics.)
First, a Wiki clip from its article on informational entropy [as linked above but obviously not attended to], showing a baseline we need to address:
In short, once we have an observable macrostate, there is an associated cluster of possible microstates tied to the number of possible ways energy and mass can be arranged at micro level compatible with the macro conditions.
This includes the point that when a cluster is specifically functional [which patently will be observable], it locks us down to a cluster of configs that allow function. This means that from the field of initially abstractly possible states, we have been brought to a much tighter set of possibilities, thus such an organised functional state is information-rich.
In a way that is tied to the statistical underpinnings of thermodynamics.
Now, let me clip my note that is always linked through my handle, section A, where I clipped and summarised from Robertson:
____________
>>Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993) — excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.)
For, as [Robertson] astutely observes on pp. vii – viii:
And, in more details, (pp. 3 – 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design . . . ):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .
[deriving informational entropy, cf. discussions . . . ]
[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .
Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 – 6, 7, 36; replacing Robertson’s use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.
By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel . . . ], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis.
As the third major step, we now turn to information technology, communication systems and computers, which provides a vital clarifying side-light from another view on how complex, specified information functions in information processing systems:
That is, we have now made a step beyond mere capacity to carry or convey information [–> what Shannon info strictly is], to the function fulfilled by meaningful — intelligible, difference making — strings of symbols. In effect, we here introduce into the concept, “information,” the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages — the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments.
And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems. [Cf. the recent peer-reviewed, scientific discussions here, and here by Abel and Trevors, in the context of the molecular nanotechnology of life.] Let us note as well that since in general analogue signals can be digitised [i.e. by some form of analogue-digital conversion], the discussion thus far is quite general in force.
So, taking these three main points together, we can now see how information is conceptually and quantitatively defined, how it can be measured in bits, and how it is used in information processing systems; i.e., how it becomes functional. In short, we can now understand that:
Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to help them practically solve problems faced by the systems in their environments. Also, in cases where we directly and independently know the source of such FSCI (and its accompanying functional organisation) it is, as a general rule, created by purposeful, organising intelligent agents. So, on empirical observation based induction, FSCI is a reliable sign of such design, e.g. the text of this web page, and billions of others all across the Internet. (Those who object to this, therefore face the burden of showing empirically that such FSCI does in fact — on observation — arise from blind chance and/or mechanical necessity without intelligent direction, selection, intervention or purpose.)
Indeed, this FSCI perspective lies at the foundation of information theory:
That this is broadly recognised as true, can be seen from a surprising source, Dawkins, who is reported to have said in his The Blind Watchmaker (1987), p. 8:
Here, we see how the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot rule it strictly out. But it is so plainly vastly improbable, that, having seen the message — a flyable jumbo jet — we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligently designed artifact. For, the a posteriori probability of its having originated by chance is obviously minimal — which we can intuitively recognise, and can in principle quantify.
FSCI is also an observable, measurable quantity; contrary to what is imagined, implied or asserted by many objectors.>>
____________
So, while this is a technical topic and so relatively inaccessible (though that is part of why UD exists, to address technical issues in a way that is available to the ordinary person through being in a place s/he may actually look at . . . ), and as well there is a lot of distractive use of red herrings led out to strawmen soaked in ad hominems and set alight with incendiary rhetoric that clouds, confuses, poisons and polarises the atmosphere, there is a patent need to show the underlying reasonable physical picture. First, for its own sake, second, as it is in fact the underlying context that shows the unreasonableness of the blind watchmaker thesis.
There is but one good, empirically and analytically warranted causal explanation for functionally specific complex organisation and/or associated information — FSCO/I for short, i.e. design. And the link between functional organisation, information and entropy is a significant part of the picture that points out why that is so.
KF
KF:
Thank you for your good work!
So, just to be simple, you are telling us that a tornado in a junkyard will not assemble a Boeing 747, even if sufficient energy is added to the system, and all the physical laws are respected?
What a disappointment… 🙂
(By the way, have you read the Wikipedia article for “Junkyard tornado”? Well, that’s really a good example of junk, and no tornado will ever be able to get any sense from it!)
PS: My earlier comment on Sewell’s article and video is here, it includes Sewell’s video.
PPS: Functionally specific organisation is not a particularly unusual phenomenon; it is there all around us, including in posts in this thread which use strings of symbols to function in English language text based discussions. Similarly, computer software is like that. Likewise, the PCs used to read this exhibit the same, in the form of a nodes-and-joining-arcs 3-d structure, such as can be represented by a set of digitised blueprints. For reasons linked to the needle-in-haystack search challenge to find islands of function in vast config spaces, beyond the atomic resources of our solar system or observable universe to search any more than a tiny fraction of, there is good reason that the only empirically known cause of such FSCO/I is design. What we see above is an inversion of the vera causa principle that phenomena should be explained by causes known from observation to be adequate, because of the imposition of ideological a priori materialism. Often this is disguised as a seemingly plausible principle of seeking a "natural" explanation. But when the empirically grounded explanation is not in accord with blind chance and mechanical necessity, but is consistent with what we do observe and experience, namely intelligent causes acting by design, we should be willing to face this frankly and fairly. This is the essential point of the intelligent design view.
GP:
Always so good to hear from you.
On the Hoyle rhetorical flourish, the problem starts at a much lower level, say the instruments on the instrument panel, or the clock.
Which is where Paley was 200+ years ago: The FSCO/I of a clock has but one vera causa . . . design.
And BTW, I notice the objectors never seriously address his Ch 2 discussion on a self replicating clock, seen as a FURTHER manifestation of the functionally specific complexity and contrivance that point to design. As in, an irreducibly complex von Neumann self replicator [vNSR] dependent on codes, algorithms and coded algorithm implementing machines, plus detailed blueprints, is plainly chock full of the FSCO/I that is per empirical and analytical reasons, a strong sign of design as only empirically warranted adequate cause.
Over in the next thread I give part of why:
And:
Yes, I know I know, for years objectors have been brushing this aside or studiously ignoring it.
That has not changed its force one whit.
KF
KF,
I showed the connection with straightforward math above. If one is willing to rework the base of the logarithms and normalize entropy by dividing by Boltzmann's constant, then, figuratively speaking,
S(normalized) = S(Clausius) = S(Boltzmann) = S(Shannon) = S(Dembski)
or
S(normalized) = Integral (dS)/Kb = log W = log W = -log P
The connection was made possible by the Liouville theorem and a little luck, and then put on an even more rigorous foundation when statistical mechanics is framed in terms of quantum mechanics instead of classical mechanics.
I even went through sample calculations.
To make this connection possible one has to make a few other assumptions like equiprobability of microstates, but the same basic insight is there.
Shannon’s entropy is probably the most general because one can pick and choose the symbols, whereas for Boltzmann one is restricted to elementary particles like atoms.
Shannon/Dembski can work with things like coin flips, but Boltzmann and Clausius can’t. So the above equality holds if one is only considering energy or position-and-momentum microstates of molecules. Shannon and Dembski can consider other kinds of microstates like coins and computer bits in addition to molecules.
The reason the disciplines of thermodynamics and information theory seem so disparate is that counting atomic microstates even for small macroscopic systems is impossible (on the order of Avogadro’s number squared factorial) ! So instead we use thermometers and whatever necessary calorimetry to measure entropy for air conditioning systems.
For example, my calculation for number of microstates in the boiling of water with a 1000 watt heater for 100 seconds resulted in this increase in the number of microstates:
delta-W = e^(1.94*10^25) = 10^(8.43*10^24)
which dwarfs the UPB, and most ID literature doesn’t even touch numbers that large.
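The two exponents above differ only by a change of logarithm base; here is a short sketch of the conversion (the number itself is far too large to exponentiate directly):

import math

ln_W = 1.94e25                    # delta S / k, in natural-log units
log10_W = ln_W / math.log(10)     # change of base: log10(W) = ln(W) / ln(10)

print(log10_W)                    # ~8.43e24, i.e. delta-W ~ 10^(8.43e24)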
For readers wanting to get tortured with the details in more formal terms:
Entropy examples connecting Clausius, Boltzmann, Dembski
gpuccio:
It's only because all of the small components, like the screws and washers used to hold things together, keep getting blown away. If it weren't for that… 😉
Joe, AVS, billmaz, gpuccio, UB et al.:
I’ve just posted a new thread to discuss abiogenesis, as it deserves its own discussion, and maybe we can let this one focus more on the Second Law.
Take a look and let me know your thoughts:
http://www.uncommondescent.com.....iogenesis/
Thanks!
Chris haynes #61:
I tend to take a far less formal approach to thermodynamics, but I think I can work with this. I do have to quibble a bit on #2, though: you need to divide by temperature (the temp of the ice bath in your definition), change the sign (so that more work = less entropy), and probably also set the zero-entropy point (I prefer to think in third-law-based absolute entropies, where zero entropy corresponds with an ordered state at a temperature of absolute zero). With those caveats, let me take a stab at question 3:
Consider a (theoretical) information-driven heat engine proposed by Charles H. Bennett (in section 5 of “The Thermodynamics of Computation — A Review,” originally published in International Journal of Theoretical Physics, vol. 21, no. 12, pp. 905-940, 1982). He imagines a heat engine that takes in blank data tape and heat, and produces work and tape full of random data. The principle is fairly general, but let’s use a version in which each bit along the tape consists of a container with a single gas molecule in it, and a divider down the middle of the container. If the molecule is on the left side of the divider, it represents a zero; if it’s on the right, it represents a one. The engine is fed a tape full of zeros, and what it does with each one is to put it in contact with a heat bath (in this case a large bath of ice water) at temperature T, replace the divider with a piston, allow the piston to move slowly to the right, and then withdraw the piston and replace the divider in the middle (trapping the gas molecule on a random side). While the piston moves to the right, the gas does (on average) k*T*ln(2) (where k is the Boltzmann constant) of work on it, and absorbs (on average) k*T*ln(2) of heat from the bath (via the walls of the container). Essentially, it’s a single-molecule ideal gas undergoing reversible isothermal expansion. And while the results on a single bit will vary wildly (as usual, you get thermal fluctuations on the order of k*T, which is as big as the effect we’re looking at), if you do this a large number of times, the average will tend to dominate, and things start acting more deterministic.
Now, apply this to your definition of entropy in #2: Suppose we can get work W_random from a random tape of length N as it comes into equilibrium with the ice water. If we convert a blank (all-zeroes) tape into a random tape and then bring *that* into equilibrium with the ice water, the work we get is W_blank = N*k*T*ln(2) + W_random… which implies that the blank data tape has N*k*ln(2) less entropy than the random tape. (Actually, that’s just a lower bound; to show it’s an exact result, you have to run the process in reverse as well — which can be done, since Bennett’s heat engine is reversible.)
Essentially, each bit of Shannon-entropy in the data on the tape can be converted to (or from) k*ln(2) = 9.57e-24 J/K = 2.29e-24 cal/K = 9.07e-27 BTU/K of thermal entropy. That is the connection.
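As a sanity check on that conversion factor, here is the unit arithmetic in Python (my own, using 4.184 J/cal and 1055.06 J/BTU for the conversions):

```python
import math

# Entropy equivalent of one bit of Shannon entropy: k*ln(2), in several units.
k_B = 1.380649e-23             # Boltzmann constant, J/K
s_bit = k_B * math.log(2)      # J/K per bit

print(f"{s_bit:.3e} J/K")               # ~9.57e-24 J/K
print(f"{s_bit / 4.184:.3e} cal/K")     # ~2.29e-24 cal/K
print(f"{s_bit / 1055.06:.3e} BTU/K")   # ~9.07e-27 BTU/K
```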
Now, let me try to relate this to some of the other topics under discussion. WRT the state-counting approach Sal Cordova is taking, this makes perfect sense: when the heat engine converts thermal entropy to Shannon-entropy, it’s decreasing the number of thermally-distinct states the system might be in, but increasing the number of informationally-distinct states (by the same ratio), leaving the total number of states (and hence total entropy) unchanged.
Sal has also mentioned a possible link to algorithmic entropy (aka Kolmogorov complexity) (Bennett also discusses this in section 6 of the paper I cited). I’ve not found this approach convincing, although I haven’t looked into it enough to have a serious opinion on the subject. Essentially, it looks to me like this approach resolves some of the definitional ambiguities with the Shannon approach — but at the cost of introducing a different (and worse) set of problems with the algorithmic definition.
What about a relation to FSCI, CSI, etc? Well, I don’t think you can draw a real connection there. There are actually a couple of problems in the way here: The first is that entropy (in pretty much all forms — thermal, Shannon, algorithmic, etc) has to do (loosely) with order & disorder, not organization. The second is that it doesn’t really have to do with order either, just disorder.
(Actually, as Sal has argued, entropy doesn’t quite correspond to disorder either, although I don’t think the mismatch is as bad as he makes it out to be. Anyway, entropy is much closer to disorder than to order or organization, so I’ll claim it’s close enough for the current discussion.)
To clarify the difference between organization, order, and disorder, let me draw on David Abel and Jack Trevors’ paper, “Three subsets of sequence complexity and their relevance to biopolymeric information” (published in Theoretical Biology and Medical Modelling 2005, 2:29). Actually, I’ll mostly draw on their Figure 4, which tries to diagram the relationships between a number of different types of (genetic) sequence complexity — random sequence complexity (RSC — roughly corresponding to disorder), ordered sequence complexity (OSC), and functional sequence complexity (FSC — roughly corresponding to organization). What I’m interested in here is the ordered-vs-random axis (horizontal on the graph) and the functional axis (Y2/vertical on the graph). I’ll ignore the algorithmic compressibility axis (Y1 on the graph). Please take a look at the graph before continuing… I’ll wait…
Back? Good, now, the point I want to make is that the connection between thermal and information entropy only relates to the horizontal (ordered-vs-random) axis, not the vertical (functional, or organizational) axis. The point of minimum entropy is at the left-side bottom of the graph, corresponding to pure order. The point of maximum entropy is at the right-side bottom of the graph, corresponding to pure randomness. The functional/ordered region is in between those, and will have intermediate entropy.
Let me give some examples to illustrate this. For consistency, I’ll use Bennett-style data tapes, but you could use any other information-bearing medium (say, DNA sequences) and get essentially the same results. Consider three tapes, each 8,000 bits long:
Tape 1: a completely blank (all zeroes) tape.
Tape 2: a completely random tape.
Tape 3: a tape containing a 1,000-character essay in English (I’ll assume UTF-8 character encoding, so 8 bits per character).
Let’s compute the entropy contributions from their information content in bits; if you want thermodynamic units, just multiply by k*ln(2). Each tape’s entropy contribution is the base-2 logarithm of the number of possible sequences the tape might have.
For tape 1, there is only one possible sequence, so the entropy is log2(1) = 0 bits.
For tape 2, there are 2^8000 possible sequences, so the entropy is log2(2^8000) = 8000 bits. (I know, kind of obvious…)
Tape 3 is a little more complicated to analyze. There have been studies of the entropy of English text that put its Shannon-entropy content at around one bit per letter, so I’ll estimate that there are around 2^1000 possible English essays of that length, and so the entropy is around log2(2^1000) = 1000 bits.
As I said, both the minimum and maximum entropy correspond to a complete lack of organization. Organized information (generally) corresponds to intermediate entropy density. But it’s worse than that; let me add a fourth tape…
Tape 4: a tape containing a random sequence of 1,000 “A”s and “B”s (again, UTF-8 character encoding). There are 2^1000 possible sequences consisting of just “A” and “B”, so again the entropy is 1000 bits.
Tape 4 has the same entropy as tape 3, despite having no organized and/or functional information at all. From a thermodynamic point of view, the contents of tapes 3 and 4 are equivalent because they have the same order/disorder content. The organization of their content is simply irrelevant here.
But it’s even worse than that. Let me add a fifth tape, this time a longer one…
Tape 5: a tape containing a 10,000-character essay in English (an 80,000-bit tape).
Tape 5 is like tape 3, just ten times as big. Because it’s ten times bigger, it has ten times as much of everything: ten times the OSC, ten times the RSC, and ten times the FSC. And ten times the entropy contribution (by the same argument as for tape 3, there are around 2^10000 possible essays of that length, so its entropy will be 10000 bits).
Comparing tapes 3 and 5 indicates that, at least in this case, an increase in functional complexity actually corresponds to an increase in entropy. This is what I meant when I said that entropy doesn’t have to do with order either, just disorder. (I think this is also essentially the same as Sal’s argument that entropy must increase for design to emerge.)
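For what it’s worth, here is a minimal Python sketch tying the five tapes together (my own restatement of the arithmetic above; the ~1-bit-per-letter figure is the rough estimate used for tape 3):

```python
import math

# Shannon-entropy estimates (in bits) for the five example tapes. Each
# character is stored in 8 bits (UTF-8/ASCII), so an 8,000-bit tape holds
# 1,000 characters; English text is assumed to carry ~1 bit of entropy
# per letter, per the rough estimate used for tape 3.
ENGLISH_BITS_PER_CHAR = 1

tapes = {
    "Tape 1 (8,000-bit blank tape)":        math.log2(1),   # only one possible sequence
    "Tape 2 (8,000-bit random tape)":       8000,           # log2(2^8000)
    "Tape 3 (1,000-char English essay)":    1000 * ENGLISH_BITS_PER_CHAR,
    "Tape 4 (1,000 random 'A'/'B' chars)":  1000 * math.log2(2),
    "Tape 5 (10,000-char English essay)":   10000 * ENGLISH_BITS_PER_CHAR,
}

k_B = 1.380649e-23
for name, bits in tapes.items():
    # multiply by k*ln(2) to convert bits of Shannon entropy to J/K
    print(f"{name}: {bits:.0f} bits = {bits * k_B * math.log(2):.2e} J/K")
```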
I have a textbook on Thermodynamics from 1923 by G.N. Lewis and Merle Randall, which has a chapter titled “Entropy and Probability” (Chapter XI).
In referring to Maxwell’s demon they write:
The compensation argument has apparently been around for a while. 🙂
continuing later:
Salvador argues in another thread that organisms tend to evolve towards simpler forms and less differentiation, just like non-living matter.
Who is right, and why?
Continuing from the Lewis/Randall text:
Remember, this is from 1923! Long before Shannon/Weaver.
But to cut to the chase:
How interesting is this?
From the Lewis/Randall text:
This is the leading paragraph in Chapter X: The Second Law Of Thermodynamics And The Concept Of Entropy.