For me, the real argument for intelligent design has always been extremely simple, and doesn’t require any advanced mathematics or microbiology to grasp. The video below makes this argument in the simplest, clearest way I can make it. My uncle Harry and aunt Martha like the video, and can’t understand why so many intelligent scientists aren’t impressed by this very simple argument.
Of course the problem is that the argument is just too simple. Most scientists aren’t interested in arguments that their uncle Harry and aunt Martha can understand; they are looking for arguments that require some advanced technology, that show some understanding of evolutionary theory or microbiology that sets them apart from uncle Harry and aunt Martha. And indeed, most of the important scientific advances in our understanding of our world have required advanced technology and advanced degrees to achieve, but it is the curse of intelligent design that the strongest and clearest arguments are just too simple to get much traction in the scientific world. Of course there are many good arguments for ID being made now which do require advanced technology and advanced degrees to understand, and I’m very grateful for the scientists who are making them: it’s clear to me that if ID ever becomes widely accepted in the scientific world, it will be because of their writings, and not because of the simple arguments I am making. If I could figure out a way to use some more advanced mathematics in my arguments, if I could figure out a way to restate the basic point in such a way that uncle Harry and aunt Martha couldn’t understand it, I might make some progress. (I don’t really have an uncle Harry or an aunt Martha, by the way, but many people do.) Perhaps it would help if I linked to my resume, or to my finite element program, to show that I am capable of doing more advanced mathematics, even if I haven’t used any of it in this video.
The arguments for ID which require advanced science to understand are powerful, but never completely definitive: they look at small portions of the picture through a microscope. To make the completely definitive argument you have to step back and look at the big picture, but, alas, then the picture becomes too clear, and too simple.
Added later:
As I expected, a couple of commenters are trying to make the issue more complicated than it is. Rather than try to answer each objection one at a time, I would refer readers to this ENV post, where I point out that every attempt to argue that the spontaneous rearrangement of atoms on a barren planet into computers, books and airplanes does not violate the second law can equally be applied to argue that a tornado running backward, turning rubble into houses and cars, would not violate it either. So unless you are willing to argue that tornados running backward would not violate the second law, don’t bother. And even if you are, it is obvious that a tornado running backward would violate some basic law of Nature, if not the second law as formulated by humans, then at least the basic natural principle behind the second law, and what has happened on Earth would clearly violate the same law, whatever it is.
[youtube 259r-iDckjQ]
Got it in one, Granville…. appropriately enough.
This illustrates how the human heart can absolutely annihilate the reasoning of people of quite egregious worldly intelligence, and routinely does so, simply because of the individual’s wishful thinking – his preference for a world-view with which it would be inconsistent.
Wishful thinking, however, as the ambience and ultimate rationale of our premises, is not, ipso facto, seminally false. Indeed, when we choose the premises of our world-view, we all have to fall back on wishful thinking. It just happens that it is perfectly consistent with Christian belief that we are prompted, inspired precisely in this manner by the Holy Spirit.
Where this propensity, nay, resolute determination, of the partisans of scientism for totally ga-ga reasoning, lies, however, is in their claim that truth can only be accessed under laboratory conditions, and will always be cold and hard, and accessible only to the mind of the cynical reductionist (moron).
Whereas the reality is that truth is anything but void of beauty, life and ‘charisma’: truth and the understanding it requires are both, in fact, live, vibrant and dynamic, and are not constrained by any demand to be cold, ugly, cruel, undesirable and proof against any hope that is not wholly psychopathic.
On the contrary, Einstein identified the criterion he resorted to in selecting his hypotheses as aesthetic. Of course, later, he had to point out to the myrmidons of scientism for whom he had such withering contempt, that ‘elegance’ was not, in itself, sufficient. They had to ‘do the math’. Not that the loopy Darwinists heeded his words either then – to the incredulous dismay of Wolfgang Pauli – or now.
Semi OT: as to,
“To make the completely definitive argument you have to step back and look at the big picture,”
Sort of reminds me of another piece of evidence that one has to step back away from in order to get the big picture ‘To make the completely definitive argument’:
“Q: Why can’t the Shroud just be a medieval painting?
A: The image is also extremely faint, fading away completely if you get closer than about six feet, so it would have been like trying to paint an enormous canvas in invisible ink.”
Game, set and match. Deism, theism and, now, Christianity, BA. Yet not a scintilla of concurrence from them.
They have breasted the mountain top, thanks to better scientists (more open to non-partisan knowledge than themselves), and view the churchmen and theologians who’ve been sitting there for centuries as mirages.
There is something wrong with the Second Law. I have felt it for years and have written some articles about it, but last week there was new information about machines with negative entropy here: http://www.accuweather.com/en/.....ro/3684063
Hi Professor Sewell,
Great post. I found the video very clear, and the argument straightforward. To those who say the sun could have done the trick, I say: sunbeams aren’t that smart.
However, I have a great fondness for numbers, hence my next question: has anyone in the ID community posted a refutation of Robert N. Oerter’s online paper, Does Life On Earth Violate the Second Law of Thermodynamics? using detailed numerical calculations? I’m just curious. Thanks.
vjtorley @5:
What is there to respond to? Oerter states that “it is physically impossible for evolution to violate the second law of thermodynamics.”
Of course it is impossible. Everyone knows it is impossible.
The ‘Earth-is-an-open-system’ argument is a complete and utter red herring and shows that the person putting forth the argument has no idea what they are talking about.
The discussion is largely a waste of time until we have a clear understanding of what the relevant question even is. Ascertaining what the relevant question is would be a useful avenue of discussion, but we can’t respond with detailed entropy calculations and the like until we first have some agreement on what we are talking about.
a few notes as to:
To me the ‘big picture’ that makes it clear that entropy relentlessly holds its grip on biology, as it does on the rest of creation, is the fact that entropy is the primary reason physical bodies, which contain life, grow old and die. Dr Sanford notes that detrimental mutations accumulate as we grow older:
This following video clearly brings the ‘big picture’ point personally home to us about the effects of genetic entropy on the human body:
Amazingly, the Shroud of Turin, as out of place as it might seem in a discussion on entropy, gives us a ‘big picture’ look at how Jesus Christ overcame entropy’s relentless ‘death grip’ on the human body:
Some may ask, ‘What does gravity have to do with entropy?’. Well it turns out that gravity (space-time), and entropy, are intimately connected:
as well:
Supplemental notes on the ‘big picture’ of slightly detrimental mutations:
A graph featuring ‘Kimura’s Distribution’ of beneficial compared to detrimental mutations is shown in the following video:
Moreover it is now found that the rare ‘beneficial’ mutations that work in a limited context to increase fitness produce what is termed ‘negative epistasis’ when the ‘beneficial’ mutations are combined together:
Thus though the ‘bigger picture’ may not be all that appealing to Darwinists, personally, I find the bigger picture quite beautiful:
Music and verse:
If we had something like Maxwell’s demon we could convert the more probable system to a less probable system.
Evolution serves the purpose of the demon.
No violation of the second law required.
Simple.
“If we had something like Maxwell’s demon we could convert the more probable system to a less probable system.
Evolution serves the purpose of the demon.
No violation of the second law required.
Simple.”
Yet,,,
The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: The GS (Genetic Selection) Principle states that biological selection must occur at the nucleotide-sequencing molecular-genetic level of 3’5′ phosphodiester bond formation. After-the-fact differential survival and reproduction of already-living phenotypic organisms (ordinary natural selection) does not explain polynucleotide prescription and coding.
http://www.bioscience.org/2009.....lltext.htm
but,,,
While neo-Darwinian evolution has no evidence that material processes can generate functional prescriptive information, Intelligent Design does have ‘proof of principle’ that information, via intelligence, can violate the second law and generate ‘potential energy’:
Maxwell’s demon demonstration turns information into energy – November 2010
Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
http://www.physorg.com/news/20.....nergy.html
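For a rough sense of scale, here is a minimal back-of-envelope sketch in Python of the Landauer limit, the minimum energy associated with erasing one bit at room temperature (the constants are standard values; this only illustrates the scale involved, and is not a model of Sano’s experiment):

import math

k = 1.380649e-23              # Boltzmann constant, J/K (standard value)
T = 300.0                     # assumed room temperature, K

E_bit = k * T * math.log(2)   # Landauer limit: minimum energy per bit erased
print(E_bit)                  # ~2.87e-21 J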
Here’s what I want to know:
How many joules does it take for sex to evolve?
Vjtorley,
Looks like this is similar to the Styer article, which is critiqued in the last half of my other video, as well as in point #2 of this ENV article, and several earlier articles referenced therein. Basically his error is the assumption that the second law only applies to thermal entropy, and that every other type can be converted to units of thermal entropy, e.g., that the increase in entropy due to a tornado hitting a town can be expressed in units of Joules/degree Kelvin, which makes absolutely no sense.
The Second Law for Complete IDiots
1. There is more than one formulation of the second law.
2. The second law is not about order/disorder.
Mung,
You are right that there is more than one formulation, but the more general ones ARE about order/disorder. For example, see the quote from Classical and Modern Physics in footnote #3 of this article.
Granville, thank you for your response.
1. These are both for isolated systems.
2. Neither order nor disorder is a defined term.
Yet in the linked:
Begging the question.
On what basis do you [or Ford] define order to be the opposite of entropy, and how does that definition hold across the various formulations of the second law?
Isn’t the real subject of interest here one of equilibrium and how to convert a non-equilibrium environment into one that can perform work?
Sewell:
The second law applies to total entropy, and it’s very well established that different types of entropy are interconvertible. The classic example is probably adiabatic compression of an ideal gas (which converts some of its configurational entropy into thermal entropy) and adiabatic expansion (which converts thermal entropy to configurational), but there are lots more. I gave you an example of “carbon entropy” being converted to thermal entropy over a year and a half ago. In fact, if you’d taken gravity and density differences into account in the analysis of diffusion in a solid, you’d have seen the same effect at work there.
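A quick numerical sketch of that adiabatic example, in Python, assuming one mole of a monatomic ideal gas compressed reversibly to half its volume (the split into “configurational” and “thermal” parts follows the standard ideal-gas formulas):

import math

R, n = 8.314, 1.0                 # gas constant (J/mol/K), moles
Cv = 1.5 * R                      # heat capacity of a monatomic ideal gas
V1, V2, T1 = 1.0, 0.5, 300.0      # compress to half the volume, starting at 300 K
T2 = T1 * (V1 / V2) ** (R / Cv)   # reversible adiabat: T * V**(R/Cv) is constant

dS_config = n * R * math.log(V2 / V1)    # negative: fewer spatial microstates
dS_thermal = n * Cv * math.log(T2 / T1)  # positive: the gas heats up
print(dS_config, dS_thermal, dS_config + dS_thermal)  # the sum is ~0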
For a more extreme (and more directly relevant) example of the interconvertibility of different types of entropy, consider the application of thermodynamics to informational entropy. Landauer’s principle holds that for each bit of information that is erased (which corresponds to a 1-bit decrease in Shannon entropy), there must be a compensating increase in thermal (or other) entropy of at least k*ln(2)=9.57e-24 Joules/Kelvin. This is quite difficult to test, because the change in thermal entropy is so small; but recent results seem to support the principle (see The unavoidable cost of computation revealed, in Nature News & Comment, 07 March 2012).
This may make no sense at all to you; I’d argue that this just means you haven’t wrapped your head around the relevant physics. If you do any real amount of physics, you’ll run into lots of things that run counter to intuition, and you’ll get used to the fact that usually it’s your intuition that’s wrong. I’ll give you a hint: all of the entropies that the second law relates to are basically logarithmic measures of how many distinct states a system can be in (sometimes described as disorder), and since they all measure the same fundamental thing, it’s inevitable that they all have equivalent units.
F/N: This came up recently, and here is my main comment. KF
A few notes: Dr. Morowitz did a probability calculation working from the thermodynamic perspective, with an already existing cell, and came up with this number:
Also of interest to the discussion is the information content that is derived in a cell when working from a thermodynamic perspective:
of note: The 10^12 bits of information number for a bacterium is derived from entropic considerations, which is, due to the tightly integrated relationship between information and entropy, (IMHO) considered one of the most accurate measures of the transcendent quantum information/entanglement constraining a ‘simple’ life form to be so far out of thermodynamic equilibrium.
For calculations, from the thermodynamic perspective, please see the following site:
Quotes of Note:
It is also interesting to note that comparing the possible configurations of particles that cannot contain life with those that can contain life is literally far beyond what can be meaningfully imagined by humans.
Moreover it is now found that,,,
What makes the preceding finding interesting is that computer chips are fast approaching ‘Landauer’s limit’ and thus the integrated coding between the DNA, RNA and Proteins of the cell must apparently be ingeniously ‘programmed’ along the very stringent guidelines laid out by Charles Bennett from IBM for ‘reversible computation’ in order to achieve such amazing energy efficiency. (Of note: Bennett was also behind elucidating the basics of Quantum Teleportation)
The amazing energy efficiency possible with ‘reversible computation’ has been known about since Charles Bennett laid out the principles for such reversible programming in 1973, but as far as I know, due to the extreme level of complexity involved in achieving such ingenious ‘reversible computation’, it has yet to be accomplished in any meaningful way by humans in our computer programs even to this day:
As well, a major stumbling block in materialistic thinking, a stumbling block held by Rolf Landauer himself, is that ‘information is physical’ (that information ’emerges’ from a material basis), yet it is now found that information is its own distinct physical entity which is more foundational to reality than material particles are.
As well, we now have very strong reason to believe that quantum information is ‘conserved’,,,
Moreover it is now found that it is quantum information/entanglement itself which is constraining the cell to be so far out of thermodynamic equilibrium:
Direct empirical confirmation is here:
Moreover, as if the preceding was not enough, quantum entanglement cannot be explained by any imaginable physical/material processes within space-time:
The following also addresses Rolf Landauer’s false contention that ‘information is physical’:
This following research goes even further and provides far more solid falsification for Rolf Landauer’s contention that information encoded in a computer is merely physical (merely ’emergent’ from a material basis) since he believed it always required energy to erase it;
Further note:
Music and verse:
The argument from Darwinists that pouring raw energy into an open system makes evolution inevitable is simply ‘not even wrong’ as an argument. Raw energy destroys rather than builds functional complexity:
Energy to be useful for life must be precisely controlled and directed:
Just how precisely controlled the energy of the cell is, is revealed by the following:
This stunning energy efficiency of a cell is found to be optimal across all life domains, strongly suggesting that all life on earth was Intelligently Designed with maximal efficiency in mind, instead of reflecting the somewhat random distribution that would be expected if evolution had occurred:
The complexity being found in the metabolic/biochemical pathways of the cell is jaw dropping:
Simply put, we know intelligent design exists – humans (at least, if not other animals to some degree) employ it. We know that intelligent design as humans employ it can generate phenomena that are easily discernible from phenomena that are not generated by intelligent design. Anyone who argues that a battleship’s complexity is not discernible from the complexity found in the materials after an avalanche is either committing intellectual dishonesty or willful self-delusion.
ID – as humans employ it – is a scientific fact. Indeed, science is the process of employing intelligent design to investigate phenomena. Without ID, science wouldn’t exist.
WJM: An avalanche that results in a battleship by blind chance plus necessity would be a sight to see! KF
Dr. Sewell @15 and Mung @16:
As I said, this is largely a semantic exercise (and your comments highlight this fact).
Assuming for sake of argument that Dr. Sewell has valid points relating to the systems he is describing, if (i) he insists on describing them in terms of the “Second Law of Thermodynamics” and (ii) his opponents disagree that what he is talking about even relates to the 2nd Law, then there is no common ground for discussion. This is why so many of these discussions result in talking past each other.
Incidentally, a couple of months ago I had a profound epiphany that for me brought all this 2nd Law discussion into clear focus. I would like to share with you that epiphany. Unfortunately I’ve since forgotten what it was! 🙁
Eric
Gordon Davisson @17:
Interesting thoughts. Let’s assume for a moment that you are correct that the 2nd Law applies to informational entropy and that the entropy can be measured in the Shannon sense.
What this suggests to me is that the 2nd Law is not really the place on which to focus our attention, because Shannon entropy is largely irrelevant to what we are interested in when we discuss design (to wit, a highly meaningful and functional sequence of 1’s and 0’s can have the same Shannon “information” as the same 1’s and 0’s mixed up in a meaningless jumble). (There have been myriad prior UD threads regarding Shannon information.)
Thus, Dr. Sewell’s focus on the 2nd Law seems to be, at best, tangentially related to the kind of functional complex specified information we are interested in for purposes of design. And, unfortunately, focusing on this Shannon kind of “information” also leads to unfruitful discussion of words like “order” and “disorder” (as already seen on this thread).
—–
I think Dr. Sewell has put forward some helpful examples of processes that don’t come about by chance. I also think he may have some valuable insights into how his examples relate to evolution and design. I’m just not sure yet what those are or how best to articulate them. I’m also not sure that couching his argument in terms of the 2nd Law is the right approach, because — to date at least — it has resulted primarily in semantic disagreements, rather than discussion of the underlying substance.
Semi OT:
Unlocking nature’s quantum engineering for efficient solar energy – January 7, 2013
Excerpt: Certain biological systems living in low light environments have unique protein structures for photosynthesis that use quantum dynamics to convert 100% of absorbed light into electrical charge,,,
Research from Cambridge’s Cavendish Laboratory studying light-harvesting proteins in Green Sulphur Bacteria – which can survive at depths of over 2,000 metres below the surface of the ocean – has found a mechanism in PPCs that helps protect energy from dissipating while travelling through the structure by actually reversing the flow of part of the escaped energy – by reenergising it back to exciton level through molecular vibrations.,,,
“Some of the key issues in current solar cell technologies appear to have been elegantly and rigorously solved by the molecular architecture of these PPCs – namely the rapid, lossless transfer of excitons to reaction centres.” As Chin also points out, stabilising ‘quantum coherence’, particularly at ambient temperatures – something the researchers have begun to explore – is an important goal for future quantum-based technologies, from advanced solar cells to quantum computers and nanotechnology. “These biological systems can direct a quantum process, in this case energy transport, in astoundingly subtle and controlled ways – showing remarkable resistance to the aggressive, random background noise of biology and extreme environments. “This new understanding of how to maintain coherence in excitons, and even regenerate it through molecular vibrations, provides a fascinating glimpse into the intricate design solutions – seemingly including quantum engineering – ,,, and which could provide the inspiration for new types of room temperature quantum devices.”
http://phys.org/news/2013-01-n.....nergy.html
Quote of note from preceding:
“In fact, our research suggests that these natural PPCs can achieve ‘hot and fast’ energy transfer – energy flows that prevent complete cooling to the temperature of their surroundings – which has been proposed as a way of improving solar cell efficiency beyond limits currently imposed by thermodynamics.” ,,,
Most every general physics text that discusses the second law cites examples of its application that are difficult to quantify, such as a wine glass breaking, books burning, or tornados destroying a town. But they, and most everyone else who discussed the topic, all agreed that while evolution represents a decrease in “entropy”, this decrease is “compensated” by increases outside the Earth, hence there is no problem with the second law.
Ever since I showed how silly this compensation argument is (primarily here), I can’t seem to find anyone who thinks the second law has anything to do with tornados or evolution or other unquantifiable applications, and people like Eric Anderson seem to imply I’m the only person who ever thought it did.
Hi Granville,
Thanks very much for the link to the video and the ENV article in your response (#13 above) to my question. They were very helpful. Thanks again.
Dr. Sewell:
Thank you for your comments. Please don’t misunderstand my comments to be an attempt to refute (what I think is) your underlying argument. I am not sure I have a clear enough picture of what is being proposed to make that assessment.
I have said above (and previously on UD) that the “Earth-is-an-open-system” argument is one of the stupidest arguments ever put forward. On an earlier thread I even expressed surprise that you were having to spend so much energy refuting the argument. A moment’s reflection by a person of even average intelligence should be adequate to understand that it is an absurd position to take in support of the alleged evolutionary storyline. That said, if there are lots of people still making that argument then, by all means, I am glad that you are continuing your efforts to disabuse them of the notion.
My concern — or maybe ‘concern’ is too strong; perhaps ‘unease’ — with couching a design discussion in terms of thermodynamics is that understanding thermodynamics may be necessary, but is not sufficient, for understanding the kind of functional complex specified information we see in life. Kind of like demonstrating that gravity is relevant to living systems. Of course it is; but it doesn’t tell us much beyond that.
As a result, even if someone were to abandon their silly “Earth-is-an-open-system” talking point, it is still extremely easy for them to fall back on the time-worn formula of chance + time = the improbable.
Furthermore, the examples of thermodynamic processes you cite from physics textbooks, while showing that the concept of the 2nd Law can be applied broadly, do not really provide any insight regarding the origin of the underlying systems. Let me give an example of what I mean:
Let’s say that a building is being constructed and, when nearly completed, a tornado comes through and destroys it (we’re not talking actual annihilation of matter here of course, rather the conventional sense of breaking it apart and scattering the components to the wind). This can be viewed as an increase in entropy (decrease in “order”). Fine, as far as it goes. But the tornado could also in the same manner destroy, say, a pile of construction materials near the building site, or even the pile of dirt left from the foundation excavation.
Now we could spend a lot of time discussing with people whether the tornado caused an increase in entropy generally, whether that was compensated elsewhere by a decrease somewhere in the universe, whether the system is closed, whether the system is open, and so on. But none of it gets to the real heart of the issue, which is that the building was characterized by functional complex specified information. The fact that a thermodynamic process subsequently acted on a physical item (building, pile of materials, pile of dirt) tells us essentially nothing about whether the thing in question contained functional complex specified information in the first place, and consequently, whether the thing was designed.
Now, one may object and say that the building was originally more “ordered” than the pile of dirt and so, therefore, the tornado caused more disorder in the case of destroying the building. Fine. That is just a weakness of example, not substance. Instead of a pile of dirt, let’s propose something highly ordered, like a bed of crystals. Then one might further say, “Yes, but the kind of order we are talking about with the building is different from the kind of order we are talking about in a crystal.” To which I respond: “Exactly. Precisely my point.”
So ultimately, when the dust clears (either from our tornado or from our discussion) and everyone comes to happy agreement on the relevance of thermodynamic processes and the 2nd Law to the system in question, we are still required to determine — as an independent inquiry, without the need to invoke the 2nd Law — whether the thing in question (building, pile of dirt, crystals) contained functional complex specified information or not. And it is these indicia of design that we are most interested in, not whether something is more or less “ordered” or whether something is subject to the grinding, relentless influence of the 2nd Law over time (we can stipulate that every physical system is).
In summary, to the extent that people need to be disabused of their idea that “Earth-is-an-open-system-and-therefore-anything-goes,” I think your examples and efforts are valuable and worth pursuing. In terms of getting to an inference of design, I am less optimistic.
Eric,
The main point of the video, made in the last minute or so, is that if you DON’T believe in ID, the alternative is that the four unintelligent forces of physics alone must have rearranged the fundamental particles of physics into books, computers, cars, trucks and airplanes. You don’t even need to discuss the second law at all. Once so stated, most people immediately recognize the absurdity of an explanation without ID. Except for scientists, who immediately start looking for other examples of entropy changes where it is more difficult to say what the second law predicts, or for reasons to argue that, technically, the second law was not violated, or…Sigh, it seems to be completely impossible to get scientists to understand a concept this simple.
Eric Anderson @25:
I’d agree with this, but…
At least as I understand it, Dr. Sewell’s argument doesn’t have anything to do with Shannon entropy. In fact, he seems to reject any connection between Shannon entropy and thermal entropy — in his paper, “Poker Entropy and the Theory of Compensation” (mentioned here, although the link to the paper seems to be dead), he rejects as nonsense the idea that “poker entropy” (which is actually an instance of Shannon entropy) should have anything to do with thermal entropy.
Sewell’s argument instead relates to X-entropy, where X is carbon or something like that (he never actually says which he thinks are relevant), and suffers from the fundamental problem that the second law doesn’t apply to different types of entropy separately, but only to the total. Since there’s a huge amount of thermal entropy leaving Earth (see my calculation of the entropy flux here), the second law allows that (for example) a huge amount of carbon-entropy could be being converted to thermal entropy, and then leaving Earth in that form.
(Actually, I’m pretty sure the actual rate of entropy change of Earth is quite small, and that the huge amount of entropy leaving Earth is mostly cancelled by a similarly huge rate of entropy being produced on Earth. But the second law doesn’t require this — the second law doesn’t say anything about the rate of entropy production, only that the rate of entropy destruction is zero.)
BTW, there’s another way to approach the conclusion that Earth’s boundary conditions are sufficient to allow evolution and/or the origin of life: entropy is what’s known in the biz as an extensive quantity, meaning that it’s proportional to the amount of stuff we’re talking about. For instance, two gallons of water has (other things being equal) twice the entropy of a single gallon of water. Similarly, the entropy of 200 individuals of species A is twice the entropy of 100 individuals of species A.
That means that the total entropy change involved in going from 200 individuals of species A + 100 individuals of species B to 100 A’s and 200 B’s, is the same as the entropy change in going from 100 A’s to 100 B’s. So if the boundary conditions of Earth allow the entropy change required for a population shift, they also allow for the entropy change required for one species to evolve into another.
The same argument applies to the origin of life as well. The entropy change for species A to expand from 100 individuals to 200 individuals is the same as the entropy change for a population of 100 individuals to emerge from … 0 individuals. So if the boundary conditions of Earth allow population growth, they also allow population origination.
(Now, I should clarify that just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it. Which means that thermo — like Shannon entropy — is irrelevant to ID.)
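A minimal sketch of that extensiveness bookkeeping, in Python; the per-individual entropies s_A and s_B are made-up values, since the argument depends only on the differences canceling:

s_A, s_B = 5.0, 7.0   # assumed per-individual entropies (arbitrary units)

# Population shift: 200 A's + 100 B's -> 100 A's + 200 B's
dS_shift = (100 * s_A + 200 * s_B) - (200 * s_A + 100 * s_B)
# Evolution: 100 A's -> 100 B's
dS_evolve = 100 * s_B - 100 * s_A
print(dS_shift, dS_evolve)                       # identical

# Growth (100 -> 200 individuals) vs. origination (0 -> 100):
print(200 * s_A - 100 * s_A, 100 * s_A - 0.0)    # also identical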
Dr. Sewell @28:
You haven’t shown that compensation is silly; in fact, your AML paper actually shows compensation happening. For example, anywhere ∇•J is positive, you’ll get a decrease in the local entropy density (compensated by an increase elsewhere). Similarly, if the right-hand side of inequality #5 is negative, you have an entropy increase outside the system, which allows (i.e. can compensate for) an entropy decrease inside the system (the left-hand side of inequality #5).
Compensation is entirely real. Compensation happens anytime you have a heat/matter/etc flow from one place to another. You can think of this as entropy flowing from one place to another (along with the heat/matter/whatever), but that’s just a different way of describing the same thing.
To the extent that your argument depends on rejecting compensation, your argument depends on rejecting reality.
When you find that everyone else disagrees with you, you really should consider that maybe you’re wrong and everyone else is right. (It doesn’t necessarily mean that you are wrong, but you should at least consider the possibility.) Especially when the best argument you can muster for your view amounts to “well, I can’t actually do the math, but it seems intuitively obvious that…”
as to:
‘So if the boundary conditions of Earth allow population growth, they also allow population origination.’
So you hold that if the boundary conditions didn’t allow for biological life to replicate, then you would grant that the boundary conditions would prevent the origination of life?
Mighty big of you!
,,, But did you happen to notice that if your argument was correct you would not be here to make the argument?
I have no idea how that follows from my argument. Can you explain your reasoning?
“I have no idea how that follows from my argument. Can you explain your reasoning?”
If
1. No replication of biological life possible
then Gordon graciously grants
2. No origination of biological life possible
Thus
3. The only falsification Gordon will accept as correct is if biological Gordon did not exist.
i.e. mighty big of you!
What is entropy made of? Numbers?
ba77:
No, my argument only addresses what is allowed and forbidden by the second law. As I said at the end of #32: just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it.
Again, no. In the first place, this is the opposite of what you said earlier (“if your argument was correct you would not be here to make the argument”). In the second place, I would accept someone pointing out an error in my thermodynamics or reasoning (provided it actually was an error). In the third place, while my existence does pretty much confirm that reproduction is possible (and hence thermodynamically allowed), it neither confirms nor refutes the parallel I drew between reproduction and the origin of life.
(BTW, I should probably note that I did skip a few details when I drew the parallel. For one thing, I didn’t take individual variation into account [e.g. larger individuals will tend to have more entropy]. For another, I didn’t take the information-theoretic contribution to total entropy into account. But I don’t see any way that either of these invalidates my argument, they just complicate it.)
It’s not a question of graciousness or pettiness, it’s a question of getting the physics and logic right.
Granville Sewell @31:
Agreed.
Gordon Davisson @32:
Hmmm. Very interesting thought. It’s late for me so I think I’ll sleep on that one tonight.
‘The argument from Darwinists that pouring raw energy into an open system makes evolution inevitable is simply ‘not even wrong’ as an argument. Raw energy destroys rather than builds functional complexity:’
Bornagain, re your #21, the Darwinists probably inadvertently omitted, ”n’ stuff’ … ‘pouring raw energy ‘n’ stuff’. That would presumably cover the required control and direction agency. And they’re really Iders, who’ve kind of lost their way.
On the other hand, they could just be incorrigible dolts. I wonder which?
Gordon, you state:
“just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it.”
Okie Dokie, my bad for not catching that caveat.,, But Dr. Sewell hasn’t ever said that thermo forbids replication or the origin of life, has he? He has, to the best of my knowledge, said that thermo makes the origin of life and ‘vertical’ evolution extremely unlikely.
notes:
Physicist Rob Sheldon offers some thoughts on Sal Cordova vs. Granville Sewell on 2nd Law Thermo – July 2012
Excerpt: The Equivalence: Boltzmann’s famous equation (engraved on his tombstone), S = k ln W, is merely an exchange-rate conversion. If W is lira, and S is dollars, then k ln() is the conversion of the one to the other, which is empirically determined. Boltzmann’s constant “k” is a semi-empirical conversion number that made Gibbs’ “stat mech” definition work with the earlier “thermo” definition of Lord Kelvin and co.
Despite this being something as simple as a conversion factor, you must realize how important it was to connect these two. When Einstein connected mass to energy with E = (c^2) m, we could suddenly talk about mass-energy conservation, atom bombs and baby universes, whereas before Einstein they were totally different quantities.
Likewise, by connecting the two things, thermodynamics and statistical mechanics, then the hard rules derived from thermo can now be applied to statistics of counting permutations.
This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of the atoms, and thus has stat mech entropy that, via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information).
The one remaining problem, is how to calculate it precisely.
http://www.uncommondescent.com.....aw-thermo/
“Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible.,,, The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else.”
Eugene S – UD Blogger
http://www.uncommondescent.com.....ent-418185
Per your #19, BA:
‘“In all,” argue Tompa and Rose, “an average protein would have approximately 3540 distinguishable interfaces,” and if one uses this number for the interactome space calculation, the result is 10 followed by the exponent 7.9 x 10^10.,,, the numbers preclude formation of a functional interactome (of ‘simple’ life) by trial and error,, within any meaningful span of time. This numerical exercise…is tantamount to a proof that the cell does not organize by random collisions of its interacting constituents. (i.e. that life did not arise, nor operate, by chance!)’
Don’t be nasty, BA… And you a Christian. Shame on you! Next you’ll be saying there was already a fly in the soup.
But sir, that’s our Primordial Soup. We see the spontaneous generation of flies from our Primordial Soup all the time.
KF@42
(Rob Sheldon here.) Thanks for remembering my comment. It is exactly what is going on in this comment thread. M@12 suggests some sort of incommensurate entropies, which he elaborates on in #14, claiming that GS has mixed up his definitions and that “order” is the wrong definition.
Boltzmann & Shannon both used “order” or permutations in their definition of entropy. Clausius, Maxwell etc, used heat and temperature. Boltzmann’s “ansatz” was to connect the two definitions with the eponymous constant.
Landauer repeats the Boltzmann “ansatz” using computer bits instead of permutations, and several published authors have claimed to validate or invalidate Landauer’s ansatz. Personally I think Landauer was using the “ideal gas” estimate of Boltzmann when he reused “k” for his “energy per entropy bit”, since the revolution in electronics today is storing information in the “spin” of an electron, or what is now called “spintronics”. Thus I don’t believe Landauer is even remotely close to the amount of energy per entropy bit of memory. That is, I don’t think the principle is false, but the “energy/bit” conversion factors of both Boltzmann and Landauer are undoubtedly wrong for modern information storage.
All this to say, that Granville is completely correct when he says that the increase in entropy of the Sun is never shown to come even close to explaining the decrease of entropy on the earth. The conversion constants are just not known. In principle, they could be known, but in practice we are far, far from even a rough estimate.
Until we get a better theory than Boltzmann’s permutation to ideal gas law, we are probably wise to use Granville’s suggestion of conserving entropy separately for each inconvertible quantity.
There was some nonsense in GD@32,33 about “compensation” and “extensive” entropy. Even an ideal gas has cases where the entropy is not extensive (additive), but certainly coherent systems, systems with long-range forces, are demonstrably non-extensive. There’s even a large coterie of physicists proposing “Tsallis non-extensive entropy” as the solution to life, evolution, and the philosopher’s stone. The mere fact that people are not collections of ideal gas atoms, but coherent and “functional”, should be strong evidence that applying entropy addition to humans is as wrong as applying Boyle’s Law to my pot of caramel bubbling on the stove.
In fact, the coherence of all the objects alluded to by Granville Sewell in his paper on computers and jet planes, is precisely the sort of order that cannot be measured by Boltzmann’s ideal gas approximations.
You are free to propose your own favorite conversion between ideal gas entropy and designed artifacts, but I would venture a guess that it won’t hold up to experiment very long. Granville’s common sense is a whole lot more profound than Wikipedia and an introductory physics text, and “compensation” remains an almost completely metaphysical belief.
A couple more simple questions:
Would you want your teenager going off to university thinking that entropy and disorder were the same, or that entropy and disorder were inversely proportional?
Would you want your teenager going off to university thinking that Shannon entropy was measured in joules?
If a parent goes into the ordered room of the teenager and tosses it into a mess, does it take more entropy or less entropy than it took for the teen to order it?
Lecture on entropy and the 2nd law.
http://www.khanacademy.org/sci.....-intuition
Thanks Collin.
So what is it, precisely, that there is more of?
Is it a physical substance? What’s it made of?
How does it come to be that whenever anything happens in the universe, there is an increase in the number of microstates in the universe?
“So what is it, precisely, that there is more of?
Is it a physical substance? What’s it made of?
How does it come to be that whenever anything happens in the universe, there is an increase in the number of microstates in the universe?”[sic]
I’ve been led to believe that what is increased whenever we read or hear that “entropy is increased” can be heat, non-heat energy, or some combination. The “amount” of either would be commensurate with the amount of energy expended in performing work of some sort. I don’t think that this is a very complete understanding, as my days of studying “mechanics” at university are definitely in the past.
Could you use Joules as the scale to measure the physical quantity? I think the answer is yes. I don’t know that it is the only applicable scale however.
I think it would be interesting if the “physical substance” of which there is “more of”, as you put it, whenever “anything happens in the universe”[sic] corresponded with an increase in the quantity of either “dark matter” or “dark energy” in the universe. Though, I’ve recently read that a number of physicists consider that “dark matter” and “dark energy” are likely the same. Such that whenever “anything happens”[sic] the expansion rate of the universe grows by an amount commensurate with the increase in entropy.
I don’t suspect that will be the case, but it is an interesting notion nonetheless. Don’t you think?
Is it just me, or is the concept of a “backwards running process” incoherent?
This is going to be rather long, so I’m going to try to break it down by topic as much as I can. Sorry if it’s still a bit scattered…
Defending my argument from extensiveness:
Rob Sheldon@45 (note that I’m replying to bits of what Rob said out of order):
“Nonsense”? I beg to differ. While strict additivity only applies to systems with statistically independent microstates (I’m not sure what ideal gasses have to do with this), the deviations from additivity do not weaken the argument I made. In the first place the deviations are too small to matter, in the second place they’re in a direction that actually strengthens my argument, and in the third place they don’t even apply to Sewell’s X-entropies (eq. 3 of his AML paper is strictly extensive). Let me concentrate on the second point.
The entropies of statistical mechanics (whether we’re talking about Boltzmann’s formula, Gibbs’ more general formula, or Von Neumann’s quantum formula) are what’s known as subadditive; that is, the entropy of two systems taken together is always less than or equal to the sum of their individual entropies. That means that the entropy of 200 individuals is at most twice the entropy of 100 individuals. This in turn means that, as far as the second law is concerned, going from 0 individuals to 100 individuals is, if anything, easier than going from 100 individuals to 200 individuals.
(Just to clarify what should be obvious: in reality, going from 0 individuals to 100 is much harder than 100 -> 200, especially if the individuals happen to be rabbits. From this, I conclude that the second law is not the relevant limiting factor.)
(Also, my parallel between evolution vs. population shift doesn’t necessarily work properly when deviations from additivity are significant. No problem, just change it to a parallel between evolution vs. extinction of old species + population growth of new species.)
Let me give an example of this deviation from additivity: the entropy of genetic information. To keep the math simple, I’m going to use a highly oversimplified model; I’m trying to illustrate the principle here, not calculate realistic numbers. Let’s say there are 1,000 (1e3) possible (genetically distinct) species (I said I wasn’t going for realism, right?), and within each species there are 10,000 (1e4) possible genomes an individual might have.
Suppose some individual randomly poofs into existence. It could have any of 1e7 possible genomes (1e3 species * 1e4 genomes within each species), so the genetic contribution to its entropy will be S_g(organism 1) = k * ln(1e7) ~= 16*k.
Now, suppose that individual reproduces (asexually, to keep things simple). The new organism will be of the same species as the parent, but have a different (assumed random) genome within the same species. If you look at the offspring by itself, it could also be any of 1e3 species * 1e4 genomes, so its entropy will be the same as its parent: S_g(offspring) = k * ln(1e7) ~= 16*k.
But look at the genetic entropy of the two together, by counting the number of possible genomes they could have. Since they’ll both be the same species, there’s only 1e3 species * 1e4 genomes for the parent * 1e4 genomes for the offspring = 1e11 total possibilities, so S_g(organism 1 + offspring) = k * ln(1e11) ~= 25*k. This is k * ln(1e3) ~= 7*k less than the sum of their individual entropies, which is essentially a measure of how correlated their states are.
Compare that with what would’ve happened if the second organism had appeared independently (rather than deriving from organism 1): then the two organisms would be of independent species, so their total entropy would be the sum of their separate entropies, k * ln(1e14) ~= 32*k. So independent appearance of organisms is thermodynamically preferred to reproduction!
(Again, I’m not saying that organisms poofing into existence is possible, just that under conditions that allow reproduction, the second law doesn’t forbid it. As Stephen Lower put it: when thermodynamics says “no”, it means exactly that. When it says “yes”, it means “maybe”.)
(Also, note that adding organisms — by whatever process — adds entropy. So, the second law actually favors both evolution and reproduction, right? No, because the absolute entropy of the organisms isn’t really what’s important, it’s the entropy of the organisms relative to the entropy of the raw materials they formed from. That’s why, when I was drawing parallels, I chose comparisons where the raw-materials piece cancels out. For example, the amount of additional raw materials needed to go from 0 individuals to 100 is the same as to go from 100 to 200. If you’re not careful about this, you can wind up talking complete nonsense. In fact, anyone who blithely talks about “the entropy decrease from evolution” without worrying about this is almost certainly talking nonsense. IMO it’s actually much better to think in terms of free energy or negentropy, but if you do that anyone who doesn’t know some thermo will have no idea what you’re talking about.)
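The toy model above is easy to check numerically; here is a short Python sketch using the same assumed numbers (1e3 species, 1e4 genomes per species), with entropies in units of k:

import math

n_species, n_genomes = 1e3, 1e4

S_single = math.log(n_species * n_genomes)              # one organism: ~16
S_pair_related = math.log(n_species * n_genomes ** 2)   # parent + offspring: ~25
S_pair_independent = 2 * S_single                       # two unrelated organisms: ~32

# The deviation from additivity measures how correlated the two genomes are:
print(S_pair_independent - S_pair_related)              # ln(1e3) ~ 7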
I’m not significantly familiar with Tsallis entropy, and I’ve never heard of anyone relating it to evolution. Can you give me a pointer to this “large coterie”? In any case, I’m pretty sure it’s also subadditive, so what I said above goes for it as well.
Is thermodynamics even relevant?
You seem to be under the impression that Boltzmann’s formula for entropy, S=k*ln(ω), is limited to ideal gasses. If so, you are wrong; it applies to any classical (non-quantum) system with equally probable microstates, whether or not they happen to be ideal gasses. For a classical system with non-equally-probable microstates, use Gibbs’ formula, S=-k*sum(p_i*ln(p_i)), instead. Note that Gibbs’ formula is a generalization of Boltzmann’s: if all of the probabilities (the p_i’s) happen to be equal, both formulae give the same result. I’m not very familiar with quantum stat mech, but AIUI the relevant formula there is Von Neumann’s, S=-Tr(ρ*ln(ρ)), which is (other than a factor of k) a generalization of Gibbs’ formula.
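A quick numerical check, in Python, that Gibbs’ formula reduces to Boltzmann’s when all microstates are equally probable (entropies in units of k; the example distribution is made up):

import math

def gibbs_entropy(probs):
    # S/k = -sum(p_i * ln(p_i)), skipping zero-probability states
    return -sum(p * math.log(p) for p in probs if p > 0)

omega = 8
print(gibbs_entropy([1.0 / omega] * omega), math.log(omega))  # both ln(8) ~ 2.079

# A non-uniform distribution over the same 8 states has lower entropy:
print(gibbs_entropy([0.7, 0.1, 0.1, 0.05, 0.05, 0, 0, 0]))    # ~1.01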
If you deny that these formulae are relevant to what Sewell is talking about, you’re essentially denying that Sewell’s argument is based on thermodynamics (well, stat mech anyway). You can’t have it both ways: either Sewell’s argument is based on well-established thermodynamics (in which case it’s wrong), or it’s not based on well-established thermodynamics (in which case he’s being dishonest to claim the backing of thermodynamics for his argument).
Doing the math: Earth’s entropy flux vs. entropy decrease needed for evolution
(from earlier in Rob’s message:)
I think you’ve lost track of the burden of proof here. If anyone wants to use thermodynamics to argue against evolution, the burden of proof is on them to show that there’s a conflict. If the conversion constants aren’t known, that makes your (and Sewell’s) argument difficult, not mine.
However, the relevant constants are known (it’s the state counting that’s hard), and the claim that there’s a conflict between evolution and thermo has been refuted in several different ways.
In addition to my argument from extensiveness (or subadditivity, if you want to be picky), it’s also been done by directly estimating the relevant entropies. Have you read Emory Bunn’s article “Evolution and the second law of thermodynamics” (Am. J. Phys. 77 (2009), 922–925)? It’s mostly a re-do of the argument Daniel Styer made in an earlier paper, except that Styer made some serious mistakes; Bunn corrects these (although he also makes at least one minor mistake himself, see my earlier comment). Sewell has criticized these arguments, but as far as I can see his criticisms completely miss the mark (again, see my linked comment).
Note that neither Styer nor Bunn nor I claim that the Sun is increasing in entropy (I’m pretty sure it’s decreasing) let alone that it’s compensating for entropy decreases on Earth. The Earth actually receives entropy from the Sun (in the form of thermal radiation), and dumps its waste entropy to deep space (again, in the form of thermal radiation). The flux is tricky to calculate, and (I claim) both Styer and Bunn get it wrong. I didn’t exactly do it right either, but I’m pretty confident I got a safe lower bound of 3.3e14 J/K per second for the net entropy flux leaving Earth. Can you find any error in my analysis in the linked comment?
But how much entropy decrease is needed for evolution? Bunn gives an upper limit of 1e44*k = 1.4e21 J/K, which is less than two months’ flux (based on my calculation). I’m no biochemist, but his calculation here looks roughly reasonable. I do have one semi-objection to it, though: it looks at the total entropy difference between all the organisms on Earth vs the same matter in its simplest molecular form. In other words, he’s counting up the entropy decrease needed for evolution and reproduction and growth etc. and, as my extensiveness argument implies, most of that entropy decrease is due to reproduction and growth, not evolution. On the other hand, the entropy decreases implied by reproduction and growth are coming out of the same available entropy budget (and the breakdown is hard to even define, let alone calculate), so he’s not actually wrong… just less specific than I’d like.
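The “less than two months” comparison is simple arithmetic; a Python sketch using only the figures quoted above:

k = 1.380649e-23         # Boltzmann constant, J/K
bunn_bound = 1e44 * k    # Bunn's upper limit on the entropy decrease: ~1.4e21 J/K
flux = 3.3e14            # J/K per second, the lower bound on Earth's entropy flux above

days = bunn_bound / flux / 86400.0
print(days)              # ~48 days, i.e. under two months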
Do you see any problems with Bunn’s analysis? As you said earlier, it’s hard to calculate this accurately. But Bunn is only trying for an upper bound, and his calculation would have to be off by a factor of over a billion for there to be a problem with evolution, so unless you see something seriously wrong, I’d say it’s good enough to prove his point.
Even more to the point, do you have any analysis that shows there isn’t enough entropy flux? Because if you don’t, I don’t see how you can make a thermodynamic case against evolution.
Conversions between different “types” of entropy:
(still earlier in Rob’s message:)
“Energy/bit” conversion factors don’t come into it unless you’re converting entropy to/from thermal form (in which case the temperature determines the conversion factor). If you think in terms of entropy, and apply the relevant formula (Boltzmann et al), the conversion factors for different kinds of entropy become pretty obvious. (Although as I said, the state-counting can be quite difficult.) (That, and the fact that most systems’ phase spaces don’t factor cleanly, which means their entropy can’t be cleanly split into different components — I’ll get back to this point.)
How on earth can it be wise to use something we know is seriously wrong? If entropy were conserved separately for each type of entropy, no gas could ever be compressed, or condensed into a liquid, or frozen to a solid (since all of these convert configurational entropy to thermal) (with a caveat I’ll get to in a bit). Sediment settling to the bottom of a lake violates this separate conservation idea, as does a huge amount of biochemistry. My extensiveness argument means that it “rules out” biological reproduction and growth along with evolution. This is NOT a wise thing to assume.
It’s certainly not a wise thing to base an argument against evolution on. All anyone has to do to refute you is to point out that your argument is based on a false premise, and you’re done.
Now, about that caveat: splitting entropy into different types (e.g. thermal vs. configurational) is only really possible in certain situations, most notably in a classical ideal gas (finally, something that’s actually restricted to them!). In real gasses and solids and especially liquids, the thermal and configurational degrees of freedom aren’t independent (or even clearly defined), so you can’t cleanly split the total entropy into different types. So when I say that condensation converts configurational entropy into thermal, I’m really just making vague gestures about quantities that aren’t actually well-defined.
Does that caveat help Sewell’s case? No, for two reasons. First, because if the various types of entropy aren’t well-defined, his claim is something even worse than wrong, it’s meaningless. And second, because while his X-entropies bear a superficial similarity to the configurational entropies for specific elements, they aren’t actually the same thing (see olegt’s comments starting here). This means that when e.g. gaseous nitrogen condenses into a liquid, its nitrogen-entropy is decreasing, but it’s not being converted into anything else, it’s just decreasing; and this does not conflict with the second law because the second law doesn’t apply to nitrogen-entropy at all.
The mistake here is in thinking that entropy is something physical.
Mung: I agree that entropy isn’t a physical thing, it’s a property of physical things. But I don’t agree that thinking of entropy as a thing is a mistake. I think it’s a useful intuitive shortcut; a metaphor if you like.
While entropy isn’t a thing, it acts a lot like a thing. This means that by thinking of it as a thing, you immediately get a bunch of mostly-correct intuitions about how it behaves. For example, when heat flows from one place to another, there’s an entropy decrease where it came from and an increase where it went to. The technically correct way to describe this is that the entropy decrease is compensated by the (equal or larger) increase, but it’s far more intuitive to think of the heat carrying entropy with it from one place to the other.
There are some places where the metaphor falls down, like the deviations from additivity I described in my last comment. The closest I can come to making sense of this in terms of the metaphor is that some of the same entropy is in multiple places, so if you just add the entropies from the various systems you’re counting some of the entropy twice (or more). Basically, you need to be ready to throw out your intuition whenever it disagrees with more detailed analysis; but that’s true anyway, so it’s not really a change.
So, you give up some technical accuracy and accept a few subtle bad intuitions in exchange for quite a lot of good intuitions. If you understand the physics and math well, it may not be worth the tradeoff. But in most of these discussions most of the participants have no real background in either, so I think it’s worth it.