# Just Too Simple

For me, the real argument for intelligent design has always been extremely simple, and doesn’t require any advanced mathematics or microbiology to grasp. The video below makes this argument in the simplest, clearest way I can make it. My uncle Harry and aunt Martha like the video, and can’t understand why so many intelligent scientists aren’t impressed by this very simple argument.

Of course, the problem is that the argument is just too simple. Most scientists aren't interested in arguments that their uncle Harry and aunt Martha can understand; they are looking for arguments that require some advanced technology, that show some understanding of evolutionary theory or microbiology that sets them apart from uncle Harry and aunt Martha. And indeed, most of the important scientific advances in our understanding of the world have required advanced technology and advanced degrees to achieve, but it is the curse of intelligent design that the strongest and clearest arguments are just too simple to get much traction in the scientific world. Of course, there are many good arguments for ID being made now which do require advanced technology and advanced degrees to understand, and I'm very grateful for the scientists who are making them: it's clear to me that if ID ever becomes widely accepted in the scientific world, it will be because of their writings, and not because of the simple arguments I am making. If I could figure out a way to use some more advanced mathematics in my arguments, if I could figure out a way to restate the basic point in such a way that uncle Harry and aunt Martha couldn't understand it, I might make some progress. (I don't really have an uncle Harry or an aunt Martha, by the way, but many people do.) Perhaps it would help if I linked to my resume, or to my finite element program, to show that I am capable of doing more advanced mathematics, even if I haven't used any of it in this video.

The arguments for ID which require advanced science to understand are powerful, but never completely definitive: they look at small portions of the picture through a microscope. To make the completely definitive argument you have to step back and look at the big picture, but, alas, then the picture becomes too clear, and too simple.

As I expected, a couple of commenters are trying to make the issue more complicated than it is. Rather than try to answer each objection one at a time, I would refer readers to this ENV post, where I point out that every attempt to argue that the spontaneous rearrangement of atoms on a barren planet into computers, books and airplanes does not violate the second law can equally be applied to argue that a tornado running backward, turning rubble into houses and cars, would not violate it either. So unless you are willing to argue that tornados running backward would not violate the second law, don't bother. And even if you are, it is obvious that a tornado running backward would violate some basic law of Nature, if not the second law as formulated by humans, then at least the basic natural principle behind the second law, and what has happened on Earth would clearly violate the same law, whatever it is.

## 53 Replies to “Just Too Simple”

1. 1
Axel says:

Got it in one, Granville…. appropriately enough.

2. 2
Axel says:

This illustrates how the human heart can absolutely annihilate the reasoning of people of quite egregious worldly intelligence, and routinely does so, simply because of the individual’s wishful thinking – his preference for a world-view with which it would be inconsistent.

Wishful thinking, however, as the ambience and ultimate rationale of our premises, is not, ipso facto, seminally false. Indeed, when we choose the premises of our world-view, we all have to fall back on wishful thinking. It just happens that it is perfectly consistent with Christian belief that we are prompted, inspired precisely in this manner by the Holy Spirit.

Where this propensity, nay, resolute determination, of the partisans of scientism for totally ga-ga reasoning, lies, however, is in their claim that truth can only be accessed under laboratory conditions, and will always be cold and hard, and accessible only to the mind of the cynical reductionist (moron).

Whereas the reality is that truth is anything but void of beauty, life and ‘charisma’: truth and the understanding it requires are both, in fact, live, vibrant and dynamic, and are not constrained by any demand to be cold, ugly, cruel, undesirable and proof against any hope that is not wholly psychopathic.

On the contrary, Einstein identified the criterion he resorted to in selecting his hypotheses as aesthetic. Of course, later, he had to point out to the myrmidons of scientism for whom he had such withering contempt, that ‘elegance’ was not, in itself, sufficient. They had to ‘do the math’. Not that the loopy Darwinists heeded his words either then – to the incredulous dismay of Wolfgang Pauli – or now.

3. 3
bornagain77 says:

Semi OT: as to,

“To make the completely definitive argument you have to step back and look at the big picture,”

Sort of reminds me of another piece of evidence that one has to step back away from in order to get the big picture ‘To make the completely definitive argument’:

“Q: Why can’t the Shroud just be a medieval painting?

A: The image is also extremely faint, fading away completely if you get closer than about six feet, so it would have been like trying to paint an enormous canvas in invisible ink.”

4. 4
Axel says:

Game, set and match. Deism, theism and, now, Christianity, BA. Yet not a scintilla of concurrence from them.

They have breasted the mountain top, thanks to better scientists (more open to non-partisan knowledge than themselves), and view the churchmen and theologians who’ve been sitting there for centuries as mirages.

5. 5
Levan says:

There is something wrong with the Second Law. I have felt it for years and have written some articles about it, but last week there was new information about machines with negative entropy here: http://www.accuweather.com/en/.....ro/3684063

6. 6
vjtorley says:

Hi Professor Sewell,

Great post. I found the video very clear, and the argument straightforward. To those who say the sun could have done the trick, I say: sunbeams aren’t that smart.

However, I have a great fondness for numbers, hence my next question: has anyone in the ID community posted a refutation of Robert N. Oerter’s online paper, Does Life On Earth Violate the Second Law of Thermodynamics? using detailed numerical calculations? I’m just curious. Thanks.

7. 7

vjtorley @5:

What is there to respond to? Oerter states that “it is physically impossible for evolution to violate the second law of thermodynamics.”

Of course it is impossible. Everyone knows it is impossible.

The ‘Earth-is-an-open-system’ argument is a complete and utter red herring and shows that the person putting forth the argument has no idea what they are talking about.

The discussion is largely a waste of time until we have a clear understanding of what the relevant question even is. Ascertaining what the relevant question is would be a useful avenue of discussion, but we can’t respond with detailed entropy calculations and the like until we first have some agreement on what we are talking about.

8. 8
bornagain77 says:

a few notes as to:

“To make the completely definitive argument you have to step back and look at the big picture,”

To me the ‘big picture’ that makes it clear that entropy relentlessly holds its grip on biology, as it does on the rest of creation, is the fact that entropy is the primary reason physical bodies, which contain life, grow old and die. Dr. Sanford notes that detrimental mutations accumulate as we grow older:

* 3 new mutations every time a cell divides in your body
* Average cell of a 15-year-old has up to 6,000 mutations
* Average cell of a 60-year-old has 40,000 mutations

Reproductive cells are ‘designed’ so that, early on in development, they are ‘set aside’ and thus do not accumulate mutations as the rest of the cells of our bodies do. Regardless of this protective barrier against the accumulation of slightly detrimental mutations, still we find that,,,

* 60-175 mutations are passed on to each new generation.

Quote taken from this video:

Genetic Entropy and The Mystery Of the Genome – Dr. John Sanford – video

This following video clearly brings the ‘big picture’ point personally home to us about the effects of genetic entropy on the human body:

Ageing Process – 80 years in 40 seconds – video

Amazingly, the Shroud of Turin, as out of place as it might seem to be in a discussion on entropy, gives us a ‘big picture’ look at how Jesus Christ overcame entropy’s relentless ‘death grip’ on the human body:

THE EVENT HORIZON (Space-Time Singularity) OF THE SHROUD OF TURIN. – Isabel Piczek – Particle Physicist
Excerpt: We have stated before that the images on the Shroud firmly indicate the total absence of Gravity. Yet they also firmly indicate the presence of the Event Horizon. These two seemingly contradict each other and they necessitate the past presence of something more powerful than Gravity that had the capacity to solve the above paradox.
http://shroud3d.com/findings/i.....-formation

A Quantum Hologram of Christ’s Resurrection? by Chuck Missler
Excerpt: “You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics.” The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically. Dame Piczek created a one-fourth size sculpture of the man in the Shroud. When viewed from the side, it appears as if the man is suspended in mid air, indicating that the image defies previously accepted science.
http://www.khouse.org/articles/2008/847

Some may ask, ‘What does gravity have to do with entropy?’. Well it turns out that gravity (space-time), and entropy, are intimately connected:

Evolution is a Fact, Just Like Gravity is a Fact! UhOh!
Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
http://www.uncommondescent.com.....fact-uhoh/

Shining Light on Dark Energy – October 21, 2012
Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,,
Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy.,,,
The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,,
http://crev.info/2012/10/shini.....rk-energy/

as well:

Entropy of the Universe – Hugh Ross – May 2010
Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
http://www.reasons.org/entropy-universe

Supplemental notes on the ‘big picture’ of slightly detrimental mutations:

“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.
http://behe.uncommondescent.co.....evolution/

“I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot …accurately represent how rare such beneficial mutations really are.” (J.C. Sanford; Genetic Entropy page 24) – 2005

9. 9
bornagain77 says:

A graph featuring ‘Kimura’s Distribution’ of beneficial compared to detrimental mutations is shown in the following video:

Evolution Vs Genetic Entropy – Andy McIntosh – video
http://www.metacafe.com/watch/4028086

Moreover it is now found that the rare ‘beneficial’ mutations that work in a limited context to increase fitness produce what is termed ‘negative epistasis’ when the ‘beneficial’ mutations are combined together:

Mutations: when benefits level off – June 2011 – (Lenski’s E. coli after 50,000 generations, which is equivalent to approx. 1 million years of human evolution)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
http://www2.cnrs.fr/en/1867.htm?theme1=7

The diminishing returns of beneficial mutations – July 2011
Excerpt: Evolution thus has three strikes against it: most mutations are not beneficial, practically all mutations destroy specified complexity, and, now, even ‘beneficial’ mutations work against each other. While mutations may be of limited benefit to a single organism in a limited context (e.g., sickle cell anemia can protect against malaria even though the sickle cell trait is harmful), mutations seem to be no benefit whatsoever for microbes-to-man evolution, whether individually or together.
http://creation.com/antagonistic-epistasis

Thus though the ‘bigger picture’ may not be all that appealing to Darwinists, personally, I find the bigger picture quite beautiful:

The Artists – The Artists is a short film about two rival painters who fail to see the bigger picture.
http://vimeo.com/33670490

Music and verse:

Steven Curtis Chapman – God is God (Original Version) –

Lyric from preceding song:

“God is God and I am not
I can only see a part of the picture He’s painting
God is God and I am man
So I’ll never understand it all
For only God is God”,,,

Isaiah 64:8
– But now, O LORD, thou [art] our father; we [are] the clay, and thou our potter; and we all [are] the work of thy hand.

1 Corinthians 2:9
However, as it is written: “No eye has seen, no ear has heard, no mind has conceived what God has prepared for those who love him”–

10. 10
Mung says:

If we had something like Maxwell’s demon we could convert the more probable system to a less probable system.

Evolution serves the purpose of the demon.

No violation of the second law required.

Simple.

11. 11
bornagain77 says:

“If we had something like Maxwell’s demon we could convert the more probable system to a less probable system.

Evolution serves the purpose of the demon.

No violation of the second law required.

Simple.”

Yet,,,

The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: The GS (Genetic Selection) Principle states that biological selection must occur at the nucleotide-sequencing molecular-genetic level of 3’5′ phosphodiester bond formation. After-the-fact differential survival and reproduction of already-living phenotypic organisms (ordinary natural selection) does not explain polynucleotide prescription and coding.
http://www.bioscience.org/2009.....lltext.htm

but,,,

While neo-Darwinian evolution has no evidence that material processes can generate functional prescriptive information, Intelligent Design does have ‘proof of principle’ that information, via intelligence, can violate the second law and generate ‘potential energy’:

Maxwell’s demon demonstration turns information into energy – November 2010
Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
http://www.physorg.com/news/20.....nergy.html

12. 12
Mung says:

Here’s what I want to know:

How many joules does it take for sex to evolve?

13. 13
Granville Sewell says:

Vjtorley,
Looks like this is similar to the Styer article, which is critiqued in the last half of my other video, as well as in point #2 of this ENV article, and several earlier articles referenced therein. Basically his error is the assumption that the second law only applies to thermal entropy, and that every other type can be converted to units of thermal entropy, e.g., that the increase in entropy due to a tornado hitting a town can be expressed in units of Joules/degree Kelvin, which makes absolutely no sense.

14. 14
Mung says:

The Second Law for Complete IDiots

1. There is more than one formulation of the second law.

2. The second law is not about order/disorder.

15. 15
Granville Sewell says:

Mung,

You are right that there is more than one formulation, but the more general ones ARE about order/disorder. For example, see the quote from Classical and Modern Physics in footnote #3 of this article.

16. 16
Mung says:

Granville, thank you for your response.

In Classical and Modern Physics, Kenneth Ford [7] writes “There are a variety of ways in which the second law of thermodynamics can be stated, and we have encountered two of them so far: (1) For an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability. (2) For an isolated system, the direction of spontaneous change is from order to disorder”.

1. These are both for isolated systems.

2. Neither order nor disorder is a defined term.

“Entropy” sounds much more scientific than “order”, but note that, in this paper, “order” is simply defined as the opposite of “entropy”.

Begging the question.

On what basis do you [or Ford] define order to be the opposite of entropy, and how does that definition hold across the various formulations of the second law?

Isn’t the real subject of interest here one of equilibrium and how to convert a non-equilibrium environment into one that can perform work?
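Formulation (1) in the Ford quote can be illustrated with a toy count of microstates (this is just standard Boltzmann counting, S = k ln W, not anything from the thread): for N labeled particles split between two halves of a box, the even split has overwhelmingly more arrangements than any highly lopsided split, so spontaneous change heads toward the even split simply because it is the most probable arrangement.

```python
import math

def microstates(n, k):
    """Number of ways to place k of n labeled particles in the left half of a box."""
    return math.comb(n, k)

def entropy_bits(w):
    """Boltzmann-style entropy S = k ln W, expressed here in bits (log base 2)."""
    return math.log2(w)

n = 100
# The even split has vastly more microstates than a highly ordered split.
w_even = microstates(n, n // 2)   # on the order of 1e29 arrangements
w_ordered = microstates(n, 5)     # only about 7.5e7 arrangements
assert w_even > w_ordered

# Entropy difference between the two macrostates, in bits
print(entropy_bits(w_even) - entropy_bits(w_ordered))
```

The point of the sketch is only that “more probable” and “more disordered” coincide here because the disordered macrostate contains enormously more microstates.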

17. 17
Gordon Davisson says:

Sewell:

Basically his error is the assumption that the second law only applies to thermal entropy, and every other type can be converted to units of thermal entropy, e.g., the increase in entropy due to a tornado hitting town can be expressed in units of Joules/degree Kelvin, which makes absolutely no sense.

The second law applies to total entropy, and it’s very well established that different types of entropy are interconvertible. The classic example is probably adiabatic compression of an ideal gas (which converts some of its configurational entropy into thermal entropy) and adiabatic expansion (which converts thermal entropy to configurational), but there are lots more. I gave you an example of “carbon entropy” being converted to thermal entropy over a year and a half ago. In fact, if you’d taken gravity and density differences into account in the analysis of diffusion in a solid, you’d have seen the same effect at work there.
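The adiabatic example can be checked with textbook formulas (a sketch assuming one mole of a monatomic ideal gas; for an ideal gas the entropy change splits as ΔS = nR ln(V2/V1) + nCv ln(T2/T1), and a reversible adiabat satisfies T·V^(γ-1) = constant):

```python
import math

R = 8.314          # gas constant, J/(mol*K)
n = 1.0            # moles
Cv = 1.5 * R       # molar heat capacity of a monatomic ideal gas
gamma = 5.0 / 3.0  # heat capacity ratio for a monatomic ideal gas

T1, V1 = 300.0, 1.0                   # initial state (K, arbitrary volume units)
V2 = 0.5                              # compress to half the volume
T2 = T1 * (V1 / V2) ** (gamma - 1)    # reversible adiabat: T*V^(gamma-1) constant

dS_config = n * R * math.log(V2 / V1)    # configurational entropy change (negative)
dS_thermal = n * Cv * math.log(T2 / T1)  # thermal entropy change (positive)

print(dS_config, dS_thermal)             # equal magnitude, opposite sign
assert abs(dS_config + dS_thermal) < 1e-9  # total entropy unchanged
```

The configurational decrease is exactly offset by the thermal increase, which is the interconversion being described.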

For a more extreme (and more directly relevant) example of the interconvertibility of different types of entropy, consider the application of thermodynamics to informational entropy. Landauer’s principle holds that for each bit of information that is erased (which corresponds to a 1-bit decrease in Shannon entropy), there must be a compensating increase in thermal (or other) entropy of at least k*ln(2)=9.57e-24 Joules/Kelvin. This is quite difficult to test, because the change in thermal entropy is so small; but recent results seem to support the principle (see The unavoidable cost of computation revealed, in Nature News & comment, 07 March 2012).
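The k*ln(2) figure above is just Boltzmann’s constant times the natural log of 2; a quick check with the standard constant:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact value in the 2019 SI)

# Minimum entropy increase elsewhere when one bit of information is erased
dS_per_bit = k_B * math.log(2)
print(dS_per_bit)    # ~9.57e-24 J/K, matching the figure quoted above
```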

This may make no sense at all to you; I’d argue that this just means you haven’t wrapped your head around the relevant physics. If you do any real amount of physics, you’ll run into lots of things that run counter to intuition, and you’ll get used to the fact that usually it’s your intuition that’s wrong. I’ll give you a hint: all of the entropies that the second law relates to are basically logarithmic measures of how many distinct states a system can be in (sometimes described as disorder), and since they all measure the same fundamental thing, it’s inevitable that they all have equivalent units.

18. 18
kairosfocus says:

F/N: This came up recently, and here is my main comment. KF

19. 19
bornagain77 says:

A few notes: Dr. Morowitz did a probability calculation working from the thermodynamic perspective, with an already existing cell, and came up with this number:

DID LIFE START BY CHANCE?
Excerpt: Molecular biophysicist Harold Morowitz (Yale University) calculated the odds of life beginning under natural conditions (spontaneous generation). He calculated that, if one were to take the simplest living cell and break every chemical bond within it, the odds that the cell would reassemble under ideal natural conditions (the best possible chemical environment) would be one chance in 10^100,000,000,000. You will probably have trouble imagining a number so large, so Hugh Ross provides us with the following example: if all the matter in the Universe were converted into building blocks of life, and if assembly of these building blocks were attempted once a microsecond for the entire age of the universe, then instead of the odds being 1 in 10^100,000,000,000, they would be 1 in 10^99,999,999,916 (also of note: 1 with 100 billion zeros following would fill approx. 20,000 encyclopedias)
http://members.tripod.com/~Black_J/chance.html
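For what it’s worth, the arithmetic inside that excerpt is plain exponent subtraction: dividing odds of 1 in 10^E by 10^t trials leaves odds of 1 in 10^(E-t). The two quoted exponents therefore imply roughly 10^84 total assembly attempts (a figure implied by, not stated in, the quote):

```python
# Odds are quoted as 1 in 10^E; running 10^t independent trials only
# reduces the exponent to E - t, since (10^E) / (10^t) = 10^(E - t).
odds_exponent = 100_000_000_000      # Morowitz figure, as quoted
remaining_exponent = 99_999_999_916  # figure after all trials, as quoted

trials_exponent = odds_exponent - remaining_exponent
print(trials_exponent)  # 84 -> the excerpt assumes ~10^84 total assembly attempts
```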

Punctured cell will never reassemble – Jonathan Wells – 2:40 mark of video

Also of interest to the discussion is the information content that is derived in a cell when working from a thermodynamic perspective:

“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong

‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

Of note: the 10^12 bits of information number for a bacterium is derived from entropic considerations, which, due to the tightly integrated relationship between information and entropy, is (IMHO) one of the most accurate measures of the transcendent quantum information/entanglement constraining a ‘simple’ life form to be so far out of thermodynamic equilibrium.

“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…” Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

For calculations, from the thermodynamic perspective, please see the following site:

Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
Excerpt: Linschitz gave the figure 9.3 x 10^-12 cal/deg or 9.3 x 10^-12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
http://www.astroscu.unam.mx/~a.....ecular.htm
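The relation H = S/(k ln 2) in that excerpt can be checked numerically; note that the Linschitz figure only reproduces the quoted 4 x 10^12 bits if it is read as 9.3 x 10^-12 cal/deg per cell (a transcription assumption on my part, since the exponent sign is easily garbled in copying):

```python
import math

cal_per_deg = 9.3e-12   # Linschitz entropy figure for one bacterial cell, cal/K
S = cal_per_deg * 4.184 # convert cal/K to J/K
k_B = 1.380649e-23      # Boltzmann constant, J/K

# Relation used in the excerpt: H = S / (k ln 2), entropy expressed in bits
H_bits = S / (k_B * math.log(2))
print(H_bits)           # ~4.1e12 bits, matching the "4 x 10^12 bits" quoted
```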

Quotes of Note:

“From the beginning of this book we have emphasized the enormous information content of even the simplest living systems. The information cannot in our view be generated by what are often called ‘natural’ processes, as for instance through meteorological and chemical processes occurring at the surface of a lifeless planet. As well as a suitable physical and chemical environment, a large initial store of information was also needed. We have argued that the requisite information came from an ‘intelligence’, –
Sir Fred Hoyle, Chandra Wickramasinghe – A Theory of Cosmic Creationism – pg. 150

“The statistical probability that organic structures and the most precisely harmonized reactions that typify living organisms would be generated by accident, is zero.”
Ilya Prigogine, Gregoire Nicolis, and Agnes Babloyantz, Physics Today 25, pp. 23-28. (Sourced Quote)

It is also interesting to note that the disparity between the possible configurations of particles that cannot contain life and those that can is literally far beyond what can be meaningfully imagined by humans.

The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications – Paul Nelson – October 23, 2012
Excerpt: Tompa and Rose calculate the “total number of possible distinct patterns of interactions,” using yeast, a unicellular eukaryote, as their model system; this “total number” is the size of the space that must be searched. With approximately 4,500 proteins in yeast, the interactome search space “is on the order of 10^7200, an unimaginably large number,” they write — but “more realistic” estimates, they continue, are “yet more complicated.” Proteins present many possible surfaces for chemical interaction. “In all,” argue Tompa and Rose, “an average protein would have approximately 3540 distinguishable interfaces,” and if one uses this number for the interactome space calculation, the result is 10 followed by the exponent 7.9 x 10^10.,,, the numbers preclude formation of a functional interactome (of ‘simple’ life) by trial and error,, within any meaningful span of time. This numerical exercise…is tantamount to a proof that the cell does not organize by random collisions of its interacting constituents. (i.e. that life did not arise, nor operate, by chance!)
http://www.evolutionnews.org/2.....65521.html

Moreover it is now found that,,,

Life Leads the Way to Invention – Feb. 2010
Excerpt: a cell is 10,000 times more energy-efficient than a transistor. “In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one millionth millionth of a watt) of power.” This and other amazing facts lead to an obvious conclusion: inventors ought to look to life for ideas.,,, Essentially, cells may be viewed as circuits that use molecules, ions, proteins and DNA instead of electrons and transistors. That analogy suggests that it should be possible to build electronic chips – what Sarpeshkar calls “cellular chemical computers” – that mimic chemical reactions very efficiently and on a very fast timescale.
http://creationsafaris.com/cre.....#20100226a

What makes the preceding finding interesting is that computer chips are fast approaching ‘Landauer’s limit’ and thus the integrated coding between the DNA, RNA and Proteins of the cell must apparently be ingeniously ‘programmed’ along the very stringent guidelines laid out by Charles Bennett from IBM for ‘reversible computation’ in order to achieve such amazing energy efficiency. (Of note: Bennett was also behind elucidating the basics of Quantum Teleportation)

Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon – Charles H. Bennett
Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,,
http://www.hep.princeton.edu/~.....501_03.pdf

The amazing energy efficiency possible with ‘reversible computation’ has been known about since Charles Bennett laid out the principles for such reversible programming in 1973, but, as far as I know, due to the extreme level of complexity involved in achieving such ingenious ‘reversible computation’, it has yet to be accomplished in any meaningful way in human computer programs even to this day:

Reversible computing
Excerpt: Reversible computing is a model of computing where the computational process to some extent is reversible, i.e., time-invertible.,,, Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, allowing us to someday build computers that generate much less than 1 bit’s worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally.
http://en.wikipedia.org/wiki/R....._computing
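The “kT ln 2” dissipation floor mentioned in the Wikipedia excerpt is minuscule at everyday temperatures; evaluating it at an assumed room temperature of 300 K:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature in K (assumed for illustration)

# Landauer limit: minimum heat dissipated per irreversible bit operation
E_min = k_B * T * math.log(2)
print(E_min)        # ~2.9e-21 J per bit erased
```

Real chips dissipate many orders of magnitude more than this per operation, which is why the excerpt treats the limit as a distant but fundamental floor.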

As well, a major stumbling block in materialistic thinking, a position held by Rolf Landauer himself, is the claim that ‘information is physical’ (that information ’emerges’ from a material basis); yet it is now found that information is its own distinct physical entity, which is more foundational to reality than material particles are.

Ions have been teleported successfully for the first time by two independent research groups
Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,,

Atom takes a quantum leap – 2009
Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
“What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
http://www.freerepublic.com/fo.....1769/posts

Quantum Entanglement and Teleportation – Anton Zeilinger – video
http://www.metacafe.com/watch/5705317/

20.
bornagain77 says:

As well, we now have very strong reason to believe that quantum information is ‘conserved’,,,

Quantum no-hiding theorem experimentally confirmed for first time
Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment.
http://www.physorg.com/news/20.....tally.html

Moreover, it is now found that it is quantum information/entanglement itself which constrains the cell to be so far out of thermodynamic equilibrium:

Quantum Information/Entanglement In DNA – Elisabeth Rieper – short video
http://www.metacafe.com/watch/5936605/

Direct empirical confirmation is here:

Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
http://www.dailygalaxy.com/my_.....ave-t.html

DNA Can Discern Between Two Quantum States, Research Shows – June 2011
Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
http://www.sciencedaily.com/re.....104014.htm

Moreover, as if the preceding were not enough, quantum entanglement cannot be explained by any imaginable physical/material process within space-time:

Looking Beyond Space and Time to Cope With Quantum Theory – (Oct. 28, 2012)
Excerpt: To derive their inequality, which sets up a measurement of entanglement between four particles, the researchers considered what behaviours are possible for four particles that are connected by influences that stay hidden and that travel at some arbitrary finite speed.
Mathematically (and mind-bogglingly), these constraints define an 80-dimensional object. The testable hidden influence inequality is the boundary of the shadow this 80-dimensional shape casts in 44 dimensions. The researchers showed that quantum predictions can lie outside this boundary, which means they are going against one of the assumptions. Outside the boundary, either the influences can’t stay hidden, or they must have infinite speed.,,,
The remaining option is to accept that (quantum) influences must be infinitely fast,,,
“Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” says Nicolas Gisin, Professor at the University of Geneva, Switzerland,,,
http://www.sciencedaily.com/re.....142217.htm

The following also addresses Rolf Landauer’s false contention that ‘information is physical’:

Scientists show how to erase information without using energy – January 2011
Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.”, Vaccaro explained.
http://www.physorg.com/news/20.....nergy.html

The following research goes even further and provides a far more solid falsification of Rolf Landauer's contention that information encoded in a computer is merely physical (merely 'emergent' from a material basis), since he believed it always required energy to erase it:

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.
Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
http://www.sciencedaily.com/re.....134300.htm

Further note:

Are Humans merely Turing Machines?

Music and verse:

High School Musical 2 – You are the music in me

Acts 17:28
‘For in him we live and move and have our being.’ As some of your own poets have said, ‘We are his offspring.’

21.
bornagain77 says:

The argument from Darwinists that pouring raw energy into an open system makes evolution inevitable is simply 'not even wrong' as an argument. Raw energy destroys rather than builds functional complexity:

Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video
http://www.metacafe.com/watch/4143014

Energy to be useful for life must be precisely controlled and directed:

Peer-Reviewed Articles in International Journal of Design & Nature – Casey Luskin – February, 2012
Excerpt: Truman further notes that “McIntosh has done us a major service by reminding us that energy processing in useful manners requires specialized machines.”
http://www.evolutionnews.org/2.....56001.html

The ATP Synthase Enzyme – exquisite motor necessary for first life – video

The following reveals just how precisely controlled the energy of the cell is:

Optimal Design of Metabolism – Dr. Fazale Rana – July 2012
Excerpt: A new study further highlights the optimality of the cell’s metabolic systems. Using the multi-dimension optimization theory, researchers evaluated the performance of the metabolic systems of several different bacteria. The data generated by monitoring the flux (movement) of compounds through metabolic pathways (like the movement of cars along the roadways) allowed researchers to assess the behavior of cellular metabolism. They determined that metabolism functions optimally for a system that seeks to accomplish multiple objectives. It looks as if the cell’s metabolism is optimized to operate under a single set of conditions. At the same time, it can perform optimally with relatively small adjustments to the metabolic operations when the cell experiences a change in condition.
http://www.reasons.org/article.....metabolism

This stunning energy efficiency of the cell is found to be optimal across all domains of life, strongly suggesting that all life on earth was Intelligently Designed with maximal efficiency in mind, instead of reflecting the somewhat random distribution that would be expected if evolution had occurred:

Mean mass-specific metabolic rates are strikingly similar across life’s major domains: Evidence for life’s metabolic optimum
Excerpt: Here, using the largest database to date, for 3,006 species that includes most of the range of biological diversity on the planet—from bacteria to elephants, and algae to sapling trees—we show that metabolism displays a striking degree of homeostasis across all of life.
http://www.ncbi.nlm.nih.gov/pm.....MC2572558/

The complexity being found in the metabolic/biochemical pathways of the cell is jaw-dropping:

Map Of Major Metabolic Pathways In A Cell – Diagram
http://www.sigmaaldrich.com/im.....17_04_.pdf

ExPASy – Biochemical Pathways – interactive schematic
http://web.expasy.org/cgi-bin/.....mbnails.pl

22.
William J Murray says:

Simply put, we know intelligent design exists – humans (at least, if not other animals to some degree) employ it. We know that intelligent design as humans employ it can generate phenomena that are easily discernible from phenomena not generated by intelligent design. Anyone who argues that a battleship's complexity is not discernible from the complexity found in the materials after an avalanche is committing either intellectual dishonesty or willful self-delusion.

ID – as humans employ it – is a scientific fact. Indeed, science is the process of employing intelligent design to investigate phenomena. Without ID, science wouldn’t exist.

23.
kairosfocus says:

WJM: An avalanche that results in a battleship by blind chance plus necessity would be a sight to see! KF

24.
Eric Anderson says:

Dr. Sewell @15 and Mung @16:

As I said, this is largely a semantic exercise (and your comments highlight this fact).

Assuming for sake of argument that Dr. Sewell has valid points relating to the systems he is describing, if (i) he insists on describing them in terms of the “Second Law of Thermodynamics” and (ii) his opponents disagree that what he is talking about even relates to the 2nd Law, then there is no common ground for discussion. This is why so many of these discussions result in talking past each other.

Incidentally, a couple of months ago I had a profound epiphany that for me brought all this 2nd Law discussion into clear focus. I would like to share with you that epiphany. Unfortunately I’ve since forgotten what it was! 🙁

Eric

25.
Eric Anderson says:

Gordon Davisson @17:

Interesting thoughts. Let’s assume for a moment that you are correct that the 2nd Law applies to informational entropy and that the entropy can be measured in the Shannon sense.

What this suggests to me is that the 2nd Law is not really the place on which to focus our attention, because Shannon entropy is largely irrelevant to what we are interested in when we discuss design (to wit, a highly meaningful and functional sequence of 1’s and 0’s can have the same Shannon “information” as the same 1’s and 0’s mixed up in a meaningless jumble). (There have been myriad prior UD threads regarding Shannon information.)
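That point about Shannon "information" ignoring meaning can be sketched in a few lines; the bit string (the ASCII bits of "hi") and the frequency-based entropy function below are illustrative choices, not anything from the thread:

```python
from collections import Counter
from math import log2
import random

def shannon_entropy(s):
    """Per-symbol Shannon entropy, computed from symbol frequencies alone."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

meaningful = "0110100001101001"     # ASCII bits of the text "hi"
bits = list(meaningful)
random.shuffle(bits)                # same bits, meaningless order
jumbled = "".join(bits)

# Frequency-based Shannon entropy is blind to ordering, so the functional
# sequence and the jumble score identically:
print(shannon_entropy(meaningful) == shannon_entropy(jumbled))   # True
```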

Thus, Dr. Sewell’s focus on the 2nd Law seems to be, at best, tangentially related to the kind of functional complex specified information we are interested in for purposes of design. And, unfortunately, focusing on this Shannon kind of “information” also leads to unfruitful discussion of words like “order” and “disorder” (as already seen on this thread).

—–

I think Dr. Sewell had put forward some helpful examples of processes that don’t come about by chance. I also think he may have some valuable insights into how his examples relate to evolution and design. I’m just not sure yet what those are or how best to articulate them. I’m also not sure that couching his argument in terms of the 2nd Law is the right approach, because — to date at least — it has resulted primarily in semantic disagreements, rather than discussion of the underlying substance.

26.
bornagain77 says:

Semi OT:

Unlocking nature’s quantum engineering for efficient solar energy – January 7, 2013
Excerpt: Certain biological systems living in low light environments have unique protein structures for photosynthesis that use quantum dynamics to convert 100% of absorbed light into electrical charge,,,
Research from Cambridge’s Cavendish Laboratory studying light-harvesting proteins in Green Sulphur Bacteria – which can survive at depths of over 2,000 metres below the surface of the ocean – has found a mechanism in PPCs that helps protect energy from dissipating while travelling through the structure by actually reversing the flow of part of the escaped energy – by reenergising it back to exciton level through molecular vibrations.,,,
“Some of the key issues in current solar cell technologies appear to have been elegantly and rigorously solved by the molecular architecture of these PPCs – namely the rapid, lossless transfer of excitons to reaction centres.” As Chin points also out, stabilising ‘quantum coherence’, particularly at ambient temperatures – something the researchers have begun to explore – is an important goal for future quantum-based technologies, from advanced solar cells to quantum computers and nanotechnology. “These biological systems can direct a quantum process, in this case energy transport, in astoundingly subtle and controlled ways – showing remarkable resistance to the aggressive, random background noise of biology and extreme environments. “This new understanding of how to maintain coherence in excitons, and even regenerate it through molecular vibrations, provides a fascinating glimpse into the intricate design solutions – seemingly including quantum engineering – ,,, and which could provide the inspiration for new types of room temperature quantum devices.”
http://phys.org/news/2013-01-n.....nergy.html

27.
bornagain77 says:

Quote of note from preceding:

“In fact, our research suggests that these natural PPCs can achieve ‘hot and fast’ energy transfer – energy flows that prevent complete cooling to the temperature of their surroundings – which has been proposed as a way of improving solar cell efficiency beyond limits currently imposed by thermodynamics.” ,,,

28.
Granville Sewell says:

Most every general physics text that discusses the second law cites examples of its application that are difficult to quantify, such as a wine glass breaking, books burning, or tornados destroying a town. But they, and most everyone else who discussed the topic, all agreed that while evolution represents a decrease in “entropy”, this decrease is “compensated” by increases outside the Earth, hence there is no problem with the second law.

Ever since I showed how silly this compensation argument is (primarily here), I can’t seem to find anyone who thinks the second law has anything to do with tornados or evolution or other unquantifiable applications, and people like Eric Anderson seem to imply I’m the only person who ever thought it did.

29.
vjtorley says:

Hi Granville,

Thanks very much for the link to the video and the ENV article in your response (#13 above) to my question. They were very helpful. Thanks again.

30.
Eric Anderson says:

Dr. Sewell:

Thank you for your comments. Please don’t misunderstand my comments to be an attempt to refute (what I think is) your underlying argument. I am not sure I have a clear enough picture of what is being proposed to make that assessment.

I have said above (and previously on UD) that the “Earth-is-an-open-system” argument is one of the stupidest arguments ever put forward. On an earlier thread I even expressed surprise that you were having to spend so much energy refuting the argument. A moment’s reflection by a person of even average intelligence should be adequate to understand that it is an absurd position to take in support of the alleged evolutionary storyline. That said, if there are lots of people still making that argument then, by all means, I am glad that you are continuing your efforts to disabuse them of the notion.

My concern — or maybe ‘concern’ is too strong; perhaps ‘unease’ — with couching a design discussion in terms of thermodynamics is that understanding thermodynamics may be necessary, but is not sufficient, to understanding the kind of functional complex specified information we see in life. Kind of like demonstrating that gravity is relevant to living systems. Of course it is; but it doesn’t tell us much beyond that.

As a result, even if someone were to abandon their silly “Earth-is-an-open-system” talking point, it is still extremely easy for them to fall back on the time-worn formula of chance + time = the improbable.

Furthermore, the examples of thermodynamic processes you cite from physics textbooks, while showing that the concept of the 2nd Law can be applied broadly, do not really provide any insight regarding the origin of the underlying systems. Let me give an example of what I mean:

Let’s say that a building is being constructed and, when nearly completed, a tornado comes through and destroys it (we’re not talking actual annihilation of matter here of course, rather the conventional sense of breaking it apart and scattering the components to the wind). This can be viewed as an increase in entropy (decrease in “order”). Fine, as far as it goes. But the tornado could also in the same manner destroy, say, a pile of construction materials near the building site, or even the pile of dirt left from the foundation excavation.

Now we could spend a lot of time discussing with people whether the tornado caused an increase in entropy generally, whether that was compensated elsewhere by a decrease somewhere in the universe, whether the system is closed, whether the system is open, and so on. But none of it gets to the real heart of the issue, which is that the building was characterized by functional complex specified information. The fact that a thermodynamic process subsequently acted on a physical item (building, pile of materials, pile of dirt) tells us essentially nothing about whether the thing in question contained functional complex specified information in the first place, and consequently, whether the thing was designed.

Now, one may object and say that the building was originally more “ordered” than the pile of dirt and so, therefore, the tornado caused more disorder in the case of destroying the building. Fine. That is just a weakness of example, not substance. Instead of a pile of dirt, let’s propose something highly ordered, like a bed of crystals. Then one might further say, “Yes, but the kind of order we are talking about with the building is different from the kind of order we are talking about in a crystal.” To which I respond: “Exactly. Precisely my point.”

So ultimately, when the dust clears (either from our tornado or from our discussion) and everyone comes to happy agreement on the relevance of thermodynamic processes and the 2nd Law to the system in question, we are still required to determine — as an independent inquiry, without the need to invoke the 2nd Law — whether the thing in question (building, pile of dirt, crystals) contained functional complex specified information or not. And it is these indicia of design that we are most interested in, not whether something is more or less “ordered” or whether something is subject to the grinding, relentless influence of the 2nd Law over time (we can stipulate that every physical system is).

In summary, to the extent that people need to be disabused of their idea that “Earth-is-an-open-system-and-therefore-anything-goes,” I think your examples and efforts are valuable and worth pursuing. In terms of getting to an inference of design, I am less optimistic.

31.
Granville Sewell says:

Eric,
The main point of the video, made in the last minute or so, is that if you DON’T believe in ID, the alternative is that the four unintelligent forces of physics alone must have rearranged the fundamental particles of physics into books, computers, cars, trucks and airplanes. You don’t even need to discuss the second law at all. Once so stated, most people immediately recognize the absurdity of an explanation without ID. Except for scientists, who immediately start looking for other examples of entropy changes where it is more difficult to say what the second law predicts, or for reasons to argue that, technically, the second law was not violated, or…Sigh, it seems to be completely impossible to get scientists to understand a concept this simple.

32.
Gordon Davisson says:

Eric Anderson @25:

Interesting thoughts. Let’s assume for a moment that you are correct that the 2nd Law applies to informational entropy and that the entropy can be measured in the Shannon sense.

What this suggests to me is that the 2nd Law is not really the place on which to focus our attention, because Shannon entropy is largely irrelevant to what we are interested in when we discuss design (to wit, a highly meaningful and functional sequence of 1’s and 0’s can have the same Shannon “information” as the same 1’s and 0’s mixed up in a meaningless jumble). (There have been myriad prior UD threads regarding Shannon information.)

I’d agree with this, but…

Thus, Dr. Sewell’s focus on the 2nd Law seems to be, at best, tangentially related to the kind of functional complex specified information we are interested in for purposes of design. And, unfortunately, focusing on this Shannon kind of “information” also leads to unfruitful discussion of words like “order” and “disorder” (as already seen on this thread).

At least as I understand it, Dr. Sewell’s argument doesn’t have anything to do with Shannon entropy. In fact, he seems to reject any connection between Shannon entropy and thermal entropy — in his paper, “Poker Entropy and the Theory of Compensation” (mentioned here, although the link to the paper seems to be dead), he rejects as nonsense the idea that “poker entropy” (which is actually an instance of Shannon entropy) should have anything to do with thermal entropy.

Sewell’s argument instead relates to X-entropy, where X is carbon or something like that (he never actually says which he thinks are relevant), and suffers from the fundamental problem that the second law doesn’t apply to different types of entropy separately, but only to the total. Since there’s a huge amount of thermal entropy leaving Earth (see my calculation of the entropy flux here), the second law allows that (for example) a huge amount of carbon-entropy could be being converted to thermal entropy, and then leaving Earth in that form.

(Actually, I’m pretty sure the actual rate of entropy change of Earth is quite small, and that the huge amount of entropy leaving Earth is mostly cancelled by a similarly huge rate of entropy being produced on Earth. But the second law doesn’t require this — the second law doesn’t say anything about the rate of entropy production, only that the rate of entropy destruction is zero.)

BTW, there’s another way to approach the conclusion that Earth’s boundary conditions are sufficient to allow evolution and/or the origin of life: entropy is what’s known in the biz as an extensive quantity, meaning that it’s proportional to the amount of stuff we’re talking about. For instance, two gallons of water has (other things being equal) twice the entropy of a single gallon of water. Similarly, the entropy of 200 individuals of species A is twice the entropy of 100 individuals of species A.

That means that the total entropy change involved in going from 200 individuals of species A + 100 individuals of species B to 100 A’s and 200 B’s, is the same as the entropy change in going from 100 A’s to 100 B’s. So if the boundary conditions of Earth allow the entropy change required for a population shift, they also allow for the entropy change required for one species to evolve into another.

The same argument applies to the origin of life as well. The entropy change for species A to expand from 100 individuals to 200 individuals is the same as the entropy change for a population of 100 individuals to emerge from … 0 individuals. So if the boundary conditions of Earth allow population growth, they also allow population origination.

(Now, I should clarify that just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it. Which means that thermo — like Shannon entropy — is irrelevant to ID.)
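Davisson's extensivity bookkeeping can be sketched numerically; the per-individual entropy values below are arbitrary placeholders, chosen only to show that the two entropy changes coincide under the extensivity assumption:

```python
# Assume entropy is extensive: total entropy is proportional to population.
# s_A and s_B are arbitrary placeholder per-individual entropies, not data.
s_A = 5.0   # entropy per individual of species A (arbitrary units)
s_B = 7.0   # entropy per individual of species B (arbitrary units)

def total_entropy(n_A, n_B):
    """Total entropy of a mixed population under simple extensivity."""
    return n_A * s_A + n_B * s_B

# Population shift: 200 A + 100 B  ->  100 A + 200 B
delta_shift = total_entropy(100, 200) - total_entropy(200, 100)

# Species change: 100 A  ->  100 B
delta_species = total_entropy(0, 100) - total_entropy(100, 0)

print(delta_shift == delta_species)   # True: the two changes are equal
```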

33.
Gordon Davisson says:

Dr. Sewell @28:

Ever since I showed how silly this compensation argument is (primarily here) […]

You haven’t shown that compensation is silly; in fact, your AML paper actually shows compensation happening. For example, anywhere ∇•J is positive, you’ll get a decrease in the local entropy density (compensated by an increase elsewhere). Similarly, if the right-hand side of inequality #5 is negative, you have an entropy increase outside the system, which allows (i.e. can compensate for) an entropy decrease inside the system (the left-hand side of inequality #5).

Compensation is entirely real. Compensation happens anytime you have a heat/matter/etc flow from one place to another. You can think of this as entropy flowing from one place to another (along with the heat/matter/whatever), but that’s just a different way of describing the same thing.

To the extent that your argument depends on rejecting compensation, your argument depends on rejecting reality.
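For reference, the compensation bookkeeping described here is just the standard local entropy balance of non-equilibrium thermodynamics (textbook notation, with $s$ the entropy density, $\mathbf{J}$ the entropy flux, and $\sigma$ the production rate; this is not quoted from Sewell's paper):

```latex
\frac{\partial s}{\partial t} = -\nabla \cdot \mathbf{J} + \sigma,
\qquad \sigma \ge 0 .
% Integrating over a volume V and using the divergence theorem:
\frac{dS_V}{dt} \;\ge\; -\oint_{\partial V} \mathbf{J} \cdot d\mathbf{A},
% so the entropy inside V may decrease only as fast as entropy flows out
% across the boundary; the "compensating" increase appears elsewhere.
```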

[…] I can’t seem to find anyone who thinks the second law has anything to do with tornados or evolution or other unquantifiable applications, and people like Eric Anderson seem to imply I’m the only person who ever thought it did.

When you find that everyone else disagrees with you, you really should consider that maybe you’re wrong and everyone else is right. (It doesn’t necessarily mean that you are wrong, but you should at least consider the possibility.) Especially when the best argument you can muster for your view amounts to “well, I can’t actually do the math, but it seems intuitively obvious that…”

34.
bornagain77 says:

as to:

‘So if the boundary conditions of Earth allow population growth, they also allow population origination.’

So you hold that if the boundary conditions didn’t allow biological life to replicate, you would then grant that the boundary conditions would prevent the origination of life?

Mighty big of you!

,,, But did you happen to notice that if your argument was correct you would not be here to make the argument?

35.
Gordon Davisson says:

,,, But did you happen to notice that if your argument was correct you would not be here to make the argument?

I have no idea how that follows from my argument. Can you explain your reasoning?

36.
bornagain77 says:

“I have no idea how that follows from my argument. Can you explain your reasoning?”

If

1.No replication of biological life possible

then Gordon graciously grants

2.No Origination of biological life possible

Thus

3.Only falsification Gordon will accept as correct is if biological Gordon did not exist.

i.e. mighty big of you!

37.
Mung says:

What is entropy made of? Numbers?

38.
Gordon Davisson says:

ba77:

If

1.No replication of biological life possible

then Gordon graciously grants

2.No Origination of biological life possible

No, my argument only addresses what is allowed and forbidden by the second law. As I said at the end of #32: just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it.

Thus

3.Only falsification Gordon will accept as correct is if biological Gordon did not exist.

Again, no. In the first place, this is the opposite of what you said earlier (“if your argument was correct you would not be here to make the argument”). In the second place, I would accept someone pointing out an error in my thermodynamics or reasoning (provided it actually was an error). In the third place, while my existence does pretty much confirm that reproduction is possible (and hence thermodynamically allowed), it neither confirms nor refutes the parallel I drew between reproduction and the origin of life.

(BTW, I should probably note that I did skip a few details when I drew the parallel. For one thing, I didn’t take individual variation into account [e.g. larger individuals will tend to have more entropy]. For another, I didn’t take the information-theoretic contribution to total entropy into account. But I don’t see any way that either of these invalidates my argument, they just complicate it.)

i.e. mighty big of you!

It’s not a question of graciousness or pettiness, it’s a question of getting the physics and logic right.

39.

Granville Sewell @31:

. . . the alternative is that the four unintelligent forces of physics alone must have rearranged the fundamental particles of physics into books, computers, cars, trucks and airplanes. You don’t even need to discuss the second law at all.

Agreed.

40.

Gordon Davisson @32:

That means that the total entropy change involved in going from 200 individuals of species A + 100 individuals of species B to 100 A’s and 200 B’s, is the same as the entropy change in going from 100 A’s to 100 B’s. So if the boundary conditions of Earth allow the entropy change required for a population shift, they also allow for the entropy change required for one species to evolve into another.

Hmmm. Very interesting thought. It’s late for me so I think I’ll sleep on that one tonight.

41.
Axel says:

‘The argument from Darwinists that pouring raw energy into a open system makes evolution inevitable is simply ‘not even wrong’ as an argument. Raw energy destroys rather than builds functional complexity:’

Bornagain, re your #21, the Darwinists probably inadvertently omitted “’n’ stuff” … “pouring raw energy ’n’ stuff”. That would presumably cover the required control-and-direction agency. And they’re really IDers who’ve kind of lost their way.

On the other hand, they could just be incorrigible dolts. I wonder which?

42.
bornagain77 says:

Gordon, you state:

“just because something is thermodynamically allowed, does not mean that it’s actually possible; it just means that it’s not thermo that forbids it.”

Okie Dokie, my bad for not catching that caveat. But Dr. Sewell hasn’t ever said that thermo forbids replication or the origin of life, has he? He has, to the best of my knowledge, said that thermo makes the origin of life and ‘vertical’ evolution extremely unlikely.

notes:

Physicist Rob Sheldon offers some thoughts on Sal Cordova vs. Granville Sewell on 2nd Law Thermo – July 2012
Excerpt: The Equivalence: Boltzmann’s famous equation (and engraved on his tombstone) S = k ln W, merely is an exchange rate conversion. If W is lira, and S is dollars, then k ln() is the conversion of the one to the other, which is empirically determined. Boltzmann’s constant “k” is a semi-empirical conversion number that made Gibbs “stat mech” definition work with the earlier “thermo” definition of Lord Kelvin and co.
Despite this being something as simple as a conversion factor, you must realize how important it was to connect these two. When Einstein connected mass to energy with E = (c²)m, we could then talk about mass-energy conservation, atom bombs and baby universes, whereas before Einstein they were totally different quantities.
Likewise, by connecting the two things, thermodynamics and statistical mechanics, then the hard rules derived from thermo can now be applied to statistics of counting permutations.
This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of the atoms, and thus has stat mech entropy that via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information.)
The one remaining problem, is how to calculate it precisely.
http://www.uncommondescent.com.....aw-thermo/

“Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from equilibrium, the less its entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. The S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible. … The hard bit though is the emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else.”
Eugene S – UD Blogger
http://www.uncommondescent.com.....ent-418185

43.
Axel says:

‘“In all,” argue Tompa and Rose, “an average protein would have approximately 3540 distinguishable interfaces,” and if one uses this number for the interactome space calculation, the result is 10 followed by the exponent 7.9 x 10^10. … The numbers preclude formation of a functional interactome (of ‘simple’ life) by trial and error within any meaningful span of time. This numerical exercise … is tantamount to a proof that the cell does not organize by random collisions of its interacting constituents. (i.e. that life did not arise, nor operate, by chance!)’

Don’t be nasty, BA… And you a Christian. Shame on you! Next you’ll be saying there was already a fly in the soup.

44.
Mung says:

But sir, that’s our Primordial Soup. We see the spontaneous generation of flies from our Primordial Soup all the time.

45.

KF@42
(Rob Sheldon here.) Thanks for remembering my comment. It is exactly what is going on in this comment thread. M@12 suggests some sort of incommensurate entropies, which he elaborates in #14, claiming that GS has mixed up his definitions and that “order” is the wrong definition.

Boltzmann & Shannon both used “order” or permutations in their definition of entropy. Clausius, Maxwell etc, used heat and temperature. Boltzmann’s “ansatz” was to connect the two definitions with the eponymous constant.

Landauer repeats the Boltzmann “ansatz” using computer bits instead of permutations, and several published authors have claimed to validate or invalidate Landauer’s ansatz. Personally I think Landauer was using the “ideal gas” estimate of Boltzmann when he reused “k” for his “energy per entropy bit”, since the revolution in electronics today is storing information in the “spin” of an electron, or what is now called “spintronics”. Thus I don’t believe Landauer is even remotely close to the amount of energy per entropy bit of memory. That is, I don’t think the principle is false, but the “energy/bit” conversion factors of both Boltzmann and Landauer are undoubtedly wrong for modern information storage.

All this to say, that Granville is completely correct when he says that the increase in entropy of the Sun is never shown to come even close to explaining the decrease of entropy on the earth. The conversion constants are just not known. In principle, they could be known, but in practice we are far, far from even a rough estimate.

Until we get a better theory than Boltzmann’s permutation to ideal gas law, we are probably wise to use Granville’s suggestion of conserving entropy separately for each inconvertible quantity.

There was some nonsense in GD@32,33 about “compensation” and “extensible” entropy. Even an ideal gas has cases when the entropy is not extensible (additive), but certainly coherent systems, systems with long-range forces, are demonstrably non-extensible. There’s even a large coterie of physicists proposing “Tsallis non-extensible entropy” as the solution to life, evolution, and the philosopher’s stone. The mere fact that people are not collections of ideal gas atoms, but coherent and “functional”, should be strong evidence that applying entropy addition to humans is as wrong as applying Boyle’s Law to my pot of caramel bubbling on the stove.

In fact, the coherence of all the objects alluded to by Granville Sewell in his paper on computers and jet planes, is precisely the sort of order that cannot be measured by Boltzmann’s ideal gas approximations.

You are free to propose your own favorite conversion between ideal gas entropy and designed artifacts, but I would venture a guess that it won’t hold up to experiment very long. Granville’s common sense is a whole lot more profound than Wikipedia and an introductory physics text, and “compensation” remains an almost completely metaphysical belief.

46.
Mung says:

A couple more simple questions:

Would you want your teenager going off to university thinking that entropy and disorder were the same, or that entropy and disorder were inversely proportional?

Would you want your teenager going off to university thinking that Shannon entropy was measured in joules?

If a parent goes into the ordered room of the teenager and tosses it into a mess, does it take more entropy or less entropy than it took for the teen to order it?

47.
Collin says:

Lecture on entropy and the 2nd law.

48.
Mung says:

Thanks Collin.

When anything ever happens in the universe, the net effect is that there is more entropy in the universe itself. (1:25)

So what is it, precisely, that there is more of?

Is it a physical substance? What’s it made of?

How does it come to be that whenever anything happens in the universe, there is an increase in the number of microstates in the universe?

49.
ciphertext says:

“So what is it, precisely, that there is more of?

Is it a physical substance? What’s it made of?

How does it come to be that whenever anything happens in the universe, there is an increase in the number of microstates in the universe?”[sic]

I’ve been led to believe that what is increased whenever we read or hear that “entropy is increased” can be heat, non-heat energy, or some combination. The “amount” of either would be commensurate with the amount of energy expended in performing work of some sort. I don’t think that this is a very complete understanding, as my days of studying “mechanics” at university are definitely in the past.

Could you use Joules as the scale to measure the physical quantity? I think the answer is yes. I don’t know that it is the only applicable scale however.

I think it would be interesting if the “physical substance” of which there is “more of”, as you put it, whenever “anything happens in the universe”[sic], corresponded with an increase in the quantity of either “dark matter” or “dark energy” in the universe. Though I’ve recently read that a number of physicists consider “dark matter” and “dark energy” to be likely the same, such that whenever “anything happens”[sic] the expansion rate of the universe grows by an amount commensurate with the increase in entropy.

I don’t suspect that will be the case, but it is an interesting notion nonetheless. Don’t you think?

50.
Mung says:

Is it just me, or is the concept of a “backwards running process” incoherent?

51.
Gordon Davisson says:

This is going to be rather long, so I’m going to try to break it down by topic as much as I can. Sorry if it’s still a bit scattered…

Defending my argument from extensiveness:

Rob Sheldon@45 (note that I’m replying to bits of what Rob said out of order):

There was some nonsense in GD@32,33 about “compensation” and “extensible” entropy. Even an ideal gas has cases when the entropy is not extensible (additive), but certainly coherent systems, systems with long-range forces are demonstrably non-extensible.

“Nonsense”? I beg to differ. While strict additivity only applies to systems with statistically independent microstates (I’m not sure what ideal gasses have to do with this), the deviations from additivity do not weaken the argument I made. In the first place the deviations are too small to matter, in the second place they’re in a direction that actually strengthens my argument, and in the third place they don’t even apply to Sewell’s X-entropies (eq. 3 of his AML paper is strictly extensive). Let me concentrate on the second point.

The entropies of statistical mechanics (whether we’re talking about Boltzmann’s formula, Gibbs’ more general formula, or Von Neumann’s quantum formula) are what’s known as subadditive; that is, the entropy of two systems taken together is always less than or equal to the sum of their individual entropies. That means that the entropy of 200 individuals is at most twice the entropy of 100 individuals. This in turn means that, as far as the second law is concerned, going from 0 individuals to 100 individuals is, if anything, easier than going from 100 individuals to 200 individuals.

(Just to clarify what should be obvious: in reality, going from 0 individuals to 100 is much harder than 100 -> 200, especially if the individuals happen to be rabbits. From this, I conclude that the second law is not the relevant limiting factor.)

(Also, my parallel between evolution vs. population shift doesn’t necessarily work properly when deviations from additivity are significant. No problem, just change it to a parallel between evolution vs. extinction of old species + population growth of new species.)

Let me give an example of this deviation from additivity: the entropy of genetic information. To keep the math simple, I’m going to use a highly oversimplified model; I’m trying to illustrate the principle here, not calculate realistic numbers. Let’s say there are 1,000 (1e3) possible (genetically distinct) species (I said I wasn’t going for realism, right?), and within each species there are 10,000 (1e4) possible genomes an individual might have.

Suppose some individual randomly poofs into existence. It could have any of 1e7 possible genomes (1e3 species * 1e4 genomes within each species), so the genetic contribution to its entropy will be S_g(organism 1) = k * ln(1e7) ~= 16*k.

Now, suppose that individual reproduces (asexually, to keep things simple). The new organism will be of the same species as the parent, but have a different (assumed random) genome within the same species. If you look at the offspring by itself, it could also be any of 1e3 species * 1e4 genomes, so its entropy will be the same as its parent: S_g(offspring) = k * ln(1e7) ~= 16*k.

But look at the genetic entropy of the two together, by counting the number of possible genomes they could have. Since they’ll both be the same species, there are only 1e3 species * 1e4 genomes for the parent * 1e4 genomes for the offspring = 1e11 total possibilities, so S_g(organism 1 + offspring) = k * ln(1e11) ~= 25*k. This is k * ln(1e3) ~= 7*k less than the sum of their individual entropies, which is essentially a measure of how correlated their states are.

Compare that with what would’ve happened if the second organism had appeared independently (rather than deriving from organism 1): then the two organisms would be of independent species, so their total entropy would be the sum of their separate entropies, k * ln(1e14) ~= 32*k. So independent appearance of organisms is thermodynamically preferred to reproduction!

(Again, I’m not saying that organisms poofing into existence is possible, just that under conditions that allow reproduction, the second law doesn’t forbid it. As Stephen Lower put it: when thermodynamics says “no”, it means exactly that. When it says “yes”, it means “maybe”.)
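(If anyone wants to check the arithmetic, here is a quick Python sketch of the toy model above, working in units of k. The species and genome counts are the made-up ones from the example, not realistic values.)

```python
import math

k = 1.0  # work in units of Boltzmann's constant

n_species = 1e3   # possible (genetically distinct) species, per the toy model
n_genomes = 1e4   # possible genomes within each species

# One organism alone: any of 1e3 * 1e4 = 1e7 genomes.
s_one = k * math.log(n_species * n_genomes)           # ~16.1 * k

# Parent + same-species offspring: 1e3 * 1e4 * 1e4 = 1e11 joint possibilities.
s_related = k * math.log(n_species * n_genomes ** 2)  # ~25.3 * k

# Two organisms of independent origin: (1e7)^2 = 1e14 joint possibilities.
s_independent = 2 * s_one                             # ~32.2 * k

# The subadditivity deficit measures the parent/offspring correlation.
deficit = s_independent - s_related                   # k * ln(1e3) ~= 6.9 * k
```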

There’s even a large coterie of physicists proposing “Tsallis non-extensible entropy” as the solution to life, evolution, and the philosopher’s stone.

I’m not significantly familiar with Tsallis entropy, and I’ve never heard of anyone relating it to evolution. Can you give me a pointer to this “large coterie”? In any case, I’m pretty sure it’s also subadditive, so what I said above goes for it as well.

Is thermodynamics even relevant?

The mere fact that people are not collections of ideal gas atoms, but coherent and “functional”, should be strong evidence that applying entropy addition to humans is as wrong as applying Boyle’s Law to my pot of caramel bubbling on the stove.

In fact, the coherence of all the objects alluded to by Granville Sewell in his paper on computers and jet planes, is precisely the sort of order that cannot be measured by Boltzmann’s ideal gas approximations.

You seem to be under the impression that Boltzmann’s formula for entropy, S=k*ln(ω), is limited to ideal gasses. If so, you are wrong; it applies to any classical (non-quantum) system with equally probable microstates, whether or not they happen to be ideal gasses. For a classical system with non-equally-probable microstates, use Gibbs’ formula, S=-k*sum(p_i*ln(p_i)), instead. Note that Gibbs’ formula is a generalization of Boltzmann’s: if all of the probabilities (the p_i’s) happen to be equal, both formulae give the same result. I’m not very familiar with quantum stat mech, but AIUI the relevant formula there is Von Neumann’s, S=-Tr(ρ*ln(ρ)), which is (other than a factor of k) a generalization of Gibbs’ formula.

If you deny that these formulae are relevant to what Sewell is talking about, you’re essentially denying that Sewell’s argument is based on thermodynamics (well, stat mech anyway). You can’t have it both ways: either Sewell’s argument is based on well-established thermodynamics (in which case it’s wrong), or it’s not based on well-established thermodynamics (in which case he’s being dishonest to claim the backing of thermodynamics for his argument).

Doing the math: Earth’s entropy flux vs. entropy decrease needed for evolution

(from earlier in Rob’s message:)

All this to say, that Granville is completely correct when he says that the increase in entropy of the Sun is never shown to come even close to explaining the decrease of entropy on the earth. The conversion constants are just not known. In principle, they could be known, but in practice we are far, far from even a rough estimate.

I think you’ve lost track of the burden of proof here. If anyone wants to use thermodynamics to argue against evolution, the burden of proof is on them to show that there’s a conflict. If the conversion constants aren’t known, that makes your (and Sewell’s) argument difficult, not mine.

However, the relevant constants are known (it’s the state counting that’s hard), and the claim that there’s a conflict between evolution and thermo has been refuted in several different ways.

In addition to my argument from extensiveness (or subadditivity, if you want to be picky), it’s also been done by directly estimating the relevant entropies. Have you read Emory Bunn’s article “Evolution and the second law of thermodynamics” (Am. J. Phys. 77 (2009), 922–925)? It’s mostly a re-do of the argument Daniel Styer made in an earlier paper, except that Styer made some serious mistakes; Bunn corrects these (although he also makes at least one minor mistake himself, see my earlier comment). Sewell has criticized these arguments, but as far as I can see his criticisms completely miss the mark (again, see my linked comment).

Note that neither Styer nor Bunn nor I claim that the Sun is increasing in entropy (I’m pretty sure it’s decreasing), let alone that it’s compensating for entropy decreases on Earth. The Earth actually receives entropy from the Sun (in the form of thermal radiation), and dumps its waste entropy to deep space (again, in the form of thermal radiation). The flux is tricky to calculate, and (I claim) both Styer and Bunn get it wrong. I didn’t exactly do it right either, but I’m pretty confident I got a safe lower bound of 3.3e14 J/K per second for the net entropy flux leaving Earth. Can you find any error in my analysis in the linked comment?

But how much entropy decrease is needed for evolution? Bunn gives an upper limit of 1e44*k = 1.4e21 J/K, which is less than two months’ flux (based on my calculation). I’m no biochemist, but his calculation here looks roughly reasonable. I do have one semi-objection to it, though: it looks at the total entropy difference between all the organisms on Earth vs the same matter in its simplest molecular form. In other words, he’s counting up the entropy decrease needed for evolution and reproduction and growth etc., and, as my extensiveness argument implies, most of that entropy decrease is due to reproduction and growth, not evolution. On the other hand, the entropy decreases implied by reproduction and growth are coming out of the same available entropy budget (and the breakdown is hard to even define, let alone calculate), so he’s not actually wrong… just less specific than I’d like.

Do you see any problems with Bunn’s analysis? As you said earlier, it’s hard to calculate this accurately. But Bunn is only trying for an upper bound, and his calculation would have to be off by a factor of over a billion for there to be a problem with evolution, so unless you see something seriously wrong, I’d say it’s good enough to prove his point.
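(A quick back-of-the-envelope in Python, using the figures above. Both inputs are estimates from this discussion — Bunn’s upper bound and my claimed lower bound on the flux — not authoritative values.)

```python
k_B = 1.38e-23       # Boltzmann's constant, J/K

needed = 1e44 * k_B  # Bunn's upper bound on the entropy decrease, ~1.4e21 J/K
flux = 3.3e14        # claimed lower bound on Earth's net outgoing entropy flux, J/K per second

seconds = needed / flux   # ~4.2e6 s of flux covers the whole bound
days = seconds / 86400    # ~48 days, i.e. under two months

# Factor of slack if the flux has run for roughly 4 billion years:
headroom = flux * 4e9 * 365.25 * 86400 / needed   # ~3e10, "over a billion"
```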

Even more to the point, do you have any analysis that shows there isn’t enough entropy flux? Because if you don’t, I don’t see how you can make a thermodynamic case against evolution.

Conversions between different “types” of entropy:

(still earlier in Rob’s message:)

Boltzmann & Shannon both used “order” or permutations in their definition of entropy. Clausius, Maxwell etc, used heat and temperature. Boltzmann’s “ansatz” was to connect the two definitions with the eponymous constant.

Landauer repeats the Boltzmann “ansatz” using computer bits instead of permutations, and several published authors have claimed to validate or invalidate Landauer’s ansatz. Personally I think Landauer was using the “ideal gas” estimate of Boltzmann when he reused “k” for his “energy per entropy bit”, since the revolution in electronics today is storing information in the “spin” of an electron, or what is now called “spintronics”. Thus I don’t believe Landauer is even remotely close to the amount of energy per entropy bit of memory. That is, I don’t think the principle is false, but the “energy/bit” conversion factors of both Boltzmann and Landauer are undoubtedly wrong for modern information storage.

“Energy/bit” conversion factors don’t come into it unless you’re converting entropy to/from thermal form (in which case the temperature determines the conversion factor). If you think in terms of entropy, and apply the relevant formula (Boltzmann et al), the conversion factors for different kinds of entropy become pretty obvious. (Although as I said, the state-counting can be quite difficult.) (That, and the fact that most systems’ phase spaces don’t factor cleanly, which means their entropy can’t be cleanly split into different components — I’ll get back to this point.)

Until we get a better theory than Boltzmann’s permutation to ideal gas law, we are probably wise to use Granville’s suggestion of conserving entropy separately for each inconvertible quantity.

How on earth can it be wise to use something we know is seriously wrong? If entropy were conserved separately for each type of entropy, no gas could ever be compressed, or condensed into a liquid, or frozen to a solid (since all of these convert configurational entropy to thermal) (with a caveat I’ll get to in a bit). Sediment settling to the bottom of a lake violates this separate conservation idea, as does a huge amount of biochemistry. My extensiveness argument means that it “rules out” biological reproduction and growth along with evolution. This is NOT a wise thing to assume.
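(To make the gas-compression case concrete, here’s a short Python sketch of reversible isothermal compression of an ideal gas; the numbers are illustrative. The gas’s configurational entropy drops, but at least as much entropy appears in the surroundings as expelled heat, so the total never decreases — which is exactly the conversion that “separate conservation” would forbid.)

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def isothermal_compression(n_mol, temp, v1, v2):
    """Reversible isothermal compression of an ideal gas from v1 to v2 < v1.

    Returns the entropy change of the gas and of the surroundings."""
    dS_gas = n_mol * R * math.log(v2 / v1)         # negative: configurational entropy drops
    q_out = -n_mol * R * temp * math.log(v2 / v1)  # heat expelled to surroundings, > 0
    dS_surr = q_out / temp                         # positive; equals -dS_gas in the reversible limit
    return dS_gas, dS_surr

# 1 mol at 300 K squeezed to half its volume:
dS_gas, dS_surr = isothermal_compression(1.0, 300.0, 2.0, 1.0)
# dS_gas ~ -5.76 J/K; dS_surr ~ +5.76 J/K; total >= 0, never negative
```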

It’s certainly not a wise thing to base an argument against evolution on. All anyone has to do to refute you is to point out that your argument is based on a false premise, and you’re done.

Now, about that caveat: splitting entropy into different types (e.g. thermal vs. configurational) is only really possible in certain situations, most notably in a classical ideal gas (finally, something that’s actually restricted to them!). In real gasses and solids and especially liquids, the thermal and configurational degrees of freedom aren’t independent (or even clearly defined), so you can’t cleanly split the total entropy into different types. So when I say that condensation converts configurational entropy into thermal, I’m really just making vague gestures about quantities that aren’t actually well-defined.

Does that caveat help Sewell’s case? No, for two reasons. First, because if the various types of entropy aren’t well-defined, his claim is something even worse than wrong, it’s meaningless. And second, because while his X-entropies bear a superficial similarity to the configurational entropies for specific elements, they aren’t actually the same thing (see olegt’s comments starting here). This means that when e.g. gaseous nitrogen condenses into a liquid, its nitrogen-entropy is decreasing, but it’s not being converted into anything else, it’s just decreasing; and this does not conflict with the second law because the second law doesn’t apply to nitrogen-entropy at all.

52.
Mung says:

The mistake here is in thinking that entropy is something physical.

53.
Gordon Davisson says:

Mung: I agree that entropy isn’t a physical thing, it’s a property of physical things. But I don’t agree that thinking of entropy as a thing is a mistake. I think it’s a useful intuitive shortcut; a metaphor if you like.

While entropy isn’t a thing, it acts a lot like a thing. This means that by thinking of it as a thing, you immediately get a bunch of mostly-correct intuitions about how it behaves. For example, when heat flows from one place to another, there’s an entropy decrease where it came from and an increase where it went to. The technically correct way to describe this is that the entropy decrease is compensated by the (equal or larger) increase, but it’s far more intuitive to think of the heat as carrying entropy with it from one place to the other.
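(A tiny Python sketch of that bookkeeping, with made-up reservoir temperatures:)

```python
def heat_flow_entropy(q, t_hot, t_cold):
    """Entropy bookkeeping when heat q flows from a hot reservoir to a cold one."""
    dS_hot = -q / t_hot     # decrease where the heat came from
    dS_cold = q / t_cold    # larger increase where it went to
    return dS_hot, dS_cold, dS_hot + dS_cold

# 100 J flowing from a 400 K body to a 300 K body:
hot, cold, total = heat_flow_entropy(100.0, 400.0, 300.0)
# hot = -0.25 J/K; cold ~ +0.333 J/K; net ~ +0.083 J/K (second law satisfied)
```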

There are some places where the metaphor falls down, like the deviations from additivity I described in my last comment. The closest I can come to making sense of this in terms of the metaphor is that some of the same entropy is in multiple places, so if you just add the entropies from the various systems you’re counting some of the entropy twice (or more). Basically, you need to be ready to throw out your intuition whenever it disagrees with more detailed analysis; but that’s true anyway, so it’s not really a change.

So, you sacrifice some technical accuracy and accept a few subtle bad intuitions in exchange for quite a lot of good intuitions. If you understand the physics and math well, it may not be worth the tradeoff. But in most of these discussions most of the participants have no real background in either, so I think it’s worth it.