Uncommon Descent Serving The Intelligent Design Community

“Specified Complexity” and the second law


A mathematics graduate student in Colombia has noticed the similarity between my second law arguments (“the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view”), and Bill Dembski’s argument (in his classic work “The Design Inference”) that only intelligence can account for things that are “specified” (= macroscopically describable) and “complex” (= extremely improbable). Daniel Andres’ article can be found (in Spanish) here. If you read the footnote in my article A Second Look at the Second Law, you will notice that some of the counter-arguments addressed are very similar to those used against Dembski’s “specified complexity.”

Every time I write on the topic of the second law of thermodynamics, the comments I see are so discouraging that I fully understand Phil Johnson’s frustration, when he wrote me “I long ago gave up the hope of ever getting scientists to talk rationally about the 2nd law instead of their giving the cliched emotional and knee-jerk responses. I skip the words ‘2nd law’ and go straight to ‘information'”. People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?
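[Editor's aside: a worked illustration, not Sewell's own calculation, of what "extremely improbable from the microscopic point of view" means quantitatively, using the stock example of gas molecules spontaneously gathering in one half of a container.]

```latex
% Probability that N independent molecules, each equally likely to be in
% either half of a box, are all found in the same half at a given instant:
\[
  P(\text{all in one half}) \;=\; 2 \times \left(\tfrac{1}{2}\right)^{N} \;=\; 2^{-(N-1)}.
\]
% Even for a mere N = 100 molecules this is about 1.6 x 10^{-30};
% for a macroscopic N on the order of 10^{23} it is unimaginably smaller still.
```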

Comments
Oops: forgot that a "<" sign opens a tag. Point 3] should read "Tc is LESS THAN Th . . ."
kairosfocus
April 3, 2007 at 12:10 AM PDT
Professor Sewell: I am happy to see your post; not least because the 2nd law was my own point of entry into the ID discussion. (Onlookers, cf Appendix I in my always linked through my handle.) I also note your quote from Prof Johnson: “I long ago gave up the hope of ever getting scientists to talk rationally about the 2nd law instead of their giving the cliched emotional and knee-jerk responses. I skip the words ‘2nd law’ and go straight to ‘information’” Your own comment is also highly illustrative, and quite sad:
People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?
Of course, rhetoric is no substitute for sound evaluation of an issue, and so your persistence is deeply appreciated. For, you are right, dead right:

1] The classic example no. 1 in studying the 2nd law is an isolated system having in it two thermally interacting closed systems, A at Thot and B at Tcold. (I am using the more usual physics terminology: closed systems exchange energy, but not matter, with their surroundings; open ones exchange both; isolated ones exchange neither.)

2] The 2nd LOT is then deduced by estimating and summing the entropy shifts: A loses and B gains a heat increment d'Q, in such a way that the overall shift in entropy is: dS >= [-d'Q/Thot] + [d'Q/Tcold].

3] Because Tc is LESS THAN Th, that sum is positive, so a system simply absorbing raw heat energy tends to have a RISE in entropy. This is unsurprising, because temperature is a measure of average random kinetic energy per degree of freedom for molecules etc. So: import of randomising energy tends to make the internal microstates of a system, consistent with the new macrostate indicated by, say, a rise in its temperature, more random. (That is, inter alia, information tends to be lost on heating. This is reflected in the classic Boltzmann expression s = k ln w, w being the number of microstates associated with a given macrostate, which of course rises strongly with temperature. Highly informational states, by contrast, tend to be such that w is sharply constrained, i.e. they are low entropy. This is exploited by TBO in their 1984 Mystery of Life's Origin, following Brillouin's tie-in between entropy and information.)

4] Of course, one may then move to case no. 2: make B a heat engine, whereby imported energy from A is partly converted into work on a target body, say C, and partly exhausted to a heat sink, D. In this case, B imports energy but, by virtue of coupling it to an energy conversion subsystem, is able to partly transform it into orderly motion [which can in principle be quite complex and algorithmically controlled].

5] Now, there are natural heat engines, e.g. a hurricane, but that does not subvert the fact that in every case where the energy coupling and converting device in B exhibits specified complexity and we know directly how it originated, it is an artifact of intelligence.

6] Opening up the system to matter as well as energy flows does not alter this fact. [Just reflect on man-made engines, which often import fuel and air, then use combustion to drive the energy conversion cycle and exhaust waste material and heat to the surroundings.]

So, we both know that importing raw energy tends to increase micro-scale disorder, and that CSI-based energy coupling and conversion systems have a known origin in intelligent action. And, while of course, given sufficient probabilistic resources, we could have strange things like the molecules in the room rushing to one end, or a tornado building, fuelling and flying a jumbo jet by chance, such outcomes are so remote probabilistically that it is maximally more likely that such phenomena and systems originated through intelligent action. The rhetorical resort to the maximally improbable and confusing, to explain away the CSI in, say, the cell considered as an energy coupling and conversion system, thus -- yet again [and pardon my directness] -- shows the intellectual bankruptcy of evolutionary materialism.

Keep up the good work.

GEM of TKI
kairosfocus
April 3, 2007 at 12:08 AM PDT
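[Editor's aside: the sum stated in point 2] of the comment above can be worked out explicitly. The symbols Th, Tc and d'Q are the commenter's; the algebra is standard and added here only as an illustration.]

```latex
% Two-reservoir entropy balance: A (hot, T_h) loses heat d'Q, B (cold, T_c) gains it.
\[
  dS_{\text{total}} \;\ge\; -\frac{d'Q}{T_h} + \frac{d'Q}{T_c}
  \;=\; d'Q\,\frac{T_h - T_c}{T_h\,T_c} \;>\; 0
  \qquad (T_c < T_h,\ d'Q > 0),
\]
% so the isolated system's entropy rises, exactly as point 3] of the comment states.
```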
"...you idiots, unintelligent forces can’t do intelligent things!"
I like this. This is why, after more than half a century of indoctrination in the public schools, with the persistent help of the mainstream media, the vast majority of Americans don't buy blind-watchmaker Darwinism. It reminds me of my favorite ancient Chinese proverb: "Some things are so ridiculous, one needs a Ph.D. to believe them."
GilDodgen
April 2, 2007 at 10:54 PM PDT
Much of the disagreement and confusion about the second law is due to the fact that, unlike most other "laws" of science, there is not widespread consensus on exactly what it says. It was originally applied only to heat conduction, then gradually generalized more and more, until today many physics textbooks apply it to things like rusting computers and shattering glass. The key is that there is one and only one principle behind all applications, and that is the use of probability at the microscopic level to predict macroscopic change. Thus, as far as I am concerned, this principle IS the second law.

So, Scordova, if you agree, as I suspect you do, that it is extremely improbable that natural forces would cause atoms to rearrange themselves into computers on our Earth, even taking into account what is entering our open system (solar energy), it seems you have to agree that what has happened on our planet violates at least the underlying principle behind the second law.

The advantage of the second law argument over an argument using Dembski's 4th law is that, though the latter is completely valid, the former is a much more widely recognized law of science. Physics textbooks practically make the argument for design for you; all you have to do is point out that the laws of probability still apply in open systems, contrary to common belief. In any case, both approaches are just attempts to take what is obvious to the layman ("you idiots, unintelligent forces can't do intelligent things!") and formulate it in a more "scientific" way.

Granville Sewell
April 2, 2007 at 10:08 PM PDT
Granville, Welcome to Uncommon Descent. I am a big fan of your writings. I am, however, reluctant to appeal to the traditional 2nd law as an argument supportive of design inferences. I believe Dembski's 4th law is more appropriate. There has been an ongoing discussion between myself and Professor Beling (a professor of thermodynamics). See: Is 2nd Law a special case of 4th Law? My central conclusion regarding the 2nd law is taken from Bill's No Free Lunch:
the second law is subject to the Law of Conservation of Information [4th Law] (pages 172-173, No Free Lunch)
and
A magnetic diskette recording random bits versus one recording, say, the text of this book are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they are radically different. Two Scrabble boards with Scrabble pieces covering identical squares are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they can be radically different, one displaying a random arrangement of letters, and the other meaningful words and therefore CSI.
Thus, I'm a bit uncomfortable with 2nd law arguments. Founding father of ID Walter Bradley, in Thermodynamics and the Origin of Life, writes:
the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.
scordova
April 2, 2007 at 06:51 PM PDT
Hi Granville, I have some questions for ya. I've never understood how the 2nd Law can apply to biology. The first thing is that we can see how a zygote can become a full-grown organism. Are you suggesting that this is breaking the 2nd law, and that the only reason it can break it is because of this "designed information"?

Also, using the Mount Improbable analogy, it seems this 2nd law argument is being used against the cliff side view of the development of organisms, versus the gradual slope side of the mountain, where smaller, more probable events are added up over time by natural selection. So how does the 2nd Law apply to the slope side? For example, let's say you have the letter "A" at the bottom of the slope. It randomly attaches to other letters. Some combinations don't make English words and don't go up the slope. Meanwhile a few words like "at" and "an" go up the slope. The process continues and now words like "ate" and "and" go higher up the slope. Information is being generated by random events, but only because of the selection event. Are you saying that the 2nd law shouldn't allow these small events to happen? Or that the 2nd law doesn't allow them to add up over time?

Thanks for your time,
Fross
April 2, 2007 at 01:03 PM PDT
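[Editor's aside: the "slope side" scenario in Fross's comment above can be made concrete with a toy simulation. The sketch below is purely illustrative and hypothetical; the word list, scoring rule and function names are the editor's own, not anything proposed in the thread. It builds letter strings by random single-letter additions and keeps only those that remain prefixes of English words, mimicking the "at" to "ate" style of cumulative selection described in the comment.]

```python
import random

# Tiny stand-in dictionary (hypothetical; a real test would use a full English word list).
WORDS = {"a", "an", "at", "and", "ant", "ate", "art"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def climb_slope(start="a", steps=1000, seed=0):
    """Randomly append letters, keeping an extension only if it is still the
    prefix of some word -- a crude analogue of selection on the 'slope'."""
    rng = random.Random(seed)
    current = start
    for _ in range(steps):
        candidate = current + rng.choice("abcdefghijklmnopqrstuvwxyz")
        if candidate in PREFIXES:   # 'selection': non-viable strings are discarded
            current = candidate
    return current

print(climb_slope())  # typically reaches a three-letter word such as 'ant' or 'ate'
```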
Ah, lifeform! I knew that part was questionable (it didn't make sense to me) but translating it as life-form makes total sense. Can you edit my translation with that fix?
Atom
April 2, 2007 at 12:49 PM PDT
Your translation is very good, Atom! Except the very last sentence should be "than a form of life", I think, from the context. Granville Sewell
April 2, 2007 at 12:21 PM PDT
Here is a translation I made of the article. Caveat emptor: Spanish is NOT my first language and I have had no formal training in it, so the translation may be dodgy in parts. Any improvement is appreciated, but at least those without any Spanish skills can get the gist of it:
Granville Sewell is a professor of mathematics at Texas A&M University. In the second edition of his book The Numerical Solution of Ordinary and Partial Differential Equations (John Wiley & Sons, 2005), Sewell published an appendix with a strong, solid critique of Darwinism based, naturally, on differential equations and the Second Law of Thermodynamics. Sewell's approach is very interesting: in answering the critiques made of his earlier writings he is, in reality, speaking of Specified Complexity as explained by Dembski in The Design Inference (Cambridge University Press, 1998) and more recently in his paper on Specification.

Sewell's critics have said that his argument is merely one of improbability: for example, the result of a series of a thousand coin tosses is as improbable as any of the other 2^1000 - 1 possible results, and yet you will always get one of them. This is exactly like saying that Dembski's design inference is based solely on complexity (complexity being inversely related to probability), completely forgetting that the most important characteristic when seeking to infer design is the specification; improbability alone is hardly a guarantee. Sewell (whether consciously or not I do not know, since he does not mention Dembski in the appendix's references) responds with an argument similar to Dembski's specifications: while it is true that all 2^1000 possible results are highly improbable if the coin is fair (the probability of each such series is exactly 2^-1000), it is also true that among them there are very few results which have a short description. For example, it is not the same to obtain a series of randomly alternating heads and tails as it is to obtain all heads from the thousand coin tosses. There is something special about results like a thousand heads in a row that allows us to easily conclude that chance is not responsible. A thousand heads is a result that an algorithm can produce very easily; if heads represent 1s and tails 0s, the algorithmic instruction would be something like: "PRINT 1 1000 TIMES".

These short descriptions have their origin in Kolmogorov's recursion theory: Kolmogorov noticed that probability by itself could not distinguish results obtained by chance from those that were not, so he made use of short descriptions to distinguish between them. Of all the possible results of random coin tosses, there are very few with short description lengths. Dembski has already noted, in his book and his paper, that results which are both highly improbable and also have short description lengths constitute a form of Specified Complexity, the property that allows one to infer intelligent design.

In a previous entry I mentioned an interview with Pablo Ferrari, who is no design theorist. Ferrari ended the interview like this:

[Journalist]: However, there is a probability, however small it may be, that all the air molecules in this room will collect in one corner, or that a cup of water will heat up while freezing... violating the sacred Second Law of Thermodynamics; and therefore, sooner or later it will occur, and sooner or later we will see all the molecules of air concentrate in one corner of a room.

[Ferrari]: From the point of view of the probabilities, yes, but we cannot guarantee that one will live long enough to see it. For example, for the drunkard to take one hundred successive steps forward [it is a random walk], one would have to wait two to the hundredth power steps, which is billions of billions of steps; and for the molecules of air one would have to wait trillions upon trillions of times (in reality many more than that, but it sounds bad) the possible age and duration of the universe. So we can rest easy.

...and a cup of water heating up while freezing is much, much simpler than a form of life.
Atom
April 2, 2007 at 12:10 PM PDT
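[Editor's aside: the "short description" point in the translated article above can be demonstrated with ordinary compression, used here only as a rough, computable stand-in for description length; the choice of zlib is the editor's, not Andres's or Dembski's. Every 1000-toss sequence has probability 2^-1000, yet the all-heads sequence admits a description as short as "PRINT 1 1000 TIMES", while a typical chance sequence does not.]

```python
import random
import zlib

N = 1000
all_heads = "1" * N                                           # the "PRINT 1 1000 TIMES" sequence
random_run = "".join(random.choice("01") for _ in range(N))   # a typical chance sequence

# Compressed size is a crude upper bound on description length.
print(len(zlib.compress(all_heads.encode())))    # a handful of bytes
print(len(zlib.compress(random_run.encode())))   # on the order of 100+ bytes
```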
At the core of Darwinism is essentially the notion that you really can get something for nothing -- free information, free complex machinery, free design -- from chaos and natural law. The second law suggests that you can't; so, obviously, the second law must not apply in the case of biological evolution, and anything can happen in an open system. The logic is simple: We know Darwinian processes can do all this marvelous stuff because life exists, and there is no other materialistic explanation we can think of. Therefore, by definition, the second law must not apply to the origin of living systems and their subsequent diversification and increase in complexity and information content.

By the way, in an open system, isn't machinery required to use the available energy to do useful and creative work? If so, machinery can't come first, because machinery would be required to make that machinery.

This is why I contend that some scientists have gone mad when it comes to Darwinism. They've completely lost the ability to think objectively, and recast the laws of nature at will to conform with Darwinian philosophy.

GilDodgen
April 2, 2007 at 10:45 AM PDT
Makes sense; I always thought your two approaches complemented each other. A short time ago, when we were all having a discussion on the nature of specification, I brought up your definition as the one I preferred. I always assumed you and Dembski were talking about the same thing, using independent formulations.
Atom
April 2, 2007 at 10:16 AM PDT
