
“Specified Complexity” and the second law


A mathematics graduate student in Colombia has noticed the similarity between my second law arguments (“the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view”), and Bill Dembski’s argument (in his classic work “The Design Inference”) that only intelligence can account for things that are “specified” (=macroscopically describable) and “complex” (=extremely improbable). Daniel Andres’ article can be found (in Spanish) here . If you read the footnote in my article A Second Look at the Second Law you will notice that some of the counter-arguments addressed are very similar to those used against Dembski’s “specified complexity.”

Every time I write on the topic of the second law of thermodynamics, the comments I see are so discouraging that I fully understand Phil Johnson’s frustration, when he wrote me “I long ago gave up the hope of ever getting scientists to talk rationally about the 2nd law instead of their giving the cliched emotional and knee-jerk responses. I skip the words ‘2nd law’ and go straight to ‘information'”. People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?

Comments
Pixie, I have responded at the blog. Onlookers are invited to go take a look. GEM of TKI PS: It is probably worth excerpting here, as a close-off at UD, a passage I used from Sewell's main presentation of his case, as cited above:
The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . . What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special. THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . .
Go to the link to follow up on the significance of that and how it develops. All the best. kairosfocus
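For readers who want to see the probabilistic point behind the carbon-diffusion example in the excerpt above in action, here is a minimal sketch (a toy illustration of mine, not Sewell's code or data): start all the "carbon atoms" in one cell of an insulated bar and let them random-walk; the counts flatten toward uniformity simply because near-uniform arrangements have overwhelmingly more microstates, and the reverse concentration is never observed.

```python
import random

# Toy model: N "carbon atoms" random-walking among M cells of an insulated bar.
# Start with every atom in cell 0 (a highly non-uniform, low-weight macrostate)
# and watch the distribution flatten, as the laws of probability predict for diffusion.
random.seed(1)
M, N, STEPS = 10, 1_000, 200_000
cells = [N] + [0] * (M - 1)                 # all atoms start in the leftmost cell

for _ in range(STEPS):
    src = random.choices(range(M), weights=cells)[0]        # pick an atom via its cell
    dst = max(0, min(M - 1, src + random.choice((-1, 1))))  # hop one cell left or right
    cells[src] -= 1
    cells[dst] += 1

print(cells)   # roughly [100, 100, ...]; the reverse concentration never shows up
```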
I have given up on this thread, and am continuing the discussion at the thread kairosfocus kindly started on his own blog here. The Pixie
Continuing . . . 4] Evolution can be considered a targeted search; the target being a species that flourishes in the given environment. Closeness to that target is rewarded by survival (that may be tautological, but that does not make it wrong). Several notes: a] First, an equivocation. "Targeted" in GAs is intelligent, not blind. (E.g. set up antenna performance specs and randomise parameters to get the heuristically "best" outcome, etc. Do a Monte Carlo pattern of runs on a model, etc. Trial and error by PC . . . ) By definition, NDT-style evolution is precisely not based on a targeted, configurationally constrained search that rewards closeness to the identified target. b] Second, "natural selection" is a term, not a creative force of reality. In the real world, what happens is that relatively well-adapted individuals are more likely to thrive and reproduce, so their descendants dominate the population. There is no necessary tendency to drift. [Cf the discussion under the Blythian thread to see this.] NS is consistent with minor changes as observed [often oscillating, like Galapagos Finch beaks], it is consistent with stasis, and with loss of genetic information – how the founder principle often leads to new varieties and even "species." [NB how some of the Galapagos Finch species are interbreeding successfully now . . .] c] The tautology issue comes up when the above is confused with a creative force. That the least unfit or better fitted survive and reproduce does not mean that they are innovative, adaptable to future unforeseen environmental shifts, etc. Frontloading is intelligently targeted by contrast, with local adaptability across multiple environments a major objective of the optimisation. (I am not advocating this, just noting.) d] The big issue is information generation beyond the Dembski-type bound by in effect random processes, say 500 – 1000 bits, or about 250 – 500 monomers. Body-plan level innovations to account for the Cambrian revolution require three or five or more orders of magnitude more than that. The whole gamut of the observed cosmos, 13.7 BY and 10^80 or so atoms [on a generous estimate!], is insufficient to credibly cross that threshold once, much less dozens or more times. e] The issue comes back to the point in my jet in a vat, or TBO's protein in a prebiotic soup. The bridge to cross is the gap from scattered components to a complex, integrated, functional whole, where a high threshold of functionality is required for minimal function to occur. Intelligent agents do this routinely. Random forces, for excellent reasons linked to the underlying analysis of stat thermodynamics, do not do so credibly within the gamut of the observed cosmos. So, absent a back way up Mt Improbable, there is an unanswered problem. And, as Ch 9 in TBO points out, it's slim pickings on back paths up Mt Improbable. [Shapiro's recent Sci Am piece just underscores the point . . .] 5] It seems your original post was approved and has already gone through . . . GEM of TKI kairosfocus
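To put rough numbers on the probabilistic-resources point in d] above, here is a back-of-envelope sketch using the figures cited there (~10^80 atoms, ~13.7 billion years); the one-trial-per-atom-per-Planck-time rate is an extra, deliberately generous assumption of this illustration, not a figure from the comment.

```python
from math import log10

# Back-of-envelope: compare a 500-bit and a 1000-bit configuration space with a
# (deliberately generous) count of trials available in the observed cosmos.
atoms = 1e80                    # rough atom count cited above
seconds = 13.7e9 * 3.156e7      # ~13.7 billion years, in seconds
planck_time = 5.4e-44           # s; assume one "trial" per atom per Planck time
trials = atoms * (seconds / planck_time)

print(f"trials available ~ 10^{log10(trials):.0f}")
for bits in (500, 1000):
    log_states = bits * log10(2)
    print(f"{bits}-bit space ~ 10^{log_states:.0f} states; "
          f"searchable fraction ~ 10^{log10(trials) - log_states:.0f}")
```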
Continuing . . . Got through! Now on points: 1] 0 K: You can't get there – and that's important. [And at any accessible temp, Sconfig is a function of the number of states that pass the functional/macroscopic state test, random states in effect having a no-test test. For a unique code, once it is specified, its Sconfig is already zero. But of course it has thermal entropy etc.] Recall, for a system that can be so compartmentalised, Wsys = W1·W2. [This now-standard trick, I believe, was originally used by Boltzmann to derive s = k ln w itself, using a hypothetical physical partitioning of the system. That is, compartmentalisation of statistical weights based on physical processes is an underlying assumption of the whole process. Thence my vats and nanobots again.] 2] Modes and degrees of freedom: Of course, we have freezing-out effects that on a quantum basis do separate modes "naturally." In TBO's case and my Vat exercise, once we see that the energy of bonds is more or less the same for any configuration, and there is no effective pressure-volume work being done, the enthalpy term is quasi-neutral across configurations. But, when not just any random or near-random config will do [TBO discuss this on proteins in Chs 8 and 9], programmed work is normally indicated to get to the specified one. Prebiotic soup exercises end up requiring more probabilistic resources than are available in the credible gamut of the observed cosmos. BTW, from the discussion and refs made in TMLO, the usage of configurational work and entropy they make is in the OOL lit from that time, i.e. this is again not a design-thought innovation as such. 3] Wiki note:
It is often useful to consider the energy of a given molecule to be distributed among a number of modes. For example, translational energy refers to that portion of energy associated with the motion of the center of mass of the molecule. Configurational energy refers to that portion of energy associated with the various attractive and repulsive forces between molecules in a system. The other modes are all considered to be internal to each molecule. They include rotational, vibrational, electronic and nuclear modes. If we assume that each mode is independent (a questionable assumption) the total energy can be expressed as the sum of each of the components . . .
TBO's usage is of course in this spirit, bearing in mind that the molecules in view are endothermically formed, so the work of clumping and that of configuring can reasonably be separated, as mere clumping is vastly unlikely to get to the macroscopically recognisable functional state. Thus too, my vats example. Pausing 2 . . . kairosfocus
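As a small check on the Wsys = W1·W2 point in 1] above (a sketch of the standard Boltzmann bookkeeping, with arbitrary illustrative weights): because statistical weights of independent compartments multiply, the corresponding entropies add, which is what licenses treating separate contributions to S term by term.

```python
from math import log

k_B = 1.380649e-23  # J/K

def boltzmann_S(W: float) -> float:
    """Entropy of a macrostate of statistical weight W: S = k ln W."""
    return k_B * log(W)

# Two independent compartments: statistical weights multiply, so entropies add.
W1, W2 = 1e20, 1e12                       # illustrative weights only
print(boltzmann_S(W1) + boltzmann_S(W2))  # S1 + S2
print(boltzmann_S(W1 * W2))               # S for the combined weight -- the same number
```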
Hi Pixie: I sympathise on the comment filtering issue -- having had some mysterious swallowings myself. On the other hand, in another thread this AM Dave Scott informed me they have had something like 90,000 spam messages in recent months, and very few of these have been filtered off improperly. [I think there is a two stage filter or something . . .] I have posted a thread for onward comments. Pausing [get the link out of the way first] . . . kairosfocus
Right, I give up. It is stupid when I have to submit an argument one sentence at a time to sneak it past the spam filter. My last seven word sentence was rejected, with nothing in any way offensive in it. Kairosfocus, can I suggest you start a thread at your blog on this, and we move over there. The Pixie
9] Genetic Algorithms: These are not blind but designed, targeted searches that reward closeness to a target function well within the probabilistic resources of a computer. They are an instance of intelligent design that uses random processes.
Evolution can be considered a targeted search; the target being a species that flourishes in the given environment. Closeness to that target is rewarded by survival (that may be tautological, but that does not make it wrong). Yes, genetic algorithms are designed. That does not by itself imply that analogous processes must also be designed. The Pixie
kairosfocus
True, but immaterial...
Well, yes, the conversation has moved on, so my point is not relevant to what we are talking about now (thus, your points 1, 2 and 4 are responding to my objection to your vats argument, when my objection was actually to something else). However, I feel this is a fundamental problem with Sewell's argument.
3] S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex. Correct, and that is in part why you cannot get there (according to Nernst, was it?). One consequence would be perfectly efficient heat engines.
Right, so follow it through. S(config) for DNA at absolute zero is zero. The same for a random sequence, a simple repeating sequence and human DNA. Consider this thought experiment... Heat up those DNA sequences to ambient. The sequences do not change. One DNA is still the same random sequence, one is the same repeating pattern, one is still human DNA. What is S(config) for each of those sequences now they are at ambient?
I suspect you did Engg th-D; poss as Chem Eng. I did th-D in physics, as well as further relevant studies.
Straight chemistry, actually.
As noted long since, TBO are ANALYTICALLY separating dSth [as they label it — it is really clumping] and dS config, exploiting the state function nature of dS.
The W of S = k ln W can be broken down into the various ways in which energy can be stored in a system (e.g. as rotational energy, vibrational energy). I believe what TBO are doing is akin to trying to split out, say, vibrational energy. I think that is different to what you seem to describe. It certainly makes no sense to break the process into discrete steps, one in which vibrational energy increases, the second in which all other modes increase. The point about entropy is that the energy is distributed across all available modes. Interestingly, this Wiki entry mentions configurational as one of those energy modes. However, this is the energy associated with intermolecular forces, i.e., clumping together, rather than the configuration of a molecule. The Pixie
PS: Pixie, I think there is a "J" who comments at UD, so if you mean Joseph, you will need to differentiate . . . GEM of TKI kairosfocus
Continuing . . . 7] What is it, a hundred parts per clump (rather low for a jet plane, but perhaps not for a replicating protein), and say a mole of parts in the vat. So there are (say) a hundred factorial ways to arrange the parts, and 6×10^21 clumps. In the pre-biotic world, we are looking at something like 80 different amino acids [taking into account chirality], and selecting the correct set of 20 of the 80, 100 times over, in any order. That gives me P = (1/4)^100 ~ 10^-60 of getting TO a chain with the right chirality, bio-relevant amino acids alone. [I am of course leaving off those few oddball proteins that use odd acids.] Then, factor in correct bonding about 1/2 the time, and the odds go down by a further (1/2)^100. And that leaves aside chain-stoppers and the odds of disassembly of such energetically unfavourable macromolecules. Of course metabolism-first scenarios require dozens of proteins to get going. And more. In short, the odds of getting TO biofunctionality by chance are such as to begin to exhaust probabilistic resources real fast. That's why I think the man I call "Honest Robert" Shapiro is mistaken to champion this scenario, even as he accurately points out the core challenge of the RNA world model:
[OOL researchers] have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . . The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
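As a quick numerical check of the odds sketched in 7] above (a rough illustration under the stated assumptions only: a pool of ~80 acid types counting chirality, 20 bio-relevant ones, a 100-residue chain, and ~50% correct bonding per link):

```python
from math import log10

chain_length = 100
p_right_acid = 20 / 80    # right handedness + bio-relevant set, per residue (as assumed above)
p_right_bond = 1 / 2      # correct bonding, per link (rough figure from above)

p_chain = (p_right_acid ** chain_length) * (p_right_bond ** chain_length)
print(f"P(right acids alone) ~ 10^{chain_length * log10(p_right_acid):.0f}")  # ~ 10^-60
print(f"P(acids and bonds)   ~ 10^{log10(p_chain):.0f}")                      # ~ 10^-90
```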
8] I throw in those randomiser bots, and they will arrange and rearrange the parts all the time. And every time they happen upon a nano-jet, a "jet pilot bot" will remove it from the vat. What do you think the end result will be? Simple: once we are dealing with a realistically complex nano-jet, nothing functional will happen by random processes in the lifetime of the earth, with very high probability. The JPBs will, with near-unity probability, search in vain. 9] Genetic Algorithms: These are not blind but designed, targeted searches that reward closeness to a target function well within the probabilistic resources of a computer. They are an instance of intelligent design that uses random processes. As Wiki notes: A typical genetic algorithm requires two things to be defined: (1) a genetic representation of the solution domain, and (2) a fitness function to evaluate the solution domain. Thus, GAs show ID, not blind RM + NS. Further to this, the level of information to be generated is well below the Dembski-type bound, through the use of coded strings. [And, BTW, where did the required algorithms and coding schemes, as well as the complex functional machines to implement them, come from?] GEM of TKI kairosfocus
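For what the Wiki definition in 9] amounts to in practice, here is a minimal genetic-algorithm sketch in the familiar "Weasel" style (a toy of my own, not from any of the works cited). Note that the target phrase, the fitness function, the population size and the mutation rate are all supplied by the programmer, which is the point at issue in 9].

```python
import random

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"          # programmer-chosen target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    """Programmer-defined fitness: closeness to the chosen target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    best = max(population, key=fitness)               # selection rewards closeness to the target
    if best == TARGET:
        break
    population = [mutate(best) for _ in range(100)]   # offspring of the fittest, with mutation

print(generation, best)   # typically converges within a few hundred generations
```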
Continuing . . . 4] Just invoking probabilities and macrostates does not make it second law stuff. Relative statistical weights of macrostates are precisely how entropy is measured and compared in stat mech. That is what s = k ln w is about, where w is the number of microstates associated with the relevant macrostate. Robertson astutely notes, as can be seen in my always linked:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms [pp. Vii – viii] . . . . the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context. [pp. 3 – 7, 36, Stat Thermophysics, PHI]
In short, information, probability and thermodynamics are intimately linked, once we address cases in which clusters of microstates are associated with macroscopically observable ones. 2 LOT is a part of that. 5] I once debated thermodynamics with a guy who had no idea about calculus. I suspect you did Engg th-D; poss as Chem Eng. I did th-D in physics, as well as further relevant studies. 6] The best you can do is break the process into two steps, one with S(config) approximately zero, the second with S(thermal) approximately zero. As noted long since, TBO are ANALYTICALLY separating dSth [as they label it -- it is really clumping] and dS config, exploiting the state function nature of dS. By giving a thought expt that shows how bonding/clumping [and bonding energies, hence internal energy state, the dE part of dH – I feel a pull to use my more familiar dU] can be differentiated from configuring, I have shown why they can do that. Pausing . . . kairosfocus
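Robertson's point that "the assignment of probabilities to a set represents information" can be made concrete with a short sketch (mine, not Robertson's): the Shannon uncertainty of a probability assignment is largest for the uniform, know-nothing distribution, and that maximum is just log W, the same quantity that appears in s = k ln w up to the constant k.

```python
from math import log2

def shannon_H(probs):
    """Shannon uncertainty H = -sum p*log2(p), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

W = 8
uniform = [1 / W] * W               # no reason to prefer any microstate: least information
certain = [1.0] + [0.0] * (W - 1)   # the p2 = 1 case Robertson describes: complete information
skewed  = [0.5, 0.25, 0.125, 0.125, 0, 0, 0, 0]

print(shannon_H(uniform), log2(W))  # 3.0 and 3.0: the uniform case recovers log2 W
print(shannon_H(certain))           # 0.0: outcome known, no uncertainty
print(shannon_H(skewed))            # 1.75: partial information
```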
Hi Pixie: Okay, today I paused to remember Vimy Ridge: 90 years to the day and date. So pardon if I am a bit summary or rough and ready on points: 1] Some things are extremely unlikely, but not at all connected to the second law. True, but immaterial. The cases in point, in my thought expt and in TBO's analysis, have everything to do with 2 LOT, and particularly to do with why TdS can be split up into clumping and configuring stages, to yield a decrease in entropy associated with the work of creating an information-rich structure. Of course, as 2 LOT requires, the planned work reqd to do so overcompensates, so overall entropy increases. My corrective point stands: TBO made no mistake in splitting up TdS. In particular, I have shown why it makes sense for TBO to see that we can distinguish what I have re-termed "clumping" and configuring; and other cases where it may or may not happen that way are immaterial. The rest of their analysis follows. 2] It is a fallacy to claim that: Something is extremely improbable, therefore it is forbidden by the second law, therefore it will not happen. Notice, I have not ever spoken of "forbidden" by 2 LOT in the stat mech sense. I have pointed out that the equilibrium cluster of microstates overwhelmingly dominates the distribution, and so fluctuations sufficiently far away from that equilibrium are utterly unlikely to occur. This is the basis for the classical observation that isolated systems do not spontaneously decrease their entropy etc. In particular, the relevant cases are such that the probabilistic resources of the observed cosmos are insufficient to credibly lead to an exception. [The case where your randomly selected cluster turns out to be functional for the "zip-zap-zop" is so likely to fall under this stricture that I simply discussed it as a "for the sake of argument" case. Remember that nothing forbids the air molecules all rushing to one end of your room, but the probabilities of such a fluctuation are so low we simply routinely ignore it.] Further to this, I also showed in my always linked – have you read it? -- that Clausius' case study no 1 of closed systems within an isolated system, with heat moving from one body to the next, will cause an INCREASE of entropy in the second body, absent coupling of the raw energy to do work. The issue at stake, therefore, is the ORIGIN of functionally specific, complex, information-processing and information-based systems beyond the Dembski-type bound – the exhaustion of the probabilistic resources of the observed cosmos. Thus, my vats example, especially the control vat. 3] S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex. Correct, and that is in part why you cannot get there (according to Nernst, was it?). One consequence would be perfectly efficient heat engines. [That is, we are on the way to perpetual motion machines . . .] Pausing . . . kairosfocus
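On the "air molecules all rushing to one end of the room" illustration in 2] above, the arithmetic is easy to sketch (a toy calculation, treating each molecule independently; the molecule counts are round illustrative figures):

```python
from math import log10

# Probability that every one of N independent molecules sits in the left half of the
# room at a given instant: (1/2)^N.
for N in (10, 100, 1_000, 2.4e22):   # last entry: very roughly a litre of air
    print(f"N = {N:.3g}:  P ~ 10^{N * log10(0.5):.3g}")
```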
kairosfocus Okay, those vats. First off, splitting S(config) and S(thermal). If you want to know deltaH for a chemical reaction at a certain temperature, you could determine deltaH to cool/heat the reactants to standard conditions, look up the deltaH for the reaction, then determine deltaH to heat/cool the products to the stated conditions. You can break the one-step process into a number of steps, and as long as you start and end at the same point, the overall thermodynamics must be the same. I do not think you can use that sort of approach for S(config) and S(thermal). As I read TBO, these are two things going on in every process; you cannot do one and then the other. The best you can do is break the process into two steps, one with S(config) approximately zero, the second with S(thermal) approximately zero. Maybe this is what you mean (or maybe you disagree), but I want to clarify my understanding.
e] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course, work, and it replicates bonding at random. We see here dSthermal] After a time, will we be likely to get a flyable nano jet?
No, I do not think we do see deltaS(thermal), though perhaps we see an analogy to it. Do we get flyable nano jets? Yes, some. How many depends on how many ways there are of putting the parts together, and how many clumps. What is it, a hundred parts per clump (rather low for a jet plane, but perhaps not for a replicating protein), and say a mole of parts in the vat. So there are (say) a hundred factorial ways to arrange the parts, and 6x10^21 clumps. Hmm, I have some new nanobots. These are "jet pilot bots". I throw them into this vat, and all the complete nano-jets are removed. Yeah, okay, probably none right now, but... Then I throw in those randomiser bots, and they will arrange and rearrange the parts all the time. And every time they happen upon a nano-jet, a "jet pilot bot" will remove it from the vat. What do you think the end result will be?
h] Now, let us go back to the vat. For a large cluster of vats, we use direct assembly nanobots, but in each case we let the control programs vary at random - say hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate. Given the complexity of the relevant software, will we be likely to, for instance, come up with a hyperspace-capable spacecraft or some other sophisticated and unanticipated technology? Justify your answer on probabilistic grounds. My prediction: we will have to wait longer than the universe exists to get a change that requires information generation on the scale of 500 - 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]
I am not even sure hyperspace-capable spacecraft is possible, which could mean this task is impossible for an intelligent civilisation, let alone a vat of nano-bots. There is software that does do this sort of thing. It does not design nanobots, but it does come up with novel ideas, using RM + NS. It uses an idea called a Genetic Algorithm. I believe they do indeed generate information. Given that your process is computer-controlled, surely we can dispense with the imaginary nano-bots, and just look at software that really exists. The Pixie
J Only just noticed your post, sorry (and wanting to put off those vats...).
What intelligent (including human) agency can do, however, is generate results that are different from anything produced by blind/dumb/purposeless processes.
If by results you mean things like jumbo jets, then yes.
If one holds that all natural processes are blind/dumb/purposeless, then this implies that intelligence is “supernatural.”
I do not.
You left out a few adjectives that make all the difference: “unpredictably,” “indefinitely,” “of arbitrary character.”
Can you explain why these make a difference? Specifically, what is the relevance to the discussion? If I am slaying strawmen, it is because I cannot see what your point is. The Pixie
kairosfocus Happy Easter!
You are reiterating a point I have repeatedly made — “there is no logical or physical principle that forbids extremely improbable events” or the like — but to distract from its proper force. For, the stat mech form of 2 LOT shows that non-equilibrium microstates are sufficiently improbable in systems of the scale in question, that they will not reasonably be spontaneously accessed by macroscopic entities within the lifetime of the observed universe etc. Thus, the classical result.
No. What you say here is true, but entirely misses my point. Some things are extremely unlikely, but not at all connected to the second law. The chance of me winning the UK lottery on Saturday is 1 in 14 million (if that is not improbable enough, then consider winning the lottery for n consecutive Saturdays). That has nothing to do with entropy. It has nothing to do with the second law. Even if you describe it in terms of macrostates (winning the lottery is a single macrostate; not winning is 14 million), it still has nothing to do with the second law. It is a fallacy to claim that: Something is extremely improbable, therefore it is forbidden by the second law, therefore it will not happen. If you want to invoke the second law of thermodynamics, you have to look at the thermodynamic entropy.
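For the record, the "1 in 14 million" figure is just the number of ways of choosing 6 balls from 49; a one-line check:

```python
from math import comb

# 6 numbers drawn from 49: C(49, 6) equally likely tickets.
print(comb(49, 6))   # 13983816 -- the "1 in 14 million" quoted above
```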
Oops, that is I am afraid impossible under 3 LOT: no finite number of refrigeration cycles can reduce material objects to absolute zero. 0K is inaccessible though we can get pretty close to it.
I was talking hypothetically. But the boys in the theoretical lab have done some sums, based on S = k ln W. It turns out that from a consideration of the macrostates, S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex.
However, in your case you have inadvertently smudged over the key point that we are looking at relative statistical weights of observable macrostates; in my case as functional ones – a flyable microjet. In yours, ultimately, a sub-assembly for a "zip-zap-zop." [Notice that whether the sub-assembly works is something we can recognise macroscopically.]
What this comes down to is the point I made at the start. Just invoking probabilities and macrostates does not make it second law stuff. Even if TBO says it is.
PS: I should note on a likelihood point, once conceded BTW by Dawkins. If we see a flyable jet, the best explanation is intelligent agency, not a tornado in a junkyard. That is because relative to chance plus necessity only, the configuration is so improbable that we recognise at once that a force that makes it far more likely is the best explanation: agency. You will note that the phil of inference to best explanation is the underlying basis for scientific explanation.
Sure. You consider competing explanations, determine probabilities, and go with the most probable (and your confidence reflects the probabilities).
PPS: I took exception to the “do you know” point...
I asked about dS as I once debated thermodynamics with a guy who had no idea about calculus, and it was a long time before I realised that. His arguments were entirely different from yours, but you did seem to be using dS for deltaS, so I just asked for reassurance. I was afraid it would cause offense, and now I have the reassurance, I apologise for asking (if that makes sense). The Pixie
PS: I should note on a likelihood point, once conceded BTW by Dawkins. If we see a flyable jet, the best explanation is intelligent agency, not a tornado in a junkyard. That is because relative to chance plus necessity only, the configuration is so improbable that we recognise at once that a force that makes it far more likely is the best explanation: agency. You will note that the phil of inference to best explanation is the underlying basis for scientific explanation. The point of course also holds for the vats expt, as a comparison with the control vat will show, and of course with the vats for which the control software has been randomised. PPS: I took exception to the "do you know" point, as this fits in all too closely with Dawkins-style prejudice and bigotry: if you object to NDT and broader evo materialism, you "must" be ignorant, stupid, insane or wicked. I freely confess to being a penitent sinner under reformation, but daresay that as a holder of relevant undergraduate and graduate degrees, I am none of the first three. So, for the sake of productive, civil discourse, let us not go there. GEM of TKI kairosfocus
Continuing . . . 3] I am curious if you know the difference between deltaS and dS? Obviously. Delta, strictly, is a finite increment; d is infinitesimal. [I am not bothering to make too much of the distinction here as we both know that we can convert the second into the first by integration, iterative summation of the increments [in the limit as dX or whatever --> 0] in effect.] I assume you too have done at least high school calculus. 4] First I will send in my "splodge assembler nanobots". These will assemble the parts into a random, but prespecified configuration (I call it a "splodge"). And then I will send the "randomiser nanobots" in; these rearrange the parts randomly. First, "a random, but prespecified configuration" is, strictly, a contradiction in terms. You probably mean a specified, complex [beyond 500 – 1000 bits of information] but non-functional configuration. These will indeed reduce the number of accessible microstates, and will do dSclump and dSconfig, but will not in so doing create a macroscopically functional macrostate. [Of course, after-the-fact "discovery" that an at-random selected, targeted microstate is functional for something else will be utterly improbable; but more so, it will not undermine the force of my main point, namely that dSclump and dSconfig are in fact clearly distinguishable and incremental.] Rearranging the parts thereafter at random will simply expand the number of microstates corresponding to the macrostate -- any at-random clumped state will do. Again, the point I have made stands. 5] The cryogenics lab have successfully cooled the nano-jet, the nano-splodge and the mixture of products from the randomiser-bots to absolute zero. Oops, that is I am afraid impossible under 3 LOT: no finite number of refrigeration cycles can reduce material objects to absolute zero. 0K is inaccessible though we can get pretty close to it. In any case, on your real point: 6] Same entropy, but different configurations. Can you explain how that can be? When s = k ln w is based on the same number of accessible states, we are looking at the same value of entropy. However, in your case you have inadvertently smudged over the key point that we are looking at relative statistical weights of observable macrostates; in my case as functional ones – a flyable microjet. In yours, ultimately, a sub-assembly for a "zip-zap-zop." [Notice that whether the sub-assembly works is something we can recognise macroscopically.] In either case, we are looking at the same, now demonstrably plain point: dSclump and dSconfig are incrementally separable and analytically meaningful. So, TBO's analysis makes sense. GEM of TKI kairosfocus
Ah, Pixie: First, Happy Easter. I will note on selected points, mostly in sequence. 1] Just because something is extremely unlikely, that does not make it forbidden by the second law. Not even when you invoke macrostates! You are reiterating a point I have repeatedly made -- "there is no logical or physical principle that forbids extremely improbable events" or the like -- but to distract from its proper force. For, the stat mech form of 2 LOT shows that non-equilibrium microstates are sufficiently improbable in systems of the scale in question, that they will not reasonably be spontaneously accessed by macroscopic entities within the lifetime of the observed universe etc. Thus, the classical result. Consequently, as you will note from my vats thought experiment, the 2 LOT does apply to the situation, and entropy is a relevant consideration. [Notice how I chose a scale sufficiently small to be quasi-molecular but too large to be quantum-dominated. By using the nanobots, you can see how the work of clumping reduces the number of accessible microstates, and how the work of configuring further reduces; thus by s = k ln w, and the state functional nature of s, we can distinguish dSclump [or thermal if you will] from dSconfig. The tornado example is similar, but on a larger scale.] 2] . . . because that is the way you have set it up. Precisely, in order to show that we can distinguish between [1] the work of setting up a random agglomeration, and [2] that of setting up a macroscopically recognisable functional one. (This was where your objection lay to TBO's work.) I have shown by an appropriate thought expt that one can do TdSclump then TdSconfig, or do it directly; in both cases by application of intelligence. You will notice that the sign of dS in both cases is incrementally negative: [1] divide up the positions in the vat [~ 1 m^3] into suitable phase space cells such that only one part will be in each at most [~ 10^-6 m], [2] similarly, work out the 3-d rotational alignment of the parts, such that only one will be more or less correct – 45-degree increments give 8 * 8 * 8 = 512 angular orientations per part. The number of accessible location and orientation states for parts in the vat as a whole vastly exceeds the number for clumping, and the number for clumping vastly exceeds that for alignment to make a functional jet. I have left off the selection and sorting work, and the need to kill off translation and rotation, which again will both compound the numbers of accessible microstates. I should note that the assembly of biological macromolecules is an endothermic process, as TBO highlight. Also, say, let the parts be magnetic. [Hard to do for Al but let's ignore for the sake of argument.] The relevant scale will be such that the amount of pulling force exerted will be tiny until the parts are nearly in contact, which is what I set up. Also, 1 m^3 of, say, water has in it something like 56 kmol, or 3.4 * 10^28 molecules. A few hundred million parts will be "lost" in that space, apart from smart search techniques, which of course require work. [Thence Brillouin's point on Maxwell's Demon.] In short, my case has been properly made. Pausing (and dear filter please be kind today . . .) . . . kairosfocus
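Putting the cell-counting in 2] above into a few lines (a sketch using the figures given there: a 1 m^3 vat, ~10^-6 m location cells, 512 coarse orientations per part, and a few hundred million parts; the flyable-jet state count of ~100 is the illustrative figure from the thought experiment, and the neglect of sorting, translation and rotation noted above carries over):

```python
from math import log

k_B = 1.380649e-23  # J/K

n_parts = 3e8                        # "a few hundred million parts"
location_cells = (1.0 / 1e-6) ** 3   # 1 m^3 vat in (10^-6 m)^3 cells: 10^18 of them
orientations = 8 ** 3                # 45-degree steps about three axes: 512

# Per-part weights (sorting, translation/rotation and much else ignored, as noted above):
W_scattered = location_cells * orientations   # any cell, any orientation
W_clumped = orientations                      # clumped: location pinned, orientation still free
W_jet = 100.0                                 # whole-assembly weight for a flyable jet (illustrative)

logW_scattered = n_parts * log(W_scattered)
logW_clumped = n_parts * log(W_clumped)

print(f"dS_clump  ~ {k_B * (logW_clumped - logW_scattered):.1e} J/K")   # negative
print(f"dS_config ~ {k_B * (log(W_jet) - logW_clumped):.1e} J/K")       # negative again
```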
kairosfocus The cryogenics lab have successfully cooled the nano-jet, the nano-splodge and the mixture of products from the randomiser-bots to absolute zero, and discovered that they all have zero entropy at that temperature. Same entropy, but different configurations. Can you explain how that can be? I will have to respond to the rest of your post later, I am afraid. The Pixie
kairosfocus I appreciate by the way that I have yet to address your main point.
In this vat, call out the random cluster nanobots, and send in the jet assembler nanobots. These recognise the parts, and rearrange them to form a jet, doing configuration work. A flyable jet results - a macrostate with a much smaller statistical weight of microstates, probably of order ones to tens or perhaps hundreds. [We see here separated dSconfig.]
I am curious if you know the difference between deltaS and dS? Anyway, this "configuration work"... It sounds as though there is an energy requirement here, so perhaps we can play around with that. It just so happens I have my own nanobots. First I will send in my "splodge assembler nanobots". These will assemble the parts into a random, but prespecified configuration (I call it a "splodge"). And then I will send the "randomiser nanobots" in; these rearrange the parts randomly. Let us suppose that all three nanobot armies end up doing the same number of changes (though the specific changes are different), which one did the most "configuration work"? Which expended the most energy? Ah, now here is a surprise. I have just heard from the next door lab that a "splodge" is a vital component in a "zip-zap-zop". Turns out that that configuration is actually very useful. How does that affect the "configuration work" of the "splodge assembler nanobots"? Does this new knowledge change the thermodynamics? The Pixie
kairosfocus I am having problems with the spam filter, so this may be spread over many posts. Sorry
In the control vat, we simply leave nature to its course. Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging is not strong enough for them to clump and precipitate.] ANS: Logically and physically possible, but the equilibrium state will on stat thermodynamics grounds overwhelmingly dominate — high disorder.
Well, yes, because that is the way you have set it up. Now let us suppose that the parts cling to each other in a way that allows them to clump together. Now the thermodynamics causes them to clump together! Say the process is: A + B + C + D -> car + heat. As the process releases energy, under some conditions this process may well be thermodynamically favoured. It may seem counter-intuitive, but the nano-car plus heat may be the more disordered state - thermodynamically (i.e., for the energy). Consider crystal formation; cool a saturated salt solution, and this is what happens: Na+ + Cl- -> NaCl(crystals) + heat. The highly ordered crystals, plus the heat, are thermodynamically favoured over the chaotic ions dissolved in water. Yes, I know order in a crystal is different to complexity in whatever, but the second law is about entropy, which is disorder. And not complexity. The Pixie
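The point being made here, that an exothermic, "ordering" process can still be spontaneous overall, is just the Gibbs bookkeeping dG = dH - T dS(system), equivalently dS(total) = dS(system) + dS(surroundings) with dS(surroundings) = -dH/T. A sketch with made-up illustrative numbers (not measured data for NaCl):

```python
# Gibbs bookkeeping for an exothermic process that orders the system (e.g. crystallisation).
# All numbers are illustrative, not measured NaCl data.
T = 298.0           # K
dH = -20_000.0      # J/mol, heat released to the surroundings
dS_system = -40.0   # J/(mol K), the system becomes more ordered

dS_surroundings = -dH / T                  # the released heat raises the surroundings' entropy
dS_total = dS_system + dS_surroundings
dG = dH - T * dS_system

print(f"dS_total = {dS_total:+.1f} J/(mol K)")   # positive: the second law is satisfied
print(f"dG       = {dG:+.1f} J/mol")             # negative: spontaneous; dG = -T * dS_total
```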
kairosfocus
(I have already contrasted the case of a tornado in a junkyard, which could logically and physically do the same, but the functional macrostate is so rare relative to non-functional ones that random search strategies are maximally unlikely to access it, i.e. we see here 2nd LOT at work.)
No, this has nothing to do with the second law! The second law is about entropy. If you can frame the situation in terms of entropy, you might have a point. Just because something is extremely unlikely, that does not make it forbidden by the second law. Not even when you invoke macrostates! The Pixie
Continuing . . . h] Now, let us go back to the vat. For a large cluster of vats, we use direct assembly nanobots, but in each case we let the control programs vary at random – say hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate. Given the complexity of the relevant software, will we be likely to, for instance, come up with a hyperspace-capable spacecraft or some other sophisticated and unanticipated technology? Justify your answer on probabilistic grounds. My prediction: we will have to wait longer than the universe exists to get a change that requires information generation on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?] i] Try again, this time to get to the initial assembly program by chance . . . See the abiogenesis issue? j] In the actual case, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO's term chemical work, fine], and configuring work, can be identified and applied to the shift in entropy through the s = k ln w equation. This, through Brillouin, TBO link to information, citing as well Yockey-Wicken's work at the time and their similar definition of information. [As you know I have pointed to Robertson on why this link makes sense -- and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.] In short, the basic point made by TBO in Chs 7 - 8 is plainly sound. The rest of their argument follows. 2] On heating and cooling vs assembling rocks BTW, on cooling rocks and continents, the thermal entropy does reduce on cooling as the number of accessible microstates reduces. Now, redo the experiment above with nano-ashlars etc that together make up a model, functional aqueduct, complete with an arched bridge -- that we could inspect through a microscope. --> Would this be likely to happen by chance + necessity only if you heat the vat [inject more random molecular motion]? --> Would it happen if you were to clump the stones haphazardly? --> If you clump then assemble? --> If you search out and directly assemble? --> Can you identify dStot, dSthermal on heating/cooling, dSconfig? --> Apart from scale, is this in principle different from a tornado building an aqueduct, vs a Roman legion doing so? (In short, it seems to me that we can in principle identify the entropies associated, though of course to actually measure would be beyond us at this level of technology!!!) I trust this helps for now. Happy Easter. GEM of TKI kairosfocus
Hi Pixie and Joe [et al]: Today is Good Friday [so my focus for the day is on other matters of greater moment . . .]. I think a thought experiment will be helpful in clarifying, along with a pause to read the online chapters of TMLO, 7, 8 & 9. My own always linked may also help, follow the link through my name, please. 1] THOUGHT EXPT: a] Consider the assembly of a Jumbo jet, which plainly requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard, which could logically and physically do the same, but the functional macrostate is so rare relative to non-functional ones that random search strategies are maximally unlikely to access it, i.e. we see here 2nd LOT at work.) b] Now, let us shrink the example, to a nano-jet so small that the parts are susceptible to Brownian motion, i.e. they are of sub-micron scale and act as large molecules, say a million of them, some the same, some different etc. In-principle possible. Do so also for a car, a boat and a submarine, etc. c] In several vats of a convenient fluid, decant examples of the differing nanotechnologies, so that the particles can then move about at random. d] In the control vat, we simply leave nature to its course. Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging is not strong enough for them to clump and precipitate.] ANS: Logically and physically possible, but the equilibrium state will on stat thermodynamics grounds overwhelmingly dominate -- high disorder. e] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course, work, and it replicates bonding at random. We see here dSthermal] After a time, will we be likely to get a flyable nano jet? f] In this vat, call out the random cluster nanobots, and send in the jet assembler nanobots. These recognise the parts, and rearrange them to form a jet, doing configuration work. A flyable jet results -- a macrostate with a much smaller statistical weight of microstates, probably of order ones to tens or perhaps hundreds. [We see here separated dSconfig.] g] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from random "macromolecule" to configured functional one. That is: dS tot = dSthermal + dS config. Pausing . . . kairosfocus
Joe, if a planet is turning, then a region of the surface is sometimes in day and sometimes in night. At night, that entire region will be cooling down, a process it achieves by "coupling" with space, so it can "raw export" that energy. This all results in a lowering of entropy across the physical landscape. Of course, you could take the position that all planets were set up to do that by God, but then you might as well claim that evolution was set up to do that by God, or abiogenesis if you really want to go down that route. The Pixie
the Pixie, I'm not sure what your point is. Would a planet that is not turning also have huge regions cooling down? Does a star that turns have huge regions cooling down? How about a star that doesn't turn? the Pixie: Every planet that is turning will have huge regions cooling down, and so decreasing in entropy. I would say it would depend on the speed of rotation as well as the atmosphere. And then all that would also depend on how the rotation started. Joseph
Joe, I am not sure what your point is. Every planet that is turning will have huge regions cooling down, and so decreasing in entropy. This is not something unique to this "privileged planet". The Pixie
the Pixie: Non-intelligent systems repeatedly, but predictably create large regions of low entropy; as the planet turns whole continents cool down, losing entropy. That just begs the question, as "The Privileged Planet" makes it clear that the planet and solar system were intelligently designed. IOW, you start off talking about "non-intelligent systems"; there isn't any data that would demonstrate this planet is such a system, and yet you appear to be using it as an example to support your claim. Joseph
kairosfocus
6] Spontaneous: By chance + necessity only.
Well that is the first time I have come across THAT definition for spontaneous. So, as I asked last time, is the combustion of coal spontaneous? Does entropy increase or decrease? Personally, I believe it is spontaneous, as the word is used in thermodynamics (but not under common usage), and entropy does increase, but I will be intrigued to see what your position is. For reference, see the Wiki entry for "spontaneous process". Please note that it also relates the two terms of the Gibbs' equation to entropy in the system and entropy in the surroundings, and not to "chemical work" and "entropy work", as Thaxton would have us believe.
All that happens in TMLO at that point [NB: I am looking at my paper copy, with my annotation “where dG/T LTE 0 for spontaneous changes”] is they point out that there is a difference between a random and a specified configuration of the molecules they have in mind:
What they are doing is wrong, but I do now realise that the way they are justifying it is in the way you say, not the way I said originally. Yes, they say they are splitting deltaS into deltaS[thermal] and deltaS[config]. But deltaS[thermal] is the same as the deltaS that Gibbs used, and everyone since him has used. They slip deltaS[config] into deltaS, and then explicitly bring it out again as they go from 8.4b to 8.5, in effect introducing a whole new term, deltaS[config]. So the issue is that it makes no sense to split the entropy, S = S[thermal] + S[config]. Remember that the thermodynamic entropy is a property we can measure in the laboratory. It is a routine analysis, simply involving heating a sample of known weight and measuring the energy input (say with a differential scanning calorimeter), then extrapolating back to absolute zero (where entropy is zero). I believe that if S[config] (if we suppose such a quantity) is zero at 0 K, and presuming the configuration does not change during warming to ambient, then S[config] is still zero at 25 degC. Bear in mind that chemists, physicists and engineers have been using the Gibbs equation for about a century to extend the second law to open systems, and in all that time no one has ever found an exception. I think that would be odd if there really was an additional term. Has no one any experimental data of processes going or not going that can only be explained by this new term? Why not? The Pixie
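The "routine analysis" described here, heating a sample and extrapolating back to absolute zero, amounts to evaluating S(T) as the integral of Cp/T from 0 K up to T. A sketch of that integral with a crude Debye-like stand-in for Cp (the model and numbers are illustrative, not DSC data):

```python
# Third-law (absolute) entropy from calorimetry: S(T) = integral of Cp/T' dT' from 0 to T.
# Cp(T) below is a crude Debye-like stand-in (T^3 rise, saturating), not real DSC data.
def Cp(T: float) -> float:               # J/(mol K)
    return 50.0 * (T / 300.0) ** 3 if T < 300.0 else 50.0

def absolute_entropy(T_final: float, steps: int = 100_000) -> float:
    dT = T_final / steps
    S, T = 0.0, dT / 2                   # start just above 0 K; Cp/T -> 0 there anyway
    for _ in range(steps):
        S += Cp(T) / T * dT
        T += dT
    return S

print(f"S(298 K) ~ {absolute_entropy(298.0):.1f} J/(mol K)")  # the zero point is set at 0 K
```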
kairosfocus
Of course, as you later note, the 2nd law is at micro level, founded on thermodynamic probabilities. That is where the basic problem comes in in your arguments on abiogenesis — the probabilities and equilibria likely to result are such that in the gamut of a planet with a prebiotic soup, the relevant macro-moloecules simply will not form in anything like the concentration or proximity to achieve either RNA world or metabolism first scenarios.
We agree that entropy must increase because it is too improbable for entropy to decrease. What I am objecting to is the reverse claim; that if something is improbable, then that is because entropy must be decreasing. It is improbable that I will win the lottery or get stuck by lightning, but that has nothing to do with entropy. Personally I do not know enough about (supposed) pre-biotic conditions to be able to estimate the probabilities (though if you have any figures, and can justify them, I would be interested to see).
You skip over the key difference between a random DNA polymer and a bio-informational one. Both are complex if long enough, but one is functionally specified, the other is not. To move from the random state to the functional one plainly requires work, and shifts the entropy as there is now a quite discernibly different macrostate — just think about the effect of random changes in DNA in living systems.
So we have three DNA sequences, of equal length. Sequence X is a simple repeating patterm, sequence Y is random and sequence Z is human DNA. Do they have the same thermodynamic entropy? Does it take the same energy input to synthesise them in the lab, one base at a time (does it make a difference if you specify the random sequence in advance)? If not, which will take more, and why? My position is that each sequence has the same entropy. As each base is added, the entropy of the sequence will change by the same amount, regardless of whether the sequence ultimately becomes X, Y or Z. It is my belief that at absolute zero all three sequences will have zero entropy (that is what the third law tells me, anyway). The Pixie
kairosfocus
Of course, as you later note, the 2nd law is at micro level, founded on thermodynamic probabilities. That is where the basic problem comes in, in your arguments on abiogenesis — the probabilities and equilibria likely to result are such that in the gamut of a planet with a prebiotic soup, the relevant macro-molecules simply will not form in anything like the concentration or proximity to achieve either RNA world or metabolism first scenarios.
We agree that entropy must increase because it is too improbable for entropy to decrease. What I am objecting to is the reverse claim; that if something is improbable, then that is because entropy must be decreasing. It is improbable that I will win the lottery or get struck by lightning, but that has nothing to do with entropy. Personally I do not know enough about (supposed) pre-biotic conditions to be able to estimate the probabilities (though if you have any figures, and can justify them, I would be interested to see them).
You skip over the key difference between a random DNA polymer and a bio-informational one. Both are complex if long enough, but one is functionally specified, the other is not. To move from the random state to the functional one plainly requires work, and shifts the entropy as there is now a quite discernibly different macrostate — just think about the effect of random changes in DNA in living systems.
So we have three DNA sequences, of equal length. Sequence X is a simple repeating pattern, sequence Y is random and sequence Z is human DNA. Do they have the same thermodynamic entropy? Does it take the same energy input to synthesise them in the lab, one base at a time (does it make a difference if you specify the random sequence in advance)? If not, which will take more, and why? My position is that each sequence has the same entropy. As each base is added, the entropy of the sequence will change by the same amount, regardless of whether the sequence ultimately becomes X, Y or Z. It is my belief that at absolute zero all three sequences will have zero entropy (that is what the third law tells me, anyway).
6] Spontaneous: By chance + necessity only.
Well that is the first time I have come across THAT definition for spontaneous. So, as I asked last time, is the combustion of coal spontaneous? Does entropy increase or decrease? Personally, I believe it is spontaneous, as the word is used in thermodynamics (but not under common usage), and entropy does increase, but I will be intrigued to see what your position is. For reference, see the Wiki entry for "spontaneous process" (the filter seems to choke on the link). Please note that it also relates the two terms of the Gibbs' equation to entropy in the system and entropy in the surroundings, and not to "chemical work" and "entropy work", as Thaxton would have us believe.
All that happens in TMLO at that point [NB: I am looking at my paper copy, with my annotation “where dG/T ≤ 0 for spontaneous changes”] is they point out that there is a difference between a random and a specified configuration of the molecules they have in mind:
What they are doing is wrong, but I do now realise that the way they are justifying it is in the way you say, not the way I said originally. Yes, they say they are splitting deltaS into deltaS[thermal] and deltaS[config]. But deltaS[thermal] is the same as the deltaS that Gibbs used, and everyone since him has used. They slip deltaS[config] into deltaS, and then explicitly bring it out again as they go from 8.4b to 8.5, in effect introducing a whole new term, deltaS[config]. So the issue is that it makes no sense to split the entropy, S = S[thermal] + S[config]. Remember that the thermodynamic entropy is a property we can measure in the laboratory. It is a routine analysis, simply involving heating a sample of known weight and measuring the energy input (say with a differential scanning calorimeter), then extrapolating back to absolute zero (where entropy is zero). I believe that if S[config] (if we suppose such a quantity) is zero at 0 K, and presuming the configuration does not change during warming to ambient, then S[config] is still zero at 25 degC. Bear in mind that chemists, physicists and engineers have been using the Gibbs equation for about a century to extend the second law to open systems, and in all that time no one has ever found an exception. I think that would be odd if there really were an additional term. Has no one any experimental data of processes going or not going that can only be explained by this new term? Why not? The Pixie
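By way of illustration of the third-law measurement the Pixie describes, here is a minimal Python sketch; the heat-capacity values are invented placeholders standing in for a real calorimeter trace, and the whole method is just the integral of Cp/T from absolute zero upward.

    # Invented heat-capacity data (J/(mol K)) from near 0 K up to 298 K;
    # a real third-law determination would use measured calorimetric values.
    T  = [1.0, 10.0, 50.0, 100.0, 150.0, 200.0, 250.0, 298.0]   # K
    Cp = [0.01, 0.5, 8.0, 18.0, 24.0, 27.0, 29.0, 30.0]         # J/(mol K)

    # Third-law entropy: S(298 K) = integral of Cp/T dT from 0 K (where S = 0),
    # approximated by the trapezoid rule over the tabulated points.
    S = 0.0
    for i in range(1, len(T)):
        S += 0.5 * (Cp[i] / T[i] + Cp[i - 1] / T[i - 1]) * (T[i] - T[i - 1])
    print(f"S(298 K) is roughly {S:.1f} J/(mol K)")

Nothing in this bookkeeping knows or cares what configuration the sample is in; whether it should is the point at issue in the exchange above.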
J: 1] Thanks: Welcome. 2] Re Pixie on strawmen Sadly, just as predicted at the head of the thread. 3] Pixie: re Gibbs free energy & entropy Cf TMLO ch7 and TBO's discussion in and around eqns 7.5 - 10b. (The next section on far from eq syss is also interesting on components of S. Ch 8, eqns 8.1 - 3c and their context are also interesting on the issue of random vs informational molecules and entropy. Eqs 8.4 - 5 follow in that context. ) GEM of TKI kairosfocus
Me:
What intelligence certainly can do is repeatedly and unpredictably create regions of indefinitely large amounts of low entropy, of arbitrary character. Blind/dumb/purposeless processes can’t.
The Pixie:
I am not clear what you ...are claiming here. There are no known exceptions to the second law; throwing intelligence into the mix will not change that one bit (supernatural intelligence might, but I assume you are talking about human intelligence).
I agree, and have never claimed otherwise. There are no known exceptions to the 2nd law, including in cases involving intelligence agency. What intelligent (including human) agency can do, however, is generate results that are different from anything produced by blind/dumb/purposeless processes. If one holds that all natural processes are blind/dumb/purposeless, then this implies that intelligence is "supernatural." __________ The Pixie:
Non-intelligent systems repeatedly, but predictably, create large regions of low entropy; as the planet turns, whole continents cool down, losing entropy.
I agree, and have never claimed otherwise. There are tons of self-ordering phenomena in nature. You left out a few adjectives that make all the difference: "unpredictably," "indefinitely," "of arbitrary character." You are slaying strawmen. __________ kairosfocus:
As touching the more basic point, intelligence does not violate entropy, it intentionally creates low entropy systems.
Thanks. j
Patrick: Insomnia patrol. Thanks, appreciated. OFF-TOPIC: Congratulations. The advice I got in prep for mine was to remember this is my bride's big day, so once she has said yes, I need to say yes until the parson pronounces us man and wife. Subsequent to that, 17 + years ago now, I have found that it is often wise to reserve no for emergency use only! (But then I am a very blessed and very happy man. Not least, I get the chance to improve my image by being seen in good company every day!) Have a great wedding day, and may God grant you a blessed marriage. GEM of TKI PS: Pixie -- I forgot to mention, TMLO was favourably reviewed by Prof Robert Shapiro, the famous OOL researcher and chemist. [His recent Sci Am article will well repay a read.] kairosfocus
kairosfocus, Sorry about the delay...I've been busy preparing for my wedding so I don't have the time to clear the moderation queue as often. Patrick
H'mm: I have put up a response on points, but it is in the filter. I hope that is not because I took time to discuss 8.4 - 5 using the equations. [Maybe there was a point to Pixie's spelling out . . .] Since that is the most serious claim, I note that in the step in question, dH is constant in the equations; all TBO have done is to split up the increment in entropy into thermal and configurational parts. The chemistry per se does not distinguish a random from a biofunctional polymer, and the cell uses an algorithmic system to create the latter; but as S is a state function such a split is equivalent to the unsplit form, and analytically helpful. It will help to realise that Thaxton is a PhD chemist and Bradley a PhD polymer scientist specialising in fracture mechanics, which is of course riddled with thermodynamic considerations. GEM of TKI kairosfocus
H'mm: The thread is fairly active. I will note on several points:

1] J: re "Entropy accountancy". I stand corrected on terminology, having misread your term. As touching the more basic point, intelligence does not violate entropy, it intentionally creates low entropy systems. (In effect, we are dealing with specialised energy converters under algorithmic or direct intelligent control.) And, examples are quite commonly encountered. E.g. building a house to a plan, or a flyable jumbo jet -- compare the odds of such happening [spontaneously!] by a tornado passing through a junkyard.

2] Pixie: I am going to ignore the probabilistic arguments that are not based on the second law . . . . Of course, as you later note, the 2nd law is at micro level, founded on thermodynamic probabilities. That is where the basic problem comes in, in your arguments on abiogenesis -- the probabilities and equilibria likely to result are such that in the gamut of a planet with a prebiotic soup, the relevant macro-molecules simply will not form in anything like the concentration or proximity to achieve either RNA world or metabolism first scenarios.

3] On "coupling": Obfuscation of what is plain enough. Have a look at the isolated system with sub systems again, and observe why B increases its entropy on receiving an increment of heat. As you know, systems that use heat are limited by Carnot, but those that couple energy can exceed that limit.

4] Rock cooling down: Distractor; the relevant bodies -- as discussed -- are energy receivers or converters.

5] DNA chains: You skip over the key difference between a random DNA polymer and a bio-informational one. Both are complex if long enough, but one is functionally specified, the other is not. To move from the random state to the functional one plainly requires work, and shifts the entropy as there is now a quite discernibly different macrostate -- just think about the effect of random changes in DNA in living systems. And, how you get to that state starting with prebiotic conditions and through chance + necessity only is precisely the heart of my point, Dr Sewell's point and for that matter TBO's.

6] Spontaneous: By chance + necessity only. A refrigerator forces export of heat from a colder body to a hotter one, but that is not at all a spontaneous process or system.

7] "Error" in TBO Ch 8, Eqns 8.4b to 8.5: What are you talking about? All that happens in TMLO at that point [NB: I am looking at my paper copy, with my annotation "where dG/T ≤ 0 for spontaneous changes"] is they point out that there is a difference between a random and a specified configuration of the molecules they have in mind:

8.4b: dG = dH - T dS
8.5: dG = dH - T dS_th - T dS_config

In short, the step 8.4b --> 8.5 simply splits up T dS: T dS = T (dS_th + dS_config). That is what I pointed to verbally in my point 5 just above. It also makes sense once we can macroscopically distinguish between the random and specified polymer, which we can. One is biofunctional, the other is [by overwhelming probability] not, and no prizes for spotting which is which. More directly, they do NOTHING to dH, the enthalpy [onlookers: roughly, heat content], and that makes sense there.
Going back to 8.4a: dG = dE + P dV - T dS. So, of course, they substitute dH = dE + P dV. Since we are not looking at significant pressure-volume work, and since the bonding energy in the relevant chains is independent of the order of DNA monomers, and since that more or less holds for proteins at the point of chaining [as opposed to folding!], dH is not changing between 8.4 and 8.5. There is no error, grave or minor, in the step. GEM of TKI kairosfocus
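As a rough numerical illustration of the configurational term under discussion (my own sketch, not TBO's treatment; the chain length N and the alphabet size are illustrative assumptions), the Boltzmann count S = k ln W for a polymer of N monomers drawn from an alphabet of a equally likely units gives:

    import math

    k_B = 1.380649e-23   # J/K, Boltzmann constant
    N = 100              # length of a hypothetical short DNA segment (assumed)
    a = 4                # alphabet size: A, T, G, C

    # Statistical weight of the "any sequence will do" macrostate versus
    # a single specified sequence.
    W_random = a ** N
    W_specified = 1

    dS_config = k_B * (math.log(W_specified) - math.log(W_random))  # = -k_B * N * ln(a)
    print(f"Delta S_config (random -> specified) = {dS_config:.3e} J/K per molecule")

The number itself is tiny in J/K, but the associated factor exp(dS/k_B) = a**(-N) is the improbability the argument turns on; whether such a term belongs inside the thermodynamic S at all is exactly what is being disputed in this thread.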
DaveScot
Thermodynamic principles are applied to far more than just energy. Keep in mind matter and energy are the same thing according to e=mc^2. In general 2LoT applies to gradients of all kinds. Gradients are areas of lowered entropy and 2LoT states that entropy tends to increase in ordered systems (order decreases). It also applies to information gradients.
Sorry, but I do not see how your Wiki reference supports your claim. It seems to just be saying that thermodynamics is important in a wide range of fields. From the start of the entry: Thermodynamics (from the Greek θερμη, therme, meaning "heat" and δυναμις, dunamis, meaning "power") is a branch of physics that studies the effects of changes in temperature, pressure, and volume on physical systems at the macroscopic scale by analyzing the collective motion of their particles using statistics.[1][2] Roughly, heat means "energy in transit" and dynamics relates to "movement"; thus, in essence thermodynamics studies the movement of energy and how energy instills movement.

kairosfocus

You mention and link to chapter eight of The Mystery of Life's Origin (TBO). There is a serious flaw in their argument as they go from equation 8-4b to equation 8-5. Equation 8-4b is the Gibbs' equation:

deltaG = deltaH - T deltaS

A funny thing happens to the Gibbs' equation if you divide through by -T:

-deltaG/T = deltaS - deltaH/T

deltaS is, of course, the entropy change in the system. -deltaH/T is the entropy change in the surroundings (from dS = dQ/T, when T is constant: the heat given out by the system, -deltaH, is the heat taken in by the surroundings). So -deltaG/T is the total entropy change. The second law says that must be positive, so deltaG must therefore be negative. That, in reverse, is the derivation of the Gibbs' function (see here). Thaxton et al have missed that, and it is apparent that they believe deltaH is the "Chemical work" (rather than a measure of the entropy change in the surroundings) and T deltaS is the "Thermal entropy work" (rather than a measure of the entropy change in the system). Gibbs actually had it all covered. He had accounted for the entropy change in the system, and for the entropy change outside the system. It makes no sense to introduce any new terms; what other entropy can there possibly be? Nevertheless, in equation 8-5, Thaxton et al add "Configurational entropy work"! J and kairosfocus
J: That said, however, I don’t maintain that there is nothing special about intelligence with regard to entropy. What intelligence certainly can do is repeatedly and unpredictably create regions of indefinitely large amounts of low entropy, of arbitrary character. Blind/dumb/purposeless processes can’t.
kairosfocus: Precisely.
I am not clear what you two are claiming here. There are no known exceptions to the second law; throwing intelligence into the mix will not change that one bit (supernatural intelligence might, but I assume you are talking about human intelligence). Non-intelligent systems repeatedly, but predictably, create large regions of low entropy; as the planet turns, whole continents cool down, losing entropy. The Pixie
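To put numbers on the Gibbs bookkeeping set out earlier in that comment, here is a minimal Python check; the enthalpy and entropy figures are invented for illustration, not taken from any real reaction:

    import math

    T = 298.0        # K
    dH = -50_000.0   # J/mol, exothermic (assumed value)
    dS_sys = -100.0  # J/(mol K), the system becomes more ordered (assumed value)

    dS_surr = -dH / T             # heat released by the system enters the surroundings
    dS_total = dS_sys + dS_surr   # the second-law quantity
    dG = dH - T * dS_sys          # Gibbs free energy change

    print(f"dS_total = {dS_total:.1f} J/(mol K), dG = {dG:.0f} J/mol")
    print(math.isclose(dG, -T * dS_total))   # dG = -T * dS_total, so dG < 0 exactly when dS_total > 0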
kairosfocus
But the story of Body A is here serving as a rhetorical distractor from the key issue, namely that unless energy is COUPLED into a system that imports it, raw importation of energy will naturally increase its entropy. And, of course the systems relevant to say the OOL are energy importers, not heat exporters, much less work exporters. Dumping raw energy into the prebiotic atmosphere and oceans does not credibly get us to the origins of the functionally specific, highly complex and integrated macromolecules of life. Not in the gamut of the observed universe across its entire lifetime. The probabilistic resources are just plain far too scanty. [And, resort to an unobserved quasi-infinite array of sub-universes simply lets the cat out of that bag.]
I am going to ignore the probabilistic arguments that are not based on the second law. Personally, I am just talking about the second law (and yes, I know that has a probabilistic argument at its root, but that does not imply all probabilistic arguments are connected to the second law). Abiogenesis involves a whole load of chemical reactions. It is my belief that each and every one is accompanied by an overall increase in entropy. For example, the coming together of two amino acids to form a dipeptide will give an increase in entropy under certain conditions (eg where something is happening to the resultant water). We do not know what those reactions were, so, yes, I am taking that on faith, if you like. Frankly, I see the word "coupled" as a rhetorical device meant to make us think of machinery. In what way were A and B "coupled"? Er, they were next to each other. How much design does that take? Body A manages to export heat - losing entropy in the process - while body B manages to import heat. No intelligence required.
Further to this, you will note I was careful to give a specific case of a natural heat engine [a tropical cyclone] and to point out just where the issue relevant to the design inference comes in. Namely, the ORIGIN of such energy coupling and conversion devices as manifest specified complexity, e.g. as in [a] a jet engine in a jumbo jet, or [b] the far more sophisticated DNA -RNA -Ribosome -Enzyme systems in a cell.
Any rock cooling down overnight is an example of entropy decreasing without any need for design or machinery.
Of course, we need to ask how snowflakes form. Answer, the water is a closed but not isolated system, and as latent heat of fusion is extracted, the anisotropy of the molecules lends itself to highly structured, orderly — but precisely not COMPLEX — crystallisation. [This very case is cogently discussed in TMLO, which BTW also makes reference to relevant values from thermodynamics tables and from more specific research on the chemistry of biopolymers and their precursors.] By sharpest contrast, DNA is highly aperiodic, which is how it can store information in a code. How much information is stored in . . . HOHHOHHOHHOH . . .? [I know that’s a 2-D rep of a 3-D pattern, but the point is not materially different.]
It is worth bearing in mind that entropy is about order (or disorder, strictly), not complexity. A simple repeating sequence of bases in a DNA chain has exactly the same thermodynamic entropy as a DNA chain of the same length that prescribes a man.
This simply dodges the issue of getting TO the functional complex information stored and processed in the cellular energy conversion device.
Sure, but we are talking about the second law. The second law says nothing about how you get from one state to another, about machinery, about intelligence. It just says the entropy is higher at the end. There may or may not be these obstacles, but that would be off-topic.
Predictably, we see a pouncing on a minor error, in a context where the material point was already noted and corrected: SPONTANEOUS. (So, I have reason to say that we see here the knocking over of a convenient strawman. The PCs and net are DESIGNED, and the DNA that largely controls the production of the human brain etc evinces all the characteristics that we would at once infer to be designed were not a worldview assertion in the way.)
I am not sure what your point is about "spontaneous". In thermodynamics the combustion of coal is "spontaneous", but you try lighting a coal fire with a match. And again, you miss the fact that the second law says nothing about how you get from one state to another, about machinery, about intelligence. It just says the entropy is higher at the end. The Pixie
kairosfocus, I specifically referred to entropy accountancy, not "energy accountancy." Per the 2nd law, the entropy rate balance for a control volume (CV) is as follows:

[rate of change of entropy within the CV] =
+ [rate of entropy transfer associated with heat transfer across boundary] (1)
+ [rate of entropy transfer associated with mass transfer into the CV] (2)
- [rate of entropy transfer associated with mass transfer out of the CV] (3)
+ [rate of entropy production within the CV due to irreversibilities] (4)

(1) is positive when heat is transferred in, negative when heat is transferred out. (2) and (3) are zero for closed systems (mass is not transferred into or out of a closed system). (4) is always positive. Show me an experiment that demonstrates that intelligence can violate this. (In 100 words or less, please.) j
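For concreteness, here is a toy numeric instance of j's balance in Python; every rate below is an invented placeholder (units of W/K), and the second law's only constraint is that the production term (4) is never negative:

    T_boundary = 300.0      # K, temperature at which heat crosses the boundary (assumed)
    Q_dot_in = 150.0        # W, heat transfer rate into the CV (assumed)

    term1 = Q_dot_in / T_boundary   # (1) entropy transfer with heat
    term2 = 0.20                    # (2) entropy carried in by entering mass (assumed)
    term3 = 0.35                    # (3) entropy carried out by leaving mass (assumed)
    term4 = 0.05                    # (4) entropy production from irreversibilities, never below zero

    dS_dt_cv = term1 + term2 - term3 + term4
    print(f"Rate of change of entropy within the CV: {dS_dt_cv:.3f} W/K")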
H'mm: A follow up or two seems in order, even in the absence of Pixie:

1] DS: Wiki cite. Surprisingly good, though I have in significant measure lost respect for this encyclopedia due to censorship and bias. A pity, as it was a good idea.

2] J: I know of no good reason to think that intelligence (or the development of any of the products thereof) violates the entropy accountancy required by the 2nd Law. First, the law that undertakes "energy accountancy" is the 1st, not the second. The latter is the one that is called "time's arrow" for a reason -- it is temporally asymmetric and so gives a driving constraint on change. Second, strictly the 2nd law does not "forbid" what is improbable -- on the statistical thermodynamics view. It simply lays out the facts on the numbers of microstates that are consistent with given macroscopic state descriptions, e.g. the pressure, volume and temperature of a body of gas. Those facts are that we see an overwhelmingly probable cluster of microstates, which leads us to expect that fluctuations from equilibrium will be small enough to more or less ignore for any sizable body of gas, for instance. (A directly related point, elaborated by Einstein in 1905 -- this is why we basically don't see Brownian motion for sufficiently large particles in a fluid. The transition zone for BM is about a micron if memory serves.) The relevant issue -- as has been pointed out already, and is in Dr Sewell's main article -- is that highly informational configurations of matter are precisely tightly constrained and so are vastly improbable relative to less information-rich states. Have a look, e.g. at TBO's thermodynamic analysis of the chance origin of protein and DNA molecules here, to see the point. Their onward analysis [Ch 9] of the plight of OOL research can in that context be viewed as a prediction of experimental results -- one that has been amply borne out over these 23 years since the publication of their work. (Dr Robert Shapiro's recent Sci Am article tellingly updates the point -- though he, too, fails to see that his strictures on the RNA world hypothesis also cut across his own favoured metabolism first model.) So, there IS empirical data that strongly supports Dr Sewell's point. Just, there is a paradigm in the way of seeing the confirmation for what it is.

3] I don't maintain that there is nothing special about intelligence with regard to entropy. What intelligence certainly can do is repeatedly and unpredictably create regions of indefinitely large amounts of low entropy, of arbitrary character. Blind/dumb/purposeless processes can't. Precisely. In the cell [e.g. in the brain, etc] and in the PC and Internet [etc] we see: regions of indefinitely large amounts of low entropy, of arbitrary character. We know empirically that intelligent agency routinely produces such zones of functionally specific, complex integrated information and information processing structures. On thermodynamic probability grounds, we know that blind, spontaneous forces are maximally unlikely to get to such states in the gamut of the observed cosmos across its estimated lifetime. [Dembski's threshold is the spontaneous generation of ~ 500 bits. Both PC technologies and cell-based life systems by far exceed that bound.] So -- reckoning with a couple of infelicities of expression -- apart from a worldview assumption, what leads to the resort to the maximally improbable, instead of the obvious, empirically well-supported inference? Namely:
. . . the [spontaneous] rearrangement of atoms into human brains and computers and the Internet . . . [is vastly improbable to the point [where] within the probabilistic resources of the OBSERVED cosmos, such a spontaneous rearrangement of hydrogen etc into humans and their information-rich artifacts, is maximally unlikely relative to the hypothesis that chance and/or natural regularities are the only driving forces . . .]
Jus wonderin . . . GEM of TKI kairosfocus
Dr. Sewell: First, welcome to UD. I do enjoy your writings. However, I must take issue with you about the 2nd law of thermo. You state:
But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law.
I know of no good reason to think that intelligence (or the development of any of the products thereof) violates the entropy accountancy required by the 2nd Law. Do you know of any experiment that demonstrates this? That said, however, I don't maintain that there is nothing special about intelligence with regard to entropy. What intelligence certainly can do is repeatedly and unpredictably create regions of indefinitely large amounts of low entropy, of arbitrary character. Blind/dumb/purposeless processes can't. j
I'm curious as to what Dr. Dembski thinks on ID. Does he believe there was an actual Adam and Eve, and how does he view the creation of life? Does Dembski support front loading? Curious sfg
Pixie Thermodynamic principles are applied to far more than just energy. Keep in mind matter and energy are the same thing according to e=mc^2. In general 2LoT applies to gradients of all kinds. Gradients are areas of lowered entropy and 2LoT states that entropy tends to increase in ordered systems (order decreases). It also applies to information gradients. Intelligent processes can create information gradients. For instance - absent intelligence the library of congress, a highly ordered set of information, would not exist with only energy from the sun input into the system and no intelligent agency directing how that energy is utilized to decrease entropy. Similarly buildings, roads, space shuttles, and all sorts of other highly ordered things wouldn't exist without intelligent agency. Thermodynamics
The starting point for most thermodynamic considerations are the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work.[4] They also postulate the existence of a quantity named entropy, which can be defined for any system.[5] In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes. With these tools, thermodynamics describes how systems respond to changes in their surroundings. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, and materials science to name a few.[6][7]
DaveScot
More specifically, this 4 page document: Can ANYTHING happen in an open system? Atom
RE Pixie @ 13: You seem to have not read Prof Sewell's article on the Second Law. Since you did not mention Boundary Conditions (or maybe I read your post too fast and missed it) it seems that you just argue what his paper answers. In a sentence, entropy can only decrease in an open system as fast as you export it through the boundary. He shows this mathematically. If you get an increase in order in an open system, it is because you are importing order. I suggest reading the paper if you haven't had a chance. A Second Look at the Second Law Atom
Dr Sewell: Perhaps it is as well I lost my own blog post for the morning -- I will have to reconstruct from memory -- through one of those annoying PC mess-ups. For, on telling Firefox to reconstruct, it updated this thread, which I just happened to have had open in a tab. Lo and behold: Pixie aptly illustrates just the point made above by Profs Johnson and Sewell! FYI, Pixie:

1] The case no 1: || A, Th --> d'Q --> B, Tc || This shows that a hotter body on losing heat will (of course -- the number of available microstates goes down) reduce its entropy, but also that the lost heat goes somewhere, and in so doing the overall system net retains or increases its entropy. But the story of Body A is here serving as a rhetorical distractor from the key issue, namely that unless energy is COUPLED into a system that imports it, raw importation of energy will naturally increase its entropy. And, of course the systems relevant to say the OOL are energy importers, not heat exporters, much less work exporters. Dumping raw energy into the prebiotic atmosphere and oceans does not credibly get us to the origins of the functionally specific, highly complex and integrated macromolecules of life. Not in the gamut of the observed universe across its entire lifetime. The probabilistic resources are just plain far too scanty. [And, resort to an unobserved quasi-infinite array of sub-universes simply lets the cat out of that bag.] Further to this, you will note I was careful to give a specific case of a natural heat engine [a tropical cyclone] and to point out just where the issue relevant to the design inference comes in. Namely, the ORIGIN of such energy coupling and conversion devices as manifest specified complexity, e.g. as in [a] a jet engine in a jumbo jet, or [b] the far more sophisticated DNA - RNA - Ribosome - Enzyme systems in a cell. In all directly known cases, where such functionally specific, complex systems are observed, their origin is intelligent agency. Properly, it is those who argue that in effect a tornado in a junkyard can assemble, fuel and fly a 747, who have something to prove. Something that, after 150 years of trying, remains unproved to date.

2] The second law [i]does[/i] only apply to thermodynamic entropy . . . A glance at say Brillouin, as in my always linked through my handle, will show that entropy is far broader than just heat, through the informational implications of the statistical weight of macrostates. This insight is exploited in TBO's TMLO, Chapter 8, also linked through my own outline discussion. Surely, as one familiar with Gibbs etc, you are aware of the informational approach to statistical thermodynamics that his work was a precursor to, as followed through from Jaynes on to for instance Harry Robertson [look up his Statistical Thermophysics, PHI, 1993], as I cite? [In short there is a whole other school out there tracing to Gibbs . . . You may disagree, but these folks have a serious point.]

3] Molecules of water will spontaneously form snowflakes (even in a system that has no water going in or out), so again "water order" can decrease. Of course, we need to ask how snowflakes form. Answer, the water is a closed but not isolated system, and as latent heat of fusion is extracted, the anisotropy of the molecules lends itself to highly structured, orderly -- but precisely not COMPLEX -- crystallisation.
[This very case is cogently discussed in TMLO, which BTW also makes reference to relevant values from thermodynamics tables and from more specific research on the chemistry of biopolymers and their precursors.] By sharpest contrast, DNA is highly aperiodic, which is how it can store information in a code. How much information is stored in . . . HOHHOHHOHHOH . . .? [I know that's a 2-D rep of a 3-D pattern, but the point is not materially different.]

4] Evolution can happen because while entropy in the system is decreasing, the overall entropy, including the entropy of the surroundings is going up. This simply dodges the issue of getting TO the functional complex information stored and processed in the cellular energy conversion device. We have reason to believe that in the gamut of the observed cosmos, even so little as 500 bits of information will not spontaneously form, relative to a specified configuration. [E.g. Tossing and reading once per second, how long will you have to wait on average to get 500 heads in a set of coins? Ans: far longer than the observed universe has existed to date.]

5] The rearrangement of atoms into human brains and computers and the Internet clearly does not violate any law of nature, recognized or not. Predictably, we see a pouncing on a minor error, in a context where the material point was already noted and corrected: SPONTANEOUS. (So, I have reason to say that we see here the knocking over of a convenient strawman. The PCs and net are DESIGNED, and the DNA that largely controls the production of the human brain etc evinces all the characteristics that we would at once infer to be designed were not a worldview assertion in the way.) Predictable . . . and, sad. GEM of TKI kairosfocus
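As a quick check of the coin-toss arithmetic in point 4 above (the once-per-second reading rate is taken from the comment; the age-of-universe figure is the usual rough 13.7 billion years):

    from math import log10

    p = 0.5 ** 500                      # chance that all 500 coins read heads on one look
    expected_wait_s = 1.0 / p           # mean number of one-second looks before the first success
    age_universe_s = 13.7e9 * 365.25 * 24 * 3600

    print(f"Expected wait: about 10**{log10(expected_wait_s):.0f} seconds")
    print(f"Age of the observed universe: about 10**{log10(age_universe_s):.0f} seconds")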
Thermodynamics usually deals with substances which may be treated as continuous and homogeneous (even though they consist of discrete atoms and molecules), with uniform properties throughout. In contrast, living things are discontinuous -- there are individual cells and then the cells have discontinuous structures inside. Furthermore, living things have a purposeful arrangement of parts. When a homogeneous substance at the macroscopic level consists of gadzillions of the same atoms or molecules, the substance's properties become uniform because the mathematics of the probabilities of large numbers takes over. If, for example, a coin is tossed ten times, five heads (or tails) is the most likely result but the probability of getting exactly this result is small. However, if a coin is tossed a thousand times, the probability that the percentage of heads (or tails) will fall between 45 and 55 is large. Therefore, tossing a coin one thousand times -- as compared to only ten times -- has a more uniform "property" of producing results of approximately one-half heads (or tails). Also, the Second Law of Thermodynamics is often stated in physical or engineering terms that have nothing to do with biology. For example, a popular statement of the SLOT is the Kelvin statement: "It is impossible to construct an engine, operating in a cycle, whose sole effect is receiving heat from a single reservoir and the performance of an equivalent amount of work." I think that the SLOT makes poor arguments either for or against evolution. I think that SLOT arguments against evolution do nothing more than expose critics of evolution to ridicule from the Darwinists. As for a 4th Law of Thermodynamics, I have heard of 4th and higher-numbered laws of thermodynamics, but I think that only the first four laws, the Zeroth through the 3rd, are universally recognized. My blog has an article discussing the application of the SLOT to evolution theory: http://im-from-missouri.blogspot.com/2007/02/2nd-law-of-thermodynamics-and.html Larry Fafarman
kairosfocus
1] Now, the classic e.g. no 1 in studying the 2nd law, is an isolated system having in it two thermally interacting closed systems, A at Thot, B at Tcold. (I am using the more usual physics terminology: closed systems exchange energy, but not matter, with their surroundings. Open ones exchange both, isolated ones exchange neither.)
What is interesting about this example is that it actually proves entropy can go down in a closed system! Just consider A; for the moment the system we are interested in comprises A alone. A is not an isolated system as it exchanges energy with B, but we can still study it as a system. What happens when heat energy flows from A to B? The entropy in A decreases! And it can do that because the total entropy, the entropy in the system (i.e. A) plus the entropy in the surroundings (effectively B, as heat can go nowhere else) is increasing. Even when we consider open systems, the second law demands that the total entropy goes up, but the entropy in the system might still go down. And it is worth noting that the entropy in A decreased without the aid of any machinery. The Pixie
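The bookkeeping behind that example is one line of arithmetic; a minimal Python version, with the temperatures and the heat increment chosen arbitrarily for illustration:

    # Heat d'Q flows from hot body A to cold body B (illustrative numbers only).
    T_hot, T_cold = 400.0, 300.0   # K
    dQ = 100.0                     # J transferred from A to B

    dS_A = -dQ / T_hot             # A loses entropy
    dS_B = +dQ / T_cold            # B gains more entropy than A lost, since T_cold < T_hot
    print(dS_A, dS_B, dS_A + dS_B) # the total is positive, as the second law requires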
Granville Sewell
People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?
I am afraid I am going to have to disagree. The second law [i]does[/i] only apply to thermodynamic entropy (a measure of the distribution of energy, defined by dS = dQ/T). It is, after all, the second law of [i]thermodynamics[/i]. Furthermore, the fact that it is specified for a closed system, that is one in which energy cannot get in or out, is a further hint that the second law is about energy. The second law does not apply to, say, "carbon order". The Earth is a closed system for "carbon order", and yet "carbon order" can and does increase, every time a tree grows. It does not apply to "water order". Molecules of water will spontaneously form snowflakes (even in a system that has no water going in or out), so again "water order" can decrease. That is not to say that the arrangement of matter is unaffected by the second law, but matter is affected because it impacts on how energy is distributed. A snowflake is low in thermodynamic entropy because the individual atoms cannot move much, there is not much scope for randomly distributing energy around a system. When a snowflake forms, the entropy of the water decreases. But this is okay for the second law, because as the snowflake forms, it releases energy into the surroundings, and that increases the entropy of the surroundings. [i]Overall[/i] thermodynamic entropy increases, as it always must. Which brings us to open systems. [i]Formally[/i] the second law applies only to closed systems, but the mathematics of Gibbs' allows us to extend that to open systems. All you have to remember is that the total entropy (the entropy change in the system plus the entropy change in the surroundings, ignoring [i]all[/i] other processes in the universe) must increase. Evolution can happen because while entropy in the system is decreasing, the overall entropy, including the entropy of the surroundings, is going up. The rearrangement of atoms into human brains and computers and the Internet clearly does not violate any law of nature, recognized or not. It happens. I have three kids, all with human brains. Ten years ago, before any of them were conceived, all those atoms were not in those human brains. And trust me, no one violated any laws of nature to get those atoms into those human brains.
Much of the disagreement and confusion about the second law is due to the fact that, unlike most other “laws” of science, there is not widespread consensus on exactly what it says. It was originally applied only to heat conduction, then gradually generalized more and more, until today many physics textbooks apply it to things like rusting computers and shattering glass. The key is that there is one and only one principle behind all applications, and that is the use of probability at the microscopic level to predict macroscopic change. Thus as far as I am concerned, this principle IS the second law.
The second law says entropy increases in a closed system, S[final] > S[initial]. That was what it said when it was originally applied to heat conduction. That is still what it says when applied to chemical and biological processes, to rusting computers and even black holes. And it is all the same entropy, dS = dQ/T. Boltzmann did indeed relate that entropy to "probability at the microscopic level", S = k ln W. But this equality is only justified if W is the number of energy microstates. Thermodynamic entropy is a "property of state". That means that a given substance at a given set of conditions always has the same entropy. You can look up that entropy for common materials at standard conditions (eg here is a table for engineers of the entropy of steam at various temperatures). The entropy of a material can be calculated from dS = dQ/T (given that entropy is zero at absolute zero), or from S = k ln W. You get the same value either way. The Pixie
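A concrete instance of the dS = dQ/T bookkeeping the Pixie appeals to; the latent-heat figure is the standard handbook value of roughly 6.01 kJ/mol for ice:

    # Entropy of fusion of ice at its melting point, from dS = dQ/T at constant T.
    Q_fusion = 6010.0   # J/mol, latent heat of fusion of water (approximate handbook value)
    T_melt = 273.15     # K

    dS_fusion = Q_fusion / T_melt
    print(f"dS_fusion is roughly {dS_fusion:.1f} J/(mol K)")   # about 22 J/(mol K)

Freezing is the same figure with the opposite sign, which is the snowflake bookkeeping described a few comments up: the water loses about 22 J/(mol K), while the surroundings, receiving the latent heat at an equal or lower temperature, gain at least that much.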
Ack! Calculus! Well, someone has to know how to do it. But leaving that aside for the moment, and of necessity ... Whenever someone starts blithering on about the ancient earth being an open system and receiving energy from the sun which could be enough to drive evolutionary processes I want to tell them to go and start living outdoors all the time and see what all that in-pouring energy from the sun will do to them. My poor non-tanning husband spent many days on the beach in his youth. The skin cancers started turning up in his very early 30s - thankfully all BCCs so far. Or, talking about driving, how about next time they go to the petrol station they don't bother with pumping it into the system designed to feed the fuel into a chamber where (by design) its energy is extracted via a series of tiny, controlled explosions which (by design) move pistons whose up and down movements (by design) are translated into forward or backward movement which (by design) you can select for by choosing one of several gears. Instead just pour it all over the car, light it and see how far the car will go. Of course I realise that these sort of simple arguments carry no weight with those who wilfully choose to give them no weight. Some may not believe that there are fairies at the bottom of the garden but they all believe in magic even if they do call it abiogenesis followed by macroevolution. Janice
H'mm: Given the way minor points are often abused by evolutionary materialism advocates through obfuscatory debate tactics, to divert attention from the material point, I should note that a slight adjustment on Dr Sewell's quote may be helpful:
. . . the [spontaneous] rearrangement of atoms into human brains and computers and the Internet . . . [is vastly improbable to the point of constituting a violation of what we reasonably expect under 2 LOT]
Of course, too, the correctly understood sense of "violates" here, is that:
. . . within the probabilistic resources of the OBSERVED cosmos, such a spontaneous rearrangement of hydrogen etc into humans and their information-rich artifacts, is maximally unlikely relative to the hypothesis that chance and/or natural regularities are the only driving forces . . .
The most telling illustration of the force of that point, is the commonly met with idea that there is a quasi-infinite array of sub universes in the wider cosmos as a whole. So, with assumed randomly distributed laws and circumstances, the complex world we see described by Prof Sewell above has in fact happened by chance. This is of course both an ad hoc speculative assertion without basis in observational fact, and one that begs major metaphysical questions and debates. It shoots itself in the foot by what such a resort tot he quasi-infinite adn unobserved implies: the relevant probabilities are so small in a universe of up to 10^80 or so atoms and up to say 13.7 BY or so, that it is implausible to expect that the cosmos we see originated by chance + necessity only within that gamut. The now rising onward objection that such estimates of probability are incalculable or unproven or useless, fails too. For instance, consider that the odds of say the value of the Cosmological Constant [~ energy density of free space, the "yeast" that makes space itself expand] falling within a life-permitting range are sometines estimated as 1 in 10^53, based on the physics of what ranges it could have in the raw, and what range it can reasonably have that is life-permitting. The dismissal of this estimate or the like, rests on selectively ignoring or dismissing the basic facts, principles and factors in how probabilities (and related quantities) are calculated. That selective hyperskepticism starts with how we estimate the odds on throwing a 6 on a die as 1 in 6. That is, through the Laplacian principle of indifference and a comparison of the result in question to the range of possible results. For, as we have no reason to prefer any one face so of 6, the odds are 1 in 6 for a specified face. I further note that probabilities and associated expectations are routinely and reasonably estimated, and in effect indicate/ model (sometimes quantitatively) the intuitive idea of the rational degree of confidence we can place in something occurring or not occurring "by chance." They are equally routinely used in important decision-making situations. I trust these notes will be helpful in addressing some of the usual diversionary talking points. Ah Gone . . . GEM of TKI kairosfocus
Oops: Forgot that a LT sign opens a tag, so part of point 3 below was swallowed. It should read: 3] "Tc is LESS THAN Th . . ." kairosfocus
Professor Sewell: I am happy to see your post; not least because the 2nd law was my own point of entry into the ID discussion. (Onlookers, cf Appendix I in my always linked through my handle.) I also note your quote from Prof Johnson: “I long ago gave up the hope of ever getting scientists to talk rationally about the 2nd law instead of their giving the cliched emotional and knee-jerk responses. I skip the words ‘2nd law’ and go straight to ‘information’” Your own comment is also highly illustrative, and quite sad:
People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?
Of course, rhetoric is no substitute for sound evaluation of an issue, and so your persistence is deeply appreciated. For, you are right, dead right:

1] Now, the classic e.g. no 1 in studying the 2nd law, is an isolated system having in it two thermally interacting closed systems, A at Thot, B at Tcold. (I am using the more usual physics terminology: closed systems exchange energy, but not matter, with their surroundings. Open ones exchange both, isolated ones exchange neither.)

2] 2nd LOT is then deduced by estimating and summing the entropy shifts: A loses and B gains heat increment d'Q, in such a way that the overall shift in entropy is: dS ≥ [-d'Q/Thot] + [d'Q/Tcold]

3] Because Tc is LESS THAN Th, the sum above is positive: the entropy of the isolated system as a whole rises. In the same way, a system simply absorbing raw heat energy tends to have a RISE in entropy. This is unsurprising because Temperature is a measure of average random kinetic energy per degree of freedom for molecules etc. So: import of randomising energy tends to make more random the internal microstates of a system consistent with the new macrostate indicated by, say, a rise in its temperature. (That is, inter alia, information tends to be lost on heating. This is reflected in the classic Boltzmann expression s = k ln w, w being the number of microstates associated with a given macrostate, which of course strongly rises with temperature. Highly informational states of course tend to be such that w is sharply constrained, i.e. they are low entropy. This is exploited by TBO in their 1984 Mystery of Life's Origin, following Brillouin's tie-in between entropy and information.)

4] Of course, one may then move to case no 2: have B as a heat engine, whereby imported energy from A is partly converted into work on a target body, say C, and partly exhausted to a heat sink, D. In this case, B imports energy but by virtue of coupling it to an energy conversion subsystem, is able to partly transform it into orderly motion [which can in principle be quite complex and algorithmically controlled.]

5] Now, there are natural heat engines, e.g. a hurricane, but that does not subvert the fact that in every case where we see that the energy coupling and converting device in B exhibits specified complexity and we know how it originated directly, it is an artifact of intelligence.

6] Opening up the system to matter as well as energy flows does not alter this fact. [Just reflect on man-made engines, which often import fuel and air, then use combustion to drive the energy conversion cycle and exhaust waste material and heat to the surroundings.]

So, we both know that importing raw energy tends to increase micro-scale disorder, and that CSI-based energy coupling and conversion systems have a known origin in intelligent action. And, while of course if there are sufficient probabilistic resources we can have strange things like the molecules in the room rushing to one end etc, and a tornado can build, fuel and fly a jumbo jet by chance, such is so remote probabilistically that it is maximally more likely that such phenomena and systems originated through intelligent action. The rhetorical resort to the maximally improbable and confusing to explain away the CSI in say the cell considered as an energy coupling and conversion system, thus -- yet again [and pardon my directness] -- shows the intellectual bankruptcy of evolutionary materialism. Keep up the good work GEM of TKI kairosfocus
"...you idiots, unintelligent forces can’t do intelligent things!"
I like this. This is why, after more than half a century of indoctrination in the public schools, with the persistent help of the mainstream media, the vast majority of Americans don't buy blind-watchmaker Darwinism. It reminds me of my favorite ancient Chinese proverb: "Some things are so ridiculous, one needs a Ph.D. to believe them." GilDodgen
Much of the disagreement and confusion about the second law is due to the fact that, unlike most other "laws" of science, there is not widespread consensus on exactly what it says. It was originally applied only to heat conduction, then gradually generalized more and more, until today many physics textbooks apply it to things like rusting computers and shattering glass. The key is that there is one and only one principle behind all applications, and that is the use of probability at the microscopic level to predict macroscopic change. Thus as far as I am concerned, this principle IS the second law. Thus, Scordova, if you agree, as I suspect you do, that it is extremely improbable that natural forces would cause atoms to rearrange themselves into computers on our Earth, even taking into account what is entering into our open system (solar energy), it seems you have to agree that what has happened on our planet violates at least the underlying principle behind the second law. The advantage of the second law argument over an argument using Dembski's 4th law is that, though the latter is completely valid, the former is a much more widely recognized law of science. Physics textbooks practically make the argument for design for you, all you have to do is point out that the laws of probability still apply in open systems, contrary to common belief. In any case, both approaches are just attempts to take what is obvious to the layman ("you idiots, unintelligent forces can't do intelligent things!") and formulate it in a more "scientific" way. Granville Sewell
Granville, Welcome to Uncommon Descent. I am a big fan of your writings. I am, however, reluctant to appeal to the traditional 2nd law as an argument supportive of design inferences. I believe Dembski's 4th law is more appropriate. There has been an ongoing discussion between myself and Professor Beling (a professor of Thermodynamics). See: Is 2nd Law a special case of 4th Law? My central conclusion regarding the 2nd law is taken from Bill's No Free Lunch:
the second law is subject to the Law of Conservation of Information [4th Law] page 172-173, No Free Lunch
and
A magnetic diskette recording random bits versus one recording, say, the text of this book are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they are radically different. Two Scrabble boards with Scrabble pieces covering identical squares are thermodynamically equivalent from the vantage of the second law. Yet from the vantage of the fourth law they can be radically different, one displaying a random arrangement of letters, and the other meaningful words and therefore CSI.
Thus, I'm a bit uncomfortable with 2nd law arguments. Founding father of ID Walter Bradley, in Thermodynamics and the Origin of Life:
the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.
scordova
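To make the diskette contrast in the No Free Lunch quotation above concrete, here is a small Python sketch that uses off-the-shelf compression as a crude stand-in for description length; zlib is only an illustration of the idea, not Dembski's measure:

    import random
    import zlib

    random.seed(0)
    n = 1000
    random_bytes = bytes(random.getrandbits(8) for _ in range(n))   # the "random bits" diskette
    ordered_bytes = b"1" * n                                        # a trivially describable file

    # Both byte strings are the same length, but their compressed
    # (short-description) lengths differ enormously.
    print(len(zlib.compress(random_bytes)), len(zlib.compress(ordered_bytes)))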
Hi Granville, I have some questions for ya. I've never understood how the 2nd Law can apply to biology. The first thing is that we can see how a zygote can become a full grown organism. Are you suggesting that this is breaking the 2nd law and the only reason it can break it is because of this "designed information"? Also, using the mount improbable analogy, it seems this 2nd law argument is being used against the cliff side view of the development of organisms, vs. the gradual slope side of the mountain where smaller events that are more probable are added up over time due to natural selection. So how does the 2nd Law apply to the slope side? For example, let's say you have the letter "A" at the bottom of the slope. It randomly attaches to other letters. Some combinations don't make English words and don't go up the slope. Meanwhile a few words like "at" and "an" go up the slope. The process continues and now words like "ate" and "and" go higher up the slope. Information is being generated by random events, but only because of the selection event. Are you saying that the 2nd law shouldn't allow these small events to happen? OR that the 2nd law doesn't allow them to add up over time? Thanks for your time Fross Fross
Ah, lifeform! I knew that part was questionable (it didn't make sense to me) but translating it as life-form makes total sense. Can you edit my translation with that fix? Atom
Your translation is very good, Atom! Except the very last sentence should be "than a form of life", I think, from the context. Granville Sewell
Here is a translation I made of the article. Caveat emptor: Spanish is NOT my first language and I have had no formal training in it, so the translation may be dodgy in parts. Any improvement is appreciated, but at least those without any Spanish skills can get the gist of it:
Granville Sewell is a professor of mathematics at Texas A&M University. In the second edition of his book The Numerical Solution of Ordinary and Partial Differential Equations (John Wiley & Sons, 2005), Sewell has published an appendix with a strong, solid critique of Darwinism based, naturally, on differential equations and the Second Law of Thermodynamics. Sewell's approach is very interesting: while answering the criticisms made against his previous writings he is, in reality, speaking of the Specified Complexity explained by Dembski in The Design Inference (Cambridge University Press, 1998) and, more recently, in a paper on Specification.

Sewell's critics have said that his argument is one of mere improbability: for example, the result of one series of a thousand coin tosses is as improbable as any of the other 2^1000 - 1 results, yet you will always get one of those results. This is exactly like saying that Dembski's Design Inference is based solely on complexity (the inverse of probability), completely forgetting that the most important characteristic when seeking to infer design is the specification; improbability alone is hardly a guarantee.

Sewell (whether consciously or not I do not know, since he does not mention Dembski in his references) responds with an argument similar to specification: while it is certain that every one of the 2^1000 possible results is highly improbable if the coin is fair (the probability of each particular series is exactly 2^-1000), it is also certain that very few of those results have a short description. For example, it is not the same to obtain a series of randomly alternating heads and tails as to obtain all heads in the thousand tosses. There is something special about results like a thousand heads in a row that allows us to conclude easily that chance is not responsible. A thousand heads is a result that an algorithm can produce very easily; if heads represent 1s and tails 0s, the algorithmic instruction would be something like: "PRINT 1 1000 TIMES". These short descriptions have their origin in Kolmogorov's recursion theory: Kolmogorov noticed that probability by itself could not distinguish results obtained by chance from those that were not, and so he used short descriptions to tell them apart. Of all the possible results of random coin tosses, very few have short description lengths. Dembski had already noted, in his book and in his paper, that results which are both highly improbable and also briefly describable constitute a form of Specified Complexity, the property that allows one to infer intelligent design.

In a previous entry I mentioned an interview with Pablo Ferrari, who is no design theorist. Ferrari ended his interview like this:

[Journalist]: However, there is a probability, however small it may be, that all the air molecules in this room will collect in one corner, or that a cup of water will heat up while freezing... violating the sacred Second Law of Thermodynamics; and therefore, sooner or later it will occur, and sooner or later we will see all the molecules of air concentrate in the corner of a room.

[Ferrari]: From the point of view of the probabilities, yes, but we cannot guarantee that one will live long enough to see it.
For example, for the drunkard to take one hundred successive steps forward [it is a random walk], one would have to wait two to the hundredth power steps, which is billions upon billions of steps; and for the molecules of air one would have to wait trillions upon trillions of times (in reality many more than that, but it sounds bad) the possible age and duration of the universe. So we can rest easy. ...and for a cup of water to heat up while freezing is much, much simpler than a form of life.
Atom
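(A purely illustrative aside, not from Andres' article or from Dembski: the counting point behind "very few results have a short description" can be sketched in a few lines of Python, using ordinary compression as a crude stand-in for description length. The 100-bit cutoff below is an arbitrary choice for illustration only.)

```python
import random
import zlib

# Crude illustration (not from the article): compare how well an "all heads"
# sequence and a typical random sequence of 1000 tosses compress. zlib is only
# a rough proxy for Kolmogorov-style description length, but the gap is telling.
rng = random.Random(0)
all_heads = b"1" * 1000
random_tosses = bytes(rng.choice(b"01") for _ in range(1000))

print("all heads compresses to   ", len(zlib.compress(all_heads)), "bytes")      # a handful of bytes
print("random tosses compress to ", len(zlib.compress(random_tosses)), "bytes")  # far larger

# Counting bound: there are at most 2**k binary descriptions shorter than k bits,
# so at most 2**k of the 2**1000 possible sequences can have such a short description.
n, k = 1000, 100   # n tosses; k is an arbitrary description-length cutoff
print(f"Fraction of sequences describable in under {k} bits: at most 2^-{n - k}")
```

Nothing here settles the argument either way; it only makes concrete the distinction the article draws between "improbable" and "improbable and simply describable".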
At the core of Darwinism is essentially the notion that you really can get something for nothing: free information, free complex machinery, free design, all from chaos and natural law. The second law suggests that you can't, so, obviously, the second law must not apply in the case of biological evolution, and anything can happen in an open system. The logic is simple: we know Darwinian processes can do all this marvelous stuff because life exists, and there is no other materialistic explanation we can think of. Therefore, by definition, the second law must not apply to the origin of living systems and their subsequent diversification and increase in complexity and information content.

By the way, in an open system, isn't machinery required to use the available energy to do useful and creative work? If so, machinery can't come first, because machinery would be required to make that machinery.

This is why I contend that some scientists have gone mad when it comes to Darwinism. They've completely lost the ability to think objectively, and recast the laws of nature at will to conform with Darwinian philosophy. GilDodgen
Makes sense; I always thought your two approaches complemented each other. A short time ago, when we were all having a discussion on the nature of specification, I brought up your definition as the one I preferred. I always assumed you and Dembski were talking about the same thing, using independent formulations. Atom
