Uncommon Descent Serving The Intelligent Design Community

Where is the difference here?

Categories: Intelligent Design

Since my Cornell conference contribution has generated dozens of critical comments on another thread, I feel compelled to respond. I hope this is the last time I ever have to talk about this topic; I'm really tired of it.

Here are two scenarios:

1. A tornado hits a town, turning houses and cars into rubble. Then, another tornado hits, and turns the rubble back into houses and cars.

2. The atoms on a barren planet spontaneously rearrange themselves, with the help of solar energy and under the direction of four unintelligent forces of physics alone, into humans, cars, high-speed computers, libraries full of science texts and encyclopedias, TV sets, airplanes and spaceships. Then, the sun explodes into a supernova, and, with the help of solar energy, all of these things turn back into dust.

It is almost universally agreed in the scientific community that the second stage (but not the first) of scenario 1 would violate the second law of thermodynamics, at least the more general statements of this law (e.g., “In an isolated system, the direction of spontaneous change is from order to disorder”; see footnote 4 in my paper). It is also almost universally agreed that the first stage of scenario 2 does not violate the second law. (Of course, everyone agrees that there is no conflict in the second stage.) Why? What is the difference here?

Every general physics text that discusses evolution and the second law argues that the first stage of scenario 2 does not violate the second law because the Earth is an open system, and entropy can decrease in an open system as long as the decrease is compensated by increases outside the Earth. I gave several examples of this argument in section 1; if you can find a single general physics text anywhere that makes a different argument in claiming that evolution does not violate the second law, let me know which one.
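
One way to make that compensation bookkeeping concrete is a back-of-the-envelope calculation like the following Python sketch. It is only an illustration; every number in it (the quantity of heat, the two temperatures, the size of the local decrease) is an assumed value, not a figure from any of the texts discussed here.

```python
# Minimal sketch of the textbook "compensation" bookkeeping (illustrative numbers only).
Q = 1.0e6          # joules of heat leaving the hot source and absorbed at the sink (assumed)
T_hot = 5800.0     # K, roughly the temperature of the sun's surface
T_cold = 300.0     # K, roughly the temperature of the Earth's surface

dS_source = -Q / T_hot        # entropy change of the hot source (it loses heat)
dS_sink = +Q / T_cold         # entropy change where the heat is absorbed
budget = dS_source + dS_sink  # net entropy produced just by the transfer itself

# In this accounting, a local process with entropy change dS_local is said to be
# consistent with the second law provided the total over the whole (isolated)
# combination stays non-negative.
dS_local = -1000.0            # J/K, a hypothetical local decrease (assumed value)
total = dS_local + budget

print(f"entropy produced by the heat flow: {budget:.1f} J/K")
print(f"total including the local decrease: {total:.1f} J/K "
      f"({'allowed' if total >= 0 else 'forbidden'} by this accounting)")
```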

Well, this same compensation argument can equally well be used to argue that the second tornado in scenario 1 does not violate the second law: the Earth is an open system, tornadoes receive their energy from the sun, and any decrease in entropy due to a tornado that turns rubble into houses and cars is easily compensated by increases outside the Earth. It is difficult to define or measure entropy in scenario 2, but it is equally difficult in scenario 1.

I’ll save you the trouble: there is only one reason why nearly everyone agrees that the second law is violated in scenario 1 but not in scenario 2: there is a widely believed theory of how the evolution of life and of human intelligence happened, while there is no widely believed theory of how a tornado could turn rubble into houses and cars. No other argument can be made for why the second law is not violated in scenario 2 that could not equally well be applied to argue that it is not violated in scenario 1 either.

Well, in this paper, and in every other piece I have written on this topic, including my new Bio-Complexity paper and the video below, I have acknowledged that if you really can explain scenario 2, then it does not violate the basic principle behind the second law. In the conclusions of my Cornell contribution, I wrote:

Of course, one can still argue that the spectacular increase in order seen on Earth is consistent with the underlying principle behind the second law, because what has happened here is not really extremely improbable. One can still argue that once upon a time…a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. One can still argue that, after a long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complex collections of atoms, and eventually something called “intelligence” allowed some of these collections of atoms to design cars and trucks and spaceships and nuclear power plants. One can still argue that it only seems extremely improbable, but really isn’t, that under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into computers and laser printers and the Internet.

Of course, if you can come up with a nice theory on how tornados could turn rubble into houses and cars, you can argue that the second law is not violated in scenario 1 either.

Elizabeth and KeithS, you are welcome to go back to your complaints about what an idiot Sewell is to think that dust spontaneously turning into computers and the Internet might violate “the basic principle behind the second law,” and how this bad paper shows that all of the Cornell contributions were bad, but please first give me another reason, other than the one I acknowledged, why there is a conflict with the second law (or at least the fundamental principle behind the second law) in scenario 1 but not in scenario 2. (Or perhaps you now see no conflict with the second law in scenario 1 either; that is an acceptable answer, but then you are in conflict with the scientific consensus!)

And if you can’t think of another reason, what in my paper do you disagree with? It seems we are in complete agreement!

Comments
The equivocation continues:
You just think evolution is improbable, like every other IDer and creationist.
Nope. We say there isn't any evidence that unguided evolution can account for multi-protein configurations. And that happens to be a fact. We also say there isn't any way to test the claim that unguided evolution can account for the diversity of life observed. That also happens to be a fact. And until you produce evidence of blind and undirected chemical and physical processes actually producing something of note, you cannot show that Granville is wrong.

Joe
July 7, 2013 at 09:21 AM PDT
CS3, You and Granville have fallen into the trap of wanting the second law to do more than it actually does. The second law forbids violations of the second law, no more and no less. Let me give an example of the same error, but in terms of the first law. Suppose a friend of yours claims that gerbils keep poofing into existence in his living room. He is constantly giving gerbils away to his friends as a result of the alleged gerbil influx. You find this wildly implausible, but he is adamant that it really happens. You try to reason him out of his delusion by showing him that gerbils can't possibly materialize out of thin air. One of your arguments is that if gerbils really did poof into existence in his living room, this would be a violation of the first law of thermodynamics, which says that energy can be neither created nor destroyed. Since matter is a form of energy (by Einstein's famous equation), the appearance of a gerbil out of thin air would violate the first law. He tells you he's made careful measurements that show that every time a gerbil appears, the mass of the furniture in the living room decreases by a corresponding amount. In other words, the incarnation of the gerbil is compensated for by a decrease in mass of the living room furniture. You find this absurd and tell him "This compensation argument is bogus. The first law doesn't allow gerbils to poof into existence merely because there is a compensatory loss of mass in the living room furniture!" But if you tell him this, you are wrong. The first law does allow gerbils to poof into existence, because the first law only forbids violations of the first law, no more and no less. As long as the mass of the furniture decreases by the correct amount, there is no violation of the first law. The gerbil-poofing idea is still ridiculous, and you have many reasons to doubt it, but the first law is not one of them, because the first law is not violated. The first law is not obligated to rule out every wildly improbable event in the universe, including gerbil poofing. It only rules out violations of the first law. Likewise with evolution and the second law. You and Granville may (and obviously do) think that evolution is ridiculous, and that people and locomotives and lava lamps can't appear on a formerly barren planet simply because solar energy is streaming in and waste heat is radiating out. But your skepticism has nothing to do with the second law, because the second law is not violated. You just think evolution is improbable, like every other IDer and creationist.keiths
July 7, 2013 at 09:14 AM PDT
keiths When I allowed that "compensation" is valid in the case of thermal entropy, I meant that you can compare thermal entropy values to thermal entropy values, and there is an inequality that must be satisfied, as you did in your example. However, I would not really call this the "compensation argument," because the increase in B is not merely "compensating" for the decrease in A in the sense of being an unrelated event that merely "offsets" it - it is, in this case, a necessary effect of the decrease. When Sewell uses the term "compensate", I believe he is referring only to cases in which the increase is an unrelated event that merely "offsets" an improbable event of another type, according to some global accounting scheme, as it is used in the Styer and Bunn papers. That definition is consistent with what you referred to as his "misinterpretation" of the compensation argument:
Granville misinterprets the compensation argument as saying that anything, no matter how improbable, can happen in a system as long as the above criterion is met. This is obviously wrong
I'm glad we agree that this is obviously wrong. However, can you explain how the methodology used by Styer and Bunn cannot be used to show that "anything, no matter how improbable, can happen in a system as long as the above criterion is met"? Just substitute the probability ratio of, say, a set of a thousand coins going from half heads and half tails to all heads in place of their estimate for the increase in improbability of organisms due to evolution. Plug that into the Boltzmann formula, and compare to the thermal entropy increase. If its magnitude is less, the Second Law is satisfied. It is hard to believe that such a silly argument would even need refuting, but it does, because, as you implied in 170, many seem to think it doesn't really matter how good or bad the argument is, as long as the conclusion is "correct", so no one else (as far as I know) has challenged them on this yet.

CS3
July 7, 2013 at 07:56 AM PDT
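
For onlookers who want to see the arithmetic CS3 describes above, here is a minimal Python sketch of that substitution. The one joule of waste heat used for comparison is an arbitrary assumption chosen only to show the scale mismatch; it is not a figure from Styer or Bunn.

```python
from math import lgamma, log

k_B = 1.380649e-23  # J/K, Boltzmann constant

def ln_choose(n, k):
    """Natural log of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

# "Configurational" entropy change for 1000 coins going from the half-heads macrostate
# (the one with the most microstates) to the single all-heads microstate, plugged into
# S = k_B * ln(number of microstates), as in the accounting described above.
n = 1000
dS_coins = k_B * (log(1) - ln_choose(n, n // 2))   # negative: the coins become "more ordered"

# Thermal entropy produced by dumping one joule of waste heat at room temperature
# (an arbitrary comparison value, assumed for illustration).
Q, T = 1.0, 300.0
dS_thermal = Q / T

print(f"coin-configuration entropy change: {dS_coins:.3e} J/K")
print(f"thermal entropy from 1 J of waste heat at 300 K: {dS_thermal:+.3e} J/K")
print(f"the thermal term is larger by a factor of about {abs(dS_thermal / dS_coins):.1e}")
```
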
Yes, they did. When IDists called them on it, they started changing their tune. Then, Orwellian-style, you and others start re-writing history.
I'm not rewriting anything. I'm simply trying to explain why I think Granville's argument is wrong.
No, it’s not. I’ve read many papers and books myself over the course of my life that attempt to explain how life could be generated even though it is so improbable.
Well, it is not true that the reason people have proposed that there is more to the universe than the part we can see has nothing to do with how improbable life would be if there was only our bit. Some of it has been inspired by the weirdness that we do seem to have a universe that seems arbitrarily suited to render life probable, but that is rather different from the argument that life is improbable even given the parameters of this universe. In any case, it is not the contention of the vast proportion of "evolutionists" that life is necessarily particularly improbable in this universe.
In other words, a snowflake isn’t improbable because you know what causes a snowflake. But a living thing is improbable because you don’t know what causes living things.
No. A snowflake is not improbable because extrapolations of the basic forces and materials involved indicate it is not improbable. Life is improbable because extrapolations of the fundamental forces and materials offer no expectation that they should (or even could) generate a complex, self-reproducing machine.
I think you need to show your work. Also, demonstrate what you mean by "improbable". We know that snowflakes are probable because we have a frequency distribution. We don't have one for life.
If naturalism is true, we know what causes both snowflakes and life – physics. Physics predicts snowflakes; physics does not predict the spontaneous generation of complex, self-replicating machines operating via code and translation procedures.
But this is mere assertion. It is not an argument from physics.
Pardon my bluntness, but it is idiotic to make an equivalence between a snowflake and a living, self-replicating organism. Your service to your ideology is apparently making you say absurd things (DDS).
I didn't make an equivalence, in any sense other than that the low entropy of a snowflake, relative to the same molecules diffused, or of a tornado for that matter, is perfectly explicable under the 2nd Law, and so invoking the 2nd Law to support the claim that life is improbable is no more valid than invoking the 2nd Law to support the (false) claim that tornadoes are improbable. You can't predict a tornado on the basis of fundamental physics, otherwise we'd be able to keep out of their way. Whether a specific tornado forms is unpredictable, yet they do form. Whether life forms on a specific planet might be similarly unpredictable, yet we know it formed on at least one. Nothing in the laws of physics, including the 2nd law of thermodynamics, tells us that life is improbable, only that if it is improbable it is improbable. Which isn't terribly helpful.

Elizabeth B Liddle
July 7, 2013 at 07:29 AM PDT
God forbid (so to speak) that someone talk about entropy, evolution and open systems when discussing a paper entitled Entropy, Evolution and Open Systems.
Apparently when you say "OK" you don't mean "OK". Why do I find that not surprising? God forbid that someone talk about entropy and open systems on a barren lifeless planet.

cantor
July 7, 2013 at 06:39 AM PDT
No. Darwinists have not “tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible”.
Yes, they did. When IDists called them on it, they started changing their tune. Then, Orwellian-style, you and others start re-writing history.
This is a bit of a myth.
No, it's not. I've read many papers and books myself over the course of my life that attempt to explain how life could be generated even though it is so improbable.
In other words, a snowflake isn’t improbable because you know what causes a snowflake. But a living thing is improbable because you don’t know what causes living things.
No. A snowflake is not improbable because extrapolations of the basic forces and materials involved indicate it is not improbable. Life is improbable because extrapolations of the fundamental forces and materials offer no expectation that they should (or even could) generate a complex, self-reproducing machine. If naturalism is true, we know what causes both snowflakes and life - physics. Physics predicts snowflakes; physics does not predict the spontaneous generation of complex, self-replicating machines operating via code and translation procedures. Pardon my bluntness, but it is idiotic to make an equivalence between a snowflake and a living, self-replicating organism. Your service to your ideology is apparently making you say absurd things (DDS).William J Murray
July 7, 2013 at 06:14 AM PDT
Onlookers: For the sake of completeness, I will do what I now rarely to, by making a point by point reply to KS at 186. I note that the above exchanges are a farrago of red herrings led away to strawmen which are then set alight to distract from the core issues. ___________ >> When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.>> 1 --> The core of GS's argument is that forces that on balance of probabilities lead to diffusion and the like, are maximally implausible as the source of constructive work. Citing his paper, A Second Look at the Second Law, again (as done at 190 above):
Note that (2) [a flow gradient expression] simply says that heat ?ows from hot to cold regions—because the laws of probability favor a more uniform distribution of heat energy . . . . From [an eqn that entails that in such a system, d'S >/= 0] (5) it follows that in an isolated, closed, system, where there is no heat ?ux through the boundary d’S >/= 0. Hence, in a closed system, entropy can never decrease. Since thermal entropy measures randomness (disorder) in the distribution of heat, its opposite (negative) can be referred to as ”thermal order”, and we can say that the thermal order can never increase in a closed system. Furthermore, there is really nothing special about ”thermal” entropy. We can define another entropy, and another order, in exactly the same way, to measure randomness in the distribution of any other substance that diffuses, for example, we can let U(x,y,z,t) represent the concentration of carbon diffusing in a solid (Q is just U now), and through an identical analysis show that the ”carbon order” thus defined cannot increase in a closed system. It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable states . . .
2 --> At no point have objectors provided an example of FSCO/I arising spontaneously by such dispersive forces, through their providing constructive work. This is also the implicit point in Hoyle's example of a tornado passing through a junkyard and lo and behold a jumbo jet emerges, NOT. By contrast, the work involving a probably comparable amount of energy or even less, by men, machines and equipment working to a constructive plan, will build a jumbo jet. That is, we must recognise the difference between forces that blindly and freely move things around in accord with statistical patterns and those that move them according to a plan. 3 --> This issue lies as well at the heart of the recent challenge to explain how a box of 500 coins, all H, came to be. KS, EL, and others of their ilk have been adamant to refuse the best explanation [constructive work] and to refuse as well to recognise that due to the differing statistical weights of clusters of microstates, such a 500H state arising by random tossing is practically unobservable on the gamut of the solar system. 4 --> Notice, also, GS has put the issue of forces of diffusion at the pivot of his case, and indeed that at once allows us to see that when he speaks of X-entropy, he is speaking of the sort of thing that makes C diffuse even in the solid state. >>It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.>> 5 --> Here KS revisits Clausius' first example, which appears in my always linked note and which is clipped in the FYI-FTR; he is about to refuse to look seriously at what is happening at micro level when d'Q of heat moves from A at a higher temp to B at a lower. In short he leads away via a red herring and erects and burns a strawman. Let me lay out the summary that was there for literally years in App 1 of my note:
1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce's relatively serious and balanced assessment, from a panspermia advocate. Sewell's remarks here are also worth reading. So is Sarfati's discussion of Dawkins' Mt Improbable.) 2] But open systems can increase their order: This is the "standard" dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is: a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system -- one that allows neither energy nor matter to flow in or out -- is instructive, given the "closed" subsystems [i.e. allowing energy to pass in or out] in it: Isol System: | |(A, at Thot) --> d'Q, heat --> (B, at T cold)| | b] Now, we introduce entropy change dS >/= d'Q/T . . . "Eqn" A.1 c] So, dSa >/= -d'Q/Th, and dSb >/= +d'Q/Tc, where Th > Tc d] That is, for system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . "Eqn" A.2 e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY. f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right: ================================= ||::::::::::::::::::::::::::::::::::::::::::|| ||::::::::::::::::::::::::::::::::::::::::::||=== ||::::::::::::::::::::::::::::::::::::::::::|| ================================= 1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid, think of them as scattered more or less hap-hazardly as would happen in a cake). 2: Now, let the marbles all be at rest to begin with. 3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons]. 4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right 5:As the glancing angles on collision will vary at random, the marbles hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely. 6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve. 7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. 
the same would happen if a small charge of explosive were set off in the middle of the box, pushing our the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue . . . . for the injection of energy to instead do predictably and consistently do something useful, it needs to be coupled to an energy conversion device. g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but -- from the above -- negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B' below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod's "chance + necessity" [cf also Plato's remarks] only.) h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines -- and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including "do-always" looping!)]: | | (A, heat source: Th): d'Qi --> (B', heat engine, Te): --> d'W [work done on say D] + d'Qo --> (C, sink at Tc) | | i] A's entropy: dSa >/= - d'Qi/Th j] C's entropy: dSc >/= + d'Qo/Tc k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law -- unsurprisingly, given the studies on steam engines that lie at its roots -- holds for heat engines. [--> Notice, I have addressed the compensation issue all along.] l] However for B since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy having imported d'Qi. [The problem is to explain the origin of the heat engine -- or more generally, energy converter -- that does this, if it exhibits FSCI.] [--> Notice the pivotal question being ducked in the context of the origin of cell based life, through red herrings and strawmen.] m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI]. n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. 
This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI -- on the direct import of the many cases where we do directly know the causal story of FSCI -- becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in CH 7, to that more focussed discussion in ch's 8 - 9:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers of to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme namely, the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, "Thermodynamics and the Origin of Life," in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB:as the journal's online issues will show, this is not necessarily a "friendly audience."]
[[--> in short this question was actually addressed in the very first design theory work, TMLO, in 1984, so all along the arguments we are here addressing yet again are red herrings led away to strawmen soaked in ad hominems as we will see again below, and set alight to cloud, confuse, poison and polarise the atmosphere.]
>>Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.>> 6 --> KS is setting up his red herring and strawman version. >>All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.>> 7 --> Having dodged the pivotal issues of dispersive forces like diffusion being asked to carry out constructive work resulting in organisation of something that is rich in FSCO/I, KS gives an irrelevant example, of order emerging by mechanical necessity acting int eh context of heat outflow, where the polar molecules of water will form ice crystals on being cooled enough. This very example is specifically addressed in TMLO, and I have already spoken to this and similar cases. 8 --> By contrast, hear honest and serious remarks by Wicken and Orgel (which since 2010 have sat in the beginning of section D, IOSE intro-summary page, so KS either knows of or should know of this):
WICKEN, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)] >> ORGEL, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]>>
9 --> KS, of course, has presented to us a case of crystallisation, as though it is an answer to the matter at stake. At this point, given his obvious situation as a highly informed person, this is willful perpetuation of a misrepresentation, which has a short sharp, blunt three-letter name that begins with L. >>Note: 1. The entropy of A decreases when the water freezes. 2. The second law tells us that the entropy of C cannot decrease. 3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B. 4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.>> 10 --> In rthese notes, KS ducks his intellectual responsibility to address just what happens with B so that the overall enrropy is increased. Namely, that precisely because of the rise in accessible energy, the number of ways for energy and mass to be arranged at micro level, so far increases as to exceed the loss in number of ways of A. 11 --> And, the exact same diffusive and dissipative forces already described strongly push B towards the clusters of states with the highest statistical weights, and away from those clusters with very low statistical weights. So, by importing energy B's entropy increases and by enough that the net result is at minimum to have entropy of the system constant. 12 --> It is the statistical reasoning linked to this, and the onward link tot he information involved, thence the information involved in functionally specific complex organisation, thence the need for constructive work rather than expecting diffusion and the like to do spontaneously this for "free" that are pivotal to the case that KS has here distracted form and misrepresented. (Cf my microjets in a vat thought exercise case study here, which has been around since when, was it 2008 or so? And even if KS was ignorant of that, he had the real import of Hoyle's argument, a contrast between what chaotic forces do and what planned constructive work does, as well as access to the points made by Orgel and Wicken. Likewise we can compare what Shapiro and Orgel said in their exchange on OOL. Golf balls do not in our experience play themselves around golf courses by lucky clusters of natural forces. If pigs could fly scenarios are nonsense. And the rock avalanche spontaneously forms Welcome to Walses at the border of Wales example has been around for a long time too. All of these are highlighting the difference in capability between blind chance and mechanical necessity and intelligently directed constructive work.) >>The second law demands that compensation must happen. If you deny compensation, you deny the second law.>> 13 --> A sad excuse to play at red herrings and strawmen. >>Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!>> 14 --> here comes the smoke of burning, ad hominem soaked strawmen , now. >>It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper. >> 15 --> throwing on more ad hominems to the fire to make even more polarisation, clouding of issues and poisoning of the atmosphere. ____________ KS's grade is F- - (F double minus), for willful failure to do duties of care to accuracy, substantial issues at stake, and fairness. 
The grade of those who tried to pile on, building on his presumed expertise and the assumption that design thinkers are ignorant, stupid, insane or wicked, is similarly F - - -, as they should know better. But onlookers, I hardly expect such to listen or accept correction, on long and sad track record. Years from now they will still be presenting these fallacies as though they were correct answers to the pivotal problem of accounting for constructive work to erect FSCO/I rich systems. How do I know this? Easy: this is what has been going on since at least the 1990's on this topic. And similar loaded strawman misrepresentation games are at the heart of the Darwinist objections to design theory, as the UD weak argument correctives highlight. KF PS: I should again note as well, just for completeness, the summary on the nature of entropy as linked to information, which is, of all places, nicely put in Wiki's article on informational entropy:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
kairosfocus
July 7, 2013 at 05:03 AM PDT
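
As a side note on the 500-coin example that recurs in this exchange, here is a minimal Python sketch of why an all-heads outcome from random tossing is treated as practically unobservable. The toss rate and duration are deliberately generous assumptions made up for illustration, not figures taken from any commenter.

```python
from math import log10

# Probability that a single toss of 500 fair coins lands all heads.
p_all_heads = 0.5 ** 500          # about 3e-151, still representable as a float
print(f"P(all 500 heads in one toss) ~ 10^{log10(p_all_heads):.1f}")

# Assume (generously, and purely for illustration) 1e45 tosses per second
# sustained for 1e17 seconds (roughly the age of the universe in seconds).
tosses = 1e45 * 1e17
expected_hits = tosses * p_all_heads
print(f"expected all-heads outcomes after {tosses:.0e} tosses: {expected_hits:.1e}")
```
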
Darwinists tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible – but, the caveat has always been that Earth is an open system. Sewell is pointing out that unless the kind of order specific to what is being explained is being imported into the system from outside of it, the presence of life is as unlikely in closed system as in an open one.
No. Darwinists have not "tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible". Let's consider a closed earth - one so far from the sun that we can ignore energy input from the sun, but which is still fairly warm because its core is still molten. Let us further suppose that it is covered in water, and that there are hot spots on the sea bed where the molten core is closer to the crust. Do you agree that there will be convection currents in the sea? And, if you do, do you agree that local decreases in entropy are perfectly possible, even though there is no input of energy from anything external to the earth?
Already explained. How likely certain configurations of matter are under 2LoT (in the distributive sense) is determined by physical law; snowflakes, under physical law, are not unlikely configurations. Neither are spheroid celestial objects. Nor are rainbows. “What about a snowflake?” is not a viable rebuttal to the argument at hand. There is no known effect of natural laws acting on matter that would necessarily or likely produce a highly complex, functioning self-replicating machine.
In other words, a snowflake isn't improbable because you know what causes a snowflake. But a living thing is improbable because you don't know what causes living things. And if that isn't what you are saying, what are you saying? Both have are arrangements of matter lower entropy than the same elements, uniformly distributed. Why should one be any less "probable" than the other, "under the 2LoT"?
Most agree – even those outside of the ID community – that such an event is highly unlikely, to the point that some have theorized “infinite universes” to expand the distribution of chance matter configurations to accommodate the spontaneous generation of life from inanimate matter.
This is a bit of a myth. No, most people don't agree that life is "highly unlikely" in this universe. Many think that, given that we are here, there are probably many other planets on which life has also emerged. Hence SETI, much beloved of IDers :) There are many reasons why other universes in addition to our observable one have been postulated, not least being the fact that we cannot see beyond the distance light can have travelled given the Big Bang, the speed of light, and the rate of expansion of space. We are at the dead centre of the observable universe. I don't think anyone thinks that is for any reason other than that we can necessarily only observe things a certain distance away, in all directions.
I suggest that even Sewell isn’t saying that life actually violates the 2LoT, but rather that it is not sufficiently explained under 2LoT unless a specific kind of order is being imported from outside the system that makes life more likely that is not currently theorized.
I think that second thing is exactly what he is saying. Where he's wrong is to think that is a problem. Order doesn't have to be "imported" from outside earth. It simply needs to be "imported" from outside a cell! In the sense that work needs to be done on a diffused system in order to undiffuse it, and that work needs to be done by a system outside the diffused system, as a result of which the outside system will increase in entropy. It's perfectly possible that the first proto-life-forms got their decreased entropy from convection currents, and it's even possible that those convection currents got their own entropy reduction (from the state of still water) not from the sun, but from hotspots within the earth itself. But nobody is proposing that they didn't get it from some system external to the life form itself, or (apart from IDers, ironically) that the external system they got it from didn't increase in entropy as a result of doing work on it. If the parts that make up a life form got their decreased entropy from, inter alia, a convection current, nobody is suggesting that the convection current didn't diffuse a bit as a result. IDers, however, do (or some do) seem to be implying that, i.e. that designers can organise matter into cool non-uniform arrangements without themselves increasing in entropy.
My view is that it is information pertinent to the ordering of matter into life that is specifically being imported from outside of the system – even to this day.
What definition of "information" are you using in this sentence?

Elizabeth B Liddle
July 7, 2013 at 04:33 AM PDT
In that case what does “under the 2LoT” add to the argument?
Darwinists tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible - but, the caveat has always been that Earth is an open system. Sewell is pointing out that unless the kind of order specific to what is being explained is being imported into the system from outside of it, the presence of life is as unlikely in closed system as in an open one.
Yet sun-warmed sea can and does produce tornadoes from still air, and exquisite hexagonal crystals from water vapour. How is this possible?
Already explained. How likely certain configurations of matter are under 2LoT (in the distributive sense) is determined by physical law; snowflakes, under physical law, are not unlikely configurations. Neither are spheroid celestial objects. Nor are rainbows. "What about a snowflake?" is not a viable rebuttal to the argument at hand. There is no known effect of natural laws acting on matter that would necessarily or likely produce a highly complex, functioning self-replicating machine. Most agree - even those outside of the ID community - that such an event is highly unlikely, to the point that some have theorized "infinite universes" to expand the distribution of chance matter configurations to accommodate the spontaneous generation of life from inanimate matter. I suggest that even Sewell isn't saying that life actually violates the 2LoT, but rather that it is not sufficiently explained under 2LoT unless a specific kind of order is being imported from outside the system that makes life more likely that is not currently theorized. My view is that it is information pertinent to the ordering of matter into life that is specifically being imported from outside of the system - even to this day.

William J Murray
July 7, 2013 at 03:56 AM PDT
In that case what does "under the 2LoT" add to the argument? And what have tornadoes in junkyards got to do with anything? Nobody is suggesting that tornadoes can produce Boeing 747s from junkyards. Yet sun-warmed sea can and does produce tornadoes from still air, and exquisite hexagonal crystals from water vapour. How is this possible?

Elizabeth B Liddle
July 7, 2013 at 03:17 AM PDT
Can I ask if anyone on this thread still thinks that Granville was correct, when he said, in his Mathematical Intelligencer paper:
I think it's poorly worded. I would say that Darwinists assign to life the capacity for matter to do things under the 2LoT they would never accept anywhere else, open system or not, such as a tornado running through a junkyard and constructing a functioning 747. They only make that argument in this case because their ideology depends on it. "It's not impossible under the 2LoT" is not a significant argument.

William J Murray
July 7, 2013 at 02:45 AM PDT
Lastly (for now!), above I wrote:
Here are two arrangements of Hs and Ts: TTTHTTHHTT TTTHTTHHTT Which is more “probable”? “But they are both the same!” you say! But I will now tell you that I generated the first by using the formula, =IF(RAND()> 0.5,”H”,”T”) in Excel and pasting the results into the post, and I generated the second by carefully copying the first into a new line. So the probability of getting the first pattern is 1/(2^10), whereas the probability of getting the second is near 1 (if I’d used cut and paste, it would have been 1, but I relied on hand-typing it). In other words, the probability of an arrangement is not discernable from looking at the arrangement, but by computing the probability of that arrangement, given a generative process.
Does anyone disagree that in order to tell which of my two arrangements of Hs and Ts was more "probable", we need to know the process that generated them? And if not, does anyone disagree that "probability" is not the property of an arrangement alone, but of the arrangement, given the process that is postulated to have generated it?

Elizabeth B Liddle
July 7, 2013 at 02:42 AM PDT
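
To illustrate the point made in the comment above, here is a minimal Python sketch showing that the same arrangement gets very different probabilities under two different generative processes. The ten-character target mirrors the quoted spreadsheet example, and the simulation size is an arbitrary assumption.

```python
import random

# Two ways of producing the same ten-character arrangement of Hs and Ts.
target = "TTTHTTHHTT"

# Process 1: ten independent fair coin flips.
p_random_flips = 0.5 ** len(target)      # 1/1024

# Process 2: error-free copying of an arrangement that already exists.
p_copying = 1.0

print(f"P({target} | ten fair flips) = {p_random_flips:.6f}")
print(f"P({target} | copying)        = {p_copying:.6f}")

# Optional sanity check of process 1 by simulation (trial count is arbitrary).
random.seed(0)
trials = 1_000_000
hits = sum(
    "".join(random.choice("HT") for _ in range(len(target))) == target
    for _ in range(trials)
)
print(f"empirical frequency over {trials} random strings: {hits / trials:.6f}")
```
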
Cantor - sorry, missed this formulation of your question: Multiple Choice. Please select the 1 of the 4 statements below that most closely represents your views:
1) the unguided purposeless action of the four known physical laws cannot be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, because that would violate the 2nd law 2) the unguided purposeless action of the four known physical laws is the sole explanation for how this planet was transformed from barren and lifeless to what we see today, and that does not violate the 2nd law 3) it is impossible to make a definitive quantitative argument either way 4) I do not know if it is possible to make a definitive quantitative argument either way
3 comes closest, but that is simply because my position is the standard scientific one that no conclusion in science is ever definitive - all conclusions are provisional, and all models are incomplete. However, what can be definitive is what a hypothesis consists of, and the evolutionary hypothesis does not require that "natural selection" has "the ability to violate the second law of thermodynamics". Many processes can "cause order to arise from disorder" and no 2nd law violations are involved. Therefore Granville's claim, quoted by me in 220, is incorrect. 2 would be my position were it to be reworded to reflect the conditional nature of any hypothesis, for example: "If the unguided purposeless action of the four known physical laws were to be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, this would not imply a violation of the 2nd law." Granville's claim is that it would, and my position is that that claim is incorrect. Moreover, it is incorrect for exactly the reasons his objectors have given - that as long as local entropy decreases are "compensated for" by entropy increases elsewhere then the 2nd law has not been violated. And "compensated" doesn't just mean that as long as there is increased entropy on Alpha Centauri, we can have spaceships here; it means that a decrease in entropy in a system must be achieved by work done on that system by another system, which necessarily will experience an entropy increase, because that's what happens (see the 1st Law also) when work is done. The reason the sun comes into it is that the sun is the major source of energy on earth, and it is mostly because of the sun that we have energy gradients on earth - the earth is not in equilibrium. But there are other reasons - the earth itself is still cooling, and so we have geothermal energy gradients as well. All these gradients are lowish-entropy systems that can do work on other systems, resulting in local entropy decreases in those other systems, at the cost of increased entropy in the system with the gradient (i.e. a reduction in that gradient). So, to summarise: my position is that Granville's original claim was incorrect, that the rebuttals are essentially correct ("compensation" is the reason that local entropy decreases are possible), and that Granville's counter rebuttal is confused and at best involves a reduction of his argument to a restatement of Dembski's CSI argument, which has nothing to do with thermodynamics at all. The 2nd Law of thermodynamics argument against evolutionary theory should, I suggest, be quietly laid to rest.

Elizabeth B Liddle
July 7, 2013 at 02:34 AM PDT
Can I ask if anyone on this thread still thinks that Granville was correct, when he said, in his Mathematical Intelligencer paper:
to attribute the development of life on Earth to natural selection is to assign to it–and to it alone, of all known natural “forces”–the ability to violate the second law of thermodynamics and to cause order to arise from disorder.
?

Elizabeth B Liddle
July 7, 2013 at 02:13 AM PDT
Cantor:
So are you. And Lizzie is oddly evasive.
Cantor: if you present me with a non-exhaustive list of possible positions, and none represents mine, clearly I cannot sign on to one of the positions you have offered. Let me ask you one question: 1. Does a tornado have less, or more, or the same, entropy, of any sort, than still air? If your answer depends on the kind of entropy, please say which entropies are greater, the same, or less, in a tornado than in still air. (I'd be interested in any answers to this question, which may shed light on where we are disagreeing here.)

Elizabeth B Liddle
July 7, 2013 at 02:01 AM PDT
CS3,
Compensation is valid when discussing only thermal entropy.
Thank you for stating that so unambiguously. KF and Granville, are you listening?
The problem with compensation is when it is applied to two different unrelated entropies... Here is an example of the compensation argument involving two unrelated entropies. This is what Sewell has a problem with.
I know this is hard to believe, CS3, but Granville really does believe that the entire concept of compensation is invalid. KF too. From the paper:
Of course the whole idea of compensation, whether by distant or nearby events, makes no sense logically: an extremely improbable event is not rendered less improbable simply by the occurrence of “compensating” events elsewhere. [emphasis added]
Not only is he wrong to claim that the "whole idea" of compensation is illogical; he also presents a strawman version of the compensation argument, as I pointed out earlier in the thread:
For anyone who still doesn’t get it, here is an explanation of Granville’s biggest error. The compensation argument says that entropy can decrease in a system as long as there is a sufficiently large net export of entropy from the system. Granville misinterpets the compensation argument as saying that anything, no matter how improbable, can happen in a system as long as the above criterion is met. This is obviously wrong, so Granville concludes that the compensation argument is invalid. In reality, only his interpretation of the compensation argument is invalid. The compensation argument itself is perfectly valid. The compensation argument shows that evolution doesn’t violate the second law. It does not say whether evolution happened; that is a different argument. Granville confuses the two issues because of his misunderstanding of the compensation argument. Since the second law isn’t violated, it has no further relevance. Granville is skeptical of evolution, but his skepticism has nothing to do with the second law. He is just like every other IDer and creationist: an evolution skeptic. You can see why this is a huge disappointment to him. Imagine if he had actually succeeded in showing that evolution violated a fundamental law of nature!
keiths
July 6, 2013 at 10:56 PM PDT
1. The entropy of A decreases when the water freezes. 2. The second law tells us that the entropy of C cannot decrease. 3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B. 4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.
Compensation is valid when discussing only thermal entropy. The problem with compensation is when it is applied to two different unrelated entropies. I know your position is that thermal entropy is all that is relevant to the Second Law, so you might well never do such a thing. But see my comment 169, and you will see that Asimov, Styer, and Bunn clearly do just that. In your example, has something entered or left system A that makes the formation of ice in system A not improbable? Yes, heat. So there is no problem. Here is an example of the compensation argument involving two unrelated entropies. This is what Sewell has a problem with. 1. Spaceships and the Internet form in System A. 2. The thermal entropy of System B has increased. 3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B. 4. Somehow we convert the improbability of spaceships and the Internet to units of thermal entropy and make sure it is "compensated for" by the increase of thermal entropy in B.CS3
July 6, 2013 at 10:06 PM PDT
KF, You claim my argument is wrong. If that's true, then you should be able to identify the precise step or steps that are mistaken and explain why they are mistaken. The steps are conveniently numbered in my argument, which is reproduced below for your convenience. Please refer to those numbers in your attempted rebuttal. Or you can try to bluff your way out of your predicament. Either way, the onlookers are watching. Good luck.
CS3, I’ve mentioned this a couple of times already but people (including you) haven’t picked up on it, so let me try again. When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself. It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C. Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B. All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice. Note: 1. The entropy of A decreases when the water freezes. 2. The second law tells us that the entropy of C cannot decrease. 3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B. 4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law. The second law demands that compensation must happen. If you deny compensation, you deny the second law. Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law! It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper.
keiths
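For concreteness, here is a minimal numeric check of the freezing example in the numbered argument above; the specific figures (1 kg of water, surroundings at 263 K) are assumptions chosen only for illustration:

L_FUSION = 334_000.0   # J/kg, latent heat of fusion of water (approx.)
T_A = 273.15           # K, water freezing in system A
T_B = 263.15           # K, colder surroundings (system B, assumed)

Q = 1.0 * L_FUSION     # heat released by 1 kg of freezing water, in joules
dS_A = -Q / T_A        # entropy of A decreases (step 1)
dS_B = +Q / T_B        # entropy of B increases by more (step 4)
dS_C = dS_A + dS_B     # entropy change of the isolated system C

print(round(dS_A, 1), round(dS_B, 1), round(dS_C, 1))
# roughly -1222.8, +1269.2, +46.5 J/K: B compensates, and the second law holds for C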
July 6, 2013 at 10:03 PM PDT
KS: Your argument was anticipated [actually, from Clausius on], and the fact that you presented it as though it were a refutation only shows that you are pushing strawman talking points. What has to be explained and justified empirically is the alleged constructive, organising work that produces complex entities through diffusion or the like, not, say, the freezing of an ice cube. Onlookers, cf. the FYI-FTR, with a focus on the case of heat transfer from A to B and onwards. KF
kairosfocus
July 6, 2013 at 09:51 PM PDT
BTW: Your claimed counterexample fails to address the Clausius example fully, as has been laid out in the longstanding note and the recent FYI-FTR. What you are ignoring is the force of the relevant statistics that drive the 2nd law: when A transfers d'Q to B at a lower temperature, the loss of possible ways to arrange mass and energy at micro levels for A is far less than the rise of the same in B, which is how the 2nd law was defined. And as usual, you are ignoring the result that importation of raw energy INCREASES entropy, as I took time to discuss in detail in the FYI-FTR, excerpting a longstanding note.

What happens in a different case is that if B has an energy converter to which imported energy is coupled, it may perform constructive shaft work, and it will need to exhaust energy to C at a yet lower temperature. But by slicing up the story and setting up strawmen you can pretend that you have answered the problem. You have not. What needs explaining is organisation tracing to constructive work, and dissipative forces like diffusion are, by overwhelming probability, simply not capable of explaining such. But then, you and your ilk have failed to cogently address something so simple as a tray of coins in the 500-H state, so that is no surprise. You believe in the magical powers of lucky noise. Okay, if you want to do so, show us a case like a rock avalanche spontaneously producing "Welcome to Wales" at the Welsh border. KF
kairosfocus
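A minimal numeric sketch of that Clausius-style bookkeeping, with assumed temperatures (600 K and 300 K) chosen purely for illustration:

T_A, T_B, dQ = 600.0, 300.0, 100.0   # assumed: hot body A, cold body B, heat transferred

dS_A = -dQ / T_A     # A loses about 0.167 J/K of entropy
dS_B = +dQ / T_B     # B gains about 0.333 J/K, more than A lost
print(dS_A, dS_B, dS_A + dS_B)   # the net change is positive, as the 2nd law requires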
July 6, 2013 at 09:46 PM PDT
KF, No rebuttal to my simple little argument?
keiths
July 6, 2013 at 09:39 PM PDT
KS: It is you who are bluffing, and you have been called. I say freely that you cannot back up your bluff.

Show us your verified observation of a significant quantum of constructive work producing FSCO/I-rich entities solely through the spontaneous action of diffusion and other dissipative forces. Or even the degree of order reflected in a tray of coins in the 500-H state, or the O2 molecules in a room all spontaneously rushing to one end. There are cases of order that do emerge by mechanical necessity and relevant forces, as in snowflakes or hurricanes, but this has nothing to do with functional, specific, complex organisation.

Or just tell us the case where it has been demonstrated empirically that, by the ordinary physical and chemical forces in a warm little pond or the like, a living cell based on homochiral C-chemistry in aqueous medium, with encapsulation and intelligent gating, metabolic subsystems, and a von Neumann code-based self-replication system, has originated by blind chance and mechanical necessity. That is the missing root of your tree of life, and no root, no shoots, no tree. Or, do you have the equivalent of the spontaneous formation of a "Welcome to Wales" sign at the border of Wales, through a rock avalanche? Show us; no more drumbeat dismissive talking points.

I predict the result: you don't have any such case, but your ideology demands that the equivalent and more happened in some warm little pond or the like, leading to a world of life that, by blind chance variation and the loss of unlucky or less favoured varieties, managed to form dozens of body plans requiring increments of FSCO/I of order 10 to 100 million bases. Remember, the unanswered UD pro-Darwinist essay challenge stands at over nine months without a serious answer on empirically grounded evidence addressing the full tree of life from the root up. KF
kairosfocus
July 6, 2013 at 09:35 PM PDT
F/N 2: Let me also clip the cited remark in my note on the info-entropy link that was nicely summarised by -- surprise -- Wiki:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>])

Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Harry S Robertson, in Statistical Thermophysics, pp. vii - viii, has aptly brought out the work-related significance of that loss of information:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
KF
kairosfocus
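To put a number on the magnitude point in the Wikipedia excerpt above, here is a back-of-the-envelope sketch (standard textbook values, assumed only for illustration) expressing an ordinary thermal entropy change in bits:

import math

k_B = 1.380649e-23                 # J/K, Boltzmann constant
dS = 334.0 / 273.15                # J/K: thermal entropy of melting 1 g of ice at 0 C
bits = dS / (k_B * math.log(2))    # the same change expressed in bits

print(dS, bits)   # ~1.22 J/K, which is ~1.3e+23 bits: far beyond anything in data compression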
July 6, 2013 at 09:23 PM PDT
KF, You're bluffing again. You dedicated an entire OP to challenging the compensation argument. I have shown that to deny the compensation argument is to deny the second law itself. The demonstration is simple and obviously correct. You (like Granville) have made a pitiful mistake and have shot yourself in the foot. To salvage your position, you will have to show that my analysis is flawed. Good luck.
keiths
July 6, 2013 at 09:20 PM PDT
F/N: This, by Brillouin, is also instructive on the fundamental issues at stake: __________ >> How is it possible to formulate a scientific theory of information? The first requirement is to start from a precise definition . . . . We consider a problem involving a certain number of possible answers, if we have no special information on the actual situation. When we happen to be in possession of some information on the problem, the number of possible answers is reduced, and complete information may even leave us with only one possible answer. Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations [cf. basic outline here on in context from Section A my note.] . . . .

Physics enters the picture when we discover a remarkable likeness between information and entropy. This similarity was noticed long ago by L. Szilard, in an old paper of 1929, which was the forerunner of the present theory. In this paper, Szilard was really pioneering in the unknown territory which we are now exploring in all directions. He investigated the problem of Maxwell's demon, and this is one of the important subjects discussed in this book. The connection between information and entropy was rediscovered by C. Shannon in a different class of problems, and we devote many chapters to this comparison. We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system. We can now state this result in a slightly different way: Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system. This point of view is defined as the negentropy principle of information [added links: cf. explanation in Section A my note here], and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must be discussed together and cannot be treated separately.

This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases. This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability . . . .
[Science and Information Theory, Second Edition, 1962. Dover Reprint.] >> _________ The onlooker is again invited to cf. the FYI-FTR here. KF
kairosfocus
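For a sense of the one-bit bookkeeping behind the Szilard and Maxwell's demon discussion above, here is a minimal sketch at an assumed temperature of 300 K:

import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, assumed room temperature

dS_per_bit = k_B * math.log(2)   # minimum entropy cost of one bit, ~9.6e-24 J/K
q_per_bit = T * dS_per_bit       # corresponding heat at 300 K, ~2.9e-21 J

print(dS_per_bit, q_per_bit)
# Information gained can at best match the entropy paid for it, which is why
# Brillouin's efficiency (information obtained / entropy increase) cannot exceed unity.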
July 6, 2013 at 09:16 PM PDT
KS: With all due respect, this is utter nonsense:
Are you aware that to deny the compensation argument is to deny the second law itself?
You have just proved beyond all doubt that your ideology is leading you into thermodynamic absurdities, by failing to distinguish between the implications of diffusion and related phenomena in light of the relevant molecular circumstances, and what happens with purposeful constructive work. (Which is in fact as commonplace as what is needed to construct posts in this thread.)

FYI, a construction site or an assembly line -- e.g. for a Jumbo Jet -- is not a violation of the second law. Such cases show instead that intelligence with skill, purpose, plans, equipment and resources as necessary is able to carry out energy-conversion-based constructive work and, by virtue of such, to create organised entities that would not SPONTANEOUSLY appear by diffusion or some stand-in for it, the improbability being overwhelming, similar to that of a 500-H string of coins happening by tossing. But of course, you and your ilk were in deep denial over the 500-H coin toss exercise also.

(If you still do not understand the relevance of diffusion as a contrast to construction [the point Sewell has been emphasising . . . ], kindly cf. the thought exercise here on in my longstanding, always linked note. Notice how, in the thought exercise, in accord with the laws of physics and an imagined circumstance, the parts for a flyable micro-jet are diffused in a large vat; then, in the control case, the issue is whether diffusion would spontaneously assemble a jet or something else. [By overwhelming improbability, not credibly; on the gamut of the solar system or the observed cosmos.] Then note the use of clumping nanobots, and the work of clumping, which undoes much of the diffusion. Then note the further work of correctly arranging the parts, which yields the final construction. In these cases, observe that informationally directed work is able to construct a desired and planned entity. In so doing, much shaft work is obviously done; as it is performed by physical entities using energy conversion devices, there is no violation of the 2nd law, since in the course of that work waste energy is exhausted as heat. This is quite similar to Szilard's answer (if I recall correctly) to the Maxwell's demon dilemma: the effort to acquire and apply information makes the difference. And indeed, the thought exercise is obviously based on an update both to Maxwell and to Hoyle, by miniaturising Hoyle. Nor does the formation of a hurricane or a similar entity by forces of order [cf. here] answer the problem; the difference between chaos caused by random diffusive forces, order caused by mechanical necessity, and organisation in accordance with a Wicken-type wiring diagram is obvious, cf. here.)

For that matter, this same difference between diffusive or dissipative, probability-driven processes and constructive work holds for the production of posts in this thread, which also exhibit FSCO/I.

Your quarrel at this point is not with me, or with Dr Sewell, or with Nobel-equivalent prize-holder the late Sir Fred Hoyle (who talked about Jumbo Jets being made by tornadoes to illustrate his point). Nope, it is with J. S. Wicken, Robert Shapiro and Leslie Orgel, who at least acknowledged that something was missing in the account of OOL, as can be seen at 190 above and, in the case of Wicken, as I just linked. You are in the position of saying that constructive, functionally specific work, appearing spontaneously, has constructed life from chemicals in a stew in a pond or the like, and has onwards constructed the rest of the world of life.
Without first pausing to show, by concrete observed example, that the production of constructive work by forces of diffusion or the like is an empirically observed fact, leading to FSCO/I. That is, you are improperly appealing to the implied magical powers of lucky noise to do something like create, at the border of Wales, next to a railroad, by a rock avalanche, a sign saying "Welcome to Wales." Sewell's fundamental point is right, obviously right:
. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid [--> i.e. diffuses] is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.
KF
kairosfocus
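As a rough sketch of the 500-coin figure invoked above (the search-resource numbers below are assumptions for illustration, not taken from the comment):

import math

n = 500
log10_p = -n * math.log10(2)      # log10 of 2**-500, about -150.5

# Assumed, deliberately generous search resources: 10**57 atoms each tossing
# 10**14 times per second for 10**17 seconds.
log10_trials = 57 + 14 + 17       # 10**88 trials in all

print(f"P(500 heads) ~ 10^{log10_p:.1f}")
print(f"assumed trials ~ 10^{log10_trials}")
print(f"expected successes ~ 10^{log10_trials + log10_p:.1f}")   # ~10^-62.5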
July 6, 2013 at 08:57 PM PDT
cantor,
Please just skip over my posts if you don’t want to engage me on my terms.
OK.
The only “script” I have in mind is to tenaciously refuse to be derailed by your insistence on changing the subject to “evolution”.
Yes. God forbid (so to speak) that someone talk about entropy, evolution and open systems when discussing a paper entitled Entropy, Evolution and Open Systems. Good thing we have you keeping us on the rails, cantor.
keiths
July 6, 2013 at 08:10 PM PDT
I can see that you have a script in mind
The only "script" I have in mind is to tenaciously refuse to be derailed by your insistence on changing the subject to "evolution".
cantor
July 6, 2013 at 07:46 PM PDT
How about just making your case instead of asking us to choose among a set of poor and incomplete answers to a multiple-choice question?
Because you have your preferred style (posting long, rambling, incoherent, self-contradicting, irrelevant random thoughts) and I have mine: I ask questions. Please just skip over my posts if you don't want to engage me on my terms.
cantor
July 6, 2013 at 07:40 PM PDT
cantor, I can see that you have a script in mind that you'd like me (or Lizzie) to play out with you (as demonstrated, for example, by your eagerness to foist "option 3" on me). Reminds me of Upright Biped. How about just making your case instead of asking us to choose among a set of poor and incomplete answers to a multiple-choice question?
keiths
July 6, 2013 at 07:03 PM PDT
