Uncommon Descent Serving The Intelligent Design Community

Granville Sewell’s important contribution to physics: Entropy-X


Abstract:   In Sewell’s discussion of entropy flow, he defines “Entropy-X”, a very useful concept that should clarify misconceptions of entropy promulgated by Wikipedia and by scientists unfamiliar with thermodynamics. Sewell’s important contribution is to argue that one can and should reduce the “atom-entropy” into subsets of mutually exclusive “entropy-X”. Mathematically, this is like factoring an N x M matrix into block-diagonal form by showing that the cross terms between blocks do not contribute to the total. Each block on the diagonal then corresponds to a separately computed entropy, or “Entropy-X”.  This contribution not only clarifies many of the misunderstandings of laypersons, it also provides a way for physicists to overcome their confusion regarding biology.

– o –

Introduction:     Entropy was initially discussed in terms of thermodynamics, as a quantity that emerged from the formulas relating energy, temperature, and work.  Ludwig Boltzmann found a way to relate billiard-ball counting statistics to this thermodynamic quantity, with a formula now inscribed on his tombstone:  S = k ln(Ω). The right-hand side of this equation contains the logarithm of the number of possible ways to arrange the atoms. The left-hand side is the usual thermodynamic quantity. Relating the two different worlds of counting and heat is the constant “k”, now called the “Boltzmann constant”.
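As a minimal sketch of the counting side of this equation (the two-atom toy system below is my illustration, not from the original post):

```python
# Toy illustration of S = k*ln(Omega): entropy from a count of microstates.
# The "system" is two dice-like atoms with 6 states each -- purely
# illustrative numbers, chosen only to show the bookkeeping.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(omega: int) -> float:
    """Entropy in J/K of a system with `omega` equally likely microstates."""
    return k_B * math.log(omega)

print(boltzmann_entropy(6**2))  # 36 microstates -> ~4.95e-23 J/K
```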

Now for the shocking part: there is no theory that predicts its value. It is a conversion constant that is experimentally determined. It works best when the real physical system approximates billiard balls, such as the noble gases. The correspondence gets progressively worse, or needs more adjustments, as the gas becomes N2 (diatomic) or CO2 (triatomic). Going from gas to liquid introduces even more corrections, and by the time we get to solids we use a completely different formula.

For example, a 1991 Nobel prize was awarded for studying how long, oily molecules move around in a liquid, where not every state of rearrangement is accessible to tangled strings.  So 100 years after Boltzmann, we are only now tackling liquids and gels and trying to understand their entropy.

Does that mean we don’t know what entropy is?

No, it means that we don’t have a neat little Boltzmann factor for relating thermodynamic S to counting statistics. We still believe that it is conserved; we just can’t compute the number very easily.  This is why Granville Sewell uses “X-entropy” to describe all the various forms of order in the system. We know they must be individually conserved, barring some conversion between the various types of entropy in the system, but we can’t compute them very well.

Nor is it simply that the computation gets too large. For example, in a 100-atom molecule, the entropy is NOT computed by looking at all 100! permutations of the atoms, simply because many of those arrangements are energetically impossible.  Remember, when Boltzmann wrote “ln(Ω)” he was counting the possible states of the system. If a state is too energetic, it isn’t accessible without a huge input of energy.  In particle physics, this limitation is known as “spontaneous symmetry breaking”, and it is responsible for all the variation we see in our universe today.
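A toy sketch of this point (a three-atom, two-level system with an assumed energy budget; the numbers are mine, not the post’s): the raw combinatorial count overstates Ω, because only low-energy configurations are accessible.

```python
# Count raw configurations vs. energetically accessible ones.
# Toy system: 3 two-level "atoms", each holding 0 or 1 unit of energy,
# with an (assumed) total energy budget of 1 unit.
from itertools import product

E_MAX = 1  # illustrative energy budget, arbitrary units

all_states = list(product([0, 1], repeat=3))             # 2**3 = 8 raw states
accessible = [s for s in all_states if sum(s) <= E_MAX]  # apply the cutoff

print(len(all_states), len(accessible))  # 8 raw states, but only 4 accessible
```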

So rather than counting “atom states”, we assemble atoms into molecules and form new entities that act as complete units, as “molecules”, and then the entropy consists of counting “states of the molecule”–a much smaller number than “states of the atoms of the molecules”.  Molecules form complexes, and then we compute “states of the molecular complexes”. And complexes form structures, such as membranes. Then we compute “states of the structures”. This process continues to build as we approach the size of a cell, and then we have to talk about organs and complete organisms, and then ecologies and Gaia. The point is that our “unit” of calculation is getting larger and larger as the systems display larger and larger coherence.

Therefore it is completely wrong to talk about single-atom entropy and the entropy of sunlight when we are discussing structural entropy, for as Granville and previous commentators have said, the flow of energy in sunlight with a coherence length of one angstrom cannot explain the decameter coherence of a building.

So from consideration of the physics, it is possible to construct a hierarchical treatment of entropy that lets entropy be applied to the cell, but in 1991 we had barely made it to the oily-liquid stage of development. So on the one hand, contrary to what many commentators imagine, physicists don’t know how to compute an equivalent “Boltzmann equation” for the entropy of life; but on the other hand, Granville is also right that we don’t need to compute the Boltzmann entropy to show that it must be conserved.

– o –

Mathematical Discussion:   Sewell’s contribution is to recognize that there must be a hierarchical arrangement of atoms that permits the intractable problem of calculating the entropy to be treated as a sum of mutually exclusive sub-entropies.

This contribution can be seen in the difference between “Entropy-chromium” and “Entropy-heat” that he introduces in his paper, where Entropy-chromium is the displacement of chromium atoms in a matrix of iron holding the velocity of the atoms constant, and Entropy-heat considers a variation in velocities while holding the positions constant.  These two types of entropy are separated by a large energy barrier, on the order of several eV per atom, that prevents them from interconverting. At sufficiently high temperature–say, the temperature at which the iron-chrome alloy was poured–the chromium atoms have sufficient mobility to overcome the energy barrier and move around. But at present room temperature, they are immobile.  So in the creation event of the chromium bar, the entropy was calculated for both position and velocity, but as it cooled, “spontaneous symmetry breaking” produced two smaller independent entropies from the single larger one.
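To see how sharp this separation is, here is a rough sketch of the Boltzmann hop factor exp(-E/kT); the 2 eV barrier is an assumed stand-in for the post’s “several eV per atom”, and 1800 K an assumed pour temperature:

```python
import math

k_B_eV = 8.617e-5  # Boltzmann constant, eV/K
E_barrier = 2.0    # assumed barrier height, eV ("several eV per atom")

# Boltzmann factor ~ fraction of attempts with enough energy to hop.
for T in (300.0, 1800.0):  # room temperature vs. an assumed pour temperature
    print(T, math.exp(-E_barrier / (k_B_eV * T)))
# ~2.5e-34 at 300 K (chromium frozen in place) vs ~2.5e-06 at 1800 K (mobile)
```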

Now the beginning of this calculation is the full-up atom-entropy where everything is accessible. This “big-bang” entropy gives at least 7 degrees of freedom for each atom. That is, the number of bins available for each atom is at least 7–one for the species, 3 that give the position in x,y,z, and 3 that give the velocity in Vx,Vy,Vz.  We could subdivide species into all the different quantum numbers that define atoms, but for the moment we’ll ignore that nuance.  In addition, the quantization of space and velocity into “Planck” sizes of 10^-34 meters means that our bins do not always hold a real number, but a quantized length or velocity specified by an integer number of Planck sizes.  But again, the real quantization is that atoms don’t overlap, so we can use a much coarser quantization of 10^-10 meters, or angstrom atomic lengths. The reason this is important is that we are reducing ln(Ω) by restricting the number of states of the system that we need to consider.
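As a quick sketch of how much ln(Ω) shrinks under this coarse graining (one particle on a 1 m line, using the post’s rough 10^-34 m Planck figure; all numbers illustrative):

```python
import math

L = 1.0           # box size, m (illustrative)
planck = 1e-34    # the post's rough "Planck size", m
angstrom = 1e-10  # atomic-scale bin, m

# Nats of position entropy per degree of freedom at each bin size.
print(math.log(L / planck))    # ~78 nats with Planck-sized bins
print(math.log(L / angstrom))  # ~23 nats with angstrom-sized bins
```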

But if S = k ln(Ω), then this means we are mathematically throwing away entropy! How is that fair?

There is a curious result in quantum mechanics that says if we can’t distinguish two particles, then there is absolutely no difference if they are swapped. This is another way of saying that their position entropy is zero. So if we have two states of a system, separated by a Planck length, but can’t tell the difference, the difference doesn’t contribute to the entropy.  Now this isn’t to say that we can’t invent a system that can tell the difference, but since resolving a Planck length requires light of beyond-gamma-ray energies, we really have to go back to the Big Bang to find a time when this entropy mattered.   This reconfirms our assumption that as a system cools, it loses entropy and has fewer and fewer states available.
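A minimal sketch of that bookkeeping (the standard Gibbs N! correction; the bin and particle counts are assumed for illustration):

```python
import math

M, N = 1000, 3  # M bins, N identical particles (illustrative)

ln_omega_distinguishable = N * math.log(M)                          # ln(M**N)
ln_omega_identical = N * math.log(M) - math.log(math.factorial(N))  # ln(M**N / N!)

# Swapping identical particles creates no new state, so ln(Omega)
# drops by ln(N!) -- entropy that was never really there.
print(ln_omega_distinguishable - ln_omega_identical)  # ln(6) ~ 1.79 nats
```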

But even this angstrom coarse graining in position, represented by “Entropy-Chromium”, is still too fine for the real world, because biology is not made out of noble gases bouncing in a chamber. Instead, life exists in a matrix of water, an H2O molecule of sub-nanometer size. Just as we cannot tell the difference if we swap the two hydrogen atoms in a water molecule around, we can’t tell the difference if we swap two water molecules around. So the entropy quantization gets even coarser, and the number of states of the system shrinks, simply because the atoms form molecules.

A very similar argument holds for the velocities. A hydrogen atom can’t have every velocity possible because it is attached to an oxygen atom. That chemical bond means that the entire system has to move as a unit. But QM tells us that as the mass of a system goes up, the wavelength goes down, which is to say, the number of velocities we have to consider in our binning is reduced as we have a more massive system. Therefore the velocity entropy drops as the system becomes more chemically bound.
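A rough sketch of that scaling, via the thermal de Broglie wavelength λ = h/√(2πmkT) (the comparison of a lone hydrogen atom with a water molecule is my illustration, not the post’s):

```python
import math

h = 6.626e-34    # Planck constant, J*s
k_B = 1.381e-23  # Boltzmann constant, J/K
amu = 1.661e-27  # atomic mass unit, kg
T = 300.0        # room temperature, K

for name, mass_amu in [("H atom", 1.0), ("H2O molecule", 18.0)]:
    lam = h / math.sqrt(2 * math.pi * mass_amu * amu * k_B * T)
    print(name, lam)  # ~1.0e-10 m for H; ~4.2x shorter for the heavier H2O
```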

And of course, life is mostly made out of polymers of 100’s or 1000’s of nanometers in extent, which have even more constraints as they get tangled around each other and attach or detach from water molecules. That was what the 1991 Nobel prize was about.

Mathematically, we can write the “Big Bang” entropy as an N x M matrix, where N is the number of particles and M the number of bins. As the system cools and becomes more organized, the entropy is reduced, and the matrix becomes “block-diagonal”, where the blocks correspond to molecules, polymer chains, cell structures, organelles, cells, etc.
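A toy sketch of why block structure shrinks the count (all sizes assumed for illustration): when particles are locked into independent blocks, the accessible states multiply block by block, so ln(Ω) becomes a sum of block entropies, far smaller than the unconstrained count.

```python
import math

n_blocks = 2            # e.g. two "molecules"
particles_per_block = 2
bins_per_block = 4

# Unconstrained ("ball of gas"): every particle may occupy any bin anywhere.
total_bins = n_blocks * bins_per_block
total_particles = n_blocks * particles_per_block
ln_omega_free = total_particles * math.log(total_bins)         # ln(8**4) ~ 8.3

# Block-diagonal: each particle is confined to its own block's bins,
# so the counts multiply block by block and the logs add.
ln_omega_blocked = total_particles * math.log(bins_per_block)  # ln(4**4) ~ 5.5

print(ln_omega_free, ln_omega_blocked)
```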

Now here is the key point.

If we only considered the atom positions, and not these molecular and macro-molecular structures, the matrix would not be in block diagonal form. Then when we computed the Boltzmann entropy, ln(Ω), we would have a huge number of states available. But in fact, biology forms so much structure, that ln(Ω) is greatly reduced.  All those cross-terms in the matrix are empty, because they are energetically inaccessible, or topologically inaccessible (see the 1991 Nobel prize). Sewell is correct, there is a tremendous amount of order (or reduced entropy) that is improperly calculated if life is considered as a ball of noble gas.

Let me say this one more time. Sewell is not only correct about the proper way to calculate entropy, he has made a huge contribution in arguing for the presence of (nearly) non-convertible forms of sub-entropy.

– o –

Thermodynamic Comment:   Some have argued that this “sub-entropy” of Sewell’s can be explained by some sort of spontaneous symmetry breaking due to the influx of energy. We have talked about “cooling” causing spontaneous symmetry breaking, which is consistent with the idea that higher temperatures have higher entropy, but the idea that “heating” can also drive symmetry breaking and lowered entropy is incoherent. This is simply because, thermodynamically, dS = dQ/T: an influx of energy brings entropy and temperature with it simultaneously.

Let’s look at this a bit closer and apply it to the Earth.  The Sun is at 5500 K, and therefore its energy arrives principally in the form of yellow photons.  The Earth’s global temperature averages out to about 300 K, so it emits infrared photons.   In steady state, the energy entering has to balance the energy leaving (or the Earth would heat up!).  Since the entropy of the photons hitting the Earth is almost twenty times smaller than the entropy of the photons leaving the Earth, the Earth must be a net source of entropy. Ergo, sunlight should be randomizing every molecule in the Earth and preventing life from happening.
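A back-of-envelope check of that factor (a sketch using the post’s round temperatures; for thermal radiation the entropy carried per unit energy scales as 1/T, up to a 4/3 factor that cancels in the ratio):

```python
T_sun, T_earth = 5500.0, 300.0  # K, the round figures used above

# Equal energy in and out, so the out/in entropy ratio is just T_sun/T_earth.
print(T_sun / T_earth)  # ~18.3 -- the "almost twenty times" in the text
```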

Does this make sense? I mean, everybody and their brother says that entropy can decrease if you have a heat engine in the system. Energy comes into your refrigerator as low-entropy energy. Energy billows out of the coils in the back as high-entropy heat. But inside the fridge is a low-entropy freezer.  Couldn’t this apply to Earth? (E.g., the compensation argument.)

Well, if the Earth had a solar-powered refrigerator, and some insulation, and some fancy piping, yes. But of course, all that is an even lower-entropy system than the electricity, so we are invoking bigger and bigger systems to get the entropy in the small freezer to go down. The more realistic comparison is without all that special pleading. Imagine you leave your freezer door open accidentally and leave for the weekend. What will you find? A melted freezer and a very hot house, because your fridge would run continuously trying to cool something down that warmed even faster. In fact, this is how dehumidifiers function. So your house would be hot and dry, with a large puddle of coolish water in the kitchen. Some fridges dump that water into a pan under the heating coils, which would evaporate the water, and we would then be back to our original state but at a higher temperature. Would the entropy of the kitchen be greater or smaller than if you had unplugged the fridge for the weekend? Greater, because of all that heat.

And in fact, this is the basic argument for why objects that sit in space for long periods of time have surfaces that are highly randomized.  This is why Mars can’t keep methane in its atmosphere. This is why Titan has aerosols in its atmosphere. This is why the Moon has a sterile surface.

If the Earth is unique, there is more than simply energy flow from the Sun that is responsible.

Comments
If you don’t exhaustively annotate and footnote, explain and define, they will take every opportunity to misrepresent, obfuscate, and attempt to re-examine that which has already been covered. If you do exhaustively explain and contextualize and define and attribute, then you’re attacked for being too wordy or spamming. It’s a guerrilla tactic; they aren’t attempting to debate, they’re trying to win a war. You cannot explain the obvious to those that deny it.
William, science requires very precise and specific definitions. If you do not define your terms, including your units, you will find that your argument doesn't work. Many things are true in the world that are not obvious. Just because something is not obvious does not mean it is not true.

Saying that it is obvious that a Boeing 747 has less entropy than a tornado, or than the junkyard from which the tornado putatively constructed it, does not make it so. The fact that it seems "obvious" to you that it does doesn't, as you say, make it possible to explain it to someone who denies that the 2nd law of thermodynamics is, um, not about thermodynamics. It is clear that the Boeing has more something than the junkyard (greater capacity to fly, for instance), but it is far from clear that that something is negative entropy. And the reason becomes clear when you actually define the terms consistently.

The 2nd Law of thermodynamics is about entropy change, where entropy is measured in joules per kelvin. If the difference cannot be measured in those units, then it is not a change in entropy as defined in the 2nd Law. If there isn't a change in entropy as defined in the 2nd Law, then the 2nd Law of thermodynamics does not apply.

The obfuscation, misrepresentation, and "drumbeat repetition in the teeth of repeated correction", as KF would say, seem to me to be coming from Granville's supporters, however inadvertently, not his critics. Be that as it may: thermodynamics is about energy. If something isn't about energy, it isn't about thermodynamics, although the statistical concepts may come in handy elsewhere.

— Elizabeth B Liddle
July 10, 2013 at 06:55 AM PDT
Lizzie:
What was uniform still air, is now a massively powerful high energy system with extreme energy gradients!
Except tornadoes do NOT form out of "uniform still air". I would think that would be a violation of some law. BTW I doubt your position can even account for tornadoes.

— Joe
July 10, 2013 at 06:47 AM PDT
Like others here (including Granville!) I am finding it difficult to make sense of the OP. But let me try to understand Robert’s post at 24:
The problem several people have with “entropy” is that they confuse it with a substance. For statistical physics, it is a shorthand for discussing the number of states of the system, while for thermodynamicists, it is related to both energy and temperature. The only place, and I stress it again, that statistics and thermal physics overlap, is when we are discussing atomic gasses. Then we can use Boltzmann’s equation. Everywhere else, we can’t convert them into thermal properties, and probably not even from one statistic to another.
Robert seems to be saying that thermodynamics and “statistical physics” are different. “Statistics” is certainly categorically different to either, just as the integer “2” is categorically different to the 2 as in “2 apples”. Thermodynamic entropy is about physical things i.e. about physics. Statistics is a tool we used to model it. So to say that “the only place…that statistics and thermal physics overlap, is where when we are discussing atomic gasses” seems utterly extraordinary to me (and rather at odds with what Granville is saying!). Certainly the statement that “Everywhere else, we can’t convert them into thermal properties, and probably not even from one statistic to another” seems very odd. Let’s go back to the 1st Law of thermodynamics, which is the law of conservation of energy, and which says that the change in INTERNAL ENERGY of a system is equal to its HEAT, a form of energy, measured in joules, which is Force x Distance, plus the WORK done, which is also defined as Force x Distance, and is thus a measure of energy expended, and can also be measured in joules. So of course we can “convert” macroscopic things like houses and tornados and trees into thermal properties. And we can calculate how much of internal energy a such a system has, and thus its capacity to do work it is able to do; after which its internal energy will be reduced, and its entropy increased.
For example, one could convert a computer code into 1's and 0's and measure its statistical entropy, but if I use that computer code to, say, sort all the books in the library into alphabetical order, does that produce a constant that relates the entropy of “computer code” into the entropy of “library catalogues”? I don’t think so. One needs a “converter”, and the efficiency of a converter depends on design, not physics.
If you are talking about Shannon entropy (which you seem to be), we are indeed not talking about energy, but about something else entirely. The Shannon entropy of a piece of computer code simply tells you something about the efficiency of the code (the “channel capacity”), not how good a specific configuration will be at sorting book titles into alphabetical order. We do know that the vast majority of configurations of the 1’s and 0’s of your code will not do anything, let alone sort your books. But, unlike a flask of water into which you drop a drop of dye, a piece of code that initially starts off, say, with all the 0’s at one end and all the 1’s at the other has no tendency, left to its own devices, to end up jumbled. Nor is there any reason to think that code that consists of only a few 1’s and millions of 0’s, which will have far less Shannon entropy than one in which the 1’s and 0’s are fairly equal in number, will be more likely to do a useful job. In fact, it’s less likely, as the channel capacity (i.e. the Shannon entropy) will be much less. So you are correct that you can’t convert Shannon entropy to thermodynamic entropy. But if Granville’s claim was about Shannon entropy it would be false. A system with low thermodynamic entropy can do more work than a system with high thermodynamic entropy. But a piece of code with low Shannon entropy is much less capable of being rearranged into effective code than a piece of code with high Shannon entropy. So in terms of usefulness, they are the opposite: to get good code you want high Shannon entropy; to get lots of work, you want low thermodynamic entropy. So if Granville is talking about thermodynamic entropy (as his appeal to the 2nd Law of thermodynamics indicates) his claim is false. If he is talking about some other entropy, not to do with energy, then it is not at all clear what he is even claiming.
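[A quick sketch of the channel-capacity point above (my illustration, not part of the original comment): a balanced bitstring carries about 1 bit/symbol of Shannon entropy; a highly skewed one carries far less.]

```python
import math

def shannon_entropy(bits: str) -> float:
    """Shannon entropy in bits per symbol of a 0/1 string."""
    p1 = bits.count("1") / len(bits)
    return sum(-p * math.log2(p) for p in (p1, 1 - p1) if p > 0)

print(shannon_entropy("01" * 500))            # 1.0 bit/symbol (balanced)
print(shannon_entropy("1" * 10 + "0" * 990))  # ~0.08 bit/symbol (skewed)
```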
Why is this important? Well, the theorems for conservation of entropy all come out of thermodynamics. What Boltzmann did, was to show how this could be converted to statistical mechanics. Thus the peculiar “ordering” of atoms has all the same constant properties as the thermal physics of collections of atoms. This was the power of the equation.
But entropy is not conserved. The entropy of an isolated system always increases. And statistical mechanics does indeed show why this is true – unless we add energy to a system (by doing work on it), any work done within the system will increase its entropy, i.e. reduce its capacity for further work. This is as true whether we are talking about the gradual reduction of a heat gradient, or rocks gradually dropping off a cliff face. There is no conflict between statistical mechanics and thermodynamics – it’s just that statistical mechanics gives you a microscopic view of why the 2nd Law of thermodynamics must be true. But both approaches are about energy, specifically the relationship between work (in joules), heat (in joules), and the internal energy of a system (in joules), whether you are talking about the potential energy of a rock at the top of a cliff, its kinetic energy as it falls, and the heat released when it hits the ground; or the conduction of heat along a metal bar (or for that matter, the diffusion of chromium within a metal bar). Or indeed the internal energy changes, heat release, and work that take place during a tornado.
I tried to describe how far physicists had gone in calculating the entropy of complex systems, but in another sense, this is a red herring. That is, we almost never use the entropy in calculation, only the change in entropy. And in complicated systems, the change in entropy is a path-dependent function. Or to say it another way, it is dS/dx that is important, not S itself, or perhaps integral[(dS/dx) dx]. So for example, when a nitroglycerine molecule dissociates, the reordering of the chemical bonds releases energy, and the process is driven by the increase in entropy of the gas products over the molecular precursor. So it is the local dS/dx that drives the reaction so very quickly. By analogy then, Granville doesn’t have to calculate the entropy of his jet airliner, simply the gradients in the entropy from junk-yard to airliner.
Yes. The energy gradients.
Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.
No! Energy gradients can increase spontaneously as well as decrease spontaneously - look at the tornado itself! What was uniform still air is now a massively powerful high-energy system with extreme energy gradients! You are not, surely, saying that tornadoes violate the 2nd Law of thermodynamics? Yet all that has happened is that work was done on the air molecules by virtue of thermal gradients at the earth’s surface. If you are talking about the 2nd Law of thermodynamics you are talking about energy, measured in joules. If you are talking about any other kind of entropy, for example the possible ways of arranging 1’s and 0’s in your computer code, then the 2nd Law does not apply. Indeed it will trip you up, because entropy in computer code is a helpful thing – it increases the efficiency of your code. But that’s because it has nothing to do with energy. If Granville wants to talk about the 2nd Law of thermodynamics, then he is talking about joules. And there is absolutely no reason to suppose that any chemical or biochemical reaction by which living things grow from raw ingredients violates the 2nd Law of thermodynamics. All the energy is accounted for.

— Elizabeth B Liddle
July 10, 2013 at 06:40 AM PDT
If you don't exhaustively annotate and footnote, explain and define, they will take every opportunity to misrepresent, obfuscate, and attempt to re-examine that which has already been covered. If you do exhaustively explain and contextualize and define and attribute, then you're attacked for being too wordy or spamming. It's a guerrilla tactic; they aren't attempting to debate, they're trying to win a war. You cannot explain the obvious to those that deny it.

— William J Murray
July 10, 2013 at 06:08 AM PDT
Phillip suffered similar tosh from the jackanapes and dullards, who whinged that he was spamming(!), because he persisted in trying to get through to them!

— Axel
July 10, 2013 at 05:38 AM PDT
'KF, You can't make up in volume for what your argument lacks in substance. If you could refute my simple 4-step argument, you would. You would point out the exact statement you disagree with, and you would explain why you thought it was wrong. You can't do that, so instead you spam the thread with thousands of words, hoping that the onlookers will assume that there's a refutation in there somewhere. There isn't, and the onlookers know it.'

KF, why don't you just own up? Put your hands up, and admit that, to his own mind and on his own terms(?), however mysterious, not to say farcical, a fool can outsmart the cleverest of the clever?

— Axel
July 10, 2013 at 05:34 AM PDT
'The approach many scientists take toward ID is to "define" science so that it excludes ID, and then declare "ID is not science" so they don't have to deal with the issue of whether or not ID is true.'

LOL VL, GS. That is it in a nutshell, an optimally neatly-designed nutshell. It really is so surreally true as to be side-splittingly funny. You know, you have just identified the absolutely primordially fundamental problem, and expressed it perfectly. We are following their wretchedly perverse agenda with its risibly inverted assumption of the absence of intelligent design throughout the universe (qualifying 'design' with 'intelligent' is really superfluous, since it is, essentially, a tautology). I'm afraid Planck was wrong, wrong, wrong. Science perforce (since so perverse) advances not one funeral at a time but, alas, hundreds of funerals at a time, in great blocks and wadges, and only now does there seem an end in sight. Those who are too dim to be sidling out of it all, or simply too combatively high-profile, are squealing like stuck pigs more and more loudly.

— Axel
July 10, 2013 at 05:01 AM PDT
keiths: You are being deliberately combative, and not trying to address what I'm asking about. I will not defend or even comment further on the remarks you are railing about until Robert Sheldon has chipped in to clarify what he meant. It would not be fair to him to guess what he meant, and then try to defend him on that basis. That is why I asked you to bracket out that passage from Comment 24 and *concentrate on the op-ed*. If you are willing to do that, then I will talk. If you aren't, then this conversation is over. Now, tell me the errors and incompetence that you detect in the physics of Robert Sheldon *as expressed in the op-ed*. Or, if you detect no such errors or incompetence, then why do you not agree with his conclusions?

— Timaeus
July 10, 2013 at 03:26 AM PDT
@keiths: In order to stop your broken record, I volunteer to speak for kf on this matter: The answer to your question is NONE. If kf is not happy with "his" answer, he can step in... In this case we will know that at least one step is incorrect.

— JWTruthInLove
July 10, 2013 at 03:25 AM PDT
KF, which step of my simple argument is incorrect, and why?

— keiths
July 10, 2013 at 02:53 AM PDT
Onlookers, as predicted, KS is simply repeating his willful misrepresentations. I repeat, with added links:
1: GS and I have both argued -- cf. here on -- that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also making a discussion on the issue that Boltzmann analysis is different from Gibbs analysis which explicitly reckons with degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS's way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view on entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicates about its micro state given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) randomness typical outputs: fjiwhghjwuo . . . or (III) what mechanical necessity would force:sksksksksk . . . ) 2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point. 3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that will cause a rise in entropy elsewhere (e.g. in B or in B", C and D) that by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere that will equal or exceed the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING THEN HAS ERECTED A STRAWMAN THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.) 4: In particular, KS has held up a -- corrected but this has been ignored and further misrepresented to the point where it is plainly a willful tactic -- case where on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out, that ever since Wicken and Orgel -- cf. the point 8 in the just linked, it has been recognised that order is not to be confused with organisation that Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life. 
5: I have further pointed out that in design thought the difference between order and organisation and the divergent empirically grounded causes of both, has been underscored since Thaxton et al in The Mystery of Life's Origin, the very first design theory work. That is, since 1984 on public and easily accessible record. That is, we must address: chance based randomness, necessity based order and complex, functionally specific complex organisation and associated information that in our uniform, repeated experience of its creation is only sources in choice contingency, aka design. (This is of course the context in which the design inference explanatory filter -- yet another massively, willfully strawmannised point -- is grounded.) 6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference on statistical weight of relevant functional and non-functional microstates, with a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work is operative.
Those who are genuinely interested to find out the balance on the merits can follow up, and will see who is being truthful and who is playing at strawman caricatures and now outright drumbeat repetition of big-lie tactics. (I have presented a point-by-point analysis and correction that exposed the strawman tactic and backed it up with a substantial analysis above and elsewhere; KS is blandly and brazenly denying that this exists. He believes that drumbeat repetition of strawman distortions maintained in the teeth of correction will confuse, polarise and generally create the false impression he wants. Do not fall for his uncivil tactics.) KF

— kairosfocus
July 10, 2013 at 02:44 AM PDT
T: Thanks for a thoughtful intervention. I draw your attention to the following summary, in which I highlight where I have taken up in brief some of RS's themes above:
1: GS and I have both argued -- cf. here on -- that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also making a discussion on the issue that Boltzmann analysis is different from Gibbs analysis which explicitly reckons with degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS's way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view on entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicates about its micro state given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) randomness typical outputs: fjiwhghjwuo . . . or (III) what mechanical necessity would force:sksksksksk . . . ) 2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point. 3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that will cause a rise in entropy elsewhere (e.g. in B or in B", C and D) that by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere that will equal or exceed the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING THEN HAS ERECTED A STRAWMAN THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.) 4: In particular, KS has held up a -- corrected but this has been ignored and further misrepresented to the point where it is plainly a willful tactic -- case where on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out, that ever since Wicken and Orgel -- cf. the point 8 in the just linked, it has been recognised that order is not to be confused with organisation that Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life. 
5: I have further pointed out that in design thought the difference between order and organisation and the divergent empirically grounded causes of both, has been underscored since Thaxton et al in The Mystery of Life's Origin, the very first design theory work. That is, since 1984 on public and easily accessible record. That is, we must address: chance based randomness, necessity based order and complex, functionally specific complex organisation and associated information that in our uniform, repeated experience of its creation is only sources in choice contingency, aka design. (This is of course the context in which the design inference explanatory filter -- yet another massively, willfully strawmannised point -- is grounded.) 6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference on statistical weight of relevant functional and non-functional microstates, with a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work is operative.
Now, going beyond, I see that there is a point where I do need to bring out a disagreement with RS:
KS: today you pointed me to Robert’s comment #24, so I took a look. Boy, was that an eye-opener! Thank you for pointing it out. Robert actually makes this claim:
[RS:] Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.
[KS, vulgarity deleted:] He’s saying that life itself, not just evolution, violates the second law!
The context is:
[RS:] Well, the theorems for conservation of entropy all come out of thermodynamics. What Boltzmann did, was to show how this could be converted to statistical mechanics. Thus the peculiar “ordering” of atoms has all the same constant properties as the thermal physics of collections of atoms. This was the power of the equation. We may not have a conversion constant for other forms of ordering, but the existence of these constants for noble gas atom ordering, strongly suggest that other forms of ordering are also conserved. This provided a beachhead into the sorts of ordering that Granville refers to, and we can fruitfully discuss the conservation of “Entropy-X”, even if we don’t know how to calculate it. I tried to describe how far physicists had gone in calculating the entropy of complex systems, but in another sense, this is a red herring. That is, we almost never use the entropy in calculation, only the change in entropy. And in complicated systems [--> note qualifier], the change in entropy is a path-dependent function. [--> In particular, if something is performing programmed constructive work in the situation, things are not so simple and direct anymore.] Or to say it another way, it is dS/dx that is important, not S itself, or perhaps integral[(dS/dx) dx]. So for example, when a nitroglycerine molecule dissociates, the reordering of the chemical bonds releases energy, and the process is driven by the increase in entropy of the gas products over the molecular precursor. So it is the local dS/dx that drives the reaction so very quickly. [--> Injecting first law considerations, and the like we end up with a Gibbs free energy equivalent here, so this is quite valid.] By analogy then, Granville doesn’t have to calculate the entropy of his jet airliner, simply the gradients in the entropy from junk-yard to airliner. Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. [--> because of programmed constructive work, often fed by chains of ATP molecules that push things way up the energy and entropy hill, paying for this elsewhere and of course dependent on a whole system of FSCO/I rich machines organised to carry out genetic info and regulation] And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics. [--> this I disagree with, on grounds similar to the point that was just made.] There may be ways to get around this with long-range forces and correlations. [--> The ways around have to do with the obvious presence of not mere correlations but programmed constructive work with actual energy batteries used to drive it uphill] But then, all of statistical mechanics presupposes that there are no long-range correlations, so more than thermodynamics is lost if we invoke long-range forces.
That is, first, RS is giving an important qualifier and wider context, that KS neatly clipped off in his quote. Second, RS has at minimum been sloppy in his wording, and as the wording appears, I disagree for reasons as annotated. The living cell is an automaton that is chock full of programmed operations and machinery organised to carry out things that would not spontaneously occur without such programming and a steady flow of ATP energy batteries. In effect we have something like Maxwell's demon on steroids at work here [the system is obviously programmed to "know" what to expect and how to use that to perform work that is otherwise not reasonable in a system that does not have that degree of organisation], and while actually calculating the entropy numbers would be very hard indeed, there is no serious reason to believe that the informationally directed work of a system violates the overall degradation of energy in the world as it proceeds. We have an FSCO/I rich autonomous system that is self maintaining and self replicating based on information and machinery. That is, there are FSCO/I rich energy conversion devices aplenty at work capable of performing and actually carrying out programmed construction work. The FSCO/I being produced and self replicated through constructive work begs for explanation on the known, empirically warranted source of FSCO/I in its various forms. And, there is just no serious reason to doubt that in the end, the inefficiencies associated with the system will dump entropy elsewhere -- note how carefully our bodies work to keep from cooking ourselves in our own body heat, what in effect a fever of enough elevation would do. All of that sweating etc. that seeks to keep a regulated temp going has a reason. So does shivering as a mechanism to try to keep us warm. And more. But also, the case just discussed shows how the attempt to say that compensation by exporting entropy is in effect a good enough answer fails. Again and again, we see constructive work being carried out in accord with programs, e.g. protein synthesis in ribosomes as a classic case. Constructive work, with heat flows as an inevitable byproduct, will dump heat elsewhere, thus entropy. Let me again clip my core summary analysis from 6 above:
Heat transfer in Isolated system:
|| A (at T_a) –> d’Q –> B (at T_b) || dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2 Th in a common form [--> I add, this shows how an increment of heat flow will cause entropy in the receiving body B to tend to rise, and due to the sharp rise in the number of possible ways to distribute mass and energy at micro levels implied, relative to the relatively fewer numbers lost for A, the overall number of ways to distribute mass and energy at micro levels in the system rises, the same as saying entropy rises overall. Or, in a certain ideal case useful in calculations, quasi-static equilibrium, it can be constant. --> I have taken pains to point out that the application of this case to a pool of water freezing by losing enough heat that the attractions of its polar molecules can impose crystal order, freezing to form ice, is a case of order not organisation, and is irrelevant to the matter at focus: constructive work issuing in FSCO/I, and whether forces like those responsible for diffusion and the like are a credible source of such] Heat engine, leaving off [in the diagram] the isolation of the whole: A –> d’Q_a –> B’ =====> D (shaft work) Where also, B’ –> d’Q_b –> C, heat disposal to a heat sink Where of course the sum of heat inflows d’Q_a will equal the sum of the rise in internal energy of B, dU_b, work done on D, dW, and heat transferred onwards to C, d’Q_b. [--> again, this shows that I am arguing in a context where Lex 2 Th holds, and holds by virtue of the fact that heat dumping allows it to do so, but this is not enough to answer the question that is implied by the "compensation" arguments: that we can credibly expect diffusion or the like to perform constructive shaft work issuing in FSCO/I in relevant cases of interest, e.g. at OOL in that warm little pond of Darwin or the like purely physical and chemical envt.]]
The pivotal questions are: the performance of constructive work by dW, the possibility that this results in FSCO/I, the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work, and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch. [--> Notice how I have focussed the real issues] By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos. There is but one empirically, reliably observed source for FSCO/I: design. The analysis on statistical weights of microstates and random forces such as diffusion and the like shows why. [--> cf. 6 above for a simple model of diffusion:
[KF to GS:] diffusion-like, random spreading mechanisms are acting and strongly tend to drive unconstrained systems to clusters of possible states where the formerly concentrated or more orderly items are now spread out and are utterly unlikely to return to the original state or something like that. There is a “time’s arrow” at work leading to a system that “forgets” its initial condition and moves towards a predominant cluster of microstates that has an overwhelming statistical weight. For instance (following a useful simple model of diffusion in Yavorsky and Pinski’s nice elementary Physics), if we have ten each of white and black marbles in two rows in a container: ||**********|| ||0000000000|| There is but one way to be as shown [--> 10B on top of 10W], but over 63,000 to be 5B/W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and “forgets” the initial state. [--> Under circumstances where metastabilities break, systems then migrate to maximally weighted clusters of microstates . . . think of a frozen material here with molecules of two types locked in a very special pattern, then melt, so diffusion happens and the things become mixed. If we observe at instants thereafter, it will be maximally implausible to see the simply describable case again, for systems of relevant complexity beyond 500 - 1,000 bits. Or, if such started in a more typical state, it will not be at all plausible on the gamut of the solar system or the observed cosmos to find deeply isolated clusters of special configs. The 500H coin example illustrates why.] This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more. The basic reason for such does not go away if the system is opened up to energy and mass flows, and the point that rare, specifically and simply describable states will seldom be revisited or found, for enough complexity — 500 – 1,000 bits, soon becomes that such states are beyond the reach of the solar system’s or the observed cosmos’ search capacity.
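[A quick check of the marble counts quoted above (an editorial sketch; the counting is standard binomial combinatorics):]

```python
# With 10 black and 10 white marbles in two rows of 10, the number of
# arrangements with k black marbles in the top row is C(10,k) * C(10,10-k).
from math import comb

def arrangements(k_black_top: int) -> int:
    return comb(10, k_black_top) * comb(10, 10 - k_black_top)

print(arrangements(10))  # 1      -- all black on top (the initial state)
print(arrangements(5))   # 63504  -- the "over 63,000" 5:5 cluster
print(arrangements(6))   # 44100  -- the "44,000+" 6:4 case
```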
Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don’t build jumbo jets or even instruments on their panels. But similar intelligently directed expenditures, through precise patterns of work, routinely do build jumbo jets.
Now, T, it is normal in complex cases like this for there to be differences and errors. However, I am pretty sure that for years to come, people like KS will be snipping out of context, and playing all sorts of willfully continued misrepresentation games, to "prove" that we do not know what we are talking about. He is here to disrupt and harvest snippets to use here and elsewhere for rhetorical or even propagandistic purposes, not to seek the truth through reasonable discussion. How do I confidently know this? Because that is what happened to me above in this thread, and because it is what I have observed too many times elsewhere and over the course of years with KS's ilk. RS has given an honest opinion, and has in my view made an error which I have addressed. This, predictably, will be pounced on and snatched out of context to make up and pummel a strawman, never mind corrections or protests. Indeed, my corrections above were brushed aside with the willful and irresponsible misrepresentations that I was spamming the thread and had not adequately addressed the strawman distortion I have yet again taken time to correct. That is what we are up against. It brings back all too many memories of how too many communists operated, but frankly, this seems to be worse, more deviously calculated a la Alinsky; most communists were true believers and there was something to be said for much of their analysis, and they were often quite sincere. (Never mind how dangerous they were as a result.) There is nothing positive to be said for a farrago of willfully sustained distractions, distortions and caricatures leading to ad hominems designed to polarise and confuse. KF

— kairosfocus
July 10, 2013 at 02:37 AM PDT
Timaeus:
Since Sheldon is a highly trained physicist, it seems likely that he would as a general rule employ sound reasoning in matters of physics.
Timaeus, your obsession with credentials is showing again. Do you think Sheldon's PhD outweighs his bizarre claim that life violates the second law? If you asked 100 highly-trained physicists whether they think that life violates the second law, what do you think they would say? Do their PhDs outweigh Sheldon's? How does the credential calculus work?

— keiths
July 10, 2013 at 02:31 AM PDT
kairosfocus:
I suggest that onlookers beware of KS’s tactics and their consequences.
KF, the onlookers are wondering why you can't locate a flaw in my 4-step argument.

— keiths
July 10, 2013 at 02:20 AM PDT
Re KS: It is patent that KS will not be corrected nor will he cease from willful distortions, so I note as follows for the record:
1: GS and I have both argued that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also making a discussion on the issue that Boltzmann analysis is different from Gibbs analysis which explicitly reckons with degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS's way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view on entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicates about its micro state given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) randomness typical outputs: fjiwhghjwuo . . . or (III) what mechanical necessity would force:sksksksksk . . . ) 2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point. 3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that will cause a rise in entropy elsewhere (e.g. in B or in B", C and D) that by virtue of the inefficiencies and/or molecular statistics involved, will lead to a rise of entropy elsewhere that will equal or exceed the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING THEN HAS ERECTED A STRAWMAN THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just linked definition plainly, sadly, applies to what he is doing.) 4: In particular, KS has held up a -- corrected but this has been ignored and further misrepresented to the point where it is plainly a willful tactic -- case where on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out, that ever since Wicken and Orgel, it has been recognised that order is not to be confused with organisation that Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life. 5: I have further pointed out that in design thought the difference has been underscored since Thaxton et al in The Mystery of Life's Origin, the very first design theory work. That is, since 1984 on public and easily accessible record. 
That is, we must address: chance based randomness, necessity based order, and complex, functionally specific organisation and associated information that, in our uniform, repeated experience of its creation, is only sourced in choice contingency, aka design. 6: I have taken time through the nanobots and microjets thought exercise, to show why this is so, per the vast difference in statistical weight of relevant functional and non-functional microstates, in a context where diffusion or some comparable blind and chance driven force is operative vs an intelligent process of constructive work.
I suggest that onlookers beware of KS's tactics and their consequences. KF

— kairosfocus
July 10, 2013 at 01:25 AM PDT
GD: Please. I have pointed out (and given supportive details above) that:
1: GS and I have both argued that diffusion-like mechanisms (including for heat spreading) are not credibly able to perform constructive work resulting in FSCO/I, on grounds linked to the statistical underpinnings of the second law. RS has added a significant point on how differences in energy levels will lock away access to certain states (through metastability), also noting that Boltzmann analysis differs from Gibbs analysis, which explicitly reckons with the degree of accessibility of different possible configs. (BTW, I suspect that not every physicist will fully agree with RS's way of putting his case [and I am not inclined to take up a debate on that], but the fundamental point here is apt. I have also pointed out the work flowing from Brillouin, Jaynes and co down to Robertson and co on the informational view of entropy: a metric of the average missing information to specify microstate on knowing macrostate, which forces us to treat these states as random. I should add that when something is in a functional state depending on a narrow cluster of configs reflecting FSCO/I, a LOT of information has just been communicated about its microstate, given the rarity of such clusters in the space of possible configs. That rarity is enforced by the need for multiple, well-matched, correctly arranged and coupled parts, such as we can see in even (I) the way the symbols used to compose a post like this in English are organised, as opposed to (II) the typical outputs of randomness: fjiwhghjwuo . . . or (III) what mechanical necessity would force: sksksksksk . . . )
2: I have repeatedly pointed out and given explanations on how the only empirically and analytically credible cause of such constructive work leading to FSCO/I is design. As is massively evidenced from our observation of such entities, with billions of cases in point.
3: I have pointed out and explained that, when KS tried to suggest that either GS or I have implied or tried to say that, when an entity A actually undergoes a loss of entropy, there is not a mechanism at work that -- by virtue of the inefficiencies and/or molecular statistics involved -- will cause a rise in entropy elsewhere (e.g. in B, or in B', C and D) equal to or exceeding the reduction at A, HE HAS MISREPRESENTED WHAT WE HAVE SAID AND IMPLIED, AND SO HAS LED AWAY AFTER A RED HERRING, THEN HAS ERECTED A STRAWMAN, THEN KNOCKED IT DOWN. (He has done so now several times in the teeth of reasonable correction, leading to his indulging in willfully continued misrepresentation. And yes, at this point, whether or not he will acknowledge or face it, the just-linked definition plainly, sadly, applies to what he is doing.)
4: In particular, KS has held up a case -- corrected, but the correction has been ignored and further misrepresented to the point where it is plainly a willful tactic -- where, on loss of sufficient heat, a pool of water freezes due to the ordering forces present in the polar water molecules leading to crystal packing. I have pointed out that, ever since Wicken and Orgel, it has been recognised that order is not to be confused with organisation, which Wicken describes in terms of being assembled per a wiring diagram, and which both associate with life.
5: I have further pointed out that in design thought the difference has been underscored since Thaxton et al in The Mystery of Life's Origin, the very first design theory work. That is, since 1984, on public and easily accessible record. That is, we must address: chance-based randomness, necessity-based order, and functionally specific complex organisation and associated information, which in our uniform, repeated experience of its creation is only sourced in choice contingency, aka design.
6: I have taken time, through the nanobots and microjets thought exercise, to show why this is so, per the vast difference in statistical weight of relevant functional and non-functional microstates, in a context where diffusion or some comparable blind, chance-driven force is operative vs. one where an intelligent process of constructive work is operative.
If, however, you come along and assume that KS has grounds for what he says and we do not, then you will likely indeed be misled. But then, that is a longstanding pattern in the debates over design theory: too many inclined to support Darwinism assume that ideologues who do not shun to stoop to willful, sustained misrepresentations are giving an accurate picture, e.g. "creationists in cheap tuxedos." Do I need to point out that willfully sustained misrepresentation in the face of easily accessible corrective facts is morally irresponsible, indeed a species of lying? Now, I think several of your questions above have been answered. I will note that I have taken time to explicitly point out that from Thaxton et al on, design thinkers have been careful to make relevant distinctions. For instance, there are naturally occurring heat engines triggered by fluid dynamics, e.g. hurricanes and tornadoes -- manifestations of order tracing to mechanical necessity, not organisation. The focal case is where constructive work leading to FSCO/I is the outcome, and the point is that on the relevant statistics, etc., this is not credibly produced by diffusion-like forces. Thus the use of scaling up and down in the jets examples, and the reference to Shapiro's equivalent macro-level example, the golf ball that played itself through 18 holes with the aid of winds, earthquakes and the like. In this context, it is not to put words in your mouth to point out that KS has erected and knocked over a distractive strawman by going off on a tangent about something no one disputes, and pretending that GS and I have argued things that say or imply that what he makes a song and dance about does not happen. And in light of that misrepresentation, the correction that I have put up is actually responsive, by exposing a strawman and emphasising what we actually HAVE said and argued, as opposed to what we have been willfully, irresponsibly and persistently misrepresented as saying. I hope as well you will understand how such willful misrepresentation -- an unfortunately demonstrably habitual pattern of argument by darwinist objectors to design for over a decade now -- frustrates honest and civil discussion by clouding and poisoning the atmosphere. Which is part of how such uncivil tactics work. And when that is then compounded by the pretence that we are doing much the same when we protest, that simply makes matters worse. If you are indeed an honest interlocutor, please change path. KF

kairosfocus
July 10, 2013, 01:16 AM PDT
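As a side note on the three sequence types in point 1 above (functional English text, random gibberish, forced repetition), a per-symbol Shannon entropy calculation makes part of the contrast concrete. Below is a minimal Python sketch; the sample strings are invented stand-ins, not quotes from any commenter, and what it can and cannot show should be kept straight: entropy alone separates forced repetition from the other two, but it does not by itself detect function.

from collections import Counter
from math import log2

def shannon_entropy(s):
    # Average information per symbol, H = -SUM p_i log2(p_i), in bits
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

samples = {
    "functional": "this sequence of symbols is organised to carry meaning",
    "random":     "fjiwhghjwuoqpxzmcnvbrtyelsakdjfhgqwopeurytmznxbcvlsd",
    "repetitive": "sksksksksksksksksksksksksksksksksksksksksksksksksksk",
}
for label, text in samples.items():
    print(f"{label:11s} H = {shannon_entropy(text):.2f} bits/symbol")

The repetitive string scores about 1 bit/symbol, while the English and random strings both score roughly 4 bits/symbol or more; distinguishing the functional case from the random one requires an independent specification of function, which is precisely the point under debate in the thread.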
keiths: Regarding the statement you've asked about, I presumed that Sheldon meant that the *spontaneous emergence of life from non-life* would involve such a violation, not that *the continued existence of life* does so. Be that as it may, let's bracket out that statement from Comment 24 for the moment. Let's just concentrate on the op-ed. Sheldon argues that Sewell's argument is, if not perfectly well formulated, at least substantially correct in most of its statements about thermodynamics and entropy. Sheldon gives some reasons for this opinion of his. I would like it if you and/or Elizabeth and/or anyone else would comment directly on the op-ed, saying what you agree with or disagree with in it. Obviously, you disagree with Sheldon's *conclusion* -- that Sewell's article is not crap science, but fairly good science -- but I already know that. I want to hear where you think Sheldon is right, and where you think he is wrong. As Elizabeth has pointed out, it is not the conclusions that matter so much (in determining the value of a scientific writing) as the reasoning. Since Sheldon is a highly trained physicist, it seems likely that he would as a general rule employ sound reasoning in matters of physics. It is also likely that he would not make a gross error in his account of the use of the term "entropy" or his understanding of the Second Law. So I'm curious to know if you think he has made any errors of either definition or reasoning in his op-ed.

Timaeus
July 10, 2013, 01:07 AM PDT
Timaeus, You find it odd that Lizzie and I aren't engaging Robert Sheldon's OP, but you are reading too much into that, as you tend to do. Everyone else in the thread is also ignoring the OP, including Granville, kairosfocus, and CS3. Granville jumped in with the first comment, said he was having "a little trouble" understanding the OP, and then proceeded to tell everyone that
Robert’s comments are from the point of view of statistical thermodynamics... I am still trying to understand the details of his post myself. In any case, I want to emphasize that the main points in my papers do not really require any understanding of statistical thermodynamics, or even (in the case of the Biocomplexity paper especially) PDEs or mathematics in general. My points are MUCH simpler!
In other words, he was basically advising all of us not to spend any time pondering Robert's post, since it wasn't necessary! You can hardly blame us for taking Granville's advice. I did read through the OP, but didn't find anything particularly comment-worthy, so I left it alone. However, today you pointed me to Robert's comment #24, so I took a look. Boy, was that an eye-opener! Thank you for pointing it out. Robert actually makes this claim:
Likewise, life has enormous gradients, both spatially and temporally, which should determine the direction of the reaction, but don’t because the exact opposite is observed. And yes, this violation of entropy gradients is a direct violation of the 2nd Law of Thermodynamics.
Holy crap! He's saying that life itself, not just evolution, violates the second law! The irony, Timaeus, is that you tried so hard in the other thread to persuade me that Granville's paper wasn't claiming that evolution violated the second law. Your argument was that even if Granville personally believed it, he wasn't stating it in the paper, and so the paper deserved to be taken seriously. So today you asked Lizzie and me to take a closer look at what Robert wrote, in both the OP and his comment. I did, and I found something even more outlandish than the evolution/second law claim. Robert has out-Granvilled Granville! Shall we inform the Nobel committee that life violates the second law of thermodynamics, according to Robert Sheldon?

keiths
July 10, 2013, 12:27 AM PDT
KF:
GD: Kindly stop putting words in my mouth that don't belong there. It is KS who has tried to suggest (frankly, at this stage it is a willful strawman distortion) that GS and I are trying to do, or say, something equivalent to trying to overthrow the second law.
I certainly don't intend to misrepresent your position, and I don't see where I've done anything like what you describe above. I did say that you gave an example of compensation occurring, but A) you clearly did, and B) this is not in any way "equivalent to trying to overthrow the second law". Compensation is an integral part of the second law, not a challenge to it. Honestly, I don't entirely understand what your position is, so I haven't tried to represent your position, let alone misrepresent it. While you post in great volume, you could really stand to work on your clarity. I'll ask some questions below to try to get you to clarify some relevant parts of it. I also don't think you're bothering to pay attention to my and/or Keith's points. For instance, your massive "point-by-point refutation" didn't actually engage Keith's point at all -- he's making the same basic point I am, that the second law allows local entropy decreases when they're coupled with compensating entropy increases elsewhere; your response was all about organization and related topics, not entropy. You're not even talking about the same thing. Speaking of which:
What we have actually said is something else: to practical certainty, given the underlying statistical basis for the law, diffusion and similar processes cannot credibly perform constructive work issuing in FSCO/I. Just as we do not see golf balls, by lucky collocations of forces, spontaneously played through 18 holes of golf, to give Shapiro's example. If you think otherwise in the teeth of the evidence and reasoning that establishes the second law, just kindly give us an actually observed and recorded example: ____________ , and tell us when the Nobel Prize was awarded for the success: _________ .
Speaking of putting words in someone else's mouth... where did I say anything at all about FSCO/I or golf? Ok, now that I've ranted about you not understanding our points, let me try to get you to clarify yours:
That, when a body of water is cooled, the cooling process manifests a rise of entropy elsewhere that exceeds the loss, does not deflect the force of the point that diffusion-like factors for heat etc. overwhelmingly tend to move systems to clusters of microstates that have heavier statistical weight; they are simply not credible as the cause of constructive work ending in creation of FSCO/I.
If I'm reading this right, you're agreeing that compensation happens ("...a rise of entropy elsewhere that exceeds the loss..."), but denying that this can lead to the production of FSCO/I. A) Is that correct? B) If it's not, and you deny that compensation happens, how do you reconcile that with it being an example of an entropy decrease in one place compensated by an increase elsewhere? C) If it's not, and you aren't denying that this can lead to the production of FSCO/I, then... I'm massively confused; please try to explain again. Also, if you do claim this (whatever "this" is) cannot lead to the production of FSCO/I, do you claim that this is impossible because it would violate the second law of thermodynamics, or for other reasons? While I'm at it:
h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including "do-always" looping!)]: | | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –> d’W [work done on say D] + d’Qo –> (C, sink at Tc) | | i] A’s entropy: dSa >/= – d’Qi/Th j] C’s entropy: dSc >/= + d’Qo/Tc k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines. [--> Notice, I have addressed the compensation issue all along.]
Are you claiming that the conversion of heat to work (plus lower-temperature waste heat) can only occur via FSCI (or FSCO/I or whatever)? If so, do you claim that this is a consequence of the second law, or for other reasons?

Gordon Davisson
July 9, 2013, 08:38 PM PDT
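For concreteness on the compensation bookkeeping discussed in the exchange above, here is a minimal numeric sketch in Python of the freezing-water case that all parties treat as unproblematic. The figures (1 kg of water freezing at 273 K, releasing its latent heat into surroundings at 263 K) are assumed purely for illustration:

# Entropy bookkeeping for a freezing body of water (illustrative values)
L_f = 3.34e5          # latent heat of fusion of water, J/kg
m = 1.0               # mass of water frozen, kg
T_water = 273.15      # K, freezing point
T_surr = 263.15       # K, assumed colder surroundings

Q = L_f * m                    # heat released by the freezing water
dS_water = -Q / T_water        # entropy lost by the water as it becomes ice
dS_surr = +Q / T_surr          # entropy gained by the surroundings
print(f"dS_water = {dS_water:+.1f} J/K")
print(f"dS_surr  = {dS_surr:+.1f} J/K")
print(f"dS_total = {dS_water + dS_surr:+.1f} J/K")

The water loses about 1223 J/K, the surroundings gain about 1269 J/K, and the total change is positive, as the second law requires. Note that this arithmetic, by itself, speaks neither for nor against the separate FSCO/I claim being argued over.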
It's odd because you and Elizabeth and other detractors are acting as if: (1) The name signed to the column above is Granville Sewell rather than Robert Sheldon; (2) Robert Sheldon's extensive remarks (probably a couple of thousand words in the op-ed, and several hundred more in comment 24), which include both factual statements about thermodynamics and arguments regarding their application, *add nothing to the discussion* and therefore can be ignored.

Timaeus
July 9, 2013, 07:33 PM PDT
Timaeus, Why do you find that odd? Sheldon is defending Sewell and his lamentable paper, so of course Sewell's ideas are the focus.

keiths
July 9, 2013, 07:10 PM PDT
I find it interesting that when someone with a Ph.D. in physics, who has worked for NASA, who has many peer-reviewed papers, etc., writes a column (the one above) on thermodynamics that mostly sides with Granville Sewell, those who have been criticizing Sewell continue to engage Sewell (whom they have complained is a non-physicist and does not understand thermodynamics, entropy, etc.), but remain silent in response to the arguments made by the physicist (whom one would suppose to be reasonably well-trained in these subjects). I wonder what the reason for this silence is. But I guess I notice odd things.

Timaeus
July 9, 2013, 06:52 PM PDT
GD: Kindly stop putting words in my mouth that don't belong there. It is KS who has tried to suggest (frankly, at this stage it is a willful strawman distortion) that GS and I are trying to do, or say, something equivalent to trying to overthrow the second law. What we have actually said is something else: to practical certainty, given the underlying statistical basis for the law, diffusion and similar processes cannot credibly perform constructive work issuing in FSCO/I. Just as we do not see golf balls, by lucky collocations of forces, spontaneously played through 18 holes of golf, to give Shapiro's example. If you think otherwise in the teeth of the evidence and reasoning that establishes the second law, just kindly give us an actually observed and recorded example: ____________ , and tell us when the Nobel Prize was awarded for the success: _________ . That, when a body of water is cooled, the cooling process manifests a rise of entropy elsewhere that exceeds the loss, does not deflect the force of the point that diffusion-like factors for heat etc. overwhelmingly tend to move systems to clusters of microstates that have heavier statistical weight; they are simply not credible as the cause of constructive work ending in creation of FSCO/I. For essentially the same reason, we have no good basis to expect that a solar system full of rock avalanches over the past 10^17 s would even once cause rocks to fall into a pattern spelling out this post. This is logically possible, but so lost in the space of possibilities compared to the dominant clusters of outcomes that the outcome is practically unobservable. Likewise, it is logically possible that the post you are reading is produced by noise on the internet, but this too is simply unobservable in empirical terms, for the same reason. Now, finally, the 2nd law is a case of an inductive generalisation that, on observations and related analysis, forbids certain outcomes. Like all similar laws, it is provisional but on evidence highly reliable. The force of that underlying analysis, as the case above illustrates, is that due to relative statistical weight, diffusion will predictably, reliably -- not in our observation -- assemble a jet or any comparable thing. But assembly robots properly instructed can do so. Or, scaling back up to Hoyle's example, a tornado is maximally unlikely to build a jet. But intelligently directed constructive work routinely does so. The reason being relative statistical weights of clusters of relevant states in the space of possibilities. If you can give an actually credibly observed case of chaotic forces of blind chance and mechanical necessity such as diffusion or tornadoes performing constructive work issuing in FSCO/I (as opposed to order), kindly give it: __________ KF

kairosfocus
July 9, 2013, 05:41 PM PDT
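To put rough numbers on the statistical-weight argument in the comment above, here is a back-of-envelope Python sketch. All three resource figures (atoms, duration, event rate) are assumptions chosen to be deliberately generous; none of them come from the thread itself:

from math import log10

config_bits = 500                 # size of a modest functional specification
total_configs = 2 ** config_bits  # about 3.3 x 10^150 possibilities

atoms = 10 ** 57           # assumed upper bound on solar system atoms
seconds = 10 ** 17         # assumed age-scale duration, s
events_per_sec = 10 ** 14  # assumed (very fast) trials per atom per second
total_trials = atoms * seconds * events_per_sec   # 10^88 trials

print(f"log10(configs) ~ {log10(total_configs):.0f}")
print(f"log10(trials)  ~ {log10(total_trials):.0f}")

Under these assumed figures, a 500-bit configuration space exceeds the available trials by over sixty orders of magnitude. That is the shape of the claim about deeply isolated states; whether functional biological configurations actually have this structure is, of course, the very point in dispute.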
Re KS: Simply compare who has taken time to deal with the matter in context (starting with a point-by-point refutation backed up by supportive materials that KS is trying to pretend does not exist and/or to push off as "spam" -- proof of his bad faith if any was needed) and who is trying to whistle by the graveyard in the dark. This duppy leans on the fence and says, BOOO! KF

kairosfocus
July 9, 2013, 05:13 PM PDT
KF, You can't make up in volume for what your argument lacks in substance. If you could refute my simple 4-step argument, you would. You would point out the exact statement you disagree with, and you would explain why you thought it was wrong. You can't do that, so instead you spam the thread with thousands of words, hoping that the onlookers will assume that there's a refutation in there somewhere. There isn't, and the onlookers know it.

keiths
July 9, 2013, 11:49 AM PDT
KF:
a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it: Isol System: | |(A, at Thot) –> d’Q, heat –> (B, at T cold)| | b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1 c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc d] That is, for system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . “Eqn” A.2 e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.
In the first place, you are making a basic error in logic here: giving an example of something not happening does not show that it cannot happen (and certainly doesn't show that it would violate the second law of thermodynamics). If you want to argue that compensation is impossible, giving examples of it not happening is not adequate to support your claim. That's like saying "it's not raining today, therefore rain is impossible." It's just nonsense. This particular fallacy is disturbingly common in these discussions. Your nanobots & micro-jet argument gives an example of organization not occurring without intelligent input, but does nothing to support the claim that organization requires intelligent input. Sewell's paper gives examples of X-entropy not decreasing (unless certain boundary conditions are met), but does nothing to support the claim that a decrease requires that those boundary conditions are met. In the second place, your example actually does show compensation taking place. The subsystem A undergoes an entropy decrease, compensated by the larger increase in B. This is what is meant by compensation. This may not be what you mean by compensation, but in that case you're using a nonstandard definition. Compensation is not about probability or order or organization except as far as they're related to entropy. If what you're talking about is related to entropy, then compensation will be relevant to it (although to understand how it's relevant, you have to understand the relation between entropy and whatever you're interested in). If entropy isn't related to what you're interested in, then the second law isn't relevant either. (Sewell's argument against compensation is based on trying to apply it to probability instead of entropy; probability is related to entropy, but Sewell doesn't understand the connection, so the result is a hopeless muddle.)

Gordon Davisson
July 9, 2013, 11:20 AM PDT
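A quick numeric check of the Clausius two-body example quoted above ("Eqn" A.1 and A.2) may help onlookers follow both sides. The temperatures and heat quantity below are assumed for illustration only:

# d'Q of heat moves from A at Th to B at Tc, with Th > Tc
Th, Tc = 350.0, 280.0   # assumed hot and cold temperatures, K
dQ = 100.0              # heat transferred, J

dS_A = -dQ / Th                 # A's entropy falls (reversible limit)
dS_B = +dQ / Tc                 # B's entropy rises
print(f"dS_A  = {dS_A:+.4f} J/K")
print(f"dS_B  = {dS_B:+.4f} J/K")
print(f"total = {dS_A + dS_B:+.4f} J/K")   # positive, since Th > Tc

A's decrease (about -0.29 J/K) is more than offset by B's increase (about +0.36 J/K), which is exactly the "compensation" Gordon Davisson describes as integral to the second law; the disagreement in the thread is over what, if anything, this licenses regarding organisation.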
F/N 2: It is appropriate to here clip the microjets and nanobots thought exercise from here in my note, as this shows why diffusion strongly tends to be one-way and how clumping and organisation by constructive work are best explained on design, with a spot of context: ___________ >>6] It is worth pausing to now introduce a thought (scenario) experiment that helps underscore the point, by scaling down to essentially molecular size the tornado-in-a-junkyard-forms-a-jet example raised by Hoyle and mentioned by Dawkins with respect in the just-linked excerpt in Section A above. Then, based on (a) the known behaviour of molecules and quasi-molecules through Brownian-type motion (which, recall, was Einstein's Archimedean point for empirically demonstrating the reality of atoms), and (b) the also-known requirement of quite precise configurations to get to a flyable micro-jet, we may (c) find a deeper understanding of what is at stake in the origin of life question: NANOBOTS & MICRO-JETS THOUGHT EXPT: i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard: it is logically and physically possible that it could do the same, but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet, i.e. we see here the logic of the 2nd Law of Thermodynamics, statistical thermodynamics form, at work. [Intuitively, since functional configurations are rather isolated in the space of possible configurations, we are maximally likely to exhaust available probabilistic resources long before arriving at such a functional configuration or "island" of such configurations (which would be required before hill-climbing through competitive functional selection, a la Darwinian natural selection, could take over . . . ), if we start from an arbitrary initial configuration and proceed by a random walk.]) ii] Now, let us shrink the Hoylean example to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e. they are of about micron scale [for convenience] and act as "large molecules." (Cf. "materialism-leaning 'prof' Wiki's" blowing-up of Brownian motion to macro-scale by thought expt, here; indeed, this sort of scaling-up thought experiment was just what the late, great Sir Fred was doing in his original discussion of 747's.) Let's say there are about a million of them, some the same, some different, etc. In principle, possible: a key criterion for a successful thought experiment. Next, do the same for a car, a boat and a submarine, etc. iii] In several vats of "a convenient fluid," each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts, so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation. iv] In the control vat, we simply leave nature to its course.
Q: Will a car, a boat, a sub or a jet, etc., or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate -- high disorder. Q: Why? A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or even more so, functionally configured flyable-jet ones . . . [--> some dead links, I hate that . . . ]
v] Now, pour a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course work, and it replicates bonding at random. "Work" is done when forces move their points of application along their lines of action. Thus, in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement depending on the cluster of forces that have done the work. This of course reflects the link between "work" in the physical sense and "work" in the economic sense; thence, also the energy intensity of an economy with a given state of technology: energy per unit GDP tends to cluster tightly while a given state of technology and general level of economic activity prevail. (Current estimate for Montserrat: 1.6 lbs CO2 emitted per EC$ 1 of GDP, reflecting an energy intensity of 6 MJ/EC$, and the observation that burning one US gallon of gasoline or diesel emits about 20 lbs of that gas.)] Q: After a time, will we be likely to get a flyable nano-jet? A: Overwhelmingly, on probability, no. (For the vat has ~ [10^6]^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across, say, 1 cm or so for an assembled jet etc. or even just a clumped-together cluster of micro-parts. [A 1 cm cube has in it [10^4]^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.]) But also, since the configuration is constrained, i.e. the mass in the microjet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state. [--> this undoes a lot of the effect of diffusion] (That is, we have here used the nanobots to essentially undo diffusion of the micro-jet parts.) The resulting constraint on spatial distribution of the parts has reduced their entropy of configuration. For, where W is the number of ways that the components may be arranged consistent with an observable macrostate, by Boltzmann, entropy s = k ln W; W has fallen, so S too falls on moving from the scattered to the clumped state. [--> I add, if you want to factor in a case where, due to energy barriers and the like, states will not be accessible, or equally accessible, we can move to the Gibbs formalism, S = -k SUM over i of [pi ln pi], as is used in the Robertson derivation used elsewhere in the note, e.g. here on. That of course directly shows the link to information per the metric of average info per symbol under similar circumstances.] vi] For this vat, next remove the random-cluster nanobots, and send in the jet-assembler nanobots. These recognise the clumped parts, and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found, even further. [--> this is also a species of the concept of undoing diffusion]) A flyable jet results -- a macrostate with a much smaller statistical weight of microstates.
We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus the number of accessible microstates due to the change, [a] scattered --> clumped and now [b] onward --> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann's result s = k ln W, we also see that the entropy has fallen in succession as we moved from one state to the next, involving a fall in S on clumping and a further fall on configuring to a functional state: dS_tot = dS_clump + dS_config. [Of course, to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But, it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.] vii] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from a random "macromolecule" to a configured functional one. That is: dS_tot (direct) = dS_clump + dS_config. viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct microjet-assembly nanobots, but in each case we let the control programs vary at random a few bits at a time -- say, hit them with noise bits generated by a process tied to a Zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times.
Q: Given the complexity of the relevant software, will we be likely, for instance, to come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.) My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]
ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/origin of life issue? x] The micro-jet is of course an energy-converting device which exhibits FSCI, and we see from this thought expt why it is utterly improbable, on the same grounds as we base the statistical view of the 2nd law of thermodynamics, that it should originate spontaneously by chance and necessity only, without agency. xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO's term chemical work, fine [--> that triggered the debate in 2008 IIRC]] and configuring work can be identified and applied to the shift in entropy through the same s = k ln W equation. For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain -- constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, s = k ln W, through Brillouin, lets TBO link to information, viewed as "negentropy," citing as well Yockey-Wicken's work and noting their similar definition of information; i.e. this is a natural outcome of the OOL work in the early 1980's, not a "suspect innovation" of the design thinkers in particular. BTW, the concept of complex, specified information is also similarly a product of the work in the OOL field at that time; it is not at all a "suspect innovation" devised by Mr Dembski et al, though of course he has provided a mathematical model for it. [I have also just above pointed to Robertson, on why this link from entropy to information makes sense -- and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.] 7] We can therefore see the cogency of mathematician Granville Sewell's observations, here. Excerpting:
. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way. [1] . . . . What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special. THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology. Cf as well his other remarks here and here.]>>
___________ The bottom line is simple: it is not reasonable to expect diffusion and related patterns to substitute for constructive work resulting not in order but FSCO/I. The pretence that the freezing of a puddle full of water or the like answers to this is a red herring led away to a strawman. KF

kairosfocus
July 9, 2013, 05:30 AM PDT
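Since the thought experiment above leans on s = k ln W, a toy re-computation of the clumping step is easy to run, using the comment's own cell counts (10^18 scattered one-micron cells, 10^12 cells once clumped, about 10^6 parts). The crude approximation W ~ (accessible cells)^(number of parts) is my own simplification for illustration, not part of the original exercise:

from math import log

k = 1.380649e-23             # Boltzmann constant, J/K
parts = 10 ** 6              # micron-scale parts in the vat
cells_scattered = 10 ** 18   # one-micron cells in a 1 m^3 vat
cells_clumped = 10 ** 12     # cells in a ~1 cm^3 clump

# With W ~ cells^parts, ln W = parts * ln(cells), so clumping changes S by:
dS_clump = k * parts * (log(cells_clumped) - log(cells_scattered))
print(f"dS_clump ~ {dS_clump:.2e} J/K")   # negative: W falls on clumping

The clumping step alone gives roughly -1.9 x 10^-16 J/K of configurational entropy; the configuring step constrains W further still, so dS_tot = dS_clump + dS_config is more negative again, matching the bookkeeping in the comment (which itself notes that this is paid for by compensating entropy increases in the vat and surroundings).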
Oops, forgot to link to the section of the note.

kairosfocus
July 9, 2013, 05:03 AM PDT
FYI & FTR: KS has tried to substitute the strawman argument of freezing of water for explaining constructive work, on the propagandistic pretence that the mechanical necessity that allows ice to form once sufficient latent heat has been extracted answers to the complex functional organisation shown by, say, a Jumbo Jet or a string of text or the nanomachines in a living cell or the coded information in DNA. He refuses therefore to acknowledge that mechanical necessity only leads to natural regularity, and cannot explain high contingency in itself. Even in the case of "chaos," what accounts for dramatic differences in runs is that there are small initial differences that are drastically amplified by the nonlinear dynamics. High contingency is required to exhibit the information storage or configurability required for construction of functionally specific complex entities. High contingency is, on massive empirical observation, only caused by chance or choice, and the manifestation of FSCO/I, not credibly reachable by chance on the gamut of the observed cosmos or at any rate the solar system, is only in our experience caused by choice. The analysis on the clusters of microstates shows why that is: the deep isolation and rarity of relevant islands of function makes them utterly dominated by the overwhelming bulk of the space of possibilities, gibberish irrelevant to the functions in question. Blind sampling, or blindly selected search mechanisms, on the gamut of the solar system, will with all but certainty only pick up that bulk. Clipping, partly in anticipation of other linked objections: ________________ >>A tropical cyclone is by and large shaped by convective and Coriolis forces acting on a planetary scale over a warm tropical ocean whose surface waters are at or above about 80 degrees F. That is, it is a matter of chance + necessity leading to order under appropriate boundary conditions, rather than to complex, functionally specified information. Similarly, the hexagonal, crystalline symmetry of snowflakes is driven by the implications of the electrical polarisation in the H-O-H (water) molecule -- which is linked to its kinked geometry, and resulting hexagonal close packing. [--> mechanical necessity leading to order under circumstances where insufficient energy is present at micro level to disrupt it] Their many, varied shapes are controlled by the specific micro-conditions of the atmosphere along the path travelled by the crystal as it forms in a cloud. [--> complexity shaped by chance] As the just-linked summarises [in a 1980's era, pre-design movement Creationist context] and illustrates by apt photographic examples [which is a big part of why it is linked]:
Hallet and Mason [2] . . . found that water molecules are preferentially incorporated into the lattice structure of ice crystals as a function of temperature. Molecules from the surrounding vapor that land on a growing crystal migrate over its surface and are fixed to either the axial [tending to lead to plate- or star-shaped crystals] or basal planes [tending to lead to columnar or needle-like crystals] depending upon four temperature conditions. For example, snow crystals will grow lengthwise to form long, thin needles and columns . . . when the temperature is between about -3°C and -8°C. When the temperature is between about -8°C and -25°C, plate-like crystals will form . . . Beautiful stellar and dendritic crystals form at about -15°C. In addition, the relative humidity of the air and the presence of supercooled liquid cloud droplets will cause secondary growth phenomena known as riming and dendritic growth. [NB: this is what leads to the most elaborate shapes.] The small, dark spheres attached to the edges of the plate[-type crystal] in Figure 5 are cloud droplets that were collected and attached to the snow crystal as rime as the crystal fell through these droplets on its way to the earth's surface. The dendritic and feathery edges . . . are produced by the rapid growth of snow crystals in a high-humidity environment . . . . The modern explanation of the hexagonal symmetry of snow crystals is that a snow crystal is a macroscopic, outward manifestation of the internal arrangement of the molecules in ice. The molecules form an internal pattern of lowest free energy, one that possesses high structural symmetry. For the water molecule this is a type of symmetry called hexagonal close pack. ["Microscopic Masterpieces: Discovering Design in Snow Crystals," Larry Vardiman, ICR, 1986. (Note, too, from the context of the above excerpts, on how "design" and "creation" are rather hastily inferred to in this 1980's era Creationist article; a jarringly different frame of thought from the far more cautious, empirical, step by step explanatory filter process and careful distinctions developed by TBO and other design theorists. Subsequently, many Creationists have moved towards the explanatory filter approach pioneered by the design thinkers. This article -- from Answers in Genesis' Technical Journal -- on the peacock's tail is an excellent example, and a telling complement to the debates on the bacterial flagellum. Notice, in particular, how it integrates the aesthetic impact issue that is ever so compelling intuitively with the underlying issue of organised complexity to get to the aesthetics.) Cf also an AMS article here. [--> for economy given the limited UD budget per comment some links will not be added, go to the linked point in my note]]
A snowflake may indeed be (a) complex in external shape [reflecting random conditions along its path of formation] and (b) orderly in underlying hexagonal symmetrical structure [reflecting the close-packing molecular forces at work], but it simply does not encode functionally specific information. Its form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces]. The tendency to wish to use the snowflake as a claimed counter-example alleged to undermine the coherence of the CSI concept thus plainly reflects a basic confusion between two associated but quite distinct features of this phenomenon: (a) external shape -- driven by random forces and yielding complexity [BTW, this is in theory possibly useful for encoding information, but it is probably impractical!]; and, (b) underlying hexagonal crystalline structure -- driven by mechanical forces and yielding simple, repetitive, predictable order. [This is not useful for encoding at all . . .] Of course, other kinds of naturally formed crystals reflect the same balance of forces and tend to have a simple basic structure with a potentially complex external shape, especially if we have an agglomeration of in effect "sub-crystals" in the overall observed structure. In short, a snowflake is fundamentally a crystal, not an aperiodic and functionally specified information-bearing structure serving as an integral component of an organised, complex information-processing system, such as DNA or protein macromolecules manifestly are. >> ________________ So, from Wicken and Orgel, and from the difference between order and organisation, it is quite evident that that which accounts for randomness, regularities and order is not the same as that which accounts for FSCO/I. The only thing that does so is design, on wide and exceptionless observation and experience. Of course, KS's other point is that bodies giving off energy lose entropy, so there. This simply fails to address the relevant body in the cases relevant to OOL and origin of FSCO/I generally, which are IMPORTING energy. In the case of OOL, there is good reason to see that importation of relevant raw energy and materials, such as abundantly present H2O molecules, should tend rather to break up molecules and also to form interfering cross-reactions. Certainly, that is part of why Miller and Urey trapped out the formed chemicals in their apparatus. An apparatus that was critically dependent on a reducing atmosphere that has for decades no longer been credible as an early-earth atmosphere. We have already repeatedly addressed the substitution of body A for body B in the thermodynamic analysis of what happens when a body IMPORTS heat etc., but it will not hurt to give it one more time, to rivet home that constructive work requires a heat engine, body B in the following:
For instance (following a useful simple model of diffusion in Yavorsky and Pinski's nice elementary Physics [MIR, 1974]), if we have ten each of white and black marbles in two rows in a container:

||**********||
||0000000000||

There is but one way to be as shown, but over 63,000 ways to be 5 B/W in each row, and the 6:4 and 4:6 cases are close to such numbers: 44,000+. So, if the system is sufficiently agitated for balls to swap positions at random on a regular basis, it soon moves towards a dominant cluster near the 5:5 peak and "forgets" the initial state. This basic pattern works for heat, diffusion of ink drops in beakers of water, C atoms in a bar of steel, and more. The basic reason for such does not go away if the system is opened up to energy and mass flows, and the point that rare, specifically and simply describable states will seldom be revisited or found soon becomes, for enough complexity -- 500 to 1,000 bits -- that such states are beyond the reach of the solar system's or the observed cosmos' search capacity. RS' point that there are states that can be locked away from interaction, so that it is reasonable to partition entropy accounting, is also quite useful. My own emphasis is that we need to see the difference between what diffusion-like factors/forces will strongly tend to do and what produces shaft work thence constructive work ending in FSCO/I. Let me try a second diagram using textual features:

Heat transfer in isolated system:
|| A (at T_a) –> d'Q –> B (at T_b) ||
dS >/= (-d'Q/T_a) + (+d'Q/T_b), Lex 2 Th in a common form

Heat engine, leaving off the isolation of the whole:
A –> d'Q_a –> B' =====> D (shaft work)
Where also, B' –> d'Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows d'Q_a will equal the sum of the rise in internal energy of B, dU_b, the work done on D, dW, and the heat transferred onwards to C, d'Q_b. The pivotal questions are: the performance of constructive work by dW; the possibility that this results in FSCO/I; the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work; and the suggested idea that d'Q's can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch. By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos. There is but one empirically, reliably observed source for FSCO/I: design. The analysis on statistical weights of microstates and random forces such as diffusion and the like shows why. Which brings us to the point Hoyle was making: letting his tornado stand in for an agitating energy source triggering configs at random, tornadoes passing through junkyards don't build jumbo jets or even instruments on their panels. But similar, intelligently directed expenditures of energy, through precise patterns of work, routinely do build jumbo jets.
Constructive, counterflow work resulting in FSCO/I has only one empirically warranted explanation: design. KF

kairosfocus
July 9, 2013, 04:41 AM PDT
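The Yavorsky-and-Pinski-style marble model quoted in the comment above is easy to check directly. A short Python sketch, using only the quote's own numbers, counts the microstates for each split of the ten black marbles across the two rows of ten positions:

from math import comb

for b in range(11):
    # ways to place b black marbles in the top row and 10 - b in the bottom
    ways = comb(10, b) * comb(10, 10 - b)
    print(f"{b} black in top row: {ways:6d} microstates")

This reproduces the quoted figures: 252^2 = 63,504 ways for the 5:5 split, 44,100 each for the 6:4 and 4:6 splits, and a single way for the fully separated starting arrangement, which is why random agitation overwhelmingly drives the system toward the near-5:5 cluster and away from the special initial state.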
KS: This is an example of a straight-out, willfully misleading propagandistic fabrication in the teeth of evident facts, on your part. I did take time to address the above point by point and corrected it as a case of red herrings led away to strawmen. Much of that response is a clip from my always-linked note, App A, which I believe dates in main part to 2008; i.e. a further answer to KS' strawman tactics is present through my handle in EVERY comment I have ever made at UD. Let me clip 231 in the difference thread, posted July 7, 2013 at 6:03 am: ___________ >> When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.>>
Note that (2) [a flow gradient expression] simply says that heat flows from hot to cold regions -- because the laws of probability favor a more uniform distribution of heat energy . . . . From (5) [an eqn that entails that in such a system, d'S >/= 0] it follows that in an isolated, closed system, where there is no heat flux through the boundary, d'S >/= 0. Hence, in a closed system, entropy can never decrease. Since thermal entropy measures randomness (disorder) in the distribution of heat, its opposite (negative) can be referred to as "thermal order", and we can say that the thermal order can never increase in a closed system. Furthermore, there is really nothing special about "thermal" entropy. We can define another entropy, and another order, in exactly the same way, to measure randomness in the distribution of any other substance that diffuses; for example, we can let U(x,y,z,t) represent the concentration of carbon diffusing in a solid (Q is just U now), and through an identical analysis show that the "carbon order" thus defined cannot increase in a closed system. It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable states . . .
2 –> At no point have objectors provided an example of FSCO/I arising spontaneously by such dispersive forces, through their providing constructive work. This is also the implicit point in Hoyle's example of a tornado passing through a junkyard and, lo and behold, a jumbo jet emerges -- NOT. By contrast, work involving a probably comparable amount of energy, or even less, by men, machines and equipment working to a constructive plan, will build a jumbo jet. That is, we must recognise the difference between forces that blindly and freely move things around in accord with statistical patterns and those that move them according to a plan. 3 –> This issue lies as well at the heart of the recent challenge to explain how a box of 500 coins, all H, came to be. KS, EL, and others of their ilk have been adamant in refusing the best explanation [constructive work], and in refusing as well to recognise that, due to the differing statistical weights of clusters of microstates, such a 500-H state arising by random tossing is practically unobservable on the gamut of the solar system. 4 –> Notice also: GS has put the issue of forces of diffusion at the pivot of his case, and indeed that at once allows us to see that when he speaks of X-entropy, he is speaking of the sort of thing that makes C diffuse even in the solid state. >>It's easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.>> 5 –> Here KS revisits Clausius' first example, which appears in my always-linked note and which is clipped in the FYI-FTR; he is about to refuse to look seriously at what is happening at micro level when d'Q of heat moves from A at a higher temp to B at a lower. In short, he leads away via a red herring and erects and burns a strawman. Let me lay out the summary that was there for literally years in App 1 of my note:
1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce's relatively serious and balanced assessment, from a panspermia advocate. Sewell's remarks here are also worth reading. So is Sarfati's discussion of Dawkins' Mt Improbable.) 2] But open systems can increase their order: This is the "standard" dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is: a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system -- one that allows neither energy nor matter to flow in or out -- is instructive, given the "closed" subsystems [i.e. allowing energy to pass in or out] in it: Isol System: | |(A, at Thot) –> d'Q, heat –> (B, at T cold)| | b] Now, we introduce entropy change dS >/= d'Q/T . . . "Eqn" A.1 c] So, dSa >/= -d'Q/Th, and dSb >/= +d'Q/Tc, where Th > Tc d] That is, for the system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . "Eqn" A.2 e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY. f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||===
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid; think of them as scattered more or less hap-hazardly, as would happen in a cake).
2: Now, let the marbles all be at rest to begin with.
3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].
4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.
5: As the glancing angles on collision will vary at random, the struck marbles and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.
6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.
7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on.
And once the M-B pattern sets in, it will strongly tend to continue . . . . For the injection of energy to instead predictably and consistently do something useful, it needs to be coupled to an energy conversion device. g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but -- from the above -- negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B' below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod's "chance + necessity" [cf also Plato's remarks] only.) h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink so that the overall entropy of the system is increased. Illustratively, for heat engines -- and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e. FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including "do-always" looping!)]:

| | (A, heat source: Th): d'Qi –> (B', heat engine, Te): –> d'W [work done on say D] + d'Qo –> (C, sink at Tc) | |

i] A's entropy: dSa >/= – d'Qi/Th j] C's entropy: dSc >/= + d'Qo/Tc k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law -- unsurprisingly, given the studies on steam engines that lie at its roots -- holds for heat engines. [--> Notice, I have addressed the compensation issue all along.] l] However, B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy on having imported d'Qi. [The problem is to explain the origin of the heat engine -- or more generally, energy converter -- that does this, if it exhibits FSCI.] [--> Notice the pivotal question being ducked in the context of the origin of cell-based life, through red herrings and strawmen.] m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI]. n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI -- on the direct import of the many cases where we do directly know the causal story of FSCI -- becomes the better explanation.
As TBO say, in bridging from a survey of the basic thermodynamics of living systems in ch. 7 to the more focussed discussion in chs. 8 - 9:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.[11] Yet they are only produced by living cells. Both types of molecules are much more energy- and information-rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, "Thermodynamics and the Origin of Life," in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal's online issues will show, this is not necessarily a "friendly audience."]
[--> In short, this question was actually addressed in the very first design theory work, TMLO, in 1984; so all along, the arguments we are here addressing yet again are red herrings, led away to strawmen soaked in ad hominems (as we will see again below) and set alight to cloud, confuse, poison and polarise the atmosphere.]
>>Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.>>

6 --> KS is setting up his red herring and strawman version.

>>All of us (including Granville) know that it's possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.>>

7 --> Having dodged the pivotal issue of dispersive forces like diffusion being asked to carry out constructive work resulting in organisation of something that is rich in FSCO/I, KS gives an irrelevant example: order emerging by mechanical necessity acting in the context of heat outflow, where the polar molecules of water will form ice crystals on being cooled enough. This very example is specifically addressed in TMLO, and I have already spoken to this and similar cases. [--> Let me add a link to my always-linked note: here, on hurricanes, snowflakes and the like. In any case, Wicken and Orgel already give an excellent answer [an anticipation -- these are from 1979 and 1973 . . . ] as cited below.]

8 --> By contrast, hear honest and serious remarks by Wicken and Orgel (which since 2010 have sat at the beginning of section D, IOSE intro-summary page, so KS either knows of or should know of this):
WICKEN, 1979: >> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [[i.e. "simple" force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' [["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. NB: "originally" is added to highlight that for self-replicating systems, the blueprint can be built-in.)] >>

ORGEL, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >>
9 --> KS, of course, has presented to us a case of crystallisation, as though it were an answer to the matter at stake. At this point, given his obvious situation as a highly informed person, this is willful perpetuation of a misrepresentation, which has a short, sharp, blunt three-letter name that begins with L.

>>Note: 1. The entropy of A decreases when the water freezes. 2. The second law tells us that the entropy of C cannot decrease. 3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B. 4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.>>

10 --> In these notes, KS ducks his intellectual responsibility to address just what happens with B so that the overall entropy is increased: namely, precisely because of the rise in accessible energy, the number of ways for energy and mass to be arranged at micro level in B so far increases as to exceed the loss in the number of ways in A.

11 --> And, the exact same diffusive and dissipative forces already described strongly push B towards the clusters of states with the highest statistical weights, and away from those clusters with very low statistical weights. So, by importing energy, B's entropy increases, and by enough that the net result is at minimum to keep the entropy of the system constant. (A numeric sketch of this bookkeeping, for the freezing example, is appended below.)

12 --> It is the statistical reasoning linked to this, and the onward link to the information involved -- thence the information involved in functionally specific complex organisation, thence the need for constructive work rather than expecting diffusion and the like to do this spontaneously for "free" -- that are pivotal to the case KS has here distracted from and misrepresented. (Cf my microjets-in-a-vat thought exercise case study here, which has been around since, was it, 2008 or so? And even if KS was ignorant of that, he had the real import of Hoyle's argument -- a contrast between what chaotic forces do and what planned constructive work does -- as well as access to the points made by Orgel and Wicken. Likewise, we can compare what Shapiro and Orgel said in their exchange on OOL. Golf balls do not in our experience play themselves around golf courses by lucky clusters of natural forces. If-pigs-could-fly scenarios are nonsense. And the example of a rock avalanche spontaneously forming "Welcome to Wales" at the border of Wales has been around for a long time too. All of these highlight the difference in capability between blind chance plus mechanical necessity and intelligently directed constructive work.)

>>The second law demands that compensation must happen. If you deny compensation, you deny the second law.>>

13 --> A sad excuse to play at red herrings and strawmen.

>>Thus Granville's paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!>>

14 --> Here comes the smoke of burning, ad hominem-soaked strawmen, now.

>>It's a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville's paper.>>

15 --> Throwing more ad hominems on the fire to make even more polarisation, clouding of issues and poisoning of the atmosphere.

____________

Since obviously KS will not read the above, much less follow up a link, I will follow this with the linked case study on the creation of hurricanes and snowflakes by mechanical necessity and blind chance, for the onlooker. KF

kairosfocus
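[As promised at point 11, a minimal numeric sketch of the "compensation" bookkeeping for the freezing-puddle example. The values are assumed for illustration: 1 kg of water, latent heat of fusion 334 kJ/kg, water/ice interface at the melting point, surroundings at 263 K.]

# Entropy bookkeeping for KS's freezing puddle (illustrative values assumed).

m = 1.0          # kg of water freezing
L_f = 334_000.0  # latent heat of fusion, J/kg
T_melt = 273.15  # K: the water/ice interface stays at the melting point
T_env = 263.15   # K: colder surroundings (B) that absorb the released heat

Q = m * L_f                   # heat released by A as it freezes

dS_A = -Q / T_melt            # A (the puddle) loses entropy: order emerges
dS_B = +Q / T_env             # B (surroundings) gains entropy from raw heat
dS_total = dS_A + dS_B

print(f"dS_A = {dS_A:+.1f} J/K (local decrease)")
print(f"dS_B = {dS_B:+.1f} J/K (larger increase, since T_env < T_melt)")
print(f"dS_total = {dS_total:+.1f} J/K >= 0")
# Because T_env < T_melt, Q/T_env > Q/T_melt, so B's gain exceeds A's loss
# and the second law is respected -- "Eqns" A.1 - A.2 above, instantiated.

[Note that this ordering is crystallisation driven by mechanical necessity, which is exactly the order/organisation distinction the Wicken and Orgel quotes draw; the arithmetic shows compensation, not the spontaneous origin of FSCI.]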
July 9, 2013 at 04:11 AM PDT